| content | path | license_type | repo_name | language | is_vendor | is_generated | length_bytes | extension | text |
|---|---|---|---|---|---|---|---|---|---|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.R
\name{length-zero-default}
\alias{length-zero-default}
\alias{\%|0|\%}
\title{Default value for `length(x) == 0`.}
\usage{
x \%|0|\% y
}
\arguments{
\item{x, y}{If `length(x) == 0`, will return `y`; otherwise returns
`x`.}
}
\description{
This infix function makes it easy to replace a length 0 value with a
default value. It's inspired by the way that Ruby's or operation (`||`)
works.
}
\examples{
"bacon" \%|0|\% "eggs"
NULL \%|0|\% "eggs"
}
| /man/length-zero-default.Rd | no_license | GarrettMooney/moonmisc | R | is_vendor: false | is_generated: true | 531 bytes | rd |
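The infix operator documented above can be sketched in a few lines. This is an assumed implementation based on the Rd description; the actual moonmisc definition may differ in details.

```r
# Minimal sketch of the `%|0|%` helper described in the Rd file above:
# return `y` when `x` has length zero, otherwise return `x`.
`%|0|%` <- function(x, y) {
  if (length(x) == 0) y else x
}

"bacon" %|0|% "eggs"       # "bacon"
NULL %|0|% "eggs"          # "eggs" (NULL has length 0)
character(0) %|0|% "eggs"  # "eggs"
```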
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/class-recall.R
\name{recall}
\alias{recall}
\alias{recall.data.frame}
\alias{recall_vec}
\title{Recall}
\usage{
recall(data, ...)
\method{recall}{data.frame}(data, truth, estimate, estimator = NULL,
na_rm = TRUE, ...)
recall_vec(truth, estimate, estimator = NULL, na_rm = TRUE, ...)
}
\arguments{
\item{data}{Either a \code{data.frame} containing the \code{truth} and \code{estimate}
columns, or a \code{table}/\code{matrix} where the true class results should be
in the columns of the table.}
\item{...}{Not currently used.}
\item{truth}{The column identifier for the true class results
(that is a \code{factor}). This should be an unquoted column name although
this argument is passed by expression and supports
\link[rlang:quasiquotation]{quasiquotation} (you can unquote column
names). For \code{_vec()} functions, a \code{factor} vector.}
\item{estimate}{The column identifier for the predicted class
results (that is also \code{factor}). As with \code{truth} this can be
specified different ways but the primary method is to use an
unquoted variable name. For \code{_vec()} functions, a \code{factor} vector.}
\item{estimator}{One of: \code{"binary"}, \code{"macro"}, \code{"macro_weighted"},
or \code{"micro"} to specify the type of averaging to be done. \code{"binary"} is
only relevant for the two class case. The other three are general methods for
calculating multiclass metrics. The default will automatically choose \code{"binary"}
or \code{"macro"} based on \code{estimate}.}
\item{na_rm}{A \code{logical} value indicating whether \code{NA}
values should be stripped before the computation proceeds.}
}
\value{
A \code{tibble} with columns \code{.metric}, \code{.estimator},
and \code{.estimate} and 1 row of values.
For grouped data frames, the number of rows returned will be the same as
the number of groups.
For \code{recall_vec()}, a single \code{numeric} value (or \code{NA}).
}
\description{
These functions calculate the \code{\link[=recall]{recall()}} of a measurement system for
finding relevant documents compared to reference results
(the truth regarding relevance). Highly related functions are \code{\link[=precision]{precision()}}
and \code{\link[=f_meas]{f_meas()}}.
}
\details{
The recall (aka sensitivity) is defined as the proportion of
relevant results out of the number of samples which were
actually relevant. When there are no relevant results, recall is
not defined and a value of \code{NA} is returned.
}
\section{Relevant level}{
There is no common convention on which factor level should
automatically be considered the "event" or "positive" result.
In \code{yardstick}, the default is to use the \emph{first} level. To
change this, a global option called \code{yardstick.event_first} is
set to \code{TRUE} when the package is loaded. This can be changed
to \code{FALSE} if the last level of the factor is considered the
level of interest. For multiclass extensions involving one-vs-all
comparisons (such as macro averaging), this option is ignored and
the "one" level is always the relevant result.
}
\section{Multiclass}{
Macro, micro, and macro-weighted averaging is available for this metric.
The default is to select macro averaging if a \code{truth} factor with more
than 2 levels is provided. Otherwise, a standard binary calculation is done.
See \code{vignette("multiclass", "yardstick")} for more information.
}
\section{Implementation}{
Suppose a 2x2 table with notation:
\tabular{rcc}{ \tab Reference \tab \cr Predicted \tab Relevant \tab
Irrelevant \cr Relevant \tab A \tab B \cr Irrelevant \tab C \tab D \cr }
The formulas used here are:
\deqn{recall = A/(A+C)}
\deqn{precision = A/(A+B)}
\deqn{F_meas_\beta = (1+\beta^2) * precision * recall/((\beta^2 * precision)+recall)}
See the references for discussions of the statistics.
}
\examples{
# Two class
data("two_class_example")
recall(two_class_example, truth, predicted)
# Multiclass
library(dplyr)
data(hpc_cv)
hpc_cv \%>\%
filter(Resample == "Fold01") \%>\%
recall(obs, pred)
# Groups are respected
hpc_cv \%>\%
group_by(Resample) \%>\%
recall(obs, pred)
# Weighted macro averaging
hpc_cv \%>\%
group_by(Resample) \%>\%
recall(obs, pred, estimator = "macro_weighted")
# Vector version
recall_vec(two_class_example$truth, two_class_example$predicted)
# Making Class2 the "relevant" level
options(yardstick.event_first = FALSE)
recall_vec(two_class_example$truth, two_class_example$predicted)
options(yardstick.event_first = TRUE)
}
\references{
Buckland, M., & Gey, F. (1994). The relationship
between Recall and Precision. \emph{Journal of the American Society
for Information Science}, 45(1), 12-19.
Powers, D. (2007). Evaluation: From Precision, Recall and F
Factor to ROC, Informedness, Markedness and Correlation.
Technical Report SIE-07-001, Flinders University
}
\seealso{
Other class metrics: \code{\link{accuracy}},
\code{\link{bal_accuracy}},
\code{\link{detection_prevalence}}, \code{\link{f_meas}},
\code{\link{j_index}}, \code{\link{kap}},
\code{\link{mcc}}, \code{\link{npv}}, \code{\link{ppv}},
\code{\link{precision}}, \code{\link{sens}},
\code{\link{spec}}
Other relevance metrics: \code{\link{f_meas}},
\code{\link{precision}}
}
\author{
Max Kuhn
}
\concept{class metrics}
\concept{relevance metrics}
| /man/recall.Rd | no_license | meta00/yardstick | R | is_vendor: false | is_generated: true | 5,364 bytes | rd |
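The formulas in the Implementation section above can be checked by hand on a 2x2 confusion table. The counts A through D below are hypothetical, chosen only for illustration.

```r
# Hypothetical 2x2 confusion counts, matching the table notation above:
A <- 40; B <- 10   # predicted relevant:   A correct, B false positives
C <- 5;  D <- 45   # predicted irrelevant: C misses,  D true negatives

recall    <- A / (A + C)   # proportion of actually-relevant samples found
precision <- A / (A + B)   # proportion of predicted-relevant that are correct
beta <- 1                  # beta = 1 gives the usual F1 measure
f_meas <- (1 + beta^2) * precision * recall /
  ((beta^2 * precision) + recall)
```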
remove(list = ls())
options(stringsAsFactors = FALSE)
options(scipen = 999)
library(tidyverse)
commute_mode <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-11-05/commute.csv") %>%
mutate(state = case_when(
state == "Ca" ~ "California",
state == "Massachusett" ~ "Massachusetts",
TRUE ~ state
),
state_region = case_when(
state == "District of Columbia" ~ "South",
city == "West Springfield Town city" ~ "Northeast",
city == "El Paso de Robles (Paso Robles) city" ~ "West",
TRUE ~ state_region
))
averages <- commute_mode %>%
group_by(state_region,
city_size,mode) %>%
summarize(mean_percent = mean(percent, na.rm = TRUE)) %>%
ungroup() %>%
mutate(city_size = factor(city_size, levels = c("Small",
"Medium",
"Large")))
commute_mode %>%
group_by(state_region,
mode) %>%
summarize(mean_percent = mean(percent, na.rm = TRUE)) %>%
arrange(mode, desc(mean_percent))
# Biking to work ----------------------------------------------------------
bike <- averages %>%
filter(mode == "Bike") %>%
ggplot(aes(x = fct_rev(factor(state_region,
levels = c("West",
"Northeast",
"North Central",
"South"))),
y = mean_percent,
fill = city_size,
label = round(mean_percent,1))) +
geom_col(position = position_dodge2(),
color = "gray90",
width = 0.5) +
scale_y_continuous(limits = c(0,1.5),
breaks = c(0,0.4,0.8,1.2, 1.5),
labels = c("0.0%","0.4%","0.8%", "1.2%", "1.5%")) +
labs(x = "",y = "%\n", fill = "City size",
title = "Biking to work by region and city size: 2008-2012",
subtitle = "(Data based on sample. Regions ordered from highest to lowest)",
caption = "Source: U.S. Census Bureau, American Community Survey, 2008-2012") +
theme_minimal(base_family = "Roboto Condensed",base_size = 14) +
theme(
panel.grid.minor = element_blank(),
panel.grid.major = element_line(color = "black",
size = 0.02),
legend.position = "bottom",
plot.title = element_text(color = "darkorchid4", face = "bold",
size = 16)
) +
guides(fill = guide_legend(title.position = "top",
title.hjust = 0.5,
label.position = "bottom",keywidth = 3,
keyheight = 0.5)) +
scale_fill_manual(values = c("khaki",
"indianred1",
"gray64"))
# Walking to work ---------------------------------------------------------
walk <- averages %>%
filter(mode == "Walk") %>%
ggplot(aes(x = fct_rev(factor(state_region,
levels = c("Northeast",
"North Central",
"West",
"South"))),
y = mean_percent,
fill = city_size,
label = mean_percent)) +
geom_col(position = position_dodge2(),
color = "gray90",
width = 0.5) +
scale_y_continuous(limits = c(0,10),
breaks = c(0,2.5,5,7.5,10),
labels = c("0.0%","2.5%","5.0%", "7.5%", "10.0%")) +
labs(x = "",y = "%\n", fill = "City size",
title = "Walking to work by region and city size: 2008-2012",
subtitle = "(Data based on sample. Regions ordered from highest to lowest)",
caption = "Source: U.S. Census Bureau, American Community Survey, 2008-2012") +
theme_minimal(base_family = "Roboto Condensed",base_size = 14) +
theme(
panel.grid.minor = element_blank(),
panel.grid.major = element_line(color = "black",
size = 0.02),
legend.position = "bottom",
plot.title = element_text(color = "darkorchid4", face = "bold",
size = 16)
) +
guides(fill = guide_legend(title.position = "top",
title.hjust = 0.5,
label.position = "bottom",keywidth = 3,
keyheight = 0.5)) +
scale_fill_manual(values = c("khaki",
"indianred1",
"gray64"))
cowplot::plot_grid(bike, walk, nrow = 2)
| /tidy_tuesday_2019_modes_less_traveled/code.R | no_license | Nhiemth1985/rviz | R | is_vendor: false | is_generated: false | 4,859 bytes | r |
#Function to display interaction contrast matrices
#(requires ggplot2; melt() comes from the reshape2 package)
plotcontrast <- function(cmat){
if(!is.matrix(cmat) || !is.numeric(cmat)){stop("cmat must be a matrix with numeric entries")}
if(is.null(rownames(cmat))){rownames(cmat)<-paste("c", 1:nrow(cmat), sep="")}
dcm <- reshape2::melt(cmat, varnames = c("comparison", "treatment"), value.name="coefficient")
dcm$comparison <- factor(dcm$comparison, levels=rev(rownames(cmat)))
pp <- ggplot(dcm, aes(y=comparison, x=treatment)) +
theme_grey() +
theme(axis.text = element_text(colour = "black")) +
geom_tile(aes(fill=coefficient), colour="black") +
scale_fill_gradient2(low = "red", mid = "white", high = "blue", midpoint = 0)
return(pp)
}
| /R/plotcontrast.R | no_license | schaarschmidt/statint | R | is_vendor: false | is_generated: false | 708 bytes | r |
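A quick usage sketch for plotcontrast(). The contrast matrix below is hypothetical: interaction contrasts for a 2x2 factorial with cell means a1b1, a1b2, a2b1, a2b2.

```r
# Hypothetical interaction contrast matrix; each row of contrast
# coefficients sums to zero, as contrasts should.
cmat <- rbind(
  "A|b1" = c(1, 0, -1,  0),
  "A|b2" = c(0, 1,  0, -1)
)
colnames(cmat) <- c("a1b1", "a1b2", "a2b1", "a2b2")
# plotcontrast(cmat)  # tile map: blue = positive, red = negative coefficients
```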
#' create a barplot
#'
#' @param iter number of iterations
#' @param n number of bernoulli trials
#' @param p probability of success
#'
#' @return A barplot of the proportions and a table of relative frequencies
#' @export
#'
#' @examples
#' mybin(100,10,0.5)
mybin=function(iter=100,n=10, p=0.5){
# make a matrix to hold the samples
#initially filled with NA's
sam.mat=matrix(NA, nrow=n, ncol=iter)
#Make a vector to hold the number of successes in each trial
succ=c()
for( i in 1:iter){
#Fill each column with a new sample
sam.mat[,i]=sample(c(1,0),n,replace=TRUE, prob=c(p,1-p))
#Calculate a statistic from the sample (this case it is the sum)
succ[i]=sum(sam.mat[,i])
}
#Make a table of successes
succ.tab=table(factor(succ,levels=0:n))
#Make a barplot of the proportions
barplot(succ.tab/(iter), col=rainbow(n+1), main="Binomial simulation", xlab="Number of successes")
succ.tab/iter
}
| /R/mybin.R | no_license | jiawei313/Rpack | R | is_vendor: false | is_generated: false | 946 bytes | r |
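A quick sanity check of what mybin() simulates. The assumption here is that summing n Bernoulli draws per column is equivalent to a single rbinom() draw, so the empirical proportions should approach dbinom(0:n, n, p).

```r
# Empirical success proportions from many binomial draws converge to the
# theoretical binomial pmf, which is what mybin()'s barplot visualizes.
set.seed(1)
n <- 10; p <- 0.5; iter <- 10000
succ <- rbinom(iter, size = n, prob = p)
emp  <- as.numeric(table(factor(succ, levels = 0:n)) / iter)
theo <- dbinom(0:n, size = n, prob = p)
max(abs(emp - theo))  # small for large iter
```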
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/parse_mgf.R
\name{parse_mgf}
\alias{parse_mgf}
\title{Parse mgf}
\usage{
parse_mgf(mgf_path)
}
\arguments{
\item{mgf_path}{path to mgf}
}
\value{
A \code{data.table} with columns \code{pepmass}, \code{charge}, \code{scan},
and a column containing a JSON representation of the mass/intensity values
}
\description{
Parse mgf
}
| /man/parse_mgf.Rd | permissive | chasemc/mgfparse | R | is_vendor: false | is_generated: true | 357 bytes | rd |
library(stringr)
library(ggplot2)
library(cowplot)
source("~/Documents/Bioinformatik/Master/Masterarbeit/R_Scripts/marker_genes.R")
source("~/Documents/Bioinformatik/Master/Masterarbeit/R_Scripts/Simulation.R")
### Helper functions
# Name of the cell type with the largest estimated proportion
evaluate = function(row){
ind = which.max(row)
names(row)[ind]
}
# TRUE if any proportion was estimated at all
evaluate2 = function(row){
sum(row) > 0
}
# TRUE if at most one coefficient exceeds 5 (a single dominant cell type)
evaluate3 = function(row){
sum(row > 5) <= 1
}
### Read in bulk data used as basis of simulation
# Alpha-cells, beta-cells, delta-cells MOUSE
alpha_beta_delta = read.table(
"~/Documents/Bioinformatik/Master/Masterarbeit/Expressionsdaten/Purified/alpha_beta_delta_mouse/GSE76017_data_geo.tsv",
header = TRUE,
sep = "\t"
)
alpha_bulk = alpha_beta_delta[,grep("Alpha", colnames(alpha_beta_delta))]
beta_bulk = alpha_beta_delta[,grep("Beta", colnames(alpha_beta_delta))]
delta_bulk = alpha_beta_delta[,grep("Delta", colnames(alpha_beta_delta))]
# Alpha-cells, beta-cells HUMAN
alpha_beta = read.table(
"~/Documents/Bioinformatik/Master/Masterarbeit/Expressionsdaten/Purified/alpha_beta_purified/purified_alpha_beta_bulk_tpm.tsv",
header = TRUE,
sep = "\t"
)
alpha_bulk = alpha_beta[,grep("Alpha", colnames(alpha_beta))]
beta_bulk = alpha_beta[,grep("Beta", colnames(alpha_beta))]
### Read in single-cell seq data used for training
# Read in meta info
meta = "~/Documents/Bioinformatik/Master/Masterarbeit/Expressionsdaten/Meta_information.tsv"
meta = read.table(meta, header = TRUE, sep = "\t")
rownames(meta) = meta$Name
# Read in Lawlor
lawlor_counts = "~/Documents/Bioinformatik/Master/Masterarbeit/Expressionsdaten/Human_differentiated_pancreatic_islet_cells_scRNA/Lawlor.tsv"
lawlor_counts = read.table(lawlor_counts, header = TRUE, sep = "\t")
colnames(lawlor_counts) = str_replace(colnames(lawlor_counts), pattern = "\\.", "_")
# Process Lawlor
lawlor_meta = meta[colnames(lawlor_counts),]
lawlor_counts = lawlor_counts[, lawlor_meta$Subtype %in% c("Alpha", "Beta", "Gamma", "Delta")]
rownames(lawlor_counts) = str_to_upper(rownames(lawlor_counts))
lawlor_types = lawlor_meta$Subtype[rownames(lawlor_meta) %in% colnames(lawlor_counts)]
lawlor_types = as.character(lawlor_types)
# Read in Segerstolpe
segerstolpe_counts = "~/Documents/Bioinformatik/Master/Masterarbeit/Expressionsdaten/Human_differentiated_pancreatic_islet_cells_scRNA/Segerstolpe.tsv"
segerstolpe_counts = read.table(segerstolpe_counts, header = TRUE, sep = "\t")
colnames(segerstolpe_counts) = str_replace(colnames(segerstolpe_counts), pattern = "\\.", "_")
# Process Segerstolpe
seg_meta = meta[colnames(segerstolpe_counts),]
segerstolpe_counts = segerstolpe_counts[, seg_meta$Subtype %in% c("Alpha", "Beta", "Gamma", "Delta")]
rownames(segerstolpe_counts) = str_to_upper(rownames(segerstolpe_counts))
seg_types = seg_meta$Subtype[rownames(seg_meta) %in% colnames(segerstolpe_counts)]
seg_types = as.character(seg_types)
# Read in Baron
baron_counts = "~/Documents/Bioinformatik/Master/Masterarbeit/Expressionsdaten/Human_differentiated_pancreatic_islet_cells_scRNA/Baron_human.tsv"
baron_counts = read.table(baron_counts, header = TRUE, sep = "\t")
colnames(baron_counts) = str_replace(colnames(baron_counts), pattern = "\\.", "_")
# Process Baron
baron_meta = meta[colnames(baron_counts),]
baron_counts = baron_counts[,baron_meta$Subtype %in% c("Alpha", "Beta", "Gamma", "Delta")]
rownames(baron_counts) = str_to_upper(rownames(baron_counts))
baron_types = baron_meta$Subtype[rownames(baron_meta) %in% colnames(baron_counts)]
baron_types = as.character(baron_types)
### Choose current training data, process and prepare
training_data = baron_counts
training_types = baron_types
colnames(training_data) = str_replace(colnames(training_data), pattern = "\\.", "_")
colnames(training_data) = str_replace_all(colnames(training_data), pattern = "^X", "")
# Data cleaning
row_var = apply(training_data, FUN = var, MARGIN = 1)
training_data = training_data[row_var != 0,]
training_data = training_data[rowSums(training_data) >= 1,]
# Determine marker genes by differential expression
marker_gene_list = list()
subtypes = unique(training_types)
for(cell_type in subtypes){
marker_gene_list[[cell_type]] = identify_marker_genes(
training_data,
training_types,
cell_type,
nr_marker_genes = 100
)
}
# Prepare training data objects
training_mat_bseq = new(
"ExpressionSet",
exprs = as.matrix(training_data)
)
fData(training_mat_bseq) = data.frame(rownames(training_data))
pData(training_mat_bseq) = data.frame(training_types)
# Train deconvolution basis matrix
basis = suppressMessages(
bseqsc_basis(
training_mat_bseq,
marker_gene_list,
clusters = 'training_types',
samples = colnames(training_data),
ct.scale = FALSE
)
)
### Simulate bulk data, process and prepare
alpha_sim_bulk = simulateCellTypes(referenceCellTypes = alpha_bulk,
markerGenes = as.character(unlist(marker_gene_list)),
numSamples = 100)
beta_sim_bulk = simulateCellTypes(referenceCellTypes = beta_bulk,
markerGenes = as.character(unlist(marker_gene_list)),
numSamples = 100)
delta_sim_bulk = simulateCellTypes(referenceCellTypes = delta_bulk,
markerGenes = as.character(unlist(marker_gene_list)),
numSamples = 100)
test_cellTypes = c(rep("Alpha", 100), rep("Beta", 100), rep("Delta", 100))
### Estimate parameters for perturbation from simulated data
### perturbation
interval = seq(0, 1, by = 0.05)
results_perc = c()
results_p = c()
exp_markers = rownames(alpha_sim_bulk) %in% as.character(unlist(marker_gene_list))
exp_markers = cbind(alpha_sim_bulk[exp_markers,], beta_sim_bulk[exp_markers,])
deviation = sd(c(t(exp_markers)))
### Deconvolution of simulated data with increasing noise level
for(j in 1:length(interval)){
# Alpha expression + noise
alpha_sim_bulk_noise = c()
for(i in 1:ncol(alpha_sim_bulk)){
noise = rnorm(length(alpha_sim_bulk[,i]), 0, interval[j] * log2(deviation))
noise = 2 ^ noise
current = alpha_sim_bulk[,i] + noise
index = which(current < 0)
current[index] = 0
alpha_sim_bulk_noise = cbind(alpha_sim_bulk_noise, current)
}
# Beta expression + noise
beta_sim_bulk_noise = c()
for(i in 1:ncol(beta_sim_bulk)){
noise = rnorm(length(beta_sim_bulk[,i]), 0, interval[j] * log2(deviation))
noise = 2 ^ noise
current = beta_sim_bulk[,i] + noise
index = which(current < 0)
current[index] = 0
beta_sim_bulk_noise = cbind(beta_sim_bulk_noise, current)
}
# Delta expression + noise
delta_sim_bulk_noise = c()
for(i in 1:ncol(delta_sim_bulk)){
noise = rnorm(length(delta_sim_bulk[,i]), 0, interval[j] * log2(deviation))
noise = 2 ^ noise
current = delta_sim_bulk[,i] + noise
index = which(current < 0)
current[index] = 0
delta_sim_bulk_noise = cbind(delta_sim_bulk_noise, current)
}
#combined_sim = cbind(alpha_sim_bulk_noise, beta_sim_bulk_noise)
combined_sim = cbind(alpha_sim_bulk_noise, beta_sim_bulk_noise, delta_sim_bulk_noise)
rownames(combined_sim) = rownames(alpha_sim_bulk)
# Deconvolution
fit = bseqsc_proportions(
as.matrix(combined_sim),
basis,
verbose = TRUE,
log = FALSE,
perm = 500,
absolute = TRUE
)
# Extract results
res_coeff = t(fit$coefficients)
res_coeff[is.na(res_coeff)] = 0.0
res_cor = fit$stats
res_cor[is.na(res_cor)] = 0.0
# Evaluate performance: p-value <= 0.03, correct cell-type predicted
predicted_types = apply(res_coeff, 1, evaluate)
correct = test_cellTypes == predicted_types
sig_prop = apply(res_coeff, 1, evaluate2)
correct = correct & sig_prop
only_one = apply(res_coeff, 1, evaluate3)
correct = correct & only_one
p_value = which(res_cor[,1] > 0.03)
correct[p_value] = FALSE
# Keep track of mean p-value and % predicted correct
results_p = c(results_p, mean(res_cor[,1]))
results_perc = c(results_perc, (sum(correct) / 300))
}
| /R-Code/robustness_benchmark_bulk.R | no_license | janniklas93/Masterarbeit | R | is_vendor: false | is_generated: false | 8,323 bytes | r |
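The perturbation applied per column inside the benchmark loop above can be isolated into a small helper. The values below are stand-ins for illustration, not taken from the real expression data.

```r
# Noise model from the loop above, isolated: draw Gaussian noise on a log2
# scale, exponentiate, add it to the expression values, and clamp negatives
# to zero (as the `current[index] = 0` step does).
add_log2_noise <- function(expr, level, deviation) {
  noise <- 2 ^ rnorm(length(expr), mean = 0, sd = level * log2(deviation))
  pmax(expr + noise, 0)
}

set.seed(42)
noisy <- add_log2_noise(c(5, 20, 0.1, 300), level = 0.5, deviation = 4)
```

Note that because the exponentiated noise is strictly positive, the clamp only matters if the model is changed to additive noise on the raw scale.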
# Process Baron
baron_meta = meta[colnames(baron_counts),]
baron_counts = baron_counts[,baron_meta$Subtype %in% c("Alpha", "Beta", "Gamma", "Delta")]
rownames(baron_counts) = str_to_upper(rownames(baron_counts))
baron_types = baron_meta$Subtype[rownames(baron_meta) %in% colnames(baron_counts)]
baron_types = as.character(baron_types)
### Choose current training data, process and prepare
training_data = baron_counts
training_types = baron_types
colnames(training_data) = str_replace(colnames(training_data), pattern = "\\.", "_")
colnames(training_data) = str_replace_all(colnames(training_data), pattern = "^X", "")
# Data cleaning
row_var = apply(training_data, FUN = var, MARGIN = 1)
training_data = training_data[row_var != 0,]
training_data = training_data[rowSums(training_data) >= 1,]
# Determine marker genes by differential expression
marker_gene_list = list()
subtypes = unique(training_types)
for(cell_type in subtypes){
marker_gene_list[[cell_type]] = identify_marker_genes(
training_data,
training_types,
cell_type,
nr_marker_genes = 100
)
}
# Prepare training data objects
training_mat_bseq = new(
"ExpressionSet",
exprs = as.matrix(training_data)
)
fData(training_mat_bseq) = data.frame(rownames(training_data))
pData(training_mat_bseq) = data.frame(training_types)
# Train deconvolution basis matrix
basis = suppressMessages(
bseqsc_basis(
training_mat_bseq,
marker_gene_list,
clusters = 'training_types',
samples = colnames(training_data),
ct.scale = FALSE
)
)
### Simulate bulk data, process and prepare
alpha_sim_bulk = simulateCellTypes(referenceCellTypes = alpha_bulk,
markerGenes = as.character(unlist(marker_gene_list)),
numSamples = 100)
beta_sim_bulk = simulateCellTypes(referenceCellTypes = beta_bulk,
markerGenes = as.character(unlist(marker_gene_list)),
numSamples = 100)
delta_sim_bulk = simulateCellTypes(referenceCellTypes = delta_bulk,
markerGenes = as.character(unlist(marker_gene_list)),
numSamples = 100)
test_cellTypes = c(rep("Alpha", 100), rep("Beta", 100), rep("Delta", 100))
### Estimate perturbation parameters from the simulated data
interval = seq(0, 1, by = 0.05)
results_perc = c()
results_p = c()
exp_markers = rownames(alpha_sim_bulk) %in% as.character(unlist(marker_gene_list))
exp_markers = cbind(alpha_sim_bulk[exp_markers,], beta_sim_bulk[exp_markers,])
deviation = sd(c(t(exp_markers)))
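# Noise model used below (editorial note): each sample receives additive,
# strictly positive noise 2 ^ rnorm(n, 0, interval[j] * log2(deviation)),
# i.e. noise drawn on the log2 scale and scaled by the pooled marker-gene
# standard deviation; e.g. interval[j] = 0.5 and deviation = 64 give a
# log2-scale SD of 0.5 * log2(64) = 3.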
### Deconvolution of simulated data with increasing noise level
for(j in 1:length(interval)){
  # One helper replaces the three identical alpha/beta/delta noise loops
  add_noise = function(sim_bulk){
    apply(sim_bulk, 2, function(sample_expr){
      noise = 2 ^ rnorm(length(sample_expr), 0, interval[j] * log2(deviation))
      current = sample_expr + noise
      current[current < 0] = 0
      current
    })
  }
  alpha_sim_bulk_noise = add_noise(alpha_sim_bulk)
  beta_sim_bulk_noise = add_noise(beta_sim_bulk)
  delta_sim_bulk_noise = add_noise(delta_sim_bulk)
#combined_sim = cbind(alpha_sim_bulk_noise, beta_sim_bulk_noise)
combined_sim = cbind(alpha_sim_bulk_noise, beta_sim_bulk_noise, delta_sim_bulk_noise)
rownames(combined_sim) = rownames(alpha_sim_bulk)
# Deconvolution
fit = bseqsc_proportions(
as.matrix(combined_sim),
basis,
verbose = TRUE,
log = FALSE,
perm = 500,
absolute = TRUE
)
# Extract results
res_coeff = t(fit$coefficients)
res_coeff[is.na(res_coeff)] = 0.0
res_cor = fit$stats
res_cor[is.na(res_cor)] = 0.0
# Evaluate performance: p-value <= 0.03, correct cell-type predicted
predicted_types = apply(res_coeff, 1, evaluate)
correct = test_cellTypes == predicted_types
sig_prop = apply(res_coeff, 1, evaluate2)
correct = correct & sig_prop
only_one = apply(res_coeff, 1, evaluate3)
correct = correct & only_one
p_value = which(res_cor[,1] > 0.03)
correct[p_value] = FALSE
# Keep track of mean p-value and % predicted correct
results_p = c(results_p, mean(res_cor[,1]))
results_perc = c(results_perc, (sum(correct) / 300))
}
|
library(shiny)
library(tidyverse)
library(here)
library(janitor)
library(fs)
library(reactable)
library(htmltools)
library(httr)
library(bslib)
library(shinyjs)
library(shinyvalidate)
library(shinyWidgets) # updatePickerInput()
source(paste0(here(), "/ud_adp_helpers.R"))
# Default png() to type = "cairo-png" with subpixel antialiasing when the caller sets neither
trace(grDevices:::png, quote({
if (missing(type) && missing(antialias)) {
type <- "cairo-png"
antialias <- "subpixel"
}
}), print = FALSE)
### Get user ADP ###
## Used for local testing ##
# user_data_directory <-
# str_replace_all(paste0(here(), "/user_data"), "/", "\\\\")
#
# user_adp_file <-
# list.files(user_data_directory) %>%
# as_tibble() %>%
# filter(endsWith(value, ".csv")) %>%
# slice(1) %>%
# .$value
## FIX FLEX ALWAYS BEING A TE
server <- function(input, output, session) {
observeEvent(input$upload_csv, {
showModal(modalDialog(
fileInput(
"ud_user_adp",
"Upload CSV",
multiple = FALSE,
accept = c(
"text/csv",
"text/comma-separated-values,text/plain",
".csv"
)
),
title = "Upload UD Exposure",
# tags$href(href="https://www.underdogfantasy.com", "Underdog"),
tags$img(src = "UD_Export_Helper_1.jpg"),
p(),
tags$img(src = "UD_Export_Helper_2.jpg"),
easyClose = TRUE,
footer = NULL
))
})
observeEvent(input$ud_user_adp, {
showElement(id = "player_search")
})
user_upload <- reactive({
input$ud_user_adp
})
output$map <- renderUI({
uu <- user_upload()
if (!is.null(uu)) {
return(NULL)
}
h3(
"How to export your exposure .csv",
p(),
tags$img(src = "UD_Export_Helper_1.jpg"),
p(),
tags$img(src = "UD_Export_Helper_2.jpg"),
p(),
h5("Get it from your email and upload it here!")
)
})
observe({
if (!is.null(user_upload())) {
appendTab(inputId = "user_upload_tabs", tabPanel(
title = "ADP Comparison",
value = "adp_comp",
uiOutput("map"),
shinycssloaders::withSpinner(
reactableOutput("user_adp_table"),
type = 7,
color = "#D9230F",
hide.ui = TRUE
)
))
appendTab(
inputId = "user_upload_tabs",
tabPanel(
"Team by Team",
shinycssloaders::withSpinner(
reactableOutput("team_breakdown"),
type = 7,
color = "#D9230F"
)
)
)
appendTab(
inputId = "user_upload_tabs",
tabPanel(
"Team Builds",
shinycssloaders::withSpinner(
tagList(
reactableOutput("build_explorer"),
plotOutput("build_bargraph")
),
type = 7,
color = "#D9230F"
)
)
)
appendTab(
inputId = "user_upload_tabs",
tabPanel(
"Draft Slots",
shinycssloaders::withSpinner(
plotOutput("draft_slot_frequency"),
type = 7,
color = "#D9230F"
)
)
)
}
})
nfl_team <- reactive({
get_nfl_teams()
})
### Get underdog overall ADP ###
current_ud_adp <- reactive({
x <- GET(
"https://raw.githubusercontent.com/DangyWing/Underdog-ADP/main/data/ud_adp_current.rds",
authenticate(Sys.getenv("GITHUB_PAT_DANGY"), "")
)
### THANK YOU TAN ###
current_ud_adp <-
content(x, as = "raw") %>%
parse_raw_rds() %>%
mutate(
adp = as.double(adp),
teamName = case_when(
teamName == "NY Jets" ~ "New York Jets",
teamName == "NY Giants" ~ "New York Giants",
teamName == "Washington Football Team" ~ "Washington",
TRUE ~ teamName
)
) %>%
left_join(nfl_team(), by = c("teamName" = "full_name"))
})
user_adp_table_data <-
# read_csv(paste0("user_data/DiCatoUDExposure.csv"))
# read_csv(paste0("user_data/cgudbbm.csv"))
reactive({
file <- user_upload()
req(file)
ext <- tools::file_ext(file$datapath)
validate(need(ext == "csv", "Please upload a csv file"))
read_csv(file$datapath) %>%
clean_names()
})
output$ud_adp <- renderReactable({
adp_date <- max(current_ud_adp()$adp_date) %>%
as.Date()
ud_adp_data <-
current_ud_adp() %>%
clean_names() %>%
mutate(
player_name = paste(first_name, last_name),
position_color = case_when(
slot_name == "QB" ~ "#9647B8",
slot_name == "RB" ~ "#15967B",
slot_name == "WR" ~ "#E67E22",
slot_name == "TE" ~ "#2980B9",
TRUE ~ ""
),
team_name = ifelse(is.na(team_name), "FA", team_name),
projected_points = ifelse(is.na(projected_points), 0, projected_points)
) %>%
select(
player_name,
position = slot_name,
team_name,
adp,
projected_points,
position_color,
logo
) %>%
filter(adp >= input$ud_adp_slider[1] &
adp <= input$ud_adp_slider[2])
reactable(ud_adp_data,
columns = list(
player_name = colDef(
name = "Player",
cell = function(value, index) {
pos_color <- ud_adp_data$position_color[index]
pos_color <- if (!is.na(pos_color)) pos_color else "#000000"
tagList(
htmltools::div(
style = list(color = pos_color),
value
)
)
},
minWidth = 240
),
position = colDef(name = "Position"),
team_name = colDef(
name = "Team",
cell = function(value, index) {
image <- img(src = ud_adp_data$logo[index], height = "24px")
tagList(
div(style = list(display = "inline-block", alignItems = "center"), image),
value
)
},
minWidth = 200
),
adp = colDef(
name = "UD ADP",
header = function(value, index) {
          note <- div(style = "color: #666", paste0("(", adp_date, ")"))
tagList(
div(title = value, value),
div(style = list(fontSize = 12), note)
)
}, filterable = FALSE
),
projected_points = colDef(
name = "Underdog<br>Projection",
html = TRUE,
filterable = FALSE,
show = FALSE
),
position_color = colDef(show = FALSE),
logo = colDef(show = FALSE)
),
filterable = TRUE,
highlight = TRUE,
defaultPageSize = 24,
style = list(fontFamily = "Montserrat")
)
})
output$build_explorer <- renderReactable({
cg_table() %>%
ungroup() %>%
mutate(draft_start_date = format(as.Date(draft_start_date, format = "%Y-%Om-%d %H:%M:%S"), format = "%B %e")) %>%
group_by(draft, draft_slot) %>%
summarise(
QB = sum(position == "QB"),
RB = sum(position == "RB"),
WR = sum(position == "WR"),
TE = sum(position == "TE")
) %>%
ungroup() %>%
count(QB, WR, RB, TE, name = "build_count", sort = TRUE) %>%
mutate(build = paste0("QB: ", QB, "\nRB: ", RB, "\nWR: ", WR, "\nTE: ", TE), build_count) %>%
select(QB:TE, build_count) %>%
reactable(
columns = list(
QB = colDef(
name = "QB", headerStyle = "color: #9647B8",
footer = "Total", footerStyle = list(fontWeight = "bold")
),
RB = colDef(name = "RB", headerStyle = "color: #15967B"),
WR = colDef(name = "WR", headerStyle = "color: #E67E22"),
TE = colDef(name = "TE", headerStyle = "color: #2980B9"),
build_count = colDef(
name = "Count",
footer = JS("function(colInfo) {
var total = 0
colInfo.data.forEach(function(row) {
total += row[colInfo.column.id]
})
                 return total.toFixed(0)
}"), filterable = FALSE,
footerStyle = list(fontWeight = "bold")
)
),
filterable = TRUE,
highlight = TRUE,
defaultPageSize = 20,
style = list(fontFamily = "Montserrat"),
defaultColDef = colDef(align = "center")
)
})
output$build_bargraph <- renderPlot({
team_build_count <-
cg_table() %>%
ungroup() %>%
mutate(draft_start_date = format(as.Date(draft_start_date, format = "%Y-%Om-%d %H:%M:%S"), format = "%B %e")) %>%
group_by(draft, draft_number, draft_start_date) %>%
summarise(
QB = sum(position == "QB"),
RB = sum(position == "RB"),
WR = sum(position == "WR"),
TE = sum(position == "TE")
) %>%
ungroup() %>%
count(QB, WR, RB, TE, name = "build_count", sort = TRUE) %>%
transmute(build = paste0("QB: ", QB, "\nRB: ", RB, "\nWR: ", WR, "\nTE: ", TE), build_count) %>%
arrange(desc(build_count))
max_build_count <-
max(team_build_count$build_count)
ggplot(team_build_count, aes(x = reorder(build, -build_count), y = build_count)) +
geom_bar(stat = "identity", fill = "#393939", color = "#393939") +
geom_text(
data = subset(team_build_count, build_count > 1),
aes(label = build_count), vjust = 1.5, color = "White", family = "Roboto", size = 4
) +
hrbrthemes::theme_ipsum_rc() +
theme(
axis.text.x = element_text(angle = 0),
panel.grid.major = element_blank(),
panel.background = element_rect(fill = "#1B1B1B"),
panel.grid.minor.y = element_line(colour = "#E0B400"),
plot.background = element_rect(fill = "#393939"),
axis.title = element_text(color = "#E0B400"),
axis.text = element_text(color = "white", hjust = 0.5),
axis.text.x.bottom = element_text(hjust = .5),
plot.title = element_text(color = "white", size = 14, hjust = 0.5)
) +
scale_y_continuous(expand = c(.01, 0), limits = c(0, max_build_count * 1.15)) +
xlab("Build Type") +
ylab("Count") +
labs(
title = "Underdog Best Ball Build Count"
)
})
output$user_adp_table <- renderReactable({
file <- user_upload()
req(file)
ext <- tools::file_ext(file$datapath)
validate(need(ext == "csv", "Please upload a csv file"))
adp_date <- max(current_ud_adp()$adp_date) %>%
as.Date()
if (!isTruthy(input$player_search)) {
appearance_list <-
user_adp_table_data() %>%
.$appearance
} else {
appearance_list <-
input$player_search
}
user_adp_vs_ud_adp <-
user_adp_table_data() %>%
clean_names() %>%
filter(tournament_title == "Best Ball Mania II") %>%
# filter(appearance %in% appearance_list) %>%
select(id = appearance, picked_at:position, tournament_title, draft_entry) %>%
mutate(picked_at = as.POSIXct(picked_at)) %>%
group_by(draft_entry) %>%
mutate(draft_start_date = min(picked_at)) %>%
ungroup() %>%
arrange(desc(draft_start_date), pick_number) %>%
left_join(current_ud_adp(), by = c("id" = "id")) %>%
as_tibble() %>%
mutate(
max_adp = max(adp, na.rm = TRUE),
adp = coalesce(adp, max_adp)
) %>%
mutate(
adp_delta = pick_number - adp,
player_name = paste(firstName, lastName)
) %>%
select(-c(slotName, lineupStatus, byeWeek, teamName, first_name, last_name, firstName, lastName), pick_datetime = picked_at) %>%
relocate(player_name, position, pick_number, adp, adp_delta, pick_datetime) %>%
mutate(
team_pos = paste0(team, "-", position),
position_color = case_when(
position == "QB" ~ "#9647B8",
position == "RB" ~ "#15967B",
position == "WR" ~ "#E67E22",
position == "TE" ~ "#2980B9",
TRUE ~ ""
),
draft_count = n_distinct(draft_start_date)
) %>%
arrange(desc(draft_start_date), pick_datetime)
parent_data <-
user_adp_vs_ud_adp %>%
select(-pick_datetime) %>%
group_by(id, player_name, position, adp_date, position_color) %>%
summarise(
pick_number = mean(pick_number),
pick_count = n(),
        exposure = n() / max(draft_count), # draft_count is constant per upload
adp = max(adp),
adp_delta = pick_number - adp,
projectedPoints = max(projectedPoints),
team_pos = max(team_pos),
player_name = max(paste0(player_name, " (", pick_count, ")"))
) %>%
ungroup() %>%
distinct() %>%
# relocate(pick_count, .after = player_name) %>%
arrange(desc(pick_count))
reactable(parent_data,
columns = list(
id = colDef(show = FALSE),
team_pos = colDef(show = FALSE),
player_name = colDef(
name = "Player (Pick Count)",
# show team under player name
cell = function(value, index) {
team_pos <- parent_data$team_pos[index]
team_pos <- if (!is.na(team_pos)) team_pos else "Unknown"
position_color <- parent_data$position_color[index]
position_color <- if (!is.na(position_color)) position_color else "#000000"
tagList(
div(style = list(fontWeight = 600), value),
div(style = list(
fontSize = 12,
color = position_color
), team_pos)
)
},
align = "left",
minWidth = 200
),
pick_number = colDef(name = "Avg. ADP", format = colFormat(digits = 1)),
exposure = colDef(
name = "Exposure",
format = colFormat(percent = TRUE, digits = 0)
),
pick_count = colDef(show = FALSE),
position = colDef(show = FALSE),
adp_date = colDef(show = FALSE),
projectedPoints = colDef(name = "Projected Pts."),
position_color = colDef(show = FALSE),
adp = colDef(name = "UD ADP", header = function(value, index) {
          note <- div(style = "color: #666", paste0("(", adp_date, ")"))
tagList(
div(title = value, value),
div(style = list(fontSize = 12), note)
)
}),
adp_delta = colDef(
name = "Pick Value vs. ADP", format = colFormat(digits = 1),
            style = function(value) {
              if (value > 0) {
                color <- "#008000"
              } else if (value < 0) {
                color <- "#e00000"
              } else {
                color <- "#777777"
              }
list(color = color, fontWeight = "bold")
}
)
),
details = function(index) {
player_details <- user_adp_vs_ud_adp[user_adp_vs_ud_adp$id == parent_data$id[index], ]
htmltools::div(
style = "padding: 16px",
reactable(player_details,
columns = list(
pick_number = colDef(name = "Pick"),
adp = colDef(name = "Current ADP"),
uid = colDef(show = FALSE),
team_name = colDef(name = "Team"),
team_short_name = colDef(show = FALSE),
team_color = colDef(show = FALSE),
alternate_color = colDef(show = FALSE),
adp_delta = colDef(
name = "Pick Value vs. ADP", format = colFormat(digits = 1),
                  style = function(value) {
                    if (value > 0) {
                      color <- "#008000"
                    } else if (value < 0) {
                      color <- "#e00000"
                    } else {
                      color <- "#777777"
                    }
list(color = color, fontWeight = "bold")
}
),
pick_datetime = colDef(
format = colFormat(date = TRUE),
name = "Pick Date"
),
draft_start_date = colDef(
format = colFormat(date = TRUE),
name = "Draft Start",
show = FALSE
),
projectedPoints = colDef(show = FALSE),
player_name = colDef(show = FALSE),
position = colDef(show = FALSE),
team = colDef(show = FALSE),
tournament_title = colDef(show = FALSE),
adp_date = colDef(show = FALSE),
max_adp = colDef(show = FALSE),
team_pos = colDef(show = FALSE),
position_color = colDef(show = FALSE),
id = colDef(show = FALSE),
draft_count = colDef(show = FALSE),
team_nickname = colDef(show = FALSE),
logo = colDef(show = FALSE),
draft_entry = colDef(show = FALSE)
),
outlined = FALSE
)
)
},
style = list(fontFamily = "Montserrat"),
theme = reactableTheme(
# Vertically center cells
cellStyle = list(display = "flex", flexDirection = "column", justifyContent = "center")
)
)
})
cg_table <-
reactive({
if (!isTruthy(input$player_search)) {
draft_entry_list <-
user_adp_table_data() %>%
clean_names() %>%
.$draft_entry
} else {
      draft_entry_list <-
        user_adp_table_data() %>%
        clean_names() %>%
        filter(appearance %in% input$player_search) %>%
        .$draft_entry %>%
        unique()
}
user_adp_table_data() %>%
clean_names() %>%
filter(tournament_title == "Best Ball Mania II") %>%
filter(draft_entry %in% draft_entry_list) %>%
group_by(draft_entry) %>%
filter(n() == 18) %>%
mutate(
draft_start_date = min(picked_at),
draft_slot = min(pick_number),
.before = 1
) %>%
ungroup() %>%
mutate(draft_number = dense_rank(draft_start_date), .before = 1) %>%
arrange(draft_start_date, team) %>%
select(!(tournament_entry_fee:draft_pool_size) & !(draft_entry:draft_total_prizes)) %>%
group_by(draft, team) %>%
add_count(team, name = "stack_count") %>%
arrange(desc(stack_count)) %>%
mutate(team_stack = paste0(team, " (", stack_count, ")")) %>%
ungroup() %>%
group_by(draft_number, position) %>%
arrange(pick_number, stack_count) %>%
mutate(position_order = row_number(), .before = 1) %>%
ungroup() %>%
mutate(
stack_num = dense_rank(desc(stack_count)),
draft_display_label = case_when(
(position == "QB" & position_order == 1) ~ "QB",
(position == "RB" & position_order <= 2) ~ "RB",
(position == "WR" & position_order <= 3) ~ "WR",
(position == "TE" & position_order == 1) ~ "TE",
TRUE ~ "BE"
),
position_color = case_when(
position == "QB" ~ "#9647B8",
position == "RB" ~ "#15967B",
position == "WR" ~ "#E67E22",
position == "TE" ~ "#2980B9",
TRUE ~ ""
)
) %>%
group_by(draft_number, draft_display_label) %>%
arrange(draft_number, draft_display_label, position_order) %>%
mutate(
bench_order = row_number(),
bench_order = case_when(
(position == "QB" & bench_order == 1) ~ as.double(bench_order) + 1,
TRUE ~ as.double(bench_order)
)
) %>%
mutate(draft_display_label = case_when(
(bench_order == min(bench_order) &
draft_display_label == "BE" & position != "QB")
~ "FLEX-1",
TRUE ~ draft_display_label
)) %>%
ungroup() %>%
mutate(
position_sortorder = case_when(
draft_display_label == "QB" ~ 1,
draft_display_label == "RB" ~ 2,
draft_display_label == "WR" ~ 3,
draft_display_label == "TE" ~ 4,
str_detect(draft_display_label, "FLEX") ~ 5,
TRUE ~ 6
),
draft_display_label = case_when(
draft_display_label == "BE" ~ paste0("BE-", bench_order),
str_detect(draft_display_label, "FLEX") ~ draft_display_label,
TRUE ~ paste0(draft_display_label, "-", position_order)
)
) %>%
group_by(draft_number, draft_display_label) %>%
arrange(draft_number, draft_display_label, position_order) %>%
mutate(bench_order = row_number()) %>%
select(
draft_display_label, appearance, picked_at, pick_number, first_name, last_name, team, position, team_stack,
stack_num, position_sortorder, position_color, draft, draft_number, position_order, bench_order, draft_slot,
draft_start_date
) %>%
arrange(
draft_number,
position_sortorder,
pick_number
)
})
output$team_breakdown <- renderReactable({
reactable(cg_final2(),
defaultColDef = (
colDef(
align = "center",
width = 150,
style =
function(value) {
list(color = case_player_position_color(value))
},
header =
function(value) {
value
},
          html = TRUE
)
),
columns = list(
draft_display_label = colDef(
name = "",
style = list(position = "sticky", background = "#fff", left = 0, zIndex = 1),
width = 65
)
),
pagination = FALSE,
searchable = FALSE,
sortable = FALSE,
compact = TRUE,
style = list(fontFamily = "Montserrat"),
theme = reactableTheme(
# Vertically center cells
cellStyle = list(
fontSize = "12px", display = "flex", flexDirection = "column", justifyContent = "center",
align = "center"
)
)
)
})
cg_final2 <-
reactive({
file <- user_upload()
req(file)
ext <- tools::file_ext(file$datapath)
validate(need(ext == "csv", "Please upload a csv file"))
data <- read_csv(file$datapath) %>%
clean_names() %>%
filter(tournament_title == "Best Ball Mania II")
### Get the drafts where the user selected players they're searching for ###
if (!isTruthy(input$player_search)) {
draft_entry_list <-
user_adp_table_data() %>%
.$draft_entry %>%
unique()
} else {
draft_entry_list <-
user_adp_table_data() %>%
filter(appearance %in% input$player_search) %>%
.$draft_entry %>%
unique()
}
cg_table_pivot <-
cg_table() %>%
mutate(
player_name = paste(first_name, last_name),
draft_start_date = format(as.Date(draft_start_date,
format = "%Y-%Om-%d %H:%M:%S"
), format = "%B %e")
) %>%
arrange(position_sortorder, draft_number, pick_number, bench_order) %>%
select(draft_number, player_name, draft_display_label, draft_start_date) %>%
pivot_wider(
names_from = c(draft_number, draft_start_date),
id_cols = c(draft_display_label),
names_glue = "Draft# { draft_number }
{ draft_start_date }",
values_from = player_name
) %>%
mutate(draft_display_label = word(draft_display_label, 1, sep = "-"))
cg_table_filtered <-
cg_table() %>%
ungroup() %>%
mutate(
player_name = paste(first_name, last_name),
draft_start_date = format(as.Date(draft_start_date, format = "%Y-%Om-%d %H:%M:%S"), format = "%B %e"),
team_stack = case_when(
str_detect(team_stack, "1") ~ NA_character_,
TRUE ~ team_stack
)
) %>%
distinct(draft_number, team_stack, .keep_all = TRUE) %>%
filter(!is.na(team_stack)) %>%
arrange(desc(str_extract(team_stack, "\\d")), draft_number, stack_num) %>%
select(draft_number, team_stack, draft_display_label, draft_start_date) %>%
pivot_wider(
names_from = c(draft_number, draft_start_date),
names_glue = "Draft# { draft_number }
{ draft_start_date }",
values_from = team_stack
) %>%
select(-draft_display_label) %>%
        # Bubble non-NA stack labels upward and de-duplicate (pass repeated five times)
        fill(everything(), .direction = "up") %>%
mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
filter(if_any(everything(), ~ !is.na(.x))) %>%
fill(everything(), .direction = "up") %>%
mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
filter(if_any(everything(), ~ !is.na(.x))) %>%
fill(everything(), .direction = "up") %>%
mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
filter(if_any(everything(), ~ !is.na(.x))) %>%
fill(everything(), .direction = "up") %>%
mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
filter(if_any(everything(), ~ !is.na(.x))) %>%
fill(everything(), .direction = "up") %>%
mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
filter(if_any(everything(), ~ !is.na(.x))) %>%
add_column(draft_display_label = "Stack", .before = 1) %>%
mutate(draft_display_label = ifelse(row_number() == 1, "Stack", NA_character_))
cg_table_draft_slot <-
cg_table() %>%
ungroup() %>%
mutate(draft_start_date = format(as.Date(draft_start_date, format = "%Y-%Om-%d %H:%M:%S"), format = "%B %e")) %>%
select(draft_slot, draft_number, draft_start_date) %>%
distinct() %>%
pivot_wider(
names_from = c(draft_number, draft_start_date),
names_glue = "Draft# { draft_number }
{ draft_start_date }",
values_from = draft_slot,
values_fn = as.character
) %>%
add_column(draft_display_label = "Draft Slot", .before = 1)
bbm_playoff_stacks <-
cg_table() %>%
ungroup() %>%
mutate(draft_start_date = format(as.Date(draft_start_date, format = "%Y-%Om-%d %H:%M:%S"), format = "%B %e")) %>%
group_by(draft, team, draft_number, draft_start_date) %>%
summarise(stack_count = n()) %>%
inner_join(bbm_playoff_matchups_team_by_team(), by = c("team" = "team_short_name")) %>%
group_by(draft, draft_number, draft_start_date) %>%
#filter(opp %in% team) %>%
group_by(game_id, roof_emoji, week, draft_number, draft_start_date) %>%
mutate(stack_total = sum(stack_count)) %>%
separate(game_id, into = c("season", "week", "team_1", "team_2"), sep = "_") %>%
select(-c("week", "season")) %>%
mutate(
bbm_playoff_stack = ifelse(stack_count > 1,
paste0("Week: ", week, " ", roof_emoji, "<br>", team_1, "-", team_2, " (", stack_total, ")"),
NA_character_
)
) %>%
select(stack_total, bbm_playoff_stack, draft_number, draft_start_date, draft, week, team) %>%
distinct() %>%
arrange(draft_number, desc(stack_total), week) %>%
ungroup() %>%
pivot_wider(
id_cols = c(week, team, stack_total),
names_from = c(draft_number, draft_start_date),
names_glue = "Draft# { draft_number }
{ draft_start_date }",
values_from = bbm_playoff_stack
) %>%
select(-c(week, team, stack_total)) %>%
distinct(.keep_all = TRUE) %>%
sort_na_to_bottom_colwise() %>%
mutate(draft_display_label = case_when(
row_number() == 1 ~ NA_character_,
row_number() == 2 ~ "BBM II",
row_number() == 3 ~ "Playoff",
row_number() == 4 ~ "Game",
row_number() == 5 ~ "Stacks",
TRUE ~ NA_character_
), .before = 1)
bind_rows(cg_table_pivot, cg_table_filtered, cg_table_draft_slot, bbm_playoff_stacks)
})
output$draft_slot_frequency <- renderPlot({
expected_frequency <-
cg_table() %>%
ungroup() %>%
distinct(draft_slot, draft) %>%
summarise(expected_frequency = round(n() / 12, 1)) %>%
.$expected_frequency
cg_table() %>%
ungroup() %>%
distinct(draft_slot, draft) %>%
select(draft_slot) %>%
ggplot(aes(draft_slot)) +
geom_bar(fill = "#DDB423", color = "#DDB423") +
geom_hline(
yintercept = expected_frequency, size = 2,
alpha = .3, color = "white",
linetype = "longdash"
) +
      geom_text(stat = "count", aes(label = after_stat(count)), vjust = 1.5, color = "#393939") +
geom_text(
data = data.frame(x = 5, y = expected_frequency), aes(x, y),
        label = "Expected Frequency",
vjust = -1,
color = "white",
alpha = .5,
size = 8
) +
hrbrthemes::theme_ipsum_rc() +
expand_limits(x = c(0, 12)) +
scale_x_continuous(breaks = 1:12) +
xlab("Draft Slot") +
theme(
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
axis.text.y = element_blank(),
axis.title.y = element_blank(),
panel.background = element_rect(fill = "#393939")
)
})
nfl_schedule <- reactive({
read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv"))
})
observeEvent(input$ud_user_adp,
{
file <- user_upload()
req(file)
ext <- tools::file_ext(file$datapath)
validate(need(ext == "csv", "Please upload a csv file"))
appearance_list_choices <- read_csv(file$datapath) %>%
clean_names() %>%
filter(tournament_title == "Best Ball Mania II") %>%
transmute(player_name = paste(first_name, last_name), id = appearance) %>%
distinct() %>%
deframe()
updatePickerInput(
session = session, inputId = "player_search",
choices = appearance_list_choices
)
},
ignoreInit = TRUE
)
observe({
if (isTruthy(input$ud_user_adp)) {
Sys.sleep(.25)
# enable the download button
shinyjs::enable("data_file")
# change the html of the download button
shinyjs::html(
"data_file",
sprintf("<i class='fa fa-download'></i>
Download Ready")
)
}
})
output$data_file <- downloadHandler(
filename = function() {
paste("TeamAnalysis-", Sys.Date(), ".csv", sep = "")
},
content = function(file) {
write.csv(cg_final2(), file)
}
)
  # disable the download button on page load
shinyjs::disable("data_file")
user_adp_filter <- reactive({
as.numeric(input$user_filter_adp)
})
user_filter_position <- reactive({
input$current_draft_stack_position
})
user_filter_team <- reactive({
input$current_draft_teams
})
bbm_playoff_matchups_team_by_team <- reactive({
nfl_schedule() %>%
# read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv")) %>%
filter(
#home_team %in% "ARI" |
(home_team %in% user_filter_team() |
# away_team %in% "ARI",
away_team %in% user_filter_team()),
season == 2021,
week > 14,
week < 18
) %>%
select(home_team, away_team, roof, week, game_id) %>%
bind_rows(
nfl_schedule() %>%
# read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv")) %>%
select(home_team, away_team, season, week) %>%
filter(
# home_team %in% "ARI",
(home_team %in% user_filter_team()),
season == 2021,
week > 14,
week < 18
) %>%
rename(home_team = away_team, away_team = home_team)
) %>%
mutate(roof = ifelse((roof == "closed" | is.na(roof)), "retractable", roof)) %>%
select(team = home_team, opp = away_team, roof, week, game_id) %>%
left_join(get_nfl_teams(), by = c("team" = "team_short_name")) %>%
select(team, main_team_name = team_name, opp, roof, week, game_id) %>%
left_join(get_nfl_teams(), by = c("opp" = "team_short_name")) %>%
select(team, main_team_name, opp, opp_team_name = team_name, roof, week, game_id) %>%
      filter(!(opp %in% user_filter_team())) %>%
left_join(
current_ud_adp(),
#current_ud_adp(),
by = c("opp" = "team_short_name")
) %>%
filter(
as.double(adp) + 12 >= as.double(user_adp_filter()),
slotName %in% user_filter_position(),
(team %in% user_filter_team() |
opp %in% user_filter_team())
# as.double(adp) + 12 >= as.double(user_adp_filter,
# slotName %in% user_filter_position,
# (team %in% user_filter_team |
# opp %in% user_filter_team)
) %>%
transmute(
team_name,
game_id,
player = paste(firstName, lastName),
slotName, opp, week, adp, logo,
roof_emoji = case_when(
roof == "retractable" ~ "\U00026C5",
roof == "dome" ~ "\U2600",
roof == "outdoors" ~ "\U1F327"
),
id
) %>%
arrange(adp) %>%
inner_join(get_nfl_teams(), by = c("team_name" = "team_name"))
})
bbm_playoff_matchups_explorer <- reactive({
nfl_schedule() %>%
# read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv")) %>%
filter(
#home_team %in% "ARI" |
(home_team %in% user_filter_team() |
# away_team %in% "ARI",
away_team %in% user_filter_team()),
season == 2021,
week > 14,
week < 18
) %>%
select(home_team, away_team, roof, week, game_id) %>%
bind_rows(
nfl_schedule() %>%
# read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv")) %>%
select(home_team, away_team, season, week) %>%
filter(
# home_team %in% "ARI",
(home_team %in% user_filter_team() |
# away_team %in% "ARI",
away_team %in% user_filter_team()),
season == 2021,
week > 14,
week < 18
) %>%
rename(home_team = away_team, away_team = home_team)
) %>%
mutate(roof = ifelse((roof == "closed" | is.na(roof)), "retractable", roof)) %>%
select(team = home_team, opp = away_team, roof, week, game_id) %>%
left_join(get_nfl_teams(), by = c("team" = "team_short_name")) %>%
select(team, main_team_name = team_name, opp, roof, week, game_id) %>%
left_join(get_nfl_teams(), by = c("opp" = "team_short_name")) %>%
select(team, main_team_name, opp, opp_team_name = team_name, roof, week, game_id) %>%
left_join(
#current_ud_adp,
current_ud_adp(),
by = c("opp" = "team_short_name")
) %>%
filter(
as.double(adp) + 12 >= as.double(user_adp_filter()),
slotName %in% user_filter_position(),
(team %in% user_filter_team() |
opp %in% user_filter_team())
# as.double(adp) + 12 >= as.double(user_adp_filter,
# slotName %in% user_filter_position,
# (team %in% user_filter_team |
# opp %in% user_filter_team)
) %>%
transmute(
team_name,
player = paste(firstName, lastName),
slotName, opp, week, adp, logo,
roof_emoji = case_when(
roof == "retractable" ~ "\U00026C5",
roof == "dome" ~ "\U2600",
roof == "outdoors" ~ "\U1F327"
),
id
) %>%
arrange(adp)
})
output$in_draft_helper_table <- renderReactable({
req(
user_adp_filter(),
user_filter_team(),
user_filter_position()
)
position_color <-
bbm_playoff_matchups_explorer() %>%
mutate(
position_color = case_when(
slotName == "QB" ~ "#9647B8",
slotName == "RB" ~ "#15967B",
slotName == "WR" ~ "#E67E22",
slotName == "TE" ~ "#2980B9",
TRUE ~ ""
)
)
adp_date <-
as.Date(min(current_ud_adp()$adp_date)) %>%
format("%m/%d/%y") %>%
str_remove_all("0")
reactable(
bbm_playoff_matchups_explorer(),
columns = list(
id = colDef(show = FALSE),
player = colDef(
name = "Player",
cell = function(value, index) {
pos_color <- position_color$position_color[index]
pos_color <- if (!is.na(pos_color)) pos_color else "#000000"
tagList(
htmltools::div(
style = list(color = pos_color),
value
)
)
},
minWidth = 240
),
slotName = colDef(name = "Position"),
team_name = colDef(
name = "Team",
cell = function(value, index) {
image <- img(src = position_color$logo[index], height = "20px")
tagList(
div(style = list(display = "inline-block", alignItems = "center"), image),
" ", value
)
},
minWidth = 200,
align = "center"
),
opp = colDef(
show = FALSE
),
roof_emoji = colDef(
name = "Roof",
html = TRUE,
filterable = FALSE,
align = "center"
),
week = colDef(
name = "Week",
align = "center"
),
adp = colDef(
name = "UD ADP",
header = function(value, index) {
note <- div(style = "color: #666", paste("("), adp_date, ")")
tagList(
div(title = value, value),
div(style = list(fontSize = 12), note)
)
}, filterable = FALSE,
align = "center"
),
logo = colDef(
show = FALSE
)
),
filterable = FALSE,
highlight = TRUE,
style = list(fontFamily = "Montserrat")
)
})
output$in_draft_helper_plot <- renderPlot({
req(
user_adp_filter(),
user_filter_team(),
user_filter_position()
)
total_adp <- GET(
"https://raw.githubusercontent.com/DangyWing/Underdog-ADP/main/data/ud_adp_total.rds",
authenticate(Sys.getenv("GITHUB_PAT_DANGY"), "")
)
ud_adp_total <-
content(total_adp, as = "raw") %>%
parse_raw_rds() %>%
mutate(
adp = as.double(adp),
teamName = case_when(
teamName == "NY Jets" ~ "New York Jets",
teamName == "NY Giants" ~ "New York Giants",
teamName == "Washington Football Team" ~ "Washington",
TRUE ~ teamName
)
) %>%
left_join(nfl_team(), by = c("teamName" = "full_name"))
relevant_players <-
reactive({
ud_adp_total %>%
filter(
slotName %in% user_filter_position()
# ,team_short_name %in% user_filter_team()
) %>%
arrange(desc(adp_date)) %>%
group_by(id) %>%
filter(!is.na(adp)) %>%
summarize(
min_adp = min(adp),
max_adp = max(adp),
most_recent_draft_adp = ifelse(row_number() == 1, adp, 60000)
) %>%
ungroup() %>%
filter(
most_recent_draft_adp + 12 >= as.double(user_adp_filter()),
most_recent_draft_adp - 50 <= as.double(user_adp_filter())
) %>%
distinct()
})
bbm_playoff_matchups_explorer() %>%
inner_join(ud_adp_total %>%
filter(id %in% relevant_players()$id),
by = c("team_name", "slotName", "logo", "id")
) %>%
select(-adp.x) %>%
mutate(adp = adp.y) %>%
distinct() %>%
mutate(
adp_date = as.Date(adp_date),
player = paste0(player, " (", opp, ")")
) %>%
ggplot2::ggplot(aes(x = as.Date(adp_date), y = adp, group = id, color = player)) +
ggplot2::geom_line() +
ggplot2::geom_hline(yintercept = user_adp_filter(), alpha = .4, colour = "#E0B400") +
ggplot2::annotate("text",
x = median(as.Date(ud_adp_total$adp_date)),
y = user_adp_filter(),
vjust = 1.5,
label = "Upcoming pick",
alpha = .7, colour = "#E0B400",
size = 8
) +
# geom_blank(aes(y = adp * 1.05)) +
hrbrthemes::theme_ft_rc() +
theme(
panel.grid.major.x = element_blank(),
panel.grid.minor.x = element_blank(),
axis.title.y = element_text(color = "#E0B400", size = 18, face ="bold"),
axis.title.x = element_text(color = "#E0B400", size = 18, face ="bold"),
axis.text.x.bottom = element_text(color = "white", hjust = 1, angle = 45, size = 18),
axis.text.y.left = element_text(color = "white", hjust = 1, size = 18),
plot.title = element_text(color = "white", size = 14, hjust = 0.5),
legend.title = element_blank(),
legend.text = element_text(size = 14)
) +
scale_x_date(date_labels = "%b %d", date_breaks = "3 days") +
scale_y_continuous(trans = "reverse") +
# scale_color_manual(values = c(
# "#175FC7",
# "#E67E22",
# "#6CBE45",
# "#0FC8D4",
# "#D0402E",
# "#9647B8",
# "#2980B9",
# "#15997E"
# )) +
xlab("Date") +
ylab("ADP")
})
}
# source: tomdicato/UnderdogADPAnalyzer — /server.R (R, no license, 41,305 bytes)
library(shiny)
library(tidyverse)
library(here)
library(janitor)
library(fs)
library(reactable)
library(htmltools)
library(httr)
library(bslib)
library(shinyjs)
library(shinyvalidate)
source(paste0(here(), "/ud_adp_helpers.R"))
trace(grDevices:::png, quote({
if (missing(type) && missing(antialias)) {
type <- "cairo-png"
antialias <- "subpixel"
}
}), print = FALSE)
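# A lighter-weight alternative to tracing grDevices:::png for anti-aliased
# plots is shiny's built-in Cairo option (sketch only, not wired in; assumes
# the Cairo package is installed and isn't an exact equivalent of the
# subpixel-antialias trace above):
# options(shiny.usecairo = TRUE)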
### Get user ADP ###
## Used for local testing ##
# user_data_directory <-
# str_replace_all(paste0(here(), "/user_data"), "/", "\\\\")
#
# user_adp_file <-
# list.files(user_data_directory) %>%
# as_tibble() %>%
# filter(endsWith(value, ".csv")) %>%
# slice(1) %>%
# .$value
## FIX FLEX ALWAYS BEING A TE
server <- function(input, output, session) {
observeEvent(input$upload_csv, {
showModal(modalDialog(
fileInput(
"ud_user_adp",
"Upload CSV",
multiple = FALSE,
accept = c(
"text/csv",
"text/comma-separated-values,text/plain",
".csv"
)
),
title = "Upload UD Exposure",
# tags$href(href="https://www.underdogfantasy.com", "Underdog"),
tags$img(src = "UD_Export_Helper_1.jpg"),
p(),
tags$img(src = "UD_Export_Helper_2.jpg"),
easyClose = TRUE,
footer = NULL
))
})
observeEvent(input$ud_user_adp, {
showElement(id = "player_search")
})
user_upload <- reactive({
input$ud_user_adp
})
output$map <- renderUI({
uu <- user_upload()
if (!is.null(uu)) {
return(NULL)
}
h3(
"How to export your exposure .csv",
p(),
tags$img(src = "UD_Export_Helper_1.jpg"),
p(),
tags$img(src = "UD_Export_Helper_2.jpg"),
p(),
h5("Get it from your email and upload it here!")
)
})
observe({
if (!is.null(user_upload())) {
appendTab(inputId = "user_upload_tabs", tabPanel(
title = "ADP Comparison",
value = "adp_comp",
uiOutput("map"),
shinycssloaders::withSpinner(
reactableOutput("user_adp_table"),
type = 7,
color = "#D9230F",
hide.ui = TRUE
)
))
appendTab(
inputId = "user_upload_tabs",
tabPanel(
"Team by Team",
shinycssloaders::withSpinner(
reactableOutput("team_breakdown"),
type = 7,
color = "#D9230F"
)
)
)
appendTab(
inputId = "user_upload_tabs",
tabPanel(
"Team Builds",
shinycssloaders::withSpinner(
tagList(
reactableOutput("build_explorer"),
plotOutput("build_bargraph")
),
type = 7,
color = "#D9230F"
)
)
)
appendTab(
inputId = "user_upload_tabs",
tabPanel(
"Draft Slots",
shinycssloaders::withSpinner(
plotOutput("draft_slot_frequency"),
type = 7,
color = "#D9230F"
)
)
)
}
})
nfl_team <- reactive({
get_nfl_teams()
})
### Get underdog overall ADP ###
current_ud_adp <- reactive({
x <- GET(
"https://raw.githubusercontent.com/DangyWing/Underdog-ADP/main/data/ud_adp_current.rds",
authenticate(Sys.getenv("GITHUB_PAT_DANGY"), "")
)
### THANK YOU TAN ###
current_ud_adp <-
content(x, as = "raw") %>%
parse_raw_rds() %>%
mutate(
adp = as.double(adp),
teamName = case_when(
teamName == "NY Jets" ~ "New York Jets",
teamName == "NY Giants" ~ "New York Giants",
teamName == "Washington Football Team" ~ "Washington",
TRUE ~ teamName
)
) %>%
left_join(nfl_team(), by = c("teamName" = "full_name"))
})
user_adp_table_data <-
# read_csv(paste0("user_data/DiCatoUDExposure.csv"))
# read_csv(paste0("user_data/cgudbbm.csv"))
reactive({
file <- user_upload()
req(file)
ext <- tools::file_ext(file$datapath)
validate(need(ext == "csv", "Please upload a csv file"))
read_csv(file$datapath) %>%
clean_names()
})
output$ud_adp <- renderReactable({
adp_date <- max(current_ud_adp()$adp_date) %>%
as.Date()
ud_adp_data <-
current_ud_adp() %>%
clean_names() %>%
mutate(
player_name = paste(first_name, last_name),
position_color = case_when(
slot_name == "QB" ~ "#9647B8",
slot_name == "RB" ~ "#15967B",
slot_name == "WR" ~ "#E67E22",
slot_name == "TE" ~ "#2980B9",
TRUE ~ ""
),
team_name = ifelse(is.na(team_name), "FA", team_name),
projected_points = ifelse(is.na(projected_points), 0, projected_points)
) %>%
select(
player_name,
position = slot_name,
team_name,
adp,
projected_points,
position_color,
logo
) %>%
filter(adp >= input$ud_adp_slider[1] &
adp <= input$ud_adp_slider[2])
reactable(ud_adp_data,
columns = list(
player_name = colDef(
name = "Player",
cell = function(value, index) {
pos_color <- ud_adp_data$position_color[index]
pos_color <- if (!is.na(pos_color)) pos_color else "#000000"
tagList(
htmltools::div(
style = list(color = pos_color),
value
)
)
},
minWidth = 240
),
position = colDef(name = "Position"),
team_name = colDef(
name = "Team",
cell = function(value, index) {
image <- img(src = ud_adp_data$logo[index], height = "24px")
tagList(
div(style = list(display = "inline-block", alignItems = "center"), image),
value
)
},
minWidth = 200
),
adp = colDef(
name = "UD ADP",
header = function(value, index) {
            note <- div(style = "color: #666", paste0("(", adp_date, ")"))
tagList(
div(title = value, value),
div(style = list(fontSize = 12), note)
)
}, filterable = FALSE
),
projected_points = colDef(
name = "Underdog<br>Projection",
html = TRUE,
filterable = FALSE,
show = FALSE
),
position_color = colDef(show = FALSE),
logo = colDef(show = FALSE)
),
filterable = TRUE,
highlight = TRUE,
defaultPageSize = 24,
style = list(fontFamily = "Montserrat")
)
})
output$build_explorer <- renderReactable({
cg_table() %>%
ungroup() %>%
mutate(draft_start_date = format(as.Date(draft_start_date, format = "%Y-%Om-%d %H:%M:%S"), format = "%B %e")) %>%
group_by(draft, draft_slot) %>%
summarise(
QB = sum(position == "QB"),
RB = sum(position == "RB"),
WR = sum(position == "WR"),
TE = sum(position == "TE")
) %>%
ungroup() %>%
count(QB, WR, RB, TE, name = "build_count", sort = TRUE) %>%
mutate(build = paste0("QB: ", QB, "\nRB: ", RB, "\nWR: ", WR, "\nTE: ", TE), build_count) %>%
select(QB:TE, build_count) %>%
reactable(
columns = list(
QB = colDef(
name = "QB", headerStyle = "color: #9647B8",
footer = "Total", footerStyle = list(fontWeight = "bold")
),
RB = colDef(name = "RB", headerStyle = "color: #15967B"),
WR = colDef(name = "WR", headerStyle = "color: #E67E22"),
TE = colDef(name = "TE", headerStyle = "color: #2980B9"),
build_count = colDef(
name = "Count",
footer = JS("function(colInfo) {
var total = 0
colInfo.data.forEach(function(row) {
total += row[colInfo.column.id]
})
                      return total.toFixed(0)
}"), filterable = FALSE,
footerStyle = list(fontWeight = "bold")
)
),
filterable = TRUE,
highlight = TRUE,
defaultPageSize = 20,
style = list(fontFamily = "Montserrat"),
defaultColDef = colDef(align = "center")
)
})
output$build_bargraph <- renderPlot({
team_build_count <-
cg_table() %>%
ungroup() %>%
mutate(draft_start_date = format(as.Date(draft_start_date, format = "%Y-%Om-%d %H:%M:%S"), format = "%B %e")) %>%
group_by(draft, draft_number, draft_start_date) %>%
summarise(
QB = sum(position == "QB"),
RB = sum(position == "RB"),
WR = sum(position == "WR"),
TE = sum(position == "TE")
) %>%
ungroup() %>%
count(QB, WR, RB, TE, name = "build_count", sort = TRUE) %>%
transmute(build = paste0("QB: ", QB, "\nRB: ", RB, "\nWR: ", WR, "\nTE: ", TE), build_count) %>%
arrange(desc(build_count))
max_build_count <-
max(team_build_count$build_count)
ggplot(team_build_count, aes(x = reorder(build, -build_count), y = build_count)) +
geom_bar(stat = "identity", fill = "#393939", color = "#393939") +
geom_text(
data = subset(team_build_count, build_count > 1),
aes(label = build_count), vjust = 1.5, color = "White", family = "Roboto", size = 4
) +
hrbrthemes::theme_ipsum_rc() +
theme(
axis.text.x = element_text(angle = 0),
panel.grid.major = element_blank(),
panel.background = element_rect(fill = "#1B1B1B"),
panel.grid.minor.y = element_line(colour = "#E0B400"),
plot.background = element_rect(fill = "#393939"),
axis.title = element_text(color = "#E0B400"),
axis.text = element_text(color = "white", hjust = 0.5),
axis.text.x.bottom = element_text(hjust = .5),
plot.title = element_text(color = "white", size = 14, hjust = 0.5)
) +
scale_y_continuous(expand = c(.01, 0), limits = c(0, max_build_count * 1.15)) +
xlab("Build Type") +
ylab("Count") +
labs(
title = "Underdog Best Ball Build Count"
)
})
output$user_adp_table <- renderReactable({
file <- user_upload()
req(file)
ext <- tools::file_ext(file$datapath)
validate(need(ext == "csv", "Please upload a csv file"))
adp_date <- max(current_ud_adp()$adp_date) %>%
as.Date()
if (!isTruthy(input$player_search)) {
appearance_list <-
user_adp_table_data() %>%
.$appearance
} else {
appearance_list <-
input$player_search
}
user_adp_vs_ud_adp <-
user_adp_table_data() %>%
clean_names() %>%
filter(tournament_title == "Best Ball Mania II") %>%
# filter(appearance %in% appearance_list) %>%
select(id = appearance, picked_at:position, tournament_title, draft_entry) %>%
mutate(picked_at = as.POSIXct(picked_at)) %>%
group_by(draft_entry) %>%
mutate(draft_start_date = min(picked_at)) %>%
ungroup() %>%
arrange(desc(draft_start_date), pick_number) %>%
left_join(current_ud_adp(), by = c("id" = "id")) %>%
as_tibble() %>%
mutate(
max_adp = max(adp, na.rm = TRUE),
adp = coalesce(adp, max_adp)
) %>%
mutate(
adp_delta = pick_number - adp,
player_name = paste(firstName, lastName)
) %>%
select(-c(slotName, lineupStatus, byeWeek, teamName, first_name, last_name, firstName, lastName), pick_datetime = picked_at) %>%
relocate(player_name, position, pick_number, adp, adp_delta, pick_datetime) %>%
mutate(
team_pos = paste0(team, "-", position),
position_color = case_when(
position == "QB" ~ "#9647B8",
position == "RB" ~ "#15967B",
position == "WR" ~ "#E67E22",
position == "TE" ~ "#2980B9",
TRUE ~ ""
),
draft_count = n_distinct(draft_start_date)
) %>%
arrange(desc(draft_start_date), pick_datetime)
parent_data <-
user_adp_vs_ud_adp %>%
select(-pick_datetime) %>%
group_by(id, player_name, position, adp_date, position_color) %>%
summarise(
pick_number = mean(pick_number),
pick_count = n(),
exposure = n() / draft_count,
adp = max(adp),
adp_delta = pick_number - adp,
projectedPoints = max(projectedPoints),
team_pos = max(team_pos),
player_name = max(paste0(player_name, " (", pick_count, ")"))
) %>%
ungroup() %>%
distinct() %>%
# relocate(pick_count, .after = player_name) %>%
arrange(desc(pick_count))
reactable(parent_data,
columns = list(
id = colDef(show = FALSE),
team_pos = colDef(show = FALSE),
player_name = colDef(
name = "Player (Pick Count)",
# show team under player name
cell = function(value, index) {
team_pos <- parent_data$team_pos[index]
team_pos <- if (!is.na(team_pos)) team_pos else "Unknown"
position_color <- parent_data$position_color[index]
position_color <- if (!is.na(position_color)) position_color else "#000000"
tagList(
div(style = list(fontWeight = 600), value),
div(style = list(
fontSize = 12,
color = position_color
), team_pos)
)
},
align = "left",
minWidth = 200
),
pick_number = colDef(name = "Avg. ADP", format = colFormat(digits = 1)),
exposure = colDef(
name = "Exposure",
format = colFormat(percent = TRUE, digits = 0)
),
pick_count = colDef(show = FALSE),
position = colDef(show = FALSE),
adp_date = colDef(show = FALSE),
projectedPoints = colDef(name = "Projected Pts."),
position_color = colDef(show = FALSE),
adp = colDef(name = "UD ADP", header = function(value, index) {
          note <- div(style = "color: #666", paste0("(", adp_date, ")"))
tagList(
div(title = value, value),
div(style = list(fontSize = 12), note)
)
}),
adp_delta = colDef(
name = "Pick Value vs. ADP", format = colFormat(digits = 1),
style = function(value) {
if (value > 0) {
color <- "#008000"
} else if (value < 0) {
color <- "#e00000"
} else if (value == 0) {
              color <- "#777777"
}
list(color = color, fontWeight = "bold")
}
)
),
details = function(index) {
player_details <- user_adp_vs_ud_adp[user_adp_vs_ud_adp$id == parent_data$id[index], ]
htmltools::div(
style = "padding: 16px",
reactable(player_details,
columns = list(
pick_number = colDef(name = "Pick"),
adp = colDef(name = "Current ADP"),
uid = colDef(show = FALSE),
team_name = colDef(name = "Team"),
team_short_name = colDef(show = FALSE),
team_color = colDef(show = FALSE),
alternate_color = colDef(show = FALSE),
adp_delta = colDef(
name = "Pick Value vs. ADP", format = colFormat(digits = 1),
style = function(value) {
if (value > 0) {
color <- "#008000"
} else if (value < 0) {
color <- "#e00000"
} else if (value == 0) {
                      color <- "#777777"
}
list(color = color, fontWeight = "bold")
}
),
pick_datetime = colDef(
format = colFormat(date = TRUE),
name = "Pick Date"
),
draft_start_date = colDef(
format = colFormat(date = TRUE),
name = "Draft Start",
show = FALSE
),
projectedPoints = colDef(show = FALSE),
player_name = colDef(show = FALSE),
position = colDef(show = FALSE),
team = colDef(show = FALSE),
tournament_title = colDef(show = FALSE),
adp_date = colDef(show = FALSE),
max_adp = colDef(show = FALSE),
team_pos = colDef(show = FALSE),
position_color = colDef(show = FALSE),
id = colDef(show = FALSE),
draft_count = colDef(show = FALSE),
team_nickname = colDef(show = FALSE),
logo = colDef(show = FALSE),
draft_entry = colDef(show = FALSE)
),
outlined = FALSE
)
)
},
style = list(fontFamily = "Montserrat"),
theme = reactableTheme(
# Vertically center cells
cellStyle = list(display = "flex", flexDirection = "column", justifyContent = "center")
)
)
})
cg_table <-
reactive({
if (!isTruthy(input$player_search)) {
draft_entry_list <-
user_adp_table_data() %>%
clean_names() %>%
.$draft_entry
} else {
draft_entry_list <-
user_adp_table_data() %>%
          clean_names() %>%
filter(appearance %in% input$player_search) %>%
.$draft_entry %>%
unique()
}
user_adp_table_data() %>%
clean_names() %>%
filter(tournament_title == "Best Ball Mania II") %>%
filter(draft_entry %in% draft_entry_list) %>%
group_by(draft_entry) %>%
filter(n() == 18) %>%
mutate(
draft_start_date = min(picked_at),
draft_slot = min(pick_number),
.before = 1
) %>%
ungroup() %>%
mutate(draft_number = dense_rank(draft_start_date), .before = 1) %>%
arrange(draft_start_date, team) %>%
select(!(tournament_entry_fee:draft_pool_size) & !(draft_entry:draft_total_prizes)) %>%
group_by(draft, team) %>%
add_count(team, name = "stack_count") %>%
arrange(desc(stack_count)) %>%
mutate(team_stack = paste0(team, " (", stack_count, ")")) %>%
ungroup() %>%
group_by(draft_number, position) %>%
arrange(pick_number, stack_count) %>%
mutate(position_order = row_number(), .before = 1) %>%
ungroup() %>%
mutate(
stack_num = dense_rank(desc(stack_count)),
draft_display_label = case_when(
(position == "QB" & position_order == 1) ~ "QB",
(position == "RB" & position_order <= 2) ~ "RB",
(position == "WR" & position_order <= 3) ~ "WR",
(position == "TE" & position_order == 1) ~ "TE",
TRUE ~ "BE"
),
position_color = case_when(
position == "QB" ~ "#9647B8",
position == "RB" ~ "#15967B",
position == "WR" ~ "#E67E22",
position == "TE" ~ "#2980B9",
TRUE ~ ""
)
) %>%
group_by(draft_number, draft_display_label) %>%
arrange(draft_number, draft_display_label, position_order) %>%
mutate(
bench_order = row_number(),
bench_order = case_when(
(position == "QB" & bench_order == 1) ~ as.double(bench_order) + 1,
TRUE ~ as.double(bench_order)
)
) %>%
mutate(draft_display_label = case_when(
(bench_order == min(bench_order) &
draft_display_label == "BE" & position != "QB")
~ "FLEX-1",
TRUE ~ draft_display_label
)) %>%
ungroup() %>%
mutate(
position_sortorder = case_when(
draft_display_label == "QB" ~ 1,
draft_display_label == "RB" ~ 2,
draft_display_label == "WR" ~ 3,
draft_display_label == "TE" ~ 4,
str_detect(draft_display_label, "FLEX") ~ 5,
TRUE ~ 6
),
draft_display_label = case_when(
draft_display_label == "BE" ~ paste0("BE-", bench_order),
str_detect(draft_display_label, "FLEX") ~ draft_display_label,
TRUE ~ paste0(draft_display_label, "-", position_order)
)
) %>%
group_by(draft_number, draft_display_label) %>%
arrange(draft_number, draft_display_label, position_order) %>%
mutate(bench_order = row_number()) %>%
select(
draft_display_label, appearance, picked_at, pick_number, first_name, last_name, team, position, team_stack,
stack_num, position_sortorder, position_color, draft, draft_number, position_order, bench_order, draft_slot,
draft_start_date
) %>%
arrange(
draft_number,
position_sortorder,
pick_number
)
})
output$team_breakdown <- renderReactable({
reactable(cg_final2(),
defaultColDef = (
colDef(
align = "center",
width = 150,
style =
function(value) {
list(color = case_player_position_color(value))
},
header =
function(value) {
value
},
        html = TRUE
)
),
columns = list(
draft_display_label = colDef(
name = "",
style = list(position = "sticky", background = "#fff", left = 0, zIndex = 1),
width = 65
)
),
pagination = FALSE,
searchable = FALSE,
sortable = FALSE,
compact = TRUE,
style = list(fontFamily = "Montserrat"),
theme = reactableTheme(
# Vertically center cells
cellStyle = list(
fontSize = "12px", display = "flex", flexDirection = "column", justifyContent = "center",
align = "center"
)
)
)
})
cg_final2 <-
reactive({
file <- user_upload()
req(file)
ext <- tools::file_ext(file$datapath)
validate(need(ext == "csv", "Please upload a csv file"))
data <- read_csv(file$datapath) %>%
clean_names() %>%
filter(tournament_title == "Best Ball Mania II")
### Get the drafts where the user selected players they're searching for ###
if (!isTruthy(input$player_search)) {
draft_entry_list <-
user_adp_table_data() %>%
.$draft_entry %>%
unique()
} else {
draft_entry_list <-
user_adp_table_data() %>%
filter(appearance %in% input$player_search) %>%
.$draft_entry %>%
unique()
}
cg_table_pivot <-
cg_table() %>%
mutate(
player_name = paste(first_name, last_name),
draft_start_date = format(as.Date(draft_start_date,
format = "%Y-%Om-%d %H:%M:%S"
), format = "%B %e")
) %>%
arrange(position_sortorder, draft_number, pick_number, bench_order) %>%
select(draft_number, player_name, draft_display_label, draft_start_date) %>%
pivot_wider(
names_from = c(draft_number, draft_start_date),
id_cols = c(draft_display_label),
names_glue = "Draft# { draft_number }
{ draft_start_date }",
values_from = player_name
) %>%
mutate(draft_display_label = word(draft_display_label, 1, sep = "-"))
cg_table_filtered <-
cg_table() %>%
ungroup() %>%
mutate(
player_name = paste(first_name, last_name),
draft_start_date = format(as.Date(draft_start_date, format = "%Y-%Om-%d %H:%M:%S"), format = "%B %e"),
team_stack = case_when(
str_detect(team_stack, "1") ~ NA_character_,
TRUE ~ team_stack
)
) %>%
distinct(draft_number, team_stack, .keep_all = TRUE) %>%
filter(!is.na(team_stack)) %>%
arrange(desc(str_extract(team_stack, "\\d")), draft_number, stack_num) %>%
select(draft_number, team_stack, draft_display_label, draft_start_date) %>%
pivot_wider(
names_from = c(draft_number, draft_start_date),
names_glue = "Draft# { draft_number }
{ draft_start_date }",
values_from = team_stack
) %>%
select(-draft_display_label) %>%
fill(everything(), .direction = "up") %>%
mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
filter(if_any(everything(), ~ !is.na(.x))) %>%
fill(everything(), .direction = "up") %>%
mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
filter(if_any(everything(), ~ !is.na(.x))) %>%
fill(everything(), .direction = "up") %>%
mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
filter(if_any(everything(), ~ !is.na(.x))) %>%
fill(everything(), .direction = "up") %>%
mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
filter(if_any(everything(), ~ !is.na(.x))) %>%
fill(everything(), .direction = "up") %>%
mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
filter(if_any(everything(), ~ !is.na(.x))) %>%
add_column(draft_display_label = "Stack", .before = 1) %>%
mutate(draft_display_label = ifelse(row_number() == 1, "Stack", NA_character_))
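    # The five identical fill()/dedupe/filter() passes above could be collapsed
    # with a small helper and purrr::reduce(); a sketch (the helper name
    # `compact_up` is illustrative, not part of the app):
    # compact_up <- function(df) {
    #   df %>%
    #     fill(everything(), .direction = "up") %>%
    #     mutate(across(everything(), ~ replace(.x, duplicated(.x), NA))) %>%
    #     filter(if_any(everything(), ~ !is.na(.x)))
    # }
    # cg_table_filtered <- purrr::reduce(1:5, ~ compact_up(.x), .init = cg_table_filtered)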
cg_table_draft_slot <-
cg_table() %>%
ungroup() %>%
mutate(draft_start_date = format(as.Date(draft_start_date, format = "%Y-%Om-%d %H:%M:%S"), format = "%B %e")) %>%
select(draft_slot, draft_number, draft_start_date) %>%
distinct() %>%
pivot_wider(
names_from = c(draft_number, draft_start_date),
names_glue = "Draft# { draft_number }
{ draft_start_date }",
values_from = draft_slot,
values_fn = as.character
) %>%
add_column(draft_display_label = "Draft Slot", .before = 1)
bbm_playoff_stacks <-
cg_table() %>%
ungroup() %>%
mutate(draft_start_date = format(as.Date(draft_start_date, format = "%Y-%Om-%d %H:%M:%S"), format = "%B %e")) %>%
group_by(draft, team, draft_number, draft_start_date) %>%
summarise(stack_count = n()) %>%
inner_join(bbm_playoff_matchups_team_by_team(), by = c("team" = "team_short_name")) %>%
group_by(draft, draft_number, draft_start_date) %>%
#filter(opp %in% team) %>%
group_by(game_id, roof_emoji, week, draft_number, draft_start_date) %>%
mutate(stack_total = sum(stack_count)) %>%
separate(game_id, into = c("season", "week", "team_1", "team_2"), sep = "_") %>%
select(-c("week", "season")) %>%
mutate(
bbm_playoff_stack = ifelse(stack_count > 1,
paste0("Week: ", week, " ", roof_emoji, "<br>", team_1, "-", team_2, " (", stack_total, ")"),
NA_character_
)
) %>%
select(stack_total, bbm_playoff_stack, draft_number, draft_start_date, draft, week, team) %>%
distinct() %>%
arrange(draft_number, desc(stack_total), week) %>%
ungroup() %>%
pivot_wider(
id_cols = c(week, team, stack_total),
names_from = c(draft_number, draft_start_date),
names_glue = "Draft# { draft_number }
{ draft_start_date }",
values_from = bbm_playoff_stack
) %>%
select(-c(week, team, stack_total)) %>%
distinct(.keep_all = TRUE) %>%
sort_na_to_bottom_colwise() %>%
mutate(draft_display_label = case_when(
row_number() == 1 ~ NA_character_,
row_number() == 2 ~ "BBM II",
row_number() == 3 ~ "Playoff",
row_number() == 4 ~ "Game",
row_number() == 5 ~ "Stacks",
TRUE ~ NA_character_
), .before = 1)
bind_rows(cg_table_pivot, cg_table_filtered, cg_table_draft_slot, bbm_playoff_stacks)
})
output$draft_slot_frequency <- renderPlot({
expected_frequency <-
cg_table() %>%
ungroup() %>%
distinct(draft_slot, draft) %>%
summarise(expected_frequency = round(n() / 12, 1)) %>%
.$expected_frequency
cg_table() %>%
ungroup() %>%
distinct(draft_slot, draft) %>%
select(draft_slot) %>%
ggplot(aes(draft_slot)) +
geom_bar(fill = "#DDB423", color = "#DDB423") +
geom_hline(
yintercept = expected_frequency, size = 2,
alpha = .3, color = "white",
linetype = "longdash"
) +
geom_text(stat = "count", aes(label = stat(count), vjust = 1.5), color = "#393939") +
geom_text(
data = data.frame(x = 5, y = expected_frequency), aes(x, y),
label = paste0("Expected Frequency"),
vjust = -1,
color = "white",
alpha = .5,
size = 8
) +
hrbrthemes::theme_ipsum_rc() +
expand_limits(x = c(0, 12)) +
scale_x_continuous(breaks = 1:12) +
xlab("Draft Slot") +
theme(
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
axis.text.y = element_blank(),
axis.title.y = element_blank(),
panel.background = element_rect(fill = "#393939")
)
})
nfl_schedule <- reactive({
read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv"))
})
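  # nfl_schedule() re-downloads the games.csv file once per session; if that
  # becomes slow, one option (sketch only, assumes shiny >= 1.6) is to cache it
  # across sessions:
  # nfl_schedule <- reactive({
  #   read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv"))
  # }) %>% bindCache("nfl_games")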
observeEvent(input$ud_user_adp,
{
file <- user_upload()
req(file)
ext <- tools::file_ext(file$datapath)
validate(need(ext == "csv", "Please upload a csv file"))
appearance_list_choices <- read_csv(file$datapath) %>%
clean_names() %>%
filter(tournament_title == "Best Ball Mania II") %>%
transmute(player_name = paste(first_name, last_name), id = appearance) %>%
distinct() %>%
deframe()
updatePickerInput(
session = session, inputId = "player_search",
choices = appearance_list_choices
)
},
ignoreInit = TRUE
)
observe({
if (isTruthy(input$ud_user_adp)) {
Sys.sleep(.25)
# enable the download button
shinyjs::enable("data_file")
# change the html of the download button
shinyjs::html(
"data_file",
sprintf("<i class='fa fa-download'></i>
Download Ready")
)
}
})
output$data_file <- downloadHandler(
filename = function() {
paste("TeamAnalysis-", Sys.Date(), ".csv", sep = "")
},
content = function(file) {
write.csv(cg_final2(), file)
}
)
  # disable the download button on page load
shinyjs::disable("data_file")
user_adp_filter <- reactive({
as.numeric(input$user_filter_adp)
})
user_filter_position <- reactive({
input$current_draft_stack_position
})
user_filter_team <- reactive({
input$current_draft_teams
})
bbm_playoff_matchups_team_by_team <- reactive({
nfl_schedule() %>%
# read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv")) %>%
filter(
#home_team %in% "ARI" |
(home_team %in% user_filter_team() |
# away_team %in% "ARI",
away_team %in% user_filter_team()),
season == 2021,
week > 14,
week < 18
) %>%
select(home_team, away_team, roof, week, game_id) %>%
bind_rows(
nfl_schedule() %>%
# read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv")) %>%
select(home_team, away_team, season, week) %>%
filter(
# home_team %in% "ARI",
(home_team %in% user_filter_team()),
season == 2021,
week > 14,
week < 18
) %>%
rename(home_team = away_team, away_team = home_team)
) %>%
mutate(roof = ifelse((roof == "closed" | is.na(roof)), "retractable", roof)) %>%
select(team = home_team, opp = away_team, roof, week, game_id) %>%
left_join(get_nfl_teams(), by = c("team" = "team_short_name")) %>%
select(team, main_team_name = team_name, opp, roof, week, game_id) %>%
left_join(get_nfl_teams(), by = c("opp" = "team_short_name")) %>%
select(team, main_team_name, opp, opp_team_name = team_name, roof, week, game_id) %>%
      # user_filter_team() can contain several teams, so use %in% rather than
      # the element-wise != comparison
      filter(!(opp %in% user_filter_team())) %>%
left_join(
current_ud_adp(),
#current_ud_adp(),
by = c("opp" = "team_short_name")
) %>%
filter(
as.double(adp) + 12 >= as.double(user_adp_filter()),
slotName %in% user_filter_position(),
(team %in% user_filter_team() |
opp %in% user_filter_team())
) %>%
transmute(
team_name,
game_id,
player = paste(firstName, lastName),
slotName, opp, week, adp, logo,
roof_emoji = case_when(
roof == "retractable" ~ "\U00026C5",
roof == "dome" ~ "\U2600",
roof == "outdoors" ~ "\U1F327"
),
id
) %>%
arrange(adp) %>%
inner_join(get_nfl_teams(), by = c("team_name" = "team_name"))
})
bbm_playoff_matchups_explorer <- reactive({
nfl_schedule() %>%
# read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv")) %>%
filter(
#home_team %in% "ARI" |
(home_team %in% user_filter_team() |
# away_team %in% "ARI",
away_team %in% user_filter_team()),
season == 2021,
week > 14,
week < 18
) %>%
select(home_team, away_team, roof, week, game_id) %>%
bind_rows(
nfl_schedule() %>%
# read_csv(url("https://raw.githubusercontent.com/nflverse/nfldata/master/data/games.csv")) %>%
select(home_team, away_team, season, week) %>%
filter(
# home_team %in% "ARI",
(home_team %in% user_filter_team() |
# away_team %in% "ARI",
away_team %in% user_filter_team()),
season == 2021,
week > 14,
week < 18
) %>%
rename(home_team = away_team, away_team = home_team)
) %>%
mutate(roof = ifelse((roof == "closed" | is.na(roof)), "retractable", roof)) %>%
select(team = home_team, opp = away_team, roof, week, game_id) %>%
left_join(get_nfl_teams(), by = c("team" = "team_short_name")) %>%
select(team, main_team_name = team_name, opp, roof, week, game_id) %>%
left_join(get_nfl_teams(), by = c("opp" = "team_short_name")) %>%
select(team, main_team_name, opp, opp_team_name = team_name, roof, week, game_id) %>%
left_join(
#current_ud_adp,
current_ud_adp(),
by = c("opp" = "team_short_name")
) %>%
filter(
as.double(adp) + 12 >= as.double(user_adp_filter()),
slotName %in% user_filter_position(),
(team %in% user_filter_team() |
opp %in% user_filter_team())
# as.double(adp) + 12 >= as.double(user_adp_filter,
# slotName %in% user_filter_position,
# (team %in% user_filter_team |
# opp %in% user_filter_team)
) %>%
transmute(
team_name,
player = paste(firstName, lastName),
slotName, opp, week, adp, logo,
roof_emoji = case_when(
roof == "retractable" ~ "\U00026C5",
roof == "dome" ~ "\U2600",
roof == "outdoors" ~ "\U1F327"
),
id
) %>%
arrange(adp)
})
output$in_draft_helper_table <- renderReactable({
req(
user_adp_filter(),
user_filter_team(),
user_filter_position()
)
position_color <-
bbm_playoff_matchups_explorer() %>%
mutate(
position_color = case_when(
slotName == "QB" ~ "#9647B8",
slotName == "RB" ~ "#15967B",
slotName == "WR" ~ "#E67E22",
slotName == "TE" ~ "#2980B9",
TRUE ~ ""
)
)
    adp_date <-
      as.Date(min(current_ud_adp()$adp_date)) %>%
      format("%m/%d/%y") %>%
      # strip only leading zeros from each date component (e.g. "07/04/21" -> "7/4/21"),
      # not every zero (which would mangle dates such as "10/20/21")
      str_replace_all("(^|/)0", "\\1")
reactable(
bbm_playoff_matchups_explorer(),
columns = list(
id = colDef(show = FALSE),
player = colDef(
name = "Player",
cell = function(value, index) {
pos_color <- position_color$position_color[index]
pos_color <- if (!is.na(pos_color)) pos_color else "#000000"
tagList(
htmltools::div(
style = list(color = pos_color),
value
)
)
},
minWidth = 240
),
slotName = colDef(name = "Position"),
team_name = colDef(
name = "Team",
cell = function(value, index) {
image <- img(src = position_color$logo[index], height = "20px")
tagList(
div(style = list(display = "inline-block", alignItems = "center"), image),
" ", value
)
},
minWidth = 200,
align = "center"
),
opp = colDef(
show = FALSE
),
roof_emoji = colDef(
name = "Roof",
html = TRUE,
filterable = FALSE,
align = "center"
),
week = colDef(
name = "Week",
align = "center"
),
adp = colDef(
name = "UD ADP",
header = function(value, index) {
          note <- div(style = "color: #666", paste0("(", adp_date, ")"))
tagList(
div(title = value, value),
div(style = list(fontSize = 12), note)
)
}, filterable = FALSE,
align = "center"
),
logo = colDef(
show = FALSE
)
),
filterable = FALSE,
highlight = TRUE,
style = list(fontFamily = "Montserrat")
)
})
output$in_draft_helper_plot <- renderPlot({
req(
user_adp_filter(),
user_filter_team(),
user_filter_position()
)
total_adp <- GET(
"https://raw.githubusercontent.com/DangyWing/Underdog-ADP/main/data/ud_adp_total.rds",
authenticate(Sys.getenv("GITHUB_PAT_DANGY"), "")
)
ud_adp_total <-
content(total_adp, as = "raw") %>%
parse_raw_rds() %>%
mutate(
adp = as.double(adp),
teamName = case_when(
teamName == "NY Jets" ~ "New York Jets",
teamName == "NY Giants" ~ "New York Giants",
teamName == "Washington Football Team" ~ "Washington",
TRUE ~ teamName
)
) %>%
left_join(nfl_team(), by = c("teamName" = "full_name"))
relevant_players <-
reactive({
ud_adp_total %>%
filter(
slotName %in% user_filter_position()
# ,team_short_name %in% user_filter_team()
) %>%
arrange(desc(adp_date)) %>%
group_by(id) %>%
filter(!is.na(adp)) %>%
        summarize(
          min_adp = min(adp),
          max_adp = max(adp),
          # rows are sorted by descending adp_date above, so first() is the
          # ADP from the most recent draft
          most_recent_draft_adp = dplyr::first(adp)
        ) %>%
ungroup() %>%
filter(
most_recent_draft_adp + 12 >= as.double(user_adp_filter()),
most_recent_draft_adp - 50 <= as.double(user_adp_filter())
) %>%
distinct()
})
bbm_playoff_matchups_explorer() %>%
inner_join(ud_adp_total %>%
filter(id %in% relevant_players()$id),
by = c("team_name", "slotName", "logo", "id")
) %>%
select(-adp.x) %>%
mutate(adp = adp.y) %>%
distinct() %>%
mutate(
adp_date = as.Date(adp_date),
player = paste0(player, " (", opp, ")")
) %>%
ggplot2::ggplot(aes(x = as.Date(adp_date), y = adp, group = id, color = player)) +
ggplot2::geom_line() +
ggplot2::geom_hline(yintercept = user_adp_filter(), alpha = .4, colour = "#E0B400") +
ggplot2::annotate("text",
x = median(as.Date(ud_adp_total$adp_date)),
y = user_adp_filter(),
vjust = 1.5,
label = "Upcoming pick",
alpha = .7, colour = "#E0B400",
size = 8
) +
# geom_blank(aes(y = adp * 1.05)) +
hrbrthemes::theme_ft_rc() +
theme(
panel.grid.major.x = element_blank(),
panel.grid.minor.x = element_blank(),
      axis.title.y = element_text(color = "#E0B400", size = 18, face = "bold"),
      axis.title.x = element_text(color = "#E0B400", size = 18, face = "bold"),
axis.text.x.bottom = element_text(color = "white", hjust = 1, angle = 45, size = 18),
axis.text.y.left = element_text(color = "white", hjust = 1, size = 18),
plot.title = element_text(color = "white", size = 14, hjust = 0.5),
legend.title = element_blank(),
legend.text = element_text(size = 14)
) +
scale_x_date(date_labels = "%b %d", date_breaks = "3 days") +
scale_y_continuous(trans = "reverse") +
# scale_color_manual(values = c(
# "#175FC7",
# "#E67E22",
# "#6CBE45",
# "#0FC8D4",
# "#D0402E",
# "#9647B8",
# "#2980B9",
# "#15997E"
# )) +
xlab("Date") +
ylab("ADP")
})
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rankwithranges.R
\name{rankwithranges}
\alias{rankwithranges}
\title{Rank With Ranges}
\usage{
rankwithranges(x, b)
}
\arguments{
\item{x}{Observed value in the raw data table}
\item{b}{First two columns of the ranking subtable}
}
\value{
ranking value to be assigned to this observed data value
}
\description{
This function draws the appropriate ranking value from the appropriate ranking subtable
based on the value found in the raw data table (for variables that have observations that
are observed in ranges, i.e. length and cluster)
}
\keyword{range}
\keyword{rank}
|
/man/rankwithranges.Rd
|
no_license
|
hannahjallen/response
|
R
| false
| true
| 651
|
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/models.R
\name{h2o.get_ntrees_actual}
\alias{h2o.get_ntrees_actual}
\title{Retrieve actual number of trees for tree algorithms}
\usage{
h2o.get_ntrees_actual(object)
}
\arguments{
\item{object}{An \linkS4class{H2OModel} object.}
}
\description{
Retrieve actual number of trees for tree algorithms
}
|
/man/h2o.get_ntrees_actual.Rd
|
no_license
|
cran/h2o
|
R
| false
| true
| 377
|
rd
|
library(dplyr)
library(readr)
# srun -J interactive --pty bash
# module load r/3.3.3
chr <- list.files("/scratch/nmr15102/sweepfinder/counts",pattern="^chr.*gz",full.names=TRUE)
ugh <- read.table("~/fhet_genome/fst_dxy_allpops_liftover.txt", stringsAsFactors=FALSE)
for(k in 1:length(chr)){
# genomic windows to use for grid location selection
lift <- ugh
lord <- order(lift[,1],lift[,2])
lift <- lift[lord,]
# read in a chromosome
tab <- read_tsv(chr[k],col_names=FALSE,progress=FALSE)
tab <- as.data.frame(tab,stringsAsFactors=FALSE)
tab <- tab[order(tab[,2]),]
# make sure first and last variant in each dataset is the same.
ss <- which(rowSums(tab[,seq(8,32,2)] > 5)==13 & rowSums(tab[,seq(7,32,2)] > 0)==13 & rowSums(tab[,seq(7,32,2)] < tab[,seq(8,32,2)])==13)
tab <- tab[min(ss):max(ss),]
# windows for grid points
subw <- lift[,1] == tab[1,1] & (lift[,2]+1) > tab[1,2] & lift[,3] < tab[dim(tab)[1],3]
lift <- lift[subw,]
gridp <- lift[,3]
# write grid file
cat(gridp,sep="\n",file=paste(lift[1,1],"grid",sep="."))
popcols <- c("chr","cstart","cend","scaf","start","end","BB.x","BB.n","BNP.x","BNP.n","BP.x","BP.n","ER.x","ER.n","F.x","F.n","GB.x","GB.n","KC.x","KC.n","NYC.x","NYC.n","PB.x","PB.n","SH.x","SH.n","SJSP.x","SJSP.n","SP.x","SP.n","VB.x","VB.n")
colnames(tab) <- popcols
grand <- c(7,9,17,23,27,29,31)
het <- c(11,13,15,19,21,25)
# for grandis populations
for(i in grand){
#i <- 7
s1 <- tab[,c(2,i,i+1)]
ss <- s1[,3]>5 & s1[,2] > 0 & s1[,2] < s1[,3]
s1 <- s1[ss,]
s1 <- cbind(s1,folded=0)
colnames(s1) <- c("position","x","n","folded")
rec1 <- cbind(pos=s1[,1],rate=(s1[,1]-s1[1,1])/1e6)
# file names
ffile <- paste(lift[1,1],gsub(".x","",colnames(tab)[i]),"freq",sep=".")
# rfile <- paste(lift[1,1],gsub(".x","",colnames(tab)[i]),"rec",sep=".")
# write input files
write.table(s1,file=ffile,row.names=FALSE,col.names=TRUE,sep="\t",quote=FALSE)
# write.table(rec1,file=rfile,row.names=FALSE,col.names=TRUE,sep="\t",quote=FALSE)
}
# polarize spectrum assuming grandis major allele is ancestral
gf <- rowSums(tab[,grand])/rowSums(tab[,grand+1])
pol <- which(gf > 0.5)
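# Worked example (hypothetical counts, not from the data): at a site where
# grandis shows x = 16 reference-allele reads of n = 20 (gf = 0.8 > 0.5),
# a heteroclitus sample with x = 7 of n = 10 is flipped to x = 10 - 7 = 3,
# so x counts the putative derived allele:
#   gf_example <- 16/20        # 0.8, so this site is in `pol`
#   het_x      <- 10 - 7       # 3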
# for heteroclitus populations
for(i in het){
#i <- 7
s1 <- tab[,c(2,i,i+1)]
s1[pol,2] <- s1[pol,3] - s1[pol,2]
ss <- s1[,3]>5 & s1[,2] > 0 & s1[,2] < s1[,3]
s1 <- s1[ss,]
s1 <- cbind(s1,folded=0)
colnames(s1) <- c("position","x","n","folded")
	# keep only every 4th SNP (plus the last one) to save computation time.
keep <- c(seq(1,dim(s1)[1]-1,4),dim(s1)[1])
rec1 <- cbind(pos=s1[,1],rate=(s1[,1]-s1[1,1])/1e6)
# file names
ffile <- paste(lift[1,1],gsub(".x","",colnames(tab)[i]),"freq",sep=".")
# rfile <- paste(lift[1,1],gsub(".x","",colnames(tab)[i]),"rec",sep=".")
# write input files
write.table(s1[keep,],file=ffile,row.names=FALSE,col.names=TRUE,sep="\t",quote=FALSE)
# write.table(rec1,file=rfile,row.names=FALSE,col.names=TRUE,sep="\t",quote=FALSE)
}
print(k)
}
|
/sweepfinder/counts_to_input.R
|
no_license
|
nreid/het_grand
|
R
| false
| false
| 2,896
|
r
|
#!/applications/R/R-3.5.0/bin/Rscript
# Plot average coverage profiles with 95% CIs around peak quantiles
# Usage:
# csmit -m 50G -c 3 "/applications/R/R-3.5.0/bin/Rscript quantile_peaks_avgProfileRibbon_cMMb_exome_SNPs.R DMC1_Rep1_ChIP DMC1 'Agenome_euchromatin,Bgenome_euchromatin,Dgenome_euchromatin' cMMb 4 both 400 2000 2kb 20 '0.02,0.96'"
#libName <- "DMC1_Rep1_ChIP"
#dirName <- "DMC1"
#featureName <- unlist(strsplit("Agenome_euchromatin,Bgenome_euchromatin,Dgenome_euchromatin",
# split = ","))
#orderingFactor <- "cMMb"
#quantiles <- 4
#align <- "both"
#bodyLength <- 400
#upstream <- 2000
#downstream <- 2000
#flankName <- "2 kb"
#binSize <- 20
## top left
#legendPos <- as.numeric(unlist(strsplit("0.02,0.96",
# split = ",")))
## top centre
#legendPos <- as.numeric(unlist(strsplit("0.38,0.96",
# split = ",")))
## top right
#legendPos <- as.numeric(unlist(strsplit("0.75,0.96",
# split = ",")))
## bottom left
#legendPos <- as.numeric(unlist(strsplit("0.02,0.30",
# split = ",")))
args <- commandArgs(trailingOnly = T)
libName <- args[1]
dirName <- args[2]
featureName <- unlist(strsplit(args[3],
split = ","))
orderingFactor <- args[4]
quantiles <- as.numeric(args[5])
align <- args[6]
bodyLength <- as.numeric(args[7])
upstream <- as.numeric(args[8])
downstream <- as.numeric(args[8])
flankName <- args[9]
binSize <- as.numeric(args[10])
legendPos <- as.numeric(unlist(strsplit(args[11],
split = ",")))
library(parallel)
library(tidyr)
library(dplyr)
library(ggplot2)
library(ggthemes)
library(grid)
library(gridExtra)
library(extrafont)
outDir <- paste0("quantiles_by_", orderingFactor, "/")
plotDir <- paste0(outDir, "plots/")
system(paste0("[ -d ", outDir, " ] || mkdir ", outDir))
system(paste0("[ -d ", plotDir, " ] || mkdir ", plotDir))
# Define plot titles
if(orderingFactor == "cMMb") {
featureNamePlot <- paste0(substr(orderingFactor, start = 1, stop = 2), "/",
substr(orderingFactor, start = 3, stop = 4), " ",
sub("_\\w+", "", libName), " peaks")
}
ranFeatNamePlot <- paste0("Random ",
sub("_\\w+", "", libName), " peaks")
ranLocNamePlot <- "Random loci"
# Define quantile colours
quantileColours <- c("red", "purple", "blue", "navy")
# Define feature start and end labels for plotting
featureStartLab <- "Start"
featureEndLab <- "End"
# Genomic definitions
chrs <- as.vector(read.table("/home/ajt200/analysis/wheat/sRNAseq_meiocyte_Martin_Moore/snakemake_sRNAseq/data/index/wheat_v1.0.fa.sizes")[,1])
chrs <- chrs[-length(chrs)]
# Load table of features grouped into quantiles
# by decreasing cM/Mb
featuresDF <- read.table(paste0(outDir,
"features_", quantiles, "quantiles",
"_by_", orderingFactor,
"_of_", libName, "_peaks_in_",
paste0(featureName,
collapse = "_"), ".txt"),
header = T, sep = "\t", row.names = NULL, stringsAsFactors = F)
# Load features to confirm feature (row) ordering in "featuresDF" is the same
# as in "features" (which was used for generating the coverage matrices)
features <- lapply(seq_along(featureName), function(y) {
tmp <- read.table(paste0("/home/ajt200/analysis/wheat/", dirName,
"/snakemake_ChIPseq/mapped/both/peaks/PeakRanger1.18/ranger/p0.001_q0.01/",
libName,
"_rangerPeaksGRmergedOverlaps_minuslog10_p0.001_q0.01_noMinWidth_in_",
featureName[y], ".bed"),
header = F)
data.frame(tmp,
V7 = paste0(featureName[y], "_", tmp$V4),
stringsAsFactors = F)
})
# If features from multiple subgenomes and/or compartments are to be analysed,
# concatenate the corresponding feature data.frames
if(length(featureName) > 1) {
features <- do.call(rbind, features)
} else {
features <- features[[1]]
}
colnames(features) <- c("chr", "start", "end", "name", "score", "strand", "featureID")
stopifnot(identical(as.character(featuresDF$featureID),
as.character(features$featureID)))
rm(features); gc()
# Get row indices for each feature quantile
quantileIndices <- lapply(1:quantiles, function(k) {
which(featuresDF$quantile == paste0("Quantile ", k))
})
## Random feature quantiles
# Define function to randomly select n rows from
# a data.frame
selectRandomFeatures <- function(features, n) {
return(features[sample(x = dim(features)[1],
size = n,
replace = FALSE),])
}
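# Usage sketch (toy data, not part of the analysis): with a seed set,
# selectRandomFeatures() reproducibly samples rows without replacement:
#   toy <- data.frame(id = 1:5)
#   set.seed(42)
#   selectRandomFeatures(toy, 2)   # a 2-row data.frame drawn from `toy`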
# Define seed so that random selections are reproducible
set.seed(93750174)
# Divide features into random sets of equal number,
# with the same number of peaks per chromosome as
# above-defined orderingFactor-defined feature quantiles
randomPCIndices <- lapply(1:quantiles, function(k) {
randomPCIndicesk <- NULL
for(i in 1:length(chrs)) {
randomPCfeatureskChr <- selectRandomFeatures(features = featuresDF[featuresDF$seqnames == chrs[i],],
n = dim(featuresDF[featuresDF$quantile == paste0("Quantile ", k) &
featuresDF$seqnames == chrs[i],])[1])
randomPCIndicesk <- c(randomPCIndicesk, as.integer(rownames(randomPCfeatureskChr)))
}
randomPCIndicesk
})
# Confirm per-chromosome feature numbers are the same for quantiles and random groupings
lapply(seq_along(1:quantiles), function(k) {
sapply(seq_along(chrs), function(x) {
if(!identical(dim(featuresDF[randomPCIndices[[k]],][featuresDF[randomPCIndices[[k]],]$seqnames == chrs[x],]),
dim(featuresDF[quantileIndices[[k]],][featuresDF[quantileIndices[[k]],]$seqnames == chrs[x],]))) {
stop("Quantile features and random features do not consist of the same number of features per chromosome")
}
})
})
# exome SNPclasses
SNPclassNames <- c(
"all",
"transition",
"transversion"
)
SNPclassNamesPlot <- c(
"Exome SNPs",
"Transitions",
"Transversions"
)
# feature
SNPclass_featureMats <- mclapply(seq_along(SNPclassNames), function(x) {
lapply(seq_along(featureName), function(y) {
as.matrix(read.table(paste0("/home/ajt200/analysis/wheat/DMC1/snakemake_ChIPseq/mapped/DMC1peakProfiles/matrices/",
"exome_", SNPclassNames[x],
"_SNPs_around_", dirName, "_peaks_in_", featureName[y],
"_matrix_bin", binSize, "bp_flank", sub(" ", "", flankName), ".tab"),
header = T))
})
}, mc.cores = length(SNPclassNames))
# If features from multiple subgenomes and/or compartments are to be analysed,
# concatenate the corresponding feature coverage matrices
SNPclass_featureMats <- mclapply(seq_along(SNPclass_featureMats), function(x) {
if(length(featureName) > 1) {
do.call(rbind, SNPclass_featureMats[[x]])
} else {
SNPclass_featureMats[[x]][[1]]
}
}, mc.cores = length(SNPclass_featureMats))
# ranLoc
SNPclass_ranLocMats <- mclapply(seq_along(SNPclassNames), function(x) {
lapply(seq_along(featureName), function(y) {
as.matrix(read.table(paste0("/home/ajt200/analysis/wheat/DMC1/snakemake_ChIPseq/mapped/DMC1peakProfiles/matrices/",
"exome_", SNPclassNames[x],
"_SNPs_around_", dirName, "_peaks_in_", featureName[y],
"_ranLoc_matrix_bin", binSize, "bp_flank", sub(" ", "", flankName), ".tab"),
header = T))
})
}, mc.cores = length(SNPclassNames))
# If features from multiple subgenomes and/or compartments are to be analysed,
# concatenate the corresponding feature coverage matrices
SNPclass_ranLocMats <- mclapply(seq_along(SNPclass_ranLocMats), function(x) {
if(length(featureName) > 1) {
do.call(rbind, SNPclass_ranLocMats[[x]])
} else {
SNPclass_ranLocMats[[x]][[1]]
}
}, mc.cores = length(SNPclass_ranLocMats))
# Add column names
for(x in seq_along(SNPclass_featureMats)) {
colnames(SNPclass_featureMats[[x]]) <- c(paste0("u", 1:(upstream/binSize)),
paste0("t", ((upstream/binSize)+1):((upstream+bodyLength)/binSize)),
paste0("d", (((upstream+bodyLength)/binSize)+1):(((upstream+bodyLength)/binSize)+(downstream/binSize))))
colnames(SNPclass_ranLocMats[[x]]) <- c(paste0("u", 1:(upstream/binSize)),
paste0("t", ((upstream/binSize)+1):((upstream+bodyLength)/binSize)),
paste0("d", (((upstream+bodyLength)/binSize)+1):(((upstream+bodyLength)/binSize)+(downstream/binSize))))
}
# Subdivide coverage matrices into above-defined quantiles and random groupings
SNPclass_mats_quantiles <- mclapply(seq_along(SNPclass_featureMats), function(x) {
list(
# feature quantiles
lapply(1:quantiles, function(k) {
SNPclass_featureMats[[x]][quantileIndices[[k]],]
}),
# feature random groupings
lapply(1:quantiles, function(k) {
SNPclass_featureMats[[x]][randomPCIndices[[k]],]
}),
# random loci groupings
lapply(1:quantiles, function(k) {
SNPclass_ranLocMats[[x]][quantileIndices[[k]],]
})
)
}, mc.cores = length(SNPclass_featureMats))
# Transpose matrix and convert into dataframe
# in which first column is window name
wideDFfeature_list_SNPclass <- mclapply(seq_along(SNPclass_mats_quantiles), function(x) {
lapply(seq_along(SNPclass_mats_quantiles[[x]]), function(y) {
lapply(seq_along(SNPclass_mats_quantiles[[x]][[y]]), function(k) {
data.frame(window = colnames(SNPclass_mats_quantiles[[x]][[y]][[k]]),
t(SNPclass_mats_quantiles[[x]][[y]][[k]]))
})
})
}, mc.cores = length(SNPclass_mats_quantiles)/2)
# Convert into tidy data.frame (long format)
tidyDFfeature_list_SNPclass <- mclapply(seq_along(wideDFfeature_list_SNPclass), function(x) {
lapply(seq_along(SNPclass_mats_quantiles[[x]]), function(y) {
lapply(seq_along(SNPclass_mats_quantiles[[x]][[y]]), function(k) {
gather(data = wideDFfeature_list_SNPclass[[x]][[y]][[k]],
key = feature,
value = coverage,
-window)
})
})
}, mc.cores = length(wideDFfeature_list_SNPclass)/2)
# Order levels of factor "window" so that sequential levels
# correspond to sequential windows
for(x in seq_along(tidyDFfeature_list_SNPclass)) {
for(y in seq_along(SNPclass_mats_quantiles[[x]])) {
for(k in seq_along(SNPclass_mats_quantiles[[x]][[y]])) {
tidyDFfeature_list_SNPclass[[x]][[y]][[k]]$window <- factor(tidyDFfeature_list_SNPclass[[x]][[y]][[k]]$window,
levels = as.character(wideDFfeature_list_SNPclass[[x]][[y]][[k]]$window))
}
}
}
# Create summary data.frame in which each row corresponds to a window (Column 1),
# Column2 is the number of coverage values (features) per window,
# Column3 is the mean of coverage values per window,
# Column4 is the standard deviation of coverage values per window,
# Column5 is the standard error of the mean of coverage values per window,
# Column6 is the lower bound of the 95% confidence interval, and
# Column7 is the upper bound of the 95% confidence interval
summaryDFfeature_list_SNPclass <- mclapply(seq_along(tidyDFfeature_list_SNPclass), function(x) {
lapply(seq_along(SNPclass_mats_quantiles[[x]]), function(y) {
lapply(seq_along(SNPclass_mats_quantiles[[x]][[y]]), function(k) {
data.frame(window = as.character(wideDFfeature_list_SNPclass[[x]][[y]][[k]]$window),
n = tapply(X = tidyDFfeature_list_SNPclass[[x]][[y]][[k]]$coverage,
INDEX = tidyDFfeature_list_SNPclass[[x]][[y]][[k]]$window,
FUN = length),
mean = tapply(X = tidyDFfeature_list_SNPclass[[x]][[y]][[k]]$coverage,
INDEX = tidyDFfeature_list_SNPclass[[x]][[y]][[k]]$window,
FUN = mean,
na.rm = TRUE),
sd = tapply(X = tidyDFfeature_list_SNPclass[[x]][[y]][[k]]$coverage,
INDEX = tidyDFfeature_list_SNPclass[[x]][[y]][[k]]$window,
FUN = sd,
na.rm = TRUE))
})
})
}, mc.cores = length(tidyDFfeature_list_SNPclass)/2)
for(x in seq_along(summaryDFfeature_list_SNPclass)) {
for(y in seq_along(SNPclass_mats_quantiles[[x]])) {
for(k in seq_along(SNPclass_mats_quantiles[[x]][[y]])) {
summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$window <- factor(summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$window,
levels = as.character(wideDFfeature_list_SNPclass[[x]][[y]][[k]]$window))
summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$winNo <- factor(1:dim(summaryDFfeature_list_SNPclass[[x]][[y]][[k]])[1])
summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$sem <- summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$sd/sqrt(summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$n-1)
summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$CI_lower <- summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$mean -
qt(0.975, df = summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$n-1)*summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$sem
summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$CI_upper <- summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$mean +
qt(0.975, df = summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$n-1)*summaryDFfeature_list_SNPclass[[x]][[y]][[k]]$sem
}
}
}
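# Numerical check of the interval above (assumed toy values): with
# mean = 0.5, sd = 0.1, n = 100, the code computes
#   sem = 0.1/sqrt(99) ~ 0.01005 and qt(0.975, 99) ~ 1.984,
# giving a per-window 95% CI of roughly 0.5 +/- 0.0199 = [0.480, 0.520].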
quantileNames <- paste0(rep("Quantile ", quantiles), 1:quantiles)
randomPCNames <- paste0(rep("Random ", quantiles), 1:quantiles)
for(x in seq_along(summaryDFfeature_list_SNPclass)) {
# feature quantiles
names(summaryDFfeature_list_SNPclass[[x]][[1]]) <- quantileNames
# feature random groupings
names(summaryDFfeature_list_SNPclass[[x]][[2]]) <- randomPCNames
# random loci groupings
names(summaryDFfeature_list_SNPclass[[x]][[3]]) <- randomPCNames
}
# Convert list of lists of lists of feature quantiles summaryDFfeature_list_SNPclass into
# a list of lists of single data.frames containing all feature quantiles for plotting
summaryDFfeature_SNPclass <- mclapply(seq_along(summaryDFfeature_list_SNPclass), function(x) {
lapply(seq_along(SNPclass_mats_quantiles[[x]]), function(y) {
bind_rows(summaryDFfeature_list_SNPclass[[x]][[y]], .id = "quantile")
})
}, mc.cores = length(summaryDFfeature_list_SNPclass))
for(x in seq_along(summaryDFfeature_SNPclass)) {
# feature quantiles
summaryDFfeature_SNPclass[[x]][[1]]$quantile <- factor(summaryDFfeature_SNPclass[[x]][[1]]$quantile,
levels = names(summaryDFfeature_list_SNPclass[[x]][[1]]))
# feature random groupings
summaryDFfeature_SNPclass[[x]][[2]]$quantile <- factor(summaryDFfeature_SNPclass[[x]][[2]]$quantile,
levels = names(summaryDFfeature_list_SNPclass[[x]][[2]]))
# random loci groupings
summaryDFfeature_SNPclass[[x]][[3]]$quantile <- factor(summaryDFfeature_SNPclass[[x]][[3]]$quantile,
levels = names(summaryDFfeature_list_SNPclass[[x]][[3]]))
}
# Define y-axis limits
ymin_list_SNPclass <- lapply(seq_along(summaryDFfeature_SNPclass), function(x) {
min(c(summaryDFfeature_SNPclass[[x]][[1]]$CI_lower,
summaryDFfeature_SNPclass[[x]][[2]]$CI_lower,
summaryDFfeature_SNPclass[[x]][[3]]$CI_lower))
})
ymax_list_SNPclass <- lapply(seq_along(summaryDFfeature_SNPclass), function(x) {
max(c(summaryDFfeature_SNPclass[[x]][[1]]$CI_upper,
summaryDFfeature_SNPclass[[x]][[2]]$CI_upper,
summaryDFfeature_SNPclass[[x]][[3]]$CI_upper))
})
# Define legend labels
legendLabs_feature <- lapply(seq_along(quantileNames), function(x) {
grobTree(textGrob(bquote(.(quantileNames[x])),
x = legendPos[1], y = legendPos[2]-((x-1)*0.06), just = "left",
gp = gpar(col = quantileColours[x], fontsize = 18)))
})
legendLabs_ranFeat <- lapply(seq_along(randomPCNames), function(x) {
grobTree(textGrob(bquote(.(randomPCNames[x])),
x = legendPos[1], y = legendPos[2]-((x-1)*0.06), just = "left",
gp = gpar(col = quantileColours[x], fontsize = 18)))
})
legendLabs_ranLoc <- lapply(seq_along(randomPCNames), function(x) {
grobTree(textGrob(bquote(.(randomPCNames[x])),
x = legendPos[1], y = legendPos[2]-((x-1)*0.06), just = "left",
gp = gpar(col = quantileColours[x], fontsize = 18)))
})
# Plot average profiles with 95% CI ribbon
## feature
ggObj1_combined_SNPclass <- mclapply(seq_along(SNPclassNamesPlot), function(x) {
summaryDFfeature <- summaryDFfeature_SNPclass[[x]][[1]]
ggplot(data = summaryDFfeature,
mapping = aes(x = winNo,
y = mean,
group = quantile)
) +
geom_line(data = summaryDFfeature,
mapping = aes(colour = quantile),
size = 1) +
scale_colour_manual(values = quantileColours) +
geom_ribbon(data = summaryDFfeature,
mapping = aes(ymin = CI_lower,
ymax = CI_upper,
fill = quantile),
alpha = 0.4) +
scale_fill_manual(values = quantileColours) +
scale_y_continuous(limits = c(ymin_list_SNPclass[[x]], ymax_list_SNPclass[[x]]),
labels = function(x) sprintf("%6.3f", x)) +
scale_x_discrete(breaks = c(1,
(upstream/binSize)+1,
(dim(summaryDFfeature_SNPclass[[x]][[1]])[1]/quantiles)-(downstream/binSize),
dim(summaryDFfeature_SNPclass[[x]][[1]])[1]/quantiles),
labels = c(paste0("-", flankName),
featureStartLab,
featureEndLab,
paste0("+", flankName))) +
geom_vline(xintercept = c((upstream/binSize)+1,
(dim(summaryDFfeature_SNPclass[[x]][[1]])[1]/quantiles)-(downstream/binSize)),
linetype = "dashed",
size = 1) +
labs(x = "",
y = SNPclassNamesPlot[x]) +
annotation_custom(legendLabs_feature[[1]]) +
annotation_custom(legendLabs_feature[[2]]) +
annotation_custom(legendLabs_feature[[3]]) +
annotation_custom(legendLabs_feature[[4]]) +
theme_bw() +
theme(
axis.ticks = element_line(size = 1.0, colour = "black"),
axis.ticks.length = unit(0.25, "cm"),
axis.text.x = element_text(size = 22, colour = "black"),
axis.text.y = element_text(size = 18, colour = "black", family = "Luxi Mono"),
axis.title = element_text(size = 30, colour = "black"),
legend.position = "none",
panel.grid = element_blank(),
panel.border = element_rect(size = 3.5, colour = "black"),
panel.background = element_blank(),
plot.margin = unit(c(0.3,1.2,0.0,0.3), "cm"),
plot.title = element_text(hjust = 1.0, size = 30)) +
ggtitle(bquote(.(featureNamePlot) ~ "(" * italic("n") ~ "=" ~
.(prettyNum(summaryDFfeature$n[1],
big.mark = ",", trim = T)) *
")"))
}, mc.cores = length(SNPclassNamesPlot))
## ranFeat
ggObj2_combined_SNPclass <- mclapply(seq_along(SNPclassNamesPlot), function(x) {
summaryDFfeature <- summaryDFfeature_SNPclass[[x]][[2]]
ggplot(data = summaryDFfeature,
mapping = aes(x = winNo,
y = mean,
group = quantile)
) +
geom_line(data = summaryDFfeature,
mapping = aes(colour = quantile),
size = 1) +
scale_colour_manual(values = quantileColours) +
geom_ribbon(data = summaryDFfeature,
mapping = aes(ymin = CI_lower,
ymax = CI_upper,
fill = quantile),
alpha = 0.4) +
scale_fill_manual(values = quantileColours) +
scale_y_continuous(limits = c(ymin_list_SNPclass[[x]], ymax_list_SNPclass[[x]]),
labels = function(x) sprintf("%6.3f", x)) +
scale_x_discrete(breaks = c(1,
(upstream/binSize)+1,
(dim(summaryDFfeature_SNPclass[[x]][[2]])[1]/quantiles)-(downstream/binSize),
dim(summaryDFfeature_SNPclass[[x]][[2]])[1]/quantiles),
labels = c(paste0("-", flankName),
featureStartLab,
featureEndLab,
paste0("+", flankName))) +
geom_vline(xintercept = c((upstream/binSize)+1,
(dim(summaryDFfeature_SNPclass[[x]][[2]])[1]/quantiles)-(downstream/binSize)),
linetype = "dashed",
size = 1) +
labs(x = "",
y = SNPclassNamesPlot[x]) +
annotation_custom(legendLabs_ranFeat[[1]]) +
annotation_custom(legendLabs_ranFeat[[2]]) +
annotation_custom(legendLabs_ranFeat[[3]]) +
annotation_custom(legendLabs_ranFeat[[4]]) +
theme_bw() +
theme(
axis.ticks = element_line(size = 1.0, colour = "black"),
axis.ticks.length = unit(0.25, "cm"),
axis.text.x = element_text(size = 22, colour = "black"),
axis.text.y = element_text(size = 18, colour = "black", family = "Luxi Mono"),
axis.title = element_text(size = 30, colour = "black"),
legend.position = "none",
panel.grid = element_blank(),
panel.border = element_rect(size = 3.5, colour = "black"),
panel.background = element_blank(),
plot.margin = unit(c(0.3,1.2,0.0,0.3), "cm"),
plot.title = element_text(hjust = 1.0, size = 30)) +
ggtitle(bquote(.(ranFeatNamePlot) ~ "(" * italic("n") ~ "=" ~
.(prettyNum(summaryDFfeature$n[1],
                                            big.mark = ",", trim = TRUE)) *
")"))
}, mc.cores = length(SNPclassNamesPlot))
## ranLoc
ggObj3_combined_SNPclass <- mclapply(seq_along(SNPclassNamesPlot), function(x) {
summaryDFfeature <- summaryDFfeature_SNPclass[[x]][[3]]
ggplot(data = summaryDFfeature,
mapping = aes(x = winNo,
y = mean,
group = quantile)
) +
geom_line(data = summaryDFfeature,
mapping = aes(colour = quantile),
size = 1) +
scale_colour_manual(values = quantileColours) +
geom_ribbon(data = summaryDFfeature,
mapping = aes(ymin = CI_lower,
ymax = CI_upper,
fill = quantile),
alpha = 0.4) +
scale_fill_manual(values = quantileColours) +
scale_y_continuous(limits = c(ymin_list_SNPclass[[x]], ymax_list_SNPclass[[x]]),
labels = function(x) sprintf("%6.3f", x)) +
scale_x_discrete(breaks = c(1,
(upstream/binSize)+1,
(dim(summaryDFfeature_SNPclass[[x]][[3]])[1]/quantiles)-(downstream/binSize),
dim(summaryDFfeature_SNPclass[[x]][[3]])[1]/quantiles),
labels = c(paste0("-", flankName),
"Start",
"End",
paste0("+", flankName))) +
geom_vline(xintercept = c((upstream/binSize)+1,
(dim(summaryDFfeature_SNPclass[[x]][[3]])[1]/quantiles)-(downstream/binSize)),
linetype = "dashed",
size = 1) +
labs(x = "",
y = SNPclassNamesPlot[x]) +
annotation_custom(legendLabs_ranLoc[[1]]) +
annotation_custom(legendLabs_ranLoc[[2]]) +
annotation_custom(legendLabs_ranLoc[[3]]) +
annotation_custom(legendLabs_ranLoc[[4]]) +
theme_bw() +
theme(
axis.ticks = element_line(size = 1.0, colour = "black"),
axis.ticks.length = unit(0.25, "cm"),
axis.text.x = element_text(size = 22, colour = "black"),
axis.text.y = element_text(size = 18, colour = "black", family = "Luxi Mono"),
axis.title = element_text(size = 30, colour = "black"),
legend.position = "none",
panel.grid = element_blank(),
panel.border = element_rect(size = 3.5, colour = "black"),
panel.background = element_blank(),
plot.margin = unit(c(0.3,1.2,0.0,0.3), "cm"),
plot.title = element_text(hjust = 1.0, size = 30)) +
ggtitle(bquote(.(ranLocNamePlot) ~ "(" * italic("n") ~ "=" ~
.(prettyNum(summaryDFfeature$n[1],
                                            big.mark = ",", trim = TRUE)) *
")"))
}, mc.cores = length(SNPclassNamesPlot))
ggObjGA_combined <- grid.arrange(grobs = c(
ggObj1_combined_SNPclass,
ggObj2_combined_SNPclass,
ggObj3_combined_SNPclass
),
layout_matrix = cbind(
1:length(c(SNPclassNamesPlot)),
(length(c(SNPclassNamesPlot))+1):(length(c(SNPclassNamesPlot))*2),
((length(c(SNPclassNamesPlot))*2)+1):(length(c(SNPclassNamesPlot))*3)
))
ggsave(paste0(plotDir,
"1000exomesSNPclass_avgProfiles_around_", quantiles, "quantiles",
"_by_", orderingFactor,
"_of_", libName, "_peaks_in_",
paste0(featureName,
collapse = "_"), "_v090620.pdf"),
plot = ggObjGA_combined,
height = 6.5*length(c(SNPclassNamesPlot)), width = 21, limitsize = FALSE)
#### Free up memory by removing no longer required objects
rm(
SNPclass_featureMats, SNPclass_ranLocMats,
SNPclass_mats_quantiles,
wideDFfeature_list_SNPclass,
tidyDFfeature_list_SNPclass,
summaryDFfeature_list_SNPclass,
summaryDFfeature_SNPclass
)
gc()
#####
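# The geom_ribbon() calls above span 95% confidence intervals computed earlier
# in the script from the per-window summaries. A minimal self-contained sketch
# of that calculation, with toy values standing in for the tapply() summaries:

```r
# Per-window 95% CI as used for geom_ribbon() (illustrative values only)
n <- 100                                 # coverage values per window
winMean <- 0.5                           # mean coverage in a window
winSD <- 0.2                             # standard deviation in that window
sem <- winSD / sqrt(n - 1)               # standard error of the mean
CI_lower <- winMean - qt(0.975, df = n - 1) * sem
CI_upper <- winMean + qt(0.975, df = n - 1) * sem
```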
|
/DMC1/snakemake_ChIPseq/mapped/DMC1peakProfiles/quantiles/quantile_peaks_avgProfileRibbon_cMMb_exome_SNPs.R
|
no_license
|
ajtock/wheat
|
R
| false
| false
| 26,854
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fevd.varshrinkest.R
\name{fevd.varshrinkest}
\alias{fevd.varshrinkest}
\title{Forecast Error Variance Decomposition}
\usage{
\method{fevd}{varshrinkest}(x, n.ahead = 10, ...)
}
\arguments{
\item{x}{Object of class 'varshrinkest';
generated by \code{VARshrink()}.}
\item{n.ahead}{Integer specifying the number of forecast steps ahead.}
\item{...}{Currently not used.}
}
\description{
Computes the forecast error variance decomposition of a VAR(p) for
n.ahead steps.
This is a modification of vars::fevd() for the class "varshrinkest".
}
\seealso{
\code{\link[vars]{fevd}}
}
|
/man/fevd.varshrinkest.Rd
|
no_license
|
Allisterh/VARshrink-1
|
R
| false
| true
| 652
|
rd
|
-- @@stderr --
cat: invalid option -- 'D'
Try 'cat --help' for more information.
dtrace: failed to compile script /dev/stdin: Preprocessor failed to process input program
|
/test/unittest/options/tst.cpppath.r
|
permissive
|
oracle/dtrace-utils
|
R
| false
| false
| 171
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/golem_utils_ui.R
\name{sliderInput}
\alias{sliderInput}
\title{Modified shiny sliderInput function}
\usage{
sliderInput(
inputId,
label,
min,
max,
value,
step = NULL,
round = FALSE,
format = NULL,
locale = NULL,
ticks = TRUE,
animate = FALSE,
width = NULL,
sep = ",",
pre = NULL,
post = NULL,
timeFormat = NULL,
timezone = NULL,
dragRange = TRUE,
tooltip = T,
header = "?",
popup = "Help tips",
pos = "right"
)
}
\description{
Modified shiny sliderInput function
}
|
/man/sliderInput.Rd
|
permissive
|
igemsoftware2020/ClusteRsy-Linkoping
|
R
| false
| true
| 579
|
rd
|
#' Extract Numeric Values from Character Strings
#'
#' Given a character string, this function will attempt to extract digits and return the result as
#' a numeric value.
#'
#' @param string A character string (or factor coercible to character) to parse for digits.
#' @param sequence A second character string, matching one of the following: \code{"first"}, \code{"last"}, \code{"collapse"}, or \code{"midpoint"}.
#'
#' @details All functions used are available in base R; no additional packages are required.
#' If one matching sequence is identified, but the \code{sequence} argument is \code{"midpoint"}
#' or \code{"collapse"}, the function attempts to return a "safe" value. In this case, the only
#' numeric match is returned. If no matches are found, the function returns \code{numeric()}.
#'
#' @return Numeric value(s) occurring in \code{string}, or the midpoint of the first and last digits
#' within the string.
#'
#' @author Ryan Kyle, \email{ryan.kyle@mail.mcgill.ca}
#'
#' @examples
#' example_string1 <- "12-15 HOURS"
#' example_string2 <- "DAY -1"
#'
#' # Returns 12.
#' numextract(example_string1)
#' numextract(example_string1, sequence="first")
#'
#' # Returns -15, a negative numeric value.
#' numextract(example_string1, sequence="last")
#'
#' # Returns 1215, compressing two sequences into one.
#' numextract(example_string1, sequence="collapse")
#'
#' # Returns 13.5, which is the midpoint of 12 and 15
#' # (assumes the second sequence is not a negative numeric value).
#' numextract(example_string1, sequence="midpoint")
#'
#' # All return -1
#' numextract(example_string2)
#' numextract(example_string2, sequence="first")
#' numextract(example_string2, sequence="last")
#' numextract(example_string2, sequence="midpoint")
#' numextract(example_string2, sequence="collapse")
#'
#' @keywords utilities
#'
#' @export
#'
numextract <- function (string, sequence = "first")
{
# convert factor to string
string <- sapply(string, FUN = function(x) {
ifelse("factor" %in% class(x), as.character(x), x)
})
# check to see if it's a character string with no digits
string <- sapply(string, FUN = function(x) {
ifelse(!(grepl("[[:digit:]]", x)), NA, x)
})
if (is.na(sequence))
    stop("sequence cannot be NA; please specify first, last, collapse, or midpoint (the latter computes the mean of the first and last matches).")
if (sequence == "collapse") {
# Allow retrieval of a negative number if only one digit sequence exists
    string <- sapply(string, FUN = function(x) {
      ifelse(lengths(regmatches(x, gregexpr("[[:digit:]]", x))) == 1,
             gsub("[^-\\d.]+", "", x, perl = TRUE),
             gsub("[^\\d.]+", "", x, perl = TRUE))
    })
return(as.numeric(string))
}
if (sequence == "midpoint") {
string <- sapply(string, FUN = function(x) {
if(lengths(regmatches(x, gregexpr("[[:digit:]]", x))) == 1) {
return(gsub("[^-\\d.]+", "", x, perl = TRUE))
} else {
first.pos <- regexpr("(\\-*\\d+\\.*\\d*)", x, perl = TRUE)
last.pos <- regexpr("(\\d+\\.*\\d*)(?!.*\\d)", x, perl = TRUE)
first.pos.out <- rep(NA, length(x))
last.pos.out <- rep(NA, length(x))
# Preserve NA values
first.pos.out[!is.na(first.pos)] <- regmatches(x, first.pos)
last.pos.out[!is.na(last.pos)] <- regmatches(x, last.pos)
# Generate a table with the first and last matches by string
numlist <- mapply(c, as.numeric(first.pos.out), as.numeric(last.pos.out), SIMPLIFY = TRUE)
# Compute the mean of the two values
result <- apply(numlist, MARGIN = 2, FUN=mean)
return(result)
}
})
return(as.numeric(string))
}
if (sequence == "first")
pattern <- "(\\-*\\d+\\.*\\d*)"
if (sequence == "last")
pattern <- "(\\-*\\d+\\.*\\d*)(?!.*\\d)"
hits <- regexpr(pattern, string, perl = TRUE)
result <- rep(NA, length(string))
result[!is.na(hits)] <- regmatches(string, hits)
result <- as.numeric(result)
return(result)
}
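The `first`/`last` extraction above boils down to two Perl-style regexes plus a digit-stripping `gsub()` for `collapse`. A quick base-R sanity check of those patterns in isolation (a sketch, not part of the package; `x` is a hypothetical input):

```r
# Sanity-check sketch of the patterns used by numextract() (base R only).
x <- "12-15 HOURS"
# First numeric sequence (optionally signed):
first <- regmatches(x, regexpr("(\\-*\\d+\\.*\\d*)", x, perl = TRUE))
# Last numeric sequence, via a negative lookahead (no digit may follow):
last <- regmatches(x, regexpr("(\\-*\\d+\\.*\\d*)(?!.*\\d)", x, perl = TRUE))
# "collapse": strip everything that is not a digit or a dot:
collapsed <- gsub("[^\\d.]+", "", x, perl = TRUE)
c(first = first, last = last, collapsed = collapsed)  # "12", "-15", "1215"
```

Note that `last` picks up the leading minus sign (hence the documented `-15` for `sequence = "last"`), while `collapse` discards it when more than one digit sequence is present.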
|
/R/numextract.R
|
permissive
|
rpkyle/cscmisc
|
R
| false
| false
| 4,040
|
r
|
# Testing function at the bottom of the file
# 0. Building value box function ------------------------------------------
### Arguments : DT, the data set.
Build_valuebox <- function(DT){
DT <- as.data.table(DT)
# turnover of sales
turnover_sales <-
(sum(
DT[tdt_type_detail=='sale'][,c(turnover)]
) / 10^6) %>%
round(3) %>%
as.character() %>%
str_replace("[.]",",") %>%
paste0(" m")
# turnover of returns
turnover_returns <-
(sum(
DT[tdt_type_detail=='return'][,c(turnover)]
) / 10^6) %>%
round(3) %>%
as.character() %>%
str_replace("[.]",",") %>%
paste0(" - ",.," m")
# Number of transactions of sales
N_transactions_sales <-
(nrow(DT[tdt_type_detail=='sale']) /10^3) %>%
round(3) %>%
as.character() %>%
str_replace("[.]",",") %>%
paste0(" k")
# Number of transactions of returns
N_transactions_returns <-
(nrow(DT[tdt_type_detail=='return']) / 10^3) %>%
round(3) %>%
as.character() %>%
str_replace("[.]",",") %>%
paste0(" - ",.," k")
# Value box functions
valuebox_turnover_sales <-
valueBox(value = turnover_sales, subtitle = tags$span("Total turnover of sales",style="font-size: 1.5em;"),
icon = icon("euro-sign"),color = "green")
valuebox_turnover_returns <-
valueBox(value = turnover_returns, subtitle = tags$span("Total turnover of returns",style="font-size: 1.5em;"),
icon = icon("euro-sign"),color = "red")
valuebox_transactions_sales <-
    valueBox(value = N_transactions_sales, subtitle = tags$span("Total transactions of sales",style="font-size: 1.5em;"),
icon = icon("shopping-cart"),color = "green")
valuebox_transactions_returns <-
    valueBox(value = N_transactions_returns, subtitle = tags$span("Total transactions of returns",style="font-size: 1.5em;"),
icon = icon("shopping-cart"),color = "red")
# Return list of value box functions
list(
valuebox_turnover_sales = valuebox_turnover_sales,
valuebox_turnover_returns = valuebox_turnover_returns,
valuebox_transactions_sales = valuebox_transactions_sales,
valuebox_transactions_returns = valuebox_transactions_returns
)
}
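The `paste0(" - ", ., " m")` steps in the pipes above rely on magrittr's dot rule: when `.` appears as a top-level argument, the left-hand side is not additionally inserted as the first argument. A minimal check (assumes magrittr is attached, as the app does elsewhere):

```r
library(magrittr)

x <- "1,234"
# `.` as a top-level argument suppresses first-argument insertion:
out <- x %>% paste0(" - ", ., " m")
out  # " - 1,234 m", not "1,234 - 1,234 m"
```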
# 1. Building leaflet map function --------------------------------------------
### Arguments :
### DT = Filtered Data
### radius = The sales performance indicator that drives the circle radius.
###          If the variable is qualitative (e.g. "the_transaction_id"), the occurrence is
###          counted, i.e. length(the_transaction_id); if it is quantitative (e.g. "turnover"),
###          the total sum is computed, i.e. sum(turnover).
### transaction_type : The type of transaction, takes either "sale" or "return"
Build_leaflet_map <- function(DT, radius, transaction_type){
DT <- as.data.table(DT)
# 1.1. Filter by transaction_type, Group by store_name and summarize by radius------
DT <-
DT[tdt_type_detail==transaction_type] %>%
group_by(store_name) %>%
summarize(value =
ifelse(
# If the argument 'radius' we summarize through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise : calculate the sum (e.g: turnover or quantity)
is.character(.data[[radius]]),
length(.data[[radius]]),
sum(.data[[radius]]))) %>%
as.data.frame()
# 1.2. Attribution of Lat/Long to stores (to save storage)--------------------------
# Get latitude and longitude Data
cities <- read.csv2("Cities-lat-long.csv")
# Add columns of latitude and longitude
DT$Lat <-
sapply(DT$store_name,function(store_name)
cities$Lat[which(cities$City == store_name)]
)
DT$Long <-
sapply(DT$store_name,function(store_name)
cities$Long[which(cities$City == store_name)]
)
# 1.3. Build leaflet map depending on arguments -------------------------------------
# Palette
pal <- colorNumeric(palette = ifelse(transaction_type=='sale',
'Greens',
'Reds'),
DT$value)
# Map
leaflet(DT) %>%
    addProviderTiles(providers$CartoDB.Positron) %>%
addCircles(lng = ~Long, lat = ~Lat, fillColor = ~pal(value), fillOpacity = 0.73,
color = ifelse(transaction_type=='sale','green', 'red'),
stroke = TRUE , weight = 1, radius = 5000,
popup = ~paste0("Decathlon ", store_name, ' - ',
case_when(
radius=='turnover' ~ paste0(
round(value/10^6,2),
" M EUR"),
radius=='the_transaction_id' ~ paste0(
round(value/10^3,6),
" K Transactions"),
radius=='quantity' ~ paste0(
round(value/10^3,6),
" K Units sold")
)
)
) %>%
clearBounds()
}
# 2. Temporal heatmap function------------------------------------------------
### Arguments :
### DT = Filtered Data
### X = X axis of heat map -- e.g. : "month"
### intensity = The heat map intensity variable, i.e. the value used to summarize the grouped DT.
###             If the variable is qualitative (e.g. "the_transaction_id"), the occurrence is
###             counted, i.e. length(the_transaction_id); if it is quantitative (e.g. "turnover"),
###             the total sum is computed, i.e. sum(turnover).
### transaction_type : The type of transaction, takes either "sale" or "return"
Build_HC_Temporal_heatmap <- function(DT,X,intensity,transaction_type){
# 2.1 Filtering Data depending on the transaction type ---------------------
DT <- subset(
DT,
tdt_type_detail == transaction_type
)
# 2.2 Color of heatmap depending on type of transaction --------------------
HT_color <- ifelse(transaction_type=='sale','#07AE6B','red')
# 2.3. If X is equal to 'month' or 'quarter' -------------------------------
# then Y will be expressed on years
# In fact, the whole plot approach is not the same as for X = 'days'
# Consequence : two different highchart functions depending on the value of X
if(X %in% c('month','quarter')){
# Y Axis
Y = "year"
# 2.3.1 Java Script function to format the heatmap hovering box ---------
tooltip_formater <- JS(paste0("function () {
function getPointCategoryName(point, dimension) {
var series = point.series,
isY = dimension === 'y',
axis = series[isY ? 'yAxis' : 'xAxis'];
return axis.categories[point[isY ? 'y' : 'x']];
}
return '<b>' + getPointCategoryName(this.point, 'x') + '-' +
getPointCategoryName(this.point, 'y') + '</b>' +'<br>'+
                        'Total of {",intensity,"} : ' + '<b>' + this.point.value + '</b>';
}"))
# 2.3.2 Some JS to enable + customize the exporting button --------------
JS_enable_exporting <- JS(('{
contextButton: {
symbolStroke: "white",
theme: {
fill:"#3C8DBC"
}
}
}'))
# 2.3.3 Highcharter -----------------------------------------------------
DT %>%
group_by(.data[[X]],.data[[Y]]) %>%
summarize(intensity= ifelse(
# If the argument 'intensity' we summarize through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise : calculate the sum (e.g: turnover or quantity)
is.character(.data[[intensity]]),
length(.data[[intensity]]),
sum(.data[[intensity]])
)) %>%
hchart("heatmap",
hcaes(x = .data[[X]], y = .data[[Y]], value= intensity),
marginTop = 0,
marginBottom = 0) %>%
hc_exporting(enabled = TRUE, formAttributes = list(target = '_blank'),
buttons = JS_enable_exporting) %>% # Customizing exporting button
      hc_yAxis(title= "", reversed = T) %>%
      hc_xAxis(title= "") %>%
hc_tooltip(formatter = tooltip_formater,
borderWidth = 3.5) %>%
hc_colorAxis(
min = 0,
minColor= '#FFFFFF',
maxColor= HT_color # Intensity Color
) %>%
hc_legend(
align= 'right',
layout= 'vertical',
margin= 0,
verticalAlign= 'top',
y= 25,
symbolHeight= 320
)
} else{
# 2.4. If X is equal to 'Date'-------------------------------------------
# Y Axis
Y = "aggregated_date_week"
# 2.4.1 JS functions to format the heatmap ------------------------------
### format the tooltip (hovering box)
tooltip_formater <- JS(paste0(
"
      // Correct the aggregated week dates produced by lubridate::floor_date()
      // in the random-data generation script: on hover we want the exact date
      // of the hovered day, not the floor_date() date.
function () {
function getPointCategoryName(point, dimension) {
var series = point.series,
isY = dimension === 'y',
axis = series[isY ? 'yAxis' : 'xAxis'];
return axis.categories[point[isY ? 'y' : 'x']];
}
var day_w = getPointCategoryName(this.point, 'x');
var date = getPointCategoryName(this.point, 'y');
var day = parseInt(date.substring(8,10));
var month = parseInt(date.substring(5,7)) - 1; // JS begin from 0
var year = parseInt(date.substring(0,4));
var date_utc = Date.UTC(year, month, day);
if(day_w==='Tue'){
date_utc = date_utc + 3600000*24*1;
}
if(day_w==='Wed'){
date_utc = date_utc + 3600000*24*2;
}
if(day_w==='Thu'){
date_utc = date_utc + 3600000*24*3;
}
if(day_w==='Fri'){
date_utc = date_utc + 3600000*24*4;
}
if(day_w==='Sat'){
date_utc = date_utc + 3600000*24*5;
}
if(day_w==='Sun'){
date_utc = date_utc + 3600000*24*6;
}
var date_normal = new Date(date_utc);
var formated_date = date_normal.toLocaleDateString();
return '<b>' + getPointCategoryName(this.point, 'x') + ' - ' +
formated_date + '</b>' +'<br>'+
            'Total of {",intensity,"} : ' + '<b>' + this.point.value + '</b>';
}
")
)
    ### Format the Y axis labels (we want a YYYY-MMM format instead of the aggregated week date)
Yaxis_formater <- JS("
// We want to show YYYY/MM instead of aggregated date week
function () {
var monthNames = [ 'null', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec' ];
var month = monthNames[
parseInt(this.value.substring(5,7))];
var year = this.value.substring(0,4);
return year.concat('-',month);}"
)
### Enable + customize the exporting button
JS_enable_exporting <- JS(('{
contextButton: {
symbolStroke: "white",
theme: {
fill:"#3C8DBC"
}
}
}'))
    ### Thin out the Y axis labels when the date range is too large,
    ### depending on the number of weeks to display
n_weeks <- length(unique(DT$aggregated_date_week))
tickPositions <- case_when(
n_weeks<35 ~ list(seq(0 , n_weeks-1 , by=1)),
n_weeks>=35 & n_weeks<80 ~ list(unique(c(seq(0 , n_weeks-1 , by=3),n_weeks-1))),
n_weeks>=80 & n_weeks<120 ~ list(unique(c(seq(0 , n_weeks-1 , by=5),n_weeks-1))),
n_weeks>=120 & n_weeks<160 ~ list(unique(c(seq(0 , n_weeks-1 , by=7),n_weeks-1))),
      n_weeks>=160 ~ list(unique(c(seq(0 , n_weeks-1 , by=9),n_weeks-1)))
    )
tickPositions <- tickPositions[[1]]
# 2.4.2 Highcharter ---------
DT %>%
group_by(.data[[X]],.data[[Y]]) %>%
summarize(intensity= ifelse(
# If the argument 'intensity' we summarize through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise : calculate the sum (e.g: turnover or quantity)
is.character(.data[[intensity]]),
length(.data[[intensity]]),
sum(.data[[intensity]])
)) %>%
hchart("heatmap",
hcaes(x = .data[[X]], y = .data[[Y]], value= intensity),
marginTop = 0,
marginBottom = 0) %>%
hc_exporting(enabled = TRUE, formAttributes = list(target = '_blank'), # Customizing exporting button
buttons = JS_enable_exporting) %>%
hc_xAxis(title= "") %>%
hc_yAxis(title= "",
labels =
list(formatter= Yaxis_formater
),
reversed = T,
tickPositions = tickPositions
) %>%
hc_tooltip(formatter = tooltip_formater,
borderWidth = 3.5) %>%
hc_colorAxis(
min = 0,
minColor= '#FFFFFF',
maxColor= HT_color # Intensity Color
) %>%
hc_legend(
align= 'right',
layout= 'vertical',
margin= 0,
verticalAlign= 'top',
y= 25,
symbolHeight= 320
)
}
}
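The tick-thinning rule above can be exercised on its own; this sketch replaces `case_when()` with a plain `if` chain so it runs in base R (`thin_ticks` is a hypothetical helper, not part of the app):

```r
# Base-R sketch of the Y-axis tick thinning used in the temporal heatmap.
thin_ticks <- function(n_weeks) {
  by <- if (n_weeks < 35) 1 else if (n_weeks < 80) 3 else
        if (n_weeks < 120) 5 else if (n_weeks < 160) 7 else 9
  # Always keep the last week, whatever the step:
  unique(c(seq(0, n_weeks - 1, by = by), n_weeks - 1))
}

thin_ticks(10)   # every week: 0 1 2 ... 9
thin_ticks(100)  # every 5th week, plus week 99
```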
# 3. Line chart function-----------------------------------------------------
### Arguments :
### DT = Data
### X = X axis of Line chart, Default = 'the_date_transaction'
### Y = The Y axis variable, i.e. the value used to summarize the grouped DT.
###     If the variable is qualitative (e.g. "the_transaction_id"), the occurrence is
###     counted, i.e. length(the_transaction_id); if it is quantitative (e.g. "turnover"),
###     the total sum is computed, i.e. sum(turnover).
### group = the variable we group DT through, always = 'item_name'
### transaction_type : The type of transaction, takes either "sale" or "return"
### aggregation_period : The aggregation date label (by days, months, years...)
Build_Date_line_charts <- function(DT, X, Y, group, transaction_type, aggregation_period){
# 3.1 Filtering Data depending on the transaction type --------------------
DT <- subset(
DT,
tdt_type_detail == transaction_type
)
# 3.2 JS formatting functions ---------------------------------------------
# Enable + customize the exporting button
JS_enable_exporting <- JS(('{
contextButton: {
symbolStroke: "white",
theme: {
fill:"#3C8DBC"
}
}
}'))
# 3.3 Highcharter ---------------------------------------------------------
DT %>% group_by(.data[[group]],
Date = floor_date(.data[[X]], aggregation_period)
) %>%
summarize(value=ifelse(
# If the argument 'Y' we summarize through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise, calculate the sum (e.g: turnover or quantity)
is.character(.data[[Y]]),
length(.data[[Y]]),
sum(.data[[Y]])
)) %>%
hchart("spline",
hcaes(x = Date, y = value, group = .data[[group]]),
marginTop = 100,
marginBottom = 0
) %>%
hc_xAxis(title = "") %>%
hc_yAxis(title = list(text=paste0('Total of {',Y,'}')))%>%
hc_tooltip(shared = TRUE,
crosshairs = TRUE,
followPointer = T,
borderColor = "grey")%>%
hc_exporting(enabled = TRUE, formAttributes = list(target = '_blank'), # Customizing exporting button
buttons = JS_enable_exporting)
}
# 4. correlation heatmap function--------------------------------------------
### Arguments :
### DT = Filtered Data
### X = X axis of heat map -- e.g. : "store_name"
### Y = Y axis of heat map -- e.g. : "item_name"
### intensity = The heat map intensity variable, i.e. the value used to summarize the grouped DT.
###             If the variable is qualitative (e.g. "the_transaction_id"), the occurrence is
###             counted, i.e. length(the_transaction_id); if it is quantitative (e.g. "turnover"),
###             the total sum is computed, i.e. sum(turnover).
### transaction_type : The type of transaction, takes either "sale" or "return"
Build_HC_Heatmap_Correlation <- function(DT,X,Y,intensity,transaction_type){
# 4.0 Filtering Data depending on the transaction type -------------------
DT <- subset(
DT,
tdt_type_detail == transaction_type
)
# 4.1 Color of heatmap depending on type of transaction -------------------
HT_color <- ifelse(transaction_type=='sale','#07AE6B','red')
# 4.2 Some JS functions to format the chart -------------------------------
# Format the hovering box (hc_tootlip)
tooltip_formater <- JS(paste0("
function () {
function getPointCategoryName(point, dimension) {
var series = point.series,
isY = dimension === 'y',
axis = series[isY ? 'yAxis' : 'xAxis'];
return axis.categories[point[isY ? 'y' : 'x']];
}
return '",X,": ' + '<b>' + getPointCategoryName(this.point, 'x') + '</b>' +'<br>'+
'",Y,": ' + '<b>' + getPointCategoryName(this.point, 'y') + '</b> <br>'+
        'Total of {",intensity,"} : ' + '<b>' + this.point.value + '</b>';
}"))
# Enable + Customize the exporting button
JS_enable_exporting <- JS(('{
contextButton: {
symbolStroke: "white",
theme: {
fill:"#3C8DBC"
}
}
}'))
# 4.3 Highcharter ---------------------------------------------------------
DT %>%
group_by(.data[[X]],.data[[Y]]) %>%
summarize(intensity= ifelse(
# If the argument 'intensity' we summarize through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise : calculate the sum (e.g: turnover or quantity)
is.character(.data[[intensity]]),
length(.data[[intensity]]),
sum(.data[[intensity]])
)) %>%
hchart("heatmap",
hcaes(x = .data[[X]], y = .data[[Y]], value= intensity),
marginTop = 0,
marginBottom = 0) %>%
hc_exporting(enabled = TRUE, formAttributes = list(target = '_blank'), # Customizing exporting button
buttons = JS_enable_exporting) %>%
hc_yAxis(title= "") %>%
hc_xAxis(title= "") %>%
hc_tooltip(formatter = tooltip_formater,
borderWidth = 3.5) %>%
hc_colorAxis(
min = 0,
minColor= '#FFFFFF',
maxColor= HT_color # Intensity Color
) %>%
hc_legend(
align= 'right',
layout= 'vertical',
margin= 0,
verticalAlign= 'top',
y= 25,
symbolHeight= 320
)
}
# 5. Barplot for store_names comparison--------------------------------------
### Arguments :
### DT = Filtered Data
### X = X axis of the bar plot -- e.g. : "store_name"
### Y = The bar plot Y axis variable, i.e. the value used to summarize the grouped DT.
###     If the variable is qualitative (e.g. "the_transaction_id"), the occurrence is
###     counted, i.e. length(the_transaction_id); if it is quantitative (e.g. "turnover"),
###     the total sum is computed, i.e. sum(turnover).
### group = The variable we group through, default = tdt_type_detail
### percent = whether to stack the bars as percentages
### horizontal = whether to draw horizontal bars instead of vertical columns
Build_HC_Barplot <- function(DT, X, Y, group, percent, horizontal){
  # (Testing function below)
# Enable + Customize the exporting button
JS_enable_exporting <- JS(('{
contextButton: {
symbolStroke: "white",
theme: {
fill:"#3C8DBC"
}
}
}'))
type_HC <- ifelse(horizontal, 'bar','column')
DT %>%
group_by(.data[[X]],.data[[group]]) %>%
summarize(value = ifelse(
# If the argument 'Y' we *summarize* through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise, calculate the sum (e.g: turnover or quantity)
is.character(.data[[Y]]),
length(.data[[Y]]),
sum(.data[[Y]])
)) %>%
hchart(type_HC,
hcaes(x = .data[[X]], y = value, group= .data[[group]]),
marginTop = 100,
marginBottom = 0
) %>%
hc_xAxis(title = '') %>%
hc_yAxis(title =
list(text = paste0('Total {',Y,'}')
)
)%>%
hc_colors(c("red", "#07AE6B")) %>%
hc_exporting(enabled = TRUE, formAttributes = list(target = '_blank'), # Customizing exporting button
buttons = JS_enable_exporting) %>%
hc_plotOptions(series = list(pointPadding=0.05, groupPadding= 0.09,
stacking = ifelse(percent,
list('percent'),
list(NULL))[[1]]
)
) %>%
hc_tooltip(shared = TRUE,
crosshairs = TRUE,
followPointer = T,
borderColor = "grey")
}
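The `stacking = ifelse(percent, list('percent'), list(NULL))[[1]]` line above wraps both branches in `list()` because `ifelse()` cannot return `NULL` directly; extracting `[[1]]` then yields either `'percent'` or `NULL`. A minimal base-R check (`pick_stacking` is a hypothetical helper, not part of the app):

```r
# ifelse() drops NULL, so both branches are wrapped in list() and unwrapped with [[1]].
pick_stacking <- function(percent) ifelse(percent, list("percent"), list(NULL))[[1]]

pick_stacking(TRUE)            # "percent"
is.null(pick_stacking(FALSE))  # TRUE
```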
# TESTING FUNCTIONS -------------------------------------------------------
# Data <- read_feather("Data-Decathlon.feather")
# Build_valuebox(Data)
# Build_leaflet_map(DT = Data,
# radius = 'turnover',
# transaction_type = 'sale')
# Build_HC_Temporal_heatmap(DT = Data, X = 'month', intensity = 'turnover',
# transaction_type = 'return')
# Build_Date_line_charts(DT = Data,
# X = 'the_date_transaction',
# Y = 'quantity',
# group = 'item_name',
# transaction_type = 'return',
# aggregation_period = 'years')
# Build_HC_Heatmap_Correlation(DT = Data, X = 'store_name',Y = 'item_name',
# intensity = 'the_transaction_id',
# transaction_type = 'sale')
# Build_HC_Barplot(DT = Data, X = "store_name", Y = "quantity",
# group= "tdt_type_detail", percent = F,horizontal=F)
|
/helpers.R
|
no_license
|
ahmedjoubest/Preview-Shinyapp
|
R
| false
| false
| 24,283
|
r
|
# Testing function at the bottom of the file
# 0. Building value box function ------------------------------------------
### Arguments : DT, the data set.
Build_valuebox <- function(DT){
DT <- as.data.table(DT)
# turnover of sales
turnover_sales <-
(sum(
DT[tdt_type_detail=='sale'][,c(turnover)]
) / 10^6) %>%
round(3) %>%
as.character() %>%
str_replace("[.]",",") %>%
paste0(" m")
# turnover of returns
turnover_returns <-
(sum(
DT[tdt_type_detail=='return'][,c(turnover)]
) / 10^6) %>%
round(3) %>%
as.character() %>%
str_replace("[.]",",") %>%
paste0(" - ",.," m")
# Number of transactions of sales
N_transactions_sales <-
(nrow(DT[tdt_type_detail=='sale']) /10^3) %>%
round(3) %>%
as.character() %>%
str_replace("[.]",",") %>%
paste0(" k")
# Number of transactions of returns
N_transactions_returns <-
(nrow(DT[tdt_type_detail=='return']) / 10^3) %>%
round(3) %>%
as.character() %>%
str_replace("[.]",",") %>%
paste0(" - ",.," k")
# Value box functions
valuebox_turnover_sales <-
valueBox(value = turnover_sales, subtitle = tags$span("Total turnover of sales",style="font-size: 1.5em;"),
icon = icon("euro-sign"),color = "green")
valuebox_turnover_returns <-
valueBox(value = turnover_returns, subtitle = tags$span("Total turnover of returns",style="font-size: 1.5em;"),
icon = icon("euro-sign"),color = "red")
valuebox_transactions_sales <-
valueBox(value = N_transactions_sales, subtitle = tags$span("Total transaction of sales",style="font-size: 1.5em;"),
icon = icon("shopping-cart"),color = "green")
valuebox_transactions_returns <-
valueBox(value = N_transactions_returns, subtitle = tags$span("Total transaction of returns",style="font-size: 1.5em;"),
icon = icon("shopping-cart"),color = "red")
# Return list of value box functions
list(
valuebox_turnover_sales = valuebox_turnover_sales,
valuebox_turnover_returns = valuebox_turnover_returns,
valuebox_transactions_sales = valuebox_transactions_sales,
valuebox_transactions_returns = valuebox_transactions_returns
)
}
# 1. Building leaflet map function --------------------------------------------
### Arguments :
### DT = Filtered Data
### radius = The performance sales indicator that corresponds to the circle radius
### if the variable is qualitative (i.e. "the_transaction_id"), then it calculates the occurrence
### which means: {length(the_transaction_id))}. otherwise, if the argument is quantitative
### (i.e. "turnover"), then it calculates the total sum, which means: sum(turnover).
### transaction_type : The type of transaction, takes either "sale" or "return"
Build_leaflet_map <- function(DT, radius, transaction_type){
DT <- as.data.table(DT)
# 1.1. Filter by transaction_type, Group by store_name and summarize by radius------
DT <-
DT[tdt_type_detail==transaction_type] %>%
group_by(store_name) %>%
summarize(value =
ifelse(
# If the argument 'radius' we summarize through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise : calculate the sum (e.g: turnover or quantity)
is.character(.data[[radius]]),
length(.data[[radius]]),
sum(.data[[radius]]))) %>%
as.data.frame()
# 1.2. Attribution of Lat/Long to stores (to save storage)--------------------------
# Get latitude and longitude Data
cities <- read.csv2("Cities-lat-long.csv")
# Add columns of latitude and longitude
DT$Lat <-
sapply(DT$store_name,function(store_name)
cities$Lat[which(cities$City == store_name)]
)
DT$Long <-
sapply(DT$store_name,function(store_name)
cities$Long[which(cities$City == store_name)]
)
# 1.3. Build leaflet map depending on arguments -------------------------------------
# Palette
pal <- colorNumeric(palette = ifelse(transaction_type=='sale',
'Greens',
'Reds'),
DT$value)
# Map
leaflet(DT) %>%
addProviderTiles(providers$ CartoDB.Positron) %>%
addCircles(lng = ~Long, lat = ~Lat, fillColor = ~pal(value), fillOpacity = 0.73,
color = ifelse(transaction_type=='sale','green', 'red'),
stroke = TRUE , weight = 1, radius = 5000,
popup = ~paste0("Decathlon ", store_name, ' - ',
case_when(
radius=='turnover' ~ paste0(
round(value/10^6,2),
" M EUR"),
radius=='the_transaction_id' ~ paste0(
round(value/10^3,6),
" K Transactions"),
radius=='quantity' ~ paste0(
round(value/10^3,6),
" K Units sold")
)
)
) %>%
clearBounds()
}
# 2. Temporal heatmap function------------------------------------------------
### Arguments :
### DT = Filtered Data
### X = X axis of heat map -- e.g. : "month"
### intensity = The Heat Map intensity variable, value token for summarizing the grouped DT
### if the variable is qualitative (i.e. "the_transaction_id"), then it calculates the occurrence
### which means: {length(the_transaction_id))}. otherwise, if the argument is quantitative
### (i.e. "turnover"), then it calculates the total sum, which means: sum(turnover).
### transaction_type : The type of transaction, takes either "sale" or "return"
Build_HC_Temporal_heatmap <- function(DT,X,intensity,transaction_type){
# 2.1 Filtering Data depending on the transaction type ---------------------
DT <- subset(
DT,
tdt_type_detail == transaction_type
)
# 2.2 Color of heatmap depending on type of transaction --------------------
HT_color <- ifelse(transaction_type=='sale','#07AE6B','red')
# 2.3. If X is equal to 'month' or 'quarter' -------------------------------
# then Y will be expressed on years
# In fact, the whole plot approach is not the same as for X = 'days'
# Consequence : two different highchart functions depending on the value of X
if(X %in% c('month','quarter')){
# Y Axis
Y = "year"
# 2.3.1 Java Script function to format the heatmap hovering box ---------
tooltip_formater <- JS(paste0("function () {
function getPointCategoryName(point, dimension) {
var series = point.series,
isY = dimension === 'y',
axis = series[isY ? 'yAxis' : 'xAxis'];
return axis.categories[point[isY ? 'y' : 'x']];
}
return '<b>' + getPointCategoryName(this.point, 'x') + '-' +
getPointCategoryName(this.point, 'y') + '</b>' +'<br>'+
'Total of {",intensity,"} : ' + '<b>' + this.point.value + '<b>';
}"))
# 2.3.2 Some JS to enable + customize the exporting button --------------
JS_enable_exporting <- JS(('{
contextButton: {
symbolStroke: "white",
theme: {
fill:"#3C8DBC"
}
}
}'))
# 2.3.3 Highcharter -----------------------------------------------------
DT %>%
group_by(.data[[X]],.data[[Y]]) %>%
summarize(intensity= ifelse(
# If the argument 'intensity' we summarize through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise : calculate the sum (e.g: turnover or quantity)
is.character(.data[[intensity]]),
length(.data[[intensity]]),
sum(.data[[intensity]])
)) %>%
hchart("heatmap",
hcaes(x = .data[[X]], y = .data[[Y]], value= intensity),
marginTop = 0,
marginBottom = 0) %>%
hc_exporting(enabled = TRUE, formAttributes = list(target = '_blank'),
buttons = JS_enable_exporting) %>% # Customizing exporting button
hc_yAxis(title= "null", reversed = T) %>%
hc_xAxis(title= "null") %>%
hc_tooltip(formatter = tooltip_formater,
borderWidth = 3.5) %>%
hc_colorAxis(
min = 0,
minColor= '#FFFFFF',
maxColor= HT_color # Intensity Color
) %>%
hc_legend(
align= 'right',
layout= 'vertical',
margin= 0,
verticalAlign= 'top',
y= 25,
symbolHeight= 320
)
} else{
# 2.4. If X is equal to 'Date'-------------------------------------------
# Y Axis
Y = "aggregated_date_week"
# 2.4.1 JS functions to format the heatmap ------------------------------
### format the tooltip (hovering box)
tooltip_formater <- JS(paste0(
"
// correction of aggregated week dates generated by {lubridate::floor_date}
// In the generating random Data script
// I want to have on hovering the exact date of the hovered day
// Not the floor_date Date.
function () {
function getPointCategoryName(point, dimension) {
var series = point.series,
isY = dimension === 'y',
axis = series[isY ? 'yAxis' : 'xAxis'];
return axis.categories[point[isY ? 'y' : 'x']];
}
var day_w = getPointCategoryName(this.point, 'x');
var date = getPointCategoryName(this.point, 'y');
var day = parseInt(date.substring(8,10));
var month = parseInt(date.substring(5,7)) - 1; // JS begin from 0
var year = parseInt(date.substring(0,4));
var date_utc = Date.UTC(year, month, day);
if(day_w==='Tue'){
date_utc = date_utc + 3600000*24*1;
}
if(day_w==='Wed'){
date_utc = date_utc + 3600000*24*2;
}
if(day_w==='Thu'){
date_utc = date_utc + 3600000*24*3;
}
if(day_w==='Fri'){
date_utc = date_utc + 3600000*24*4;
}
if(day_w==='Sat'){
date_utc = date_utc + 3600000*24*5;
}
if(day_w==='Sun'){
date_utc = date_utc + 3600000*24*6;
}
var date_normal = new Date(date_utc);
var formated_date = date_normal.toLocaleDateString();
return '<b>' + getPointCategoryName(this.point, 'x') + ' - ' +
formated_date + '</b>' +'<br>'+
'Total of {",intensity,"} : ' + '<b>' + this.point.value + '<b>';
}
")
)
### format the Xaxis label (We want a yyyy/mm) format instead of aggregated date week
Yaxis_formater <- JS("
// We want to show YYYY/MM instead of aggregated date week
function () {
var monthNames = [ 'null', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec' ];
var month = monthNames[
parseInt(this.value.substring(5,7))];
var year = this.value.substring(0,4);
return year.concat('-',month);}"
)
### Enable + customize the exporting button
JS_enable_exporting <- JS(('{
contextButton: {
symbolStroke: "white",
theme: {
fill:"#3C8DBC"
}
}
}'))
    ### thinning the Y axis tick labels when the date range is too large,
    ### depending on the number of weeks to display
n_weeks <- length(unique(DT$aggregated_date_week))
tickPositions <- case_when(
n_weeks<35 ~ list(seq(0 , n_weeks-1 , by=1)),
n_weeks>=35 & n_weeks<80 ~ list(unique(c(seq(0 , n_weeks-1 , by=3),n_weeks-1))),
n_weeks>=80 & n_weeks<120 ~ list(unique(c(seq(0 , n_weeks-1 , by=5),n_weeks-1))),
n_weeks>=120 & n_weeks<160 ~ list(unique(c(seq(0 , n_weeks-1 , by=7),n_weeks-1))),
      n_weeks>=160 ~ list(unique(c(seq(0 , n_weeks-1 , by=9),n_weeks-1)))
    )
tickPositions <- tickPositions[[1]]
# 2.4.2 Highcharter ---------
DT %>%
group_by(.data[[X]],.data[[Y]]) %>%
summarize(intensity= ifelse(
# If the argument 'intensity' we summarize through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise : calculate the sum (e.g: turnover or quantity)
is.character(.data[[intensity]]),
length(.data[[intensity]]),
sum(.data[[intensity]])
)) %>%
hchart("heatmap",
hcaes(x = .data[[X]], y = .data[[Y]], value= intensity),
marginTop = 0,
marginBottom = 0) %>%
hc_exporting(enabled = TRUE, formAttributes = list(target = '_blank'), # Customizing exporting button
buttons = JS_enable_exporting) %>%
hc_xAxis(title= "") %>%
hc_yAxis(title= "",
labels =
list(formatter= Yaxis_formater
),
reversed = T,
tickPositions = tickPositions
) %>%
hc_tooltip(formatter = tooltip_formater,
borderWidth = 3.5) %>%
hc_colorAxis(
min = 0,
minColor= '#FFFFFF',
maxColor= HT_color # Intensity Color
) %>%
hc_legend(
align= 'right',
layout= 'vertical',
margin= 0,
verticalAlign= 'top',
y= 25,
symbolHeight= 320
)
}
}
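The JS tooltip above rebuilds the exact hovered date by adding a 24h-per-weekday offset to the floor_date week start. The same arithmetic can be sanity-checked in R, assuming weeks were floored with week_start = 1 (Monday), as the Mon-first offsets in the JS suggest; the date below is a toy example:

```r
# Sanity check (toy example) of the weekday-offset correction the tooltip
# formatter performs in JS: week start from lubridate::floor_date plus
# (weekday index - 1) days recovers the original date.
library(lubridate)

d <- as.Date("2020-03-12")                           # a Thursday
week_start <- floor_date(d, "week", week_start = 1)  # Monday of that week
offset_days <- wday(d, week_start = 1) - 1           # Thu -> 3, matching the JS 3600000*24*3
week_start + offset_days                             # 2020-03-12 again
```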
# 3. Line chart function-----------------------------------------------------
### Arguments :
### DT = Data
### X = X axis of the line chart, default = 'the_date_transaction'
### Y = the Y axis variable, used when summarizing the grouped DT:
###     if the variable is qualitative (e.g. "the_transaction_id"), the occurrence
###     is calculated, i.e. {length(the_transaction_id)}; otherwise, if it is
###     quantitative (e.g. "turnover"), the total sum is calculated, i.e. sum(turnover).
### group = the variable we group DT by, always = 'item_name'
### transaction_type = the type of transaction, either "sale" or "return"
### aggregation_period = the date aggregation label (by days, months, years...)
Build_Date_line_charts <- function(DT, X, Y, group, transaction_type, aggregation_period){
# 3.1 Filtering Data depending on the transaction type --------------------
DT <- subset(
DT,
tdt_type_detail == transaction_type
)
# 3.2 JS formatting functions ---------------------------------------------
# Enable + customize the exporting button
JS_enable_exporting <- JS(('{
contextButton: {
symbolStroke: "white",
theme: {
fill:"#3C8DBC"
}
}
}'))
# 3.3 Highcharter ---------------------------------------------------------
DT %>% group_by(.data[[group]],
Date = floor_date(.data[[X]], aggregation_period)
) %>%
summarize(value=ifelse(
# If the argument 'Y' we summarize through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise, calculate the sum (e.g: turnover or quantity)
is.character(.data[[Y]]),
length(.data[[Y]]),
sum(.data[[Y]])
)) %>%
hchart("spline",
hcaes(x = Date, y = value, group = .data[[group]]),
marginTop = 100,
marginBottom = 0
) %>%
hc_xAxis(title = "") %>%
hc_yAxis(title = list(text=paste0('Total of {',Y,'}')))%>%
hc_tooltip(shared = TRUE,
crosshairs = TRUE,
followPointer = T,
borderColor = "grey")%>%
hc_exporting(enabled = TRUE, formAttributes = list(target = '_blank'), # Customizing exporting button
buttons = JS_enable_exporting)
}
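All the chart builders share the same summarize rule: a character column is counted, a numeric column is summed. A standalone check of that rule on a toy data frame (hypothetical values, independent of the app's data):

```r
# Quick sanity check (toy data) of the summarize rule shared by the chart
# builders: a character column yields an occurrence count, a numeric column
# yields a total sum. ifelse() only forces the branch it needs, so sum()
# is never evaluated on a character vector.
library(dplyr)

toy <- data.frame(
  item_name          = c("bike", "bike", "tent"),
  the_transaction_id = c("t1", "t2", "t3"),
  turnover           = c(10, 20, 5),
  stringsAsFactors   = FALSE
)

summarize_col <- function(DT, Y) {
  DT %>%
    group_by(item_name) %>%
    summarize(value = ifelse(
      is.character(.data[[Y]]),
      length(.data[[Y]]),   # qualitative -> occurrence
      sum(.data[[Y]])       # quantitative -> total
    ))
}

res_count <- summarize_col(toy, "the_transaction_id")  # bike: 2, tent: 1
res_sum   <- summarize_col(toy, "turnover")            # bike: 30, tent: 5
```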
# 4. Correlation heatmap function--------------------------------------------
### Arguments :
### DT = filtered Data
### X = X axis of the heat map -- e.g. "store_name"
### Y = Y axis of the heat map -- e.g. "item_name"
### intensity = the heat map intensity variable, used when summarizing the grouped DT:
###     if the variable is qualitative (e.g. "the_transaction_id"), the occurrence
###     is calculated, i.e. {length(the_transaction_id)}; otherwise, if it is
###     quantitative (e.g. "turnover"), the total sum is calculated, i.e. sum(turnover).
### transaction_type = the type of transaction, either "sale" or "return"
Build_HC_Heatmap_Correlation <- function(DT,X,Y,intensity,transaction_type){
# 4.0 Filtering Data depending on the transaction type -------------------
DT <- subset(
DT,
tdt_type_detail == transaction_type
)
# 4.1 Color of heatmap depending on type of transaction -------------------
HT_color <- ifelse(transaction_type=='sale','#07AE6B','red')
# 4.2 Some JS functions to format the chart -------------------------------
  # Format the hovering box (hc_tooltip)
tooltip_formater <- JS(paste0("
function () {
function getPointCategoryName(point, dimension) {
var series = point.series,
isY = dimension === 'y',
axis = series[isY ? 'yAxis' : 'xAxis'];
return axis.categories[point[isY ? 'y' : 'x']];
}
return '",X,": ' + '<b>' + getPointCategoryName(this.point, 'x') + '</b>' +'<br>'+
'",Y,": ' + '<b>' + getPointCategoryName(this.point, 'y') + '</b> <br>'+
'Total of {",intensity,"} : ' + '<b>' + this.point.value + '<b>';
}"))
# Enable + Customize the exporting button
JS_enable_exporting <- JS(('{
contextButton: {
symbolStroke: "white",
theme: {
fill:"#3C8DBC"
}
}
}'))
# 4.3 Highcharter ---------------------------------------------------------
DT %>%
group_by(.data[[X]],.data[[Y]]) %>%
summarize(intensity= ifelse(
# If the argument 'intensity' we summarize through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise : calculate the sum (e.g: turnover or quantity)
is.character(.data[[intensity]]),
length(.data[[intensity]]),
sum(.data[[intensity]])
)) %>%
hchart("heatmap",
hcaes(x = .data[[X]], y = .data[[Y]], value= intensity),
marginTop = 0,
marginBottom = 0) %>%
hc_exporting(enabled = TRUE, formAttributes = list(target = '_blank'), # Customizing exporting button
buttons = JS_enable_exporting) %>%
hc_yAxis(title= "") %>%
hc_xAxis(title= "") %>%
hc_tooltip(formatter = tooltip_formater,
borderWidth = 3.5) %>%
hc_colorAxis(
min = 0,
minColor= '#FFFFFF',
maxColor= HT_color # Intensity Color
) %>%
hc_legend(
align= 'right',
layout= 'vertical',
margin= 0,
verticalAlign= 'top',
y= 25,
symbolHeight= 320
)
}
# 5. Barplot for store_names comparison--------------------------------------
### Arguments :
### DT = filtered Data
### X = X axis of the bar plot -- e.g. "store_name"
### Y = the bar plot Y axis variable, used when summarizing the grouped DT:
###     if the variable is qualitative (e.g. "the_transaction_id"), the occurrence
###     is calculated, i.e. {length(the_transaction_id)}; otherwise, if it is
###     quantitative (e.g. "turnover"), the total sum is calculated, i.e. sum(turnover).
### group = the variable we group by, default = tdt_type_detail
### percent = whether to display values as percentages (percent-stacked bars)
### horizontal = whether to display horizontal bars instead of vertical columns
Build_HC_Barplot <- function(DT, X, Y, group, percent, horizontal){
  # (testing call in the TESTING FUNCTIONS section below)
# Enable + Customize the exporting button
JS_enable_exporting <- JS(('{
contextButton: {
symbolStroke: "white",
theme: {
fill:"#3C8DBC"
}
}
}'))
type_HC <- ifelse(horizontal, 'bar','column')
DT %>%
group_by(.data[[X]],.data[[group]]) %>%
summarize(value = ifelse(
# If the argument 'Y' we *summarize* through is a qualitative variable in DT, then calculate
# the occurrence (e.g: the_transaction_id). otherwise, calculate the sum (e.g: turnover or quantity)
is.character(.data[[Y]]),
length(.data[[Y]]),
sum(.data[[Y]])
)) %>%
hchart(type_HC,
hcaes(x = .data[[X]], y = value, group= .data[[group]]),
marginTop = 100,
marginBottom = 0
) %>%
hc_xAxis(title = '') %>%
hc_yAxis(title =
list(text = paste0('Total {',Y,'}')
)
)%>%
hc_colors(c("red", "#07AE6B")) %>%
hc_exporting(enabled = TRUE, formAttributes = list(target = '_blank'), # Customizing exporting button
buttons = JS_enable_exporting) %>%
    hc_plotOptions(series = list(pointPadding = 0.05, groupPadding = 0.09,
                                 # stacking: "percent" when percent = TRUE, none otherwise
                                 stacking = if (percent) "percent" else NULL
    )
    ) %>%
hc_tooltip(shared = TRUE,
crosshairs = TRUE,
followPointer = T,
borderColor = "grey")
}
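The stacking plot option is either "percent" (percent-stacked bars) or NULL (no stacking). Both the ifelse-over-lists idiom and a plain if/else produce the same value; wrapping in list() is what lets NULL survive the selection. A standalone check:

```r
# Two equivalent ways to select the stacking mode; both return "percent"
# when percent = TRUE and NULL otherwise (independent of highcharter).
pick_ifelse <- function(percent) ifelse(percent, list("percent"), list(NULL))[[1]]
pick_if     <- function(percent) if (percent) "percent" else NULL

stopifnot(identical(pick_ifelse(TRUE),  pick_if(TRUE)))   # both "percent"
stopifnot(identical(pick_ifelse(FALSE), pick_if(FALSE)))  # both NULL
```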
# TESTING FUNCTIONS -------------------------------------------------------
# Data <- read_feather("Data-Decathlon.feather")
# Build_valuebox(Data)
# Build_leaflet_map(DT = Data,
# radius = 'turnover',
# transaction_type = 'sale')
# Build_HC_Temporal_heatmap(DT = Data, X = 'month', intensity = 'turnover',
# transaction_type = 'return')
# Build_Date_line_charts(DT = Data,
# X = 'the_date_transaction',
# Y = 'quantity',
# group = 'item_name',
# transaction_type = 'return',
# aggregation_period = 'years')
# Build_HC_Heatmap_Correlation(DT = Data, X = 'store_name',Y = 'item_name',
# intensity = 'the_transaction_id',
# transaction_type = 'sale')
# Build_HC_Barplot(DT = Data, X = "store_name", Y = "quantity",
# group= "tdt_type_detail", percent = F,horizontal=F)
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/nearest_RIT.R
\name{nearest_rit}
\alias{nearest_rit}
\title{find nearest RIT score for a student, by date}
\usage{
nearest_rit(mapvizieR_obj, studentid, measurementscale, target_date,
num_days = 180, forward = TRUE)
}
\arguments{
\item{mapvizieR_obj}{mapvizieR object}
\item{studentid}{target studentid}
\item{measurementscale}{target subject}
\item{target_date}{date of interest, \code{Y-m-d} format}
\item{num_days}{function will only return test score within num_days of target_date}
\item{forward}{default is TRUE, set to FALSE if only scores before target_date should be chosen}
}
\description{
Given studentid, measurementscale, and a target_date, the function will return
the closest RIT score.
}
|
/man/nearest_rit.Rd
|
no_license
|
rabare/mapvizieR
|
R
| false
| false
| 798
|
rd
|
library(dplyr)
en_afa <- read.table(file = "/home/pokoro/data/mesa_models/split_mesa/results/all_chr_AFA_model_summaries.txt", header = TRUE)
en_afa$gene_id <- as.character(en_afa$gene_id)
en_afa <- en_afa[,c(1,10)]
names(en_afa) <- c("gene", "cvR2")
en_afa <- subset(en_afa, cvR2 > -1)
#start df for dot plots ML v. EN
en_afa <- mutate(en_afa, model="EN",pop="AFA")
rf_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/AFA_best_grid_rf_all_chrom.txt", header = T)
rf_afa$Gene_ID <- as.character(rf_afa$Gene_ID)
rf_afa <- rf_afa[,c(1,3)]
names(rf_afa) <- c("gene", "cvR2")
rf_afa <- subset(rf_afa, cvR2 > -1)
rf_afa <- mutate(rf_afa, model="RF",pop="AFA")
svr_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/AFA_best_grid_svr_all_chrom.txt", header = T)
svr_afa$Gene_ID <- as.character(svr_afa$Gene_ID)
svr_afa <- svr_afa[,c(1,3)]
names(svr_afa) <- c("gene", "cvR2")
svr_afa <- subset(svr_afa, cvR2 > -1)
svr_afa <- mutate(svr_afa, model="SVR",pop="AFA")
knn_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/AFA_best_grid_knn_all_chrom.txt", header = T)
knn_afa$Gene_ID <- as.character(knn_afa$Gene_ID)
knn_afa <- knn_afa[,c(1,3)]
names(knn_afa) <- c("gene", "cvR2")
knn_afa <- subset(knn_afa, cvR2 > -1)
knn_afa <- mutate(knn_afa, model="KNN",pop="AFA")
#data.frame for boxplots
bpdf <- rbind(en_afa, rf_afa, svr_afa, knn_afa)
#data.frame for dot plots EN v. ML model
enrf <- inner_join(en_afa,rf_afa,by=c("gene","pop"))
ensvr <- inner_join(en_afa,svr_afa,by=c("gene","pop"))
enknn <- inner_join(en_afa,knn_afa,by=c("gene","pop"))
#try facet_grid(pop ~ model)
fig1df <- rbind(enrf, ensvr, enknn)
#do same for HIS
en_afa <- read.table(file = "/home/pokoro/data/mesa_models/split_mesa/results/all_chr_HIS_model_summaries.txt", header = TRUE)
en_afa$gene_id <- as.character(en_afa$gene_id)
en_afa <- en_afa[,c(1,10)]
names(en_afa) <- c("gene", "cvR2")
en_afa <- subset(en_afa, cvR2 > -1)
#start df for dot plots ML v. EN
en_afa <- mutate(en_afa, model="EN",pop="HIS")
rf_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/HIS_best_grid_rf_all_chrom.txt", header = T)
rf_afa$Gene_ID <- as.character(rf_afa$Gene_ID)
rf_afa <- rf_afa[,c(1,3)]
names(rf_afa) <- c("gene", "cvR2")
rf_afa <- subset(rf_afa, cvR2 > -1)
rf_afa <- mutate(rf_afa, model="RF",pop="HIS")
svr_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/HIS_best_grid_svr_all_chrom.txt", header = T)
svr_afa$Gene_ID <- as.character(svr_afa$Gene_ID)
svr_afa <- svr_afa[,c(1,3)]
names(svr_afa) <- c("gene", "cvR2")
svr_afa <- subset(svr_afa, cvR2 > -1)
svr_afa <- mutate(svr_afa, model="SVR",pop="HIS")
knn_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/HIS_best_grid_knn_all_chrom.txt", header = T)
knn_afa$Gene_ID <- as.character(knn_afa$Gene_ID)
knn_afa <- knn_afa[,c(1,3)]
names(knn_afa) <- c("gene", "cvR2")
knn_afa <- subset(knn_afa, cvR2 > -1)
knn_afa <- mutate(knn_afa, model="KNN",pop="HIS")
#data.frame for boxplots
bpdf <- rbind(bpdf, en_afa, rf_afa, svr_afa, knn_afa)
#data.frame for dot plots EN v. ML model
enrf <- inner_join(en_afa,rf_afa,by=c("gene","pop"))
ensvr <- inner_join(en_afa,svr_afa,by=c("gene","pop"))
enknn <- inner_join(en_afa,knn_afa,by=c("gene","pop"))
fig1df <- rbind(fig1df, enrf, ensvr, enknn)
#do same for CAU
en_afa <- read.table(file = "/home/pokoro/data/mesa_models/split_mesa/results/all_chr_CAU_model_summaries.txt", header = TRUE)
en_afa$gene_id <- as.character(en_afa$gene_id)
en_afa <- en_afa[,c(1,10)]
names(en_afa) <- c("gene", "cvR2")
en_afa <- subset(en_afa, cvR2 > -1)
#start df for dot plots ML v. EN
en_afa <- mutate(en_afa, model="EN",pop="CAU")
rf_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/CAU_best_grid_rf_all_chrom.txt", header = T)
rf_afa$Gene_ID <- as.character(rf_afa$Gene_ID)
rf_afa <- rf_afa[,c(1,3)]
names(rf_afa) <- c("gene", "cvR2")
rf_afa <- subset(rf_afa, cvR2 > -1)
rf_afa <- mutate(rf_afa, model="RF",pop="CAU")
svr_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/CAU_best_grid_svr_all_chrom.txt", header = T)
svr_afa$Gene_ID <- as.character(svr_afa$Gene_ID)
svr_afa <- svr_afa[,c(1,3)]
names(svr_afa) <- c("gene", "cvR2")
svr_afa <- subset(svr_afa, cvR2 > -1)
svr_afa <- mutate(svr_afa, model="SVR",pop="CAU")
knn_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/CAU_best_grid_knn_all_chrom.txt", header = T)
knn_afa$Gene_ID <- as.character(knn_afa$Gene_ID)
knn_afa <- knn_afa[,c(1,3)]
names(knn_afa) <- c("gene", "cvR2")
knn_afa <- subset(knn_afa, cvR2 > -1)
knn_afa <- mutate(knn_afa, model="KNN",pop="CAU")
#data.frame for boxplots
bpdf <- rbind(bpdf, en_afa, rf_afa, svr_afa, knn_afa)
#data.frame for dot plots EN v. ML model
enrf <- inner_join(en_afa,rf_afa,by=c("gene","pop"))
ensvr <- inner_join(en_afa,svr_afa,by=c("gene","pop"))
enknn <- inner_join(en_afa,knn_afa,by=c("gene","pop"))
fig1df <- rbind(fig1df, enrf, ensvr, enknn)
#do same for ALL
en_afa <- read.table(file = "/home/pokoro/data/mesa_models/split_mesa/results/all_chr_ALL_model_summaries.txt", header = TRUE)
en_afa$gene_id <- as.character(en_afa$gene_id)
en_afa <- en_afa[,c(1,10)]
names(en_afa) <- c("gene", "cvR2")
en_afa <- subset(en_afa, cvR2 > -1)
#start df for dot plots ML v. EN
en_afa <- mutate(en_afa, model="EN",pop="ALL")
rf_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/ALL_best_grid_rf_all_chrom.txt", header = T)
rf_afa$Gene_ID <- as.character(rf_afa$Gene_ID)
rf_afa <- rf_afa[,c(1,3)]
names(rf_afa) <- c("gene", "cvR2")
rf_afa <- subset(rf_afa, cvR2 > -1)
rf_afa <- mutate(rf_afa, model="RF",pop="ALL")
svr_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/ALL_best_grid_svr_all_chrom.txt", header = T)
svr_afa$Gene_ID <- as.character(svr_afa$Gene_ID)
svr_afa <- svr_afa[,c(1,3)]
names(svr_afa) <- c("gene", "cvR2")
svr_afa <- subset(svr_afa, cvR2 > -1)
svr_afa <- mutate(svr_afa, model="SVR",pop="ALL")
knn_afa <- read.table(file = "/home/pokoro/data/mesa_models/python_ml_models/merged_chunk_results/ALL_best_grid_knn_all_chrom.txt", header = T)
knn_afa$Gene_ID <- as.character(knn_afa$Gene_ID)
knn_afa <- knn_afa[,c(1,3)]
names(knn_afa) <- c("gene", "cvR2")
knn_afa <- subset(knn_afa, cvR2 > -1)
knn_afa <- mutate(knn_afa, model="KNN",pop="ALL")
#data.frame for boxplots
bpdf <- rbind(bpdf, en_afa, rf_afa, svr_afa, knn_afa)
#data.frame for dot plots EN v. ML model
enrf <- inner_join(en_afa,rf_afa,by=c("gene","pop"))
ensvr <- inner_join(en_afa,svr_afa,by=c("gene","pop"))
enknn <- inner_join(en_afa,knn_afa,by=c("gene","pop"))
fig1df <- rbind(fig1df, enrf, ensvr, enknn)
colnames(fig1df) <- c("gene", "ENcvR2", "EN", "pop", "cvR2", "MLmodel")
fig1df <- select(fig1df, gene, ENcvR2, MLmodel, cvR2, pop)
write.table(fig1df, "fig1df.txt", quote=F, row.names=F)
write.table(bpdf, "boxplotsdf.txt", quote=F, row.names=F)
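The "#try facet_grid(pop ~ model)" note above hints at the intended figure. A hedged ggplot2 sketch (not part of the original pipeline) on a tiny hypothetical stand-in for fig1df; the real data would come from fig1df.txt, and the column names follow the select() call above:

```r
# Hedged sketch: EN-vs-ML-model dot plot faceted by population and model,
# as suggested by the facet_grid(pop ~ model) comment. `demo` is a tiny
# hypothetical stand-in for fig1df; point styling is an assumption.
library(ggplot2)

demo <- data.frame(
  gene    = c("g1", "g2", "g1", "g2"),
  ENcvR2  = c(0.1, 0.4, 0.1, 0.4),
  MLmodel = c("RF", "RF", "SVR", "SVR"),
  cvR2    = c(0.2, 0.35, 0.15, 0.5),
  pop     = "AFA"
)

p <- ggplot(demo, aes(x = ENcvR2, y = cvR2)) +
  geom_point() +
  geom_abline(slope = 1, intercept = 0, colour = "red") +  # y = x reference
  facet_grid(pop ~ MLmodel) +
  labs(x = "Elastic Net CV R2", y = "ML model CV R2")
# print(p) to render
```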
|
/ml_paper_figs/make_cvr2_df_pauls-paper.R
|
no_license
|
okoropaulc/mesa_scripts
|
R
| false
| false
| 7,160
|
r
|
squeeze
arrive
person
monitor
sand
present
elderly
publisher
recognition
singer
primary
fifty
respondent
detail
race
herself
broad
remaining
account
variety
estimate
operation
short
pool
morning
employee
consideration
gain
Mexican
debt
through
fighter
middle
mother
warn
tomorrow
advice
regard
layer
near
bone
style
enable
frequent
French
paper
joy
worried
segment
slide
stick
championship
amazing
nice
cope
dress
private
out
comment
pursue
simply
sue
family
imagination
supply
classroom
disaster
collect
penalty
mail
aggressive
basically
visible
goal
overlook
finding
event
struggle
science
hip
suppose
reinforce
final
deeply
cross
fee
recipe
flesh
recommend
latter
shit
pattern
representative
national
so-called
athletic
universal
code
dig
angry
toy
rid
auto
fund
trade
heavy
learn
disagree
native
funny
action
street
less
fifteen
lung
extreme
boot
sugar
professor
civilian
recall
impose
relation
sweep
born
ask
passion
hang
sick
pilot
fundamental
respect
wide
dog
criticism
cancer
knowledge
distant
regardless
really
bullet
motion
me
judge
cousin
different
literally
program
resolution
distinction
pie
modest
tower
rural
release
variation
tournament
glove
nearby
translate
cell
contribute
lawn
courage
plus
move
full
Russian
shot
nut
afraid
proud
inflation
tremendous
barrier
link
Congress
schedule
work
nowhere
stability
parking
include
stock
visit
tie
interpret
extraordinary
manufacturer
that
major
achievement
Internet
beneath
some
grand
pay
establishment
extension
significant
unknown
bake
blanket
next
typical
tend
consciousness
item
joke
via
bombing
those
screen
funding
admit
lucky
if
concerned
boy
hypothesis
iron
rain
purpose
permit
possible
lemon
meat
regulation
title
modern
slice
enter
collective
recently
two
smile
admission
effort
himself
soldier
fellow
destruction
teaching
second
congressional
follow
soon
healthy
time
love
address
bond
barely
presence
describe
silent
status
sake
alone
session
tomato
distinct
conversation
experience
stand
truth
tape
somebody
beat
sight
source
quarterback
gas
bar
glass
routine
presentation
musician
honor
company
objective
result
reflection
fast
lesson
within
smart
charge
wipe
various
shower
minute
sin
limitation
possibility
more
kick
accept
tourist
we
who
appear
diversity
student
winner
transformation
definitely
camera
partner
shake
today
constitutional
sister
hell
relationship
video
compete
growth
activist
seek
citizen
housing
manner
position
custom
culture
relevant
vehicle
combine
complete
agent
movie
central
friendship
neither
federal
return
clue
plant
stuff
together
lab
pregnant
she
grocery
preserve
male
delivery
galaxy
notion
instance
odds
outcome
cost
attach
need
job
invest
Jewish
relatively
human
maintain
so
dinner
sky
car
perform
infection
class
health
deserve
bike
guide
miss
tank
gay
show
conclusion
meter
when
ship
speed
easily
connection
porch
emerge
confirm
precisely
adjustment
taste
setting
round
skin
why
expensive
estate
characteristic
surprise
produce
recovery
widely
lovely
side
still
tone
field
greatest
eat
glad
list
tax
convince
southern
weak
strengthen
trouble
lots
furniture
contract
angle
increase
capable
competitor
nuclear
expert
rarely
gang
veteran
library
upon
seven
three
government
nurse
buy
relief
earn
prospect
ghost
Supreme
beauty
qualify
wait
pitch
due
entire
soccer
influence
economics
stop
accompany
obviously
agreement
mechanism
concentration
plastic
rifle
orange
fair
date
beyond
put
moderate
user
proceed
organic
pepper
celebration
approximately
interested
league
vital
lean
willing
possibly
condition
uncle
attractive
how
talk
illustrate
theater
a
storm
exposure
policy
discipline
be
bathroom
anniversary
upper
surround
enemy
grow
quickly
everyone
community
counselor
confusion
country
pick
observation
college
play
open
wife
negotiate
appreciate
legend
garlic
large
resistance
appearance
acid
sophisticated
learning
ground
protection
behavior
enforcement
brilliant
consumption
offer
ocean
powder
percentage
abroad
double
week
effect
walk
employment
silence
sexual
chemical
control
introduce
overall
nerve
themselves
north
claim
traditional
like
constant
none
wisdom
form
wing
song
society
kill
generate
strongly
split
assault
surface
true
it
bad
observe
union
package
limit
chest
guilty
daughter
begin
ride
clean
coach
rope
whom
whole
prison
tension
rich
know
aircraft
online
capture
up
somehow
PC
inner
yes
defensive
priest
decide
teen
whisper
leave
shirt
draw
wire
liberal
wander
past
test
billion
ugly
which
promise
gift
category
revenue
cause
among
solve
check
differ
bottle
customer
essential
language
operate
palm
mall
both
tunnel
reservation
scene
utility
age
historic
boss
summer
therapy
waste
system
workshop
motor
punishment
group
focus
summit
vision
climb
jet
orientation
discourse
prescription
rank
during
electric
frequency
adult
hundred
parent
far
educate
even
interest
political
electricity
depict
state
chicken
lifetime
miracle
reference
accuse
works
ready
embrace
patient
of
element
say
brown
head
especially
passage
environment
neighbor
dominant
border
hurt
knife
bridge
bet
belong
hope
clinical
moon
difference
arrest
dust
shout
persuade
clock
involvement
everyday
develop
the
travel
telescope
carbon
frame
elsewhere
hold
childhood
intend
interpretation
planet
device
disorder
capacity
best
exercise
new
efficiency
comedy
telephone
benefit
whereas
grab
labor
bottom
Catholic
turn
feel
anger
involved
universe
adapt
scared
pray
believe
tiny
moreover
method
attention
phone
safe
watch
before
assignment
absorb
awareness
lap
truly
coverage
episode
humor
killing
because
helicopter
chart
developing
resident
coffee
golden
information
salt
carrier
adjust
dance
manage
demand
emotional
hunter
analysis
component
young
down
kiss
victim
gap
repeatedly
remind
drama
prayer
used
plot
approval
deer
quit
transportation
flee
Spanish
blade
cry
ought
dominate
official
river
slowly
responsible
process
construction
interesting
successfully
perception
animal
international
tooth
seriously
star
weapon
primarily
rare
energy
number
fortune
inside
crowd
wonder
compare
athlete
dirty
experiment
police
protest
lead
violence
adventure
direct
gene
red
medicine
tall
fly
mutual
sweet
must
print
football
shelter
risk
negotiation
rate
tent
count
exception
Iraqi
birth
troop
competition
weekend
implication
room
prime
working
match
bread
Jew
save
evening
comparison
aide
taxpayer
actual
body
budget
engineering
string
signal
pure
contrast
king
maker
temporary
dining
thought
corporate
sit
attend
increased
favor
skill
ring
story
wear
impact
either
retain
bill
executive
hey
top
acknowledge
statement
left
sad
muscle
register
succeed
baseball
refuse
general
progress
etc
gender
degree
yield
senior
profile
sense
ceremony
advocate
clothing
occupation
flame
listen
pressure
react
respond
leaf
marry
text
around
Indian
household
distinguish
please
normal
controversial
alter
let
organization
prove
shadow
lie
affair
should
would
electronic
function
mountain
their
cover
largely
wedding
okay
sufficient
extra
running
rise
preference
anyway
psychology
Democrat
apparently
frequently
critical
reduce
honest
park
naturally
quarter
stake
worry
tough
occur
increasing
value
helpful
sex
worker
girlfriend
corporation
zone
sir
trial
specific
painting
effective
false
external
level
grade
certainly
loud
century
meet
join
ice
television
snow
husband
|
/src/test/resources/ioset/58i.r
|
permissive
|
OzGhost/qpro
|
R
| false
| false
| 7,242
|
r
|
testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.31584307392677e+77, 9.53818252170339e+295, 1.22810536108214e+146, 2.87905896347792e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(10L, 3L)))
result <- do.call(CNull:::communities_individual_based_sampling_alpha,testlist)
str(result)
|
/CNull/inst/testfiles/communities_individual_based_sampling_alpha/AFL_communities_individual_based_sampling_alpha/communities_individual_based_sampling_alpha_valgrind_files/1615779075-test.R
|
no_license
|
akhikolla/updatedatatype-list2
|
R
| false
| false
| 348
|
r
|
data.clean1<-data.clean[data.clean$AmbiguityCondition!='No Ambiguity',]
data.clean1$AmbiguityCondition<-droplevels(data.clean1$AmbiguityCondition)
#####Analysis 1#####
analysis_1<-data.clean1[data.clean1$AmbiguityCondition!='75% Positive Ambiguity' & data.clean1$AmbiguityCondition!='75% Negative Ambiguity',]
analysis_1$AmbiguityCondition<-droplevels(analysis_1$AmbiguityCondition)
table1<-table(analysis_1$socialCond,analysis_1$Ambiguity)
chisq.test(table1)
#####Analysis 2#####
#positive
analysis_2<-data.clean1[data.clean1$AmbiguityCondition!='75% Positive Ambiguity' & data.clean1$AmbiguityCondition!='75% Negative Ambiguity' & data.clean1$socialCond=='helpful',]
analysis_2$AmbiguityCondition<-droplevels(analysis_2$AmbiguityCondition)
table2<-table(analysis_2$Ambiguity)
t2<-chisq.test(table2)
p2<-t2$p.value
#neutral
analysis_3<-data.clean1[data.clean1$AmbiguityCondition!='75% Positive Ambiguity' & data.clean1$AmbiguityCondition!='75% Negative Ambiguity' & data.clean1$socialCond=='neutral',]
analysis_3$AmbiguityCondition<-droplevels(analysis_3$AmbiguityCondition)
table3<-table(analysis_3$Ambiguity)
t3<-chisq.test(table3)
p3<-t3$p.value
#malic
analysis_4<-data.clean1[data.clean1$AmbiguityCondition!='75% Positive Ambiguity' & data.clean1$AmbiguityCondition!='75% Negative Ambiguity' & data.clean1$socialCond=='malic',]
analysis_4$AmbiguityCondition<-droplevels(analysis_4$AmbiguityCondition)
table4<-table(analysis_4$Ambiguity)
t4<-chisq.test(table4)
p4<-t4$p.value
p.adjust(p=c(p2,p3,p4),method='holm')
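# Aside (illustrative comment, not part of the original analysis): Holm's
# method sorts the m p-values ascending, multiplies the i-th smallest by
# (m - i + 1), takes a running maximum, and caps each value at 1. For the
# three tests above (m = 3) a toy input behaves like this:
#   p.adjust(c(0.01, 0.04, 0.03), method = 'holm')
# returns c(0.03, 0.06, 0.06): 0.01*3 = 0.03, then 0.03*2 = 0.06, and the
# running maximum carries 0.06 over the final 0.04*1.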
#####Analysis 3##### Are the neutral and negative conditions showing the same level of ambiguity aversion? YES
#neutral and negative combined
analysis_3<-data.clean1[data.clean1$AmbiguityCondition!='75% Positive Ambiguity' & data.clean1$AmbiguityCondition!='75% Negative Ambiguity' & data.clean1$socialCond!='helpful',]
analysis_3$AmbiguityCondition<-droplevels(analysis_3$AmbiguityCondition)
analysis_3$socialCond<-droplevels(analysis_3$socialCond)
table3<-table(analysis_3$socialCond,analysis_3$Ambiguity)
t3<-chisq.test(table3)
#####Analysis 4##### Does Ambiguity impact social condition?
#neutral
analysis_4b<-data.clean1[data.clean1$AmbiguityCondition!='75% Positive Ambiguity' & data.clean1$AmbiguityCondition!='75% Negative Ambiguity' & data.clean1$socialCond=='neutral',]
analysis_4b$AmbiguityCondition<-droplevels(analysis_4b$AmbiguityCondition)
analysis_4b$socialCond<-droplevels(analysis_4b$socialCond)
table4b<-table(analysis_4b$AmbiguityCondition,analysis_4b$Ambiguity)
table4b
t4b<-chisq.test(table4b)
#malic
analysis_4c<-data.clean1[data.clean1$AmbiguityCondition!='75% Positive Ambiguity' & data.clean1$AmbiguityCondition!='75% Negative Ambiguity' & data.clean1$socialCond=='malic',]
analysis_4c$AmbiguityCondition<-droplevels(analysis_4c$AmbiguityCondition)
analysis_4c$socialCond<-droplevels(analysis_4c$socialCond)
table4c<-table(analysis_4c$AmbiguityCondition,analysis_4c$Ambiguity)
table4c
t4c<-chisq.test(table4c)
t4c
#####Analysis 5#####
#positive
analysis_5<-data.clean1[data.clean1$AmbiguityCondition!='75% Positive Ambiguity' & data.clean1$AmbiguityCondition!='75% Negative Ambiguity' & data.clean1$socialCond=='helpful',]
analysis_5$AmbiguityCondition<-droplevels(analysis_5$AmbiguityCondition)
analysis_5$socialCond<-droplevels(analysis_5$socialCond)
table5<-table(analysis_5$Ambiguity,analysis_5$AmbiguityCondition)
Ordinal.Test(17,14,12,8,5,1)
#####Analysis 6######
social.cond<-rep(0,nrow(data.clean1))
social.cond[data.clean1$socialCond=="helpful"]=1
social.cond[data.clean1$socialCond=="malic"]=2
social.cond[data.clean1$socialCond=="neutral"]=3
ambig.cond<-rep(0,nrow(data.clean1))
ambig.cond[data.clean1$AmbiguityCondition=="50% Neutral Ambiguity"]=1
ambig.cond[data.clean1$AmbiguityCondition=="75% Positive Ambiguity"]=2
ambig.cond[data.clean1$AmbiguityCondition=="75% Neutral Ambiguity"]=3
ambig.cond[data.clean1$AmbiguityCondition=="75% Negative Ambiguity"]=4
ambig.cond[data.clean1$AmbiguityCondition=="Full Ambiguity"]=5
x.obs<-rep(0,nrow(data.clean1))
x.obs[ambig.cond==1]<-2
x.obs[ambig.cond==2]<-2
x.obs[ambig.cond==3]<-1
x.obs[ambig.cond==4]<-0
x.obs[ambig.cond==5]<-0
n.obs<-rep(0,nrow(data.clean1))
n.obs[ambig.cond==1]<-4
n.obs[ambig.cond==2]<-2
n.obs[ambig.cond==3]<-2
n.obs[ambig.cond==4]<-2
n.obs[ambig.cond==5]<-0
n.unobs<-rep(0,nrow(data.clean1))
n.unobs[ambig.cond==1]<-4
n.unobs[ambig.cond==2]<-6
n.unobs[ambig.cond==3]<-6
n.unobs[ambig.cond==4]<-6
n.unobs[ambig.cond==5]<-8
data.clean1$unobs<-n.unobs
data.clean1$obs<-x.obs
data.clean1$infer<-data.clean1$obs+data.clean1$BlueEstimate
analysis_6<-data.clean1[(data.clean1$AmbiguityCondition=='50% Neutral Ambiguity' | data.clean1$AmbiguityCondition=='Full Ambiguity' | data.clean1$AmbiguityCondition=='75% Neutral Ambiguity') & data.clean1$socialCond=='helpful',]
analysis_6$AmbiguityCondition<-droplevels(analysis_6$AmbiguityCondition)
analysis_6$socialCond<-droplevels(analysis_6$socialCond)
t.test(analysis_6$infer,mu=4)
analysis_6b<-data.clean1[(data.clean1$AmbiguityCondition=='50% Neutral Ambiguity' | data.clean1$AmbiguityCondition=='Full Ambiguity' | data.clean1$AmbiguityCondition=='75% Neutral Ambiguity') & data.clean1$socialCond=='neutral',]
analysis_6b$AmbiguityCondition<-droplevels(analysis_6b$AmbiguityCondition)
analysis_6b$socialCond<-droplevels(analysis_6b$socialCond)
t.test(analysis_6b$infer,mu=4)
analysis_6c<-data.clean1[(data.clean1$AmbiguityCondition=='50% Neutral Ambiguity' | data.clean1$AmbiguityCondition=='Full Ambiguity' | data.clean1$AmbiguityCondition=='75% Neutral Ambiguity') & data.clean1$socialCond=='malic',]
analysis_6c$AmbiguityCondition<-droplevels(analysis_6c$AmbiguityCondition)
analysis_6c$socialCond<-droplevels(analysis_6c$socialCond)
t.test(analysis_6c$infer,mu=4)
#####Analysis 7########
analysis_7a<-data.clean1[(data.clean1$AmbiguityCondition=='50% Neutral Ambiguity' | data.clean1$AmbiguityCondition=='Full Ambiguity') & data.clean1$socialCond=='helpful',]
analysis_7a$AmbiguityCondition<-droplevels(analysis_7a$AmbiguityCondition)
analysis_7a$socialCond<-droplevels(analysis_7a$socialCond)
table_7a<-table(analysis_7a$Ambiguity,analysis_7a$AmbiguityCondition)
t_7a<-chisq.test(table_7a,simulate.p.value = TRUE)
t_7a
test_7b<-t.test(infer~AmbiguityCondition, data=analysis_7a)
#####Analysis 8######## Does Distribution Information matter in positive?
analysis_8<-data.clean1[(data.clean1$AmbiguityCondition=='75% Positive Ambiguity' | data.clean1$AmbiguityCondition=='75% Negative Ambiguity' | data.clean1$AmbiguityCondition=='75% Neutral Ambiguity') & data.clean1$socialCond=='helpful',]
analysis_8$AmbiguityCondition<-droplevels(analysis_8$AmbiguityCondition)
analysis_8$socialCond<-droplevels(analysis_8$socialCond)
table8<-table(analysis_8$Ambiguity,analysis_8$AmbiguityCondition)
t8<-chisq.test(table8)
analysis_8b<-data.clean1[(data.clean1$AmbiguityCondition=='75% Positive Ambiguity' | data.clean1$AmbiguityCondition=='75% Negative Ambiguity' | data.clean1$AmbiguityCondition=='75% Neutral Ambiguity') & data.clean1$socialCond=='malic',]
analysis_8b$AmbiguityCondition<-droplevels(analysis_8b$AmbiguityCondition)
analysis_8b$socialCond<-droplevels(analysis_8b$socialCond)
table8b<-table(analysis_8b$Ambiguity,analysis_8b$AmbiguityCondition)
t8b<-chisq.test(table8b)
analysis_8c<-data.clean1[(data.clean1$AmbiguityCondition=='75% Positive Ambiguity' | data.clean1$AmbiguityCondition=='75% Negative Ambiguity' | data.clean1$AmbiguityCondition=='75% Neutral Ambiguity') & data.clean1$socialCond=='neutral',]
analysis_8c$AmbiguityCondition<-droplevels(analysis_8c$AmbiguityCondition)
analysis_8c$socialCond<-droplevels(analysis_8c$socialCond)
table8c<-table(analysis_8c$Ambiguity,analysis_8c$AmbiguityCondition)
t8c<-chisq.test(table8c)
Ordinal.Test(9,11,13,19,14,7)
######Table 9###### #Do inferred numbers increase with social condition?
analysis_9a<-data.clean1[(data.clean1$AmbiguityCondition=='75% Neutral Ambiguity' | data.clean1$AmbiguityCondition=='75% Positive Ambiguity' | data.clean1$AmbiguityCondition=='75% Negative Ambiguity') & c(data.clean1$socialCond=='helpful'),]
analysis_9a$AmbiguityCondition<-droplevels(analysis_9a$AmbiguityCondition)
analysis_9a$socialCond<-droplevels(analysis_9a$socialCond)
test_9a<-lm(infer~AmbiguityCondition, data=analysis_9a)
analysis_9b<-data.clean1[(data.clean1$AmbiguityCondition=='75% Neutral Ambiguity' | data.clean1$AmbiguityCondition=='75% Positive Ambiguity' | data.clean1$AmbiguityCondition=='75% Negative Ambiguity') & c(data.clean1$socialCond=='neutral'),]
analysis_9b$AmbiguityCondition<-droplevels(analysis_9b$AmbiguityCondition)
analysis_9b$socialCond<-droplevels(analysis_9b$socialCond)
test_9b<-lm(infer~AmbiguityCondition, data=analysis_9b)
analysis_9c<-data.clean1[(data.clean1$AmbiguityCondition=='75% Neutral Ambiguity' | data.clean1$AmbiguityCondition=='75% Positive Ambiguity' | data.clean1$AmbiguityCondition=='75% Negative Ambiguity') & c(data.clean1$socialCond=='malic'),]
analysis_9c$AmbiguityCondition<-droplevels(analysis_9c$AmbiguityCondition)
analysis_9c$socialCond<-droplevels(analysis_9c$socialCond)
test_9c<-lm(infer~AmbiguityCondition, data=analysis_9c)
###Test 10# Effect of cover story
analysis_10a<-data.clean1[(data.clean1$AmbiguityCondition=='75% Neutral Ambiguity' | data.clean1$AmbiguityCondition=='75% Positive Ambiguity' | data.clean1$AmbiguityCondition=='75% Negative Ambiguity') & c(data.clean1$socialCond=='helpful' | data.clean1$socialCond=='neutral'),]
analysis_10a$AmbiguityCondition<-droplevels(analysis_10a$AmbiguityCondition)
analysis_10a$socialCond<-droplevels(analysis_10a$socialCond)
summary(analysis_10a)
t.test(infer~socialCond, data=analysis_10a)
analysis_10b<-data.clean1[(data.clean1$AmbiguityCondition=='75% Neutral Ambiguity' | data.clean1$AmbiguityCondition=='75% Positive Ambiguity' | data.clean1$AmbiguityCondition=='75% Negative Ambiguity') & c(data.clean1$socialCond=='helpful' | data.clean1$socialCond=='malic'),]
analysis_10b$AmbiguityCondition<-droplevels(analysis_10b$AmbiguityCondition)
analysis_10b$socialCond<-droplevels(analysis_10b$socialCond)
summary(analysis_10b)
t.test(infer~socialCond, data=analysis_10b)
|
/AnalyseData.R
|
no_license
|
lauken13/Ambiguity
|
R
| false
| false
| 10,155
|
r
|
\name{HWPosterior}
\alias{HWPosterior}
\title{
Calculation of posterior probabilities and Bayes factors for
Hardy-Weinberg tests at X-chromosomal variants.
}
\description{
Function \code{HWPosterior} calculates posterior probabilities and Bayes
factors for tests for Hardy-Weinberg equilibrium of autosomal and X-chromosomal
variants.
}
\usage{
HWPosterior(X, verbose = TRUE, prior.af = c(0.5,0.5), prior.gf =
c(0.333,0.333,0.333), x.linked = FALSE, precision = 0.05)
}
\arguments{
\item{X}{A vector of genotype counts. The order c(A,B,AA,AB,BB) is
assumed. Differently ordered vectors can be supplied but then elements must
be labeled by their genotype}
\item{verbose}{prints results if \code{verbose = TRUE}}
\item{prior.af}{Beta prior parameters for male and female allele frequencies}
\item{prior.gf}{Dirichlet prior parameters for female genotype
frequencies}
\item{x.linked}{logical indicating whether the variant is autosomal or
X-chromosomal}
\item{precision}{precision parameter for marginal likelihoods that
require numeric integration}
}
\details{For X-chromosomal variants, four possible models are considered, and the posterior
probabilities and Bayes factors for each model are calculated.
For autosomal variants, ten possible scenarios are considered, and the
posterior probabilities for all models are calculated.
In general, default Dirichlet priors are used for genotype
frequencies, and beta priors are used for allele frequencies.
}
\value{
For X-chromosomal variants, a matrix with posterior probabilities and
Bayes factors will be produced.
For autosomal variants, a vector of posterior probabilities is produced.
}
\references{
Puig, X., Ginebra, J. and Graffelman, J. (2017) A Bayesian test for
Hardy-Weinberg equilibrium of bi-allelic X-chromosomal markers. To
appear in Heredity.
}
\author{
Xavi Puig \email{xavier.puig@upc.edu} and
Jan Graffelman \email{jan.graffelman@upc.edu}
}
\seealso{ \code{\link{HWChisq}}, \code{\link{HWExact}}, \code{\link{HWExactStats}} }
\examples{
#
# An X-chromosomal example
#
x <- c(A=43,B=13,AA=26,AB=19,BB=3)
out <- HWPosterior(x,verbose=TRUE,x.linked=TRUE)
#
# An autosomal example
#
data(JPTsnps)
post.prob <- HWPosterior(JPTsnps[1,],x.linked=FALSE)
}
\keyword{htest}
|
/HardyWeinberg/man/HWPosterior.Rd
|
no_license
|
akhikolla/InformationHouse
|
R
| false
| false
| 2,369
|
rd
|
library(methods)
library(magrittr)
tcga_path <- "/home/cliu18/liucj/projects/6.autophagy/TCGA"
expr_path <- "/home/cliu18/liucj/projects/6.autophagy/02_autophagy_expr/"
expr_path_a <- file.path(expr_path, "03_a_gene_expr")
load(file = file.path(expr_path_a, ".rda_03_h_coca.rda"))
# expr
expr_matrix %>% t() -> expr_matrix_t
factoextra::hcut(expr_matrix_t, k = 4, hc_func = 'hclust', hc_method = 'ward.D', hc_metric = 'pearson', stand = T) -> expr_hcut
cutree(expr_hcut, k = 4) -> expr_group
expr_group %>%
tibble::enframe(name = "sample", value = "group") %>%
dplyr::inner_join(sample_info, by = "sample") %>%
dplyr::distinct(sample, cancer_types, .keep_all = T) %>%
ggplot(aes(x = as.factor(group), y = cancer_types)) +
geom_tile()
# one-hot encode a cluster label (1-4) into four indicator columns
fn_encode <- function(.x){
.d <- tibble::tibble()
if (.x == 1) {.d <- tibble::tibble(a = 1, b = 0, c = 0, d = 0)}
if (.x == 2) {.d <- tibble::tibble(a = 0, b = 1, c = 0, d = 0)}
if (.x == 3) {.d <- tibble::tibble(a = 0, b = 0, c = 1, d = 0)}
if (.x == 4) {.d <- tibble::tibble(a = 0, b = 0, c = 0, d = 1)}
.d
}
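# Sketch of an equivalent vectorized encoder; fn_encode_vec is our name, not
# part of the original pipeline, and it assumes labels are exactly 1:4.
fn_encode_vec <- function(.x){
v <- as.integer(seq_len(4) == .x)  # one-hot vector for label .x
tibble::tibble(a = v[1], b = v[2], c = v[3], d = v[4])
}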
expr_group %>%
tibble::enframe(name = "sample") %>%
dplyr::mutate(encode = purrr::map(.x = value, .f = fn_encode)) %>%
tidyr::unnest() %>%
dplyr::select(-value) -> expr_encode
cnv_matrix %>% t() -> cnv_matrix_t
factoextra::hcut(cnv_matrix_t, k = 4, hc_func = 'hclust', hc_method = 'ward.D', hc_metric = 'pearson', stand = T) -> cnv_hcut
cutree(cnv_hcut, k = 4) -> cnv_group
cnv_group %>%
tibble::enframe(name = "sample", value = "group") %>%
dplyr::inner_join(sample_info, by = "sample") %>%
dplyr::distinct(sample, cancer_types, .keep_all = T) %>%
ggplot(aes(x = as.factor(group), y = cancer_types)) +
geom_tile()
cnv_group %>%
tibble::enframe(name = "sample") %>%
dplyr::mutate(encode = purrr::map(.x = value, .f = fn_encode)) %>%
tidyr::unnest() %>%
dplyr::select(-value) -> cnv_encode
methy_matrix %>% t() -> methy_matrix_t
factoextra::hcut(methy_matrix_t, k = 4, hc_func = 'hclust', hc_method = 'ward.D', hc_metric = 'pearson', stand = T) -> methy_hcut
cutree(methy_hcut, k = 4) -> methy_group
methy_group %>%
tibble::enframe(name = "sample", value = "group") %>%
dplyr::inner_join(sample_info, by = "sample") %>%
dplyr::distinct(sample, cancer_types, .keep_all = T) %>%
ggplot(aes(x = as.factor(group), y = cancer_types)) +
geom_tile()
methy_group %>%
tibble::enframe(name = "sample") %>%
dplyr::mutate(encode = purrr::map(.x = value, .f = fn_encode)) %>%
tidyr::unnest() %>%
dplyr::select(-value) -> methy_encode
list(expr_encode, cnv_encode, methy_encode) %>%
purrr::reduce(.f = function(x, y){x %>% dplyr::inner_join(y, by = "sample")}, .init = tibble::tibble(sample = common_names[-1])) %>%
tidyr::gather(key = type, value = v, -sample) %>%
tidyr::spread(key = sample, value = v) %>%
dplyr::select(-type) -> cc_d
library(ConsensusClusterPlus)
ConsensusClusterPlus(cc_d %>% as.matrix(), maxK=20, reps=500,pItem=0.8,pFeature=1, title="ConsensusClusterplus4", clusterAlg="hc",distance="pearson",seed=1262118388.71279, plot = "pdf") -> cc_res
cc_res %>% readr::write_rds(path = file.path(expr_path_a, ".rds_03_h_coca_cc_cc_res.rds.gz"), compress = "gz")
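# Optional follow-up (sketch): item- and cluster-level consensus summaries for
# the runs above, via calcICL() from the same ConsensusClusterPlus package.
# icl <- ConsensusClusterPlus::calcICL(cc_res, title = "ConsensusClusterplus4", plot = "pdf")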
fn_best_clust <- function(k, d){
cc <- d[[k]][[3]]
l <- length(cc)
cm <- d[[k]][[1]]
tibble::tibble(
sample = names(cc),
name = paste("V", c(1:l), sep = ""),
group = cc) %>%
dplyr::arrange(group) -> rank_sample
cm %>%
as.data.frame() %>%
tibble::as_tibble() %>%
tibble::add_column(
sample1 = paste("V", c(1:l), sep = ""), .before = 1) %>%
tidyr::gather(key = sample2, value = sim, -sample1) -> plot_ready
plot_ready %>%
ggplot(aes(x= sample1, y = sample2, fill = sim)) +
geom_tile() +
scale_x_discrete(limits = rank_sample$name) +
scale_y_discrete(limits = rank_sample$name) +
scale_fill_gradient(high = "#1DEDF2", low = "#010235") +
theme(
axis.text = element_blank(),
axis.title = element_blank(),
axis.ticks = element_blank(),
legend.position = "none"
) -> p
ggsave(filename = paste("cc4_c",k, ".tif", sep = ""), plot = p, device = "tiff", width = 8, height = 8, path = "/extraspace/liucj/projects/6.autophagy/02_autophagy_expr/03_a_gene_expr")
}
cc_res -> d
cluster <- multidplyr::create_cluster(19)
tibble::tibble(k = 2:20) %>%
multidplyr::partition(cluster = cluster) %>%
multidplyr::cluster_library("magrittr") %>%
multidplyr::cluster_library("ggplot2") %>%
multidplyr::cluster_assign_value("fn_best_clust", fn_best_clust) %>%
multidplyr::cluster_assign_value("d", d) %>%
dplyr::mutate(a = purrr::walk(.x = k, .f = fn_best_clust, d = d))
parallel::stopCluster(cluster)
clinical <- readr::read_rds(path = file.path(tcga_path,"pancan34_clinical.rds.gz"))
clinical_simplified <-
clinical %>%
dplyr::mutate(succinct = purrr::map(.x = clinical, function(.x ){.x %>% dplyr::select(barcode, os_days, os_status)})) %>%
dplyr::select(-clinical) %>%
tidyr::unnest() %>%
dplyr::rename(sample = barcode)
clinical_stage <-
readr::read_rds(path = file.path(tcga_path,"pancan34_clinical_stage.rds.gz")) %>%
dplyr::filter(n >= 40) %>%
dplyr::select(-n) %>%
tidyr::unnest() %>%
dplyr::rename(sample = barcode)
sample_info <- readr::read_rds(path = file.path(tcga_path, "sample_info.rds.gz"))
d[[4]][[3]] %>% tibble::enframe(name = "sample", value = "group") -> .d3
.d3 %>%
dplyr::inner_join(sample_info, by = "sample") %>%
dplyr::distinct(sample, cancer_types, .keep_all = T) -> .d3_sample
# --------------------- draw merged heatmap-----------------
.d3_sample %>% dplyr::arrange(group) -> .d3_sample_sort
expr_matrix[,.d3_sample_sort$sample] -> expr_matrix_sort
cnv_matrix[, .d3_sample_sort$sample] %>% apply(1, scale) %>% t() -> cnv_matrix_sort
methy_matrix[, .d3_sample_sort$sample] %>% apply(1, scale) %>% t() -> methy_matrix_sort
cluster_col <- c("#FF0000", "#191970", "#98F5FF", "#8B008B")
names(cluster_col) <- c(1,2,3,4)
library(ComplexHeatmap)
library(circlize)
ha = HeatmapAnnotation(
df = data.frame(cluster = .d3_sample_sort$group, cancer = .d3_sample_sort$cancer_types),
gap = unit(c(4,2), "mm"),
col = list(
cancer = pcc,
cluster = cluster_col
)
)
fn_draw_heatmap <- function(.x, .y, ha = ha){
Heatmap(
.y,
col = colorRamp2(c(-2, 0, 2), c("blue", "white", "red"), space = "RGB"),
name = "expression",
# annotation
top_annotation = ha,
# show row and columns
show_row_names = F,
show_column_names = FALSE,
show_row_dend = F,
show_column_dend = F,
clustering_distance_rows = "pearson",
clustering_method_rows = "ward.D",
cluster_columns = F,
row_title = .x,
row_title_rot = 90
) -> expr_ht
.filename <- stringr::str_c("coca_heatmap", .x, "pdf", sep = " ") %>% stringr::str_replace_all(pattern = " ", replacement = ".")  # match the pdf() device below
.pdf_file <- file.path("/home/cliu18/liucj/projects/6.autophagy/02_autophagy_expr//03_a_gene_expr", .filename)
pdf(.pdf_file, width=10, height = 5)
draw(expr_ht, show_heatmap_legend = F, show_annotation_legend = F)
decorate_annotation(
"cancer",
{grid.text("cancer", unit(1, "npc") + unit(2, "mm"), 0.5, default.units = "npc", just = "left", gp = gpar(fontsize = 12))}
)
decorate_annotation(
"cluster",
{grid.text("cluster", unit(1, "npc") + unit(2, "mm"), 0.5, default.units = "npc", just = "left", gp = gpar(fontsize = 12))}
)
dev.off()
}
fn_draw_heatmap(.x = "mRNA Expression", .y = expr_matrix_sort, ha = ha)
fn_draw_heatmap(.x = "Copy Number Variation", .y = cnv_matrix_sort, ha = ha)
fn_draw_heatmap(.x = "Methylation", .y = methy_matrix_sort, ha = ha)
c(6, 8, 1, 1) %>%
tibble::enframe() %>%
ggplot(aes(x = 1, y = value, fill = as.factor(name))) +
geom_bar(stat = 'identity', width = 0.02) +
# geom_text(label = c("C4", "C3", "C2", "C1"), position = position_stack(vjust = 0.5, )) +
annotate("text", x = 1.5, y = c(0.5, 1.5, 6, 13), label = c("C4 (492)", "C3 (543)", "C2 (4086)", "C1 (3019)")) +
scale_y_discrete(
position = "right"
) +
scale_fill_manual(
values = unname(cluster_col),
guide = F) +
theme(
axis.title = element_blank(),
axis.text = element_blank(),
# axis.text.y = element_blank(),
# axis.text.x = element_text(
# size = 16,
# face = "bold"
# ),
axis.ticks = element_blank(),
panel.background = element_blank(),
panel.border = element_blank(),
plot.background = element_blank(),
plot.margin = unit(c(0,0,0,0), "mm")
) +
labs(x = NULL, y = NULL) -> p
ggsave(filename = "coca_cc_legend.pdf", device = "pdf", plot = p, path = expr_path_a)
.d3 %>%
ggplot(aes(x = as.factor(group), y = 1, fill = as.factor(group))) +
geom_tile() +
scale_x_discrete(
labels = c("C1", "C2", "C3", "C4"),
position = "top") +
scale_fill_manual(
values = unname(cluster_col),
guide = F) +
theme(
axis.title = element_blank(),
# axis.text = element_blank(),
axis.text.y = element_blank(),
axis.text.x = element_text(
size = 16,
face = "bold"
),
axis.ticks = element_blank(),
panel.background = element_blank(),
panel.border = element_blank(),
plot.background = element_blank(),
plot.margin = unit(c(0,0,0,0), "mm")
) +
labs(x = NULL, y = NULL) +
coord_fixed(ratio = 0.2) -> p
ggsave(filename = "coca_anno.pdf", device = "pdf", plot = p, path = expr_path_a, width = 4, height = 1)
.d3_sample %>%
dplyr::group_by(cancer_types, group) %>%
dplyr::count() %>%
dplyr::ungroup() %>%
dplyr::group_by(cancer_types) %>%
dplyr::mutate(freq = n / sum(n)) %>%
dplyr::ungroup() %>%
ggplot(aes(x = as.factor(group), y = cancer_types, fill = freq)) +
geom_tile() +
geom_text(aes(label = n)) +
scale_fill_gradient(
name = "Percent (%)",
limit = c(0, 1),
high = "red",
low = "white",
na.value = "white"
) +
theme(
axis.title = element_blank(),
axis.text.x = element_blank(),
axis.text.y = element_text(
color = 'black'
),
axis.ticks = element_blank(),
panel.background = element_blank(),
panel.border = element_rect(
color = "black",
size = 0.2,
fill = NA
)
) +
guides(
fill = guide_legend(
title = "Percent (%)",
title.position = 'left',
title.theme = element_text(angle = 90, vjust = 2, size = 10),
reverse = T,
keywidth = 0.6,
keyheight = 0.7
)
) -> p
ggsave(filename = "coca_tile_subtype.pdf", device = "pdf", plot = p, path = expr_path_a, width = 5, height = 6)
.d3_sample %>%
dplyr::inner_join(clinical_stage, by = c("cancer_types", "sample")) %>%
dplyr::group_by(stage, group) %>%
ggplot(aes(x = as.factor(group), fill = stage)) +
geom_bar(stat = 'count')
.d3_sample %>%
dplyr::inner_join(clinical_stage, by = c("cancer_types", "sample")) %>%
dplyr::group_by(stage, group) %>%
dplyr::count() %>%
dplyr::ungroup() %>%
tidyr::spread(key = group, value = n) %>%
as.data.frame() -> .d3_sample_stage
rownames(.d3_sample_stage) <- .d3_sample_stage$stage
.d3_sample_stage[-1] -> .d3_sample_stage
# gplots::balloonplot(t(as.table(as.matrix(.d3_sample_stage))))
.d3_sample_stage %>% chisq.test()
.d3_sample_stage %>% dplyr::select(1,2) %>% chisq.test()
.d3_sample_stage %>% dplyr::select(1,3) %>% chisq.test()
.d3_sample_stage %>% dplyr::select(2, 3) %>% chisq.test()
clinical_simplified %>%
dplyr::inner_join(.d3_sample, by = c("cancer_types", "sample")) %>%
dplyr::mutate(os_status = dplyr::recode(os_status, "Dead" = 1, "Alive" = 0)) %>%
dplyr::filter(! is.na(os_days), os_days > 0) -> .d
.d_diff <- survival::survdiff(survival::Surv(os_days, os_status) ~ group, data = .d)
kmp <- 1 - pchisq(.d_diff$chisq, df = length(levels(as.factor(.d$group))) - 1)
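# Numerically safer equivalent of the line above (kmp_alt is our name, not in
# the original script): lower.tail = FALSE avoids the 1 - pchisq() subtraction,
# which loses precision for very small p-values.
kmp_alt <- pchisq(.d_diff$chisq, df = length(levels(as.factor(.d$group))) - 1, lower.tail = FALSE)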
fit_x <- survival::survfit(survival::Surv(os_days, os_status) ~ as.factor(group), data = .d , na.action = na.exclude)
survminer::ggsurvplot(fit_x, data = .d, pval=T, pval.method = T,
palette = unname(cluster_col),
xlab = "Survival in days",
ylab = 'Probability of survival',
legend.title = "COCA",
legend.labs = c("C1", "C2", "C3", "C4")) -> p_survival
ggsave(filename = "coca_all_survival.pdf", plot = p_survival$plot, device = 'pdf', path = expr_path_a, width = 6, height = 6)
comp_list <- list(c(1,3),c(2, 4), c(2,3), c(3,4))
.d %>%
ggpubr::ggboxplot(x = "group", y = "os_days", color = "group", palette = "jco") +
ggpubr::stat_compare_means(comparisons = comp_list, method = "t.test") +
ggpubr::stat_compare_means(method = "anova", label.y = 9000)
# nodes <-
# .d %>%
# dplyr::rename(cluster = group, group = cancer_types, name = sample) %>%
# dplyr::mutate(size = os_days / 30) %>%
# dplyr::mutate(size = dplyr::case_when(size < 1 ~ 1, size > 100 ~ 100, TRUE ~ round(size))) %>%
# dplyr::select(name, group, size, cluster) %>%
# as.data.frame() %>%
# head(100)
#
# inter <- d[[4]][[1]] %>% as.data.frame() %>% tibble::as_tibble()
# names(inter) <- 0: {length(.d3_sample$sample) - 1}
#
# inter %>%
# tibble::add_column(source = 0: {length(.d3_sample$sample) - 1}, .before = 1) %>%
# tidyr::gather(key = target, value = value, - source) %>%
# dplyr::filter(source != target) %>%
# dplyr::mutate(value = ifelse(value == 1, 1, 0)) %>%
# dplyr::mutate(target = as.numeric(target)) -> te
#
#
#
# edges <-
# te %>%
# dplyr::filter(source %in% 0:99, target %in% 0:99) %>%
# as.data.frame()
#
# networkD3::forceNetwork(Links = edges, Nodes = nodes , Source = "source", Target = "target", Value = "value", NodeID = "name", Nodesize = "size", Group = "group", opacity = 1) -> res_network
#
# res_network
# data(MisLinks, MisNodes)
# forceNetwork(Links = MisLinks, Nodes = MisNodes, Source = "source",
# Target = "target", Value = "value", NodeID = "name", Nodesize = "size",
# Group = "group", opacity = 1, zoom = T)
#
#
#
save.image(file = file.path(expr_path_a, ".rda_03_h_coca_cc.rda"))
load(file.path(expr_path_a, ".rda_03_h_coca_cc4.rda"))
/01.expr/03_h_coca_cc.R | permissive | zhangyupisa/autophagy-in-cancer | R | 14,298 bytes | r
#' add_row_annotation
#'
#' Adds annotation heatmaps for one or more qualitative or quantitative
#' annotations for each row of a main heatmap.
#' @param p \code{\link{Iheatmap-class}} object
#' @param annotation data.frame or object that can be converted to data frame
#' @param colors list of color palettes, with one color per annotation column
#' name
#' @param side side of plot on which to add row annotation
#' @param size relative size of each row annotation
#' @param buffer relative size of buffer between previous subplot and row
#' annotation
#' @param inner_buffer relative size of buffer between each annotation
#' @param layout layout properties for new x axis
#' @param show_colorbar logical indicator to show or hide colorbar
#'
#' @return \code{\link{Iheatmap-class}} object, which can be printed to generate
#' an interactive graphic
#' @export
#' @rdname add_row_annotation
#' @name add_row_annotation
#' @aliases add_row_annotation,Iheatmap-method
#' @author Alicia Schep
#' @seealso \code{\link{iheatmap}}, \code{\link{add_row_annotation}},
#' \code{\link{add_col_signal}}, \code{\link{add_col_groups}}
#' @examples
#'
#' mat <- matrix(rnorm(24), nrow = 6)
#' annotation <- data.frame(gender = c(rep("M", 3),rep("F",3)),
#' age = c(20,34,27,19,23,30))
#' hm <- iheatmap(mat) %>% add_row_annotation(annotation)
#'
#' # Print heatmap if interactive session
#' if (interactive()) hm
setMethod(add_row_annotation,
c(p = "Iheatmap"),
function(p,
annotation,
colors = NULL,
side = c("right","left"),
size = 0.05,
buffer = 0.015,
inner_buffer = buffer / 2,
layout = list(),
show_colorbar = TRUE){
side <- match.arg(side)
# Convert to data.frame
x <- as.data.frame(annotation)
for (i in seq_len(ncol(x))){
if (is.character(x[,i]) || is.factor(x[,i]) || is.logical(x[,i])){
if (!is.null(colors) && colnames(x)[i] %in% names(colors)){
tmp_colors <- colors[[colnames(x)[i]]]
} else{
tmp_colors <- pick_discrete_colors(as.factor(x[,i]), p)
}
p <- add_row_groups(p, x[,i],
name = colnames(x)[i],
title = colnames(x)[i],
colors = tmp_colors,
side = side,
size = size,
buffer = if (i == 1)
buffer else inner_buffer,
layout = layout,
show_title = TRUE)
} else if (is.numeric(x[,i])){
if (!is.null(colors) && colnames(x)[i] %in% names(colors)){
tmp_colors <- colors[[colnames(x)[i]]]
} else{
tmp_colors <- pick_continuous_colors(zmid = 0,
zmin = min(x[,i]),
zmax = max(x[,i]), p)
}
p <- add_row_signal(p,
x[,i],
name = colnames(x)[i],
colors = tmp_colors,
side = side,
size = size,
buffer = if (i == 1)
buffer else inner_buffer,
layout = layout,
show_title = TRUE,
show_colorbar = show_colorbar)
} else{
stop("Input should be character, factor, logical, or numeric")
}
}
return(p)
})
#' add_col_annotation
#'
#' Adds annotation heatmaps for one or more qualitative or quantitative
#' annotations for each column of a main heatmap.
#' @param p \code{\link{Iheatmap-class}} object
#' @param annotation data.frame or object that can be converted to data frame
#' @param colors list of color palettes, with one color per annotation column
#' name
#' @param side side of plot on which to add column annotation
#' @param size relative size of each column annotation
#' @param buffer relative size of buffer between previous subplot and column
#' annotation
#' @param inner_buffer relative size of buffer between each annotation
#' @param layout layout properties for new y axis
#' @param show_colorbar logical indicator to show or hide colorbar
#'
#' @return \code{\link{Iheatmap-class}} object, which can be printed to generate
#' an interactive graphic
#' @export
#' @rdname add_col_annotation
#' @name add_col_annotation
#' @aliases add_col_annotation,Iheatmap-method
#' @seealso \code{\link{iheatmap}}, \code{\link{add_row_annotation}},
#' \code{\link{add_col_signal}}, \code{\link{add_col_groups}}
#' @author Alicia Schep
#' @examples
#'
#' mat <- matrix(rnorm(24), ncol = 6)
#' annotation <- data.frame(gender = c(rep("M", 3),rep("F",3)),
#' age = c(20,34,27,19,23,30))
#' hm <- iheatmap(mat) %>% add_col_annotation(annotation)
#'
#' # Print heatmap if interactive session
#' if (interactive()) hm
setMethod(add_col_annotation,
c(p = "Iheatmap"),
function(p,
annotation,
colors = NULL,
side = c("top","bottom"),
size = 0.05,
buffer = 0.015,
inner_buffer = buffer / 2,
layout = list(),
show_colorbar = TRUE){
side <- match.arg(side)
# Convert to data.frame
x <- as.data.frame(annotation)
for (i in seq_len(ncol(x))){
if (is.character(x[,i]) || is.factor(x[,i]) || is.logical(x[,i])){
if (!is.null(colors) && colnames(x)[i] %in% names(colors)){
tmp_colors <- colors[[colnames(x)[i]]]
} else{
tmp_colors <- pick_discrete_colors(as.factor(x[,i]), p)
}
p <- add_col_groups(p,
x[,i],
name = colnames(x)[i],
title = colnames(x)[i],
colors = tmp_colors,
side = side,
size = size,
buffer = if (i == 1)
buffer else inner_buffer,
layout = layout,
show_title = TRUE)
} else if (is.numeric(x[,i])){
if (!is.null(colors) && colnames(x)[i] %in% names(colors)){
tmp_colors <- colors[[colnames(x)[i]]]
} else{
tmp_colors <- pick_continuous_colors(zmid = 0,
zmin = min(x[,i]),
zmax = max(x[,i]), p)
}
p <- add_col_signal(p,
x[,i],
name = colnames(x)[i],
colors = tmp_colors,
side = side,
size = size,
buffer = if (i == 1)
buffer else inner_buffer,
layout = layout,
show_title = TRUE,
show_colorbar = show_colorbar)
} else{
stop("Input should be character, factor, logical, or numeric")
}
}
return(p)
})
|
/R/annotations.R
|
permissive
|
fboehm/iheatmapr
|
R
| false
| false
| 8,197
|
r
|
\name{USstocks-package}
\alias{USstocks-package}
\alias{USstocks}
\docType{package}
\title{
What the package does (short line)
~~ USstocks ~~
}
\description{
More about what it does (maybe more than one line)
~~ A concise (1-5 lines) description of the package ~~
}
\details{
\tabular{ll}{
Package: \tab USstocks\cr
Type: \tab Package\cr
Version: \tab 1.0\cr
Date: \tab 2015-04-02\cr
License: \tab What license is it under?\cr
}
~~ An overview of how to use the package, including the most important functions ~~
}
\author{
Who wrote it
Maintainer: Who to complain to <yourfault@somewhere.net>
~~ The author and/or maintainer of the package ~~
}
\references{
~~ Literature or other references for background information ~~
}
~~ Optionally other standard keywords, one per line, from file KEYWORDS in the R documentation directory ~~
\keyword{ package }
\seealso{
~~ Optional links to other man pages, e.g. ~~
~~ \code{\link[<pkg>:<pkg>-package]{<pkg>}} ~~
}
\examples{
~~ simple examples of the most important functions ~~
}
|
/man/USstocks-package.Rd
|
no_license
|
andyyao95/USstocks
|
R
| false
| false
| 1,026
|
rd
|
# Part 2 Wk 5
# Pre-Lab
film <- FilmData
View(film)
# Show how many films are in each group
table(film$Rating)
# Calculate avg film budget of each group
aggregate(Budget~Rating,film,mean)
# Calculate sd of film budget within each group
aggregate(Budget~Rating,film,sd)
# Visualize the group means and variability
boxplot(film$Budget~film$Rating, main= "Film Budgets by Rating",
ylab= "Budget", xlab= "MPAA Rating")
# Run ANOVA
modelbud <- aov(film$Budget~film$Rating)
summary(modelbud)
# Run post-hoc test if F statistic is significant
TukeyHSD(modelbud)
# Calculate avg IMDB score of each group
aggregate(IMDB~Rating,film,mean)
#Calculate sd of IMDB scores within each group
aggregate(IMDB~Rating,film,sd)
# Visualize the group means and variability
boxplot(film$IMDB~film$Rating, main= "IMDB Scores by Rating",
ylab= "IMDB Score", xlab= "MPAA Rating")
# Run ANOVA
modelscore <- aov(film$IMDB~film$Rating)
summary(modelscore)
# Run post-hoc test if F statistic is significant
TukeyHSD(modelscore)
# Lab
# Question 1
table(film$Studio)
aggregate(Days~Studio, film, mean)
modelDays <- aov(film$Days~film$Studio)
summary(modelDays)
TukeyHSD(modelDays)
# Question 2
aggregate(Pct.Dom~Studio, film, mean)
modelPctDom <- aov(film$Pct.Dom ~ film$Studio)
summary(modelPctDom)
boxplot(film$Pct.Dom ~ film$Studio)
# Problem Sets
# 1
film$BudgetGroup[film$Budget < 100] <- 'low'
film$BudgetGroup[film$Budget >= 100 & film$Budget < 150] <- 'medium'
film$BudgetGroup[film$Budget >= 150] <- 'high'
table(film$BudgetGroup)
aggregate(Pct.Dom ~ BudgetGroup, film, mean)
boxplot(film$Pct.Dom ~ film$BudgetGroup)
modelBudgetGroup <- aov(film$Pct.Dom ~ film$BudgetGroup)
summary(modelBudgetGroup)
TukeyHSD(modelBudgetGroup)
# 2
qf(0.95, 2, 42)
q2f <- (2387.7/2)/((5949.1-2387.7)/42)
q2f
# 3
modelq3 <- aov(q3$ticket ~ q3$sections)
summary(modelq3)
# 4
MSw <- 1365/34
MSw
MSb <- 782/2
q4f <- MSb/round(MSw,2)
q4f
adjP <- 0.05/3
adjP
|
/Part2-Wk5.R
|
no_license
|
linusyoung/RStudy-FoDA
|
R
| false
| false
| 1,949
|
r
|
library(tidyLPA)
### Name: print.tidyProfile
### Title: Print tidyProfile
### Aliases: print.tidyProfile
### ** Examples
## Not run:
##D if(interactive()){
##D tmp <- iris %>%
##D select(Sepal.Length, Sepal.Width, Petal.Length, Petal.Width) %>%
##D estimate_profiles(3)
##D tmp[[2]]
##D }
## End(Not run)
|
/data/genthat_extracted_code/tidyLPA/examples/print.tidyProfile.Rd.R
|
no_license
|
surayaaramli/typeRrh
|
R
| false
| false
| 322
|
r
|
#' Group all the functions
#'
#' @note Downloads data from the Local Area Unemployment Statistics (LAU) program of the BLS.
#' Data is split and tidied.
#' Downloads the LAU County dataset directly from the BLS website.
#'
#' @param which_data what kind of data download: default are the county files
#' @param path_data where does the download happen: default current directory
#' @param years which year do we download, as of now only 1990 to 2016 are available
#' @param verbose show intermediate steps
#' @return dt_res
#' @examples
#' \dontrun{
#' dt_lau <- get_lau_data(years = c(1995, 2000))
#' }
#' @export
get_lau_data = function(
which_data = "county",
years = seq(1990, 2016),
path_data = "./",
  verbose = TRUE
){
fips_state <- NULL
if (which_data == "county"){
url <- "https://www.bls.gov/lau/laucnty"
dt_res <- data.table()
for (year_iter in years){
year_iter = year_iter - 1900
if (year_iter >= 100){ year_iter = year_iter - 100}
if (year_iter < 10){ year_iter = paste0(0, year_iter) }
utils::download.file(paste0(url, year_iter, ".xlsx"),
paste0(path_data, "lau_cty.xlsx"),
quiet = !verbose)
dt_ind <- readxl::read_excel(paste0(path_data, "lau_cty.xlsx")) %>% data.table
setnames(dt_ind, c("laus_code", "fips_state", "fips_cty",
"lau_name", "date_y", "V1", "force", "employed", "level", "rate"))
dt_ind <- dt_ind[ !is.na(as.numeric(fips_state)) ]
dt_ind[, c("laus_code", "lau_name", "V1") := NULL ]
dt_res <- rbind(dt_res, dt_ind)
}
# cleaning up
file.remove( paste0(path_data, "lau_cty.xlsx") )
# return dataset
return( dt_res )
}
} # end of get_lau_data
|
/R/get_lau.R
|
permissive
|
eloualiche/entrydatar
|
R
| false
| false
| 1,782
|
r
|
library(lsr)
### Name: rmAll
### Title: Remove all objects
### Aliases: rmAll
### ** Examples
# equivalent to rm(list = objects())
rmAll(ask = FALSE)
|
/data/genthat_extracted_code/lsr/examples/rmAll.Rd.R
|
no_license
|
surayaaramli/typeRrh
|
R
| false
| false
| 158
|
r
|
#-------------------------------------------------------------------------------------------------------------------
#TODO
#1. Make 'ranger' method work for fitting models on control and treatment data
#2. Prune RFModel_c and RFModel_t
#-------------------------------------------------------------------------------------------------------------------
#BIG WARNINGS
#1. RMSE is very low. Check why!
#-------------------------------------------------------------------------------------------------------------------
#
#install.packages('rattle')
#install.packages('ggplot2')
library(ggplot2)
library(caret)
library(rattle)
library(rpart)
library(rpart.plot)
library(rpart.utils)
deduplication <- function(paths)
{
new_paths <- list()
for (i in 1:length(paths)){
    a <- unlist(strsplit(paths[[i]], ','))
lpag<- NULL; lpal<- NULL; msag<- NULL; msal<- NULL; npsg<- NULL; npsl<- NULL; npag<- NULL; npal<- NULL; sat<- NULL; sur<- NULL;
for (j in 1:length(a)){
#print(a[[j]])
if(grepl('lastday_purchase_all>', a[[j]])) {lpag = a[[j]]} #else {lpag = NULL}
if(grepl('lastday_purchase_all<', a[[j]])) {lpal = a[[j]]} #else {lpal = NULL}
if(grepl('money_spend_all>', a[[j]])) {msag = a[[j]]} #else {msag = NULL}
if(grepl('money_spend_all<', a[[j]])) {msal = a[[j]]} #else {msal = NULL}
if(grepl('NPS>', a[[j]])) {npsg = a[[j]]} #else {npsg = NULL}
if(grepl('NPS<', a[[j]])) {npsl = a[[j]]} #else {npsl = NULL}
if(grepl('num_purchase_all>', a[[j]])) {npag = a[[j]]} #else {npag = NULL}
if(grepl('num_purchase_all<', a[[j]])) {npal = a[[j]]} #else {npal = NULL}
if(grepl('satisfied', a[[j]])) {sat = a[[j]]} #else {sat = NULL}
if(grepl('survey', a[[j]])) {sur = a[[j]]} #else {sur = NULL}
}
new_paths[[i]] <- c(lpag, lpal, msag, msal, npsg, npsl, npag, npal, sat, sur)
}
return(new_paths)
}
positions_fn <- function(paths)
{
new_paths <- list()
for (i in 1:length(paths)){
    a <- unlist(strsplit(paths[[i]], ','))
lpag<- NULL; lpal<- NULL; msag<- NULL; msal<- NULL; npsg<- NULL; npsl<- NULL; npag<- NULL; npal<- NULL; sat<- NULL; sur<- NULL;
for (j in 1:length(a)){
#print(a[[j]])
if(grepl('lastday_purchase_all>', a[[j]])) {lpag = a[[j]]} #else {lpag = NULL}
if(grepl('lastday_purchase_all<', a[[j]])) {lpal = a[[j]]} #else {lpal = NULL}
if(grepl('money_spend_all>', a[[j]])) {msag = a[[j]]} #else {msag = NULL}
if(grepl('money_spend_all<', a[[j]])) {msal = a[[j]]} #else {msal = NULL}
if(grepl('NPS>', a[[j]])) {npsg = a[[j]]} #else {npsg = NULL}
if(grepl('NPS<', a[[j]])) {npsl = a[[j]]} #else {npsl = NULL}
if(grepl('num_purchase_all>', a[[j]])) {npag = a[[j]]} #else {npag = NULL}
if(grepl('num_purchase_all<', a[[j]])) {npal = a[[j]]} #else {npal = NULL}
if(grepl('satisfied', a[[j]])) {sat = a[[j]]} #else {sat = NULL}
if(grepl('survey', a[[j]])) {sur = a[[j]]} #else {sur = NULL}
}
new_paths[[i]] <- list(lpag=lpag, lpal=lpal, msag=msag,
msal=msal, npsg=npsg, npsl=npsl,
npag=npag, npal=npal, sat=sat, sur=sur)
}
return(new_paths)
}
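# The two extractor functions above share the same pattern-matching loop; the
# sketch below (a hypothetical helper, not part of the original script) keeps
# the last split condition matching each pattern, returning NA when a pattern
# does not occur in a path.
extract_conditions <- function(paths, patterns) {
  lapply(paths, function(p) {
    parts <- unlist(strsplit(p, ','))
    vapply(patterns, function(pt) {
      hits <- grep(pt, parts, fixed = TRUE, value = TRUE)
      if (length(hits) > 0) hits[length(hits)] else NA_character_
    }, character(1))
  })
}
# Example (patterns chosen to mirror the hard-coded checks above):
# extract_conditions(leaf_paths,
#   c("lastday_purchase_all>", "money_spend_all<", "NPS>", "satisfied"))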
#-------------------------------------------------------------------------------------------------------------------
#Pre processing of data
#-------------------------------------------------------------------------------------------------------------------
#Reading treatment control data with covariates and target variable
collage_data <- read.csv('C:\\Users\\ganga020\\Google Drive\\Ed Research\\Heterogenous treatment effects\\collage_treatment_effect.csv')
str(collage_data)
#changing some variable to factors
col_names <- c("cell","satisfied","survey")
collage_data[col_names] <- lapply(collage_data[col_names], factor)
#histograms of individual predictors
hist(collage_data$money_spend_all, labels = TRUE, breaks = 40)
hist(collage_data$lastday_purchase_all, labels = TRUE, breaks = 40)
barplot(prop.table(table(collage_data$satisfied)))
barplot(prop.table(table(collage_data$survey)))
hist(collage_data$NPS, labels = TRUE, breaks = 11)
hist(collage_data$number_referrals, labels = TRUE, breaks = 6)
hist(collage_data$num_purchase_all, labels = TRUE, breaks = 40)
#Making a copy
collage_data2 <- collage_data
TreatmentVar <- 'cell'
PredictorsVar <- c("satisfied","NPS","lastday_purchase_all","num_purchase_all","money_spend_all","survey")
OutcomeVar <- "benefit"
#Dividing control and treatment groups so as to fit separate models
control_data <- collage_data[collage_data[TreatmentVar]== '1',]
treatment_data <- collage_data[collage_data[TreatmentVar]== '2',]
#-------------------------------------------------------------------------------------------------------------------
#End of Pre processing of data
#-------------------------------------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------------------------------------------
#Model Creation
#-------------------------------------------------------------------------------------------------------------------
#Creating train control object using 10 fold cross validation
RFControl <- trainControl(method = "cv",
number = 10,
allowParallel = TRUE,
verboseIter = FALSE)
#TODO: 2 : Prune these trees
#Training a Random Forest model on control group data
RFModel_c <- train(benefit ~ satisfied+NPS+lastday_purchase_all+num_purchase_all+money_spend_all+survey,
control_data,
#'ranger' model can be used for faster processing,
#but it's giving errors w.r.t RMSE, hence using rf as of now
method = "rf",
metric = "RMSE",
#This argument checks n no. of combinations max for tree splitting
#As there are 6 predictor vars, using 3
tuneLength = 3,
prox = FALSE,
trControl = RFControl)
#Training a Random Forest model on treatment group data
RFModel_t <- train(benefit ~ satisfied+NPS+lastday_purchase_all+num_purchase_all+money_spend_all+survey,
treatment_data,
method = "rf",
metric = "RMSE",
tuneLength = 3,
prox = FALSE,
trControl = RFControl)
#Now we predict treatment and control outcomes for everyone using fitted models
predictions_t <- predict(object = RFModel_t,
collage_data)
predictions_c <- predict(object = RFModel_c,
collage_data)
#Calculating treatment effect for each entry
tau_2s <- predictions_t - predictions_c
#Adding the treatment effect back to the full data
collage_data2['tau'] <- tau_2s
#-------------------------------------------------------------------------------------------------------------------
#End of Model Creation
#-------------------------------------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------------------------------------------
#Model Analysis
#-------------------------------------------------------------------------------------------------------------------
#Histogram of treatment effects of all individual observations
hist(tau_2s, labels = TRUE, breaks = 40)
#Adding a Normal curve on top of histogram
curve(dnorm(x, mean=mean(tau_2s), sd=sd(tau_2s)), add=TRUE, col='darkblue', lwd=2)
#Mean and standard deviation of treatment effects
mean_tau2s=mean(tau_2s)
sd_tau2s=sd(tau_2s)
#-------------------------------------------------------------------------------------------------------------------
#End of Model Analysis
#-------------------------------------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------------------------------------------
#Treatment effect generation
#-------------------------------------------------------------------------------------------------------------------
#Creating tree with treatment effect as outcome variable to find heterogeneity
or_tree <- rpart(tau ~ satisfied+NPS+lastday_purchase_all+num_purchase_all+money_spend_all+survey,
collage_data2)
#Tree visualization using fancy Rpart plot
fancyRpartPlot(or_tree)
summary(or_tree)
#-------------------------------------------------------------------------------------------------------------------
#End of Treatment effect generation
#-------------------------------------------------------------------------------------------------------------------
#------------------------------------------------------------------------------------
#TREE PRUNING - This may not be required as it is giving very few leaves :(
#------------------------------------------------------------------------------------
#As a rule of thumb, it's best to prune a decision tree using the cp of smallest tree
#that is within one standard deviation of the tree with the smallest xerror.
#In this example, the best xerror is 0.776 with standard deviation 0.108.
#So, we want the smallest tree with xerror less than 0.884.
#This is the tree with cp = 0.0158, so we'll want to prune our tree with a
#cp slightly greater than than 0.0158.
tree_cp <- or_tree$cptable[,1][which.min(or_tree$cptable[,4])]
tree <- prune(or_tree, tree_cp)
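# Note: which.min on xerror selects the cp of the minimum-xerror tree, not the
# one-SE rule described in the comment above. A sketch of the one-SE selection
# (assumes or_tree$cptable exists, with the standard rpart columns):
cpt <- or_tree$cptable
min_row <- which.min(cpt[, "xerror"])
thresh <- cpt[min_row, "xerror"] + cpt[min_row, "xstd"]
# smallest tree (first row) whose cross-validated error is within one SE
tree_1se_cp <- cpt[which(cpt[, "xerror"] <= thresh)[1], "CP"]
# tree <- prune(or_tree, cp = tree_1se_cp)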
fancyRpartPlot(tree)
predictions <- predict(tree, collage_data2)
#------------------------------------------------------------------------------------
#END OF TREE PRUNING
#------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------------------------------------------
#Treatment effect Analysis
#-------------------------------------------------------------------------------------------------------------------
#*****ADDITIONAL CODE
rpart.lists(tree)
rpart.rules(tree)
list.rules.rpart(tree)
model.frame(tree)
unlist(paths[1], use.names = FALSE)[-1]
#*****ADDITIONAL CODE
#Extracting useful info from rpart tree
frame <- tree$frame
#Finding the node values of nodes that are leaves
leaves <- row.names(frame)[frame$var == '<leaf>']
#Finding path of leaf nodes
leaf_paths <- path.rpart(tree, leaves, print.it = FALSE)
dedup_paths<- deduplication(leaf_paths)
position_path<- positions_fn(leaf_paths)
#________________________________________________________
position_path
#________________________________________________________
#Subsetting frame that contains leaf_nodes
leaf_frame <- frame[frame$var == '<leaf>',]
std<- vector()
for (i in 1:length(leaf_frame$yval)){
a <- leaf_frame$yval[i]
b <- which(predictions==a)
c <- vector()
for(j in 1:length(b)){
d<- collage_data2[,'tau'][b[j]]
c <- append(c,d)
}
std <- append(std, sd(c))
}
#Adding the standard deviation at which the yval of a particular node lies
#on tau_2s distribution
leaf_frame['sd'] <- std
#Adding t statistic to see if the treatment effect(y_val) is significantly
#different from zero
leaf_frame['t_test'] <- (leaf_frame$yval*sqrt(leaf_frame$n))/sd_tau2s
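# An equivalent per-leaf significance check could use stats::t.test directly
# (a sketch, left commented; assumes 'predictions' and collage_data2$tau from
# above, and that each leaf has more than one non-constant observation):
# for (v in leaf_frame$yval) {
#   print(t.test(collage_data2$tau[predictions == v]))
# }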
#Adding paths to leaf_frame
leaf_frame$path <- sapply(dedup_paths, paste0, collapse = ',')
#Ordering frame by descending order of treatment effect
leaf_frame <- leaf_frame[order(-leaf_frame$yval),]
#Subsetting leaf frame to only keep useful values
leaf_frame <- leaf_frame[c("n","yval","sd","t_test","dev","complexity","path")]
#-------------------------------------------------------------------------------------------------------------------
#End of Treatment effect Analysis
#-------------------------------------------------------------------------------------------------------------------
|
/Before LP/berry_base.R
|
no_license
|
sandeepgangarapu/hte
|
R
| false
| false
| 11,942
|
r
|
#-------------------------------------------------------------------------------------------------------------------
#TODO
#1. Make 'ranger' method work for fitting models on control and treatment data
#2. Prune RFModel_c and RFModel_t
#-------------------------------------------------------------------------------------------------------------------
#BIG WARNINGS
#1. RMSE is very low. Check why!
#-------------------------------------------------------------------------------------------------------------------
#
#install.packages('rattle')
#install.packages('ggplot2')
library(ggplot2)
library(caret)
library(rattle)
library(rpart)
library(rpart.plot)
library(rpart.utils)
deduplication <- function(paths)
{
new_paths <- list()
for (i in 1:length(paths)){
a <- strsplit(paths[[i]], ',')
lpag<- NULL; lpal<- NULL; msag<- NULL; msal<- NULL; npsg<- NULL; npsl<- NULL; npag<- NULL; npal<- NULL; sat<- NULL; sur<- NULL;
for (j in 1:length(a)){
#print(a[[j]])
if(grepl('lastday_purchase_all>', a[[j]])) {lpag = a[[j]]} #else {lpag = NULL}
if(grepl('lastday_purchase_all<', a[[j]])) {lpal = a[[j]]} #else {lpal = NULL}
if(grepl('money_spend_all>', a[[j]])) {msag = a[[j]]} #else {msag = NULL}
if(grepl('money_spend_all<', a[[j]])) {msal = a[[j]]} #else {msal = NULL}
if(grepl('NPS>', a[[j]])) {npsg = a[[j]]} #else {npsg = NULL}
if(grepl('NPS<', a[[j]])) {npsl = a[[j]]} #else {npsl = NULL}
if(grepl('num_purchase_all>', a[[j]])) {npag = a[[j]]} #else {npag = NULL}
if(grepl('num_purchase_all<', a[[j]])) {npal = a[[j]]} #else {npal = NULL}
if(grepl('satisfied', a[[j]])) {sat = a[[j]]} #else {sat = NULL}
if(grepl('survey', a[[j]])) {sur = a[[j]]} #else {sur = NULL}
}
new_paths[[i]] <- c(lpag, lpal, msag, msal, npsg, npsl, npag, npal, sat, sur)
}
return(new_paths)
}
positions_fn <- function(paths)
{
new_paths <- list()
for (i in 1:length(paths)){
a <- strsplit(paths[[i]], ',')
lpag<- NULL; lpal<- NULL; msag<- NULL; msal<- NULL; npsg<- NULL; npsl<- NULL; npag<- NULL; npal<- NULL; sat<- NULL; sur<- NULL;
for (j in 1:length(a)){
#print(a[[j]])
if(grepl('lastday_purchase_all>', a[[j]])) {lpag = a[[j]]} #else {lpag = NULL}
if(grepl('lastday_purchase_all<', a[[j]])) {lpal = a[[j]]} #else {lpal = NULL}
if(grepl('money_spend_all>', a[[j]])) {msag = a[[j]]} #else {msag = NULL}
if(grepl('money_spend_all<', a[[j]])) {msal = a[[j]]} #else {msal = NULL}
if(grepl('NPS>', a[[j]])) {npsg = a[[j]]} #else {npsg = NULL}
if(grepl('NPS<', a[[j]])) {npsl = a[[j]]} #else {npsl = NULL}
if(grepl('num_purchase_all>', a[[j]])) {npag = a[[j]]} #else {npag = NULL}
if(grepl('num_purchase_all<', a[[j]])) {npal = a[[j]]} #else {npal = NULL}
if(grepl('satisfied', a[[j]])) {sat = a[[j]]} #else {sat = NULL}
if(grepl('survey', a[[j]])) {sur = a[[j]]} #else {sur = NULL}
}
new_paths[[i]] <- list(lpag=lpag, lpal=lpal, msag=msag,
msal=msal, npsg=npsg, npsl=npsl,
npag=npag, npal=npal, sat=sat, sur=sur)
}
return(new_paths)
}
#-------------------------------------------------------------------------------------------------------------------
#Pre processing of data
#-------------------------------------------------------------------------------------------------------------------
#Reading treatment control data with covariates and target variable
collage_data <- read.csv('C:\\Users\\ganga020\\Google Drive\\Ed Research\\Heterogenous treatment effects\\collage_treatment_effect.csv')
str(collage_data)
#changing some variable to factors
col_names <- c("cell","satisfied","survey")
collage_data[col_names] <- lapply(collage_data[col_names], factor)
#histograms of individual predictors
hist(collage_data$money_spend_all, labels = TRUE, breaks = 40)
hist(collage_data$lastday_purchase_all, labels = TRUE, breaks = 40)
barplot(prop.table(table(collage_data$satisfied)))
barplot(prop.table(table(collage_data$survey)))
hist(collage_data$NPS, labels = TRUE, breaks = 11)
hist(collage_data$number_referrals, labels = TRUE, breaks = 6)
hist(collage_data$num_purchase_all, labels = TRUE, breaks = 40)
#Making a copy
collage_data2 <- collage_data
TreatmentVar <- 'cell'
PredictorsVar <- c("satisfied","NPS","lastday_purchase_all","num_purchase_all","money_spend_all","survey")
OutcomeVar <- "benefit"
#Dividing data into control and treatment groups so as to fit separate models
control_data <- collage_data[collage_data[TreatmentVar]== '1',]
treatment_data <- collage_data[collage_data[TreatmentVar]== '2',]
#-------------------------------------------------------------------------------------------------------------------
#End of Pre processing of data
#-------------------------------------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------------------------------------------
#Model Creation
#-------------------------------------------------------------------------------------------------------------------
#Creating train control object using 10 fold cross validation
RFControl <- trainControl(method = "cv",
number = 10,
allowParallel = TRUE,
verboseIter = FALSE)
#TODO: 2 : Prune these trees
#Training a Random Forest model on control group data
RFModel_c <- train(benefit ~ satisfied+NPS+lastday_purchase_all+num_purchase_all+money_spend_all+survey,
control_data,
#'ranger' model can be used for faster processing,
#but it's giving errors w.r.t RMSE, hence using rf as of now
method = "rf",
metric = "RMSE",
                   #tuneLength tries up to n candidate values of the tuning
                   #parameter (mtry). As there are 6 predictor vars, using 3
tuneLength = 3,
prox = FALSE,
trControl = RFControl)
#Training a Random Forest model on treatment group data
RFModel_t <- train(benefit ~ satisfied+NPS+lastday_purchase_all+num_purchase_all+money_spend_all+survey,
treatment_data,
method = "rf",
metric = "RMSE",
tuneLength = 3,
prox = FALSE,
trControl = RFControl)
#Now we predict treatment and control outcomes for everyone using fitted models
predictions_t <- predict(object = RFModel_t,
collage_data)
predictions_c <- predict(object = RFModel_c,
collage_data)
#Calculating treatment effect for each entry
tau_2s <- predictions_t - predictions_c
#Adding the treatment effect back to the full data
collage_data2['tau'] <- tau_2s
#-------------------------------------------------------------------------------------------------------------------
#End of Model Creation
#-------------------------------------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------------------------------------------
#Model Analysis
#-------------------------------------------------------------------------------------------------------------------
#Histogram of treatment effects of all individual observations
hist(tau_2s, labels = TRUE, breaks = 40)
#Adding a Normal curve on top of histogram
curve(dnorm(x, mean=mean(tau_2s), sd=sd(tau_2s)), add=TRUE, col='darkblue', lwd=2)
#Mean and standard deviation of treatment effects
mean_tau2s=mean(tau_2s)
sd_tau2s=sd(tau_2s)
#-------------------------------------------------------------------------------------------------------------------
#End of Model Analysis
#-------------------------------------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------------------------------------------
#Treatment effect generation
#-------------------------------------------------------------------------------------------------------------------
#Creating tree with treatment effect as outcome variable to find heterogeneity
or_tree <- rpart(tau ~ satisfied+NPS+lastday_purchase_all+num_purchase_all+money_spend_all+survey,
collage_data2)
#Tree visualization using fancy Rpart plot
fancyRpartPlot(or_tree)
summary(or_tree)
#-------------------------------------------------------------------------------------------------------------------
#End of Treatment effect generation
#-------------------------------------------------------------------------------------------------------------------
#------------------------------------------------------------------------------------
#TREE PRUNING - This may not be required as it gives very few leaves :(
#------------------------------------------------------------------------------------
#As a rule of thumb, it's best to prune a decision tree using the cp of smallest tree
#that is within one standard deviation of the tree with the smallest xerror.
#In this example, the best xerror is 0.776 with standard deviation 0.108.
#So, we want the smallest tree with xerror less than 0.884.
#This is the tree with cp = 0.0158, so we'll want to prune our tree with a
#cp slightly greater than 0.0158.
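#A sketch of that one-SE selection (hypothetical names; the line below
#instead simply takes the cp at the minimum xerror):
#cp_tab   <- or_tree$cptable
#xerr_min <- which.min(cp_tab[, "xerror"])
#thresh   <- cp_tab[xerr_min, "xerror"] + cp_tab[xerr_min, "xstd"]
#cp_1se   <- cp_tab[which(cp_tab[, "xerror"] <= thresh)[1], "CP"]
#tree_1se <- prune(or_tree, cp_1se)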
tree_cp <- or_tree$cptable[,1][which.min(or_tree$cptable[,4])]
tree <- prune(or_tree, tree_cp)
fancyRpartPlot(tree)
predictions <- predict(tree, collage_data2)
#leaf_frame$yval #(stray inspection line; leaf_frame is defined in the analysis section below)
#------------------------------------------------------------------------------------
#END OF TREE PRUNING
#------------------------------------------------------------------------------------
#-------------------------------------------------------------------------------------------------------------------
#Treatment effect Analysis
#-------------------------------------------------------------------------------------------------------------------
#*****ADDITIONAL CODE (exploratory; 'tree2' and 'paths' are not defined in
#this script, so these calls are left commented out)
#rpart.lists(tree2)
#rpart.rules(tree2)
#list.rules.rpart(tree2)
#model.frame(tree2)
#unlist(paths[1], use.names = FALSE)[-1]
#*****ADDITIONAL CODE
#Extracting useful info from rpart tree
frame <- tree$frame
#Finding the node values of nodes that are leaves
leaves <- row.names(frame)[frame$var == '<leaf>']
#Finding path of leaf nodes
leaf_paths <- path.rpart(tree, leaves, print.it = FALSE)
dedup_paths<- deduplication(leaf_paths)
position_path<- positions_fn(leaf_paths)
#________________________________________________________
position_path
#________________________________________________________
#Subsetting frame that contains leaf_nodes
leaf_frame <- frame[frame$var == '<leaf>',]
std<- vector()
for (i in 1:length(leaf_frame$yval)){
a <- leaf_frame$yval[i]
b <- which(predictions==a)
c <- vector()
for(j in 1:length(b)){
d<- collage_data2[,'tau'][b[j]]
c <- append(c,d)
}
std <- append(std, sd(c))
}
#Adding the standard deviation of the tau values falling in each leaf
leaf_frame['sd'] <- std
#Adding t statistic to see if the treatment effect(y_val) is significantly
#different from zero
leaf_frame['t_test'] <- (leaf_frame$yval*sqrt(leaf_frame$n))/sd_tau2s
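#Illustrative follow-up (a sketch, assuming the per-leaf t statistics are
#approximately standard normal under the null of zero effect):
#leaf_frame$p_value <- 2 * pnorm(-abs(leaf_frame$t_test))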
#Adding paths to leaf_frame
leaf_frame$path <- sapply(dedup_paths, paste0, collapse = ',')
#Ordering frame by descending order of treatment effect
leaf_frame <- leaf_frame[order(-leaf_frame$yval),]
#Subsetting leaf frame to only keep useful values
leaf_frame <- leaf_frame[c("n","yval","sd","t_test","dev","complexity","path")]
#-------------------------------------------------------------------------------------------------------------------
#End of Treatment effect Analysis
#-------------------------------------------------------------------------------------------------------------------
|
#INSTALLATION
library(annotate)
library(Biostrings) #DNAStringSet Object
library(rBLAST)
library(rMSA)
library(devtools)
library(magrittr)
library(seqinr)
library(ape) #read.dna, write.fasta
library(data.table)
library(lubridate)
library(RCurl)
library(R.utils)
library(downloader)
library(ggplot2)
library(gridExtra)
library(plyr)
library(taxize)
library(rentrez)
MCLab=function(primer_f, primer_r, name_primer_f, name_primer_r, source, username, password, desdir,folder,local_path, local, nt_search){
packageStartupMessage("Creating Folders...", appendLF = FALSE)
setwd(desdir)
if (!file.exists(file.path(desdir,'seq',folder,'raw'))){
dir.create(file.path(desdir,'seq',folder,'raw'),recursive = TRUE)
}
if (!file.exists(file.path(desdir,'seq',folder,'fasta'))){
dir.create(file.path(desdir,'seq',folder,'fasta'),recursive = TRUE)
}
if (!file.exists(file.path(desdir,'seq',folder,'fasta.aln'))){
dir.create(file.path(desdir,'seq',folder,'fasta.aln'),recursive = TRUE)
}
if(local){system(paste('cp -r',file.path(local_path,'.'),file.path(desdir,'seq',folder,'raw')))}
packageStartupMessage(" Done!")
packageStartupMessage("Primer Analyzing...", appendLF = FALSE)
primer_f2=primer_f%>%DNAString()%>%reverseComplement()%>%as.character()
primer_r2=primer_r%>%DNAString()%>%reverseComplement()%>%as.character()
primerall=c(primer_f,primer_f2,primer_r,primer_r2)
pro=c('N','R','Y','K','M','W')
sp_N=c('A','T','C','G')
sp_R=c('A','G')
sp_Y=c('T','C')
sp_K=c('T','G')
sp_M=c('A','C')
sp_W=c('A','T')
sp_list=list(sp_N,sp_R,sp_Y,sp_K,sp_M,sp_W)
names(sp_list)=pro
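  #Worked example (hypothetical primer 'ACRT'): the single ambiguity code R
  #expands over sp_R = c('A','G'):
  #gsub('[NRYKMW]', '%s', 'ACRT') %>% sprintf(c('A','G'))  # "ACAT" "ACGT"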
primer=c()
for (pri in 1:4){
listP=list()
for (i in 1:6){
listP[[i]]=strsplit(primerall[pri],'')%>%unlist() %>%grep(pro[i],.)
}
seqn=c()
for (i in 1:6){
seqn[listP[[i]]]=pro[i]
}
seqn=seqn%>%na.omit()%>%as.character()
grid= expand.grid(if(!is.na(seqn[1])){sp_list[seqn[1]]%>%unlist()}else{NA},
if(!is.na(seqn[2])){sp_list[seqn[2]]%>%unlist()}else{NA},
if(!is.na(seqn[3])){sp_list[seqn[3]]%>%unlist()}else{NA},
if(!is.na(seqn[4])){sp_list[seqn[4]]%>%unlist()}else{NA},
if(!is.na(seqn[5])){sp_list[seqn[5]]%>%unlist()}else{NA},
if(!is.na(seqn[6])){sp_list[seqn[6]]%>%unlist()}else{NA},
if(!is.na(seqn[7])){sp_list[seqn[7]]%>%unlist()}else{NA})
primer=c(primer, primerall[pri]%>% gsub('[NRYKMW]','%s',.) %>%
sprintf(.,grid[,1],grid[,2],grid[,3],grid[,4],grid[,5],grid[,6],grid[,7]))
}
primerraw=primer
primern=c()
for (i in 1:length(primer)){
primern=c(primern,primer[i] %>% substr(.,1,nchar(.)-5),primer[i] %>% substr(.,6,nchar(.)))
}
primer=primern
packageStartupMessage(" Done!")
setwd(file.path(desdir,'seq',folder,'raw'))
if(!local){
packageStartupMessage("File Downloading...", appendLF = FALSE)
filenames= getURL(source,userpwd=paste0(username,':',password),
verbose=TRUE,ftp.use.epsv=TRUE, dirlistonly = TRUE) %>%
strsplit("[\\\\]|[^[:print:]]",fixed = FALSE) %>%
unlist() %>% (function(x){x[grep('seq',x)]})
filepath= sprintf(paste0('ftp://',
paste0(username,':',password),'@',
(source%>%gsub('ftp://','',.)),'%s'),filenames)
for (i in 1:(length(filenames))){
download.file(filepath[i],
file.path(getwd(),filenames[i]))
}
packageStartupMessage(" Done!")
}
packageStartupMessage("Renaming...", appendLF = FALSE)
names=list.files()
new_names=names
new_names[grep(name_primer_r,names)]= names[grep(name_primer_r,names)] %>%
gsub("^[0-9][0-9]\\.","",.) %>%
gsub(paste0('(',name_primer_r,')'),"_R",.)
new_names[grep(name_primer_f,names)]= names[grep(name_primer_f,names)] %>%
gsub("^[0-9][0-9]\\.","",.) %>%
gsub(paste0('(',name_primer_f,')'),"_F",.)
new_names_split= new_names %>% strsplit('_') %>% unlist()
SN= new_names_split[seq(1,length(new_names_split),2)]
FR= new_names_split[seq(2,length(new_names_split),2)]
nchr= SN %>% nchar() %>% (function(x){c(min(x),max(x))})
if (nchr[1]!=nchr[2]){
s_index= SN%>%nchar() == nchr[1]
l_index= SN%>%nchar() == nchr[2]
new_names[l_index]=paste0(SN[l_index] %>% substr(1,nchr[2]-3),'-',
SN[l_index] %>% substr(nchr[2]-2,nchr[2]-2),'-',
SN[l_index] %>% substr(nchr[2]-1,nchr[2]),'_', FR[l_index])
new_names[s_index]=paste0(SN[s_index] %>% substr(1,nchr[1]-2),'-',
SN[s_index] %>% substr(nchr[1]-1,nchr[1]-1),'-',
SN[s_index] %>% substr(nchr[1],nchr[1]),'_', FR[s_index])
new_names=new_names%>%gsub('.seq','.fasta',.)
for (j in 1:length(names)){
text=readLines(names[j])
for (i in length(text):1){
text[i+1]=text[i]
}
text[1]=paste0(">",new_names[j])
writeLines(text,file.path('../fasta',new_names[j]%>%gsub('seq','fasta',.)))
}
}else{
new_names=paste0(SN %>% substr(1,nchr[2]-3),'-',
SN %>% substr(nchr[2]-2,nchr[2]-2),'-',
SN %>% substr(nchr[2]-1,nchr[2]),'_',FR)
new_names=new_names%>%gsub('.seq','.fasta',.)
for (j in 1:length(names)){
text=readLines(names[j])
for (i in length(text):1){
text[i+1]=text[i]
}
text[1]=paste0(">",new_names[j])
writeLines(text,file.path('../fasta',new_names[j]%>%gsub('seq','fasta',.)))
}
}
packageStartupMessage(" Done!")
packageStartupMessage("Vector Screening...", appendLF = FALSE)
setwd('../fasta')
db_vec= blast(db="../../../db/UniVec")
seq=readDNAStringSet(new_names)
data_seq=seq@ranges %>% data.frame()
index_r=new_names%>%grep('_R',.)
seq[index_r]=seq[index_r]%>% reverseComplement()
l.vec=array(dim=c(2,2,length(new_names)),
dimnames = list(c(1:2),c('Start.pt','End.pt'),new_names))
for (k in 1: length(new_names)){
if(nrow(predict(db_vec,seq[k]))!=0){
num=predict(db_vec,seq[k])%>% (function(x){x[x$Mismatches<10,]})
qs= num$Q.start
qe= num$Q.end
d3= data.frame(qs=qs, qe=qe)
d3=d3[order(d3$qs),]
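      #The loop below merges overlapping UniVec hit intervals into at most two
      #disjoint vector regions, e.g. rows (1,40),(30,60),(200,250) collapse
      #to (1,60) and (200,250).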
vec=matrix(ncol=2,nrow=2)
vec[1,1]=d3[1,1]
vec[1,2]=d3[1,2]
for (i in 1:(nrow(d3)-1)){
if(d3[i+1,1]<=vec[1,2]){
if(d3[i+1,2]>vec[1,2]){
vec[1,2]=d3[i+1,2]
}}
else if(vec[2,1]%>%is.na()){
vec[2,1]=d3[i+1,1]
vec[2,2]=d3[i+1,2]
}else if(d3[i+1,2]>vec[2,2]){
vec[2,2]=d3[i+1,2]
}
}
l.vec[,,k]=vec
}
}
packageStartupMessage(" Done!")
packageStartupMessage("Candidate Sequences Evaluating...", appendLF = FALSE)
l.seq=array(dim=c(3,2,length(new_names)),
dimnames = list(c(1:3),c('Start.pt','End.pt'),new_names))
for (k in 1: length(new_names)){
if(l.vec[2,1,k]%>%is.na()){
l.seq[1,,k]=c(1,l.vec[1,1,k]-1)
l.seq[2,,k]=c(l.vec[1,2,k]+1,data_seq$width[k])
}else{
if(l.vec[1,1,k]==1){
l.seq[1,,k]=c(1,1)
}else{
l.seq[1,,k]=c(1,l.vec[1,1,k]-1)
}
l.seq[2,,k]=c(l.vec[1,2,k]+1,l.vec[2,1,k]-1)
if(l.vec[2,2,k]!=data_seq$width[k]){
l.seq[3,,k]=c(l.vec[2,2,k]+1,data_seq$width[k])
}
}
}
seq.pure=seq
for (k in 1: length(new_names)){
if(nrow(predict(db_vec,seq[k]))!=0){
s.seq=seq[k]%>%unlist()
c=matrix(ncol=3,nrow=length(primer))
if(l.seq[3,1,k]%>%is.na()){
for (i in 1:2){
for (j in 1:length(primer)){
c[j,i]=s.seq[l.seq[i,1,k]:l.seq[i,2,k]]%>%as.character()%>%grepl(primer[j],.)
}
}
}else{
for (i in 1:3){
for (j in 1:length(primer)){
c[j,i]=s.seq[l.seq[i,1,k]:l.seq[i,2,k]]%>%as.character()%>%grepl(primer[j],.)
}
}
}
if(!(grep('TRUE',c)/length(primer))%>%isEmpty()){
seq.pure[k]=seq[k]%>%
unlist()%>%
(function(x){x[l.seq[(grep('TRUE',c)/length(primer)) %>% ceiling()%>%mean(),1,k]:
l.seq[(grep('TRUE',c)/length(primer)) %>% ceiling()%>%mean(),2,k]]})%>%
as("DNAStringSet")
}
}
}
for (i in 1:length(seq.pure)){
write.fasta(seq.pure[i]%>%as.DNAbin(), names(seq.pure[i])%>%gsub('.seq','',.), names(seq.pure[i])%>%gsub('.seq','.fasta',.))
}
packageStartupMessage(" Done!")
packageStartupMessage("Sequences Alignment...", appendLF = FALSE)
data_seq=cbind(seq@ranges%>%data.frame()%>%(function(x){cbind(x$names, x$width)}),
seq.pure@ranges%>%data.frame()%>%(function(x){x$width})) %>% data.frame()
colnames(data_seq)=c('names','original','selected')
data_seq$original=data_seq$original %>% as.character()%>% as.numeric()
data_seq$selected=data_seq$selected %>% as.character()%>% as.numeric()
data_seq=data.frame(data_seq,success=(data_seq$original!=data_seq$selected))
seq_names=data_seq[data_seq$success==TRUE,]$names %>%
(function(x){x[grep('_F',x)]}) %>%
gsub('_F.fasta','',.)
table_length=data.table()
for (sn in 1:length(seq_names)){
setwd(file.path(desdir,'seq',folder,'fasta'))
set=seq.pure[grep(paste0(seq_names[sn],'_'),new_names)]
if(length(set)==2){
set=DNAStringSet(c(set[names(set)%>%grep('_F',.)],set[names(set)%>%grep('_R',.)]))
aln.r=system(paste('blastn','-subject',set[2]%>%names,'-query',set[1]%>%names),intern=TRUE)
aln.r2=aln.r[(aln.r %>% grep('Score',.)%>%min()):
((aln.r %>% grep('Lambda',.)%>%min())-4)]
aln.in=aln.r2 %>% grep('Score',.)
aln=matrix(nrow=length(aln.in), ncol=2)
for (i in 1:length(aln.in)){
aln[i,1]=aln.in[i]
if (i==length(aln.in)){
aln[i,2]=length(aln.r2)
}else{
aln[i,2]=aln.in[i+1]-3
}
}
aln.len=aln.r2%>% grep('Identities',.)
aln.bigind= aln.r2[aln.len] %>%
strsplit(' ')%>% unlist() %>%
(function(x){x[seq(4,length(x),9)]}) %>%
strsplit('/')%>% unlist() %>%
(function(x){x[seq(2,length(x),2)]}) %>%
(function(x){x==max(x)})
aln.truind=aln[aln.bigind,]
aln.t=aln.r2[aln.truind[1]:aln.truind[2]]
aln.q=aln.t%>% grep('Query',.,value=TRUE)%>%gsub('[A-Z][a-z]*','',.)%>%
strsplit(' ')%>%unlist()%>%as.numeric()%>%na.omit()
aln.s=aln.t%>% grep('Sbjct',.,value=TRUE)%>%gsub('[A-Z][a-z]*','',.)%>%
strsplit(' ')%>%unlist()%>%as.numeric()%>%na.omit()
if(mean(aln.q)>mean(aln.s)){
seq.aln=paste0(set[1]%>%as.character()%>% substr(1,mean(c(max(aln.q),min(aln.q)))),
set[2]%>%as.character()%>% substr(mean(c(max(aln.s),min(aln.s)))+1,nchar(.))) %>%
DNAStringSet()
}else{
seq.aln=paste0(set[2]%>%as.character()%>% substr(1,mean(c(max(aln.s),min(aln.s)))),
set[1]%>%as.character()%>% substr(mean(c(max(aln.q),min(aln.q)))+1,nchar(.))) %>%
DNAStringSet()
}
setwd(file.path(desdir,'seq',folder,'fasta.aln'))
seq.aln%>% as.DNAbin()%>%write.fasta(names=seq_names[sn],file.out=paste(seq_names[sn],'.fasta',sep=''))
table_length=rbind(table_length,data.table(seq_names[sn],seq.aln@ranges@width))
}
}
seq.aln= readDNAStringSet(list.files())
packageStartupMessage(" Done!")
######################################
######################################
if (nt_search){
packageStartupMessage(paste0("Fetching sequence information...\nIt may take at least ",
length(seq.aln)*3,
" mins to complete."), appendLF = FALSE)
db_nt <- blast(db="../../../db/nt")
report=data.frame()
options(download.file.method = "wininet")
for (i in 1: length(seq.aln)){
if (nchar(seq.aln[i])>100){
cl <- predict(db_nt, seq.aln[i])
if(!cl[1]%>%isEmpty()){
if (nrow(cl)<10){
x=cl[order(-cl$Bits),] %>%
(function(x){x[1:nrow(cl),2]})
}else{
x=cl[order(-cl$Bits),] %>%
(function(x){x[1:10,2]})
}
x2=x %>%
as.character() %>%
strsplit('\\|') %>%
unlist()
y=x2[seq(4,length(x2),4)]%>%
genbank2uid()%>%
ncbi_get_taxon_summary()
z=x2[seq(2,length(x2),4)]%>%
entrez_summary(db='Nucleotide',id=.)%>%
extract_from_esummary('title')
if (nrow(cl)<10){
report=rbind(report,
data.frame(Seq.Names=cl[1:nrow(cl),c(1)],
description=z,y[,c(2,3)],
cl[1:nrow(cl),c(3,4,5,6,7,8,9,10)]))
}else{
report=rbind(report,
data.frame(Seq.Names=cl[1:10,c(1)],
description=z,y[,c(2,3)],
cl[1:10,c(3,4,5,6,7,8,9,10)]))
}
}
}
}
packageStartupMessage(" Done!")
write.csv(report,'../summary_seq.csv',row.names = FALSE)
}
packageStartupMessage("Exporting Summary Information...", appendLF = FALSE)
sum_names= gsub('_F.fasta','',data_seq$names) %>%
grep('_R',.,value=TRUE,invert=TRUE)
data_summary=matrix(nrow=length(sum_names),ncol=7)
colnames(data_summary)=c('seq.names','F_length_raw','R_length_raw','F_length_vs','R_length_vs','Suc_or_Fal','aln_length')
for (su in 1:length(sum_names)){
index=data_seq$names %>% grep(paste(sum_names[su],'_',sep=''),.)
index_f= index[data_seq$names[index] %>% grep('F',.)]
index_r= index[data_seq$names[index] %>% grep('R',.)]
data_summary[su,1]=sum_names[su]
if(index_f%>%isEmpty()){
data_summary[su,2]=NA
data_summary[su,3]=data_seq[index_r,2]
data_summary[su,4]=NA
data_summary[su,5]=data_seq[index_r,3]
data_summary[su,6]='Failure'
data_summary[su,7]=NA
}else if(index_r%>%isEmpty()){
data_summary[su,2]=data_seq[index_f,2]
data_summary[su,3]=NA
data_summary[su,4]=data_seq[index_f,3]
data_summary[su,5]=NA
data_summary[su,6]='Failure'
data_summary[su,7]=NA
}else{
data_summary[su,2]=data_seq[index_f,2]
data_summary[su,3]=data_seq[index_r,2]
data_summary[su,4]=data_seq[index_f,3]
data_summary[su,5]=data_seq[index_r,3]
data_summary[su,6]=if(data_seq[index_f,4]&data_seq[index_r,4]){'Success'}else{'Failure'}
if(!table_length[,V1]%>%grep(sum_names[su],.)%>%isEmpty()){
data_summary[su,7]=table_length[table_length[,V1]%>%grep(paste0(sum_names[su],'$'),.),V2]
}else{
data_summary[su,7]=NA
}
}
}
data_summary=data_summary%>% as.data.frame()
write.csv(data_summary,'../summary_aln.csv')
packageStartupMessage(paste0('Done! Program End! \n\n\nFiles Location: ',
file.path(desdir,'seq',folder)))
}
DownloadFTP=function(source, username, password, des_folder){
filenames= getURL(source,userpwd=paste0(username,':',password),
verbose=TRUE,ftp.use.epsv=TRUE, dirlistonly = TRUE) %>%
strsplit("[\\\\]|[^[:print:]]",fixed = FALSE) %>%
unlist() %>% (function(x){x[grep('seq',x)]})
filepath=sprintf(paste0('ftp://',
paste0(username,':',password),'@',
(source%>%gsub('ftp://','',.)),'%s'),filenames)
if (!file.exists(file.path(desdir,'Download',des_folder))){
dir.create(file.path(desdir,'Download',des_folder),recursive = TRUE)
}
for (i in 1:(length(filenames))){
download.file(filepath[i],
file.path(desdir,'Download',des_folder,filenames[i]))
}
}
FetchingSeq=function(folder){
setwd(filePath(desdir,'seq',folder,'fasta.aln'))
seq.aln= readDNAStringSet(list.files())
db_nt <- blast(db="../../../db/nt")
report=data.frame()
options(download.file.method = "wininet")
packageStartupMessage(paste0("Fetching sequence information...\nIt may take at least ",
length(seq.aln)*3,
" mins to complete."), appendLF = FALSE)
for (i in 1: length(seq.aln)){
if (nchar(seq.aln[i])>100){
cl <- predict(db_nt, seq.aln[i])
if(!cl[1]%>%isEmpty()){
if (nrow(cl)<10){
x=cl[order(-cl$Bits),] %>%
(function(x){x[1:nrow(cl),2]})
}else{
x=cl[order(-cl$Bits),] %>%
(function(x){x[1:10,2]})
}
x2=x %>%
as.character() %>%
strsplit('\\|') %>%
unlist()
y=x2[seq(4,length(x2),4)]%>%
genbank2uid()%>%
ncbi_get_taxon_summary()
z=x2[seq(2,length(x2),4)]%>%
entrez_summary(db='Nucleotide',id=.)%>%
extract_from_esummary('title')
if (nrow(cl)<10){
report=rbind(report,
data.frame(Seq.Names=cl[1:nrow(cl),c(1)],
description=z,y[,c(2,3)],
cl[1:nrow(cl),c(3,4,5,6,7,8,9,10)]))
}else{
report=rbind(report,
data.frame(Seq.Names=cl[1:10,c(1)],
description=z,y[,c(2,3)],
cl[1:10,c(3,4,5,6,7,8,9,10)]))
}
}
}
  }
  packageStartupMessage(" Done!")
  write.csv(report,'../summary_seq.csv',row.names = FALSE)
}
|
/MCLab_Function.R
|
no_license
|
Poissonfish/M.C.Lab
|
R
| false
| false
| 17,805
|
r
|
if(index_f%>%isEmpty()){
data_summary[su,2]=NA
data_summary[su,3]=data_seq[index_r,2]
data_summary[su,4]=NA
data_summary[su,5]=data_seq[index_r,3]
data_summary[su,6]='Failure'
data_summary[su,7]=NA
}else if(index_r%>%isEmpty()){
data_summary[su,2]=data_seq[index_f,2]
data_summary[su,3]=NA
data_summary[su,4]=data_seq[index_f,3]
data_summary[su,5]=NA
data_summary[su,6]='Failure'
data_summary[su,7]=NA
}else{
data_summary[su,2]=data_seq[index_f,2]
data_summary[su,3]=data_seq[index_r,2]
data_summary[su,4]=data_seq[index_f,3]
data_summary[su,5]=data_seq[index_r,3]
data_summary[su,6]=if(data_seq[index_f,4]&data_seq[index_r,4]){'Success'}else{'Failure'}
if(!table_length[,V1]%>%grep(sum_names[su],.)%>%isEmpty()){
data_summary[su,7]=table_length[table_length[,V1]%>%grep(paste0(sum_names[su],'$'),.),V2]
}else{
data_summary[su,7]=NA
}
}
}
data_summary=data_summary%>% as.data.frame()
write.csv(data_summary,'../summary_aln.csv')
packageStartupMessage(paste0('Done! Program End! \n\n\nFiles Location: ',
file.path(desdir,'seq',folder)))
}
DownloadFTP=function(source, username, password, des_folder){
filenames= getURL(source,userpwd=paste0(username,':',password),
verbose=TRUE,ftp.use.epsv=TRUE, dirlistonly = TRUE) %>%
strsplit("[\\\\]|[^[:print:]]",fixed = FALSE) %>%
unlist() %>% (function(x){x[grep('seq',x)]})
filepath=sprintf(paste0('ftp://',
paste0(username,':',password),'@',
(source%>%gsub('ftp://','',.)),'%s'),filenames)
if (!file.exists(file.path(desdir,'Download',des_folder))){
dir.create(file.path(desdir,'Download',des_folder),recursive = TRUE)
}
for (i in 1:(length(filenames))){
download.file(filepath[i],
file.path(desdir,'Download',des_folder,filenames[i]))
}
}
FetchingSeq=function(folder){
  setwd(file.path(desdir,'seq',folder,'fasta.aln'))
seq.aln= readDNAStringSet(list.files())
db_nt <- blast(db="../../../db/nt")
report=data.frame()
options(download.file.method = "wininet")
packageStartupMessage(paste0("Fetching sequence information...\nIt may take at least ",
length(seq.aln)*3,
" mins to complete."), appendLF = FALSE)
for (i in 1: length(seq.aln)){
if (nchar(seq.aln[i])>100){
cl <- predict(db_nt, seq.aln[i])
if(!cl[1]%>%isEmpty()){
if (nrow(cl)<10){
x=cl[order(-cl$Bits),] %>%
(function(x){x[1:nrow(cl),2]})
}else{
x=cl[order(-cl$Bits),] %>%
(function(x){x[1:10,2]})
}
x2=x %>%
as.character() %>%
strsplit('\\|') %>%
unlist()
y=x2[seq(4,length(x2),4)]%>%
genbank2uid()%>%
ncbi_get_taxon_summary()
z=x2[seq(2,length(x2),4)]%>%
entrez_summary(db='Nucleotide',id=.)%>%
extract_from_esummary('title')
if (nrow(cl)<10){
report=rbind(report,
data.frame(Seq.Names=cl[1:nrow(cl),c(1)],
description=z,y[,c(2,3)],
cl[1:nrow(cl),c(3,4,5,6,7,8,9,10)]))
}else{
report=rbind(report,
data.frame(Seq.Names=cl[1:10,c(1)],
description=z,y[,c(2,3)],
cl[1:10,c(3,4,5,6,7,8,9,10)]))
}
}
}
  }
  packageStartupMessage(" Done!")
  write.csv(report,'../summary_seq.csv',row.names = FALSE)
}
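The Score-block bookkeeping in the alignment step above (pairing each blastn "Score" header with the lines belonging to that alignment block) can be sketched on a toy report; the lines below are illustrative stand-ins, not real blastn output:

```r
# Toy blastn-style text report: two alignment blocks, each opened by a
# "Score" line and separated by two blank lines, as in real blastn output.
aln.r2 <- c(" Score = 100 bits", "Query  1  ACGT", "Sbjct  1  ACGT",
            "", "",
            " Score = 80 bits", "Query  5  GGTT", "Sbjct  5  GGTT")
aln.in <- grep("Score", aln.r2)
aln <- matrix(nrow = length(aln.in), ncol = 2)
for (i in seq_along(aln.in)) {
  aln[i, 1] <- aln.in[i]
  # the last block runs to the end of the report; earlier blocks stop
  # 3 lines before the next "Score" header (skipping blank separators)
  aln[i, 2] <- if (i == length(aln.in)) length(aln.r2) else aln.in[i + 1] - 3
}
aln  # one row per block: [first line, last line]
```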
|
lineChart <- function(chart.matrix,
series.line.width,
series.marker.show)
{
ErrorIfNotEnoughData(chart.matrix)
no.data.in.series <- colSums(is.na(chart.matrix)) >= length(chart.matrix[, 1])
if (any(no.data.in.series))
chart.matrix <- chart.matrix[, !no.data.in.series]
## Check that line width is at least 1
if (series.line.width < 1)
series.line.width <- 1
## Showing markers and lines
series.mode <- if (series.line.width >= 1 && (is.null(series.marker.show) || series.marker.show == "none"))
"lines"
else
"lines+markers"
return(list(chart.matrix = chart.matrix,
series.mode = series.mode,
series.line.width = series.line.width))
}
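The mode selection inside lineChart reduces to a small predicate; a sketch (the selectMode name is illustrative, not part of the package):

```r
# Choose the plotly-style trace mode the same way lineChart does:
# markers are only added when a real marker type is requested.
selectMode <- function(series.marker.show) {
  if (is.null(series.marker.show) || series.marker.show == "none")
    "lines"
  else
    "lines+markers"
}
selectMode(NULL)
selectMode("circle")
```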
|
/R/deprecated_linechart.R
|
no_license
|
neraunzaran/flipStandardCharts
|
R
| false
| false
| 782
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cv_predictiveness_update.R
\name{cv_predictiveness_update}
\alias{cv_predictiveness_update}
\title{Estimate the influence function for an estimator of predictiveness}
\usage{
cv_predictiveness_update(
fitted_values,
y,
folds,
weights = rep(1, length(y)),
type = "r_squared",
na.rm = FALSE
)
}
\arguments{
\item{fitted_values}{fitted values from a regression function; a list of length V, where each object is a set of predictions on the validation data.}
\item{y}{the outcome.}
\item{folds}{the cross-validation folds}
\item{weights}{weights for the computed influence curve (e.g., inverse probability weights for coarsened-at-random settings)}
\item{type}{which risk parameter are you estimating (defaults to \code{r_squared}, for the $R^2$)?}
\item{na.rm}{logical; should NAs be removed in computation? (defaults to \code{FALSE})}
}
\value{
The estimated influence function values for the given measure of predictiveness.
}
\description{
Estimate the influence function for the given measure of predictiveness.
}
\details{
See the paper by Williamson, Gilbert, Simon, and Carone for more
details on the mathematics behind this function and the definition of the parameter of interest.
}
|
/man/cv_predictiveness_update.Rd
|
permissive
|
jjfeng/vimp
|
R
| false
| true
| 1,284
|
rd
|
# Linear Discriminant Analysis (LDA) followed by a linear SVM classifier
# Importing the dataset
dataset = read.csv('Wine.csv')
#dataset = dataset[3:5]
# Encoding the target feature as factor
#dataset$Purchased = factor(dataset$Purchased, levels = c(0, 1))
# Splitting the dataset into the Training set and Test set
# install.packages('caTools')
library(caTools)
set.seed(123)
split = sample.split(dataset$Customer_Segment, SplitRatio = 0.8)
training_set = subset(dataset, split == TRUE)
test_set = subset(dataset, split == FALSE)
# Feature Scaling (must come after the split, once training_set/test_set exist)
training_set[-14] = scale(training_set[-14])
test_set[-14] = scale(test_set[-14])
# Apply LDA
library(MASS)
lda = lda(formula = Customer_Segment ~ .,
          data = training_set)
training_set = as.data.frame(predict(lda, training_set))
training_set = training_set[c(5, 6, 1)]
test_set = as.data.frame(predict(lda, test_set))
test_set = test_set[c(5, 6, 1)]
# Fitting a linear SVM to the Training set
# install.packages('e1071')
library(e1071)
classifier = svm(formula = class ~ .,
                 data = training_set,
                 type = 'C-classification',
                 kernel = 'linear')
# Predicting the Test set results
y_pred = predict(classifier, newdata = test_set[-3])
# Making the Confusion Matrix
cm = table(test_set[, 3], y_pred )
# Visualising the Test set results
library(ElemStatLearn)
set = test_set
X1 = seq(min(set[, 1]) - 1, max(set[, 1]) + 1, by = 0.03)
X2 = seq(min(set[, 2]) - 1, max(set[, 2]) + 1, by = 0.03)
grid_set = expand.grid(X1, X2)
colnames(grid_set) = c('x.LD1', 'x.LD2')
y_grid = predict(classifier, newdata = grid_set)
plot(set[, -3],
     main = 'LDA + SVM (Test set)',
     xlab = 'LD1', ylab = 'LD2',
     xlim = range(X1), ylim = range(X2))
contour(X1, X2, matrix(as.numeric(y_grid), length(X1), length(X2)), add = TRUE)
points(grid_set, pch = '.', col = ifelse(y_grid==2,'deepskyblue',ifelse(y_grid == 1, 'springgreen3', 'tomato')))
points(set, pch = 21, bg = ifelse(set[, 3] == 2,'blue3', ifelse(set[, 3] == 1, 'green4', 'red3')))
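The confusion matrix cm above can be summarised as an accuracy figure; a sketch on a toy 3-class matrix (the counts are illustrative):

```r
# Toy 3-class confusion matrix: rows = truth, columns = prediction.
cm <- matrix(c(14,  1, 0,
                0, 15, 1,
                0,  0, 5), nrow = 3, byrow = TRUE)
accuracy <- sum(diag(cm)) / sum(cm)  # correct predictions over all predictions
accuracy
```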
|
/Part 9 - Dimensionality Reduction/Section 44 - Linear Discriminant Analysis (LDA)/LDA/myLDA.R
|
no_license
|
Sash-github-account/ML_A_to_Z
|
R
| false
| false
| 2,051
|
r
|
#' Title: fars_read_years
#' Description: For each input year, this function calls the make_filename function,
#' which produces the file name for that year's data,
#' then loads the generated filename into R using the fars_read function,
#' and then filters the data to just the month and year columns.
#' If a file for a given year doesn't exist, a warning is produced.
#' Usage: fars_read_years(years)
#' Arguments: years. The years for which to find files and bring into R. Can be a single value or multiple values.
#' @param years A numeric or integer vector of years for which to load data files.
#' Value: Returns the data frames for files that exist for the input years; a warning is issued otherwise.
#' @return A list with one element per year: a data frame of MONTH and year columns, or NULL (with a warning) for years without a file.
#' @examples
#' \dontrun{
#' fars_read_years(years=c(2013,2015))
#' fars_read_years(c(2014,2015)) }
#' @importFrom dplyr mutate select
#' @export
fars_read_years <- function(years) {
lapply(years, function(year) {
file <- make_filename(year)
tryCatch({
dat <- fars_read(file)
dplyr::mutate(dat, year = year) %>%
dplyr::select(MONTH, year)
}, error = function(e) {
warning("invalid year: ", year)
return(NULL)
})
})
}
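The lapply + tryCatch pattern used by fars_read_years can be sketched with a stand-in reader so it runs without the FARS data files (safe_read is illustrative, not part of the package):

```r
# For each year, try to build the data; on failure, warn and return NULL
# instead of aborting the whole lapply.
safe_read <- function(year) {
  tryCatch({
    if (year < 2013) stop("no data file for this year")  # simulate a missing file
    data.frame(MONTH = 1:2, year = year)
  }, error = function(e) {
    warning("invalid year: ", year)
    NULL
  })
}
res <- suppressWarnings(lapply(c(2012, 2013), safe_read))
```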
|
/R/fars_read_years.R
|
no_license
|
latsa001/Coursera-Assignment-Build-an-R-Package
|
R
| false
| false
| 1,165
|
r
|
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Ready workspace
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Identify git directory and remove previous objects if not already done
if (!exists("git.dir")) {
rm(list = ls(all = T))
wd <- c("C:/Users/Jim Hughes/Documents", "C:/Users/hugjh001/Documents",
"C:/Users/hugjh001/Desktop", "C:/windows/system32", "C:/Users/hugjh001/Documents/len_pbpk")
graphics.off()
if (getwd() == wd[1]) {
git.dir <- paste0(getwd(), "/GitRepos")
reponame <- "len_pbpk"
} else if (getwd() == wd[2] | getwd() == wd[5]) {
git.dir <- "C:/Users/hugjh001/Documents"
reponame <- "len_pbpk"
} else if (getwd() == wd[3] | getwd() == wd[4]) {
git.dir <- "E:/Hughes/Git"
reponame <- "len_pbpk"
}
rm("wd")
}
# Load libraries
library(plyr)
library(ggplot2)
library(scales)
# Customize ggplot2 theme
theme_bw2 <- theme_set(theme_bw(base_size = 14))
theme_update(plot.title = element_text(hjust = 0.5))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Plot data
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Source data then redefine objects to allow them to be row bound
# The workspace will contain:
# obsdata - PKSim data melted and cast into afferent, efferent and tissue
# columns for each of the four PKSim compartments. The tissue type is in
# long format for use with facet_wrap().
# simdata - MoBi data rbound into columns that match that of obsdata. Again,
# tissue type is in long format.
# Units for all data are:
# time - minutes
# concentration - nmol/mL
# First redefine mouse data
source("rscripts/mobi_dataprep_base.R")
mobsdata <- obsdata
msimdata <- simdata
# Then the human data
source("rscripts/mobi_dataprep_basehuman.R")
hobsdata <- obsdata
hsimdata <- simdata
# Give the datasets a species column before binding together
mobsdata$SPECIES <- "Mouse"
msimdata$SPECIES <- "Mouse"
hobsdata$SPECIES <- "Human"
hsimdata$SPECIES <- "Human"
obsdata <- rbind(mobsdata, hobsdata)
simdata <- rbind(msimdata, hsimdata)
# Create a joint column describing both the tissue and species
obsdata$MODEL <- with(obsdata, paste(SPECIES, TISSUE))
simdata$MODEL <- with(simdata, paste(SPECIES, TISSUE))
# First create our plot datasets (pdata)
desired.columns <- c("TIME", "plAfferent", "bcAfferent", "tiTissue", "MODEL",
"TISSUE", "SPECIES")
obspdata <- obsdata[, desired.columns]
simpdata <- simdata[, desired.columns]
simpdata$RES <- simpdata$tiTissue - obspdata$tiTissue
simpdata$PROPRES <- simpdata$RES/obspdata$tiTissue*100
# Create data.frame to dictate the geom_text arguments
# Want human geom_text to be in the lower half of the plot, mouse in the upper
textdata <- ddply(simpdata, .(MODEL, SPECIES), function(df) {
meanerr <- mean(df$PROPRES, na.rm = T)
errtext <- paste0(signif(meanerr, 3), "%")
out <- data.frame(meanERR = errtext)
if (unique(df$SPECIES) == "Human") {
out$yaxis <- -25
  } else { # if unique(df$SPECIES) == "Mouse"
out$yaxis <- 25
}
out
})
# Define colourblind palette
cbPalette <- c("#D55E00", "#E69F00", "#CC79A7", "#009E73", "#0072B2")
# Plot proportional residuals against time
p <- NULL
p <- ggplot()
p <- p + geom_point(aes(x = TIME/60, y = PROPRES, colour = TISSUE),
shape = 1, data = simpdata)
p <- p + geom_hline(yintercept = 0, linetype = "dashed", colour = "green4")
p <- p + geom_text(aes(label = meanERR, y = yaxis), x = 850/60, data = textdata)
p <- p + scale_colour_manual("Model", values = cbPalette)
p <- p + scale_x_continuous("Time (h)", breaks = c(0, 4, 8, 12, 16))
p <- p + scale_y_continuous("Percent Error (%)", lim = c(-50, 50))
p <- p + facet_wrap(~MODEL, ncol = 2, dir = "v")
p <- p + theme(legend.position = "none") # remove legend
p
# Produce Figure 3 for the MoBi methods paper
ggsave("produced_data/mobi_paper_tissue2.png", width = 17.4, height = 17.4,
units = c("cm"))
ggsave("produced_data/Figure3.eps", width = 17.4, height = 23.4,
units = c("cm"), dpi = 1200, device = cairo_ps, fallback_resolution = 1200)
ggsave("produced_data/mobi_paganz_tissue2.png", width = 17.4, height = 12.4,
units = c("cm"))
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Extra Plots
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# p1 - Plot tissue concentrations against time
# p1 <- NULL
# p1 <- ggplot()
# p1 <- p1 + geom_point(aes(x = TIME, y = tiTissue),
# data = obspdata, colour = "blue", shape = 1)
# p1 <- p1 + geom_line(aes(x = TIME, y = tiTissue),
# data = simpdata, colour = "red")
# p1 <- p1 + xlab("Time (min)")
# p1 <- p1 + scale_y_log10("Concentration (nmol/mL)")
# p1 <- p1 + facet_wrap(~MODEL, ncol = 4)
# p1
# p2 - Plot residuals against time
# p2 <- NULL
# p2 <- ggplot()
# p2 <- p2 + geom_point(aes(x = TIME, y = RES), data = simpdata)
# p2 <- p2 + xlab("Time (min)")
# p2 <- p2 + scale_y_continuous("Residuals (nmol/mL)")
# p2 <- p2 + facet_wrap(~MODEL, ncol = 4, scales = "free_y")
# p2
# p3 - Plot residuals against concentration
# p3 <- NULL
# p3 <- ggplot()
# p3 <- p3 + geom_point(aes(x = tiTissue, y = RES), data = simpdata)
# p3 <- p3 + xlab("Concentration (nmol/mL)")
# p3 <- p3 + scale_y_continuous("Residuals (nmol/mL)")
# p3 <- p3 + facet_wrap(~MODEL, ncol = 4, scales = "free")
# p3
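The residual columns computed above (RES and PROPRES) follow a simple recipe; a toy sketch with made-up observed and simulated concentrations:

```r
# Percent error of simulated against observed values (illustrative numbers).
obs <- c(10, 20, 40)        # observed tissue concentrations
sim <- c(11, 18, 40)        # simulated tissue concentrations
res <- sim - obs            # residuals, in the same units as the data
propres <- res / obs * 100  # proportional residuals, in percent
propres
```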
|
/rscripts/mobi_paper_tissue2.R
|
no_license
|
jhhughes256/len_pbpk
|
R
| false
| false
| 5,632
|
r
|
dir = "./"
oname = paste0(dir, "genot_difference_")
data <- NULL
for (g in c("Rad53K", "Mrc1d", "Rad9d")) {
ofile = paste0(oname, g, ".tsv")
if (!file.exists(ofile)) next
a <- read.table(ofile, header=T, sep="\t")
a <- a[,c(2, 6)]
colnames(a) = c(paste0("log2FC_", g), paste0("padj_", g))
if (is.null(data)) {
data <- a
} else {
stopifnot(rownames(data) == rownames(a))
data <- cbind(data, a)
}
}
if (!is.null(data))
write.table(data, "de_result_mc_genotype.tsv", sep="\t", quote=F)
oname = paste0(dir, "genotype_difference_")
data <- NULL
for (time in c("G1", "HU45", "HU90")) {
for (g in c("Rad53K", "Mrc1d", "Rad9d")) {
ofile = paste0(oname, time, "_", g, ".tsv")
if (!file.exists(ofile)) next
a <- read.table(ofile, header=T, sep="\t")
a <- a[,c(2, 6)]
colnames(a) = c(paste0("log2FC_", time, "_", g), paste0("padj_", time, "_", g))
if (is.null(data)) {
data <- a
} else {
stopifnot(rownames(data) == rownames(a))
data <- cbind(data, a)
}
}
}
if (!is.null(data))
write.table(data, "de_result_sc_genotype.tsv", sep="\t", quote=F)
oname = paste0(dir, "time_difference_")
data <- NULL
for (time in c("HU45", "HU90")) {
for (g in c("WT", "Rad53K", "Mrc1d", "Rad9d")) {
ofile = paste0(oname, time, "_", g, ".tsv")
if (!file.exists(ofile)) next
a <- read.table(ofile, header=T, sep="\t")
a <- a[,c(2, 6)]
colnames(a) = c(paste0("log2FC_", time, "_", g), paste0("padj_", time, "_", g))
if (is.null(data)) {
data <- a
} else {
stopifnot(rownames(data) == rownames(a))
data <- cbind(data, a)
}
}
}
if (!is.null(data))
write.table(data, "de_result_sc_time.tsv", sep="\t", quote=F)
oname = paste0(dir, "genotype_difference_")
data <- NULL
for (time in c("G1", "HU45", "HU90")) {
genotypes = c("WT", "Rad53K", "Mrc1d", "Rad9d")
for (gi in 1:3) {
for (gh in (gi+1):4) {
ofile = paste0(oname, time, "_", genotypes[gh], "_", genotypes[gi], ".tsv")
if (!file.exists(ofile)) next
a <- read.table(ofile, header=T, sep="\t")
a <- a[,c(2, 6)]
colnames(a) = c(paste0("log2FC_", time, "_", genotypes[gh], "_", genotypes[gi]), paste0("padj_", time, "_", genotypes[gh], "_", genotypes[gi]))
if (is.null(data)) {
data <- a
} else {
stopifnot(rownames(data) == rownames(a))
data <- cbind(data, a)
}
}
}
}
if (!is.null(data))
write.table(data, "de_result_sc_each_genot.tsv", sep="\t", quote=F)
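The accumulation pattern used throughout this script (cbind new result columns only after checking that row order matches) can be factored into a helper and sketched with toy frames (merge_cols is illustrative, not part of the script):

```r
# Append a new pair of result columns, guarding against reordered gene rows.
merge_cols <- function(data, a) {
  if (is.null(data)) return(a)
  stopifnot(rownames(data) == rownames(a))
  cbind(data, a)
}
d1 <- data.frame(log2FC_A = c(1, 2), row.names = c("g1", "g2"))
d2 <- data.frame(log2FC_B = c(3, 4), row.names = c("g1", "g2"))
merged <- merge_cols(merge_cols(NULL, d1), d2)
```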
|
/rna_analysis/script/merge_de_result.R
|
permissive
|
carushi/yeast_coexp_analysis
|
R
| false
| false
| 2,735
|
r
|
% Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/getNewsfeed.R
\name{getNewsfeed}
\alias{getNewsfeed}
\title{Download recent posts from the authenticated user's newsfeed}
\usage{
getNewsfeed(token, n = 200)
}
\arguments{
\item{token}{Either a temporary access token created at
\url{https://developers.facebook.com/tools/explorer} or the OAuth token
created with \code{fbOAuth}.}
\item{n}{Maximum number of posts to return.}
}
\description{
\code{getNewsfeed} retrieves status updates from the authenticated user's
News Feed
}
\examples{
\dontrun{
## See examples for fbOAuth to know how token was created.
## Capture 100 most recent posts on my newsfeed
load("fb_oauth")
my_newsfeed <- getNewsfeed(token=fb_oauth, n=100)
}
}
\author{
Pablo Barbera \email{pablo.barbera@nyu.edu}
}
\seealso{
\code{\link{fbOAuth}}, \code{\link{getPost}}
}
|
/Rfacebook/man/getNewsfeed.Rd
|
no_license
|
kwang557/Rfacebook
|
R
| false
| false
| 878
|
rd
|
##----------source the R file that downloads and reads the data---------
source("readPowerData.R")
##---------make the plots----------------------
par(mfrow=c(2,2))
# ------Global_active_power
plot(powerConsumptionDataFiltered$Time, powerConsumptionDataFiltered$Global_active_power, type="l", xlab="", ylab="Global Active Power",cex.lab=0.7, cex.axis=0.8)
# ------Voltage
plot(powerConsumptionDataFiltered$Time, powerConsumptionDataFiltered$Voltage,type="l", ylab="Voltage", xlab="datetime", cex.lab=0.7, cex.axis=0.8)
# -------Sub_metering
plot(powerConsumptionDataFiltered$Time, powerConsumptionDataFiltered$Sub_metering_1, type="l", ylab="Energy sub metering", xlab="", cex.lab=0.7, cex.axis=0.8)
lines(powerConsumptionDataFiltered$Time, powerConsumptionDataFiltered$Sub_metering_2, col="red")
lines(powerConsumptionDataFiltered$Time, powerConsumptionDataFiltered$Sub_metering_3, col="blue")
legend("topright", legend=c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"), lty=c(1,1,1), col=c("black","red", "blue"), cex=0.6, bty="n")
# ---------Global_reactive_power
plot(powerConsumptionDataFiltered$Time, powerConsumptionDataFiltered$Global_reactive_power,type="l",lwd=0.5,xlab="datetime", ylab="Global_reactive_power",cex.lab=0.8, cex.axis=0.8)
dev.copy(png,"plot4.png", width = 480, height = 480)
dev.off()
|
/plot4.R
|
no_license
|
mlopezlago/ExData_Plotting1
|
R
| false
| false
| 1,322
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/wgroup.R
\name{[.WGroup}
\alias{[.WGroup}
\title{subset WGroup}
\usage{
\method{[}{WGroup}(x, i)
}
\arguments{
\item{x}{a WGroup object}
\item{i}{integer indexing element}
}
\value{
a subset of WGroup or NULL
}
\description{
subset WGroup
}
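Since the Rd above only documents the interface, here is a minimal illustrative sketch of how an S3 `[` method with this contract (subset or NULL) can be written; the `children` list component is a hypothetical stand-in for wheatmap's real internals:

```r
# Hypothetical sketch only: assumes a WGroup keeps its members in a
# list component named `children`; wheatmap's real internals may differ.
`[.WGroup` <- function(x, i) {
  if (i < 1 || i > length(x$children)) return(NULL)  # out of range -> NULL
  x$children[[i]]
}
```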
|
/man/sub-.WGroup.Rd
|
no_license
|
zwdzwd/wheatmap
|
R
| false
| true
| 320
|
rd
|
# imports
library(rEDM)
library(plyr)
library(ggplot2)
# load data
df <- read.csv("data/steering.csv")
# segment into library/prediction sets
n <- NROW(df)
lib <- c(1, floor(2/3 * n))
pred <- c(floor(2/3 * n) + 1, n)
# segregate the columns
steering <- df[c("Time", "Steering.wheel.angle")]
left <- df[c("Time", "Left.wheel.RPM")]
right <- df[c("Time", "Right.wheel.RPM")]
# scale the columns
steering$Steering.wheel.angle <- scale(steering$Steering.wheel.angle)
left$Left.wheel.RPM <- scale(left$Left.wheel.RPM)
right$Right.wheel.RPM <- scale(right$Right.wheel.RPM)
# identify the optimal embedding dimension (E)
ComputeE <- function(var) {
out <- simplex(var, lib=lib, pred=pred, E=1:10)
out[is.na(out)] <- 0
out$rho[out$rho < 0] <- 0
E <- which.max(as.numeric(unlist(out[c("rho")])))
E <- as.numeric(unlist(out[c("E")]))[E]
plot(out$E, out$rho, type="l", main=colnames(var)[-1], xlab="E", ylab="ρ")
return(E)
}
# identify the optimal tps
ComputeTp <- function(var, sequence=0:10) {
opt_e <- ComputeE(var)
out <- simplex(var, lib=lib, pred=pred, E=opt_e, tp=sequence)
tp <- which.max(as.numeric(unlist(out[c("rho")])))
tp <- as.numeric(unlist(out[c("tp")]))[tp]
plot(out$tp, out$rho, type="l", main=colnames(var)[-1], xlab="tp", ylab="ρ")
return(tp)
}
# identify nonlinearity
PlotNonlinearity <- function(var) {
opt_e <- ComputeE(var)
out <- s_map(var, lib, pred, E=opt_e)
plot(out$theta, out$rho, type="l", main=colnames(var)[-1], xlab="θ", ylab="ρ")
}
# make and plot predictions
PlotPredictions <- function(var, sequence=0:10) {
opt_e <- ComputeE(var)
opt_tp <- ComputeTp(var, sequence)
out <- simplex(var, lib=lib, pred=pred, E=opt_e, tp=sequence, stats_only=FALSE)
preds <- na.omit(out$model_output[[opt_tp + 1]])
plot(var, type="l", main=colnames(var)[-1], xlab="Time", ylab="Value")
lines(preds$time, preds$pred, col="blue", lty=2)
polygon(c(preds$time, rev(preds$time)), c(preds$pred - sqrt(preds$pred_var),
rev(preds$pred + sqrt(preds$pred_var))), col=rgb(0,0,1,0.3), border=NA)
}
# conduct and plot analysis of causality
PlotCausality <- function(vars=list(steering, left, right), tp=-10:10) {
# identify the optimal embedding dimension (E) for each column
i <- 1
opt_e <- list()
var_names <- list()
for (var in vars) {
var_names[i] <- colnames(var)[-1]
opt_e[i] <- ComputeE(var)
i <- i + 1
}
opt_e <- unlist(opt_e)
var_names <- unlist(var_names)
# get every combination of var1 xmap var2
# add an (E) column that corresponds to the lib column
params <- expand.grid(lib_column=var_names, target_column=var_names,
tp=tp, stringsAsFactors=FALSE)
params <- params[params$lib_column != params$target_column,]
rownames(params) <- NULL
params$E <- as.integer(mapvalues(params$lib_column, var_names,
opt_e, warn_missing=FALSE))
# compute causality
out <- do.call(rbind, lapply(seq_len(NROW(params)), function(i) {
ccm(df, E=params$E[i], random_libs=FALSE, lib_sizes=n,
lib_column=params$lib_column[i],
target_column=params$target_column[i],
tp=params$tp[i], silent=TRUE)
}))
# add a new column
out$direction <- paste(out$lib_column, "xmap", out$target_column)
# plot the causalities
labels <- paste(as.character(round(0.1 * tp, 2)), "(s)")
ggplot(out, aes(tp, rho, colour=direction)) + geom_line() + geom_point() +
geom_vline(xintercept=0, linetype="dashed") + labs(x="tp", y="ρ") +
scale_x_discrete(limits=tp, labels=labels)
}
# conduct and plot analysis of causality means
PlotCausalityMeans <- function(var1, var2) {
# compute means
label1 <- colnames(var1)[-1]
label2 <- colnames(var2)[-1]
opt_e <- ComputeE(df[c("Time", label1, label2)])
var1_xmap_var2_means <- ccm_means(ccm(df, E=opt_e, num_samples=100,
lib_column=label1, target_column=label2, lib_sizes=seq(50, 950, by=50),
random_libs=TRUE, replace=TRUE))
var2_xmap_var1_means <- ccm_means(ccm(df, E=opt_e, num_samples=100,
lib_column=label2, target_column=label1, lib_sizes=seq(50, 950, by=50),
random_libs=TRUE, replace=TRUE))
# compute parallel maxima
y1 <- pmax(0, var1_xmap_var2_means$rho)
y2 <- pmax(0, var2_xmap_var1_means$rho)
# plot the means
title <- paste(label1, "&", label2)
  limits <- c(min(min(y1), min(y2)), max(max(y1), max(y2)))
  plot(var1_xmap_var2_means$lib_size, y1, type="l", ylim=limits,
       main=title, xlab="Library Size", ylab="ρ", col="red")
lines(var2_xmap_var1_means$lib_size, y2, col="blue")
legend(x="topleft", col=c("red", "blue"), lwd=1, bty="n", inset=0.02, cex=0.8,
legend=c(paste(label1, "xmap", label2), paste(label2, "xmap", label1)))
}
# calls all other functions
RunEdm <- function(vars=list(steering, left, right)){
for (var in vars) {
PlotNonlinearity(var)
PlotPredictions(var)
}
for (var1 in vars) {
for (var2 in vars) {
if (colnames(var1)[-1] != colnames(var2)[-1]) {
PlotCausalityMeans(var1, var2)
}
}
}
PlotCausality(vars)
}
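The parameter grid that drives the CCM loop in `PlotCausality()` is a reusable pattern: cross all library/target pairs with `expand.grid()`, then drop the self-pairs. The pattern in isolation:

```r
# The expand.grid pattern from PlotCausality(), in isolation:
# 3 variables give 9 combinations, minus 3 self-pairs = 6 ordered pairs.
vars <- c("Steering.wheel.angle", "Left.wheel.RPM", "Right.wheel.RPM")
params <- expand.grid(lib_column=vars, target_column=vars,
                      stringsAsFactors=FALSE)
params <- params[params$lib_column != params$target_column, ]
nrow(params)  # 6
```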
|
/thesis/appendix/edm.R
|
no_license
|
david-crow/afit
|
R
| false
| false
| 5,272
|
r
|
% Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/imbalanceMetrics.R
\name{I1}
\alias{I1}
\title{I1}
\usage{
I1(tree)
}
\arguments{
\item{tree}{A tree of class \code{phylo} or \code{treeshape}.}
}
\value{
An object of class \code{numeric}.
}
\description{
This calculates I1, a weighted average of the balance of internal nodes, where \eqn{N} is the number of tips, \eqn{j} is the number of nodes, and \eqn{r_j} and \eqn{l_j} represent the number of tips in the left and right subtrees respectively. Then,
\deqn{ I_1 = \frac{2}{(N-1)(N-2)} \sum_{j \in \mathcal{I} } {|r_j-l_j|}.}
I1 is closely related to the Colless index, which can be found using \code{\link[apTreeshape]{colless}}, or by multiplying I1 by \deqn{\frac{(N-1)(N-2)}{2}.}
}
\examples{
N=30
tree<-rtreeshape(1,tip.number=N,model="pda")[[1]]
I1(tree)
}
\references{
\itemize{
\item Pompei S, Loreto V, Tria F (2012) Phylogenetic Properties of RNA Viruses. PLoS ONE 7(9): e44849. doi:10.1371/journal.pone.0044849
\item Purvis A, Agapow PM (2002) Phylogeny imbalance: Taxonomic level matters. Systematic Biology 51: 844-854.
\item Fusco G, Cronk Q (1995) A new method for evaluating the shape of large phylogenies. Journal of Theoretical Biology 175: 235-243. doi: 10.1006/jtbi.1995.0136
}
}
\keyword{Colless}
\keyword{Index}
\keyword{asymmetry,}
\keyword{imbalance,}
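The \deqn formula above translates directly into R. This is an illustrative re-implementation for a fully binary ape-style `phylo` tree (tips numbered 1..N, internal nodes N+1..N+Nnode), not the package source:

```r
# Illustrative sketch of the I1 formula for a binary ape-style phylo
# tree; not the treeImbalance package implementation.
i1_sketch <- function(tree) {
  n <- length(tree$tip.label)
  tips_below <- function(node) {          # tips descending from a node
    if (node <= n) return(1)
    kids <- tree$edge[tree$edge[, 1] == node, 2]
    sum(vapply(kids, tips_below, numeric(1)))
  }
  internal <- (n + 1):(n + tree$Nnode)
  d <- vapply(internal, function(j) {     # |r_j - l_j| per internal node
    kids <- tree$edge[tree$edge[, 1] == j, 2]
    abs(tips_below(kids[1]) - tips_below(kids[2]))
  }, numeric(1))
  2 * sum(d) / ((n - 1) * (n - 2))        # normalized sum over nodes
}
```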
|
/man/I1.Rd
|
permissive
|
dverac/treeImbalance
|
R
| false
| false
| 1,370
|
rd
|
compliment <- function(name) {
if (!is.character(name)) {
stop("The input must be a character string.")
}
words <- paste0("Hello ", name, ", I like your face.")
print(words)
}
# compliment("Sally")
|
/functions/compliment_func.R
|
no_license
|
cjarayata/UpennCarpentry
|
R
| false
| false
| 211
|
r
|
.peprow <- function(n,x,prot){
pprot <- prot[which(prot$protid==x$protid[n]),]
pr <- data.frame(pep.protid=pprot$protid,
pep.start=pprot$start+1,
pep.end=pprot$start+pprot$len,
pep.mass=x$pep.mass[n],
pep.xlsite=x$pep.xlsite[n],
pep.seq=pprot$seq)
names(pr) <- sub("pep",paste0("pep",n),names(pr))
pr
}
#' Crosslink peptide subset
#'
#' Subset the bupid object by peptide number
#'
#' @param object
#' bupid object returned from another function
#' @param pepnum
#' peptide number
#'
#' @return Returns a bupid object with results for one peptide
#'
#' @examples
#' \dontrun{
#' server <- "http://bupid.bumc.bu.edu/cgi-bin/get_results.cgi"
#' infile <- "key=WBNqTswT5DPg3aDO&ID=320&date=20150309"
#' data <- read.bupid(url=paste(server,infile,sep="?"))
#' sdata <- subset(data,scan.num==234,"protein")
#' fragment.matched.ions(xlinkpep(sdata,1))
#' }
#' @export xlinkpep
xlinkpep <- function(object,pepnum){
prot <- pepnum-1
subset(object,tag.rank==prot,"protein")
}
.xlfragcov <- function(object,xldf){
pid <- object@scan$plid[which(object@scan$scanid==xldf$scan.num)]
lens <- (xldf$pep1.end-xldf$pep1.start+1)+(xldf$pep2.end-xldf$pep2.start+1)
fr=object@fit[which(object@fit$peak.id==pid),c("ion.start","ion.len")]
nrow(unique(fr))/(lens*2-2)
}
#' Crosslink results table
#'
#' Create a dataframe of xlink results
#'
#' @param object
#' bupid object returned from another function
#' @param n
#' number of linked peptides to display per scan
#'
#' @return Returns a data.frame with the assignments.
#'
#' @examples
#' \dontrun{
#' server <- "http://bupid.bumc.bu.edu/cgi-bin/get_results.cgi"
#' infile <- "key=WBNqTswT5DPg3aDO&ID=320&date=20150309"
#' data <- read.bupid(url=paste(server,infile,sep="?"))
#' xlinks(data,2)
#' }
#' @export xlinks
xlinks <- function(object,n=2){
td <- do.call("rbind",lapply(unique(object@xlink$peakid),FUN=function(id){
xid <- which(object@xlink$peakid==id)
xpid <- which(object@xlpep$xlid==object@xlink$xlid[xid[1]])
if(length(xpid)!=n)
return(data.frame())
scans <- object@scan[which(object@scan$plid == id),]
pre <- data.frame(scan.num=paste(scans$scanid,collapse=" "),
pre.int=scans$pre.int[1],
pre.mz=scans$mz[1],
pre.z=scans$z[1],
pre.error=object@xlink$error[xid[1]],
frag.cov=0, # fill later
mods=object@xlink$mods[xid[1]])
xlprot <- subset(object@prot,peakid==id)
peps <- do.call("cbind",lapply(1:length(xpid),FUN=.peprow,object@xlpep[xpid,],xlprot))
xldf <- cbind(pre,peps)
xldf$frag.cov <- .xlfragcov(object,xldf) # TODO: vectorize
xldf
}))
td
}
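The per-peptide column renaming in `.peprow()` relies on `sub()` replacing only the first match in each name. The pattern in isolation:

```r
# sub() replaces only the first "pep" in each name, so the numeric
# suffix lands exactly once per column.
nm <- c("pep.protid", "pep.start", "pep.mass")
sub("pep", paste0("pep", 2), nm)
# "pep2.protid" "pep2.start" "pep2.mass"
```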
|
/R/xlink_table.R
|
no_license
|
heckendorfc/BTDR
|
R
| false
| false
| 2,608
|
r
|
#!/usr/bin/env Rscript
######################
# rCNV Project #
######################
# Copyright (c) 2020 Ryan L. Collins and the Talkowski Laboratory
# Distributed under terms of the MIT License (see LICENSE)
# Contact: Ryan L. Collins <rlcollins@g.harvard.edu>
# Plot dummy Miami for graphical abstract of rCNV2 paper
options(stringsAsFactors=F, scipen=1000, family="sans")
######################
### DATA FUNCTIONS ###
######################
# Load summary stats
load.sumstats <- function(stats.in, chrom.colors, sig.color,
p.col.name="meta_phred_p"){
# Read data
stats <- read.table(stats.in, header=T, sep="\t", check.names=F, comment.char="")
colnames(stats)[1] <- gsub("#", "", colnames(stats)[1])
# Get coordinates
chr <- as.numeric(stats[, 1])
if("pos" %in% colnames(stats)){
pos <- stats$pos
}else if(all(c("start", "end") %in% colnames(stats))){
pos <- (stats$start + stats$end) / 2
}else{
stop("Unable to identify locus coordinate info. Must supply columns for either 'pos' or 'start' and 'end'.")
}
pos <- as.numeric(pos)
# Get p-values
if(p.col.name %in% colnames(stats)){
p <- stats[, which(colnames(stats) == p.col.name)]
p <- as.numeric(p)
}else{
stop(paste("Unable to identify p-value column by header name, ",
p.col.name, ".", sep=""))
}
data.frame("chr"=chr, "pos"=pos, "p"=p)
}
# Get Manhattan plotting df
get.manhattan.plot.df <- function(df, chrom.colors, sig.color,
max.p=12, gw.sig=6, sig.buffer=500000){
contigs <- unique(df[, 1])
contigs <- contigs[which(!(is.na(contigs)))]
indexes <- as.data.frame(t(sapply(contigs, function(chr){
return(c(chr, 0, max(df[which(df[, 1] == chr), 2])))
})))
indexes$sum <- cumsum(indexes[, 3])
indexes$bg <- chrom.colors[(contigs %% 2) + 1]
indexes[, 2:4] <- apply(indexes[, 2:4], 2, as.numeric)
sig.idx <- which(df[, 3] >= gw.sig)
sig.idx.all <- sort(unique(unlist(lapply(sig.idx, function(idx){
which(df[, 1] == df[idx, 1]
& df[, 2] <= df[idx, 2] + sig.buffer
& df[, 2] >= df[idx, 2] - sig.buffer)
}))))
df.plot <- as.data.frame(t(sapply(1:nrow(df), function(idx){
row <- df[idx, ]
contig.idx <- which(indexes[, 1]==as.numeric(row[1]))
pval <- as.numeric(row[3])
if(is.na(pval)){
pval <- 0
}
if(pval>max.p){
pval <- max.p
}
    if(idx %in% sig.idx.all){
      # pt.color <- sig.color
      # Disable significant peak highlighting
      pt.color <- indexes$bg[contig.idx]
      sig <- T
    }else{
      pt.color <- indexes$bg[contig.idx]
      sig <- F
    }
return(c(row[1],
as.numeric(row[2]) + indexes[contig.idx, 4] - indexes[contig.idx, 3],
pval,
pt.color,
sig))
})))
df.plot[, c(1, 4, 5)] <- apply(df.plot[, c(1, 4, 5)], 2, unlist)
df.plot[, 2] <- as.numeric(as.character(df.plot[, 2]))
df.plot[, 3] <- as.numeric(as.character(df.plot[, 3]))
colnames(df.plot) <- c("chr", "pos", "p", "color", "sig")
# Drop rows with NA p-values & return
df.plot[which(!is.na(df.plot$p)), ]
}
##########################
### PLOTTING FUNCTIONS ###
##########################
# Custom Miami plot
mini.miami <- function(del, dup, max.p=10,
gw.sig=6, sig.buffer=500000, sig.color=graphabs.green,
middle.axis.width=1,
parmar=c(0.5, 1.7, 0.5, 0.3)){
# Get plotting data for dels & dups
del.df <- get.manhattan.plot.df(del,
chrom.colors=c(cnv.colors[1], control.cnv.colors[1]),
sig.color=sig.color,
max.p=max.p, gw.sig=gw.sig, sig.buffer=sig.buffer)
dup.df <- get.manhattan.plot.df(dup,
chrom.colors=c(cnv.colors[2], control.cnv.colors[2]),
sig.color=sig.color,
max.p=max.p, gw.sig=gw.sig, sig.buffer=sig.buffer)
# Modify data to account for shared x-axis
dup.df$p = dup.df$p + middle.axis.width/2
del.df$p = -del.df$p - middle.axis.width/2
# Get plot values
x.range <- range(c(del.df$pos, dup.df$pos), na.rm=T)
y.range <- range(c(del.df$p, dup.df$p), na.rm=T)
# Prep plot area
par(bty="n", mar=parmar)
plot(NA, xlim=x.range, ylim=y.range,
xaxt="n", xlab="", yaxt="n", ylab="")
abline(h=c(gw.sig + middle.axis.width/2, -gw.sig - middle.axis.width/2),
lty=2, lwd=2, col=sig.color)
# Add points
points(x=del.df$pos, y=del.df$p,
col=del.df$color, pch=19, cex=0.275, xpd=T)
points(x=dup.df$pos, y=dup.df$p,
col=dup.df$color, pch=19, cex=0.275, xpd=T)
# Add axes
y.at <- seq(0, ceiling(par("usr")[4]), by=ceiling(par("usr")[4]/6))
axis(2, at=y.at+middle.axis.width/2, labels=NA, tck=0, col=blueblack, lwd=2)
# axis(2, at=y.at+middle.axis.width/2, tick=F, line=-0.5, labels=abs(y.at), las=2)
axis(2, at=-y.at-middle.axis.width/2, labels=NA, tck=0, col=blueblack, lwd=2)
# axis(2, at=-y.at-middle.axis.width/2, tick=F, line=-0.5, labels=abs(y.at), las=2)
axis(2, at=middle.axis.width/2 + (par("usr")[4]-par("usr")[3])/4,
tick=F, line=-0.9, labels=bquote(-log[10](italic(P)) ~ "Duplication"),
col.axis=blueblack)
axis(2, at=-middle.axis.width/2 - (par("usr")[4]-par("usr")[3])/4,
tick=F, line=-0.9, labels=bquote(-log[10](italic(P)) ~ "Deletion"),
col.axis=blueblack)
segments(x0=rep(par("usr")[1], 2),
x1=rep(par("usr")[2], 2),
y0=c(0.5, -0.5) * middle.axis.width,
y1=c(0.5, -0.5) * middle.axis.width,
col=blueblack, lwd=2, xpd=T, lend="round")
text(x=mean(par("usr")[1:2]), y=0, col=blueblack, labels="Chromosomes")
}
#####################
### RSCRIPT BLOCK ###
#####################
require(optparse, quietly=T)
# List of command-line options
option_list <- list(
make_option(c("--rcnv-config"), help="rCNV2 config file to be sourced.")
)
# Get command-line arguments & options
args <- parse_args(OptionParser(usage=paste("%prog del_stats.bed dup_stats.bed out.png", sep=" "),
option_list=option_list),
positional_arguments=TRUE)
opts <- args$options
# Checks for appropriate positional arguments
if(length(args$args) != 3){
stop(paste("Three positional arguments required: del_stats.bed, dup_stats.bed, and output.png\n", sep=" "))
}
# Writes args & opts to vars
del.in <- args$args[1]
dup.in <- args$args[2]
out.png <- args$args[3]
rcnv.config <- opts$`rcnv-config`
# # DEV PARAMETERS
# del.in <- "~/scratch/HP0012759.rCNV.DEL.sliding_window.meta_analysis.stats.bed.gz"
# dup.in <- "~/scratch/HP0012759.rCNV.DUP.sliding_window.meta_analysis.stats.bed.gz"
# out.png <- "~/scratch/test_mini_miami.png"
# rcnv.config <- "~/Desktop/Collins/Talkowski/CNV_DB/rCNV_map/rCNV2/config/rCNV2_rscript_config.R"
# Source rCNV2 config, if optioned
if(!is.null(rcnv.config)){
source(rcnv.config)
}
# Load sumstats
del <- load.sumstats(del.in, p.col.name="meta_phred_p")
dup <- load.sumstats(dup.in, p.col.name="meta_phred_p")
# # DEV: downsample
# del <- del[sort(sample(1:nrow(del), 50000, replace=F)), ]
# dup <- dup[sort(sample(1:nrow(dup), 50000, replace=F)), ]
# Plot miami
png(out.png, height=4.1*300, width=3.75*300, res=300, family="sans", bg=NA)
mini.miami(del, dup, middle.axis.width=1.25, sig.color=graphabs.green, max.p=8)
dev.off()
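The mirrored layout in `mini.miami()` — duplication p-values shifted up by half the middle-axis width, deletion p-values negated and shifted down — can be reproduced on synthetic data. A standalone sketch (the colors are placeholders, not the `cnv.colors` palette sourced from the rCNV2 config):

```r
# Toy mirrored-Miami layout: DUP above the middle axis, DEL below.
set.seed(1)
w <- 1.25                                 # middle.axis.width
pos <- 1:500
dup.p <- rexp(500) + w/2                  # synthetic -log10(P), shifted up
del.p <- -(rexp(500) + w/2)               # negated and shifted down
plot(NA, xlim=range(pos), ylim=range(c(del.p, dup.p)),
     xaxt="n", yaxt="n", xlab="", ylab="", bty="n")
points(pos, dup.p, pch=19, cex=0.3, col="#2376B2")
points(pos, del.p, pch=19, cex=0.3, col="#D43925")
segments(min(pos), c(w/2, -w/2), max(pos), c(w/2, -w/2))  # middle axis
```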
|
/analysis/paper/plot/misc/plot_dummy_miami.R
|
permissive
|
JPMirandaM/rCNV2
|
R
| false
| false
| 7,570
|
r
|
stop(paste("Three positional arguments required: del_stats.bed, dup_stats.bed, and output.png\n", sep=" "))
}
# Writes args & opts to vars
del.in <- args$args[1]
dup.in <- args$args[2]
out.png <- args$args[3]
rcnv.config <- opts$`rcnv-config`
# # DEV PARAMETERS
# del.in <- "~/scratch/HP0012759.rCNV.DEL.sliding_window.meta_analysis.stats.bed.gz"
# dup.in <- "~/scratch/HP0012759.rCNV.DUP.sliding_window.meta_analysis.stats.bed.gz"
# out.png <- "~/scratch/test_mini_miami.png"
# rcnv.config <- "~/Desktop/Collins/Talkowski/CNV_DB/rCNV_map/rCNV2/config/rCNV2_rscript_config.R"
# Source rCNV2 config, if optioned
if(!is.null(rcnv.config)){
source(rcnv.config)
}
# Load sumstats
del <- load.sumstats(del.in, p.col.name="meta_phred_p")
dup <- load.sumstats(dup.in, p.col.name="meta_phred_p")
# # DEV: downsample
# del <- del[sort(sample(1:nrow(del), 50000, replace=F)), ]
# dup <- dup[sort(sample(1:nrow(dup), 50000, replace=F)), ]
# Plot miami
png(out.png, height=4.1*300, width=3.75*300, res=300, family="sans", bg=NA)
mini.miami(del, dup, middle.axis.width=1.25, sig.color=graphabs.green, max.p=8)
dev.off()
|
# Exercise 6: Husky Football 2016 Season
# Read in the Husky Football 2016 game data into a variable called `husky.games.2016`
husky.games.2016 <- read.csv('data/huskies_2016.csv')
# Create a vector of the teams that the Huskies played against during that season
# Call this vector `not.huskies`. You'll need to convert this column to a vector
not.huskies <- as.vector(husky.games.2016$opponent)
# Create a vector of their final scores for the games
# Call this variable `husky.scores`
husky.scores <- husky.games.2016$uw_score
# Create 2 variables called `rushing.yards` and `passing.yards` to represent the yards the Huskies rushed and passed
rushing.yards <- husky.games.2016$rushing_yards
passing.yards <- husky.games.2016$passing_yards
# Create a variable called `combined.yards` that is the total yardage of the Huskies for each game
combined.yards <- passing.yards + rushing.yards
# What is the score of the game where the Huskies had the most combined yards?
score.with.most.yards <- husky.scores[combined.yards == max(combined.yards)]
# Write a function `MostYardsScore` that takes in a dataframe parameter `games` and returns a descriptive sentence
# about the game that was played that had the most yards scored in it.
# Take note of the steps from above including the opposing team, score, and date the game was played
MostYardsScore <- function(games) {
dates <- as.vector(games$date)
scores <- games$uw_score
opponents <- as.vector(games$opponent)
rushing.yards <- games$rushing_yards
passing.yards <- games$passing_yards
combined.yards <- passing.yards + rushing.yards
most.yards <- max(combined.yards)
opponent <- opponents[combined.yards == most.yards]
date <- dates[combined.yards == most.yards]
highest.score <- scores[combined.yards == most.yards]
return(paste("The game that the Huskies had the most yards was against", opponent, "on", date, "where they scored", highest.score, "points!"))
}
# What was the highest yardage game so far last season?
# Hint: Read in the dataset titled `huskies_2015.csv` and save it as a variable
### Bonus ###
# When working with data, you will often find yourself manually adding data into your dataset.
# This bonus will help you practice such skills.
# Using resources online, find out whether each game was played at home or away.
# Create a vector `where.played` that corresponds to what you found.
# Add the vector `where.played` as a new column to `husky.games.2016`
# Hint: For bowl games simply listing "bowl" will be fine.
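# Sketch for the bonus (added; the values below are placeholders, not
# researched results -- replace with one "home"/"away"/"bowl" entry per game):
# where.played <- rep("home", nrow(husky.games.2016))
# husky.games.2016$where.played <- where.played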
|
/exercise-6/exercise.R
|
permissive
|
EmmanuelRobi/m10-dataframes
|
R
| false
| false
| 2,528
|
r
|
|
rm(list=ls())
library(WebGestaltR)
WebGestaltR(enrichMethod="NTA", organism="hsapiens",
enrichDatabase="network_PPI_BIOGRID", interestGeneFile="HALLMARK_HYPOXIA.geneset.txt",
interestGeneType="ensembl_gene_id",sigMethod="fdr",
outputDirectory=getwd(), highlightType = 'Neighbors', neighborNum = 10,
networkConstructionMethod="Network_Expansion")
|
/Data/MSigDB.go.pathway/HALLMARK_HYPOXIA/WebGestalt.R
|
no_license
|
haoboguo/NetBAS
|
R
| false
| false
| 363
|
r
|
|
#' @title create_df_discus
#' @description Template for storm discussions dataframe
#' @return empty dataframe
#' @seealso \code{\link{get_discus}}
#' @keywords internal
create_df_discus <- function() {
df <- tibble::data_frame("Status" = character(),
"Name" = character(),
# Allow for intermediate advisories,
# i.e., "1A", "2", "2A"...
"Adv" = integer(),
"Date" = as.POSIXct(character(), tz = "UTC"),
"Key" = character(),
"Contents" = character())
return(df)
}
#' @title get_discus
#' @description Return dataframe of discussion data.
#' \describe{
#' \item{Status}{Classification of storm, e.g., Tropical Storm, Hurricane,
#' etc.}
#' \item{Name}{Name of storm}
#' \item{Adv}{Advisory Number}
#' \item{Date}{Date of advisory issuance}
#' \item{Key}{ID of cyclone}
#' \item{Contents}{Text content of product}
#' }
#' @param links URL to storm's archive page.
#' @seealso \code{\link{get_storms}}, \code{\link{public}}
#' @examples
#' \dontrun{
#' # Return dataframe of storm discussions for Tropical Storm Alex (AL011998)
#' get_discus("http://www.nhc.noaa.gov/archive/1998/1998ALEXadv.html")
#' }
#' @export
get_discus <- function(links) {
df <- get_storm_data(links, products = "discus")
return(df$discus)
}
#' @title discus
#' @description Parse storm Discussion products
#' @details Given a direct link to a discussion product, parse and return
#' dataframe of values.
#' @param contents Link to a storm's specific discussion product.
#' @return Dataframe
#' @seealso \code{\link{get_discus}}
#' @keywords internal
discus <- function(contents) {
# Replace all carriage returns with empty string.
contents <- stringr::str_replace_all(contents, "\r", "")
df <- create_df_discus()
status <- scrape_header(contents, ret = "status")
name <- scrape_header(contents, ret = "name")
adv <- scrape_header(contents, ret = "adv") %>% as.numeric()
date <- scrape_header(contents, ret = "date")
# Keys were added to discus products beginning in 2006; prior to that they
# don't exist. Safely run scrape_header for the key: on error use NA,
# otherwise keep the result.
safely_scrape_header <- purrr::safely(scrape_header)
key <- safely_scrape_header(contents, ret = "key")
if (is.null(key$error)) {
key <- key$result
} else {
key <- NA
}
if (getOption("rrricanes.working_msg"))
message(sprintf("Working %s %s Storm Discussion #%s (%s)",
status, name, adv, date))
df <- df %>%
tibble::add_row("Status" = status,
"Name" = name,
"Adv" = adv,
"Date" = date,
"Key" = key,
"Contents" = contents)
return(df)
}
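# Illustration (added; not part of the rrricanes package) of the
# purrr::safely() pattern used in discus() above: the wrapped function never
# throws; it returns list(result = ..., error = ...) so callers can branch on
# is.null(x$error).
# safe_log <- purrr::safely(log)
# safe_log(10)   # $result is 2.302585, $error is NULL
# safe_log("a")  # $result is NULL, $error holds the caught condition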
|
/R/discus.R
|
permissive
|
mraza007/rrricanes
|
R
| false
| false
| 2,720
|
r
|
|
library(stringr)
#1
print("\"")
cat("\"")
cat("DSO\n545")
# \n = newline
cat("DSO\t545")
# \t = Tab
#2
cat(":-\\")
cat("(^_^\")")
cat("@_'-'")
cat("\\m/")
#3
?str_locate
?str_sub
#4
str_locate(string = "great", pattern = "a")
str_locate("fantastic","a")
str_locate_all("fantastic","a")
str_locate("super","a")
str_locate(c("fantastic","great","super"),"a")
#5
str_sub(string = "testing",start = 1, end = 3)
str_sub("testing", start = 4)
str_sub("testing", end = 4)
#6
input <- c("abc","defg")
str_sub(input,c(2,3))
#7
emails <- readRDS("email.rds")
cat(emails[1])
str_locate(emails[1],"\n\n")
#8
email1 = str_locate(emails[1],"\n\n")
header = str_sub(emails[1], end = email1[1])
body = str_sub(emails[1], start = email1[2])
#10
breaks = str_locate(emails,"\n\n")
headers = str_sub(emails, end = breaks[,1])
bodies = str_sub(emails, start = breaks[,2])
### Second Lab
#1
fruit = c("apple","banana","pear","pineapple")
#2
# detect if the pattern a is found
str_detect(fruit,"a")
# detect if it starts with a
str_detect(fruit,"^a")
# detect if it ends with a
str_detect(fruit,"a$")
# check if it has a or e or i or o or u
str_detect(fruit,"[aeiou]")
# check if it has a or b or c or d
str_detect(fruit,"[a-d]")
str_detect(fruit,"[0-9]")
#3
str_detect(fruit, "^a[a-z]*e$")
# . could be any character or number
str_detect(fruit, "^a.*e$")
#4
phone = c("213 748 4826","213-748-4826","(213) 748 4826")
str_detect(phone, "[(]?[0-9]{3}[)]?[ -]?[0-9]{3}[ -]?[0-9]{4}")
#5
str_extract_all(bodies,"[(]?[0-9]{3}[)]?[ -]?[0-9]{3}[ -]?[0-9]{4}")
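# Note (added): str_extract_all() returns a list with one character vector of
# matches per element of `bodies`; unlist() flattens them into a single vector.
# all.phones <- unlist(str_extract_all(bodies, "[(]?[0-9]{3}[)]?[ -]?[0-9]{3}[ -]?[0-9]{4}"))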
|
/class10.R
|
no_license
|
yichengx/Strings_in_R
|
R
| false
| false
| 1,574
|
r
|
|
### Scripts used for transcriptome analysis
#############################################################################################
#############################################################################################
#############################################################################################
# Set working directory
setwd("C:/Users/Charissa/Dropbox/bartb/")
# Load Packages
library(DESeq2)
library(edgeR)
library(limma)
library(Glimma)
library(gplots)
library(RColorBrewer)
library(pheatmap)
library(ggplot2)
library(ggrepel)
library(pathfindR)
library(scales)
library(data.table)
library(fBasics)
library(forcats)
library(vegan)
library(dplyr)
library(MetaboSignal)
# Load counts
count_data = read.table(file = "rna.rawcounts.csv", header = T, sep = ",", row.names=1)
head(count_data)
# Load metadata
sampleinfo <- read.table("rna.map.txt", header=T, sep="\t", row.names=1)
# Sampleinfo <- as.matrix((sampleinfo))
colnames(sampleinfo)
# Round off counts
counts <- round(count_data)
head(counts)
#########################################################
#########################################################
#########################################################
### DIFFERENTIALLY ABUNDANT GENES
# Make CountData and MetaData into DESEq Object; Choose the comparison Variable as design
dds <- DESeqDataSetFromMatrix(countData = counts,
colData = sampleinfo,
design= ~ tb_status_biome)
# Estimate Factors of DESeq Object
dds <- estimateSizeFactors(dds)
# Pruning before DESeq:
# This filters out genes with fewer than 3 samples having normalized counts greater than or equal to 100.
idx100 <- rowSums( counts(dds, normalized=TRUE) >= 100 ) >= 3
dds <- dds[idx100,]
dds
# Differential Expression Analysis
# Drop Unnecessary Levels
dds$tb_status_biome <- droplevels(dds$tb_status_biome)
# Set the baseline level --> positive is upregulated in cases; negative is downregulated
dds$tb_status_biome <- relevel(dds$tb_status_biome, ref ="No_TB")
# Run DESeq
dds<- DESeq(dds, test="Wald", fitType="local")
res <- results(dds, cooksCutoff=FALSE)
res = res[order(res$padj, na.last = NA), ]
# Annotation
library("AnnotationDbi")
library("org.Hs.eg.db")
columns(org.Hs.eg.db) #To get a list of all available key types
convertIDs <- function( ids, fromKey, toKey, db, ifMultiple=c( "putNA", "useFirst" ) ) {
stopifnot( inherits( db, "AnnotationDb" ) )
ifMultiple <- match.arg( ifMultiple )
suppressWarnings( selRes <- AnnotationDbi::select(
db, keys=ids, keytype=fromKey, columns=c(fromKey,toKey) ) )
if( ifMultiple == "putNA" ) {
duplicatedIds <- selRes[ duplicated( selRes[,1] ), 1 ]
selRes <- selRes[ ! selRes[,1] %in% duplicatedIds, ] }
return( selRes[ match( ids, selRes[,1] ), 2 ] )
}
# ENTREZID:
res$entrezgene = convertIDs(row.names(res), "ENSEMBL", "ENTREZID", org.Hs.eg.db)
# Gene symbols:
res$hgnc_symbol = convertIDs(row.names(res), "ENSEMBL", "SYMBOL", org.Hs.eg.db)
# Gene name:
res$gene_name = convertIDs(row.names(res), "ENSEMBL", "GENENAME", org.Hs.eg.db)
# Remove NAs in all newly annotated columns:
res = res[order(res$hgnc_symbol, na.last = NA), ]
res = res[order(res$entrezgene, na.last = NA), ]
res = res[order(res$gene_name, na.last = NA), ]
# Save image
save.image(file="C:/Users/Charissa/Documents/16s/seq_batch_2/bar_tb/final/rna/rna.RData")
# Convert resuts table into a data.frame
res <- as.data.frame(res)
# Select genes to save
gene.to.save <-as.character(rownames(res))
gene.to.save_entrezgene <-as.character(res$entrezgene)
gene.to.save_hgnc_symbol <-as.character(res$hgnc_symbol)
# From main table we should get the mean across the row of the table
ko.table.df <- data.frame(assay(dds))
ko.table.df.meanRA <- rowMeans(ko.table.df)
# Subset and reorder just the IDs that we have
ko.table.df.meanRA.save <- ko.table.df.meanRA[gene.to.save]
# Add the abundance data for the res dataframe
res$abundance <- ko.table.df.meanRA.save
# Subset table for groups being analysed to get their separate abundances
# Extract and subset count data
countdata = assay(dds)
coldata = colData(dds)
cases.pruned.table = countdata[, coldata$tb_status_biome %in% c("TB")]
sc.pruned.table = countdata[, coldata$tb_status_biome %in% c("No_TB")]
# From pruned table we should get the mean across the row of the table
cases.pruned.table.df <- data.frame(cases.pruned.table)
cases.pruned.table.df.meanRA <- rowMeans(cases.pruned.table.df)
sc.pruned.table.df <- data.frame(sc.pruned.table)
sc.pruned.table.df.meanRA <- rowMeans(sc.pruned.table.df)
# Subset AND reorder just the genes that we have
cases.pruned.table.df.meanRA.save <- cases.pruned.table.df.meanRA[gene.to.save]
sc.pruned.table.df.meanRA.save <- sc.pruned.table.df.meanRA[gene.to.save]
# Add the abundance data for the res dataframe
res$abundance.cases <- cases.pruned.table.df.meanRA.save
res$abundance.sc <- sc.pruned.table.df.meanRA.save
# Set Names of Results Table
res <- setNames(cbind(rownames(res), res, row.names = NULL), c("Gene.symbol","baseMean", "logFC", "lfcSE", "stat", "pvalue", "adj.P.Val","entrez_ID", "hgnc_symbol", "gene_name", "abundance", "abundance.cases", "abundance.sc"))
write.table(res,file="rna.abundance.txt", sep="\t", col.names = NA, row.names = TRUE)
# Keep only the variables you need for pathway analysis
res1 <- res[,c("entrez_ID","logFC","pvalue","adj.P.Val")]
# Write Tables to TXT file
write.table(res1,file="rna.IPA.txt", sep="\t", col.names = NA, row.names = TRUE, quote=FALSE)
# Set Theme for Figures
theme <-theme(panel.background = element_blank(),panel.border=element_rect(fill=NA),panel.grid.major = element_blank(),panel.grid.minor = element_blank(),strip.background=element_blank(),axis.text.x=element_text(colour="black"),axis.text.y=element_text(colour="black"),axis.ticks=element_line(colour="black"),plot.margin=unit(c(1,1,1,1),"line"), legend.position="none")
# Set alpha
alpha = 0.01
# Compute FDR on a log scale
res$sig <- -log10(res$adj.P.Val)
# See how many are now infinite
sum(is.infinite(res$sig))
# Set the colors for your volcano plot
cols <- densCols(res$logFC, res$sig)
cols[res$pvalue ==0] <- "purple"
cols[res$logFC > 0 & res$adj.P.Val < alpha ] <- "red"
cols[res$logFC < 0 & res$adj.P.Val < alpha ] <- "darkgreen"
# Create a Variable for the size of the dots in the Volcano Plot
res$pch <- 19
res$pch[res$pvalue ==0] <- 6
# Select genes with a defined p-value (DESeq2 assigns NA to some genes)
genes.to.plot <- !is.na(res$pvalue)
# Check the range of the LogFC
range(res[genes.to.plot, "logFC"])
# Volcano plot
pdf(file="rna.volcano.fdr.0.01.pdf", width=5, height=5)
ggplot(res, aes(x = logFC, y = sig,label=hgnc_symbol)) +
geom_point(color=cols, alpha=0.7) + # Choose colors and size for dots
geom_text_repel(aes(label=ifelse(res$adj.P.Val < alpha & res$logFC>2, as.character(res$hgnc_symbol),'')),size=2,force=25,segment.colour="darkgrey",segment.alpha=0.5) + # Label points passing the adjusted p-value and logFC thresholds
geom_text_repel(aes(label=ifelse(res$adj.P.Val < alpha & res$logFC<=-2, as.character(res$hgnc_symbol),'')),size=2,force=25,segment.colour="darkgrey",segment.alpha=0.5) + # Label points passing the adjusted p-value and logFC thresholds
theme(legend.position = "none") +
geom_hline(yintercept=-log10(alpha), color="red",linetype="dashed") +
xlab("Effect size: log2(fold-change)") +
ylab("-log10(adjusted p-value)") +
#ylim(0,20)+
theme
dev.off()
#########################################################
#########################################################
#########################################################
### BRAY-CURTIS ANALYSIS
# Create Distance Matrix
vegdist = vegdist(t(assay(dds)), method = "bray")
# Calculate Adonis for p-value for TB status
adonis(vegdist ~ tb_status_biome, data=data.frame(colData(dds)))
# Formulate principal component co-ordinates for PCOA plot, k as the choice of PCs
CmdScale <- cmdscale(vegdist, k =10)
# Calculate sample variance for each PC
vars <- apply(CmdScale, 2, var)
# Create Variable with the Percent Variance
percentVar <- round(100 * (vars/sum(vars)))
# Merge PC Data with MetaData
newResults <- merge(x = CmdScale, y = colData(dds), by = "row.names", all.x = TRUE)
# Rename Variables for PC1 and PC2
colnames(newResults)[colnames(newResults)=="V1"] <- "PC1"
colnames(newResults)[colnames(newResults)=="V2"] <- "PC2"
colnames(newResults)[colnames(newResults)=="Row.names"] <- "name"
# Calculate the Centroid Value - for TB
centroids <- aggregate(cbind(PC1,PC2)~ tb_status_biome,data= newResults, mean)
# Merge the Centroid Data into the PCOA Data
newResults <- merge(newResults,centroids,by="tb_status_biome",suffixes=c("",".centroid"))
# Plot
pdf("rna.bray.curtis.pdf", height = 10, width = 10)
ggplot(newResults, aes(PC1, PC2, color= tb_status_biome)) +
geom_point(size=5) +
xlab(paste0("PC1: ",percentVar[1],"% variance")) +
ylab(paste0("PC2: ",percentVar[2],"% variance")) +
scale_color_manual(values=c("darkgreen", "red", "grey", "blue", "purple")) +
geom_point(data=centroids, aes(x=PC1, y=PC2, color= tb_status_biome), size=0) +
geom_segment(aes(x=PC1.centroid, y=PC2.centroid, xend=PC1, yend=PC2, color= tb_status_biome))+
geom_label_repel(data = centroids, aes(x=PC1, y=PC2, label=c("SCs", "Cases")), size=10) +
theme(panel.background = element_blank(),panel.border=element_rect(fill=NA),panel.grid.major = element_line(linetype = "dashed", size = 0.5, colour = "grey80"),panel.grid.minor = element_blank(),strip.background=element_blank(),axis.title=element_text(size=30,face="bold"),axis.text.x=element_text(colour= "grey80", size = rel(0.75)),axis.text.y=element_text(colour = "grey80", size = rel(0.75)),axis.ticks=element_blank(),plot.margin=unit(c(1,1,1,1),"line"), legend.position="none")
dev.off()
|
/transcriptome.R
|
no_license
|
segalmicrobiomelab/BARTB_Naidoo_Paper
|
R
| false
| false
| 10,097
|
r
|
### Scripts used for transcriptome analysis
#############################################################################################
#############################################################################################
#############################################################################################
# Set working directory
setwd("C:/Users/Charissa/Dropbox/bartb/")
# Load Packages
library(DESeq2)
library(edgeR)
library(limma)
library(Glimma)
library(gplots)
library(RColorBrewer)
library(pheatmap)
library(ggplot2)
library(ggrepel)
library(pathfindR)
library(scales)
library(data.table)
library(fBasics)
library(forcats)
library(vegan)
library(dplyr)
library(MetaboSignal)
# Load counts
count_data = read.table(file = "rna.rawcounts.csv", header = T, sep = ",", row.names=1)
head(count_data)
# Load metadata
sampleinfo <- read.table("rna.map.txt", header=T, sep="\t", row.names=1)
# Sampleinfo <- as.matrix((sampleinfo))
colnames(sampleinfo)
# Round off counts
counts <- round(count_data)
head(counts)
#########################################################
#########################################################
#########################################################
### DIFFERENTIALLY ABUNDANT GENES
# Make CountData and MetaData into DESEq Object; Choose the comparison Variable as design
dds <- DESeqDataSetFromMatrix(countData = counts,
colData = sampleinfo,
design= ~ tb_status_biome)
# Estimate Factors of DESeq Object
dds <- estimateSizeFactors(dds)
# Pruning before DESeq:
#This would filter out genes where there are less than 3 samples with normalized counts greater than or equal to 100.
idx100 <- rowSums( counts(dds, normalized=TRUE) >= 100 ) >= 3
dds <- dds[idx100,]
dds
# Differential Expression Analysis
# Drop Unencessary Levels
dds$tb_status_biome <- droplevels(dds$tb_status_biome)
# Set the baseline level --> positive is upregulated in cases; negative is downregulated
dds$tb_status_biome <- relevel(dds$tb_status_biome, ref ="No_TB")
# Run DESeq
dds<- DESeq(dds, test="Wald", fitType="local")
res <- results(dds, cooksCutoff=FALSE)
res = res[order(res$padj, na.last = NA), ]
# Annotation
library("AnnotationDbi")
library("org.Hs.eg.db")
columns(org.Hs.eg.db) #To get a list of all available key types
convertIDs <- function( ids, fromKey, toKey, db, ifMultiple=c( "putNA", "useFirst" ) ) {
stopifnot( inherits( db, "AnnotationDb" ) )
ifMultiple <- match.arg( ifMultiple )
suppressWarnings( selRes <- AnnotationDbi::select(
db, keys=ids, keytype=fromKey, columns=c(fromKey,toKey) ) )
if( ifMultiple == "putNA" ) {
duplicatedIds <- selRes[ duplicated( selRes[,1] ), 1 ]
selRes <- selRes[ ! selRes[,1] %in% duplicatedIds, ] }
return( selRes[ match( ids, selRes[,1] ), 2 ] )
}
# ENTREZID:
res$entrezgene = convertIDs(row.names(res), "ENSEMBL", "ENTREZID", org.Hs.eg.db)
# Gene symbols:
res$hgnc_symbol = convertIDs(row.names(res), "ENSEMBL", "SYMBOL", org.Hs.eg.db)
# Gene name:
res$gene_name = convertIDs(row.names(res), "ENSEMBL", "GENENAME", org.Hs.eg.db)
# Remove NAs in all newly annotated columns:
res = res[order(res$hgnc_symbol, na.last = NA), ]
res = res[order(res$entrezgene, na.last = NA), ]
res = res[order(res$gene_name, na.last = NA), ]
# Save image
save.image(file="C:/Users/Charissa/Documents/16s/seq_batch_2/bar_tb/final/rna/rna.RData")
# Convert resuts table into a data.frame
res <- as.data.frame(res)
# Select genes to save
gene.to.save <-as.character(rownames(res))
gene.to.save_entrezgene <-as.character(res$entrezgene)
gene.to.save_hgnc_symbol <-as.character(res$hgnc_symbol)
# From main table we should get the mean across the row of the table
ko.table.df <- data.frame(assay(dds))
ko.table.df.meanRA <- rowMeans(ko.table.df)
# Subset and reorder just the IDs that we have
ko.table.df.meanRA.save <- ko.table.df.meanRA[gene.to.save]
# Add the abundance data for the res dataframe
res$abundance <- ko.table.df.meanRA.save
# Subset table for groups being analysed to get their separate abundances
# Extract and subset count data
countdata = assay(dds)
coldata = colData(dds)
cases.pruned.table = countdata[, coldata$tb_status_biome %in% c("TB")]
sc.pruned.table = countdata[, coldata$tb_status_biome %in% c("No_TB")]
# From pruned table we should get the mean across the row of the table
cases.pruned.table.df <- data.frame(cases.pruned.table)
cases.pruned.table.df.meanRA <- rowMeans(cases.pruned.table.df)
sc.pruned.table.df <- data.frame(sc.pruned.table)
sc.pruned.table.df.meanRA <- rowMeans(sc.pruned.table.df)
# Subset AND reorder just the genes that we have
cases.pruned.table.df.meanRA.save <- cases.pruned.table.df.meanRA[gene.to.save]
sc.pruned.table.df.meanRA.save <- sc.pruned.table.df.meanRA[gene.to.save]
# Add the abundance data for the res dataframe
res$abundance.cases <- cases.pruned.table.df.meanRA.save
res$abundance.sc <- sc.pruned.table.df.meanRA.save
# Set Names of Results Table
res <- setNames(cbind(rownames(res), res, row.names = NULL), c("Gene.symbol","baseMean", "logFC", "lfcSE", "stat", "pvalue", "adj.P.Val","entrez_ID", "hgnc_symbol", "gene_name", "abundance", "abundance.cases", "abundance.sc"))
write.table(res,file="rna.abundance.txt", sep="\t", col.names = NA, row.names = TRUE)
# Keep only the variables you need for pathway analysis
res1 <- res[,c("entrez_ID","logFC","pvalue","adj.P.Val")]
# Write Tables to TXT file
write.table(res1,file="rna.IPA.txt", sep="\t", col.names = NA, row.names = TRUE, quote=FALSE)
# Set Theme for Figures
theme <- theme(panel.background = element_blank(),
               panel.border = element_rect(fill = NA),
               panel.grid.major = element_blank(),
               panel.grid.minor = element_blank(),
               strip.background = element_blank(),
               axis.text.x = element_text(colour = "black"),
               axis.text.y = element_text(colour = "black"),
               axis.ticks = element_line(colour = "black"),
               plot.margin = unit(c(1, 1, 1, 1), "line"),
               legend.position = "none")
# Set alpha
alpha = 0.01
# Compute FDR on a log scale
res$sig <- -log10(res$adj.P.Val)
# See how many are now infinite
sum(is.infinite(res$sig))
# Set the colors for your volcano plot
cols <- densCols(res$logFC, res$sig)
cols[res$pvalue ==0] <- "purple"
cols[res$logFC > 0 & res$adj.P.Val < alpha ] <- "red"
cols[res$logFC < 0 & res$adj.P.Val < alpha ] <- "darkgreen"
# Create a Variable for the size of the dots in the Volcano Plot
res$pch <- 19
res$pch[res$pvalue ==0] <- 6
# Select genes with a defined p-value (DESeq2 assigns NA to some genes)
genes.to.plot <- !is.na(res$pvalue)
# Check the range of the LogFC
range(res[genes.to.plot, "logFC"])
# Volcano plot
pdf(file="rna.volcano.fdr.0.01.pdf", width=5, height=5)
ggplot(res, aes(x = logFC, y = sig,label=hgnc_symbol)) +
geom_point(color=cols, alpha=0.7) + #Chose Colors and size for dots
geom_text_repel(aes(label=ifelse(res$adj.P.Val < alpha & res$logFC>2, as.character(res$hgnc_symbol),'')),size=2,force=25,segment.colour="darkgrey",segment.alpha=0.5) + #Label values based on parameters, including pcal and logFC
geom_text_repel(aes(label=ifelse(res$adj.P.Val < alpha & res$logFC<=-2, as.character(res$hgnc_symbol),'')),size=2,force=25,segment.colour="darkgrey",segment.alpha=0.5) + #Label values based on parameters, including pcal and logFC
theme(legend.position = "none") +
geom_hline(yintercept=-log10(alpha), color="red",linetype="dashed") +
xlab("Effect size: log2(fold-change)") +
ylab("-log10(adjusted p-value)") +
#ylim(0,20)+
theme
dev.off()
#########################################################
#########################################################
#########################################################
### BRAY-CURTIS ANALYSIS
# Create Distance Matrix
vegdist = vegdist(t(assay(dds)), method = "bray")
# Calculate Adonis for p-value for TB status
adonis(vegdist ~ tb_status_biome, data=data.frame(colData(dds)))
# Formulate principal component co-ordinates for PCOA plot, k as the choice of PCs
CmdScale <- cmdscale(vegdist, k =10)
# Calculate sample variance for each PC
vars <- apply(CmdScale, 2, var)
# Create Variable with the Percent Variance
percentVar <- round(100 * (vars/sum(vars)))
# Merge PC Data with MetaData
newResults <- merge(x = CmdScale, y = colData(dds), by = "row.names", all.x = TRUE)
# Rename Variables for PC1 and PC2
colnames(newResults)[colnames(newResults)=="V1"] <- "PC1"
colnames(newResults)[colnames(newResults)=="V2"] <- "PC2"
colnames(newResults)[colnames(newResults)=="Row.names"] <- "name"
# Calculate the Centroid Value - for TB
centroids <- aggregate(cbind(PC1,PC2)~ tb_status_biome,data= newResults, mean)
# Merge the Centroid Data into the PCOA Data
newResults <- merge(newResults,centroids,by="tb_status_biome",suffixes=c("",".centroid"))
# Plot
pdf("rna.bray.curtis.pdf", height = 10, width = 10)
ggplot(newResults, aes(PC1, PC2, color= tb_status_biome)) +
geom_point(size=5) +
xlab(paste0("PC1: ",percentVar[1],"% variance")) +
ylab(paste0("PC2: ",percentVar[2],"% variance")) +
scale_color_manual(values=c("darkgreen", "red", "grey", "blue", "purple")) +
geom_point(data=centroids, aes(x=PC1, y=PC2, color= tb_status_biome), size=0) +
geom_segment(aes(x=PC1.centroid, y=PC2.centroid, xend=PC1, yend=PC2, color= tb_status_biome))+
geom_label_repel(data = centroids, aes(x=PC1, y=PC2, label=c("SCs", "Cases")), size=10) +
theme(panel.background = element_blank(),panel.border=element_rect(fill=NA),panel.grid.major = element_line(linetype = "dashed", size = 0.5, colour = "grey80"),panel.grid.minor = element_blank(),strip.background=element_blank(),axis.title=element_text(size=30,face="bold"),axis.text.x=element_text(colour= "grey80", size = rel(0.75)),axis.text.y=element_text(colour = "grey80", size = rel(0.75)),axis.ticks=element_blank(),plot.margin=unit(c(1,1,1,1),"line"), legend.position="none")
dev.off()
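# Illustrative check of the Bray-Curtis dissimilarity used above (toy counts,
# not study data): two samples sharing no taxa are maximally dissimilar
vegan::vegdist(rbind(c(10, 0, 5), c(0, 10, 0)), method = "bray") # -> 1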
|
library(tidyverse)
library(pipeR)
library(gridExtra)
loadNamespace('cowplot')
setwd('/Volumes/DR8TB2/tcga_rare_germ/top_driver_gene')
write_df= function(x, path, delim='\t', na='NA', append=FALSE, col_names=!append, ...) {
file = if (grepl('gz$', path)) {
gzfile(path, ...)
} else if (grepl('bz2$', path)) {
bzfile(path, ...)
} else if (grepl('xz$', path)) {
xzfile(path, ...)
} else {path}
utils::write.table(x, file,
append=append, quote=FALSE, sep=delim, na=na,
row.names=FALSE, col.names=col_names)
}
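# Hypothetical usage sketch for write_df() above (file name is illustrative):
# the compression wrapper is picked from the extension, and append = TRUE adds
# rows without repeating the header
tmp_path = tempfile(fileext = '.tsv.gz')
write_df(head(iris), tmp_path)
write_df(tail(iris), tmp_path, append = TRUE)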
options(scipen=100)
driver_genes=read_tsv("/Volumes/DR8TB2/tcga_rare_germ/top_driver_gene/driver_genes.tsv")
patient_list = read_tsv("/Volumes/DR8TB2/tcga_rare_germ/patient_list.tsv")
patient_tdg = read_tsv("/Volumes/DR8TB2/tcga_rare_germ/top_driver_gene/patient_list_forTGD.tsv")
################ read MAF extracted ################
all_maf_for_cumulative = read_tsv("/Volumes/DR8TB2/tcga_rare_germ/top_driver_gene/all_maf_for_cumulative.tsv.gz")%>>%
filter(chr!="chrX",FILTER=="PASS")
####################################################################################################################
####################################################### TSG ########################################################
####################################################################################################################
# Number of truncating variants per patient
truncating_count = all_maf_for_cumulative %>>%
filter(chr!="chrX",FILTER=="PASS")%>>%
left_join(driver_genes %>>%dplyr::select(gene_symbol,role),by="gene_symbol") %>>%
filter(mutype=="truncating"|mutype=="splice") %>>%
filter(role=="TSG"|role=="oncogene/TSG")%>>%
#count(cancer_type,patient_id,gene_symbol) %>>%dplyr::select(-n)%>>%
group_by(cancer_type,patient_id) %>>%summarise(truncating_count_n=n())%>>%ungroup()%>>%
  # also include patients with zero such variants
right_join(patient_tdg)%>>%
mutate(truncating_count_n=ifelse(is.na(truncating_count_n),0,truncating_count_n)) %>>%
mutate(age=round(age/365.25*100)/100)
.plot_all = truncating_count%>>%#filter(truncating_count_n<5)%>>%
truncate_plot_allcantype(.permu_file = "TSG/truncate_all.tsv")
ggsave("age_plot/cumulative/tsg/all_race/truncating_all.pdf",.plot_all,height = 5,width = 5)
.plot_by = truncating_count%>>%
truncate_plot_bycantype(.permu_file = "TSG/truncate_all_byCT.tsv")
ggsave("age_plot/cumulative/tsg/all_race/truncating_by_cancerype.pdf",.plot_by,height = 10,width = 10)
# t-test comparing patients with vs without a TSG truncating variant: p_value = 0.01997
t.test(truncating_count[truncating_count$truncating_count_n>0,]$age/365.25,
truncating_count[truncating_count$truncating_count_n==0,]$age/365.25,alternative="less")
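# Toy illustration (not study data) of the one-sided Welch t-test used above:
# alternative = "less" asks whether the first group's mean is smaller
t.test(c(50, 52, 55, 51), c(60, 61, 63, 62), alternative = "less")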
.plot = cowplot::ggdraw()+
cowplot::draw_plot(.plot_all+theme(axis.title.x = element_blank()),x=0,y=0.07,width=0.36,height=0.93)+
cowplot::draw_plot(.plot_by + theme(axis.title = element_blank(),axis.text.y = element_text(size=10),
strip.text = element_text(size=8,margin=margin(1,0,1,0))),
x=0.37,y=0.07,width=0.63,height=0.93)+
cowplot::draw_text("Number of truncating and splice variants in TSG",x=0.5,y=0.05,size=20)+
cowplot::draw_plot_label("a",x=0.01,y=0.99,hjust = 0,vjust = 1,size = 20)+
cowplot::draw_plot_label("b",x=0.36,y=0.99,hjust = 0,vjust = 1,size = 20)
ggsave("age_plot/fig/truncate/all_race/allTSG.pdf",.plot,width = 14,height = 8)
############### What about using only truncating mutations with MAF < 0.05%?
truncating_count_rare = all_maf_for_cumulative %>>%
filter(chr!="chrX",FILTER=="PASS")%>>%
left_join(driver_genes %>>%dplyr::select(gene_symbol,role),by="gene_symbol") %>>%
filter(mutype=="truncating"|mutype=="splice") %>>%
filter(role=="TSG"|role=="oncogene/TSG")%>>%
filter(MAF < 0.0005)%>>%
count(cancer_type,patient_id,gene_symbol) %>>% dplyr::select(-n)%>>%
group_by(cancer_type,patient_id) %>>%summarise(truncating_count_n=n())%>>%ungroup()%>>%
  # also include patients with zero such variants
right_join(patient_tdg)%>>%
mutate(truncating_count_n=ifelse(is.na(truncating_count_n),0,truncating_count_n),
age=round(age/365.25*100)/100)
.plot_all = truncating_count_rare%>>%
truncate_plot_allcantype(.permu_file = "TSG/truncate_rare.tsv")
ggsave("age_plot/cumulative/tsg/all_race/rare_truncating.pdf",.plot_all,height = 5,width = 5)
.plot_by = truncating_count_rare%>>%
truncate_plot_bycantype(.permu_file = "TSG/truncate_rare_byCT.tsv")
ggsave("age_plot/cumulative/tsg/all_race/raer_truncating_by_cancerype.pdf",.plot_by,height = 10,width = 10)
# t-test with vs without a variant: p-value = 0.003491
t.test(truncating_count_rare[truncating_count_rare$truncating_count_n>0,]$age/365.25,
truncating_count_rare[truncating_count_rare$truncating_count_n==0,]$age/365.25,alternative="less")
.plot = cowplot::ggdraw()+
cowplot::draw_plot(.plot_all+theme(axis.title.x = element_blank()),x=0,y=0.07,width=0.36,height=0.93)+
cowplot::draw_plot(.plot_by + theme(axis.title = element_blank(),axis.text.y = element_text(size=10),
strip.text = element_text(size=8,margin=margin(1,0,1,0))),
x=0.37,y=0.07,width=0.63,height=0.93)+
cowplot::draw_text("Number of rare truncating and splice variants in TSG",x=0.5,y=0.05,size=20)+
cowplot::draw_plot_label("a",x=0.01,y=0.99,hjust = 0,vjust = 1,size = 20)+
cowplot::draw_plot_label("b",x=0.36,y=0.99,hjust = 0,vjust = 1,size = 20)
ggsave("age_plot/fig/truncate/all_race/TSG_rare.pdf",.plot,width = 14,height = 8)
if(0){
  # What if BRCA1/2 are excluded? (BRCA1: 76 (12.8%), BRCA2: 260 (43.8%), all TSG: 594)
  # patients carrying a BRCA1/2 truncating or splice variant
brca_paid = all_maf_for_cumulative %>>%
filter(chr!="chrX",FILTER=="PASS")%>>%
left_join(driver_genes %>>%dplyr::select(gene_symbol,role),by="gene_symbol") %>>%
filter(mutype=="truncating"|mutype=="splice") %>>%
filter(role=="TSG"|role=="oncogene/TSG")%>>%
count(cancer_type,patient_id,gene_symbol) %>>%
filter(gene_symbol=="BRCA1"|gene_symbol=="BRCA2")%>>%dplyr::select(cancer_type,patient_id)
.plot_all = truncating_count%>>%anti_join(brca_paid)%>%
truncate_plot_allcantype(.permu_file = "TSG/truncate_notbrca.tsv")
ggsave("age_plot/cumulative/tsg/all_race/truncating_notbrca.pdf",.plot_all,height = 5,width = 5)
.plot_by = truncating_count%>>%anti_join(brca_paid)%>%
truncate_plot_bycantype(.permu_file = "TSG/truncate_notbrca_byCT.tsv")
ggsave("age_plot/cumulative/tsg/all_race/truncating_notbrca_by_cancerype.pdf",.plot_by,height = 10,width = 10)
  # t-test comparing patients with vs without a TSG truncating variant: p_value = 0.2227
truncating_count_nbr=truncating_count%>>%anti_join(brca_paid)
t.test(truncating_count_nbr[truncating_count_nbr$truncating_count_n>0,]$age/365.25,
truncating_count_nbr[truncating_count_nbr$truncating_count_n==0,]$age/365.25,alternative="less")
.plot = cowplot::ggdraw()+
cowplot::draw_plot(.plot_all+theme(axis.title.x = element_blank()),x=0,y=0.07,width=0.36,height=0.93)+
cowplot::draw_plot(.plot_by + theme(axis.title = element_blank(),axis.text.y = element_text(size=10),
strip.text = element_text(size=8,margin=margin(1,0,1,0))),
x=0.37,y=0.07,width=0.63,height=0.93)+
cowplot::draw_text("Number of truncating and splice variants",x=0.5,y=0.05,size=20)+
cowplot::draw_plot_label("a",x=0.01,y=0.99,hjust = 0,vjust = 1,size = 20)+
cowplot::draw_plot_label("b",x=0.36,y=0.99,hjust = 0,vjust = 1,size = 20)
ggsave("age_plot/fig/truncate/all_race/allTSG_notbrca.pdf",.plot,width = 14,height = 8)
}
#################################################################################################################
################################################## oncogene #####################################################
#################################################################################################################
###### truncating ########
truncating_count_onco_rare = all_maf_for_cumulative %>>%
filter(chr!="chrX",FILTER=="PASS")%>>%
filter(MAF < 0.0005)%>>%
left_join(driver_genes %>>%dplyr::select(gene_symbol,role),by="gene_symbol") %>>%
filter(mutype=="truncating"|mutype=="splice") %>>%
filter(role=="oncogene"|role=="oncogene/TSG")%>>%
#count(cancer_type,patient_id,gene_symbol) %>>%dplyr::select(-n)%>>%
group_by(cancer_type,patient_id) %>>%summarise(truncating_count_n=n())%>>%ungroup()%>>%
right_join(patient_tdg)%>>%
mutate(age=round(age/365.25*100)/100) %>>%
mutate(truncating_count_n=ifelse(is.na(truncating_count_n),0,truncating_count_n))
.plot_all = truncating_count_onco_rare %>>%
truncate_plot_allcantype(.permu_file = "oncogene/truncate_rare.tsv")
ggsave("age_plot/cumulative/oncogene/all_race/truncating_rare.pdf",.plot_all,height = 5,width = 5)
.plot_by = truncating_count_onco_rare %>>%
truncate_plot_bycantype(.permu_file = "oncogene/truncate_rare_byCT.tsv")
ggsave("age_plot/cumulative/oncogene/all_race/truncating_rare_byCT.pdf",.plot_by,height = 10,width = 10)
# t-test comparing patients with vs without an oncogene truncating variant: p-value = 0.2958
t.test(truncating_count_onco_rare[truncating_count_onco_rare$truncating_count_n>0,]$age/365.25,
truncating_count_onco_rare[truncating_count_onco_rare$truncating_count_n==0,]$age/365.25,alternative="less")
.plot = cowplot::ggdraw()+
cowplot::draw_plot(.plot_all+theme(axis.title.x = element_blank()),x=0,y=0.07,width=0.36,height=0.93)+
cowplot::draw_plot(.plot_by + theme(axis.title = element_blank(),axis.text.y = element_text(size=10),
strip.text = element_text(size=8,margin=margin(1,0,1,0))),
x=0.37,y=0.07,width=0.63,height=0.93)+
cowplot::draw_text("Number of rare truncating and splice variants in oncogene",x=0.5,y=0.05,size=20)+
cowplot::draw_plot_label("a",x=0.01,y=0.99,hjust = 0,vjust = 1,size = 20)+
cowplot::draw_plot_label("b",x=0.36,y=0.99,hjust = 0,vjust = 1,size = 20)
ggsave("age_plot/fig/truncate/all_race/oncogene_rare.pdf",.plot,width = 10,height = 10)
if(1){
truncating_count_onco = all_maf_for_cumulative %>>%
filter(chr!="chrX",FILTER=="PASS")%>>%
left_join(driver_genes %>>%dplyr::select(gene_symbol,role),by="gene_symbol") %>>%
filter(mutype=="truncating"|mutype=="splice") %>>%
filter(role=="oncogene"|role=="oncogene/TSG")%>>%
count(cancer_type,patient_id,gene_symbol) %>>% dplyr::select(-n)%>>%
group_by(cancer_type,patient_id) %>>%summarise(truncating_count_n=n())%>>%ungroup()%>>%
right_join(patient_tdg)%>>%
mutate(age=round(age/365.25*100)/100) %>>%
mutate(truncating_count_n=ifelse(is.na(truncating_count_n),0,truncating_count_n))
.plot_all = truncating_count_onco %>>%
truncate_plot_allcantype(.permu_file = "oncogene/truncate.tsv")
ggsave("age_plot/cumulative/oncogene/all_race/truncating.pdf",.plot_all,height = 5,width = 5)
.plot_by = truncating_count_onco %>>%
truncate_plot_bycantype(.permu_file = "oncogene/truncate_byCT.tsv")
ggsave("age_plot/cumulative/oncogene/all_race/truncating_byCT.pdf",.plot_by,height = 10,width = 10)
  # t-test comparing patients with vs without an oncogene truncating variant: p-value = 0.2958
t.test(truncating_count_onco[truncating_count_onco$truncating_count_n>0,]$age/365.25,
truncating_count_onco[truncating_count_onco$truncating_count_n==0,]$age/365.25,alternative="less")
.plot = cowplot::ggdraw()+
cowplot::draw_plot(.plot_all+theme(axis.title.x = element_blank()),x=0,y=0.07,width=0.36,height=0.93)+
cowplot::draw_plot(.plot_by + theme(axis.title = element_blank(),axis.text.y = element_text(size=10),
strip.text = element_text(size=8,margin=margin(1,0,1,0))),
x=0.37,y=0.07,width=0.63,height=0.93)+
cowplot::draw_text("Number of truncating and splice variants in oncogene",x=0.5,y=0.05,size=20)+
cowplot::draw_plot_label("a",x=0.01,y=0.99,hjust = 0,vjust = 1,size = 20)+
cowplot::draw_plot_label("b",x=0.36,y=0.99,hjust = 0,vjust = 1,size = 20)
ggsave("age_plot/fig/truncate/all_race/alloncogene.pdf",.plot,width = 10,height = 10)
}
|
/cumulative_analysis/top_driver_genes/truncating_cumulative_allrace.R
|
no_license
|
Kazuki526/tcga_germ
|
R
| false
| false
| 12,216
|
r
|
setwd("D://")
getwd()
df<-read.csv("Demog_Data.csv")
summary(df)
library(readr)
head(df)
demo<- df
head(demo)
tail(demo )
rm(df)
# Ctrl+L clears the console
dim(demo) # dim function is for checking dimension
names(demo) # to check the names of the dataframe (only for dataframe)
colnames(demo) # colnames can be applied to table and matrix as well along with dataframe
calories <- demo$Cal..Intake..Cal.
names(demo)
colnames(demo)[4] <- "cal" # to change the name of the columns number wise
names(demo)
colnames(demo)[5] <- "Weight"
names(demo)
names(demo)[names(demo) == "Age_Category"] <- "Age_Cat" # to change the column name by entering the name
names(demo)
class(demo$Gender)
class(demo$Age_Cat)
summary(demo$Gender)
# A character is taken as a nominal variable
# R do not understand character variable so we have to convert them into factor
demo$Gender <- as.character(demo$Gender) # to change into character
class(demo$Gender)
demo$Gender <- as.factor(demo$Gender) # to change into factor
class(demo$Gender)
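# Illustrative toy example: a factor stores characters as integer codes over
# alphabetically sorted levels
x <- factor(c("M", "F", "M"))
levels(x)     # "F" "M"
as.integer(x) # 2 1 2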
head(demo,20)
# how to change the value of a variable in the dataset
edit(demo)
head(demo ,20)
# it will change the value in data only and will not fix the changed variable
fix(demo)
head(demo ,20)
# to change the value of the data permanently in the main data
demo_1<- edit(demo) # to make change in main data and make new data by making changes
rm(demo_1)
demo[4,5] # value of 4th row and 5th column
demo[19,4] # value is 2000
demo[19,4] <- 1700 # to change the value directly by row and column
demo[19,4] # value has become 1700
demo[1:10, 1:3]
demo[c(5,8,75), c(3,5)] # to see the 5th,8th,75th and 3rd, 5th column
summary(demo)
library(psych)
describe(demo) # from the psych package
describe(demo$cal)
mean(demo$Weight)
mean(demo$Weight, trim = 0.5) # trim = 0.5 discards all but the middle value(s), i.e. the median
str(demo) # to see the structure of the data
summary(demo$Age_Cat)
table(demo$Gender, demo$Age_Cat) # to check cross tabulation of categorical variables
tapply(demo$Weight,demo$Gender, FUN = mean) # mean Weight within each Gender group
tapply(demo$Weight,demo$Gender, FUN = function(x) {sd(x/mean(x))}) # custom anonymous function: coefficient of variation per group
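# Illustrative toy example (not the course data): tapply splits its first
# argument by the grouping factor and applies FUN within each group
tapply(c(1, 2, 10, 20), factor(c("a", "a", "b", "b")), FUN = mean) # a = 1.5, b = 15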
# Second day
|
/Marketing_analytics 1.R
|
no_license
|
Gauravyadav2806/New-file
|
R
| false
| false
| 2,216
|
r
|
setwd("D://")
getwd()
df<-read.csv("Demog_Data.csv")
summary(df)
library(readr)
head(df)
demo<- df
head(demo)
tail(demo )
rm(df)
# cntrl L to remove everything from console
dim(demo) # dim function is for checking dimension
names(demo) # to check the names of the dataframe (only for dataframe)
colnames(demo) # colnames can be applied to table and matrix as well along with dataframe
calories <- demo$Cal..Intake..Cal.
names(demo)
colnames(demo)[4] <- "cal" # to change the name of the columns number wise
names(demo)
colnames(demo)[5] <- "Weight"
names(demo)demo$Age_Cat <- "Age_Category" # to change the column name by entering the name
names(demo)
class(demo$Gender)
class(demo$Age_Cat)
summary(demo$Gender)
# A character is taken as a nominal variable
# R do not understand character variable so we have to convert them into factor
demo$Gender <- as.character(demo$Gender) # to change into character
class(demo$Gender)
demo$Gender <- as.factor(demo$Gender) # to change into factor
class(demo$Gender)
head(demo,20)
# how to change the value of a variable in the dataset
edit(demo)
head(demo ,20)
# it will change the value in data only and will not fix the changed variable
fix(demo)
head(demo ,20)
# to change the value of the data peramanently in the main data
demo_1<- edit(demo) # to make change in main data and make new data by making changes
rm(demo_1)
demo[4,5] # value of 4th row and 5th column
demo[19,4] # value is 2000
demo[19,4] <- 1700 # to change the value directly by row and column
demo[19,4] # value has become 1700
demo[1:10, 1:3]
demo[c(5,8,75), c(3,5)] # to see the 5th,8th,75th and 3rd, 5th column
summary(demo)
library(psych)
describe(demo) # library pysch is used
describe(demo$cal)
mean(demo$Weight)
mean(demo$Weight, trim = 0.5)
str(demo) # to see the structure of the data
summary(demo$Age_Cat)
table(demo$Gender, demo$Age_Cat) # to check cross tabulation of categorical variables
tapply(demo$Weight,demo$Gender, FUN = mean) # to find the mean of two different varibales
tapply(demo$Weight,demo$Gender, FUN = function(x) {sd(x/mean(x))}) # to make the function according to the need
# Second day
|
source('fonctions-tp3/ceuc.R')
source('fonctions-tp3/distXY.R')
source('fonctions-tp3/front.ceuc.R')
source('fonctions-tp3/front.kppv.R')
source('fonctions-tp3/kppv.R')
source('fonctions-tp3/separ1.R')
source('fonctions-tp3/separ2.R')
source('fonctions-tp3/EspVarPI.R')
source('fonctions-tp3/errorCEUC.R')
source('fonctions-tp3/errorKPPV.R')
source('fonctions-tp3/Intervalle.R')
data40 <- read.csv("data/Synth1-40.csv", header=T)
data100 <- read.csv("data/Synth1-100.csv", header=T)
data500 <- read.csv("data/Synth1-500.csv", header=T)
data1000 <- read.csv("data/Synth1-1000.csv", header=T)
X <- data40[,-3]
z <- data40[,3]
# Euclidean classifier
print("Classifieur Euclidien")
donn.sep <- separ1(X, z)
Xapp <- donn.sep$Xapp
zapp <- donn.sep$zapp
Xtst <- donn.sep$Xtst
ztst <- donn.sep$ztst
mu <- ceuc.app(Xapp, zapp) # learn the class centres (ceuc.app is assumed to be defined in fonctions-tp3/ceuc.R)
print("mu =")
print(mu)
zpred <- ceuc.val(mu, Xtst)
print("zpred =")
print(zpred)
front.ceuc(mu, X,z, 1000)
# K nearest neighbours
print("K plus proches voisins")
donn.sep <- separ2(X, z)
Xapp <- donn.sep$Xapp
zapp <- donn.sep$zapp
Xval <- donn.sep$Xval
zval <- donn.sep$zval
Xtst <- donn.sep$Xtst
ztst <- donn.sep$ztst
Kopt <- kppv.tune(Xapp, zapp, Xval, zval, 1:10)
print("Kopt =")
print(Kopt)
zpred <- kppv.val(Xapp, zapp, 2, Xtst)
print("zpred =")
print(zpred)
front.kppv(Xapp, zapp, Kopt)
|
/TP03/fonctions-tp3/script1.R
|
no_license
|
ValentinMntp/SY09-Data-Mining
|
R
| false
| false
| 1,299
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/dfareporting_functions.R
\name{cities.list}
\alias{cities.list}
\title{Retrieves a list of cities, possibly filtered.}
\usage{
cities.list(profileId, countryDartIds = NULL, dartIds = NULL,
namePrefix = NULL, regionDartIds = NULL)
}
\arguments{
\item{profileId}{User profile ID associated with this request}
\item{countryDartIds}{Select only cities from these countries}
\item{dartIds}{Select only cities with these DART IDs}
\item{namePrefix}{Select only cities with names starting with this prefix}
\item{regionDartIds}{Select only cities from these regions}
}
\description{
Autogenerated via \code{\link[googleAuthR]{gar_create_api_skeleton}}
}
\details{
Authentication scopes used by this function are:
\itemize{
\item https://www.googleapis.com/auth/dfatrafficking
}
Set \code{options(googleAuthR.scopes.selected = c("https://www.googleapis.com/auth/dfatrafficking"))}
Then run \code{googleAuthR::gar_auth()} to authenticate.
See \code{\link[googleAuthR]{gar_auth}} for details.
}
\seealso{
\href{https://developers.google.com/doubleclick-advertisers/}{Google Documentation}
}
|
/googledfareportingv26.auto/man/cities.list.Rd
|
permissive
|
GVersteeg/autoGoogleAPI
|
R
| false
| true
| 1,165
|
rd
|
### merge results
library(tidyverse)
library(tidyr)
### work flow
## upload time stats
## converge all nodes per species
## converge all species
## calculate suitability per node position
## calculate suitability per node (if 1 position suitable, node is suitable)
# setwd("/Users/katieirving/Documents/git/SOC_tier_3/output_data")
## water year types
getwd()
### santa ana sucker
## velocity
## upload all time stats csvs
## time stats
ts <- list.files("output_data/", pattern="time_stats")
length(ts) ## 219
ts
ts <- Filter(function(x) grepl("StreamPower", x), ts)
ts <- Filter(function(x) grepl("Adult", x), ts)
ts <- Filter(function(x) grepl("Willow", x), ts)
time_statsx <- NULL
for(j in 1: length(ts)) {
time_stats <- read.csv(file=paste("output_data/", ts[j], sep=""))
######## juvenile
time_stats <- time_stats %>%
select(-X) %>%
# filter(Probability_Threshold == ann_stats) %>%
rename(TimePercentage = value, TimePeriod2 = Probability_Threshold) %>%
# mutate(Species = "Willow", Life_Stage = "Seedling", Hydraulic = "Depth", Node = paste(stringx[3])) %>%
distinct()
time_statsx <- rbind(time_statsx, time_stats)
}
head(time_stats)
unique(time_statsx$TimePeriod2)
## change time period to seasonal
time_stats_seas <- time_statsx %>%
filter(TimePeriod2 == "Annual") %>%
distinct()
head(time_stats_seas)
## calculate suitability
time_stats_seas <- time_stats_seas %>%
mutate(Suitability_Class = NA)
# group_by(Node, position, Species, Life_Stage, water_year) %>%
probs <- seq(1, dim(time_stats_seas)[1], 1)
for(p in 1: length(probs)) {
time_stats_seas$Suitability_Class[p] = if(time_stats_seas$TimePercentage[p] >= 50) {
paste("High")
} else if(time_stats_seas$TimePercentage[p] >= 25 & time_stats_seas$TimePercentage[p] <= 50){
paste("Partial")
} else if(time_stats_seas$TimePercentage[p] < 25){
paste("Low")
} else {
paste("Partial")
}
}
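# Vectorised equivalent (sketch, same thresholds as the loop above):
# cut() assigns the same classes without the element-wise if/else chain.
# time_stats_seas$Suitability_Class <- as.character(cut(
#   time_stats_seas$TimePercentage,
#   breaks = c(-Inf, 25, 50, Inf),
#   labels = c("Low", "Partial", "High"),
#   right = FALSE
# ))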
time_stats_seas
head(time_stats_seas)
## join back together and save
time_stats_all <-time_stats_seas
write.csv(time_stats_all, "/Users/katieirving/Documents/git/SOC_tier_3/output_data/results/Willow_Adult_StreamPower_time_stats_50.csv")
# Days per month ----------------------------------------------------------
### days per month
td <- list.files("output_data/", pattern="total_days")
length(td) ## 153
td <- Filter(function(x) grepl("StreamPower", x), td)
td <- Filter(function(x) grepl("Adult", x), td)
td <- Filter(function(x) grepl("Willow", x), td)
td
total_daysx <- NULL
for(j in 1: length(td)) {
total_days <- read.csv(file=paste("output_data/", td[j], sep=""))
######## juvenile
total_days <- total_days %>%
select(-X) %>%
# filter(Probability_Threshold == ann_stats) %>%
rename(TimePeriod2 = Probability_Threshold, DaysPerMonth = n_days) %>%
# mutate(Species = "Willow", Life_Stage = "Seedling", Hydraulic = "Depth", Node = paste(stringx[2])) %>%
distinct()
total_daysx <- rbind(total_daysx, total_days)
}
head(total_days)
## change time period to seasonal and add bottom and water year type
total_days_seas <- total_daysx
total_days_seas <- total_days_seas %>%
mutate(Suitability_Class = NA)
probs <- seq(1, dim(total_days_seas)[1], 1)
for(p in 1: length(probs)) {
total_days_seas$Suitability_Class[p] = if(total_days_seas$DaysPerMonth[p] >= 14) {
paste("High")
} else if(total_days_seas$DaysPerMonth[p] >= 7 & total_days_seas$DaysPerMonth[p] <= 14 ){
paste("Partial")
} else if(total_days_seas$DaysPerMonth[p] < 7){
paste("Low")
} else {
paste("Partial")
}
}
total_days_seas
### bind together and save
total_days_all <- total_days_seas
write.csv(total_days_all,"/Users/katieirving/Documents/git/SOC_tier_3/output_data/results/Willow_Adult_StreamPower_total_days_50.csv")
total_days_all
|
/code/S1e_suitability_willow_streampower_50.R
|
no_license
|
ksirving/SOC_tier_3
|
R
| false
| false
| 3,854
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mllomax.R
\name{mllomax}
\alias{mllomax}
\title{Lomax distribution maximum likelihood estimation}
\usage{
mllomax(x, na.rm = FALSE, ...)
}
\arguments{
\item{x}{a (non-empty) numeric vector of data values.}
\item{na.rm}{logical. Should missing values be removed?}
\item{...}{\code{lambda0} an optional starting value for the \code{lambda} parameter.
Defaults to \code{median(x)}. \code{rel.tol} is the relative accuracy requested,
defaults to \code{.Machine$double.eps^0.25}. \code{iterlim} is a positive integer
specifying the maximum number of iterations to be performed before the
program is terminated (defaults to \code{100}).}
}
\value{
\code{mllomax} returns an object of \link[base:class]{class} \code{univariateML}.
This is a named numeric vector with maximum likelihood estimates for
\code{lambda} and \code{kappa} and the following attributes:
\item{\code{model}}{The name of the model.}
\item{\code{density}}{The density associated with the estimates.}
\item{\code{logLik}}{The loglikelihood at the maximum.}
\item{\code{support}}{The support of the density.}
\item{\code{n}}{The number of observations.}
\item{\code{call}}{The call as captured by \code{match.call}}
}
\description{
Uses Newton-Raphson to estimate the parameters of the Lomax distribution.
}
\details{
For the density function of the Lomax distribution see
\link[extraDistr:Lomax]{Lomax}. The maximum likelihood estimate will frequently
fail to exist. This is due to the parameterization of the function which
does not take into account that the density converges to an exponential
along certain values of the parameters, see
\code{vignette("Distribution Details", package = "univariateML")}.
}
\examples{
set.seed(3)
mllomax(extraDistr::rlomax(100, 2, 4))
}
\references{
Kleiber, Christian; Kotz, Samuel (2003), Statistical Size
Distributions in Economics and Actuarial Sciences, Wiley Series in
Probability and Statistics, 470, John Wiley & Sons, p. 60
}
\seealso{
\link[extraDistr:Lomax]{Lomax} for the Lomax density.
}
|
/man/mllomax.Rd
|
no_license
|
cran/univariateML
|
R
| false
| true
| 2,134
|
rd
|
Mydata <- data.frame(
attr = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
type = c(1, 3, 1, 1, 1, 5, 1, 1, 8, 1)
)
split(Mydata, cumsum(Mydata$type != 1))
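# How the grouping works: cumsum(Mydata$type != 1) increments at every
# non-1 value, so each non-1 row starts a new group.
cumsum(Mydata$type != 1) # 0 1 1 1 1 2 2 2 3 3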
|
/stackoverflow/split_neq1.R
|
no_license
|
Zedseayou/reprexes
|
R
| false
| false
| 148
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/findLocalMax.R
\name{findLocalMax}
\alias{findLocalMax}
\alias{findLocalMax.EEM}
\alias{findLocalMax.matrix}
\alias{findLocalMax.numeric}
\title{Find local maximum peaks}
\usage{
findLocalMax(data, ...)
\method{findLocalMax}{EEM}(data, n, threshold = 0.7, showprint = TRUE, ...)
\method{findLocalMax}{matrix}(data, n, threshold = 0.7, showprint = TRUE,
...)
\method{findLocalMax}{numeric}(data, threshold = 0.7, showprint = TRUE, ...)
}
\arguments{
\item{data}{EEM data generated by \code{\link{readEEM}} function, unfolded EEM data
generated by \code{\link{unfold}} function or a vector of numeric values which have names in
the format of EX...EM...}
\item{...}{(optional) further arguments passed to other methods}
\item{n}{sample number. The number should not exceed \code{length(EEM)}.}
\item{threshold}{threshold value in between 0 and 1. Lower the value to cover low peaks.}
\item{showprint}{logical value whether to print out the results or not}
}
\value{
return a character vector of peak names. If showprint = TRUE, it will also
print a data frame indicating the values of the local maximum peaks.
}
\description{
Find local maximum peaks in EEM data
}
\section{Methods (by class)}{
\itemize{
\item \code{EEM}: for EEM data created by \code{\link{readEEM}} function
\item \code{matrix}: for unfolded EEM data created by \code{\link{unfold}} function
\item \code{numeric}: for a vector of numeric values which have names in
the format of EX...EM...
}}
\examples{
data(applejuice)
findLocalMax(applejuice, 1)
applejuice_uf <- unfold(applejuice)
findLocalMax(applejuice_uf, 1)
}
|
/man/findLocalMax.Rd
|
no_license
|
cran/EEM
|
R
| false
| true
| 1,737
|
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/funcDT.R
\name{isLogicalDT}
\alias{isLogicalDT}
\title{Testing if a set of columns of a data.table object corresponds to the logical/boolean data type}
\usage{
isLogicalDT(inputDT, colNamesToBeChecked = NULL, returnNames = FALSE)
}
\arguments{
\item{inputDT}{data.table object containing the data of interest. This is an obligatory argument, without default value.}
\item{colNamesToBeChecked}{Character vector containing potential column names of the 'inputDT' argument. The default value is NULL.}
\item{returnNames}{Logical vector of length 1 indicating whether or not the column name of the selected booleans should be returned. The default value is FALSE.}
}
\value{
A logical vector whose length equals the size of the 'colNamesToBeChecked' argument (or, when no value is supplied, the number of columns of the 'inputDT' argument), and that is TRUE if the corresponding column of the 'inputDT' argument is a boolean. If the 'returnNames' argument equals TRUE, then only those column names from the aforementioned selection of columns of the 'inputDT' argument are returned that are a boolean.
}
\description{
Testing if a set of columns of a data.table object corresponds to the logical/boolean data type
}
\examples{
library(data.table)
inputDT <- as.data.table(data.frame(x = rep(c(TRUE, FALSE), 5), y = LETTERS[1:10]))
asFactorDT(inputDT, c('y'))
isLogicalDT(inputDT)
isLogicalDT(inputDT, c('x', 'y'))
isLogicalDT(inputDT, returnNames = TRUE)
isLogicalDT(inputDT, 'x')
\donttest{isLogicalDT(inputDT, c('x', 'y1'))}
}
|
/man/isLogicalDT.Rd
|
no_license
|
cran/R2DT
|
R
| false
| true
| 1,622
|
rd
|
library(magrittr)
dia <- "2021-07-17"
url_base <- "https://mananciais.sabesp.com.br/api/"
endpoint <- "Mananciais/ResumoSistemas/"
u <- paste0(url_base, endpoint, dia)
httr::GET(
"https://mananciais.sabesp.com.br/api/Mananciais/ResumoSistemas/2021-07-17",
httr::config(ssl_verifypeer = FALSE)
)
## doesn't work :(
# r <- httr::GET(u)
## Now it works :sunglasses:
(r <- httr::GET(u, httr::config(ssl_verifypeer = FALSE)))
dados <- r %>%
httr::content(simplifyDataFrame = TRUE) %>%
purrr::pluck("ReturnObj", "sistemas") %>%
tibble::as_tibble() %>%
janitor::clean_names()
x <- r %>%
httr::content(simplifyDataFrame = TRUE)
x$ReturnObj$sistemas
# equivalent to
x %>% purrr::pluck("ReturnObj", "sistemas")
# how do I build a dataset out of this?
dias <- Sys.Date() - 1:10
names(dias) <- dias
baixa_e_processa_sabesp <- function(dia) {
url_base <- "https://mananciais.sabesp.com.br/api/"
endpoint <- "Mananciais/ResumoSistemas/"
u <- paste0(url_base, endpoint, dia)
r <- httr::GET(u, httr::config(ssl_verifypeer = FALSE))
dados <- r %>%
httr::content(simplifyDataFrame = TRUE) %>%
purrr::pluck("ReturnObj", "sistemas") %>%
tibble::as_tibble() %>%
janitor::clean_names()
dados
}
da_sabesp <- purrr::map_dfr(
dias,
baixa_e_processa_sabesp,
.id = "data"
)
# now let's do the download and the processing separately
baixar_sabesp <- function(dia, path) {
url_base <- "https://mananciais.sabesp.com.br/api/"
endpoint <- "Mananciais/ResumoSistemas/"
u <- paste0(url_base, endpoint, dia)
r <- httr::GET(
u,
httr::config(ssl_verifypeer = FALSE),
httr::write_disk(paste0(path, dia, ".json"))
)
}
processar_sabesp <- function(arquivo) {
dados <- arquivo %>%
jsonlite::read_json(simplifyDataFrame = TRUE) %>%
purrr::pluck("ReturnObj", "sistemas") %>%
tibble::as_tibble() %>%
janitor::clean_names()
dados
}
path <- "output/sabesp/"
fs::dir_create(path)
# step 1: download
purrr::map(dias, baixar_sabesp, path)
# step 2: process
da_sabesp <- fs::dir_ls(path) %>%
purrr::map_dfr(processar_sabesp, .id = "data")
|
/praticas/02-sabesp.R
|
no_license
|
arpanosso/webscraping
|
R
| false
| false
| 2,090
|
r
|
# Replication and updating of Rigby & Dorling 2007, Mortality in relation to sex in the affluent world
rm(list=ls())
# Prerequisite packages ---------------------------------------------------
require(readr)
require(readxl)
require(plyr)
require(stringr)
require(tidyr)
require(dplyr)
require(lattice)
require(latticeExtra)
require(grid)
require(ggplot2)
require(RColorBrewer) # brewer.pal() is used below (also attached via latticeExtra, but made explicit here)
# Data --------------------------------------------------------------------
dta <- read_csv("data/counts.csv")
original_countries <- read_excel(
"support/replication_details.xlsx",
sheet = "original_country_selection"
)
code_to_country_lookup <- read_excel(
"support/replication_details.xlsx",
sheet="code_to_country_lookup"
)
code_selection <- code_to_country_lookup %>%
filter(in_original_selection == 1) %>%
.$code
# Derived data ------------------------------------------------------------
dta_selection <- dta %>%
filter(country %in% code_selection & sex !="total") %>%
group_by(year, age, sex) %>%
summarise(
population_count = sum(population_count),
death_count = sum(death_count)
) %>%
filter(year >= 1850 & year <=2000)
# Original figure 1 -------------------------------------------------------
dta_ratios <- dta_selection %>%
mutate(death_rate = death_count / population_count) %>%
select(year, age, sex, death_rate) %>%
spread(key=sex, value = death_rate) %>%
mutate(sex_ratio = male/ female) %>%
select(year, age , sex_ratio) %>%
filter(age <= 100) %>%
mutate(sex_ratio = ifelse(
sex_ratio > 3.4, 3.399,
ifelse(
sex_ratio < 0.9, 0.901,
sex_ratio
)
)
)
p <- contourplot(
sex_ratio ~ year * age,
data=dta_ratios ,
region=T,
par.strip.text=list(cex=1.4, fontface="bold"),
ylab=list(label="Age in years", cex=1.4),
xlab=list(label="Year", cex=1.4),
cex=1.4,
at=seq(from=0.9, to=3.4, by=0.1),
col.regions=colorRampPalette(rev(brewer.pal(6, "Spectral")))(200),
main=NULL,
labels=FALSE,
col="black",
scales=list(
x=list(at = seq(from = 1850, to = 2000, by = 10)),
y=list(at = seq(from = 0, to = 100, by = 10))
)
)
png(
"figures/original/figure1_sex_ratio.png",
res=300,
height=20, width=20, units="cm"
)
print(p)
dev.off()
# Original Figure 2 -------------------------------------------------------
dta_ratios <- dta_selection %>%
mutate(death_rate = death_count / population_count) %>%
select(year, age, sex, death_rate) %>%
spread(key=sex, value = death_rate) %>%
mutate(sex_ratio = male/ female) %>%
select(year, age , sex_ratio) %>%
filter(age <= 100)
fn <- function(x){
  # 5-year moving geometric mean of the sex ratio, then the ratio of
  # consecutive smoothed values (a discrete first derivative over age)
smoothed_ratio <- rep(1, 101)
for (i in 3:98){
# Replication and updating of Rigby & Dorling 2007, Mortality in relation to sex in the affluent world
rm(list=ls())
# Prerequisite packages ---------------------------------------------------
require(readr)
require(readxl)
require(plyr)
require(stringr)
require(tidyr)
require(dplyr)
require(lattice)
require(latticeExtra)
require(RColorBrewer) # provides brewer.pal() used in the contourplot colour ramps
require(grid)
require(ggplot2)
# Data --------------------------------------------------------------------
dta <- read_csv("data/counts.csv")
original_countries <- read_excel(
"support/replication_details.xlsx",
sheet = "original_country_selection"
)
code_to_country_lookup <- read_excel(
"support/replication_details.xlsx",
sheet="code_to_country_lookup"
)
code_selection <- code_to_country_lookup %>%
filter(in_original_selection == 1) %>%
.$code
# Derived data ------------------------------------------------------------
dta_selection <- dta %>%
filter(country %in% code_selection & sex !="total") %>%
group_by(year, age, sex) %>%
summarise(
population_count = sum(population_count),
death_count = sum(death_count)
) %>%
filter(year >= 1850 & year <=2000)
# Original figure 1 -------------------------------------------------------
dta_ratios <- dta_selection %>%
mutate(death_rate = death_count / population_count) %>%
select(year, age, sex, death_rate) %>%
spread(key=sex, value = death_rate) %>%
mutate(sex_ratio = male/ female) %>%
select(year, age , sex_ratio) %>%
filter(age <= 100) %>%
mutate(sex_ratio = ifelse(
sex_ratio > 3.4, 3.399,
ifelse(
sex_ratio < 0.9, 0.901,
sex_ratio
)
)
)
p <- contourplot(
sex_ratio ~ year * age,
data=dta_ratios ,
region=T,
par.strip.text=list(cex=1.4, fontface="bold"),
ylab=list(label="Age in years", cex=1.4),
xlab=list(label="Year", cex=1.4),
cex=1.4,
at=seq(from=0.9, to=3.4, by=0.1),
col.regions=colorRampPalette(rev(brewer.pal(6, "Spectral")))(200),
main=NULL,
labels=FALSE,
col="black",
scales=list(
x=list(at = seq(from = 1850, to = 2000, by = 10)),
y=list(at = seq(from = 0, to = 100, by = 10))
)
)
png(
"figures/original/figure1_sex_ratio.png",
res=300,
height=20, width=20, units="cm"
)
print(p)
dev.off()
# Original Figure 2 -------------------------------------------------------
dta_ratios <- dta_selection %>%
mutate(death_rate = death_count / population_count) %>%
select(year, age, sex, death_rate) %>%
spread(key=sex, value = death_rate) %>%
mutate(sex_ratio = male/ female) %>%
select(year, age , sex_ratio) %>%
filter(age <= 100)
fn <- function(x){
smoothed_ratio <- rep(1, 101)
for (i in 3:98){
smoothed_ratio[i] <- prod(x$sex_ratio[(i-2):(i+2)]) ^ (1/5)
}
smoothed_ratio[-1] <- smoothed_ratio[-1]/smoothed_ratio[-101]
out <- data.frame(x, smoothed_ratio = smoothed_ratio)
return(out)
}
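As a sanity check, the 5-term geometric-mean smoothing used in `fn()` can be exercised on a toy vector (the values below are made up purely for illustration):

```r
# Toy check of the 5-term geometric-mean smoother (hypothetical values)
toy_ratio <- c(1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6)
smoothed <- rep(1, length(toy_ratio))
for (i in 3:(length(toy_ratio) - 2)) {
  smoothed[i] <- prod(toy_ratio[(i - 2):(i + 2)]) ^ (1 / 5)
}
# Each interior element is the geometric mean of its five-term window
all.equal(smoothed[3], prod(toy_ratio[1:5]) ^ (1 / 5))
```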
dta_ratios_fd <- dta_ratios %>%
group_by(year) %>%
do(fn(.)) %>%
mutate(smoothed_ratio = ifelse(
smoothed_ratio > 1.10, 1.099,
ifelse(
smoothed_ratio < 0.95, 0.951,
smoothed_ratio
)
)
)
p <- contourplot(
smoothed_ratio ~ year * age,
data=dta_ratios_fd ,
region=T,
par.strip.text=list(cex=1.4, fontface="bold"),
ylab=list(label="Age in years", cex=1.4),
xlab=list(label="Year", cex=1.4),
cex=1.4,
at=seq(from=0.95, to=1.10, by=0.01),
col.regions=colorRampPalette(rev(brewer.pal(6, "Spectral")))(200),
main=NULL,
labels=FALSE,
col="black",
scales=list(
x=list(at = seq(from = 1850, to = 2000, by = 10)),
y=list(at = seq(from = 0, to = 100, by = 10))
)
)
png(
"figures/original/figure2_smoothed_first_derivative.png",
res=300,
height=20, width=20, units="cm"
)
print(p)
dev.off()
# Original Table 2 --------------------------------------------------------
dta_selection %>%
filter(sex !="total" & year %in%
c(1910, 1919, 1930, 1940,
1950, 1960, 1970, 1980, 1990)) %>%
group_by(year, sex) %>%
summarise(
population_count = sum(population_count),
death_count = sum(death_count)
) %>%
mutate(
rate_per_thousand = 1000 * death_count / population_count
) %>%
select(-population_count, -death_count) %>%
spread(key=sex, value = rate_per_thousand) %>%
write.table(., file="clipboard")
# Original figure 3 -------------------------------------------------------
dta_selection <- dta %>%
filter(country %in% code_selection & sex !="total") %>%
group_by(year, age, sex) %>%
summarise(
population_count = sum(population_count),
death_count = sum(death_count)
) %>%
filter(year >= 1841 & year <=2000)
dta_selection %>%
filter(age %in% c(10, 30)) %>%
mutate(
mortality_rate = 1000 * death_count / population_count,
grp = paste(sex, "aged", age)
) %>%
ggplot(.) +
geom_line(aes(x=year, y= mortality_rate, group=grp, colour=grp)) +
scale_y_log10(limits=c(0.1, 100.0), breaks=c(0.1, 1.0, 10.0, 100.0)) +
theme_minimal() +
scale_x_continuous(
limits = c(1841, 2000),
breaks=seq(from = 1841, to = 1991, by = 10)
) +
labs(y="Mortality/1000/year", x="Year") +
scale_colour_discrete(
guide=guide_legend(title=NULL)
) +
theme(
axis.text.x = element_text(angle= 90),
legend.justification = c(0, 1),
legend.position=c(0.7,0.9)
)
ggsave(filename ="figures/original/figure3_mortper1000peryear.png",
width=15, height=15, units="cm", dpi = 300
)
# Replicate tables of mortality ratios by birth year ----------------------
dta_selection2 <- dta_selection
dta_selection2 <- dta_selection2 %>%
mutate(birth_year = year - age) %>%
filter(birth_year %in% seq(from = 1850, to = 1995, by = 5)) %>%
filter(age < 100)
age_groups <- c(
"0-4", "5-9",
"10-14", "15-19",
"20-24", "25-29",
"30-34", "35-39",
"40-44", "45-49",
"50-54", "55-59",
"60-64", "65-69",
"70-74", "75-79",
"80-84", "85-89",
"90-94", "95-99"
)
age_lookup <- data.frame(
age = 0:99,
age_group = age_groups[1 + (0:99 %/% 5)]
)
dta_selection2$age_group <- mapvalues(
dta_selection2$age,
from = age_lookup$age,
to = as.character(age_lookup$age_group)
)
dta_selection2$age_group <- factor(
dta_selection2$age_group,
levels = age_groups
)
rate_table <- dta_selection2 %>%
group_by(birth_year, age_group, sex) %>%
summarise(
population_count = sum(population_count),
death_count = sum(death_count)
) %>%
mutate(mortality_rate = death_count / population_count) %>%
select(-population_count, -death_count) %>%
spread(key=sex, value = mortality_rate) %>%
mutate(rate_ratio = male / female) %>%
select(-female, -male) %>%
spread(key=age_group, value = rate_ratio)
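The `age_group` lookup above indexes the label vector by integer division of age by five; the mapping can be verified in isolation with base R:

```r
# Rebuild the five-year labels and check the integer-division lookup
age_groups <- paste(seq(0, 95, by = 5), seq(4, 99, by = 5), sep = "-")
age_group_of <- age_groups[1 + (0:99 %/% 5)]  # index i holds the group for age i - 1
stopifnot(age_group_of[1]  == "0-4")   # age 0
stopifnot(age_group_of[6]  == "5-9")   # age 5
stopifnot(age_group_of[83] == "80-84") # age 82
```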
local_comparison_data <- sagemaker::abalone[1:100, ] %>%
magrittr::set_names(paste0("X", 1:11))
test_that("reading from s3 works", {
s3_read_path <- s3(s3_bucket(), "tests", "read-write-test.csv")
expect_equal(read_s3(s3_read_path), local_comparison_data)
})
test_that("writing to s3 works", {
s3_write_path <- s3(s3_bucket(), "tests", "write-read-test.csv")
write_s3(local_comparison_data, s3_write_path)
expect_equal(read_s3(s3_write_path), local_comparison_data)
})
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/SurveyAnalysisLib.r
\name{summarizeLicPop}
\alias{summarizeLicPop}
\title{Summarize Licence Population}
\usage{
summarizeLicPop(elic_data, vendor_sales_data, survey_dates)
}
\arguments{
\item{elic_data}{Electronic licences data}
\item{vendor_sales_data}{Vendor sales of licences}
\item{survey_dates}{Survey date range}
}
\value{
A data frame summarized by licence categories and by electronic/vendor sales
}
\description{
Summarize the licence population totals using the E-Licence and Vendor Sales Data
}
# Create a path to the raw data
wd_path = "C:/Users/Nardus/Documents/coursera/Data Science/Course 4 - Exploratory Data Analysis/Week 1/Assignment data"
setwd(wd_path)
# Import data
raw_set = read.table("household_power_consumption.txt", sep = ";", header = TRUE, na.strings = "?") # "?" marks missing values in this file
library(magrittr)
raw_set = raw_set %>%
dplyr::mutate(Date = lubridate::dmy(Date)) %>%
dplyr::filter(Date >= "2007-02-01" & Date <= "2007-02-02")
final_set = raw_set
final_set$date_time = lubridate::ymd_hms(paste(final_set$Date, final_set$Time, sep = " "))
final_set = dplyr::mutate(final_set
,Global_active_power = as.numeric(as.character(Global_active_power))
,Global_reactive_power = as.numeric(as.character(Global_reactive_power))
,Voltage = as.numeric(as.character(Voltage))
,Global_intensity = as.numeric(as.character(Global_intensity))
,Sub_metering_1 = as.numeric(as.character(Sub_metering_1))
,Sub_metering_2 = as.numeric(as.character(Sub_metering_2))
,Sub_metering_3 = as.numeric(as.character(Sub_metering_3))
)
### Plot 3 ###
png(filename = "plot3.png", width = 480, height = 480)
plot(final_set$date_time, final_set$Sub_metering_1
,type = "l"
,ylab = "Energy sub metering"
,xlab = "")
lines(final_set$date_time, final_set$Sub_metering_2, col = "red")
lines(final_set$date_time, final_set$Sub_metering_3, col = "blue")
legend("topright"
,lty = c(1,1,1)
,col = c("black", "red", "blue")
,legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3")
)
dev.off()
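The script builds a timestamp by pasting the `Date` and `Time` columns and parsing with lubridate; the same merge works in base R, shown here on hypothetical values:

```r
# Combine separate date and time strings into a POSIXct timestamp (base R)
d <- "2007-02-01"
t <- "18:30:00"
stamp <- as.POSIXct(paste(d, t), format = "%Y-%m-%d %H:%M:%S", tz = "UTC")
format(stamp, "%H:%M")  # "18:30"
```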
# Assuming the input is a stored binomial GLM object
Concordance = function(GLM.binomial) {
outcome_and_fitted_col = cbind(GLM.binomial$y, GLM.binomial$fitted.values)
# get a subset of outcomes where the event actually happened
ones = outcome_and_fitted_col[outcome_and_fitted_col[,1] == 1,]
# get a subset of outcomes where the event didn't actually happen
zeros = outcome_and_fitted_col[outcome_and_fitted_col[,1] == 0,]
# Equate the length of the event and non-event tables
if (length(ones[,1])>length(zeros[,1])) {ones = ones[1:length(zeros[,1]),]}
else {zeros = zeros[1:length(ones[,1]),]}
# Following will be c(ones_outcome, ones_fitted, zeros_outcome, zeros_fitted)
ones_and_zeros = data.frame(ones, zeros)
# initiate columns to store concordant, discordant, and tie pair evaluations
conc = rep(NA, length(ones_and_zeros[,1]))
disc = rep(NA, length(ones_and_zeros[,1]))
ties = rep(NA, length(ones_and_zeros[,1]))
for (i in 1:length(ones_and_zeros[,1])) {
# This tests for concordance
if (ones_and_zeros[i,2] > ones_and_zeros[i,4])
{conc[i] = 1
disc[i] = 0
ties[i] = 0}
# This tests for a tie
else if (ones_and_zeros[i,2] == ones_and_zeros[i,4])
{
conc[i] = 0
disc[i] = 0
ties[i] = 1
}
# This should catch discordant pairs.
else if (ones_and_zeros[i,2] < ones_and_zeros[i,4])
{
conc[i] = 0
disc[i] = 1
ties[i] = 0
}
}
# Here we save the various rates
conc_rate = mean(conc, na.rm=TRUE)
disc_rate = mean(disc, na.rm=TRUE)
tie_rate = mean(ties, na.rm=TRUE)
Somers_D<-conc_rate - disc_rate
gamma<- (conc_rate - disc_rate)/(conc_rate + disc_rate)
#k_tau_a<-2*(sum(conc)-sum(disc))/(N*(N-1))
return(list(concordance=conc_rate, num_concordant=sum(conc), discordance=disc_rate, num_discordant=sum(disc), tie_rate=tie_rate,num_tied=sum(ties),
somers_D=Somers_D, Gamma=gamma))
}
#Concordance(m4)
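`Concordance()` above trims the event and non-event tables to equal length before pairing row-by-row. For small data, the full all-pairs concordance (the usual c-statistic) can be computed directly; a self-contained sketch on a toy logistic fit, for comparison only:

```r
# All-pairs concordance (c-statistic) on a toy logistic fit
set.seed(1)
x <- rnorm(40)
y <- rbinom(40, 1, plogis(x))
fit <- glm(y ~ x, family = binomial)
p1 <- fit$fitted.values[fit$y == 1]  # fitted values where the event happened
p0 <- fit$fitted.values[fit$y == 0]  # fitted values where it did not
diffs <- outer(p1, p0, "-")          # every event/non-event pair
c_stat <- mean(diffs > 0) + 0.5 * mean(diffs == 0)
```

Unlike the row-by-row version, this evaluates every one of the `length(p1) * length(p0)` pairs rather than a truncated subset.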
# Script for New Brunswick PD dispatch logs
#setwd("~/911 Data/Original")
library(tidyverse)
library(leaflet)
library(leaflet.extras)
library(sqldf)
library(lubridate)
# Read in the data files
nb_calls <- read_csv("geocode2.csv")
# Combine into one dataframe
combined_calls <- bind_rows(march_calls, april_calls, may_calls)
# Parse the dates and times of the sent and received columns
combined_calls$sent <- mdy_hms(combined_calls$sent)
combined_calls$received <- mdy_hms(combined_calls$received)
# Subset only the motor vehicle stop calls
sql_query <- "SELECT * FROM nb_calls WHERE description = 'MOTOR VEHICLE STOP'"
mv_stops <- sqldf(sql_query)
# Create a map of all motor vehicle stops and a separate heatmap
mv_stop_heatmap <- leaflet(data = mv_stops) %>% addTiles() %>%
addHeatmap(lng=~lon, lat=~lat,
blur = 20, max = 0.05, radius = 15 )
mvstop_map <- leaflet(data = mv_stops) %>% addTiles() %>%
addMarkers(~lon.x, ~lat.x, popup = ~as.character(description), label = ~as.character(description))
# Create a heatmap of all calls for service
calls_heatmap <- leaflet(data = combined_calls) %>% addTiles() %>%
addHeatmap(lng=~lon, lat=~lat,
blur = 20, max = 0.05, radius = 15 )
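The `sqldf` query above is equivalent to a plain data-frame filter; a base-R check on toy data (column names here are hypothetical):

```r
# sqldf-free equivalent of SELECT * ... WHERE description = 'MOTOR VEHICLE STOP'
toy <- data.frame(description = c("MOTOR VEHICLE STOP", "FIRE", "MOTOR VEHICLE STOP"),
                  id = 1:3)
mv <- toy[toy$description == "MOTOR VEHICLE STOP", ]
nrow(mv)  # 2
```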
# Purpose : Speeding up ordinary kriging (e.g. of the regression residuals);
# Maintainer : Tomislav Hengl (tom.hengl@wur.nl);
# Contributions : ;
# Status : pre-alpha
# Note : this function is ONLY useful for highly clustered point data sets;
spline.krige <- function(formula, locations, newdata, newlocs=NULL, model, te=as.vector(newdata@bbox), file.name, silent=FALSE, t_cellsize=newdata@grid@cellsize[1], optN=20, quant.nndist=.5, nmax=30, predictOnly=FALSE, resample=TRUE, saga.env, saga.lib=c("grid_spline","grid_tools"), saga.module=c(4,0), ...){
if(!inherits(locations, "SpatialPointsDataFrame")){
stop("Object 'locations' of class 'SpatialPointsDataFrame' expected")
}
if(!inherits(newdata, "SpatialPixelsDataFrame")){
stop("Object 'newdata' of class 'SpatialPixelsDataFrame' expected")
}
if(is.null(newlocs)){
newlocs <- resample.grid(locations, newdata, silent=silent, t_cellsize=t_cellsize, quant.nndist=quant.nndist)$newlocs
}
if(missing(saga.env)){
saga.env <- rsaga.env()
}
s_te <- as.vector(newdata@bbox)
if(silent==FALSE){
message("Predicting at variable grid...")
}
if(missing(formula)){
formula <- as.formula(paste(names(locations)[1], 1, sep="~"))
}
class(model) <- c("variogramModel", "data.frame")
tvar <- all.vars(formula)[1]
ok <- krige(formula, locations=locations[!is.na(locations@data[,tvar]),], newdata=newlocs, model=model, nmax=nmax, debug.level=-1, ...)
## write points to a shape file:
tmp <- list(NULL)
tmp.out <- list(NULL)
if(predictOnly==TRUE){
ok <- ok["var1.pred"]
}
for(k in 1:ncol(ok@data)){
tmp[[k]] <- set.file.extension(tempfile(), ".shp")
writeOGR(ok[k], names(ok)[k], dsn=tmp[[k]], "ESRI Shapefile")
if(missing(file.name)){
tmp.out[[k]] <- tempfile()
} else {
tmp.out[[k]] <- paste(file.name, k, sep="_")
}
## point to grid (spline interpolation):
suppressWarnings( rsaga.geoprocessor(lib=saga.lib[1], module=saga.module[1], param=list(SHAPES=tmp[[k]], FIELD=0, TARGET=0, METHOD=1, LEVEL_MAX=14, USER_XMIN=te[1]+t_cellsize/2, USER_XMAX=te[3]-t_cellsize/2, USER_YMIN=te[2]+t_cellsize/2, USER_YMAX=te[4]-t_cellsize/2, USER_SIZE=t_cellsize, USER_GRID=set.file.extension(tmp.out[[k]], ".sgrd")), show.output.on.console = FALSE, env=saga.env) )
if(resample==TRUE){
if(!all(te==s_te)|t_cellsize<newdata@grid@cellsize[1]){
if(silent==FALSE){ message(paste("Resampling band", k, "to the target resolution and extent...")) }
if(t_cellsize<newdata@grid@cellsize[1]){
suppressWarnings( rsaga.geoprocessor(lib=saga.lib[2], module=saga.module[2], param=list(INPUT=set.file.extension(tmp.out[[k]], ".sgrd"), TARGET=0, SCALE_DOWN_METHOD=4, USER_XMIN=s_te[1]+t_cellsize/2, USER_XMAX=s_te[3]-t_cellsize/2, USER_YMIN=s_te[2]+t_cellsize/2, USER_YMAX=s_te[4]-t_cellsize/2, USER_SIZE=t_cellsize, USER_GRID=set.file.extension(tmp.out[[k]], ".sgrd")), show.output.on.console=FALSE, env=saga.env) )
} else {
## upscale:
suppressWarnings( rsaga.geoprocessor(lib=saga.lib[2], module=saga.module[2], param=list(INPUT=set.file.extension(tmp.out[[k]], ".sgrd"), TARGET=0, SCALE_DOWN_METHOD=0, SCALE_UP_METHOD=0, USER_XMIN=s_te[1]+t_cellsize/2, USER_XMAX=s_te[3]-t_cellsize/2, USER_YMIN=s_te[2]+t_cellsize/2, USER_YMAX=s_te[4]-t_cellsize/2, USER_SIZE=t_cellsize, USER_GRID=set.file.extension(tmp.out[[k]], ".sgrd")), show.output.on.console=FALSE, env=saga.env) )
}
}
}
if(missing(file.name)){
if(k==1){
out <- readGDAL(set.file.extension(tmp.out[[k]], ".sdat"), silent=TRUE)
proj4string(out) <- newdata@proj4string
names(out) <- names(ok)[k]
} else {
out@data[,names(ok)[k]] <- readGDAL(set.file.extension(tmp.out[[k]], ".sdat"), silent=TRUE)$band1
}
unlink(paste0(tmp.out[[k]],".*"))
} else {
if(silent==FALSE){ message(paste0("Created output SAGA GIS grid: ", tmp.out[[k]], ".sdat")) }
}
}
if(missing(file.name)){
return(out)
}
}
## resample using variable sampling intensity:
resample.grid <- function(locations, newdata, silent=FALSE, n.sigma, t_cellsize, optN, quant.nndist=.5, breaks.d=NULL){
if(silent==FALSE){
message("Deriving density map...")
}
if(requireNamespace("spatstat", quietly = TRUE)&requireNamespace("maptools", quietly = TRUE)){
## derive density map:
W <- as.matrix(newdata[1])
W <- ifelse(is.na(W), FALSE, TRUE)
suppressWarnings( locs.ppp <- spatstat::ppp(x=locations@coords[,1], y=locations@coords[,2], xrange=newdata@bbox[1,], yrange=newdata@bbox[2,], mask=t(W)[ncol(W):1,]) )
if(missing(n.sigma)){
dist.locs <- spatstat::nndist(locs.ppp)
n.sigma <- quantile(dist.locs, quant.nndist)
}
if(n.sigma < 2*t_cellsize){
warning(paste0("Estimated 'Sigma' too small. Using 2 * newdata cellsize."))
n.sigma = 2*newdata@grid@cellsize[1]
}
if(n.sigma > sqrt(length(newdata)*newdata@grid@cellsize[1]*newdata@grid@cellsize[2]/length(locations))){
warning(paste0("'Sigma' set at ", signif(n.sigma, 3), ". This is possibly an unclustered point sample. See '?resample.grid' for more information."))
}
dmap <- maptools::as.SpatialGridDataFrame.im(density(locs.ppp, sigma=n.sigma))
dmap.max <- max(dmap@data[,1], na.rm=TRUE)
dmap@data[,1] <- signif(dmap@data[,1]/dmap.max, 3)
if(is.null(breaks.d)){
## TH: not sure if here is better to use quantiles or a regular split?
breaks.d <- expm1(seq(0, 3, by=3/10))/expm1(3)
#breaks.d <- quantile(dmap@data[,1], seq(0, 1, by=1/5), na.rm=TRUE)
}
if(sd(dmap@data[,1], na.rm=TRUE)==0){ stop("Density map shows no variance. See '?resample.grid' for more information.") }
dmap$strata <- cut(x=dmap@data[,1], breaks=breaks.d, include.lowest=TRUE, labels=paste0("L", 1:(length(breaks.d)-1)))
proj4string(dmap) = locations@proj4string
## regular sampling proportional to the sampling density (rule of thumb: one sampling point can be used to predict 'optN' grids):
newlocs <- list(NULL)
for(i in 1:length(levels(dmap$strata))){
im <- dmap[dmap$strata==paste0("L",i),"strata"]
im <- as(im, "SpatialPixelsDataFrame")
if(i==length(levels(dmap$strata))){
Ns <- round(sum(!is.na(im$strata)) * newdata@grid@cellsize[1]/t_cellsize)
} else {
ov <- over(locations, im)
if(sum(!is.na(ov$strata))==0){
Ns <- optN
} else {
Ns <- round(sum(!is.na(ov$strata)) * optN)
}
}
newlocs[[i]] <- sp::spsample(im, type="regular", n=Ns)
}
newlocs <- do.call(rbind, newlocs)
if(silent==FALSE){
message(paste("Generated:", length(newlocs), "prediction locations."))
}
return(list(newlocs=newlocs, density=dmap))
}
}
## end of script;
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/documentation.R
\docType{data}
\name{phone}
\alias{phone}
\title{Phone}
\format{A \code{\link{data.frame}}.}
\source{
Tim Bock.
}
\usage{
phone
}
\description{
A survey about mobile phones, conducted from a sample of 725 friends and family of University of Sydney students, c. 2001.
}
\keyword{datasets}
/man/phone.Rd | no_license | gkalnytskyi/flipExampleData
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/bcrypt.R
\name{bcrypt}
\alias{bcrypt}
\alias{gensalt}
\alias{hashpw}
\alias{checkpw}
\title{Bcrypt password hashing}
\usage{
gensalt(log_rounds = 12)
hashpw(password, salt = gensalt())
checkpw(password, hash)
}
\arguments{
\item{log_rounds}{integer between 4 and 31 that defines the complexity of
the hashing, increasing the cost as \code{2^log_rounds}.}
\item{password}{the message (password) to encrypt}
\item{salt}{a salt generated with \code{gensalt}.}
\item{hash}{the previously generated bcrypt hash to verify}
}
\description{
Bcrypt is used for secure password hashing. The main difference with
regular digest algorithms such as MD5 or SHA256 is that the bcrypt
algorithm is specifically designed to be CPU intensive in order to
protect against brute force attacks. The exact complexity of the
algorithm is configurable via the \code{log_rounds} parameter. The
interface is fully compatible with the Python one.
}
\details{
The \code{hashpw} function calculates a hash from a password using
a random salt. Validating the hash is done by rehashing the password
using the hash as a salt. The \code{checkpw} function is a simple
wrapper that does exactly this.
\code{gensalt} generates a random text salt for use with \code{hashpw}.
The first few characters in the salt string hold the bcrypt version number
and value for \code{log_rounds}. The remainder stores 16 bytes of base64
encoded randomness for seeding the hashing algorithm.
}
\examples{
# Secret message as a string
passwd <- "supersecret"
# Create the hash
hash <- hashpw(passwd)
hash
# To validate the hash
identical(hash, hashpw(passwd, hash))
# Or use the wrapper
checkpw(passwd, hash)
# Use varying complexity:
hash11 <- hashpw(passwd, gensalt(11))
hash12 <- hashpw(passwd, gensalt(12))
hash13 <- hashpw(passwd, gensalt(13))
# Takes longer to verify (or crack)
system.time(checkpw(passwd, hash11))
system.time(checkpw(passwd, hash12))
system.time(checkpw(passwd, hash13))
}
/man/bcrypt.Rd | no_license | cran/bcrypt
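As the \details section above notes, a bcrypt hash embeds its own salt, so verification amounts to rehashing the candidate password with the stored hash as the salt. A minimal sketch of what `checkpw` does, built only from the `hashpw` interface documented above (`checkpw_sketch` is a hypothetical name, not part of the package):

```r
library(bcrypt)

# Verify a password against a stored bcrypt hash by rehashing:
# the stored hash doubles as the salt, so the correct password
# reproduces the hash exactly.
checkpw_sketch <- function(password, hash) {
  identical(hash, hashpw(password, hash))
}
```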
#' Determine Fmsy for a given operating model
#'
#' Runs an operating model over a range of fishing mortality levels to
#' determine the profile of F values from which Fmsy can be determined.
#'
#' @param om_in A directory for an \pkg{ss3sim} operating model.
#' @param results_out A directory to place the results
#' @param start Lower fishing mortality level
#' @param end Upper fishing mortality level
#' @param by_val Interval in which you wish to increment the fishing mortality
#' level
#' @param ss_mode SS3 binary option. One of \code{"safe"} for the safe version
#' of SS3 or \code{"optimized"} for the optimized version of SS3. The relevant
#' binary needs to be in your system's path. See the vignette
#' \code{vignette("ss3sim-vignette", package = "ss3sim")} for details and
#' links to the binary files. Defaults to safe mode.
#' @importFrom r4ss SS_readdat SS_writedat
#' @return Creates a plot and a table with catches and F values (see the
#' \code{results_out} folder). Also invisibly returns the Fmsy table as a data
#' frame.
#' @export
#' @details This function extracts the number of years from the model dat
#' file and then runs at a constant level of fishing for each year,
#' extracting the catch in the last year. This assumes the length of the
#' model is long enough to reach an equilibrium catch. The user is
#' responsible for ensuring this fact.
#' @examples
#' \dontrun{
#' d <- system.file("extdata", package = "ss3sim")
#' omfolder <- paste0(d, "/models/cod-om")
#'
#'
#' fmsy.val <- profile_fmsy(om_in = omfolder, results_out = "fmsy",
#' start = 0.1, end = 0.2, by_val = 0.05)
#' }
profile_fmsy <- function(om_in, results_out,
start = 0.00, end = 1.5, by_val = 0.01, ss_mode = c("safe", "optimized")) {
# overM needs to be the value
# you want to add or subtract from the trueM
# or the case file you want to get the value
# that you add to M in the last year, i.e. "M1"
# used for + trueM
origWD <- getwd()
on.exit(expr = setwd(origWD), add = FALSE)
if(ss_mode[1] == "optimized") ss_mode <- "opt"
ss_bin <- paste0("ss3_24o_", ss_mode[1])
ss_bin <- get_bin(ss_bin)
fVector <- seq(start, end, by_val)
fEqCatch <- NULL
omModel <- om_in
if(!file.exists(omModel)) {
stop("OM folder does not exist")
}
newWD <- results_out
dir.create(newWD, showWarnings = FALSE)
setwd(newWD)
file.copy(dir(omModel, full.names = TRUE), list.files(omModel))
## read in dat file to get years of model
datFile <- SS_readdat(file='ss3.dat', verbose=FALSE)
simlength <- datFile$endyr-datFile$styr+1
if(!is.numeric(simlength) | simlength < 1)
stop(paste("Calculated length of model from dat file was", simlength))
## remove recdevs from par
parFile <- readLines("ss3.par", warn = FALSE)
recDevLine <- grep("# recdev1", parFile) + 1
sigmaRLine <- grep("# SR_parm[3]", parFile, fixed = TRUE) + 1
parFile[recDevLine] <- paste(rep(0, simlength), collapse = ", ")
parFile[sigmaRLine] <- 0.001
writeLines(parFile, "ss3.par")
for(i in seq(fVector)) {
change_f(years = 1:simlength, years_alter = 1:simlength,
fvals = rep(fVector[i], simlength),
par_file_in = "ss3.par", par_file_out = "ss3.par" )
system(paste(ss_bin, "-nohess"), show.output.on.console = FALSE,
ignore.stdout=TRUE)
temp_feq <- SS_readdat("data.ss_new", verbose = FALSE,
section = 2)$catch$Fishery[simlength]
if(is.null(temp_feq))
{
fEqCatch[i] <- SS_readdat("data.ss_new", verbose = FALSE,
section = 2)$catch$fishery1[simlength]
} else {
fEqCatch[i] <- temp_feq
}
}
pdf("Fmsy.pdf")
par(mar = c(4, 6, 4, 4))
plot(fVector, fEqCatch, las = 1,
xlab = "Fishing mortality rate", ylab = "")
mtext(side = 2, text = "Yield at equilibrium", line = 4)
maxFVal <- which.max(fEqCatch)
Fmsy <- fVector[maxFVal]
abline(v = Fmsy)
mtext(text = paste(om_in, "\n",
"Fmsy \n", Fmsy, "\n",
"Catch at Fmsy \n", max(fEqCatch)),
side = 1, line = -2, las = 1)
dev.off()
FmsyTable <- data.frame(fValues = fVector,
eqCatch = fEqCatch)
write.table(FmsyTable, "Fmsy.txt")
invisible(FmsyTable)
}
/R/profile_fmsy.r | no_license | heavywatal/ss3sim
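The Fmsy profile above is a plain grid search: run the model at each constant F, record the equilibrium catch, and take the F that maximizes it. The selection step in isolation, with a toy yield curve standing in for the SS3 runs (the curve itself is illustrative only):

```r
fVector <- seq(0, 1.5, by = 0.01)
# Toy equilibrium-yield curve; in profile_fmsy() these values come
# from reading the final-year catch of each SS3 run.
fEqCatch <- fVector * exp(-2 * fVector)
# Fmsy is the grid point with the highest equilibrium catch
Fmsy <- fVector[which.max(fEqCatch)]
Fmsy  # maximized at F = 0.5 for this toy curve
```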
library(shiny)
ui <- fluidPage(
headerPanel("Example reactive"),
mainPanel(
# action buttons
actionButton("button1","Button 1"),
actionButton("button2","Button 2")
)
)
server <- function(input, output) {
# observe button 1 press.
observe({
input$button1
# input$button2
showModal(modalDialog(
title = "Button pressed",
"You pressed one of the buttons!"
))
})
}
shinyApp(ui = ui, server = server)
/sh_practice_observe.R | no_license | plus4u/Shiny
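The `observe()` block above re-runs whenever any reactive value it reads changes (and once at startup); with `input$button2` commented out it currently reacts only to button 1. To handle each button separately, `observeEvent()` is the idiomatic alternative — a sketch of the server function only, not part of the original app:

```r
library(shiny)

server <- function(input, output) {
  # observeEvent() isolates its body, reacting only to its first
  # argument, so each handler fires for exactly one button
  observeEvent(input$button1, {
    showModal(modalDialog(title = "Button pressed", "You pressed button 1!"))
  })
  observeEvent(input$button2, {
    showModal(modalDialog(title = "Button pressed", "You pressed button 2!"))
  })
}
```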
library(dashBootstrapComponents)
library(dashHtmlComponents)
card <- dbcCard(
dbcListGroup(
list(
dbcListGroupItem("Item 1"),
dbcListGroupItem("Item 2"),
dbcListGroupItem("Item 3")
),
flush = TRUE
),
style = list(width = "18rem")
)
/docs/components/card/list_group.R | no_license | AnnMarieW/R-dash-bootstrap-components
library(shiny)
library(ggplot2)
library(shinydashboard)
library(e1071)
pal <- c('lightgray', 'darkblue', 'red')
def <- par(bty="l",
las=1,
bg="#F0F0F0",
col.axis="#434343",
col.main="#343434",
cex=0.85,
cex.axis=0.8)
makePlot <- function(dat, xvar, yvar) {
par(def)
plot(0,
xlab="Judges' Assessment",
ylab="Overall Score",
xlim=c(1.5, 5.0),
ylim=c(20, 100),
type="n")
grid(NULL,NULL, col="#DEDEDE",lty="solid", lwd=0.9)
points(data()$judges_assess[data()$group=="Others"],
data()$overall[data()$group=="Others"],
pch=16,
col=pal[1])
points(data()$judges_assess[data()$group=="ND (Reported)"],
data()$overall[data()$group=="ND (Reported)"],
pch=16,
col=pal[2])
points(data()$judges_assess[data()$group=="ND (Predicted)"],
data()$overall[data()$group=="ND (Predicted)"],
pch=16,
col=pal[3])
abline(lm(data()$overall[data()$group %in% c("Others", "ND (Reported)")]~
data()$judges_assess[data()$group %in% c("Others", "ND (Reported)")]),
col="blue")
}
runModel <- function(dat) {
mod <- svm(overall ~ zgpa25 +
zgpa75 +
zlsat25 +
zlsat75 +
zpct_emp_grad +
zpct_emp_9 +
zaccept +
zsf_ratio +
zbar_pass_pct,
data=dat,
kernel='linear',
gamma=0.05,
cost=1,
epsilon=0.2)
return(mod)
}
getData <- function(selYear) {
dat <- read.table(file='../data/LawSchools2014-15.csv',
header=TRUE,
sep=",",
quote="\"",
skip=0,
row.names=NULL)
# Impute!
empfit <- lm(pct_emp_grad ~ pct_emp_9 + accept + lsat25 + lsat75, data=dat[!is.na(dat$pct_emp_grad),])
dat$pct_emp_grad[is.na(dat$pct_emp_grad)] <- predict(empfit, dat[is.na(dat$pct_emp_grad),])
# dat$over_under <- dat$bar_pass_pct - dat$pass_rate
dat$group <- as.integer(ifelse(!dat$school=='University of Notre Dame',1,2))
dat$label <- ifelse(dat$school=='University of Notre Dame',dat$rank,'')
dat$zpeer_assess <- as.numeric(scale(dat$peer_assess, center = TRUE, scale = TRUE))
dat$zjudges_assess <- as.numeric(scale(dat$judges_assess, center = TRUE, scale = TRUE))
dat$zgpa25 <- as.numeric(scale(dat$gpa25, center = TRUE, scale = TRUE))
dat$zgpa75 <- as.numeric(scale(dat$gpa75, center = TRUE, scale = TRUE))
dat$zlsat25 <- as.numeric(scale(dat$lsat25, center = TRUE, scale = TRUE))
dat$zlsat75 <- as.numeric(scale(dat$lsat75, center = TRUE, scale = TRUE))
dat$zpct_emp_grad <- as.numeric(scale(dat$pct_emp_grad, center = TRUE, scale = TRUE))
dat$zpct_emp_9 <- as.numeric(scale(dat$pct_emp_9, center = TRUE, scale = TRUE))
dat$zaccept <- as.numeric(scale(dat$accept, center = TRUE, scale = TRUE))
dat$zsf_ratio <- as.numeric(scale(dat$sf_ratio, center = TRUE, scale = TRUE))
dat$zbar_pass_pct <- as.numeric(scale(dat$bar_pass_pct, center = TRUE, scale = TRUE))
dat <- subset(dat,
year==selYear,
select=c(overall,
peer_assess,
judges_assess,
gpa25,
gpa75,
lsat25,
lsat75,
pct_emp_grad,
pct_emp_9,
accept,
sf_ratio,
bar_pass_pct,
zpeer_assess,
zjudges_assess,
zgpa25,
zgpa75,
zlsat25,
zlsat75,
zpct_emp_grad,
zpct_emp_9,
zaccept,
zsf_ratio,
zbar_pass_pct,
label,
group,
school,
rank))
return(dat)
}
makeRow <- function(peer_assess,
judges_assess,
gpa25,
gpa75,
lsat25,
lsat75,
pct_emp_grad,
pct_emp_9,
accept,
sf_ratio,
bar_pass_pct,
dat) {
new.row <- data.frame(overall = NA,
peer_assess = peer_assess,
judges_assess = judges_assess,
gpa25 = gpa25,
gpa75 = gpa75,
lsat25 = lsat25,
lsat75 = lsat75,
pct_emp_grad = pct_emp_grad,
pct_emp_9 = pct_emp_9,
accept = accept,
sf_ratio = sf_ratio,
bar_pass_pct = bar_pass_pct)
new.row$zpeer_assess = as.numeric((new.row$peer_assess - mean(dat$peer_assess)) / sd(dat$peer_assess))
new.row$zjudges_assess = as.numeric((new.row$judges_assess - mean(dat$judges_assess)) / sd(dat$judges_assess))
new.row$zgpa25 = as.numeric((new.row$gpa25 - mean(dat$gpa25)) / sd(dat$gpa25))
new.row$zgpa75 = as.numeric((new.row$gpa75 - mean(dat$gpa75)) / sd(dat$gpa75))
new.row$zlsat25 = as.numeric((new.row$lsat25 - mean(dat$lsat25)) / sd(dat$lsat25))
new.row$zlsat75 = as.numeric((new.row$lsat75 - mean(dat$lsat75)) / sd(dat$lsat75))
new.row$zpct_emp_grad = as.numeric((new.row$pct_emp_grad - mean(dat$pct_emp_grad)) / sd(dat$pct_emp_grad))
new.row$zpct_emp_9 = as.numeric((new.row$pct_emp_9 - mean(dat$pct_emp_9)) / sd(dat$pct_emp_9))
new.row$zaccept = as.numeric((new.row$accept - mean(dat$accept)) / sd(dat$accept))
new.row$zsf_ratio = as.numeric((new.row$sf_ratio - mean(dat$sf_ratio)) / sd(dat$sf_ratio))
new.row$zbar_pass_pct = as.numeric((new.row$bar_pass_pct - mean(dat$bar_pass_pct)) / sd(dat$bar_pass_pct))
new.row$label <- NA
new.row$group <- 3
new.row$school <- 'University of Notre Dame (Pred.)'
new.row$rank <- NA
return(new.row)
}
original.data <- getData(2016)
model <- runModel(original.data)
base.case <- subset(original.data,
school=='University of Notre Dame')
base.data <- subset(original.data,
school!='University of Notre Dame')
updatePrediction <- function(new.row) {
new.row$overall <- predict(model, new.row[,15:23])
new.row$group <- 3L
new.set <- rbind(base.data, new.row)
new.set$rank <- rank(-new.set$overall, ties.method = 'first')
new.set$label[new.set$group==3] <- new.set$rank[new.set$group==3]
new.set <- rbind(new.set, base.case)
new.set$group <- factor(new.set$group,
levels=c(1,2,3),
labels=c('Others',
'ND (Reported)',
'ND (Predicted)'))
return(new.set)
}
/usn-predictor/global.R | permissive | luciahao/neair-predictives
# Plot3: with legend
library(dplyr)
#x1 <- household_power_consumption
#names(x1)
#x1 <- household_power_consumption %>%
#mutate(Global_active_power = as.numeric(Global_active_power))
x1 <- household_power_consumption %>%
mutate (Date = as.Date(Date, format = "%d/%m/%Y"))%>%
mutate(Global_active_power = as.numeric(Global_active_power)) %>%
mutate(Global_reactive_power = as.numeric(Global_reactive_power))%>%
mutate(Voltage = as.numeric(Voltage))%>%
mutate(Sub_metering_1 = as.numeric(Sub_metering_1))%>%
mutate(Sub_metering_2 = as.numeric(Sub_metering_2))%>%
mutate(Sub_metering_3 = as.numeric(Sub_metering_3))%>%
mutate( Global_intensity = as.numeric( Global_intensity))
#changing Class to Date:
#x1$Date <- as.Date(x1$Date, "%d/%m/%Y")
#subsetting 2 days:
power_feb <- subset(x1, Date == as.Date(as.character("2007-02-01"))|Date == as.Date(as.character("2007-02-02")) )
#Creating a DateTime column
power_feb <- mutate(power_feb, DateTime = as.POSIXct( strptime( paste(Date, Time), format = "%Y-%m-%d %H:%M:%S")))
plot(power_feb$Sub_metering_1~power_feb$DateTime, type = "l", ylab = "Energy sub metering",xlab = "")
lines(power_feb$Sub_metering_2~power_feb$DateTime, type = "l", col = "red")
lines( power_feb$Sub_metering_3~power_feb$DateTime,type = "l",col = "blue")
legend("topright", legend=c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"), col=c("black", "red", "blue"), lwd=c(1,1,1))
dev.copy(png, file="Plot3.png", height=480, width=480)
dev.off()
/Plot3.R | no_license | sgargary/Plot_Project-1_Electric-power-consumption
## code to prepare `ACS_CC` and `ACS_CP` datasets
rm(list=ls(all=T))
library(readr)
seed = 345346
set.seed(seed)
ACS_case = read.table("data-raw/ACS_cases.csv",header=T,sep=",")
colnames(ACS_case) = c("age", "married", "ind", "baplus", "topincome")
n_case = nrow(ACS_case)
ACS_control = read.table("data-raw/ACS_controls.csv",header=T,sep=",")
colnames(ACS_control) = c("age", "married", "ind", "baplus", "topincome")
n_control = nrow(ACS_control)
# Part 1: Construction of Case-Control Sample
# Random sampling from controls without replacement
rs_i = sample.int(n_control,n_case)
ACS_control_rs = ACS_control[rs_i,]
ACS_CC = rbind(ACS_case,ACS_control_rs)
ACS_CC = ACS_CC[,-2] # drop the marital status variable
n = nrow(ACS_CC)
ACS_CC = ACS_CC[sample.int(n,n),] # random permutation to shuffle the data
rownames(ACS_CC) = c() # remove row names
write_csv(ACS_CC, "data-raw/ACS_CC.csv")
usethis::use_data(ACS_CC, overwrite = TRUE)
# Part 2: Construction of Case-Population Sample
# Random sampling from all observations without replacement
rs_i = sample.int((n_case+n_control),n_case)
ACS_all = rbind(ACS_case,ACS_control)
ACS_control_rs = ACS_all[rs_i,]
ACS_control_rs[,5] = NA
ACS_CP = rbind(ACS_case,ACS_control_rs)
ACS_CP = ACS_CP[,-2] # drop the marital status variable
n = nrow(ACS_CP)
ACS_CP = ACS_CP[sample.int(n,n),] # random permutation to shuffle the data
rownames(ACS_CP) = c() # remove row names
write_csv(ACS_CP, "data-raw/ACS_CP.csv")
usethis::use_data(ACS_CP, overwrite = TRUE)
# Part 3: Construction of Random Sample
# Keep all observations and shuffle the data
ACS = rbind(ACS_case,ACS_control)
ACS = ACS[,-2] # drop the marital status variable
n = nrow(ACS)
ACS = ACS[sample.int(n,n),] # random permutation to shuffle the data
rownames(ACS) = c() # remove row names
write_csv(ACS, "data-raw/ACS.csv")
usethis::use_data(ACS, overwrite = TRUE)
/data-raw/ACS.R | no_license | lbiagini75/ciccr
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/coor2index.R
\name{idx2chr}
\alias{idx2chr}
\title{get the chromosome from continuous index}
\usage{
idx2chr(idx, chrS)
}
\arguments{
\item{idx}{index position for which chromosome information is reported}
\item{chrS}{the chromosome index, indicating the start position
of each chromosome in the continuous index, derived from chromosome length
information}
}
\value{
returns the chromosome number
}
\description{
get the chromosome from continuous index
}
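% Hypothetical usage sketch; assumes chrS holds cumulative chromosome start
% positions beginning at 0 (derived from chromosome lengths).
\examples{
chrS <- c(0, 100, 300)  # two chromosomes of lengths 100 and 200
idx2chr(150, chrS)      # chromosome containing continuous index 150
}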
path: /man/idx2chr.Rd | license_type: no_license | repo_name: raim/segmenTools | language: R | is_vendor: false | is_generated: true | length_bytes: 536 | extension: rd
|
## version: 1.35
## method: post
## path: /commit
## code: 201
## response: {"Id": "string"}
list(id = "string")
path: /tests/testthat/sample_responses/v1.35/image_commit.R | license_type: no_license | repo_name: cran/stevedore | language: R | is_vendor: false | is_generated: false | length_bytes: 113 | extension: r
|
library(shiny)
library(ggplot2)
library(gridExtra, quietly =T)
shinyUI(fluidPage(
# Application title
titlePanel("Diferenciales de selección"),
# Sidebar with a slider input for the selection differential
sidebarLayout(
sidebarPanel(
sliderInput("dif", label = "diferencial s", min=-2, max=2, value = 0, step = 0.01),
width = 3),
# Show a plot of the generated distribution
mainPanel(
plotOutput("distPlot")
)
)
))
path: /ui.R | license_type: no_license | repo_name: santiagombv/selection.cov | language: R | is_vendor: false | is_generated: false | length_bytes: 467 | extension: r
|
\name{clusterRun}
\alias{clusterRun}
\title{
Submit command-line tools to cluster
}
\description{
Submits non-R command-line software to queueing/scheduling systems of compute clusters using run specifications defined by functions similar to \code{runCommandline}. \code{clusterRun} can be used with most queueing systems since it is based on utilities from the \code{batchtools} package which supports the use of template files (\code{*.tmpl}) for defining the run parameters of the different schedulers. The path to the \code{*.tmpl} file needs to be specified in a conf file provided under the \code{conffile} argument.
}
\usage{
clusterRun(args, FUN = runCommandline, more.args = list(args = args, make_bam = TRUE), conffile = ".batchtools.conf.R", template = "batchtools.slurm.tmpl", Njobs, runid = "01", resourceList)
}
\arguments{
\item{args}{
Object of class \code{SYSargs} or \code{SYSargs2}.
}
\item{FUN}{
Accepts functions such as \code{runCommandline(args, ...)} where the \code{args} argument is mandatory and needs to be of class \code{SYSargs} or \code{SYSargs2}.
}
\item{more.args}{
Object of class \code{list}, which provides the arguments that control the \code{FUN} function.
}
\item{conffile}{
Path to conf file (default location \code{./.batchtools.conf.R}). This file contains in its simplest form just one command, such as this line for the Slurm scheduler: \code{cluster.functions <- makeClusterFunctionsSlurm(template="batchtools.slurm.tmpl")}. For more detailed information visit this page: https://mllg.github.io/batchtools/index.html
}
\item{template}{
The template file for a specific queueing/scheduling system can be downloaded from here:
https://github.com/mllg/batchtools/tree/master/inst/templates. Slurm, PBS/Torque, and Sun Grid Engine (SGE) templates are provided.
}
\item{Njobs}{
Integer defining the number of cluster jobs. For instance, if \code{args} contains 18 command-line jobs and \code{Njobs=9}, then the function will distribute them across 9 cluster jobs, each running 2 command-line jobs. To increase the number of CPU cores used by each process, one can do this under the corresponding argument of the command-line tool, e.g. the \code{-p} argument for Tophat.
}
\item{runid}{
Run identifier used for log file to track system call commands. Default is \code{"01"}.
}
\item{resourceList}{
\code{List} reserving sufficient computing resources for each cluster job, including memory (megabytes), number of nodes, CPU cores, walltime (minutes), etc. For more details, one can consult the template file of each queueing/scheduling system.
}
}
\value{Object of class \code{Registry}, as well as files and directories created by the executed command-line tools.
}
\references{
For more details on \code{batchtools}, please consult the following page: https://github.com/mllg/batchtools/
}
\author{
Daniela Cassol and Thomas Girke
}
\seealso{
\code{clusterRun} replaces the older functions \code{getQsubargs} and \code{qsubRun}.
}
\examples{
#########################################
## Examples with \code{SYSargs} object ##
#########################################
## Construct SYSargs object from param and targets files
param <- system.file("extdata", "hisat2.param", package="systemPipeR")
targets <- system.file("extdata", "targets.txt", package="systemPipeR")
args <- systemArgs(sysma=param, mytargets=targets)
args
names(args); modules(args); cores(args); outpaths(args); sysargs(args)
\dontrun{
## Execute SYSargs on multiple machines of a compute cluster. The following
## example uses the conf and template files for the Slurm scheduler. Please
## read the instructions on how to obtain the corresponding files for other schedulers.
file.copy(system.file("extdata", ".batchtools.conf.R", package="systemPipeR"), ".")
file.copy(system.file("extdata", "batchtools.slurm.tmpl", package="systemPipeR"), ".")
resources <- list(walltime=120, ntasks=1, ncpus=cores(args), memory=1024)
reg <- clusterRun(args, FUN = runCommandline, more.args = list(args = args, make_bam = TRUE), conffile=".batchtools.conf.R", template="batchtools.slurm.tmpl", Njobs=18, runid="01", resourceList=resources)
## Monitor progress of submitted jobs
getStatus(reg=reg)
file.exists(outpaths(args))
}
##########################################
## Examples with \code{SYSargs2} object ##
##########################################
## Construct SYSargs2 object from CWL param, CWL input, and targets files
targets <- system.file("extdata", "targets.txt", package="systemPipeR")
dir_path <- system.file("extdata/cwl", package="systemPipeR")
WF <- loadWorkflow(targets=targets, wf_file="hisat2-se/hisat2-mapping-se.cwl",
input_file="hisat2-se/hisat2-mapping-se.yml",
dir_path=dir_path)
WF <- renderWF(WF, inputvars=c(FileName="_FASTQ_PATH_", SampleName="_SampleName_"))
WF
names(WF); modules(WF); targets(WF)[1]; cmdlist(WF)[1:2]; output(WF)
\dontrun{
## Execute SYSargs2 on multiple machines of a compute cluster. The following
## example uses the conf and template files for the Slurm scheduler. Please
## read the instructions on how to obtain the corresponding files for other schedulers.
file.copy(system.file("extdata", ".batchtools.conf.R", package="systemPipeR"), ".")
file.copy(system.file("extdata", "batchtools.slurm.tmpl", package="systemPipeR"), ".")
resources <- list(walltime=120, ntasks=1, ncpus=4, memory=1024)
reg <- clusterRun(WF, FUN = runCommandline, more.args = list(args = WF, make_bam = FALSE), conffile=".batchtools.conf.R", template="batchtools.slurm.tmpl", Njobs=18, runid="01", resourceList=resources)
## Monitor progress of submitted jobs
getStatus(reg=reg)
## Updates the path in the object \code{output(WF)}
WF <- output_update(WF, dir=FALSE, replace = ".bam")
## Alignment stats
read_statsDF <- alignStats(WF)
read_statsDF <- cbind(read_statsDF[targets$FileName,], targets)
write.table(read_statsDF, "results/alignStats.xls", row.names=FALSE, quote=FALSE, sep="\t")
}
}
\keyword{ utilities }
path: /man/clusterRun.Rd | license_type: no_license | repo_name: Feigeliudan01/systemPipeR | language: R | is_vendor: false | is_generated: false | length_bytes: 6026 | extension: rd
|
#' Base url
#'
#' @return Character scalar: the Grubhub API base URL.
#' @export
base_url <- function() {
'https://api-gtm.grubhub.com'
}
add_headers <- function() {
httr::add_headers(
'Accept'='text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
'Accept-Encoding'= 'gzip, deflate, br',
'Accept-Language'='en-US,en;q=0.5',
Connection='keep-alive',
'Content-Type'= 'application/json;charset=UTF-8',
Host='api-gtm.grubhub.com',
'User-Agent'='Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:83.0) Gecko/20100101 Firefox/83.0'
)
}
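# Usage sketch: the two helpers compose into an httr request. The endpoint
# path below is hypothetical and shown only for illustration.
# resp <- httr::GET(paste0(base_url(), "/some/endpoint"), add_headers())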
path: /R/client.R | license_type: no_license | repo_name: bill-ash/hungryr | language: R | is_vendor: false | is_generated: false | length_bytes: 545 | extension: r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/MV04_LM.R
\name{asymp_MV04}
\alias{asymp_MV04}
\title{Critical values from MV04 paper, p.1831}
\usage{
asymp_MV04(alpha = c(0.01, 0.05, 0.1), M = c(1, 2, 3), type = c("no",
"const", "trend"))
}
\description{
Critical values from MV04 paper, p.1831
}
\keyword{internal}
path: /LongMemoryTS/man/asymp_MV04.Rd | license_type: no_license | repo_name: akhikolla/TestedPackages-NoIssues | language: R | is_vendor: false | is_generated: true | length_bytes: 362 | extension: rd
|
#q4:
position_factor <- as.factor(my.dataframe$Club_Position)
levels(position_factor)
path: /IntroToR/q4.r | license_type: no_license | repo_name: Soroosh-Bsl/ProbabilityStatistics---Fall2017-2018 | language: R | is_vendor: false | is_generated: false | length_bytes: 85 | extension: r
|
## STATIC SIMULATION MODEL
## BASE MODEL
## SCENARIO:
## ORIGINAL: TOR TOLHURST (OCT/2014)
## LAST UPDATE: Scott Biden (Aug/2017)
## LAST UPDATE:
## CHOOSE FIRST QUARTER TO SHOCK
qtr <- c(3, 4, 1, 2)
source("beef/code-version2/step-0 preliminaries.R")
for (i in qtr) {
# column name equivalent of i (allows running quarters in an order other than 1, 2, 3, 4)
ii <- paste("Q", i, sep="")
###############################
#### COW-CALF
###############################
## QUANTITIES
# load total supply and demand, calculate interprovincial imports (mip) as residual
calf_1["q_sup total", ii] <- round(fit(log_data(qq$calf$supply$data))$yhat[i])
calf_1["q_dem total", ii] <- round(fit(log_data(qq$calf$demand$data))$yhat[i])
calf_1["q_mip total", ii] <- calf_1["q_dem total", ii] - calf_1["q_sup total", ii]
# supply and mip by gender (derived by assumption)
calf_1["q_sup steer", ii] <- round(calf_1["q_sup total", ii]*assume1$birth)
calf_1["q_sup heifer", ii] <- round(calf_1["q_sup total", ii]*(1-assume1$birth))
calf_1["q_mip steer", ii] <- round(calf_1["q_mip total", ii]*assume1$mip$calf)
calf_1["q_mip heifer", ii] <- round(calf_1["q_mip total", ii]*(1-assume1$mip$calf))
# demand by gender (based on weighted ratio of gender in supply and mip)
temp <- (calf_1["q_sup steer", ii]+calf_1["q_mip steer", ii])/(calf_1["q_sup total", ii]+calf_1["q_mip total", ii])
calf_1["q_dem steer", ii] <- round(calf_1["q_dem total", ii]*temp)
calf_1["q_dem heifer", ii] <- round(calf_1["q_dem total", ii]*(1-temp))
rm(temp)
## PRICES
# load estimated prices
calf_1["price steer", ii] <- round(fit(log_data(pp$calf$steer$data))$yhat[i], 2)
calf_1["price heifer", ii] <- round(fit(log_data(pp$calf$heifer$data))$yhat[i], 2)
###############################
#### BACKGROUNDING
###############################
## QUANTITIES
# load provincial supply and demand
bkgd_1["q_sup total", ii] <- round(fit(log_data(qq$bkgrd$supply$data))$yhat[i])
bkgd_1["q_dem total", ii] <- round(fit(log_data(qq$bkgrd$demand$data))$yhat[i])
# supply retained for replacement by gender
# note: uses calf supply and mip from two quarters ago
if (which(qtr == i) <= 2){
tmp1 <- which(colnames(inventory1) == ii) - 2
den1 <- inventory1["calf supply", tmp1] + inventory1["calf mip", tmp1]
num1 <- inventory1["calf supply", tmp1]*(assume1$birth) + inventory1["calf mip", tmp1]*(assume1$mip$calf)
if (num1 > den1) num1 <- den1
num2 <- den1 - num1
}
if (which(qtr == i) > 2) {
# gender ratio of calf supply and mip two quarters ago
tmp1 <- paste("Q", qtr[which(qtr == i)-2], sep="")
den1 <- calf_1["q_sup total", tmp1] + calf_1["q_mip total", tmp1]
num1 <- calf_1["q_sup steer", tmp1] + calf_1["q_mip steer", tmp1]
num2 <- calf_1["q_sup heifer", tmp1] + calf_1["q_mip heifer", tmp1]
}
# provincial supply by gender net of replacements
bkgd_1["q_sup steer", ii] <- round(bkgd_1["q_sup total", ii]*(1-assume1$replace$s)*(num1/den1))
bkgd_1["q_rep steer", ii] <- round(bkgd_1["q_sup total", ii]*(assume1$replace$s)*(num1/den1))
bkgd_1["q_sup heifer", ii] <- round(bkgd_1["q_sup total", ii]*(1-assume1$replace$h)*(num2/den1))
bkgd_1["q_rep heifer", ii] <- round(bkgd_1["q_sup total", ii]*(assume1$replace$h)*(num2/den1))
bkgd_1["q_rep total", ii] <- bkgd_1["q_rep steer", ii]+bkgd_1["q_rep heifer", ii]
# remove temporary variables
rm(tmp1, den1, num1, num2)
# demand by gender -- determined by ratio of heifers to steers in provincial finishing supply
fnsh_1["q_sup steer", ii] <- round(fit(log_data(qq$fnshg$steer$supply$data))$yhat[i])
fnsh_1["q_sup heifer", ii] <- round(fit(log_data(qq$fnshg$heifer$supply$data))$yhat[i])
fnsh_1["q_sup total", ii] <- fnsh_1["q_sup steer", ii]+fnsh_1["q_sup heifer", ii]
bkgd_1["q_dem steer", ii] <- round(bkgd_1["q_dem total", ii]*(fnsh_1["q_sup steer", ii]/fnsh_1["q_sup total", ii]))
bkgd_1["q_dem heifer", ii] <- round(bkgd_1["q_dem total", ii]*(fnsh_1["q_sup heifer", ii]/fnsh_1["q_sup total", ii]))
# feeder exports to U.S. by gender
bkgd_1["q_xus total", ii] <- round(fit(qq$bkgrd$export$data, logeq = F)$yhat)[i]
bkgd_1["q_xus steer", ii] <- round(bkgd_1["q_xus total", ii]*assume1$exportfdr)
bkgd_1["q_xus heifer", ii] <- round(bkgd_1["q_xus total", ii]*(1-assume1$exportfdr))
# interprovincial imports include supply from dairy operations
# temp = dairy feeders (irrespective of gender)
temp <- round(fit(qq$bkgrd$dairy$data, logeq = F)$yhat)[i]
bkgd_1["q_mip steer", ii] <- bkgd_1["q_dem steer", ii] + bkgd_1["q_xus steer", ii] - bkgd_1["q_sup steer", ii] - round(temp*assume1$dairy_fdr)
bkgd_1["q_mip heifer", ii] <- bkgd_1["q_dem heifer", ii] + bkgd_1["q_xus heifer", ii] - bkgd_1["q_sup heifer", ii] - round(temp*(1-assume1$dairy_fdr))
bkgd_1["q_mip total", ii] <- bkgd_1["q_mip steer", ii] + bkgd_1["q_mip heifer", ii]
rm(temp)
## PRICES
bkgd_1["price steer", ii] <- round(fit(log_data(pp$bkgrd$steer$data))$yhat[i], 2)
bkgd_1["price heifer", ii] <- round(fit(log_data(pp$bkgrd$heifer$data))$yhat[i], 2)
###############################
#### FINISHING
###############################
## QUANTITIES
# load supply and demand by gender
fnsh_1["q_sup steer", ii] <- round(fit(log_data(qq$fnshg$steer$supply$data))$yhat[i])
fnsh_1["q_sup heifer", ii] <- round(fit(log_data(qq$fnshg$heifer$supply$data))$yhat[i])
fnsh_1["q_sup total", ii] <- fnsh_1["q_sup steer", ii]+fnsh_1["q_sup heifer", ii]
fnsh_1["q_dem steer", ii] <- round(fit(log_data(qq$fnshg$steer$demand$data))$yhat[i])
fnsh_1["q_dem heifer", ii] <- round(fit(log_data(qq$fnshg$heifer$demand$data))$yhat[i])
fnsh_1["q_dem total", ii] <- fnsh_1["q_dem steer", ii]+fnsh_1["q_dem heifer", ii]
# load exports to u.s. by gender
fnsh_1["q_xus steer", ii] <- round(fit(qq$fnshg$steer$export$data, logeq = F)$yhat[i])
fnsh_1["q_xus heifer", ii] <- round(fit(qq$fnshg$heifer$export$data, logeq = F)$yhat[i])
fnsh_1["q_xus total", ii] <- fnsh_1["q_xus steer", ii]+fnsh_1["q_xus heifer", ii]
# calculate interprovincial imports (mip) as residual
fnsh_1["q_mip steer", ii] <- fnsh_1["q_dem steer", ii] + fnsh_1["q_xus steer", ii] - fnsh_1["q_sup steer", ii]
fnsh_1["q_mip heifer", ii] <- fnsh_1["q_dem heifer", ii] + fnsh_1["q_xus heifer", ii] - fnsh_1["q_sup heifer", ii]
fnsh_1["q_mip total", ii] <- fnsh_1["q_mip steer", ii] + fnsh_1["q_mip heifer", ii]
## PRICES
# load estimated prices
fnsh_1["price steer", ii] <- round(fit(log_data(pp$fnshg$steer$data))$yhat[i], 2)
fnsh_1["price heifer", ii] <- round(fit(log_data(pp$fnshg$heifer$data))$yhat[i], 2)
###############################
#### CULLED
###############################
## QUANTITIES
# residual slaughter capacity (in head per quarter)
spare_capacity1 <- max(assume1$capacity - fnsh_1["q_sup total", ii], 0)
# load demand and exports to us
cull_1["q_dem bull", ii] <- round(fit(log_data(qq$culld$bulls$demand$data))$yhat[i])
cull_1["q_dem cow", ii] <- round(fit(log_data(qq$culld$cows$demand$data))$yhat[i])
cull_1["q_dem total", ii] <- cull_1["q_dem bull", ii]+cull_1["q_dem cow", ii]
cull_1["q_xus bull", ii] <- round(fit(qq$culld$bulls$export$data, logeq = F)$yhat[i])
cull_1["q_xus cow", ii] <- round(fit(qq$culld$cows$export$data, logeq = F)$yhat[i])
cull_1["q_xus total", ii] <- cull_1["q_xus bull", ii]+cull_1["q_xus cow", ii]
# load non-fed cow and bull inventory
cull_1["q_inv bull", ii] <- inventory1["cull bull", ii]
cull_1["q_inv cow", ii] <- inventory1["cull cow", ii]
cull_1["q_inv total", ii] <- cull_1["q_inv bull", ii] + cull_1["q_inv cow", ii]
# apply slaughter capacity constraint to demand (separately for each gender)
temp <- cull_1["q_dem bull", ii]/cull_1["q_dem total", ii]
if (cull_1["q_dem bull", ii] > temp*spare_capacity1) {
cull_1["q_dem bull", ii] <- temp*spare_capacity1
cull_1["capacity constraint", ii] <- 1
}
if (cull_1["q_dem cow", ii] > (1-temp)*spare_capacity1) {
cull_1["q_dem cow", ii] <- (1-temp)*spare_capacity1
cull_1["capacity constraint", ii] <- 1
}
cull_1["q_dem total", ii] <- cull_1["q_dem bull", ii]+cull_1["q_dem cow", ii]
# supply requires change (D = delta) in inventory
temp <- which(colnames(inventory1) == ii)
Dinv_b <- inventory1["cull bull", temp] - inventory1["cull bull", temp-1]
Dinv_c <- inventory1["cull cow", temp] - inventory1["cull cow", temp-1]
# derive provincial supply
cull_1["q_sup bull", ii] <- cull_1["q_dem bull", ii] + cull_1["q_xus bull", ii] + Dinv_b
cull_1["q_sup cow", ii] <- cull_1["q_dem cow", ii] + cull_1["q_xus cow", ii] + Dinv_c
cull_1["q_sup total", ii] <- cull_1["q_sup bull", ii] + cull_1["q_sup cow", ii]
# remove temporary inventory variables
rm(Dinv_b, Dinv_c, temp)
## PRICES
cull_1["price cow", ii] <- round(fit(log_data(pp$culld$heifer$data))$yhat[i], 2)
cull_1["price bull", ii] <- round(fit(log_data(pp$culld$steer$data))$yhat[i], 2)
###############################
#### END CONSUMER MARKETS
###############################
## QUANTITIES
# load provincial demand in lbs.
proc_1["q_dem total", ii] <- round(fit(log_data(qq$markt$demand$data))$yhat[i]*assume1$popn[i])
# provincial supply in lbs.
proc_1["q_sup total", ii] <- fnsh_1["q_sup heifer", ii]*weight[1, ii]
proc_1["q_sup total", ii] <- fnsh_1["q_sup steer", ii]*weight[2, ii] + proc_1["q_sup total", ii]
proc_1["q_sup total", ii] <- cull_1["q_sup cow", ii]*weight[3, ii] + proc_1["q_sup total", ii]
proc_1["q_sup total", ii] <- round(cull_1["q_sup bull", ii]*weight[4, ii] + proc_1["q_sup total", ii])
# exports and imports
proc_1["q_xus total", ii] <- round(fit(qq$markt$export$usa$data, logeq = F)$yhat[i])
proc_1["q_xrw total", ii] <- round(fit(qq$markt$export$row$data, logeq = T)$yhat[i])
proc_1["q_mus total", ii] <- round(fit(qq$markt$import$usa$data, logeq = F)$yhat[i])
proc_1["q_mrw total", ii] <- round(fit(qq$markt$import$row$data, logeq = F)$yhat[i])
# interprovincial imports
proc_1["q_mip total", ii] <- proc_1["q_dem total", ii] - proc_1["q_sup total", ii] + (proc_1["q_xus total", ii] + proc_1["q_xrw total", ii]) - (proc_1["q_mus total", ii] + proc_1["q_mrw total", ii])
## PRICES
proc_1["price retail", ii] <- round(fit(log_data(pp$markt$retail$data))$yhat[i], 2)
proc_1["price wholes", ii] <- round(fit(log_data(pp$markt$wholesale$data))$yhat[i], 2)
}
path: /code-version2/step-1 base_model.R | license_type: no_license | repo_name: Scobid/Beef-model | language: R | is_vendor: false | is_generated: false | length_bytes: 10556 | extension: r
## STATIC SIMULATION MODEL
## BASE MODEL
## SCENARIO:
## ORIGINAL: TOR TOLHURST (OCT/2014)
## LAST UPDATE: Scott Biden (Aug/2017)
## LAST UPDATE:
## CHOOSE FIRST QUARTER TO SHOCK
qtr <- c(3, 4, 1, 2)
source("beef/code-version2/step-0 preliminaries.R")
for (i in qtr) {
# column name equivalent of i (to run in different order and 1, 2, 3, 4)
ii <- paste("Q", i, sep="")
###############################
#### COW-CALF
###############################
## QUANTITIES
# load total supply and demand, calculate interprovincial imports (mip) as residual
calf_1["q_sup total", ii] <- round(fit(log_data(qq$calf$supply$data))$yhat[i])
calf_1["q_dem total", ii] <- round(fit(log_data(qq$calf$demand$data))$yhat[i])
calf_1["q_mip total", ii] <- calf_1["q_dem total", ii] - calf_1["q_sup total", ii]
# supply and mip by gender (derived by assumption)
calf_1["q_sup steer", ii] <- round(calf_1["q_sup total", ii]*assume1$birth)
calf_1["q_sup heifer", ii] <- round(calf_1["q_sup total", ii]*(1-assume1$birth))
calf_1["q_mip steer", ii] <- round(calf_1["q_mip total", ii]*assume1$mip$calf)
calf_1["q_mip heifer", ii] <- round(calf_1["q_mip total", ii]*(1-assume1$mip$calf))
# demand by gender (based on weighted ratio of gender in supply and mip)
temp <- (calf_1["q_sup steer", ii]+calf_1["q_mip steer", ii])/(calf_1["q_sup total", ii]+calf_1["q_mip total", ii])
calf_1["q_dem steer", ii] <- round(calf_1["q_dem total", ii]*temp)
calf_1["q_dem heifer", ii] <- round(calf_1["q_dem total", ii]*(1-temp))
rm(temp)
## PRICES
# load estimated prices
calf_1["price steer", ii] <- round(fit(log_data(pp$calf$steer$data))$yhat[i], 2)
calf_1["price heifer", ii] <- round(fit(log_data(pp$calf$heifer$data))$yhat[i], 2)
###############################
#### BACKGROUNDING
###############################
## QUANTITIES
# load provincial supply and demand
bkgd_1["q_sup total", ii] <- round(fit(log_data(qq$bkgrd$supply$data))$yhat[i])
bkgd_1["q_dem total", ii] <- round(fit(log_data(qq$bkgrd$demand$data))$yhat[i])
# supply retained for replacement by gender
# note: uses calf supply and mip from two quarters ago
if (which(qtr == i) <= 2){
tmp1 <- which(colnames(inventory1) == ii) - 2
den1 <- inventory1["calf supply", tmp1] + inventory1["calf mip", tmp1]
num1 <- inventory1["calf supply", tmp1]*(assume1$birth) + inventory1["calf mip", tmp1]*(assume1$mip$calf)
if (num1 > den1) num1 <- den1
num2 <- den1 - num1
}
if (which(qtr == i) > 2) {
# gender ratio of calf supply and mip two quarters ago
tmp1 <- paste("Q", qtr[which(qtr == i)-2], sep="")
den1 <- calf_1["q_sup total", tmp1] + calf_1["q_mip total", tmp1]
num1 <- calf_1["q_sup steer", tmp1] + calf_1["q_mip steer", tmp1]
num2 <- calf_1["q_sup heifer", tmp1] + calf_1["q_mip heifer", tmp1]
}
# provincial supply by gender net of replacements
bkgd_1["q_sup steer", ii] <- round(bkgd_1["q_sup total", ii]*(1-assume1$replace$s)*(num1/den1))
bkgd_1["q_rep steer", ii] <- round(bkgd_1["q_sup total", ii]*(assume1$replace$s)*(num1/den1))
bkgd_1["q_sup heifer", ii] <- round(bkgd_1["q_sup total", ii]*(1-assume1$replace$h)*(num2/den1))
bkgd_1["q_rep heifer", ii] <- round(bkgd_1["q_sup total", ii]*(assume1$replace$h)*(num2/den1))
bkgd_1["q_rep total", ii] <- bkgd_1["q_rep steer", ii]+bkgd_1["q_rep heifer", ii]
# remove temporary variables
rm(tmp1, den1, num1, num2)
# demand by gender -- determined by ratio of heifers to steers in provnicial finishing supply
fnsh_1["q_sup steer", ii] <- round(fit(log_data(qq$fnshg$steer$supply$data))$yhat[i])
fnsh_1["q_sup heifer", ii] <- round(fit(log_data(qq$fnshg$heifer$supply$data))$yhat[i])
fnsh_1["q_sup total", ii] <- fnsh_1["q_sup steer", ii]+fnsh_1["q_sup heifer", ii]
bkgd_1["q_dem steer", ii] <- round(bkgd_1["q_dem total", ii]*(fnsh_1["q_sup steer", ii]/fnsh_1["q_sup total", ii]))
bkgd_1["q_dem heifer", ii] <- round(bkgd_1["q_dem total", ii]*(fnsh_1["q_sup heifer", ii]/fnsh_1["q_sup total", ii]))
# feeder exports to U.S. by gender
bkgd_1["q_xus total", ii] <- round(fit(qq$bkgrd$export$data, logeq = F)$yhat)[i]
bkgd_1["q_xus steer", ii] <- round(bkgd_1["q_xus total", ii]*assume1$exportfdr)
bkgd_1["q_xus heifer", ii] <- round(bkgd_1["q_xus total", ii]*(1-assume1$exportfdr))
# interprovincial imports includes supply from dairy operations
# temp = dairy feeders (irrespective of gender)
temp <- round(fit(qq$bkgrd$dairy$data, logeq = F)$yhat)[i]
bkgd_1["q_mip steer", ii] <- bkgd_1["q_dem steer", ii] + bkgd_1["q_xus steer", ii] - bkgd_1["q_sup steer", ii] - round(temp*assume1$dairy_fdr)
bkgd_1["q_mip heifer", ii] <- bkgd_1["q_dem heifer", ii] + bkgd_1["q_xus heifer", ii] - bkgd_1["q_sup heifer", ii] - round(temp*(1-assume1$dairy_fdr))
bkgd_1["q_mip total", ii] <- bkgd_1["q_mip steer", ii] + bkgd_1["q_mip heifer", ii]
rm (temp)
## PRICES
bkgd_1["price steer", ii] <- round(fit(log_data(pp$bkgrd$steer$data))$yhat[i], 2)
bkgd_1["price heifer", ii] <- round(fit(log_data(pp$bkgrd$heifer$data))$yhat[i], 2)
###############################
#### FINISHING
###############################
## QUANTITIES
# load supply and demand by gender
fnsh_1["q_sup steer", ii] <- round(fit(log_data(qq$fnshg$steer$supply$data))$yhat[i])
fnsh_1["q_sup heifer", ii] <- round(fit(log_data(qq$fnshg$heifer$supply$data))$yhat[i])
fnsh_1["q_sup total", ii] <- fnsh_1["q_sup steer", ii]+fnsh_1["q_sup heifer", ii]
fnsh_1["q_dem steer", ii] <- round(fit(log_data(qq$fnshg$steer$demand$data))$yhat[i])
fnsh_1["q_dem heifer", ii] <- round(fit(log_data(qq$fnshg$heifer$demand$data))$yhat[i])
fnsh_1["q_dem total", ii] <- fnsh_1["q_dem steer", ii]+fnsh_1["q_dem heifer", ii]
# load exports to u.s. by gender
fnsh_1["q_xus steer", ii] <- round(fit(qq$fnshg$steer$export$data, logeq = F)$yhat[i])
fnsh_1["q_xus heifer", ii] <- round(fit(qq$fnshg$heifer$export$data, logeq = F)$yhat[i])
fnsh_1["q_xus total", ii] <- fnsh_1["q_xus steer", ii]+fnsh_1["q_xus heifer", ii]
# calculate interprovincial imports (mip) as residual
fnsh_1["q_mip steer", ii] <- fnsh_1["q_dem steer", ii] + fnsh_1["q_xus steer", ii] - fnsh_1["q_sup steer", ii]
fnsh_1["q_mip heifer", ii] <- fnsh_1["q_dem heifer", ii] + fnsh_1["q_xus heifer", ii] - fnsh_1["q_sup heifer", ii]
fnsh_1["q_mip total", ii] <- fnsh_1["q_mip steer", ii] + fnsh_1["q_mip heifer", ii]
## PRICES
# load estimated prices
fnsh_1["price steer", ii] <- round(fit(log_data(pp$fnshg$steer$data))$yhat[i], 2)
fnsh_1["price heifer", ii] <- round(fit(log_data(pp$fnshg$heifer$data))$yhat[i], 2)
###############################
#### CULLED
###############################
## QUANTITIES
# residual slaughter capacity (in head per quarter)
spare_capacity1 <- max(assume1$capacity - fnsh_1["q_sup total", ii], 0)
# load demand and exports to us
cull_1["q_dem bull", ii] <- round(fit(log_data(qq$culld$bulls$demand$data))$yhat[i])
cull_1["q_dem cow", ii] <- round(fit(log_data(qq$culld$cows$demand$data))$yhat[i])
cull_1["q_dem total", ii] <- cull_1["q_dem bull", ii]+cull_1["q_dem cow", ii]
cull_1["q_xus bull", ii] <- round(fit(qq$culld$bulls$export$data, logeq = F)$yhat[i])
cull_1["q_xus cow", ii] <- round(fit(qq$culld$cows$export$data, logeq = F)$yhat[i])
cull_1["q_xus total", ii] <- cull_1["q_xus bull", ii]+cull_1["q_xus cow", ii]
# load non-fed cow and bull inventory
cull_1["q_inv bull", ii] <- inventory1["cull bull", ii]
cull_1["q_inv cow", ii] <- inventory1["cull cow", ii]
cull_1["q_inv total", ii] <- cull_1["q_inv bull", ii] + cull_1["q_inv cow", ii]
# apply slaughter capacity constraint to demand (separately for each gender)
temp <- cull_1["q_dem bull", ii]/cull_1["q_dem total", ii]
if (cull_1["q_dem bull", ii] > temp*spare_capacity1) {
cull_1["q_dem bull", ii] <- temp*spare_capacity1
cull_1["capacity constraint", ii] <- 1
}
if (cull_1["q_dem cow", ii] > (1-temp)*spare_capacity1) {
cull_1["q_dem cow", ii] <- (1-temp)*spare_capacity1
cull_1["capacity constraint", ii] <- 1
}
cull_1["q_dem total", ii] <- cull_1["q_dem bull", ii]+cull_1["q_dem cow", ii]
# supply requires change (D = delta) in inventory
temp <- which(colnames(inventory1) == ii)
Dinv_b <- inventory1["cull bull", temp] - inventory1["cull bull", temp-1]
Dinv_c <- inventory1["cull cow", temp] - inventory1["cull cow", temp-1]
# derive provincial supply
cull_1["q_sup bull", ii] <- cull_1["q_dem bull", ii] + cull_1["q_xus bull", ii] + Dinv_b
cull_1["q_sup cow", ii] <- cull_1["q_dem cow", ii] + cull_1["q_xus cow", ii] + Dinv_c
cull_1["q_sup total", ii] <- cull_1["q_sup bull", ii] + cull_1["q_sup cow", ii]
# remove temporary inventory variables
rm(Dinv_b, Dinv_c, temp)
## PRICES
cull_1["price cow", ii] <- round(fit(log_data(pp$culld$heifer$data))$yhat[i], 2)
cull_1["price bull", ii] <- round(fit(log_data(pp$culld$steer$data))$yhat[i], 2)
###############################
#### END CONSUMER MARKETS
###############################
## QUANTITIES
# load provincial demand in lbs.
proc_1["q_dem total", ii] <- round(fit(log_data(qq$markt$demand$data))$yhat[i]*assume1$popn[i])
# provincial supply in lbs.
proc_1["q_sup total", ii] <- fnsh_1["q_sup heifer", ii]*weight[1, ii]
proc_1["q_sup total", ii] <- fnsh_1["q_sup steer", ii]*weight[2, ii] + proc_1["q_sup total", ii]
proc_1["q_sup total", ii] <- cull_1["q_sup cow", ii]*weight[3, ii] + proc_1["q_sup total", ii]
proc_1["q_sup total", ii] <- round(cull_1["q_sup bull", ii]*weight[4, ii] + proc_1["q_sup total", ii])
# exports and imports
proc_1["q_xus total", ii] <- round(fit(qq$markt$export$usa$data, logeq = F)$yhat[i])
proc_1["q_xrw total", ii] <- round(fit(qq$markt$export$row$data, logeq = T)$yhat[i])
proc_1["q_mus total", ii] <- round(fit(qq$markt$import$usa$data, logeq = F)$yhat[i])
proc_1["q_mrw total", ii] <- round(fit(qq$markt$import$row$data, logeq = F)$yhat[i])
# interprovincial imports
proc_1["q_mip total", ii] <- proc_1["q_dem total", ii] - proc_1["q_sup total", ii] + (proc_1["q_xus total", ii] + proc_1["q_xrw total", ii]) - (proc_1["q_mus total", ii] + proc_1["q_mrw total", ii])
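# Illustrative note (not part of the original script): interprovincial
# imports are derived above as the market-clearing residual,
# mip = demand + exports - supply - imports from abroad,
# so the end-consumer market balances by construction:
# q_sup + q_mip + q_mus + q_mrw == q_dem + q_xus + q_xrw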
## PRICES
proc_1["price retail", ii] <- round(fit(log_data(pp$markt$retail$data))$yhat[i], 2)
proc_1["price wholes", ii] <- round(fit(log_data(pp$markt$wholesale$data))$yhat[i], 2)
}
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
skip_if_not_available("dataset")
library(dplyr)
# randomize order of rows in test data
tbl <- slice_sample(example_data_for_sorting, prop = 1L)
test_that("arrange() on integer, double, and character columns", {
expect_dplyr_equal(
input %>%
arrange(int, chr) %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
arrange(int, desc(dbl)) %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
arrange(int, desc(desc(dbl))) %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
arrange(int) %>%
arrange(desc(dbl)) %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
arrange(int + dbl, chr) %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
mutate(zzz = int + dbl) %>%
arrange(zzz, chr) %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
mutate(zzz = int + dbl) %>%
arrange(int + dbl, chr) %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
mutate(int + dbl) %>%
arrange(int + dbl, chr) %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
group_by(grp) %>%
arrange(int, dbl) %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
group_by(grp) %>%
arrange(int, dbl, .by_group = TRUE) %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
group_by(grp, grp2) %>%
arrange(int, dbl, .by_group = TRUE) %>%
collect(),
tbl %>%
mutate(grp2 = ifelse(is.na(lgl), 1L, as.integer(lgl)))
)
expect_dplyr_equal(
input %>%
group_by(grp) %>%
arrange(.by_group = TRUE) %>%
pull(grp),
tbl
)
expect_dplyr_equal(
input %>%
arrange() %>%
collect(),
tbl %>%
group_by(grp)
)
expect_dplyr_equal(
input %>%
group_by(grp) %>%
arrange() %>%
collect(),
tbl
)
expect_dplyr_equal(
input %>%
arrange() %>%
collect(),
tbl
)
test_sort_col <- "chr"
expect_dplyr_equal(
input %>%
arrange(!!sym(test_sort_col)) %>%
collect(),
tbl %>%
select(chr, lgl)
)
test_sort_cols <- c("int", "dbl")
expect_dplyr_equal(
input %>%
arrange(!!!syms(test_sort_cols)) %>%
collect(),
tbl
)
})
test_that("arrange() on datetime columns", {
expect_dplyr_equal(
input %>%
arrange(dttm, int) %>%
collect(),
tbl
)
skip("Sorting by only a single timestamp column fails (ARROW-12087)")
expect_dplyr_equal(
input %>%
arrange(dttm) %>%
collect(),
tbl %>%
select(dttm, grp)
)
})
test_that("arrange() on logical columns", {
expect_dplyr_equal(
input %>%
arrange(lgl, int) %>%
collect(),
tbl
)
})
test_that("arrange() with bad inputs", {
expect_error(
tbl %>%
Table$create() %>%
arrange(1),
"does not contain any field names",
fixed = TRUE
)
expect_error(
tbl %>%
Table$create() %>%
arrange(2 + 2),
"does not contain any field names",
fixed = TRUE
)
expect_error(
tbl %>%
Table$create() %>%
arrange(aertidjfgjksertyj),
"not found",
fixed = TRUE
)
expect_error(
tbl %>%
Table$create() %>%
arrange(desc(aertidjfgjksertyj + iaermxiwerksxsdqq)),
"not found",
fixed = TRUE
)
expect_error(
tbl %>%
Table$create() %>%
arrange(desc(int, chr)),
"expects only one argument",
fixed = TRUE
)
})
|
/r/tests/testthat/test-dplyr-arrange.R
|
permissive
|
tianchen92/arrow
|
R
| false
| false
| 4,292
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/TESTCN.R
\name{testCn_}
\alias{testCn_}
\title{Edf Test for Poisson Distribution \eqn{Cn*}}
\usage{
testCn_(x, n.boot)
}
\arguments{
\item{x}{vector of nonnegative integers, the sample data}
\item{n.boot}{number of bootstrap replicates}
}
\description{
Performs the empirical distribution function goodness-of-fit test of Poisson distribution with unknown parameter
}
\value{The function \code{testCn_} returns a list with class \code{htest} containing:
\item{method}{Description of test}
\item{data}{Description of data}
\item{test statistic}{Value of test statistic}
\item{p-value}{approximate p-value of the test}
\item{mean}{sample mean}
}
\details{The edf test of Poissonity \eqn{Cn*} was proposed by Henze (1996). The test is based on the similarity between the edf of the random variable \eqn{X}, \eqn{F_{n}(k)}, and the cdf of the Poisson distribution with estimated parameter \eqn{\hat{\lambda}_{n} = \bar{X}_{n}}, \eqn{F(k;\hat{\lambda}_{n})}. The test statistic is a modification of the Cramer-von Mises distance.
\eqn{C_{n}^\star = n \sum_{k = 0}^{\infty} [F_{n}(k) - F(k;\hat{\lambda}_{n})]^2 f_{n}(k)}
\eqn{f_{n}(k)} denotes \eqn{F_{n}(k) - F_{n}(k-1)}.
The test is implemented by parametric bootstrap with \code{n.boot} replicates.}
\author{Manuel Mendez Hurtado \email{mmendezinformatica@gmail.com}}
\references{Henze, N. (1996) Empirical-distribution-function goodness-of-fit tests for discrete models, \emph{The Canadian Journal of Statistics} Vol 24 No 1, 81-93 \url{https://www.jstor.org/stable/3315691?seq=1}}
\examples{x <- rpois(20,2)
testCn_(x, n.boot = 500)}
|
/man/testCn_.Rd
|
no_license
|
MMH1997/TestPoissonity
|
R
| false
| true
| 1,678
|
rd
|
data(votes.repub, package = "cluster")  # state-level Republican vote shares
head(votes.repub)
library(dplyr)
library(ggplot2)
library(viridis)
# Using the viridis library for the color palette
us_map <- map_data("state")  # presumed lower-48 state polygons; not defined in the original script
votes.repub %>%
tbl_df() %>%
mutate(state = rownames(votes.repub),
state = tolower(state)) %>%
right_join(us_map, by = c("state" = "region")) %>%
ggplot(aes(x = long, y = lat, group = group, fill = `1976`)) +
geom_polygon(color = "black") +
theme_void() +
scale_fill_viridis(name = "Republican\nvotes (%)")
maryland <- map_data('county', region = 'maryland')
head(maryland)
baltimore <- maryland %>%
filter(subregion %in% c("baltimore city", "baltimore"))
head(baltimore, 3)
ggplot(baltimore, aes(x = long, y = lat, group = group)) +
geom_polygon(fill = "lightblue", color = "black") +
theme_void()
serial <- read.csv("G:/Bootcamp R/Mapping in R/serial_map_data.csv")
head(serial, 3)
View(serial)
serial <- serial %>%
mutate(long = -76.8854 + 0.00017022 * x,
lat = 39.23822 + 1.371014e-04 * y,
tower = Type == "cell-site")
View(serial)
ggplot(baltimore, aes(x = long, y = lat, group = group)) +
geom_polygon(fill = "lightblue", color = "black") +
geom_point(data = serial, aes(group = NULL, color = tower)) +
theme_void() +
scale_color_manual(name = "Cell tower", values = c("black", "red"))
|
/mapping-2.R
|
no_license
|
SurajMalpani/BootcampR2018
|
R
| false
| false
| 1,356
|
r
|
find_spells <- function(x, threshold = 0.001, rule = "cut", na.rm = TRUE,
warn = TRUE, complete = "none")
{
x %>%
.append_flow_state(threshold = threshold) %>%
summarize_spell(rule = rule, na.rm = na.rm, warn = warn,
complete = complete)
}
summarize_spell <- function(x,
rule = c("cut", "duplicate", "onset", "termination"),
na.rm = TRUE, warn = TRUE, complete = "none")
{
rule <- match.arg(rule)
x %>%
.add_spellvars(warn = warn, duplicate = rule != "cut") %>%
.assign_spell(rule = rule) %>%
.complete_spell(complete = complete) %>%
arrange(spell) # sort by spell
}
.assign_spell <- function(x,
rule = c("cut", "duplicate", "onset", "termination"))
{
rule <- match.arg(rule)
attr_smires(x) <- list("rule" = rule)
# todo: rules for "majority" and "center"
# todo: cut = cut_group = cut_minor, cut_major
# spells are already cut or duplicated
if(rule %in% c("cut", "duplicate")) {
y <- x
}
if(rule == "onset") {
y <- arrange(ungroup(x), spell, group) %>%
distinct(spell, state, .keep_all = TRUE)
}
if(rule == "termination") {
y <- arrange(ungroup(x), desc(spell), desc(group)) %>%
distinct(spell, state, .keep_all = TRUE) %>%
arrange(spell)
}
return(y)
}
.complete_spell <- function(x, complete = c("none", "major", "minor", "group"),
fill = NA)
{
complete <- match.arg(complete)
# retain zero length events
fill <- list(onset = NA, termination = NA, duration = 0,
group = NA, major = NA, minor = NA, var = fill)
x <- ungroup(x)
y <- switch(complete,
major = complete(x, state, major, fill = fill),
minor = complete(x, state, minor, fill = fill),
group = complete(x, state, group, fill = fill),
x)
return(y)
}
.append_flow_state <- function(x, threshold = 0.001)
{
if(is.null(threshold))
{
x$spell <- seq_len(nrow(x))
} else {
att <- attr_smires(x)
# todo, better use cut?
# cut(, breaks = c(0, threshold, Inf), labels = c("no-flow", "flow"))
x$state <- ifelse(x$discharge <= threshold, "no-flow", "flow")
x$state <- factor(x$state, levels = c("no-flow", "flow"))
x <- mutate(x, spell = .spell(x$state))
att[["threshold"]] <- threshold
attr_smires(x) <- att
}
return(x)
}
# .detect_increase(balder) %>%
# .add_spellvars()
.detect_increase <- function(x)
{
att <- attr_smires(x)
d <- diff(x$discharge)
# as we are only interested in counting the state changes and not in
# the duration of a state, we can assign zeros to either increase or decrease
state <- cut(d, breaks = c(-Inf, 0, Inf), labels = c("decrease", "increase"))
x <- data_frame(time = head(x$time, n = -1),
state = state,
spell = .spell(state))
attr_smires(x) <- att
return(x)
}
.add_spellvars <- function(x, warn = TRUE, duplicate = FALSE)
{
grouped <- "group" %in% colnames(x)
y <- if(grouped && !duplicate) {
group_by(x, spell, state, group)
} else {
group_by(x, spell, state)
}
att <- attr_smires(x)
# always store the cut spells in attributes, needed for plotting
if(grouped) {
cut <- group_by(x, spell, state, group)
} else{
cut <- group_by(x, spell, state)
}
cut <- cut %>%
summarize(onset = min(time), termination = max(time) + att$dt,
duration = termination - onset)
if(duplicate) {
res <- y %>% do(data.frame(onset = min(.$time), termination = max(.$time) + att$dt,
group = unique(.$group))) %>%
mutate(duration = termination - onset)
} else {
res <- summarize(y, onset = min(time), termination = max(time) + att$dt,
duration = termination - onset)
}
if(grouped) {
# merge with minor and major intervals, if data was grouped
res <- right_join(res, att[["group_interval"]], by = "group")
cut <- right_join(cut, att[["group_interval"]], by = "group")
}
# quick and dirty way to drop smires attributes, no need to store them twice
att[["spell_cut"]] <- cut[, seq_along(cut)]
#if(grouped | duplicate)
attr_smires(res) <- att
return(res)
}
.spell <- function(x, new.group.na = TRUE, as.factor = TRUE)
{
# copied from lfstat group()
# operates just on the grouping variable
x <- as.numeric(as.factor(x))
if(!new.group.na) {
s <- seq_along(x)
finite <- !is.na(x)
x <- approx(s[finite], x[finite], xout = s, f = 0,
method = "constant", rule = c(1, 2))$y
} else {
# treat NAs as a group of its own
# there isn't yet a level zero, therefore NAs can become zeros
x[is.na(x)] <- 0
}
inc <- diff(x)
if (new.group.na) inc[is.na(inc)] <- Inf
grp <- c(0, cumsum(inc != 0))
if(grp[1] == 0) grp <- grp + 1
if(as.factor) {
return(factor(grp, ordered = TRUE))
} else {
return(grp)
}
}
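# ---------------------------------------------------------------- #
# Illustrative sketch (not part of the original package code): .spell()
# assigns run-length ids, so consecutive equal states share one spell id
# and a state change -- or an NA, when new.group.na = TRUE -- starts a
# new spell. With the hypothetical toy vector
# states <- c("flow", "flow", "no-flow", "no-flow", NA, "flow")
# .spell(states, as.factor = FALSE)
# returns 1 1 2 2 3 4.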
|
/R/events.R
|
no_license
|
lhmet/smires
|
R
| false
| false
| 5,035
|
r
|
# ----------------------------------------------------------------------- #
# DEMAND PREDICTION PIPELINE - format the data frames
# Antoine - Version 2.0, 11.07.2016
# ----------------------------------------------------------------------- #
# Description:
# this script contains 2 types of functions.
#
# 1. general functions for formatting that can be called by all other scripts
#
# 2. specific functions for formatting individual data frames,
# regarding interactions, idle time, nb of drivers ...
# ----------------------------------------------------------------------- #
require(dplyr)
require(dummies)
# ----------------------------------------------------------------------- #
# 1. General functions -----
# -------------------------------- #
# transform a factor into columns and associate the values to it
valumy <- function(df, name_col_factor, name_col_value){
# list all values of the factor
all_factors <- unique(df[, c(name_col_factor)])
# create dummy indexes df for each value of the factor
dummy_factors <- dummy.data.frame(data = df, names = name_col_factor, omit.constants = FALSE)
colnames(dummy_factors) <- gsub(name_col_factor, "", colnames(dummy_factors))
# update the values
for (i_factor in all_factors) {
dummy_factors[, c(i_factor)] <- dummy_factors[, c(i_factor)]*dummy_factors[, name_col_value]
}
return(dummy_factors)
}
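# -------------------------------- #
# Illustrative sketch (not part of the original pipeline): valumy()
# spreads a factor column into one indicator column per level and fills
# it with the row's value. With the hypothetical toy input
# df <- data.frame(City = c("London", "Berlin", "London"),
#                  Interaction = c(2, 3, 5))
# valumy(df, "City", "Interaction")
# yields columns Berlin = c(0, 3, 0) and London = c(2, 0, 5), plus the
# original Interaction column.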
# -------------------------------- #
# take care of the change of hour from summer time to winter time!
summer_time_fix <- function(df, col_date = "Date", col_hour = "Hour",
start_summer = "2016-03-27", end_summer = "2016-10-30") {
require(dplyr)
# format as character the hour
df[, col_hour] <- as.vector(df[, col_hour])
# add one hour for all hours between the two dates
idx_dates <- which(df[, col_date] >= start_summer & df[, col_date] <= end_summer)
# get the hour from df
hours <- unlist(
strsplit(df[idx_dates, col_hour], ":"))[seq(1, 2*length(df[idx_dates, col_hour]), 2)]
hours <- as.numeric(hours) + 1 # add one hour in the summer season
# get the minutes
minutes <- unlist(
strsplit(df[idx_dates, col_hour], ":"))[seq(2, 2*length(df[idx_dates, col_hour]), 2)]
# put back in the df
df[idx_dates, col_hour] <- paste(hours, minutes, sep = ":")
return(df)
}
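# -------------------------------- #
# Illustrative sketch (not part of the original pipeline): within the
# summer window the function shifts the recorded hour forward by one.
# With the hypothetical toy input
# df <- data.frame(Date = as.Date("2016-07-11"), Hour = "14:30",
#                  stringsAsFactors = FALSE)
# summer_time_fix(df)$Hour
# returns "15:30"; dates outside the window are left untouched.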
# -------------------------------- #
# change from hourly to bi-hourly format
one2two_hour_bins <- function(df_inter_hourly, method = sum){
# combine the 30-minute timeslots into 1-hour bins
df_inter_hourly <- df_inter_hourly[, which(!colnames(df_inter_hourly) %in% "To")]
df_inter_hourly$From <- gsub(":30", ":00", df_inter_hourly$From)
df_inter_hourly <- aggregate(formula = . ~ Date + From, data = df_inter_hourly, FUN = sum)
# Add zeros for times when closed to have consistent frequency !!
df_inter_hourly <- complete_hours(df_inter_hourly)
# aggregate by 2 hours
df_inter_hourly <- aggregate_odd_hours(df_inter_hourly, method = method)
# sort by increasing date and hour
df_inter_hourly <- arrange(df_inter_hourly, Date, From)
return(df_inter_hourly)
}
# -------------------------------- #
# add zeros for closed hours to have a consistent frequency
complete_hours <- function(df_hourly) {
# create all_hours
all_hours <- format(as.POSIXct(as.character(0:23), format = "%H"), format = "%H:%M")
# create an empty df_hourly with full hours
new_df_hourly <- expand.grid(Date = sort(unique(df_hourly$Date)),
From = all_hours, stringsAsFactors = FALSE)
# left join to fill in all the known hours
new_df_hourly <- left_join(new_df_hourly, df_hourly)
# replace NAs by 0
new_df_hourly[is.na(new_df_hourly)] <- 0
return(new_df_hourly)
}
# -------------------------------- #
# normalize the hourly data by the daily sum
normalize_profiles <- function(df_hourly, df_daily, method = "MinMax"){
all_districts <- colnames(df_hourly)[which(!colnames(df_hourly) %in% c("Date", "From", "To", "Weekdays"))]
# loop on days
for (day in unique(df_hourly$Date)){
for (district in all_districts) {
# divide the hourly data for one day, by the sum for that day
df_hourly[df_hourly$Date == day,
district] <- df_hourly[df_hourly$Date == day, district] / as.numeric(df_daily[df_daily$Date == day, district])
}
}
return(df_hourly)
}
# -------------------------------- #
# aggregate to get 2hour bins
aggregate_odd_hours <- function(df, method = sum, agg = TRUE){
# get the hour from the time
df$From <- format(as.POSIXct(df$From, format = "%H:%M"), format = "%H")
df$From <- as.numeric(df$From)
# find the odd hour and remove 1 hour from them
# df$From <- ifelse(df$From %% 2 ,
# df$From,
# df$From - 1)
df$From <- ifelse(df$From %% 2 ,
df$From-1,
df$From)
# transform to time format
df$From <- format(as.POSIXct(as.character(df$From), format = "%H"), format = "%H:%M")
# aggregate by date and hours
if (agg) {
df <- aggregate(formula = . ~ Date + From, data = df, FUN = method)
}
df <- arrange(df, Date, From)
return(df)
}
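# -------------------------------- #
# Illustrative sketch (not part of the original pipeline): each odd hour
# is folded into the preceding even hour, giving 2-hour bins. With the
# hypothetical toy input
# df <- data.frame(Date = "2016-07-11",
#                  From = c("08:00", "09:00", "10:00"), n = c(1, 2, 4))
# aggregate_odd_hours(df)
# the "08:00" bin sums to 1 + 2 = 3 and "10:00" keeps 4.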
# ----------------------------------------------------------------------- #
# 2. specific functions -----
# -------------------------------- #
# ... interactions
# !! for CW before 18 (excluded)
format_interaction <- function(df_interactions_raw, dates, daily = FALSE){
df_interactions <- df_interactions_raw
# formating dates
df_interactions$Date <- as.Date(df_interactions$Date, format = "%d.%m.%y")
# filter on the relevant dates
df_interactions <- df_interactions[df_interactions$Date >= dates[1] &
df_interactions$Date <= dates[2],]
# aggregate the data if daily = TRUE
if (daily){
df_interactions <- df_interactions[,-which(colnames(df_interactions) == "Hour")]
df_interactions <- aggregate(data = df_interactions, . ~ Date, sum)
}
return(df_interactions)
}
# !! for CW after 18 (included)
format_interaction_2 <- function(df_interactions_raw, dates, daily = FALSE){
df_interactions <- df_interactions_raw
# change the name
colnames(df_interactions) <- c("Date", "From", "To", "City", "Cluster", "PU", "DO")
# replace NAs by 0s
df_interactions$PU[is.na(df_interactions$PU)] <- 0
df_interactions$DO[is.na(df_interactions$DO)] <- 0
# calculate the nb of interaction per timeslot
df_interactions <- mutate(df_interactions, Interaction = PU + DO)
# format
df_interactions$Date <- as.Date(df_interactions$Date, format = "%m/%d/%y")
# filter on the dates
df_interactions <- df_interactions[df_interactions$Date >= dates[1] &
df_interactions$Date <= dates[2],]
# expand by city
df_new <- valumy(df_interactions, "City", "Interaction")
# expand by cluster
df_new <- valumy(df_new, "Cluster", "Interaction")
#... aggregate per day and timeslots
df_new <- aggregate(data = df_new, . ~ Date + From + To, sum)
df_new <- df_new[, which(!colnames(df_new) %in% c("To"))]
# aggregate the data if daily = TRUE
if (daily){
# TODO : use select instead of hard code
# df_new <- aggregate(data = select(df_new, -one_of(c("From", "To")) ), . ~ Date, sum)
df_new <- aggregate(data = df_new[, which(!colnames(df_new) %in% c("From", "To"))],
. ~ Date, sum)
}
# drop the PU, DO and Interaction columns
df_new <- df_new[, which(!colnames(df_new) %in% c("PU", "DO", "Interaction", "To"))]
return(df_new)
}
# -------------------------------- #
# ... holidays
format_holidays <- function(holidays_raw, dates){
holidays <- holidays_raw
# rename the columns
# colnames(holidays) <- c("Date", "London", "Central",
# "North.East", "Southwark", "Victoria",
# "West", "South.West", "Berlin",
# "Berlin.Ost", "Westen")
colnames(holidays) <- c("Date", "London", "Berlin")
# format dates
holidays$Date <- as.Date(holidays$Date, format = "%d.%m.%Y")
# filter on relevant dates
holidays <- holidays[holidays$Date >= dates[1] &
holidays$Date <= dates[2], ]
return(holidays)
}
# -------------------------------- #
# ... idle_time
format_idletime <- function(idle_time_raw, dates, daily = FALSE){
require(dplyr)
idle_time <- idle_time_raw
idle_time <- dplyr::select(idle_time,
Date = Days.in.forecast__completedAt__date,
From = destination_from_hours,
To = destination_to_hours,
City = destination__externalId,
Cluster = fleet__name,
Idle_time = final_idletime) %>%
mutate(Date = as.Date(Date, format = "%m/%d/%y")) %>%
dplyr::filter(Date >= dates[1] &
Date <= dates[2])
idle_time$City<-substr(idle_time$City,1,2)
idle_time$City[idle_time$City=="DE"]<-"Berlin"
idle_time$City[idle_time$City=="GB"]<-"London"
# expand by city
idle_time <- valumy(idle_time, "City", "Idle_time")
# expand by cluster
idle_time <- valumy(idle_time, "Cluster", "Idle_time")
#... aggregate per day and timeslots
idle_time <- aggregate(data = idle_time, . ~ Date + From + To, sum)
idle_time <- idle_time[, which(!colnames(idle_time) %in% c("To", "Idle_time"))]
# aggregate the data if daily = TRUE
if (daily){
idle_time <- aggregate(data = idle_time[, which(!colnames(idle_time) %in% c("From", "To"))],
. ~ Date,
sum)
}
return(idle_time)
}
format_idletime_2 <- function(idle_time_raw, dates, daily = FALSE) {
require(dplyr)
idle_time <- idle_time_raw %>%
transmute(Date = as.Date(slot, format = "%Y-%m-%d"),
From = localHoursFrom,
City = as.factor(city),
Cluster = as.factor(fleet),
idletime = ifelse(is.na(time),
0,
time)) %>%
filter(Date >= dates[1] &
Date <= dates[2]) %>%
# /!\ take care of non plain hour by just setting "From" to "xx:00" format
mutate(From = gsub(":..", ":00", From))
# /!\
# expand by city
idle_time <- valumy(df = idle_time,
name_col_factor = "City",
name_col_value = "idletime")
# expand by cluster
idle_time <- valumy(df = idle_time,
name_col_factor = "Cluster",
name_col_value = "idletime")
# get rid of the total idle time and aggregate to remove the duplicate
if(! daily) {
idle_time <- aggregate(data = dplyr::select(idle_time, -idletime),
. ~ Date + From,
sum) %>%
arrange(Date, From)
} else {
idle_time <- aggregate(data = dplyr::select(idle_time, -idletime, -From),
. ~ Date,
sum) %>%
arrange(Date)
}
return(idle_time)
}
# -------------------------------- #
# ... nb of drivers per hour
format_nb_drivers <- function(nb_driver_raw, dates, district) {
# calculate the nb of drivers bi-hourly
nb_driver <- transmute(nb_driver_raw,
Date = as.Date(Days.in.forecast__completedAt__date,
format = "%m/%d/%y"),
From = gsub(":30", ":00", destination_from_hours),
To = destination_to_hours,
Driver = as.character(courier__name),
Cluster = as.character(fleet__name)) %>%
dplyr::filter(Date > dates[1] & Date <= dates[2] & Cluster == district) %>%
dplyr::select(-To, -Cluster) %>%
group_by(Date, From) %>%
dplyr::summarise(nb_driver = n_distinct(Driver)) %>%
ungroup()
nb_driver <- as.data.frame(nb_driver)
nb_driver <- one2two_hour_bins(nb_driver, method = max)
nb_driver <- mutate(nb_driver,
Weekday = factor(weekdays(Date),
levels = c("Monday", "Tuesday", "Wednesday",
"Thursday", "Friday", "Saturday", "Sunday"))
)
# # prepare the interactions data to be joined
# inter_hourly <- dplyr::select(inter_hourly,
# Date,
# From,
# Interaction = one_of(district)) %>%
# filter(Date > dates[1] &
# Date <= dates[2])
# inter_hourly <- one2two_hour_bins(inter_hourly, method = sum)
#
# # join with the nb of interactions
# nb_driver <- left_join(x = inter_hourly,
# y = nb_driver)
return(nb_driver)
}
# -------------------------------- #
# ... marketing
# format_marketing <- function(df_marketing_raw, dates, features = NULL){
# df_marketing <- df_marketing_raw
# # format dates
# df_marketing$date <- as.Date(df_marketing$date, format = "%m/%d/%y")
# # filter on the relevant dates
# df_marketing <- df_marketing[df_marketing$Date >= dates[1] &
# df_marketing$Date <= dates[2],]
# # select the relevant features
# if (is.null(features)){
# df_marketing <- df_marketing[,which(colnames(df_marketing) %in%
# c("sea_clicks","facebook_spend","emails_sent"))] #"date",
# } else {
# df_marketing <- df_marketing[,which(colnames(df_marketing) %in% features)] #"date",
# }
# return(df_marketing)
# }
#
# format_marketing_2 <- function(df_marketing_raw, dates, features = NULL){
# df_marketing <- df_marketing_raw
# # change the names
# colnames(df_marketing) <- c("Date", "Channel", "Orders")
# # format dates
# df_marketing$date <- as.Date(df_marketing$date, format = "%m/%d/%y")
# # filter on the relevant dates
# df_marketing <- df_marketing[df_marketing$Date >= dates[1] &
# df_marketing$Date <= dates[2],]
# # expand the Channel factor
# df_marketing <- valumy(df_marketing, "Channel", "Orders")
#
# # select the relevant features
# if (is.null(features)){
# df_marketing <- df_marketing[,which(colnames(df_marketing) %in%
# c("SEM","SEM","Facebook","Twitter"))] #"date",
# } else {
# df_marketing <- df_marketing[,which(colnames(df_marketing) %in% features)] #"date",
# }
# return(df_marketing)
# }
# -------------------------------- #
# ... customers
# format_customers <- function(df_customers_raw, dates){
# df_customers <- df_customers_raw
# # replace NAs by 0s
# df_customers[is.na(df_customers$New), "New"] <- 0
# df_customers[is.na(df_customers$Returning), "Returning"] <- 0
# # rename the columns
# colnames(df_customers) <- c("Date", "Valid.Order", "Total.Order", "Cancel.Rate", "New", "Returning")
# # format dates
# df_customers$Date <- as.Date(df_customers$Date, format = "%m/%d/%y")
# # filter the dates
# df_customers <- df_customers[df_customers$Date >= dates[1] &
# df_customers$Date <= dates[2], ]
# # select useful columns
# df_customers <- df_customers[, c("Date", "Cancel.Rate", "New", "Returning")]
#
# return(df_customers)
# }
|
/Demand_format.R
|
no_license
|
madhudharmav/DemandPrediction_ZJ
|
R
| false
| false
| 16,159
|
r
|
# ----------------------------------------------------------------------- #
# DEMAND PREDICTION PIPELINE - format the data frames
# Antoine - Version 2.0, 11.07.2016
# ----------------------------------------------------------------------- #
# Description:
# this script contains 2 types of functions.
#
# 1. general functions for formatting that can be called by all other scripts
#
# 2. specific functions for formatting individual data frames,
# regarding interactions, idle time, nb of drivers ...
# ----------------------------------------------------------------------- #
require(dplyr)
require(dummies)
# ----------------------------------------------------------------------- #
# 1. General functions -----
# -------------------------------- #
# expand a factor column into one column per level, carrying the associated values
valumy <- function(df, name_col_factor, name_col_value){
# list all values of the factor
all_factors <- unique(df[, c(name_col_factor)])
# create dummy indexes df for each value of the factor
  dummy_factors <- dummy.data.frame(data = df, names = name_col_factor, omit.constants = FALSE)
colnames(dummy_factors) <- gsub(name_col_factor, "", colnames(dummy_factors))
# update the values
for (i_factor in all_factors) {
dummy_factors[, c(i_factor)] <- dummy_factors[, c(i_factor)]*dummy_factors[, name_col_value]
}
return(dummy_factors)
}
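As a quick illustration of `valumy`, a sketch with toy data (the column names `City` and `Interaction` are just examples, mirroring how the function is called later in this file):

```r
# toy input: one value column keyed by a factor column
df <- data.frame(Date = as.Date("2016-07-11") + c(0, 0, 1),
                 City = c("London", "Berlin", "London"),
                 Interaction = c(10, 4, 7),
                 stringsAsFactors = FALSE)
# valumy(df, "City", "Interaction") expands "City" into one column per city
# ("Berlin", "London"), each holding the Interaction value for rows of that
# city and 0 elsewhere; the original Interaction column is kept unchanged.
```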
# -------------------------------- #
# take care of the change of hour from summer time to winter time!
summer_time_fix <- function(df, col_date = "Date", col_hour = "Hour",
start_summer = "2016-03-27", end_summer = "2016-10-30") {
require(dplyr)
# format as character the hour
df[, col_hour] <- as.vector(df[, col_hour])
# add one hour for all hours between the two dates
idx_dates <- which(df[, col_date] >= start_summer & df[, col_date] <= end_summer)
# get the hour from df
hours <- unlist(
strsplit(df[idx_dates, col_hour], ":"))[seq(1, 2*length(df[idx_dates, col_hour]), 2)]
hours <- as.numeric(hours) + 1 # add one hour in the summer season
# get the minutes
minutes <- unlist(
strsplit(df[idx_dates, col_hour], ":"))[seq(2, 2*length(df[idx_dates, col_hour]), 2)]
# put back in the df
df[idx_dates, col_hour] <- paste(hours, minutes, sep = ":")
return(df)
}
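A minimal usage sketch of `summer_time_fix` on toy data (the default column names `Date` and `Hour` are assumed):

```r
df <- data.frame(Date = as.Date(c("2016-01-05", "2016-07-11")),
                 Hour = c("10:00", "10:00"),
                 stringsAsFactors = FALSE)
df_fixed <- summer_time_fix(df)
# the January row is outside the summer window and stays at "10:00";
# the July row falls inside it and is shifted to "11:00"
```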
# -------------------------------- #
# change from hourly to bi-hourly format
one2two_hour_bins <- function(df_inter_hourly, method = sum){
# combine the 30min timeslots in 1h
df_inter_hourly <- df_inter_hourly[, which(!colnames(df_inter_hourly) %in% "To")]
df_inter_hourly$From <- gsub(":30", ":00", df_inter_hourly$From)
df_inter_hourly <- aggregate(formula = . ~ Date + From, data = df_inter_hourly, FUN = sum)
# Add zeros for times when closed to have consistent frequency !!
df_inter_hourly <- complete_hours(df_inter_hourly)
# aggregate by 2 hours
df_inter_hourly <- aggregate_odd_hours(df_inter_hourly, method = method)
# sort by increasing date and hour
df_inter_hourly <- arrange(df_inter_hourly, Date, From)
return(df_inter_hourly)
}
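A sketch of the full hourly-to-bi-hourly pipeline on toy data (the district column `London` is illustrative):

```r
df <- data.frame(Date = as.Date("2016-07-11"),
                 From = c("12:00", "12:30", "13:00"),
                 To   = c("12:30", "13:00", "13:30"),
                 London = c(3, 2, 5),
                 stringsAsFactors = FALSE)
out <- one2two_hour_bins(df, method = sum)
# "To" is dropped, the half-hours merge into full hours, closed hours are
# filled with 0, and the odd hour folds into its 2h bin: the "12:00" bin
# holds 3 + 2 + 5 = 10 and the 11 other bins hold 0
```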
# -------------------------------- #
# add zeros for closed hours to have a consistent frequency
complete_hours <- function(df_hourly) {
# create all_hours
all_hours <- format(as.POSIXct(as.character(0:23), format = "%H"), format = "%H:%M")
# create an empty df_hourly with full hours
new_df_hourly <- expand.grid(Date = sort(unique(df_hourly$Date)),
From = all_hours, stringsAsFactors = FALSE)
# left join to fill in all the known hours
new_df_hourly <- left_join(new_df_hourly, df_hourly)
# replace NAs by 0
new_df_hourly[is.na(new_df_hourly)] <- 0
return(new_df_hourly)
}
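A small sketch of what `complete_hours` produces (toy data; `London` is an illustrative district column):

```r
df <- data.frame(Date = as.Date("2016-07-11"),
                 From = c("12:00", "14:00"),
                 London = c(3, 5),
                 stringsAsFactors = FALSE)
out <- complete_hours(df)
# out has 24 rows for that day ("00:00" ... "23:00"), with London == 0
# for every hour missing from the input
```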
# -------------------------------- #
# normalize the hourly data by the daily sum
normalize_profiles <- function(df_hourly, df_daily, method = "MinMax"){  # NB: 'method' is currently unused
all_districts <- colnames(df_hourly)[which(!colnames(df_hourly) %in% c("Date", "From", "To", "Weekdays"))]
# loop on days
for (day in unique(df_hourly$Date)){
for (district in all_districts) {
# divide the hourly data for one day, by the sum for that day
df_hourly[df_hourly$Date == day,
district] <- df_hourly[df_hourly$Date == day, district] / as.numeric(df_daily[df_daily$Date == day, district])
}
}
return(df_hourly)
}
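A toy check of `normalize_profiles` (assumed single-district input; column names are illustrative):

```r
df_hourly <- data.frame(Date = as.Date("2016-07-11"),
                        From = c("12:00", "14:00"),
                        London = c(3, 7),
                        stringsAsFactors = FALSE)
df_daily <- data.frame(Date = as.Date("2016-07-11"), London = 10)
normalize_profiles(df_hourly, df_daily)
# London becomes c(0.3, 0.7): each hourly value as a share of the daily total
```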
# -------------------------------- #
# aggregate to get 2-hour bins
aggregate_odd_hours <- function(df, method = sum, agg = TRUE){
# get the hour from the time
df$From <- format(as.POSIXct(df$From, format = "%H:%M"), format = "%H")
df$From <- as.numeric(df$From)
  # map odd hours down to the preceding even hour (e.g. 13:00 -> 12:00)
# df$From <- ifelse(df$From %% 2 ,
# df$From,
# df$From - 1)
df$From <- ifelse(df$From %% 2 ,
df$From-1,
df$From)
# transform to time format
df$From <- format(as.POSIXct(as.character(df$From), format = "%H"), format = "%H:%M")
# aggregate by date and hours
if (agg) {
df <- aggregate(formula = . ~ Date + From, data = df, FUN = method)
}
df <- arrange(df, Date, From)
return(df)
}
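A minimal sketch of `aggregate_odd_hours` on toy data (district column `London` is illustrative):

```r
df <- data.frame(Date = as.Date("2016-07-11"),
                 From = c("12:00", "13:00", "14:00"),
                 London = c(1, 2, 4),
                 stringsAsFactors = FALSE)
aggregate_odd_hours(df, method = sum)
# 13:00 folds into the 12:00 bin, giving London == 3 there and 4 at 14:00
```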
# ----------------------------------------------------------------------- #
# 2. specific functions -----
# -------------------------------- #
# ... interactions
# !! for CW before 18 (excluded)
format_interaction <- function(df_interactions_raw, dates, daily = FALSE){
df_interactions <- df_interactions_raw
  # format the dates
df_interactions$Date <- as.Date(df_interactions$Date, format = "%d.%m.%y")
# filter on the relevant dates
df_interactions <- df_interactions[df_interactions$Date >= dates[1] &
df_interactions$Date <= dates[2],]
# aggregate the data if daily = TRUE
if (daily){
df_interactions <- df_interactions[,-which(colnames(df_interactions) == "Hour")]
df_interactions <- aggregate(data = df_interactions, . ~ Date, sum)
}
return(df_interactions)
}
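A usage sketch of `format_interaction` (toy data; the `%d.%m.%y` date format and an `Hour` column are assumed from the code above):

```r
raw <- data.frame(Date = c("11.07.16", "11.07.16", "12.07.16"),
                  Hour = c("12:00", "14:00", "12:00"),
                  London = c(3, 4, 5),
                  stringsAsFactors = FALSE)
format_interaction(raw, dates = as.Date(c("2016-07-11", "2016-07-11")), daily = TRUE)
# only the 2016-07-11 rows survive the date filter, and with daily = TRUE
# they are summed into a single row with London == 7
```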
# !! for CW after 18 (included)
format_interaction_2 <- function(df_interactions_raw, dates, daily = FALSE){
df_interactions <- df_interactions_raw
  # rename the columns
colnames(df_interactions) <- c("Date", "From", "To", "City", "Cluster", "PU", "DO")
# replace NAs by 0s
df_interactions$PU[is.na(df_interactions$PU)] <- 0
df_interactions$DO[is.na(df_interactions$DO)] <- 0
  # calculate the number of interactions per timeslot
df_interactions <- mutate(df_interactions, Interaction = PU + DO)
# format
df_interactions$Date <- as.Date(df_interactions$Date, format = "%m/%d/%y")
# filter on the dates
df_interactions <- df_interactions[df_interactions$Date >= dates[1] &
df_interactions$Date <= dates[2],]
# expand by city
df_new <- valumy(df_interactions, "City", "Interaction")
# expand by cluster
df_new <- valumy(df_new, "Cluster", "Interaction")
#... aggregate per day and timeslots
df_new <- aggregate(data = df_new, . ~ Date + From + To, sum)
df_new <- df_new[, which(!colnames(df_new) %in% c("To"))]
# aggregate the data if daily = TRUE
if (daily){
# TODO : use select instead of hard code
# df_new <- aggregate(data = select(df_new, -one_of(c("From", "To")) ), . ~ Date, sum)
df_new <- aggregate(data = df_new[, which(!colnames(df_new) %in% c("From", "To"))],
. ~ Date, sum)
}
# drop the PU, DO and Interaction columns
df_new <- df_new[, which(!colnames(df_new) %in% c("PU", "DO", "Interaction", "To"))]
return(df_new)
}
# -------------------------------- #
# ... holidays
format_holidays <- function(holidays_raw, dates){
holidays <- holidays_raw
# rename the columns
# colnames(holidays) <- c("Date", "London", "Central",
# "North.East", "Southwark", "Victoria",
# "West", "South.West", "Berlin",
# "Berlin.Ost", "Westen")
colnames(holidays) <- c("Date", "London", "Berlin")
# format dates
holidays$Date <- as.Date(holidays$Date, format = "%d.%m.%Y")
# filter on relevant dates
holidays <- holidays[holidays$Date >= dates[1] &
holidays$Date <= dates[2], ]
return(holidays)
}
# -------------------------------- #
# ... idle_time
format_idletime <- function(idle_time_raw, dates, daily = FALSE){
require(dplyr)
idle_time <- idle_time_raw
idle_time <- dplyr::select(idle_time,
Date = Days.in.forecast__completedAt__date,
From = destination_from_hours,
To = destination_to_hours,
City = destination__externalId,
Cluster = fleet__name,
Idle_time = final_idletime) %>%
mutate(Date = as.Date(Date, format = "%m/%d/%y")) %>%
dplyr::filter(Date >= dates[1] &
Date <= dates[2])
  idle_time$City <- substr(idle_time$City, 1, 2)
  idle_time$City[idle_time$City == "DE"] <- "Berlin"
  idle_time$City[idle_time$City == "GB"] <- "London"
# expand by city
idle_time <- valumy(idle_time, "City", "Idle_time")
# expand by cluster
idle_time <- valumy(idle_time, "Cluster", "Idle_time")
#... aggregate per day and timeslots
idle_time <- aggregate(data = idle_time, . ~ Date + From + To, sum)
idle_time <- idle_time[, which(!colnames(idle_time) %in% c("To", "Idle_time"))]
# aggregate the data if daily = TRUE
if (daily){
idle_time <- aggregate(data = idle_time[, which(!colnames(idle_time) %in% c("From", "To"))],
. ~ Date,
sum)
}
return(idle_time)
}
format_idletime_2 <- function(idle_time_raw, dates, daily = FALSE) {
require(dplyr)
idle_time <- idle_time_raw %>%
transmute(Date = as.Date(slot, format = "%Y-%m-%d"),
From = localHoursFrom,
City = as.factor(city),
Cluster = as.factor(fleet),
idletime = ifelse(is.na(time),
0,
time)) %>%
filter(Date >= dates[1] &
Date <= dates[2]) %>%
    # /!\ handle non-whole hours by snapping "From" to the "xx:00" format
mutate(From = gsub(":..", ":00", From))
# /!\
# expand by city
idle_time <- valumy(df = idle_time,
name_col_factor = "City",
name_col_value = "idletime")
# expand by cluster
idle_time <- valumy(df = idle_time,
name_col_factor = "Cluster",
name_col_value = "idletime")
  # drop the total idle time column and aggregate to remove the duplicates
if(! daily) {
idle_time <- aggregate(data = dplyr::select(idle_time, -idletime),
. ~ Date + From,
sum) %>%
arrange(Date, From)
} else {
idle_time <- aggregate(data = dplyr::select(idle_time, -idletime, -From),
. ~ Date,
sum) %>%
arrange(Date)
}
return(idle_time)
}
# -------------------------------- #
# ... nb of drivers per hour
format_nb_drivers <- function(nb_driver_raw, dates, district) {
# calculate the nb of drivers bi-hourly
nb_driver <- transmute(nb_driver_raw,
Date = as.Date(Days.in.forecast__completedAt__date,
format = "%m/%d/%y"),
From = gsub(":30", ":00", destination_from_hours),
To = destination_to_hours,
Driver = as.character(courier__name),
Cluster = as.character(fleet__name)) %>%
dplyr::filter(Date > dates[1] & Date <= dates[2] & Cluster == district) %>%
dplyr::select(-To, -Cluster) %>%
group_by(Date, From) %>%
dplyr::summarise(nb_driver = n_distinct(Driver)) %>%
ungroup()
nb_driver <- as.data.frame(nb_driver)
nb_driver <- one2two_hour_bins(nb_driver, method = max)
nb_driver <- mutate(nb_driver,
Weekday = factor(weekdays(Date),
levels = c("Monday", "Tuesday", "Wednesday",
"Thursday", "Friday", "Saturday", "Sunday"))
)
# # prepare the interactions data to be joined
# inter_hourly <- dplyr::select(inter_hourly,
# Date,
# From,
# Interaction = one_of(district)) %>%
# filter(Date > dates[1] &
# Date <= dates[2])
# inter_hourly <- one2two_hour_bins(inter_hourly, method = sum)
#
# # join with the nb of interactions
# nb_driver <- left_join(x = inter_hourly,
# y = nb_driver)
return(nb_driver)
}
# -------------------------------- #
# ... marketing
# format_marketing <- function(df_marketing_raw, dates, features = NULL){
# df_marketing <- df_marketing_raw
# # format dates
# df_marketing$date <- as.Date(df_marketing$date, format = "%m/%d/%y")
# # filter on the relevant dates
# df_marketing <- df_marketing[df_marketing$Date >= dates[1] &
# df_marketing$Date <= dates[2],]
# # select the relevant features
# if (is.null(features)){
# df_marketing <- df_marketing[,which(colnames(df_marketing) %in%
# c("sea_clicks","facebook_spend","emails_sent"))] #"date",
# } else {
# df_marketing <- df_marketing[,which(colnames(df_marketing) %in% features)] #"date",
# }
# return(df_marketing)
# }
#
# format_marketing_2 <- function(df_marketing_raw, dates, features = NULL){
# df_marketing <- df_marketing_raw
# # change the names
# colnames(df_marketing) <- c("Date", "Channel", "Orders")
# # format dates
# df_marketing$date <- as.Date(df_marketing$date, format = "%m/%d/%y")
# # filter on the relevant dates
# df_marketing <- df_marketing[df_marketing$Date >= dates[1] &
# df_marketing$Date <= dates[2],]
# # expand the Channel factor
# df_marketing <- valumy(df_marketing, "Channel", "Orders")
#
# # select the relevant features
# if (is.null(features)){
# df_marketing <- df_marketing[,which(colnames(df_marketing) %in%
# c("SEM","SEM","Facebook","Twitter"))] #"date",
# } else {
# df_marketing <- df_marketing[,which(colnames(df_marketing) %in% features)] #"date",
# }
# return(df_marketing)
# }
# -------------------------------- #
# ... customers
# format_customers <- function(df_customers_raw, dates){
# df_customers <- df_customers_raw
# # replace NAs by 0s
# df_customers[is.na(df_customers$New), "New"] <- 0
# df_customers[is.na(df_customers$Returning), "Returning"] <- 0
# # rename the columns
# colnames(df_customers) <- c("Date", "Valid.Order", "Total.Order", "Cancel.Rate", "New", "Returning")
# # format dates
# df_customers$Date <- as.Date(df_customers$Date, format = "%m/%d/%y")
# # filter the dates
# df_customers <- df_customers[df_customers$Date >= dates[1] &
# df_customers$Date <= dates[2], ]
# # select useful columns
# df_customers <- df_customers[, c("Date", "Cancel.Rate", "New", "Returning")]
#
# return(df_customers)
# }
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tfunctions.r
\name{tprod}
\alias{tprod}
\title{Tucker product}
\usage{
tprod(A, B, modes = 1:length(B))
}
\arguments{
\item{A}{a real valued array}
\item{B}{a list of matrices, the second dimension of each matching the
dimensions of A}
\item{modes}{a vector giving which modes of A should be multiplied}
}
\description{
Multiply an array by a list of matrices along each mode
}
\details{
This function multiplies an array along each mode.
}
\examples{
m<-c(6,5,4)
A<-rsan(c(6,5,4))
B<-list() ; for(k in 1:3) { B[[k]]<-rsan(c(3,dim(A)[[k]])) }
}
\author{
Peter Hoff
}
\keyword{arrays}
|
/man/tprod.Rd
|
no_license
|
MikeKozelMSU/mcmcFunc
|
R
| false
| true
| 664
|
rd
|
|
#setwd('~/Dropbox/arber/')
#=====================================================================
#===============================DATA INPUT============================
#=====================================================================
#load expression table
expTable = read.delim('/grail/projects/arber2/cufflinks/arber_rna_cuffnorm/output/arber_rna_all_fpkm_exprs_raw.txt', header = TRUE, sep ='\t')
#load the list of differential genes determined previously
diffGenes = read.delim('/grail/projects/arber2/tables/clustergram_genes_1.5_fold_change.txt')
diffGenesList = as.character(diffGenes[,1])
diffGenesList = sort(diffGenesList)
expMatrix = as.matrix(expTable)
#create matrix of differential gene expression
diffMatrix = matrix(nrow=length(diffGenesList),ncol=ncol(expMatrix))
rownames(diffMatrix) = diffGenesList
colnames(diffMatrix) = colnames(expMatrix)
for(i in 1:length(diffGenesList)){
geneName = diffGenesList[i]
expRow = which(rownames(expMatrix)==geneName)
diffMatrix[i,] = expMatrix[expRow,]
}
#now we want to make a log2 row median normalized expression matrix
medianVector = apply(diffMatrix,1,median)
medianMatrix = log2(diffMatrix/medianVector)
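The row-median normalization above can be checked by hand on a tiny matrix (R recycles the median vector row-wise here because its length equals `nrow(m)`):

```r
m <- matrix(c(2, 4, 8,
              1, 1, 4), nrow = 2, byrow = TRUE)
med <- apply(m, 1, median)   # c(4, 1)
log2(m / med)
# row 1 -> -1, 0, 1  (2/4, 4/4, 8/4 on a log2 scale)
# row 2 ->  0, 0, 2
```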
#===================================================================
#======================CLUSTERING EXPRESSION========================
#===================================================================
expCorDist = as.dist(1-cor(t(medianMatrix)))
expHClust = hclust(expCorDist)
expOrder = expHClust$order
#===================================================================
#=========================MAKING HEATMAPS===========================
#===================================================================
#Set the color spectrum
colorSpectrum <- colorRampPalette(c("blue","white","red"))(100)
#colorSpectrum <- colorRampPalette(c("white","red"))(100)
#setting a color data range
#minValue <- quantile(clusterMatrix,na.rm=TRUE,prob=0.025,names=FALSE)
#maxValue <- quantile(clusterMatrix,na.rm=TRUE,prob=0.975,names=FALSE)
minValue= -2
maxValue = 2
color_cuts <- seq(minValue,maxValue,length=100)
#color_cuts <- seq(-1,1,length=100)
color_cuts <- c(min(medianMatrix), color_cuts,max(medianMatrix)) # this catches the full dynamic range
#add one extra min color to even out sampling
colorSpectrum <- c(colorSpectrum[1],colorSpectrum) #this is something stupid in R that you always have to do
#Making png
#clusterPNGFile = paste(outputFolder,genome,'_',analysisName,'_2d_cluster.png',sep='')
png(filename = '/grail/projects/arber2/Expression_Heatmap_clusters4_5.png',width = 800,height =800)
layout(matrix(data=c(1,1,1,1,1,2,2),ncol= 7))
image(1:ncol(medianMatrix),1:nrow(medianMatrix),t(medianMatrix[expOrder,]),breaks=color_cuts,col=colorSpectrum,xaxt="n",yaxt="n",ylab='',xlab='')
image(1:2,color_cuts[2:101],t(matrix(data=color_cuts[2:101],ncol=2,nrow=100)),breaks=color_cuts,col=colorSpectrum,xaxt="n",xlab="",ylab="Log2 fold vs median")
dev.off()
#===================================================================
#===========================DENDROGRAMS=============================
#===================================================================
#plot(expHClust)
clusterCut <- cutree(expHClust, 10)
#plot(expHClust, h = -.5)
clusterList <-c(clusterCut)
#===================================================================
#===========================ZOOM IN 4 & 5===========================
#===================================================================
#add list of clusters 1 thru 10 to diffMatrix
diffMatrix <- cbind(diffMatrix,clusterList)
#isolate clusters of interest (select by the appended column name, not a hard-coded index)
cluster2genes_rows <- which(diffMatrix[, "clusterList"] == 2)
cluster4genes_rows <- which(diffMatrix[, "clusterList"] == 4)
cluster1genes_rows <- which(diffMatrix[, "clusterList"] == 1)
cluster5genes_rows <- which(diffMatrix[, "clusterList"] == 5)
#isolate gene names
cluster2genes <- rownames(diffMatrix[cluster2genes_rows,])
cluster4genes <- rownames(diffMatrix[cluster4genes_rows,])
cluster1genes <- rownames(diffMatrix[cluster1genes_rows,])
cluster5genes <- rownames(diffMatrix[cluster5genes_rows,])
all_genes = c(cluster1genes_rows, cluster2genes_rows,cluster5genes_rows,cluster4genes_rows)
clusterTable = diffMatrix[all_genes,]
#creates a table of just the selected clusters and their expression,
#relabelling clusters 1, 2, 4, 5 as C, A, B, D (this coerces the matrix to character)
for(i in 1:length(clusterTable[,1])){
  if(clusterTable[i, "clusterList"]==1){
    clusterTable[i, "clusterList"]='C'
  }
  if(clusterTable[i, "clusterList"]==2){
    clusterTable[i, "clusterList"]='A'
  }
  if(clusterTable[i, "clusterList"]==4){
    clusterTable[i, "clusterList"]='B'
  }
  if(clusterTable[i, "clusterList"]==5){
    clusterTable[i, "clusterList"]='D'
  }
}
write.table(clusterTable, file = "/grail/projects/arber2/cluster_A-D_members_and_expr.txt", append = FALSE, quote = FALSE, sep = "\t", eol = '\n', na = "NA", dec = '.', row.names = TRUE, col.names = TRUE,
qmethod = c("escape", "double"), fileEncoding = "")
#create list output for cluster1 genes
fileConn<-file("/grail/projects/arber2/cluster_1_genes_list.txt")
writeLines(cluster1genes, fileConn)
close(fileConn)
# create list output for cluster2 genes
fileConn<-file("/grail/projects/arber2/cluster_2_genes_list.txt")
writeLines(cluster2genes, fileConn)
close(fileConn)
# create list output for cluster4 genes
fileConn<-file("/grail/projects/arber2/cluster_4_genes_list.txt")
writeLines(cluster4genes, fileConn)
close(fileConn)
#create list output for cluster5 genes
fileConn<-file("/grail/projects/arber2/cluster_5_genes_list.txt")
writeLines(cluster5genes, fileConn)
close(fileConn)
|
/arber_heatmaps.R
|
no_license
|
linlabbcm/arber_ep_tpo
|
R
| false
| false
| 5,429
|
r
|
#setwd('~/Dropbox/arber/')
#=====================================================================
#===============================DATA INPUT============================
#=====================================================================
#load expression table
expTable = read.delim('/grail/projects/arber2/cufflinks/arber_rna_cuffnorm/output/arber_rna_all_fpkm_exprs_raw.txt', header = TRUE, sep ='\t')
#load list of differential genes determines
diffGenes = read.delim('/grail/projects/arber2/tables/clustergram_genes_1.5_fold_change.txt')
diffGenesList = as.character(diffGenes[,1])
diffGenesList = sort(diffGenesList)
expMatrix = as.matrix(expTable)
#create matrix of differential gene expression
diffMatrix = matrix(nrow=length(diffGenesList),ncol=ncol(expMatrix))
rownames(diffMatrix) = diffGenesList
colnames(diffMatrix) = colnames(expMatrix)
for(i in 1:length(diffGenesList)){
geneName = diffGenesList[i]
expRow = which(rownames(expMatrix)==geneName)
diffMatrix[i,] = expMatrix[expRow,]
}
#now we want to make a log2 row median normalized expression matrix
medianVector = apply(diffMatrix,1,median)
medianMatrix = log2(diffMatrix/medianVector)
#===================================================================
#======================CLUSTERING EXPRESSION========================
#===================================================================
expCorDist = as.dist(1-cor(t(medianMatrix)))
expHClust = hclust(expCorDist)
expOrder = expHClust$order
#===================================================================
#=========================MAKING HEATMAPS===========================
#===================================================================
#Set the color spectrum
colorSpectrum <- colorRampPalette(c("blue","white","red"))(100)
#colorSpectrum <- colorRampPalette(c("white","red"))(100)
#setting a color data range
#minValue <- quantile(clusterMatrix,na.rm=TRUE,prob=0.025,names=FALSE)
#maxValue <- quantile(clusterMatrix,na.rm=TRUE,prob=0.975,names=FALSE)
minValue= -2
maxValue = 2
color_cuts <- seq(minValue,maxValue,length=100)
#color_cuts <- seq(-1,1,length=100)
color_cuts <- c(min(medianMatrix), color_cuts,max(medianMatrix)) # this catches the full dynamic range
#add one extra min color to even out sampling
colorSpectrum <- c(colorSpectrum[1],colorSpectrum) #this is something stupid in R that you always have to do
#Making png
#clusterPNGFile = paste(outputFolder,genome,'_',analysisName,'_2d_cluster.png',sep='')
png(filename = '/grail/projects/arber2/Expression_Heatmap_clusters4_5.png',width = 800,height =800)
layout(matrix(data=c(1,1,1,1,1,2,2),ncol= 7))
image(1:ncol(medianMatrix),1:nrow(medianMatrix),t(medianMatrix[expOrder,]),breaks=color_cuts,col=colorSpectrum,xaxt="n",yaxt="n",ylab='',xlab='')
image(1:2,color_cuts[2:101],t(matrix(data=color_cuts[2:101],ncol=2,nrow=100)),breaks=color_cuts,col=colorSpectrum,xaxt="n",xlab="",ylab="Log2 fold vs median")
dev.off()
#===================================================================
#===========================DENDROGRAMS=============================
#===================================================================
#plot(expHClust)
clusterCut <- cutree(expHClust, 10)
#plot(expHClust, h = -.5)
clusterList <-c(clusterCut)
#===================================================================
#===========================ZOOM IN 4 & 5===========================
#===================================================================
#add list of clusters 1 thru 10 to diffMatrix
diffMatrix <- cbind(diffMatrix,clusterList)
#isolate clusters of interest
cluster2genes_rows <- which(diffMatrix[,9] == 2)
cluster4genes_rows <- which(diffMatrix[,9] == 4)
cluster1genes_rows <- which(diffMatrix[,9] == 1)
cluster5genes_rows <- which(diffMatrix[,9] == 5)
#isolate gene names
cluster2genes <- rownames(diffMatrix[cluster2genes_rows,])
cluster4genes <- rownames(diffMatrix[cluster4genes_rows,])
cluster1genes <- rownames(diffMatrix[cluster1genes_rows,])
cluster5genes <- rownames(diffMatrix[cluster5genes_rows,])
all_genes = c(cluster1genes_rows, cluster2genes_rows,cluster5genes_rows,cluster4genes_rows)
clusterTable = diffMatrix[all_genes,]
#creates a table of just selected clusters and expression
for(i in 1:length(clusterTable[,1])){
if(clusterTable[i,9]==1){
clusterTable[i,9]='C'
}
if(clusterTable[i,9]==2){
clusterTable[i,9]='A'
}
if(clusterTable[i,9]==4){
clusterTable[i,9]='B'
}
if(clusterTable[i,9]==5){
clusterTable[i,9]='D'
}
}
write.table(clusterTable, file = "/grail/projects/arber2/cluster_A-D_members_and_expr.txt", append = FALSE, quote = FALSE, sep = "\t", eol = '\n', na = "NA", dec = '.', row.names = TRUE, col.names = TRUE,
qmethod = c("escape", "double"), fileEncoding = "")
#create list output for cluster1 genes
fileConn<-file("/grail/projects/arber2/cluster_1_genes_list.txt")
writeLines(cluster1genes, fileConn)
close(fileConn)
# create list output for cluster2 genes
fileConn<-file("/grail/projects/arber2/cluster_2_genes_list.txt")
writeLines(cluster2genes, fileConn)
close(fileConn)
# create list output for cluster4 genes
fileConn<-file("/grail/projects/arber2/cluster_4_genes_list.txt")
writeLines(cluster4genes, fileConn)
close(fileConn)
#create list output for cluster5 genes
fileConn<-file("/grail/projects/arber2/cluster_5_genes_list.txt")
writeLines(cluster5genes, fileConn)
close(fileConn)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/drive_functions.R
\name{files.update}
\alias{files.update}
\title{Updates a file's metadata and/or content with patch semantics.}
\usage{
files.update(File, fileId, addParents = NULL, keepRevisionForever = NULL,
ocrLanguage = NULL, removeParents = NULL, supportsTeamDrives = NULL,
useContentAsIndexableText = NULL)
}
\arguments{
\item{File}{The \link{File} object to pass to this method}
\item{fileId}{The ID of the file}
\item{addParents}{A comma-separated list of parent IDs to add}
\item{keepRevisionForever}{Whether to set the 'keepForever' field in the new head revision}
\item{ocrLanguage}{A language hint for OCR processing during image import (ISO 639-1 code)}
\item{removeParents}{A comma-separated list of parent IDs to remove}
\item{supportsTeamDrives}{Whether the requesting application supports Team Drives}
\item{useContentAsIndexableText}{Whether to use the uploaded content as indexable text}
}
\description{
Autogenerated via \code{\link[googleAuthR]{gar_create_api_skeleton}}
}
\details{
Authentication scopes used by this function are:
\itemize{
\item https://www.googleapis.com/auth/drive
\item https://www.googleapis.com/auth/drive.appdata
\item https://www.googleapis.com/auth/drive.file
\item https://www.googleapis.com/auth/drive.metadata
\item https://www.googleapis.com/auth/drive.scripts
}
Set \code{options(googleAuthR.scopes.selected = c("https://www.googleapis.com/auth/drive", "https://www.googleapis.com/auth/drive.appdata", "https://www.googleapis.com/auth/drive.file", "https://www.googleapis.com/auth/drive.metadata", "https://www.googleapis.com/auth/drive.scripts"))}
Then run \code{googleAuthR::gar_auth()} to authenticate.
See \code{\link[googleAuthR]{gar_auth}} for details.
}
\seealso{
\href{https://developers.google.com/drive/}{Google Documentation}
Other File functions: \code{\link{File.appProperties}},
\code{\link{File.capabilities}},
\code{\link{File.contentHints.thumbnail}},
\code{\link{File.contentHints}},
\code{\link{File.imageMediaMetadata.location}},
\code{\link{File.imageMediaMetadata}},
\code{\link{File.properties}},
\code{\link{File.videoMediaMetadata}},
\code{\link{File}}, \code{\link{files.copy}},
\code{\link{files.create}}
}
| /googledrivev3.auto/man/files.update.Rd | permissive | GVersteeg/autoGoogleAPI | R | 2,289 bytes | rd |
elefanGaModule <- function(input, output, session) {
elefan_ga <- reactiveValues()
elefanGaUploadVreResult <- reactiveValues()
inputElefanGaData <- reactiveValues()
fileGaState <- reactiveValues(
upload = NULL
)
elefanGaFileData <- reactive({
if (is.null(input$fileGa) || is.null(fileGaState$upload)) {
return(NA)
}
contents <- read_elefan_csv(input$fileGa$datapath, input$elefanGaDateFormat)
print(input$fileGa)
if (is.null(contents$catch)) {
shinyjs::disable("go_ga")
showModal(modalDialog(
title = "Error",
if(!is.null(contents$checkDelim)){
if(contents$checkDelim=="not ok"){"Please ensure that your .csv file delimiter is a comma ','" }
}else if(!is.null(contents$checkDec)){
if(contents$checkDec=="not point"){"Please ensure that decimals are separated using points '.' and that all values are numeric"
}else if(contents$checkName=="colname error"){"Please ensure that your first column is named 'midLength'"
} else{"Input file seems invalid"}},
easyClose = TRUE,
footer = NULL
))
return (NULL)
} else {
if(is.Date(contents$dates)&&is.unsorted(contents$dates)){
shinyjs::disable("go_ga")
showModal(modalDialog(
title = "Error",
"Please ensure that your dates are input in chronological order from left to right. If dates are in the right order select the date format coresponding to your file.",
easyClose = TRUE,
footer = NULL
))
return(NULL)
} else {
shinyjs::enable("go_ga")
return (contents)
}
}
})
observeEvent(input$fileGa, {
fileGaState$upload <- 'uploaded'
inputElefanGaData$data <- elefanGaFileData()
})
observeEvent(input$elefanGaDateFormat, {
inputElefanGaData$data <- elefanGaFileData()
})
observeEvent(input$go_ga, {
js$showComputing()
js$removeBox("box_elefan_ga_results")
js$disableAllButtons()
result = tryCatch({
ds <- lfqModify(lfqRestructure(inputElefanGaData$data), bin_size = input$ELEFAN_GA_binSize)
flog.info("Starting Elegan GA computation")
res <- run_elefan_ga(ds,binSize = input$ELEFAN_GA_binSize, seasonalised = input$ELEFAN_GA_seasonalised,
low_par = list(Linf = input$ELEFAN_GA_lowPar_Linf, K = input$ELEFAN_GA_lowPar_K, t_anchor = input$ELEFAN_GA_lowPar_t_anchor, C = input$ELEFAN_GA_lowPar_C, ts = input$ELEFAN_GA_lowPar_ts),
up_par = list(Linf = input$ELEFAN_GA_upPar_Linf, K = input$ELEFAN_GA_upPar_K, t_anchor = input$ELEFAN_GA_upPar_t_anchor, C = input$ELEFAN_GA_upPar_C, ts = input$ELEFAN_GA_upPar_ts),
popSize = input$ELEFAN_GA_popSize, maxiter = input$ELEFAN_GA_maxiter, run = input$ELEFAN_GA_run, pmutation = input$ELEFAN_GA_pmutation, pcrossover = input$ELEFAN_GA_pcrossover,
elitism = input$ELEFAN_GA_elitism, MA = input$ELEFAN_GA_MA, addl.sqrt = input$ELEFAN_GA_addl.sqrt, plus_group = input$ELEFAN_GA_PLUS_GROUP)
js$hideComputing()
js$enableAllButtons()
if ('error' %in% names(res)) {
showModal(modalDialog(
title = "Error",
if(!is.null(res$error)){if (length(grep("MA must be an odd integer", res$error)) > 0) {
HTML(sprintf("The number of length classes indicated for the moving average (MA) must be an odd number.<hr/> <b>%s</b>", res$error))
}else if(length(grep("POSIXlt", res$error)) > 0) {
HTML(sprintf("Please check that the chosen date format matches the date format in your data file.<hr/> <b>%s</b>",res$error))
}else{res$error}},
easyClose = TRUE,
footer = NULL
))
# } else if(is.unsorted(contents$dates, na.rm = FALSE, strictly = FALSE)){
# showModal(modalDialog(
# title = "Error",
# HTML(sprintf("Please ensure that your dates are input in chronological order from left to right.")),
# easyClose = TRUE,
# footer = NULL
# ))
} else {
js$showBox("box_elefan_ga_results")
elefan_ga$results <- res
session$userData$fishingMortality$FcurrGA <- round(elefan_ga$results$plot3$currents[4]$curr.F, 2)
if (!is.null(session$userData$sessionMode()) && session$userData$sessionMode()=="GCUBE") {
flog.info("Uploading Elefan GA report to i-Marine workspace")
reportFileName <- paste(tempdir(),"/","ElefanGA_report_",format(Sys.time(), "%Y%m%d_%H%M_%s"),".pdf",sep="")
createElefanGaPDFReport(reportFileName,elefan_ga,input)
elefanGaUploadVreResult$res <- FALSE
basePath <- paste0("/Home/",session$userData$sessionUsername(),"/Workspace/")
tryCatch({
uploadToIMarineFolder(reportFileName, basePath, uploadFolderName)
elefanGaUploadVreResult$res <- TRUE
}, error = function(err) {
flog.error("Error uploading Elefan GA report to the i-Marine Workspace: %s", err)
elefanGaUploadVreResult$res <- FALSE
}, finally = {})
}
}
}, error = function(err) {
flog.error("Error in Elefan GA: %s ",err)
showModal(modalDialog(
title = "Error",
HTML(sprintf(getErrorMessage("ELEFAN GA"), err)),
easyClose = TRUE,
footer = NULL
))
return(NULL)
},
finally = {
js$hideComputing()
js$enableAllButtons()
})
})
observeEvent(input$reset_ga, {
fileGaState$upload <- NULL
resetElefanGaInputValues()
})
output$plot_ga_1 <- renderPlot({
if ('results' %in% names(elefan_ga)) {
plot(elefan_ga$results$plot1, Fname = "catch", date.axis = "modern")
}
})
output$plot_ga_2 <- renderPlot({
if ('results' %in% names(elefan_ga)) {
plot(elefan_ga$results$plot2, Fname = "rcounts", date.axis = "modern")
}
})
output$plot_ga_3 <- renderPlot({
if ('results' %in% names(elefan_ga)) {
plot(elefan_ga$results$plot3, mark = TRUE)
mtext("(a)", side = 3, at = -1, line = 0.6)
}
})
output$plot_ga_4 <- renderPlot({
if ('results' %in% names(elefan_ga)) {
plot(elefan_ga$results$plot4, type = "Isopleth", xaxis1 = "FM", mark = TRUE, contour = 6)
mtext("(b)", side = 3, at = -0.1, line = 0.6)
}
})
output$plot_ga_5 <- renderPlot({
if ('results' %in% names(elefan_ga)) {
plot(elefan_ga$results$data)
}
})
output$par_ga <- renderText({
if ("results" %in% names(elefan_ga)) {
title <- "<hr>"
title <- paste0(title, "<strong>Length infinity (", withMathJax("\\(L_\\infty\\)"), "in cm):</strong> ", round(elefan_ga$results$data$par$Linf, 2))
title <- paste0(title, "<br/>")
title <- paste0(title, "<strong>Curving coefficient (K):</strong> ", round(elefan_ga$results$data$par$K, 2))
title <- paste0(title, "<br/>")
title <- paste0(title, "<strong>Time point anchoring growth curves in year-length coordinate system, corresponds to peak spawning month (t_anchor):</strong> ", round(elefan_ga$results$data$par$t_anchor, 2))
title <- paste0(title, "<br/>")
title <- paste0(title, "<strong>Amplitude of growth oscillation (NOTE: only if 'Seasonalized' is checked; C):</strong> ", ifelse(is.na(elefan_ga$results$data$par$C), NA, round(elefan_ga$results$data$par$C, 2)))
title <- paste0(title, "<br/>")
title <- paste0(title, "<strong>Winter point of oscillation (</strong> ", withMathJax("\\(t_w\\)") , "<strong>)</strong> ")
title <- paste0(title, "<br/>")
title <- paste0(title, "<strong>Summer point of oscillation (NOTE: only if 'Seasonalized' is checked; ", withMathJax("\\(ts\\)"),"=", withMathJax("\\(t_w\\)"), "- 0.5):</strong> ", ifelse(is.na(elefan_ga$results$data$par$ts), NA, round(elefan_ga$results$data$par$ts, 2)))
title <- paste0(title, "<br/>")
title <- paste0(title, "<strong>Growth performance index defined as phiL = log10(K) + 2 * log10(Linf):</strong> ", ifelse(is.na(elefan_ga$results$data$par$phiL), "--", round(elefan_ga$results$data$par$phiL, 2)))
title <- paste0(title, "<br/>")
title <- paste0(title, "<br>")
title
} else { "" }
})
output$downloadReport_ga <- renderUI({
if ("results" %in% names(elefan_ga)) {
downloadButton(session$ns('createElefanGAReport'), 'Download Report')
}
})
output$ElefanGaVREUpload <- renderText(
{
text <- ""
if ("results" %in% names(elefan_ga)) {
if (!is.null(session$userData$sessionMode()) && session$userData$sessionMode() == "GCUBE") {
if (isTRUE(elefanGaUploadVreResult$res)) {
text <- paste0(text, VREUploadText)
}
}
}
text
}
)
output$createElefanGAReport <- downloadHandler(
filename = paste("ElefanGA_report_",format(Sys.time(), "%Y%m%d_%H%M_%s"),".pdf",sep=""),
content = function(file) {
createElefanGaPDFReport(file, elefan_ga, input)
}
)
output$tbl1_ga <- renderTable({
if ('results' %in% names(elefan_ga)) {
elefan_ga$results$plot3$df_Es
}
},
include.rownames=TRUE)
output$tbl2_ga <- renderTable({
if ('results' %in% names(elefan_ga)) {
CURR_GA<-elefan_ga$results$plot3$currents
CURR_GA<-CURR_GA[,-7]
names(CURR_GA)<-c("Length-at-1st-capture (Lc)", "Age-at-1st-capture (tc)", "Effort","Fishing mortality", "Catch", "Yield", "Biomass")
CURR_GA
}
},
include.rownames=TRUE)
output$title_tbl1_ga <- renderText({
if ('results' %in% names(elefan_ga)) {
txt <- "<p class=\"pheader_elefan\">Biological reference levels:</p>"
txt
}
})
output$title_tbl2_ga <- renderText({
if ('results' %in% names(elefan_ga)) {
txt <- "<p class=\"pheader_elefan\">Current levels:</p>"
txt
}
})
output$titlePlot1_elefan_ga <- renderText({
if ('results' %in% names(elefan_ga)) {
txt <- "<p class=\"pheader_elefan\">Raw LFQ data</p>"
txt
}
})
output$titlePlot2_elefan_ga <- renderText({
if ('results' %in% names(elefan_ga)) {
txt <- "<p class=\"pheader_elefan\">Restructured LFQ data</p>"
txt
}
})
output$titlePlot3_elefan_ga <- renderText({
if ('results' %in% names(elefan_ga)) {
txt <- "<p class=\"pheader_elefan\">Thompson and Bell model with changes in F</p>"
txt
}
})
output$titlePlot4_elefan_ga <- renderText({
if ('results' %in% names(elefan_ga)) {
txt <- "<p class=\"pheader_elefan\">Thompson and Bell model with changes in F and Lc</p>"
txt
}
})
output$titleResultsOfTheComputation_elefan_ga <- renderText({
if ('results' %in% names(elefan_ga)) {
txt <- "<h2>Results of the ELEFAN_GA computation</h2>"
txt
}
})
output$elefanGADataConsiderationsText <- renderText({
text <- gsub("%%ELEFAN%%", "ELEFAN_GA", getDataConsiderationTextForElefan())
text
})
output$rnMax_ga <- renderText({
if ("results" %in% names(elefan_ga)) {
title <- paste0("<strong>Highest value of fitness function:</strong> ", round(elefan_ga$results$data$Rn_max, 3))
title
} else { "" }
})
output$elefanGaTitle <- renderText({
session$userData$page("elefan-ga")
text <- "<span><h3><b>Elefan GA (Genetic Algorithm)</b></h3></span>"
text
})
}
| /server/elefan/elefanGaServer.R | no_license | pink-sh/StockMonitoringTool | R | 11,436 bytes | r |
library('forecast')
#########################################
mobile.ts <- ts(mobile.Date$gmb, frequency=52*7, start=c(2010,1))
plot_mobile.ts <- plot.ts(mobile.ts)
#mobile agg forecast
arima_mobile <- auto.arima(mobile.ts,seasonal=TRUE,stepwise=FALSE,approx=FALSE)
plot(forecast(arima_mobile))
arima_mobile.ts <- ts(arima_mobile, frequency=364,start=c(2010,1,1))
accuracy(arima_mobile)
#by platform and country
##dataset: mobile.cntryPlat
unique(mobile.cntryPlat$country)
unique(mobile.cntryPlat$platform)
test <- subset(mobile, created_dt >= "2012-11-01"
& (country=='US'|country=='AU')
& (platform=='iPhone App'))
test.gb <- group_by(test,created_dt,country, platform)
testarima <- summarise(test.gb, gmb=auto.arima(test.gb$gmb_plan))
#########################################
#forecast models in order of increasing MAPE
stlf_mobile_BC <- stlf(mobile.ts,lambda=BoxCox.lambda(mobile.ts))
plot(stlf_mobile_BC)
stlf_mobile_BC$model
accuracy(stlf_mobile_BC)
#with robust options to remove outliers
stlf_mobile.ts_BC_rb <- stlf(mobile.ts,lambda=BoxCox.lambda(mobile.ts),robust=TRUE)
plot(stlf_mobile.ts_BC_rb)
stlf_mobile.ts_BC_rb$model
accuracy(stlf_mobile.ts_BC_rb)
stlf_mobile <- stlf(mobile.ts)
plot(stlf_mobile)
stlf_mobile$model
accuracy(stlf_mobile)
#breakdown
stl_mobile.ts <- stl(mobile.ts,s.window="periodic",robust=TRUE)
plot(stl_mobile.ts)
#not working: HoltWinters() needs at least two full seasonal periods, so with frequency = 364 this series may be too short
hw_mobile <- HoltWinters(mobile.ts)
plot(mobile.ts,xlim=c(2010,2018),ylim=c(0,140000000))
lines(predict(hw_mobile,n.ahead=900),col=2)
###########################################
#cross validation
##forward chain
startYear <- as.numeric(format(minDate, "%Y"))
endYear <- as.numeric(format(maxDate, "%Y"))
years <- endYear - startYear + 1
for (i in 1:(years-3)) {
trainStart <- 364*(i-1)+1
trainEnd <- trainStart + 364*2
testEnd <- length(mobile.ts)
mobile_cv.train <- ts(mobile.ts[trainStart:trainEnd],frequency=364)
mobile_cv.test <- ts(mobile.ts[(trainEnd+1):testEnd],frequency=364)
stlf_mobile_BC.cv <- stlf(mobile_cv.train,lambda=BoxCox.lambda(mobile_cv.train))
stlf_mobile.ts_BC_rb.cv <- stlf(mobile_cv.train,lambda=BoxCox.lambda(mobile_cv.train),robust=TRUE)
arima_mobile.cv <- auto.arima(mobile_cv.train,seasonal=TRUE)
stlf_mobile.cv <- stlf(mobile_cv.train)
stlf_mobile_BC.cv.a <- accuracy(stlf_mobile_BC.cv,mobile_cv.test)
stlf_mobile.ts_BC_rb.cv.a <- accuracy(stlf_mobile.ts_BC_rb.cv,mobile_cv.test)
arima_mobile.cv.a <- accuracy(forecast(arima_mobile.cv),mobile_cv.test)
stlf_mobile.cv.a <- accuracy(stlf_mobile.cv,mobile_cv.test)
#
arima_mobile <- auto.arima(mobile.ts,seasonal=TRUE)
plot(forecast(arima_mobile))
accuracy(arima_mobile)
#
plot(stlf_mobile.cv, ylim=c(0,120000000))
lines(mobile_cv.test)
plot(stlf_mobile.ts_BC_rb.cv,ylim=c(0,120000000))
lines(mobile_cv.test)
plot(stlf_mobile_BC.cv,ylim=c(0,120000000))
lines(mobile_cv.test)
plot(forecast(arima_mobile.cv),ylim=c(0,120000000))
lines(mobile_cv.test)
}
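The fold arithmetic in the forward-chaining loop above can be isolated into a small helper; a base-R sketch, assuming a 364-day "year" and a two-year training window (the index bounds are illustrative, not the script's exact ones):

```r
# Forward-chaining fold indices: each fold trains on two consecutive
# "years" and tests on everything after the training window.
cv_folds <- function(n_obs, n_years, train_years = 2, period = 364) {
  lapply(seq_len(n_years - train_years - 1), function(i) {
    train_start <- period * (i - 1) + 1
    train_end   <- train_start + period * train_years - 1
    list(train = train_start:train_end, test = (train_end + 1):n_obs)
  })
}
folds <- cv_folds(n_obs = 364 * 5, n_years = 5)
length(folds)  # 2
```

Keeping the split logic in one place makes it easier to check that the training window always ends before the test window begins, which the inline arithmetic in the loop obscures.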
##last 2 years to predict next year
# #########################################
# #MA of stlf plot
# ma365 <- rep(1/365, 365)
# y_lag <- filter(stlf_mobile.ts, ma365, sides=1)
# lines(x, y_lag, col="red")
# #########################################
# #time series
# Creating a time series
# The ts() function will convert a numeric vector into an R time series object. The format is ts(vector, start=, end=, frequency=) where start and end are the times of the first and last observation and frequency is the number of observations per unit time (1=annual, 4=quartly, 12=monthly, etc.).
# # save a numeric vector containing 48 monthly observations
# # from Jan 2009 to Dec 2014 as a time series object
# myts <- ts(myvector, start=c(2009, 1), end=c(2014, 12), frequency=12)
#
# # subset the time series (June 2014 to December 2014)
# myts2 <- window(myts, start=c(2014, 6), end=c(2014, 12))
#
# # plot series
# plot(myts)
|
/forecast.R
|
no_license
|
li-frank/mForecast
|
R
| false
| false
| 3,937
|
r
|
library('forecast')
#########################################
mobile.ts <- ts(mobile.Date$gmb, frequency=52*7, start=c(2010,1))
plot_mobile.ts <- plot.ts(mobile.ts)
#mobile agg forecast
arima_mobile <- auto.arima(mobile.ts,seasonal=TRUE,stepwise=FALSE,approx=FALSE)
plot(forecast(arima_mobile))
arima_mobile.ts <- ts(arima_mobile, frequency=364,start=c(2010,1,1))
accuracy(arima_mobile)
#by platform and country
##dataset: mobile.cntryPlat
unique(mobile.cntryPlat$country)
unique(mobile.cntryPlat$platform)
test <- subset(mobile, created_dt >= 2012-11-01
& (country=='US'|country=='AU')
& (platform=='iPhone App'))
test.gb <- group_by(test,created_dt,country, platform)
testarima <- summarise(test.gb, gmb=auto.arima(test.gb$gmb_plan))
#########################################
#forecast models in order of increasing MAPE
stlf_mobile_BC <- stlf(mobile.ts,lambda=BoxCox.lambda(mobile.ts))
plot(stlf_mobile.ts_BC)
stlf_mobile_BC$model
accuracy(stlf_mobile_BC)
#with robust options to remove outliers
stlf_mobile.ts_BC_rb <- stlf(mobile.ts,lambda=BoxCox.lambda(mobile.ts),robust=TRUE)
plot(stlf_mobile.ts_BC_rb)
stlf_mobile.ts_BC_rb$model
accuracy(stlf_mobile.ts_BC_rb)
stlf_mobile <- stlf(mobile.ts)
plot(stlf_mobile.ts)
stlf_mobile$model
accuracy(stlf_mobile)
#breakdown
stl_mobile.ts <- stl(mobile.ts,s.window="periodic",robust=TRUE)
plot(stl_mobile.ts)
#not working
hw_mobile <- HoltWinters(mobile.ts)
plot(mobile.ts,xlim=c(2010,2018),ylim=c(0,140000000))
lines(predict(hw_mobile,n.ahead=900),col=2)
###########################################
#cross validation
##forward chain
startYear <- as.numeric(format(minDate, "%Y"))
endYear <- as.numeric(format(maxDate, "%Y"))
years <- endYear - startYear + 1
for (i in 1:years-3)){
trainStart <- 364*(i-1)+1
trainEnd <- start + 364*2
testEnd <- length(mobile.ts)
mobile_cv.train <- ts(mobile.ts[trainStart:trainEnd],frequency=364)
mobile_cv.test <- ts(mobile.ts[(trainEnd+1):testEnd],frequency=364)
stlf_mobile_BC.cv <- stlf(mobile_cv.train,lambda=BoxCox.lambda(mobile_cv.train))
stlf_mobile.ts_BC_rb.cv <- stlf(mobile_cv.train,lambda=BoxCox.lambda(mobile_cv.train),robust=TRUE)
arima_mobile.cv <- auto.arima(mobile_cv.train,seasonal=TRUE)
stlf_mobile.cv <- stlf(mobile_cv.train)
stlf_mobile_BC.cv.a <- accuracy(stlf_mobile_BC.cv,mobile_cv.test)
stlf_mobile.ts_BC_rb.cv.a <- accuracy(stlf_mobile.ts_BC_rb.cv,mobile_cv.test)
arima_mobile.cv.a <- accuracy(forecast(arima_mobile.cv),mobile_cv.test)
stlf_mobile.cv.a <- accuracy(stlf_mobile.cv,mobile_cv.test)
#
arima_mobile <- auto.arima(mobile.ts,seasonal=TRUE)
plot(forecast(arima_mobile))
accuracy(arima_mobile)
#
plot(stlf_mobile.cv, ylim=c(0,120000000))
lines(mobile_cv.test)
plot(stlf_mobile.ts_BC_rb.cv,ylim=c(0,120000000))
lines(mobile_cv.test)
plot(stlf_mobile_BC.cv,ylim=c(0,120000000))
lines(mobile_cv.test)
plot(forecast(arima_mobile.cv),ylim=c(0,120000000))
lines(mobile_cv.test)
}
##last 2 years to predict next year
# #########################################
# #MA of stlf plot
# ma365 <- rep(1/365, 365)
# y_lag <- filter(stlf_mobile.ts, ma365, sides=1)
# lines(x, y_lag, col="red")
# #########################################
# #time series
# Creating a time series
# The ts() function will convert a numeric vector into an R time series object. The format is ts(vector, start=, end=, frequency=) where start and end are the times of the first and last observation and frequency is the number of observations per unit time (1=annual, 4=quarterly, 12=monthly, etc.).
# # save a numeric vector containing 48 monthly observations
# # from Jan 2009 to Dec 2014 as a time series object
# myts <- ts(myvector, start=c(2009, 1), end=c(2014, 12), frequency=12)
#
# # subset the time series (June 2014 to December 2014)
# myts2 <- window(myts, start=c(2014, 6), end=c(2014, 12))
#
# # plot series
# plot(myts)
|
library(sqldf)
## Read in the data file using sql to filter the dates we are interested in
df<-read.csv.sql("../Data/household_power_consumption.txt",
sql='select * from file where Date="1/2/2007" OR Date="2/2/2007"'
,sep=";",header=T)
closeAllConnections() ## Close sql connection
## Create DateTime column
var<-paste(df$Date,df$Time)
df$DateTime<-strptime(var, "%d/%m/%Y %H:%M:%S")
## Prepare plot 2
png("plot2.png", width=480, height=480)
plot(df$DateTime,df$Global_active_power,type="l",xlab="",ylab="Global Active Power (kilowatts)")
dev.off()
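# One caveat with the DateTime column above: strptime() returns POSIXlt, a
# list-based class that can behave oddly as a data.frame column; wrapping it
# in as.POSIXct() gives a compact vector class. A small sketch of the same
# parse (the literal timestamp is illustrative):

```r
dt <- as.POSIXct(strptime("1/2/2007 00:05:00", "%d/%m/%Y %H:%M:%S"))
format(dt, "%Y-%m-%d %H:%M")  # "2007-02-01 00:05"
```
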
|
/plot2.R
|
no_license
|
Bspangehl/ExData_Plotting1
|
R
| false
| false
| 589
|
r
|
library(sqldf)
## Read in the data file using sql to filter the dates we are interested in
df<-read.csv.sql("../Data/household_power_consumption.txt",
sql='select * from file where Date="1/2/2007" OR Date="2/2/2007"'
,sep=";",header=T)
closeAllConnections() ## Close sql connection
## Create DateTime column
var<-paste(df$Date,df$Time)
df$DateTime<-strptime(var, "%d/%m/%Y %H:%M:%S")
## Prepare plot 2
png("plot2.png", width=480, height=480)
plot(df$DateTime,df$Global_active_power,type="l",xlab="",ylab="Global Active Power (kilowatts)")
dev.off()
|
#Read data from text file
data<-read.table("household_power_consumption.txt", header=TRUE, sep=";", stringsAsFactors=FALSE, na.strings="?")
#Target data from 2007-02-01 to 2007-02-02
dates<- data$Date=="1/2/2007"|data$Date=="2/2/2007"
targetdata<-data[dates,]
targetdata$Date <- as.Date(targetdata$Date, format = "%d/%m/%Y")
#Global Active Power histogram plot and output as png
hist(targetdata$Global_active_power, col="red", xlab="Global Active Power (kilowatts)", main="Global Active Power")
dev.copy(png, file = "plot1.png")
dev.off()
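# Design note: dev.copy() after drawing on the screen device copies at the
# screen's dimensions, which may not match the requested 480x480; opening the
# png device before plotting guarantees the size. A self-contained sketch
# (synthetic data stands in for targetdata$Global_active_power, and the file
# name is illustrative):

```r
set.seed(1)
gap <- rexp(1000)                                    # stand-in power readings
png("plot1_sketch.png", width = 480, height = 480)   # open the device first
hist(gap, col = "red", xlab = "Global Active Power (kilowatts)",
     main = "Global Active Power")
dev.off()
```
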
|
/plot1.R
|
no_license
|
kctoh/ExData_Plotting1
|
R
| false
| false
| 551
|
r
|
#Read data from text file
data<-read.table("household_power_consumption.txt", header=TRUE, sep=";", stringsAsFactors=FALSE, na.strings="?")
#Target data from 2007-02-01 to 2007-02-02
dates<- data$Date=="1/2/2007"|data$Date=="2/2/2007"
targetdata<-data[dates,]
targetdata$Date <- as.Date(targetdata$Date, format = "%d/%m/%Y")
#Global Active Power histogram plot and output as png
hist(targetdata$Global_active_power, col="red", xlab="Global Active Power (kilowatts)", main="Global Active Power")
dev.copy(png, file = "plot1.png")
dev.off()
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/extract.stringCode.R
\name{to.TreeCode}
\alias{to.TreeCode}
\title{to.TreeCode}
\usage{
to.TreeCode(SITE, PLOT, SUBPLOT, TAG = NULL)
}
|
/modules/data.land/man/to.TreeCode.Rd
|
permissive
|
Kah5/pecan
|
R
| false
| true
| 213
|
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/extract.stringCode.R
\name{to.TreeCode}
\alias{to.TreeCode}
\title{to.TreeCode}
\usage{
to.TreeCode(SITE, PLOT, SUBPLOT, TAG = NULL)
}
|
testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.31584307392677e+77, 9.54001464073754e+295, 1.22810536108214e+146, 4.12396251261199e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(8L, 3L)))
result <- do.call(CNull:::communities_individual_based_sampling_alpha,testlist)
str(result)
|
/CNull/inst/testfiles/communities_individual_based_sampling_alpha/AFL_communities_individual_based_sampling_alpha/communities_individual_based_sampling_alpha_valgrind_files/1615782588-test.R
|
no_license
|
akhikolla/updatedatatype-list2
|
R
| false
| false
| 329
|
r
|
testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.31584307392677e+77, 9.54001464073754e+295, 1.22810536108214e+146, 4.12396251261199e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(8L, 3L)))
result <- do.call(CNull:::communities_individual_based_sampling_alpha,testlist)
str(result)
|
##########################################################################################################
#Replication Files for Housing Discrimination and the Toxics Exposure Gap in the United States:
#Evidence from the Rental Market by Peter Christensen, Ignacio Sarmiento-Barbieri and Christopher Timmins
##########################################################################################################
#Clean the workspace
rm(list=ls())
cat("\014")
local({r <- getOption("repos"); r["CRAN"] <- "http://cran.r-project.org"; options(repos=r)}) #set repo
#Load Packages
pkg<-c("dplyr","stargazer")
lapply(pkg, require, character.only=T)
rm(pkg)
#Descriptive
dta_desc<-read.csv("../views/descriptive_RSEI.csv")
colnames(dta_desc)<-c("Q1","Q2-Q3","Q4")
#dta_desc<-round(dta_desc,2)
dta_desc<-format(round(dta_desc, digits=3), nsmall = 3)
for(j in 1:3) dta_desc[,j]<-trimws(dta_desc[,j])
for(i in seq(2, 38, by = 2)) dta_desc[i,] <- paste0("(", dta_desc[i,], ")")
dta_desc$names <- ""
dta_desc$names[seq(1, 37, by = 2)] <- c(
  "Toxic Concentration (K)", "Cancer Score", "Non Cancer Score", "Rent (K)",
  "Single Family Home", "Apartment", "Multi Family", "Other Bldg. Type",
  "Bedrooms", "Bathrooms", "Sqft.", "Assault", "Groceries",
  "Share of Hispanics", "Share of African American", "Share of Whites",
  "Poverty Rate", "Unemployment Rate", "Share of College Educated")
dta_desc<- dta_desc[,c("names","Q1","Q2-Q3","Q4")]
# # -----------------------------------------------------------------------
#Ttest
# # -----------------------------------------------------------------------
dta<-read.csv("../views/descriptive_RSEI_ttest.csv")
colnames(dta)<-c("0th-25th","25th-75th","75th-100th")
dta<-round(dta, 4)
dta<-format(round(dta, digits=3), nsmall = 3)
for(j in 1:3) dta[,j]<-trimws(dta[,j])
dta
for(j in seq(1,37,by=2)){
for(k in 1:3){
value<-dta[j,k] #format(round(, 2), nsmall = 2, big.mark=",")
tscore<-abs(as.numeric(dta[j,k])/as.numeric(dta[j+1,k]))
dta[j,k] <-ifelse(tscore>=2.576, paste0(value,"***"),
ifelse(tscore>=1.96 & tscore<2.576, paste0(value ,"**"),
ifelse(tscore>=1.645 & tscore<1.96, paste0(value,"*"),paste0(value ))))
dta[j+1,k] <-paste0("(",dta[j+1,k],")") #for standard errors
}
}
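# The star-coding rule above (|t| cutoffs at 1.645/1.96/2.576, i.e. the
# two-sided 10%/5%/1% normal critical values) can be factored into a small
# helper; the function name is illustrative:

```r
# append significance stars to an estimate given its standard error
add_stars <- function(value, se) {
  t <- abs(as.numeric(value) / as.numeric(se))
  stars <- if (t >= 2.576) "***" else if (t >= 1.96) "**" else
           if (t >= 1.645) "*" else ""
  paste0(value, stars)
}
add_stars("0.152", "0.050")  # |t| = 3.04 -> "0.152***"
```
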
dta<-dta[,c(2,3)]
# Bind rows ----------------------------------------------------------------
dta_bind<-bind_cols(dta_desc,dta)
stargazer(dta_bind,summary=FALSE,type="text", rownames = FALSE,out="../views/tableA2.tex")
system("rm ../views/descriptive_RSEI.csv")
system("rm ../views/descriptive_RSEI_ttest.csv")
#end
|
/scripts/Rscripts/5_TableA2.R
|
permissive
|
uiuc-bdeep/Housing_Discrimination_Pollution_Exposures
|
R
| false
| false
| 4,163
|
r
|
##########################################################################################################
#Replication Files for Housing Discrimination and the Toxics Exposure Gap in the United States:
#Evidence from the Rental Market by Peter Christensen, Ignacio Sarmiento-Barbieri and Christopher Timmins
##########################################################################################################
#Clean the workspace
rm(list=ls())
cat("\014")
local({r <- getOption("repos"); r["CRAN"] <- "http://cran.r-project.org"; options(repos=r)}) #set repo
#Load Packages
pkg<-c("dplyr","stargazer")
lapply(pkg, require, character.only=T)
rm(pkg)
#Descriptive
dta_desc<-read.csv("../views/descriptive_RSEI.csv")
colnames(dta_desc)<-c("Q1","Q2-Q3","Q4")
#dta_desc<-round(dta_desc,2)
dta_desc<-format(round(dta_desc, digits=3), nsmall = 3)
for(j in 1:3) dta_desc[,j]<-trimws(dta_desc[,j])
for(i in seq(2, 38, by = 2)) dta_desc[i,] <- paste0("(", dta_desc[i,], ")")
dta_desc$names <- ""
dta_desc$names[seq(1, 37, by = 2)] <- c(
  "Toxic Concentration (K)", "Cancer Score", "Non Cancer Score", "Rent (K)",
  "Single Family Home", "Apartment", "Multi Family", "Other Bldg. Type",
  "Bedrooms", "Bathrooms", "Sqft.", "Assault", "Groceries",
  "Share of Hispanics", "Share of African American", "Share of Whites",
  "Poverty Rate", "Unemployment Rate", "Share of College Educated")
dta_desc<- dta_desc[,c("names","Q1","Q2-Q3","Q4")]
# # -----------------------------------------------------------------------
#Ttest
# # -----------------------------------------------------------------------
dta<-read.csv("../views/descriptive_RSEI_ttest.csv")
colnames(dta)<-c("0th-25th","25th-75th","75th-100th")
dta<-round(dta, 4)
dta<-format(round(dta, digits=3), nsmall = 3)
for(j in 1:3) dta[,j]<-trimws(dta[,j])
dta
for(j in seq(1,37,by=2)){
for(k in 1:3){
value<-dta[j,k] #format(round(, 2), nsmall = 2, big.mark=",")
tscore<-abs(as.numeric(dta[j,k])/as.numeric(dta[j+1,k]))
dta[j,k] <-ifelse(tscore>=2.576, paste0(value,"***"),
ifelse(tscore>=1.96 & tscore<2.576, paste0(value ,"**"),
ifelse(tscore>=1.645 & tscore<1.96, paste0(value,"*"),paste0(value ))))
dta[j+1,k] <-paste0("(",dta[j+1,k],")") #for standard errors
}
}
dta<-dta[,c(2,3)]
# Bind rows ----------------------------------------------------------------
dta_bind<-bind_cols(dta_desc,dta)
stargazer(dta_bind,summary=FALSE,type="text", rownames = FALSE,out="../views/tableA2.tex")
system("rm ../views/descriptive_RSEI.csv")
system("rm ../views/descriptive_RSEI_ttest.csv")
#end
|
#test.id = seq(1, 2930, by=3)
set.seed(8871)
install.packages("robustHD")
install.packages("caTools")
install.packages("DescTools")
install.packages("caret")
install.packages("randomForest")
install.packages("xgboost")
install.packages("corrplot")
install.packages("Rmisc")
install.packages("ggrepel")
install.packages("psych")
#install.packages("cran") # 'cran' is not a package on CRAN
install.packages("gbm")
library(knitr)
library(ggplot2)
library(plyr)
library(dplyr)
library(corrplot)
library(caret)
library(gridExtra)
library(scales)
library(Rmisc)
library(ggrepel)
library(randomForest)
library(psych)
library(xgboost)
library(robustHD)
library(caTools)
library(DescTools)
library(caret)
library(randomForest)
library(xgboost)
##library(cran)
library(glmnet)
library(gbm)
data <- read.csv("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Ames_data.csv", stringsAsFactors = F )
train <- read.csv("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Ames_data.csv", stringsAsFactors = F)
test <- read.csv("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Ames_data.csv", stringsAsFactors = F)
Project <- read.table("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Project1_test_id.txt", stringsAsFactors = F )
#sample = sample.split(data$anycolumn, SplitRatio = .70)
#s = sample.split[Project$V3, data$PID == Project$V3, SplitRatio = .70]
#train = subset(data, sample == TRUE)
#test = subset(data, sample == FALSE)
train <- read.csv("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Ames_data.csv", stringsAsFactors = F)
test <- read.csv("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Ames_data.csv", stringsAsFactors = F)
test = data[data$PID%in%Project[,4],1:82]
train = data[!data$PID%in%Project[,4],]
#train$Garage_Yr_Blt <- NULL
#test$Garage_Yr_Blt <- NULL
#x <- factor(c('a', 'b', 'c'), levels = c('b', 'a', 'c'))
#x_ord <- as.numeric(x)
#>> [1] 2 1 3
#scale(x, center = TRUE, scale = TRUE)
#require(stats)
#x <- matrix(1:10, ncol = 2)
#(centered.x <- scale(x, scale = FALSE))
#cov(centered.scaled.x <- scale(x)) # all 1
#require(stats)
#x <- train(1:10, ncol = 2)
#(centered.x <- scale(x, scale = FALSE))
#cov(centered.scaled.x <- scale(x)) # all 1
#winsorize(train)
#Data size and structure
#dim(train)
#dim(test)
#test$SalePrice <- NA
#all <- rbind(train, test)
#dim(all)
##Table command
table(train$Land_Slope)/dim(train)[1] ## Categorical Variables non Numeric data Drop Land_Slope YES
table(train$Neighborhood)/dim(train)[1] ## No
table(train$Bldg_Type)/dim(train)[1] ##YES Above 95% drop else keep
table(train$MS_SubClass)/dim(train)[1] ## NO
table(train$MS_Zoning)/dim(train)[1] ## NO
table(train$Street)/dim(train)[1] ## YES
table(train$Alley)/dim(train)[1] ## NO
table(train$Lot_Shape)/dim(train)[1] ##NO
table(train$Land_Contour)/dim(train)[1] ## NO
table(train$Utilities)/dim(train)[1] ##YES
table(train$Lot_Config)/dim(train)[1] ##NO
table(train$Condition_1)/dim(train)[1] ##NO
table(train$Condition_2)/dim(train)[1] ##YES
table(train$House_Style)/dim(train)[1] ##NO
table(train$Overall_Qual)/dim(train)[1] ##NO
table(train$Overall_Cond)/dim(train)[1] ##NO
table(train$Roof_Style)/dim(train)[1] ##NO
table(train$Roof_Matl)/dim(train)[1] ##YES
table(train$Exterior_1st)/dim(train)[1] ##NO
table(train$Exterior_2nd)/dim(train)[1] ##NO
table(train$Mas_Vnr_Type)/dim(train)[1] ##NO
table(train$Mas_Vnr_Area)/dim(train)[1] ##NO
##sapplytable(train$Mas_Vnr_Area)/dim(train)[1]
table(train$Exter_Qual)/dim(train)[1] ##NO
table(train$Exter_Cond)/dim(train)[1] ##NO
table(train$Foundation)/dim(train)[1] ##NO
table(train$Bsmt_Qual)/dim(train)[1] ##NO
table(train$Bsmt_Cond)/dim(train)[1] ##NO
table(train$Bsmt_Exposure)/dim(train)[1] ##NO
table(train$BsmtFin_Type_1)/dim(train)[1] ##NO
##table(train$BsmtFin_SF_1)/dim(train)[1] ##NO
table(train$BsmtFin_Type_2)/dim(train)[1] ##NO
table(train$Heating)/dim(train)[1] ##YES
table(train$Heating_QC)/dim(train)[1] ##NO
table(train$Central_Air)/dim(train)[1] ##NO
table(train$Electrical)/dim(train)[1] ##NO
table(train$Kitchen_Qual)/dim(train)[1] ##NO
table(train$Functional)/dim(train)[1] ##NO
table(train$Fireplace_Qu)/dim(train)[1] ##NO
table(train$Garage_Type)/dim(train)[1] ##NO
table(train$Garage_Finish)/dim(train)[1] ##NO
table(train$Garage_Qual)/dim(train)[1] ##NO
table(train$Garage_Cond)/dim(train)[1] ##NO
table(train$Paved_Drive)/dim(train)[1] ##NO
table(train$Pool_QC)/dim(train)[1] ##YES
table(train$Fence)/dim(train)[1] ##NO
table(train$Misc_Feature)/dim(train)[1] ##YES
table(train$Sale_Type)/dim(train)[1] ##NO
table(train$Sale_Condition)/dim(train)[1] ##NO
table(train$Sale_Condition)/dim(train)[1] ##NO
##DROPPING VARIABLES AFTER USING TABLE COMMAND IF 95%
## DROPPING VARIABLES LONGITUDE and LATITUDE
## train changed to train1
train1 = subset(train, select = -c(Land_Slope,Bldg_Type,Street,Utilities,Condition_2,Garage_Yr_Blt,Roof_Matl,Heating,Pool_QC,Misc_Feature,Longitude,Latitude))
test1 = subset(test, select = -c(Land_Slope,Bldg_Type,Street,Utilities,Condition_2,Garage_Yr_Blt,Roof_Matl,Heating,Pool_QC,Misc_Feature,Longitude,Latitude))
names(test1)[71]=c("Sale_Price")
####################
## ONE HOT ENCODING
## Use the caret package: bind train and test, one-hot encode categorical variables, then re-split
#Binding
total = rbind(train1,test1)
## Winsorization Loop through all columns with Numeric data
summary(train1$Lot_Frontage)
## Winsorize the heavy-tailed numeric columns at the 5th/95th percentiles
wins_cols <- c("Lot_Frontage", "Lot_Area", "Mas_Vnr_Area", "Exter_Qual",
               "BsmtFin_SF_2", "Bsmt_Unf_SF", "Total_Bsmt_SF", "First_Flr_SF",
               "Second_Flr_SF", "Gr_Liv_Area", "Garage_Area", "Wood_Deck_SF",
               "Open_Porch_SF", "Misc_Val", "Sale_Price")
for (col in wins_cols) {
  train1[[col]] <- Winsorize(train1[[col]], minval = NULL, maxval = NULL,
                             probs = c(0.05, 0.95), na.rm = FALSE, type = 7)
}
summary(train1$Lot_Frontage)
##TotRms SEE
### Enclosed_Porch BQ??? Three_season_porch BR??? Screen_Porch BS?? Mo_Sold?? Longitude?? Latitude?? Sale_Price??
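# For reference, Winsorize() with probs = c(0.05, 0.95) clips values beyond
# the 5th/95th percentiles rather than dropping them; a self-contained sketch:

```r
library(DescTools)
x <- c(1:18, 50, 100)                    # two large outliers
w <- Winsorize(x, probs = c(0.05, 0.95))
range(w)                                 # extremes pulled in to the percentiles
```
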
####################
## ONE HOT ENCODING
## Use the caret package: bind train and test, one-hot encode categorical variables, then re-split
#Binding
total = rbind(train1,test1)
### ONE HOT ENCODING
# dummify the data
dmy <- dummyVars(" ~ .", data = total)
trsf <- data.frame(predict(dmy, newdata = total))
#Resplit the data into train and test splits
train2 = trsf[1:nrow(train1),]
test2 = trsf[(nrow(train1)+1):nrow(trsf), -ncol(trsf)]
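# For reference, caret::dummyVars expands each factor into one indicator
# column per level while passing numeric columns through; a tiny example:

```r
library(caret)
df  <- data.frame(type = factor(c("A", "B", "A")), x = 1:3)
dmy <- dummyVars(~ ., data = df)
m   <- data.frame(predict(dmy, newdata = df))  # columns: type.A, type.B, x
```
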
#########################
## FIT RANDOM FOREST
#########################
### fit log sale price with all variables, plot??
## fit models for train data
# train2 already excludes the held-out PIDs, and test.id is never defined above;
# keep Sale_Price out of the predictors by putting it in the formula's response
rfModel = randomForest(log(Sale_Price) ~ ., data = train2,
                       importance = T, ntree = 400)
########## Lasso
#my_control <-trainControl(method="cv", number=5)
#lassoGrid <- expand.grid(alpha = 1, lambda = seq(0.001,0.1,by = 0.0005))
#lasso_mod <- train(x=train2, y=train2$Sale_Price[!is.na(train2$Sale_Price)], method='glmnet', trControl= my_control, tuneGrid=lassoGrid)
#lasso_mod$bestTune
########## LASSO MAIN
###lasso <- glmnet(x= as.matrix(train2[,-train2$Sale_Price]), y= as.matrix(train2$Sale_Price), alpha = 1 )
cv.out <- cv.glmnet(as.matrix(train2[,-ncol(train2)]),as.matrix(train2$Sale_Price), alpha = 1)
bestlam <- cv.out$lambda.min
lasso.pred <- predict(cv.out, s = bestlam, newx = as.matrix(test2) )
######## Find RMSE value as square root of mean(y - ypred)^2
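# Following the comment above, held-out error is the root of the mean squared
# difference; a minimal helper (it would be compared against the true test
# responses, which this script would need to keep aside when building test2):

```r
rmse <- function(y, yhat) sqrt(mean((y - yhat)^2))
rmse(c(1, 2, 3), c(1, 2, 5))  # sqrt(4/3), about 1.155
```
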
########### XGBoost
#xgb_grid = expand.grid(
# nrounds = 1000,
# eta = c(0.1, 0.05, 0.01),
# max_depth = c(2, 3, 4, 5, 6),
# gamma = 0,
# colsample_bytree=1,
# min_child_weight=c(1, 2, 3, 4 ,5),
# subsample=1
#)
# xgb_caret <- train(x=train2, y=train2$Sale_Price[!is.na(train2$Sale_Price)], method='xgbTree', trControl= my_control, tuneGrid=xgb_grid)
# xgb_caret$bestTune
######### XGBoost MAIN
#put into the xgb matrix format
#dtrain = xgb.DMatrix(data = as.matrix(train2), label = as.matrix(train2) )
#dtest = xgb.DMatrix(data = as.matrix(test2), label = as.matrix(test2) )
# these are the datasets the rmse is evaluated for at each iteration
#watchlist = list(train=dtrain, test=dtest)
#bst = xgb.train(data = dtrain,
# max.depth = 10,
# eta = 0.1,
# nthread = 2,
# nround = 10000,
# watchlist = watchlist,
# objective = "reg:linear",
# early_stopping_rounds = 50,
# print_every_n = 500)
##### XG BOOST FROM PIAZZA
# train GBM model
gbm.fit <- gbm(
formula = train2$Sale_Price ~ .,
distribution = "gaussian",
data = train2,
n.trees = 10000,
interaction.depth = 1,
shrinkage = 0.001,
cv.folds = 5,
n.cores = NULL, # will use all cores by default
verbose = FALSE
)
print(gbm.fit)
# create hyperparameter grid
hyper_grid <- expand.grid(
shrinkage = c(.01, .1, .3),
interaction.depth = c(1, 3, 5),
n.minobsinnode = c(5, 10, 15),
bag.fraction = c(.65, .8, 1),
optimal_trees = 0, # a place to dump results
min_RMSE = 0 # a place to dump results
)
# total number of combinations
nrow(hyper_grid)
############ TUNING FOR GBM
# train GBM model
gbm.fit2 <- gbm(
formula = train2$Sale_Price ~ .,
distribution = "gaussian",
data = train2,
n.trees = 5000,
interaction.depth = 3,
shrinkage = 0.1,
cv.folds = 5,
n.cores = NULL, # will use all cores by default
verbose = FALSE
)
# find index for n trees with minimum CV error
min_MSE <- which.min(gbm.fit2$cv.error)
# get MSE and compute RMSE
sqrt(gbm.fit2$cv.error[min_MSE])
## [1] 23112.1
# plot loss function as a result of n trees added to the ensemble
gbm.perf(gbm.fit2, method = "cv")
# create hyperparameter grid
hyper_grid <- expand.grid(
shrinkage = c(.01, .1, .3),
interaction.depth = c(1, 3, 5),
n.minobsinnode = c(5, 10, 15),
bag.fraction = c(.65, .8, 1),
optimal_trees = 0, # a place to dump results
min_RMSE = 0 # a place to dump results
)
# total number of combinations
nrow(hyper_grid)
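# The hyper_grid above is declared but never searched in this script; a
# minimal search loop over a reduced grid, using mtcars as a stand-in for
# train2 and gbm's built-in train.fraction validation split (all of these
# choices are illustrative):

```r
library(gbm)
grid <- expand.grid(shrinkage = c(.01, .1), interaction.depth = c(1, 3),
                    n.minobsinnode = 5, bag.fraction = .8,
                    optimal_trees = 0, min_RMSE = 0)
for (i in seq_len(nrow(grid))) {
  set.seed(8871)
  fit <- gbm(mpg ~ ., distribution = "gaussian", data = mtcars,
             n.trees = 200,
             interaction.depth = grid$interaction.depth[i],
             shrinkage       = grid$shrinkage[i],
             n.minobsinnode  = grid$n.minobsinnode[i],
             bag.fraction    = grid$bag.fraction[i],
             train.fraction  = 0.75, verbose = FALSE)
  grid$optimal_trees[i] <- which.min(fit$valid.error)  # best iteration count
  grid$min_RMSE[i]      <- sqrt(min(fit$valid.error))  # validation RMSE
}
grid[order(grid$min_RMSE), ]  # best settings first
```
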
# get MSE and compute RMSE
sqrt(min(gbm.fit$cv.error))
#dtest = xgb.DMatrix(data = as.matrix( test2 ))
#test the model on truly external data
#y_hat_valid = predict(bst, dtest)
###For part 1, use xgboost, fit data accordingly
###For part 2 split is different, kaggle, piazza, html
### Can we use Lasso for Part1 ? Use data.matrix as an input instead of directly.
#n = 50;
#X1 = sample(c("A", "B", "C"), n, replace=TRUE)
#X1 = as.factor(X1)
#X2 = sample(1:4, n, replace=TRUE)
#X2 = as.factor(X2)
# obtain the one hot coding for X1 and X2
#fake.y = rep(0, n)
#tmp = model.matrix(lm(fake.y ~ X1 + X2))
#tmp = tmp[, -1] # remove the 1st column (the intercept) of tmp
#Lot_Frontage_W = winsorize(train$Lot_Frontage, standardized = FALSE, centerFun = median, scaleFun = mad, const = 2, return = c("data", "weights"))
#hist(train$Lot_Frontage)
#hist(Lot_Frontage_W)
## Winsorize Command Numeric data, Standardized = FALSE, Put all default values
## One Hot Encoding - Linear regression lm sale_price 0 1 0, levels - Design matrix(Features obtained from design matrix) - glmnet - , NaN Values - Impute using median, Preprocessing Steps, glmnet??
## Garage belt drop, test also drop columns
##Random Forest, parameter tuning -- hyperparameters, no of trees, no of candidates, Initially use random values or default values make a list, highest accuracy, 1 . RF, 2. linear regression, XG Boost
## Piazza 10 random splits
#The response variable; SalePrice
#ggplot(data=all[!is.na(all$SalePrice),], aes(x=SalePrice)) +
# geom_histogram(fill="blue", binwidth = 10000) +
#scale_x_continuous(breaks= seq(0, 800000, by=100000), labels = comma)
#summary(all$SalePrice)
#### LASSO FUNCTION
#one_step_lasso = function(r, x, lam){
# xx = sum(x^2)
#xr = sum(r*x)
#b = (abs(xr) -lam/2)/xx
#b = sign(xr)*ifelse(b>0, b, 0)
#return(b)
#}
#### FUNCTION TO IMPLEMENT COORDINATE DESCENT
|
/Project_1_8871_kvn3.R
|
no_license
|
KarthikJVN/STAT542_StatisticalLearning
|
R
| false
| false
| 14,800
|
r
|
#test.id = seq(1, 2930, by=3)
set.seed(8871)
install.packages("robustHD")
install.packages("caTools")
install.packages("DescTools")
install.packages("caret")
install.packages("randomForest")
install.packages("xgboost")
install.packages("corrplot")
install.packages("Rmisc")
install.packages("ggrepel")
install.packages("psych")
#install.packages("cran") # 'cran' is not a package on CRAN
install.packages("gbm")
library(knitr)
library(ggplot2)
library(plyr)
library(dplyr)
library(corrplot)
library(caret)
library(gridExtra)
library(scales)
library(Rmisc)
library(ggrepel)
library(randomForest)
library(psych)
library(xgboost)
library(robustHD)
library(caTools)
library(DescTools)
library(caret)
library(randomForest)
library(xgboost)
##library(cran)
library(glmnet)
library(gbm)
data <- read.csv("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Ames_data.csv", stringsAsFactors = F )
train <- read.csv("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Ames_data.csv", stringsAsFactors = F)
test <- read.csv("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Ames_data.csv", stringsAsFactors = F)
Project <- read.table("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Project1_test_id.txt", stringsAsFactors = F )
#sample = sample.split(data$anycolumn, SplitRatio = .70)
#s = sample.split[Project$V3, data$PID == Project$V3, SplitRatio = .70]
#train = subset(data, sample == TRUE)
#test = subset(data, sample == FALSE)
train <- read.csv("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Ames_data.csv", stringsAsFactors = F)
test <- read.csv("/Users/karthikjvn/Documents/Mac_Transfer_March2018/UIUC/Fall_2018/STAT542/Project 1/Ames_data.csv", stringsAsFactors = F)
test = data[data$PID%in%Project[,4],1:82]
train = data[!data$PID%in%Project[,4],]
#train$Garage_Yr_Blt <- NULL
#test$Garage_Yr_Blt <- NULL
#x <- factor(c('a', 'b', 'c'), levels = c('b', 'a', 'c'))
#x_ord <- as.numeric(x)
#>> [1] 2 1 3
#scale(x, center = TRUE, scale = TRUE)
#require(stats)
#x <- matrix(1:10, ncol = 2)
#(centered.x <- scale(x, scale = FALSE))
#cov(centered.scaled.x <- scale(x)) # all 1
#require(stats)
#x <- train(1:10, ncol = 2)
#(centered.x <- scale(x, scale = FALSE))
#cov(centered.scaled.x <- scale(x)) # all 1
#winsorize(train)
#Data size and structure
#dim(train)
#dim(test)
#test$SalePrice <- NA
#all <- rbind(train, test)
#dim(all)
##Table command
table(train$Land_Slope)/dim(train)[1] ## Categorical Variables non Numeric data Drop Land_Slope YES
table(train$Neighborhood)/dim(train)[1] ## No
table(train$Bldg_Type)/dim(train)[1] ##YES Above 95% drop else keep
table(train$MS_SubClass)/dim(train)[1] ## NO
table(train$MS_Zoning)/dim(train)[1] ## NO
table(train$Street)/dim(train)[1] ## YES
table(train$Alley)/dim(train)[1] ## NO
table(train$Lot_Shape)/dim(train)[1] ##NO
table(train$Land_Contour)/dim(train)[1] ## NO
table(train$Utilities)/dim(train)[1] ##YES
table(train$Lot_Config)/dim(train)[1] ##NO
table(train$Condition_1)/dim(train)[1] ##NO
table(train$Condition_2)/dim(train)[1] ##YES
table(train$House_Style)/dim(train)[1] ##NO
table(train$Overall_Qual)/dim(train)[1] ##NO
table(train$Overall_Cond)/dim(train)[1] ##NO
table(train$Roof_Style)/dim(train)[1] ##NO
table(train$Roof_Matl)/dim(train)[1] ##YES
table(train$Exterior_1st)/dim(train)[1] ##NO
table(train$Exterior_2nd)/dim(train)[1] ##NO
table(train$Mas_Vnr_Type)/dim(train)[1] ##NO
table(train$Mas_Vnr_Area)/dim(train)[1] ##NO
##sapplytable(train$Mas_Vnr_Area)/dim(train)[1]
table(train$Exter_Qual)/dim(train)[1] ##NO
table(train$Exter_Cond)/dim(train)[1] ##NO
table(train$Foundation)/dim(train)[1] ##NO
table(train$Bsmt_Qual)/dim(train)[1] ##NO
table(train$Bsmt_Cond)/dim(train)[1] ##NO
table(train$Bsmt_Exposure)/dim(train)[1] ##NO
table(train$BsmtFin_Type_1)/dim(train)[1] ##NO
##table(train$BsmtFin_SF_1)/dim(train)[1] ##NO
table(train$BsmtFin_Type_2)/dim(train)[1] ##NO
table(train$Heating)/dim(train)[1] ##YES
table(train$Heating_QC)/dim(train)[1] ##NO
table(train$Central_Air)/dim(train)[1] ##NO
table(train$Electrical)/dim(train)[1] ##NO
table(train$Kitchen_Qual)/dim(train)[1] ##NO
table(train$Functional)/dim(train)[1] ##NO
table(train$Fireplace_Qu)/dim(train)[1] ##NO
table(train$Garage_Type)/dim(train)[1] ##NO
table(train$Garage_Finish)/dim(train)[1] ##NO
table(train$Garage_Qual)/dim(train)[1] ##NO
table(train$Garage_Cond)/dim(train)[1] ##NO
table(train$Paved_Drive)/dim(train)[1] ##NO
table(train$Pool_QC)/dim(train)[1] ##YES
table(train$Fence)/dim(train)[1] ##NO
table(train$Misc_Feature)/dim(train)[1] ##YES
table(train$Sale_Type)/dim(train)[1] ##NO
table(train$Sale_Condition)/dim(train)[1] ##NO
table(train$Sale_Condition)/dim(train)[1] ##NO
##DROPPING VARIABLES AFTER USING TABLE COMMAND IF 95%
## DROPPING VARIABLES LONGITUDE and LATITUDE
## train changed to train1
train1 = subset(train, select = -c(Land_Slope,Bldg_Type,Street,Utilities,Condition_2,Garage_Yr_Blt,Roof_Matl,Heating,Pool_QC,Misc_Feature,Longitude,Latitude))
test1 = subset(test, select = -c(Land_Slope,Bldg_Type,Street,Utilities,Condition_2,Garage_Yr_Blt,Roof_Matl,Heating,Pool_QC,Misc_Feature,Longitude,Latitude))
names(test1)[71]=c("Sale_Price")
####################
## ONE HOT ENCODING
## Use the caret package: bind train and test, one-hot encode categorical variables, then re-split
#Binding
total = rbind(train1,test1)
## Winsorization Loop through all columns with Numeric data
summary(train1$Lot_Frontage)
train1$Lot_Frontage = Winsorize(train1$Lot_Frontage, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
summary(train1$Lot_Frontage)
train1$Lot_Area = Winsorize(train1$Lot_Area, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
train1$Mas_Vnr_Area = Winsorize(train1$Mas_Vnr_Area, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
## Exter_Qual is a categorical quality rating, so winsorizing it is not meaningful; skipped.
# train1$Exter_Qual = Winsorize(train1$Exter_Qual, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
#                               na.rm = FALSE, type = 7)
train1$BsmtFin_SF_2 = Winsorize(train1$BsmtFin_SF_2, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
train1$Bsmt_Unf_SF = Winsorize(train1$Bsmt_Unf_SF, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
train1$Total_Bsmt_SF = Winsorize(train1$Total_Bsmt_SF, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
train1$First_Flr_SF = Winsorize(train1$First_Flr_SF, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
train1$Second_Flr_SF = Winsorize(train1$Second_Flr_SF, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
train1$Gr_Liv_Area = Winsorize(train1$Gr_Liv_Area, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
## TODO: check whether TotRms_AbvGrd also needs winsorizing
train1$Garage_Area = Winsorize(train1$Garage_Area, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
train1$Wood_Deck_SF = Winsorize(train1$Wood_Deck_SF, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
train1$Open_Porch_SF = Winsorize(train1$Open_Porch_SF, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
### Enclosed_Porch BQ??? Three_season_porch BR??? Screen_Porch BS?? Mo_Sold?? Longitude?? Latitude?? Sale_Price??
train1$Misc_Val = Winsorize(train1$Misc_Val, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
train1$Sale_Price = Winsorize(train1$Sale_Price, minval = NULL, maxval = NULL, probs = c(0.05, 0.95),
na.rm = FALSE, type = 7)
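As the "Winsorization Loop" comment above suggests, the repeated calls can be collapsed into a single loop over the numeric columns. A sketch assuming `DescTools::Winsorize` (the same function and arguments used above); skip any factor columns automatically:

```r
library(DescTools)

# Winsorize every numeric column of train1 at the 5th/95th percentiles
num_cols <- names(train1)[sapply(train1, is.numeric)]
for (col in num_cols) {
  train1[[col]] <- Winsorize(train1[[col]], minval = NULL, maxval = NULL,
                             probs = c(0.05, 0.95), na.rm = FALSE, type = 7)
}
```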
####################
## ONE HOT ENCODING
## caret package: bind train and test by rows, one-hot encode categorical variables, then re-split
#Binding
total = rbind(train1,test1)
### ONE HOT ENCODING
# dummify the data
dmy <- dummyVars(" ~ .", data = total)
trsf <- data.frame(predict(dmy, newdata = total))
#Resplit the data into train and test splits
train2 = trsf[1:nrow(train1),]
test2 = trsf[(nrow(train1)+1):nrow(trsf), -ncol(trsf)]
#########################
## FIT RANDOM FOREST
#########################
### fit log sale price with all variables, plot??
## fit models for train data
rfModel = randomForest(log(Sale_Price) ~ ., data = train2[-test.id, ],
                       importance = TRUE, ntree = 400)
########## Lasso
#my_control <-trainControl(method="cv", number=5)
#lassoGrid <- expand.grid(alpha = 1, lambda = seq(0.001,0.1,by = 0.0005))
#lasso_mod <- train(x=train2, y=train2$Sale_Price[!is.na(train2$Sale_Price)], method='glmnet', trControl= my_control, tuneGrid=lassoGrid)
#lasso_mod$bestTune
########## LASSO MAIN
###lasso <- glmnet(x= as.matrix(train2[,-train2$Sale_Price]), y= as.matrix(train2$Sale_Price), alpha = 1 )
cv.out <- cv.glmnet(as.matrix(train2[,-ncol(train2)]),as.matrix(train2$Sale_Price), alpha = 1)
bestlam <- cv.out$lambda.min
lasso.pred <- predict(cv.out, s = bestlam, newx = as.matrix(test2) )
######## Find RMSE value as square root of mean(y - ypred)^2
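The RMSE mentioned above can be computed directly. Note the random forest was fit on log(Sale_Price) while the lasso was fit on the raw price, so compare on one scale. A sketch where `test.y` stands for the held-out true Sale_Price vector (an assumed name, not defined in this script):

```r
## Sketch: RMSE helper; test.y is the held-out Sale_Price vector (assumed name)
rmse <- function(y, yhat) sqrt(mean((y - yhat)^2))
rmse(log(test.y), log(lasso.pred))   # log scale, matching the random-forest fit above
```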
########### XGBoost
#xgb_grid = expand.grid(
# nrounds = 1000,
# eta = c(0.1, 0.05, 0.01),
# max_depth = c(2, 3, 4, 5, 6),
# gamma = 0,
# colsample_bytree=1,
# min_child_weight=c(1, 2, 3, 4 ,5),
# subsample=1
#)
# xgb_caret <- train(x=train2, y=train2$Sale_Price[!is.na(train2$Sale_Price)], method='xgbTree', trControl= my_control, tuneGrid=xgb_grid)
# xgb_caret$bestTune
######### XGBoost MAIN
#put into the xgb matrix format
#dtrain = xgb.DMatrix(data = as.matrix(train2), label = as.matrix(train2) )
#dtest = xgb.DMatrix(data = as.matrix(test2), label = as.matrix(test2) )
# these are the datasets the rmse is evaluated for at each iteration
#watchlist = list(train=dtrain, test=dtest)
#bst = xgb.train(data = dtrain,
# max.depth = 10,
# eta = 0.1,
# nthread = 2,
# nround = 10000,
# watchlist = watchlist,
# objective = "reg:linear",
# early_stopping_rounds = 50,
# print_every_n = 500)
##### GBM (from Piazza; note this fits gbm, not xgboost)
# train GBM model
gbm.fit <- gbm(
  formula = Sale_Price ~ .,
distribution = "gaussian",
data = train2,
n.trees = 10000,
interaction.depth = 1,
shrinkage = 0.001,
cv.folds = 5,
n.cores = NULL, # will use all cores by default
verbose = FALSE
)
print(gbm.fit)
# create hyperparameter grid
hyper_grid <- expand.grid(
shrinkage = c(.01, .1, .3),
interaction.depth = c(1, 3, 5),
n.minobsinnode = c(5, 10, 15),
bag.fraction = c(.65, .8, 1),
optimal_trees = 0, # a place to dump results
min_RMSE = 0 # a place to dump results
)
# total number of combinations
nrow(hyper_grid)
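The grid above is built but never iterated. A sketch of the search loop: each row fits one gbm with that row's hyperparameters and records the best cross-validated RMSE (slow to run in full; subsample the grid to try it out):

```r
# Sketch: grid search over hyper_grid, recording CV results per row
for (i in seq_len(nrow(hyper_grid))) {
  fit <- gbm(
    formula = Sale_Price ~ .,
    distribution = "gaussian",
    data = train2,
    n.trees = 5000,
    interaction.depth = hyper_grid$interaction.depth[i],
    shrinkage = hyper_grid$shrinkage[i],
    n.minobsinnode = hyper_grid$n.minobsinnode[i],
    bag.fraction = hyper_grid$bag.fraction[i],
    cv.folds = 5,
    verbose = FALSE
  )
  hyper_grid$optimal_trees[i] <- which.min(fit$cv.error)
  hyper_grid$min_RMSE[i]      <- sqrt(min(fit$cv.error))
}
hyper_grid[order(hyper_grid$min_RMSE), ][1:5, ]  # best combinations first
```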
############ TUNING FOR GBM
# train GBM model
gbm.fit2 <- gbm(
  formula = Sale_Price ~ .,
distribution = "gaussian",
data = train2,
n.trees = 5000,
interaction.depth = 3,
shrinkage = 0.1,
cv.folds = 5,
n.cores = NULL, # will use all cores by default
verbose = FALSE
)
# find index for n trees with minimum CV error
min_MSE <- which.min(gbm.fit2$cv.error)
# get MSE and compute RMSE
sqrt(gbm.fit2$cv.error[min_MSE])
## [1] 23112.1
# plot loss function as a result of n trees added to the ensemble
gbm.perf(gbm.fit2, method = "cv")
# get MSE and compute RMSE
sqrt(min(gbm.fit$cv.error))
#dtest = xgb.DMatrix(data = as.matrix( test2 ))
#test the model on truly external data
#y_hat_valid = predict(bst, dtest)
###For part 1, use xgboost, fit data accordingly
###For part 2 split is different, kaggle, piazza, html
### Can we use Lasso for Part1 ? Use data.matrix as an input instead of directly.
#n = 50;
#X1 = sample(c("A", "B", "C"), n, replace=TRUE)
#X1 = as.factor(X1)
#X2 = sample(1:4, n, replace=TRUE)
#X2 = as.factor(X2)
# obtain the one hot coding for X1 and X2
#fake.y = rep(0, n)
#tmp = model.matrix(lm(fake.y ~ X1 + X2))
#tmp = tmp[, -1] # remove the 1st column (the intercept) of tmp
#Lot_Frontage_W = winsorize(train$Lot_Frontage, standardized = FALSE, centerFun = median, scaleFun = mad, const = 2, return = c("data", "weights"))
#hist(train$Lot_Frontage)
#hist(Lot_Frontage_W)
## Winsorize Command Numeric data, Standardized = FALSE, Put all default values
## One Hot Encoding - Linear regression lm sale_price 0 1 0, levels - Design matrix(Features obtained from design matrix) - glmnet - , NaN Values - Impute using median, Preprocessing Steps, glmnet??
## Garage belt drop, test also drop columns
##Random Forest, parameter tuning -- hyperparameters, no of trees, no of candidates, Initially use random values or default values make a list, highest accuracy, 1 . RF, 2. linear regression, XG Boost
## Piazza 10 random splits
#The response variable; SalePrice
#ggplot(data=all[!is.na(all$SalePrice),], aes(x=SalePrice)) +
# geom_histogram(fill="blue", binwidth = 10000) +
#scale_x_continuous(breaks= seq(0, 800000, by=100000), labels = comma)
#summary(all$SalePrice)
#### LASSO FUNCTION (soft-thresholding update for one coordinate)
#one_step_lasso = function(r, x, lam){
#  xx = sum(x^2)
#  xr = sum(r*x)
#  b = (abs(xr) - lam/2)/xx
#  b = sign(xr)*ifelse(b > 0, b, 0)
#  return(b)
#}
#### FUNCTION TO IMPLEMENT COORDINATE DESCENT