# File: R/pmsampsize.R (repo: medical-imaging/pmsampsize)
#' pmsampsize
#' - Calculates the minimum sample size required for developing a multivariable prediction model
#'
#' \code{pmsampsize} computes the minimum sample size required for the development of a new
#' multivariable prediction model using the criteria proposed by Riley \emph{et al}. 2018. \code{pmsampsize}
#' can be used to calculate the minimum sample size for the development of models with
#' continuous, binary or survival (time-to-event) outcomes. Riley \emph{et al}. lay out a series of
#' criteria the sample size should meet. These aim to minimise the overfitting and to ensure
#' precise estimation of key parameters in the prediction model. \cr \cr
#' For continuous outcomes, there are four criteria: \cr
#' i) small overfitting defined by an expected shrinkage of predictor effects by 10\% or less, \cr
#' ii) small absolute difference of 0.05 in the model's apparent and adjusted R-squared value, \cr
#' iii) precise estimation of the residual standard deviation, and \cr
#' iv) precise estimation of the average outcome value. \cr \cr
#' The sample size calculation requires the user to pre-specify (e.g. based on previous evidence)
#' the anticipated R-squared of the model, and the average outcome value and standard deviation
#' of outcome values in the population of interest. \cr \cr
#' For binary or survival (time-to-event) outcomes, there are three criteria: \cr
#' i) small overfitting defined by an expected shrinkage of predictor effects by 10\% or less, \cr
#' ii) small absolute difference of 0.05 in the model's apparent and adjusted Nagelkerke's R-squared
#' value, and \cr
#' iii) precise estimation (within +/- 0.05) of the average outcome risk in the
#' population for a key timepoint of interest for prediction.
#'
#'
#' @author Joie Ensor (Keele University, j.ensor@keele.ac.uk),
#' @author Emma C. Martin (University of Leicester, emma.martin@le.ac.uk),
#' @author Richard D. Riley (Keele University, r.riley@keele.ac.uk)
#'
#' @param type specifies the type of analysis for which sample size is being calculated
#' \itemize{
#' \item \code{"c"} specifies sample size calculation for a prediction model with a continuous outcome
#' \item \code{"b"} specifies sample size calculation for a prediction model with a binary outcome
#' \item \code{"s"} specifies sample size calculation for a prediction model with a survival (time-to-event) outcome
#' }
#' @param rsquared specifies the expected value of the (Cox-Snell) R-squared of the new model,
#' where R-squared is the percentage of variation in outcome values explained by the model.
#' For example, the user may input the value of the (Cox-Snell) R-squared reported for a
#' previous prediction model study in the same field. If taking a value from a previous
#' prediction model development study, users should input the model's adjusted R-squared
#' value, not the apparent R-squared value, as the latter is optimistic (biased). However,
#' if taking the R-squared value from an external validation of a previous model, the
#' apparent R-squared can be used (as the validation data was not used for development, and
#' so R-squared apparent is then unbiased). Note that for binary and survival outcome
#' models, the Cox-Snell R-squared value is required; this is the generalised version of
#' the well-known R-squared for continuous outcomes, based on the likelihood. The papers
#' by Riley et al. (see references) outline how to obtain the Cox-Snell R-squared value
#' from published studies if they are not reported, using other information (such as the
#' C-statistic or Nagelkerke's R-squared). Users should be conservative with their chosen
#' R-squared value; for example, by taking the R-squared value from a previous model, even
#' if they hope their new model will improve performance.
#'
#' @param parameters specifies the number of candidate predictor parameters for potential
#' inclusion in the new prediction model. Note that this may be larger than the number of
#' candidate predictors, as categorical and continuous predictors often require two or more
#' parameters to be estimated.
#'
#' @param shrinkage specifies the level of shrinkage desired at internal validation after
#' developing the new model. Shrinkage is a measure of overfitting, and can range from 0 to 1,
#' with higher values denoting less overfitting. We recommend a shrinkage = 0.9 (the
#' default in \code{pmsampsize}), which indicates that the predictor effect (beta coefficients) in
#' the model would need to be shrunk by 10\% to adjust for overfitting. See references
#' below for further information.
#'
#' @param prevalence (binary outcome option) specifies the overall outcome proportion
#' (for a prognostic model) or
#' overall prevalence (for a diagnostic model) expected within the model development
#' dataset. This should be derived based on previous studies in the same population.
#'
#' @param rate (survival outcome option) specifies the overall event rate in the population of interest,
#' for example as obtained from a previous study, for the survival outcome of interest.
#'
#' @param timepoint (survival outcome option) specifies the timepoint of interest for prediction.
#'
#' @param meanfup (survival outcome option) specifies the average (mean) follow-up time
#' anticipated for individuals in the model development dataset,
#' for example as taken from a previous study in the population of interest.
#'
#' @param intercept (continuous outcome options) specifies the average outcome value in the population of
#' interest e.g. the average blood pressure, or average pain score.
#' This could be based on a previous study, or on clinical knowledge.
#'
#' @param sd (continuous outcome options) specifies the standard deviation (SD) of
#' outcome values in the population e.g.
#' the SD for blood pressure in patients with all other predictors set to the average.
#' This could again be based on a previous study, or on clinical knowledge.
#'
#' @param mmoe (continuous outcome options) multiplicative margin of error (MMOE)
#' acceptable for calculation of the
#' intercept. The default is a MMOE of 10\%. Confidence interval for the intercept will be
#' displayed in the output for reference. See references below for further information.
#'
#' @import stats
#'
#' @examples
#' ## Examples based on those included in two papers by Riley et al.
#' ## published in Statistics in Medicine (2018).
#'
#' ## Binary outcomes (Logistic prediction models)
#' # Use pmsampsize to calculate the minimum sample size required to develop a
#' # multivariable prediction model for a binary outcome using 24 candidate
#' # predictor parameters. Based on previous evidence, the outcome prevalence is
#' # anticipated to be 0.174 (17.4%) and a lower bound (taken from the adjusted
#' # Cox-Snell R-squared of an existing prediction model) for the new model's
#' # R-squared value is 0.288
#'
#' pmsampsize(type = "b", rsquared = 0.288, parameters = 24, prevalence = 0.174)
#'
#' ## Survival outcomes (Cox prediction models)
#' # Use pmsampsize to calculate the minimum sample size required for developing
#' # a multivariable prediction model with a survival outcome using 25 candidate
#' # predictors. We know an existing prediction model in the same field has an
#' # R-squared adjusted of 0.051. Further, in the previous study the mean
#' # follow-up was 2.07 years, and overall event rate was 0.065. We select a
#' # timepoint of interest for prediction using the newly developed model of 2
#' # years
#'
#' pmsampsize(type = "s", rsquared = 0.051, parameters = 25, rate = 0.065,
#' timepoint = 2, meanfup = 2.07)
#'
#' ## Continuous outcomes (Linear prediction models)
#' # Use pmsampsize to calculate the minimum sample size required for developing
#' # a multivariable prediction model for a continuous outcome (here, FEV1 say),
#' # using 25 candidate predictors. We know an existing prediction model in the
#' # same field has an R-squared adjusted of 0.2, and that FEV1 values in the
#' # population have a mean of 1.9 and SD of 0.6
#'
#' pmsampsize(type = "c", rsquared = 0.2, parameters = 25, intercept = 1.9, sd = 0.6)
#'
#' @references Riley RD, Snell KIE, Ensor J, Burke DL, Harrell FE, Jr., Moons KG, Collins GS.
#' Minimum sample size required for developing a multivariable prediction model: Part I continuous outcomes.
#' \emph{Statistics in Medicine}. 2018 (in-press). doi: 10.1002/sim.7993
#'
#' @references Riley RD, Snell KIE, Ensor J, Burke DL, Harrell FE, Jr., Moons KG, Collins GS.
#' Minimum sample size required for developing a multivariable prediction model:
#' Part II binary and time-to-event outcomes.
#' \emph{Statistics in Medicine}. 2018 (in-press). doi: 10.1002/sim.7992
#' @export
pmsampsize <- function(type,
rsquared,
parameters,
shrinkage = 0.9,
prevalence = NA,
rate = NA,
timepoint = NA,
meanfup = NA,
intercept = NA,
sd = NA,
mmoe=1.1) {
# error checking
pmsampsize_errorcheck(type=type,rsquared=rsquared,parameters=parameters,shrinkage=shrinkage,prevalence=prevalence,
rate=rate,timepoint=timepoint,meanfup=meanfup,intercept=intercept,sd=sd,mmoe=mmoe)
# choose function based on analysis type
if (type == "c") out <- pmsampsize_cont(rsquared=rsquared,parameters=parameters,intercept=intercept,
sd=sd,shrinkage=shrinkage,mmoe=mmoe)
if (type == "b") out <- pmsampsize_bin(rsquared=rsquared,parameters=parameters,prevalence=prevalence,
shrinkage=shrinkage)
if (type == "s") out <- pmsampsize_surv(rsquared=rsquared,parameters=parameters,rate=rate,
timepoint=timepoint,meanfup=meanfup,shrinkage=shrinkage,mmoe=mmoe)
est <- out
class(est) <- "pmsampsize"
est
}
#' @export
print.pmsampsize <- function(x, ...) {
if (x$type == "continuous") {
cat("NB: Assuming 0.05 acceptable difference in apparent & adjusted R-squared \n")
cat("NB: Assuming MMOE <= 1.1 in estimation of intercept & residual standard deviation \n")
cat("SPP - Subjects per Predictor Parameter \n \n")
}
if (x$type == "survival") {
cat("NB: Assuming 0.05 acceptable difference in apparent & adjusted R-squared \n")
cat("NB: Assuming 0.05 margin of error in estimation of overall risk at time point =", x$timepoint, " \n")
cat("NB: Events per Predictor Parameter (EPP) assumes overall event rate =", x$rate, " \n \n")
}
if (x$type == "binary") {
cat("NB: Assuming 0.05 acceptable difference in apparent & adjusted R-squared \n")
cat("NB: Assuming 0.05 margin of error in estimation of intercept \n")
cat("NB: Events per Predictor Parameter (EPP) assumes prevalence =", x$prevalence, " \n \n")
}
print(x$results_table)
if (x$type == "continuous") {
cat(" \n Minimum sample size required for new model development based on user inputs =", x$sample_size, " \n \n ")
cat("* 95% CI for intercept = (",x$int_lci,", ", x$int_uci, "), for sample size n = ",x$sample_size,sep = "")
}
if (x$type == "survival") {
cat(" \n Minimum sample size required for new model development based on user inputs = ", x$sample_size, ", \n ",sep = "")
cat("corresponding to", x$tot_per_yrs_final, "person-years of follow-up, with", ceiling(x$events), "outcome events \n ")
cat("assuming an overall event rate =", x$rate, "and therefore an EPP =", x$EPP, " \n \n ")
cat("* 95% CI for overall risk = (",x$int_lci,", ", x$int_uci, "), for true value of ", x$int_cuminc, " and sample size n = ",x$sample_size,sep = "")
}
if (x$type == "binary") {
cat(" \n Minimum sample size required for new model development based on user inputs = ", x$sample_size, ", \n ",sep = "")
cat("with ", ceiling(x$events), " events (assuming an outcome prevalence = ", x$prevalence, ") and an EPP = ", x$EPP, " \n \n ",sep = "")
}
}
#' @export
summary.pmsampsize <- function(object, ...) {
if (object$type == "continuous") {
cat("NB: Assuming 0.05 acceptable difference in apparent & adjusted R-squared \n")
cat("NB: Assuming MMOE <= 1.1 in estimation of intercept & residual standard deviation \n")
cat("SPP - Subjects per Predictor Parameter \n \n")
}
if (object$type == "survival") {
cat("NB: Assuming 0.05 acceptable difference in apparent & adjusted R-squared \n")
cat("NB: Assuming 0.05 margin of error in estimation of overall risk at time point =", object$timepoint, " \n")
cat("NB: Events per Predictor Parameter (EPP) assumes overall event rate =", object$rate, " \n \n")
}
if (object$type == "binary") {
cat("NB: Assuming 0.05 acceptable difference in apparent & adjusted R-squared \n")
cat("NB: Assuming 0.05 margin of error in estimation of intercept \n")
cat("NB: Events per Predictor Parameter (EPP) assumes prevalence =", object$prevalence, " \n \n")
}
print(object$results_table)
if (object$type == "continuous") {
cat(" \n Minimum sample size required for new model development based on user inputs =", object$sample_size, " \n \n ")
cat("* 95% CI for intercept = (",object$int_lci,", ", object$int_uci, "), for sample size n = ",object$sample_size,sep = "")
}
if (object$type == "survival") {
cat(" \n Minimum sample size required for new model development based on user inputs = ", object$sample_size, ", \n ",sep = "")
cat("corresponding to", object$tot_per_yrs_final, "person-years of follow-up, with", ceiling(object$events), "outcome events \n ")
cat("assuming an overall event rate =", object$rate, "and therefore an EPP =", object$EPP, " \n \n ")
cat("* 95% CI for overall risk = (",object$int_lci,", ", object$int_uci, "), for true value of ", object$int_cuminc, " and sample size n = ",object$sample_size,sep = "")
}
if (object$type == "binary") {
cat(" \n Minimum sample size required for new model development based on user inputs = ", object$sample_size, ", \n ",sep = "")
cat("with", ceiling(object$events), "events (assuming an outcome prevalence =", object$prevalence, ") and an EPP =", object$EPP, " \n \n ")
}
}
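# Illustrative usage (a hedged sketch, kept as comments so nothing runs on
# package load): objects returned by pmsampsize() carry class "pmsampsize",
# so print() and summary() dispatch to the S3 methods defined above.
# res <- pmsampsize(type = "c", rsquared = 0.2, parameters = 25,
#                   intercept = 1.9, sd = 0.6)
# print(res)    # results table plus the minimum required sample size
# summary(res)  # same information via the summary method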
% File: man/fit.associated.genes.Rd (repo: hms-dbmi/crestree)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/crestree.functions.R
\name{fit.associated.genes}
\alias{fit.associated.genes}
\title{Model gene expression levels as a function of tree positions.}
\usage{
fit.associated.genes(r, X, n.map = 1,
n.cores = parallel::detectCores()/2, method = "ts", knn = 1,
gamma = 1.5)
}
\arguments{
\item{r}{pptree object}
\item{X}{expression matrix of genes (rows) vs cells (columns)}
\item{n.map}{number of probabilistic cell-to-tree mappings to use}
\item{method}{method of modeling. Currently only splines with option 'ts' are supported.}
\item{knn}{use expression averaging among knn cells}
\item{gamma}{stringency of penalty.}
}
\value{
modified pptree object with new fields r$fit.list, r$fit.summary and r$fit.pattern. r$fit.pattern contains matrix of fitted gene expression levels
}
\description{
Model gene expression levels as a function of tree positions.
}
% File: man/dtruncnorm.Rd (repo: Losses/mlisi)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/trunc-1.R
\name{dtruncnorm}
\alias{dtruncnorm}
\title{Truncated normal distribution}
\usage{
dtruncnorm(x, mu = 0, sigma = 1, a = -Inf, b = Inf)
}
\arguments{
\item{mu}{mean of the untruncated distribution}

\item{sigma}{standard deviation of the untruncated distribution}

\item{a}{minimum value}

\item{b}{maximum value}
}
\value{
probability density
}
\description{
Density function of the truncated normal distribution. See \href{https://en.wikipedia.org/wiki/Truncated_normal_distribution}{Wikipedia}
}
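% Illustrative example added for clarity; the argument values below are
% hypothetical and follow the signature shown in \usage.
\examples{
## density of a standard normal truncated to [0, Inf), evaluated at x = 0.5;
## by the truncated-normal formula this equals dnorm(0.5) / (1 - pnorm(0))
dtruncnorm(0.5, mu = 0, sigma = 1, a = 0, b = Inf)
}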
# File: xml2_test.r (repo: thefactmachine/automated_svg)
library(xml2)
# https://cran.r-project.org/web/packages/xml2/vignettes/modification.html
# You can avoid this by explicitly selecting the text node.
x <- read_xml("<p>This is some text. This is <b>bold!</b></p>")
text_only <- xml_find_all(x, "//text()")
# xml_text returns a character vector
x <- read_xml("<p>This is some text. This is <b>bold!</b></p>")
xml_text(x)
x <- read_xml("<p>This is some text. This is <b>bold!</b></p>")
xml_structure(x)
# add an attribute
x <- read_xml("<a href='invalid!'>xml2</a>")
xml_attr(x, "href") <- "https://github.com/r-lib/xml2"
xml_attr(x, "href")
cat(as.character(x))
# add many attributes
x <- read_xml("<svg />")
xml_attrs(x) <- c(id = "xml2", href = "https://github.com/r-lib/xml2")
xml_attrs(x)
cat(as.character(x))
# how to add a namespace
xml_attr(x, "xmlns:bar") <- "http://bar"
cat(as.character(x))
# get and set a name
x <- read_xml("<a><b/></a>")
xml_name(x)
xml_name(x) <- "mark"
cat(as.character(x))
# ========================================================
# replace existing nodes
x <- read_xml("<parent><child>1</child><child>2<child>3</child></child></parent>")
cat(as.character(x))
children <- xml_children(x)
cat(as.character(children))
# t1 is child 1
t1 <- children[[1]]
cat(as.character(t1))
t2 <- children[[2]]
cat(as.character(t2))
# t3 is child 3
t3 <- xml_children(children[[2]])[[1]]
cat(as.character(t3))
xml_replace(t1, t3)
# child 1 has become child 3
cat(as.character(x))
# ========================================================
# add a child
x <- read_xml("<parent><child>1</child><child>2<child>3</child></child></parent>")
cat(as.character(x))
children <- xml_children(x)
t1 <- children[[1]]
t2 <- children[[2]]
t3 <- xml_children(children[[2]])[[1]]
cat(as.character(t3))
xml_add_child(t1, t3)
cat(as.character(x))
xml_add_child(t1, read_xml("<test/>"))
# ========================================================
d <- xml_new_root("sld",
xmlns = "http://www.o.net/sld",
"xmlns:ogc" = "http://www.o.net/ogc",
"xmlns:se" = "http://www.o.net/se",
version = "1.1.0") %>%
xml_add_child("layer") %>%
xml_add_child("se:Name", "My Layer") %>%
xml_root()
d
cat(as.character(d))
root <- xml_new_document() %>% xml_add_child("svg")
root %>%
xml_add_child("a1", x = "1", y = "2") %>%
xml_add_child("b") %>%
xml_add_child("c")
root %>%
xml_add_child("a2") %>%
xml_add_sibling("a3") %>%
invisible()
cat(as.character(root))
#> <?xml version="1.0"?>
#> <svg><a1 x="1" y="2"><b><c></c></b></a1><a2><a3></a3></a2></svg>
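# ========================================================
# (added sketch) remove a node: xml_remove() detaches it from the tree,
# complementing the replace/add examples above
x <- read_xml("<a><b/><c/></a>")
xml_remove(xml_child(x, "b"))
cat(as.character(x))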
# File: ExPosition/R/coreCA.R (repo: ingted/R-Examples)
coreCA <-
function(DATA,masses=NULL,weights=NULL,hellinger=FALSE,symmetric=TRUE,decomp.approach='svd',k=0){
DATA_dimensions = dim(DATA)
###PERHAPS ALL OF THIS SHOULD OCCUR IN THE CA FUNCTION?
mRP<-makeRowProfiles(DATA,weights=weights,masses=masses,hellinger=hellinger)
#rowCenter <- mRP$rowCenter
#rowProfiles <- mRP$rowProfiles
#deviations <- mRP$deviations
#masses <- mRP$masses
#weights <- mRP$weights
#pdq_results <- genPDQ(M=masses,deviations,W=weights,decomp.approach=decomp.approach,k=k)
pdq_results <- genPDQ(datain=mRP$deviations,M=mRP$masses,W=mRP$weights,is.mds=FALSE,decomp.approach=decomp.approach,k=k)
#Rows, F
fi <- pdq_results$p * matrix(pdq_results$Dv,nrow(pdq_results$p),ncol(pdq_results$p),byrow=TRUE)
rownames(fi) <- rownames(DATA)
di <- rowSums(fi^2)
ri <- repmat((1/di),1,pdq_results$ng) * (fi^2)
ri <- replace(ri,is.nan(ri),0)
ci <- repmat(mRP$masses,1,pdq_results$ng) * (fi^2)/repmat(t(pdq_results$Dv^2),DATA_dimensions[1],1)
ci <- replace(ci,is.nan(ci),0)
di <- as.matrix(di)
###this could be cleaned up. But, after I overhaul CA on the whole.
fj <- repmat(mRP$weights,1,pdq_results$ng) * pdq_results$q * matrix(pdq_results$Dv,nrow(pdq_results$q),ncol(pdq_results$q),byrow=TRUE)
rownames(fj) <- colnames(DATA)
if(hellinger){
cj <- (fj^2)/t(repmat(colSums(fj^2),1,nrow(fj)))
}else{
cj <- repmat(mRP$rowCenter,1,pdq_results$ng) * (fj^2)/repmat(t(pdq_results$Dv^2),DATA_dimensions[2],1)
}
if(!symmetric){
fj <- fj * matrix(pdq_results$Dv^-1,nrow(pdq_results$q),ncol(pdq_results$q),byrow=TRUE)
}
cj <- replace(cj,is.nan(cj),0)
dj <- rowSums(fj^2)
rj <- repmat((1/dj),1,pdq_results$ng) * (fj^2)
rj <- replace(rj,is.nan(rj),0)
dj <- as.matrix(dj)
return(list(fi=fi,di=di,ci=ci,ri=ri,fj=fj,cj=cj,rj=rj,dj=dj,t=pdq_results$tau,eigs=pdq_results$Dv^2,M=mRP$masses,W=mRP$weights,c= mRP$rowCenter,pdq=pdq_results,X=mRP$deviations,hellinger=hellinger,symmetric=symmetric))
}
|
% File: man/shi.Rd (repo: CIP-RIU/sbformula, MIT license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/shi.R
\name{shi}
\alias{shi}
\title{Formula for calculating the Harvest sowing index - Survival (SHI)}
\usage{
shi(noph, nops)
}
\arguments{
\item{noph}{Number of plants harvested}
\item{nops}{Number of plants planted}
}
\value{
A vector that contains the harvest sowing index - Survival 'shi'
}
\description{
Formula for calculating the Harvest sowing index - Survival (SHI)
}
\details{
Formula for calculating the harvest sowing index - Survival
}
\seealso{
Other potato,Yield; Late Blight: \code{\link{pph}}
}
\author{
Omar Benites, Raul Eyzaguirre
}
\keyword{Blight}
\keyword{Late}
\keyword{agronomy,harvest,quantitative-continuous,percentage,Yield;}
\keyword{potato,}
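% Illustrative example added for clarity; the input vectors are hypothetical.
\examples{
## number of plants harvested vs planted for two plots
shi(noph = c(45, 48), nops = c(50, 50))
}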
# File: R/gather_2015_data.r (repo: sbfnk/measles.katanga, MIT license)
##' Gather 2015 data for cases, coverage and campaigns
##'
##' @return data frame
##' @author Sebastian Funk
##' @importFrom dplyr filter group_by summarise ungroup select mutate summarise ungroup right_join left_join
##' @importFrom tidyr replace_na
##' @importFrom lubridate year
##' @keywords internal
gather_2015_data <- function()
{
cases_2015 <- katanga_measles_cases %>%
filter(year(week_start)==2015) %>%
group_by(health_zone) %>%
summarise(cases=sum(cases)) %>%
ungroup() %>%
select(health_zone, cases)
coverage_2015 <- katanga_mcv_admin %>%
filter(between(year, 2010, 2015)) %>%
mutate(coverage=doses/target) %>%
group_by(health_zone) %>%
summarise(mean_coverage=mean(coverage, na.rm=TRUE)) %>%
ungroup() %>%
select(health_zone, mean_coverage)
campaigns_2015 <- katanga_2015_mvc %>%
group_by(health_zone) %>%
summarise(campaign="yes") %>%
ungroup() %>%
right_join(populations %>% select(health_zone), by="health_zone") %>%
replace_na(list(campaign="no")) %>%
mutate(campaign=factor(campaign, levels=c("no", "yes")))
df <- cases_2015 %>%
right_join(coverage_2015, by="health_zone") %>%
right_join(dhs_coverage_estimates, by="health_zone") %>%
filter(!is.na(mean_coverage)) %>%
filter(!is.na(coverage_estimate)) %>%
right_join(campaigns_2015, by="health_zone") %>%
left_join(populations, by="health_zone") %>%
replace_na(list(cases=0)) %>%
mutate(incidence=cases/population)
return(df)
}
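# Illustrative usage (a hedged sketch, kept as comments so nothing runs on
# package load): the assembled frame has one row per health zone, combining
# cases, administrative and DHS coverage, campaign status, population and the
# derived incidence = cases / population.
# df <- gather_2015_data()
# head(df[, c("health_zone", "cases", "population", "incidence", "campaign")])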
# File: plot4.R (repo: zerolocker/ExData_Plotting1)
dat <- read.csv('household_power_consumption.txt', header = TRUE, sep = ';', na.strings = c('?'))
# Note that the argument "na.strings=c('?')" is a MUST, because missing values in the dataset are coded as "?", otherwise '?' char will make R treat the numeric column as factor(i.e. class(dat$Global_active_power) retruns [1] "factor")
dat$DateTime <- strptime(paste(dat$Date, dat$Time), "%d/%m/%Y %H:%M:%S")
target=subset(dat,dat[,1] %in% c("1/2/2007","2/2/2007"))
par(mfrow = c(2, 2))
# top-left panel: Global Active Power
plot(target$DateTime, target$Global_active_power,
     type = 'l', xlab = "", ylab = "Global Active Power")
plot(target$DateTime, target$Voltage, ylab="Voltage", xlab="datetime", type='l')
plot(target$DateTime,target$Sub_metering_1,
type='n',xlab="",ylab="Energy sub metering")
lines(target$DateTime, target$Sub_metering_1)
lines(target$DateTime, target$Sub_metering_2, col='red')
lines(target$DateTime, target$Sub_metering_3, col='blue')
legend('topright',
c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"),
lty = c(1,1,1),
col = c('black', 'red', 'blue'),
bty='n'
)
plot(target$DateTime, target$Global_reactive_power, xlab = 'datetime', ylab = 'Global_reactive_power', type = 'l')
dev.copy(png,file="plot4.png")
dev.off()
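# (added note, hedged): dev.copy() re-renders the current screen device into
# the png device; an equivalent pattern is to open the file device first,
# e.g.:
# png("plot4.png", width = 480, height = 480)
# ## ...same plot()/lines()/legend() calls as above...
# dev.off()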
# File: full_merge_11062018.r (repo: mwibbenmeyer/sorting-wui)
#---------------------------------------------------------------------------------------------------
# WORKING DIRECTORY
#---------------------------------------------------------------------------------------------------
#setwd("C:/HMDA_R_Files/")
wd <- "C:/HMDA_R_Files/"
#---------------------------------------------------------------------------------------------------
# PREAMBLE (PACKAGES NEEDED TO RUN SCRIPT)
#---------------------------------------------------------------------------------------------------
devtools::install_github('walkerke/tigris')
library(gmodels)
library(readr)
library(RODBC)
library(tidyverse)
library(ggmap)
library(DT)
library(knitr)
library(rgdal)
library(raster)
library(phonics)
library(UScensus2000tract)
library(tmap)
library(XML)
library(sf)
library(sp)
library(rgeos)
library(spatialEco)
library(tigris)
library(magrittr)
library(rgdal)
library(maptools)
#library(plyr)
library(dplyr)
library(lubridate)
library(base)
library(jsonlite)
library(httr)
library(ggplot2)
library(phonics)
library(data.table)
library(gtools)
#---------------------------------------------------------------------------------------------------
# LOOP SETUP
#---------------------------------------------------------------------------------------------------
# Set system time for checking performance after running
start.time <- Sys.time()
# Make the database connection for retrieving Zillow data via SQL
database <- odbcConnect("sql_test")
# HMDA data
hmda2 <- readRDS("C:/HMDA_Raw/intermediate_merges/hmda_groups_1_to_2.rds")
hmda3 <- readRDS("C:/HMDA_Raw/intermediate_merges/hmda_group_3.rds")
hmda3 <-hmda3[ c("action_taken_name", "agency_code")]
hmda4 <- readRDS("C:/HMDA_Raw/intermediate_merges/hmda_group_4.rds")
hmda4 <-hmda4[ c( "as_of_year")]
hmda5 <- readRDS("C:/HMDA_Raw/intermediate_merges/hmda_group_5.rds")
hmda5 <-hmda5[ c("denial_reason_1")]
#hmda6 <- readRDS("C:/HMDA_Raw/intermediate_merges/hmda_group_6.rds")
hmda7 <- readRDS("C:/HMDA_Raw/intermediate_merges/hmda_group_7.rds")
hmda7 <- hmda7[c("lien_status" , "loan_purpose")]
#hmda8 <- readRDS("C:/HMDA_Raw/intermediate_merges/hmda_group_8.rds")
#hmda9 <- readRDS("C:/HMDA_Raw/intermediate_merges/hmda_group_9.rds")
hmda10 <- readRDS("C:/HMDA_Raw/intermediate_merges/hmda_group_10.rds")
hmda10 <-hmda10[ c("state_abbr")]
hmda_final <- cbind(hmda2 , hmda3, hmda4, hmda5, hmda7, hmda10)
rm(hmda2)
rm(hmda3)
rm(hmda4)
rm(hmda5)
rm(hmda7)
rm(hmda10)
write.csv(hmda_final, file = "hmda_final.csv")
save(hmda_final, file = "C:/HMDA_R_Files/hmda_final.RData")
#FULL HMDA DATA
#HMDA 2007-2016
load("C:/HMDA_R_Files/hmda_final.RData")
hmda_final07_16 <- as.data.frame(hmda_final)
hmda_final07_16$rate_spread <- hmda_final07_16$loan_purpose_name <- hmda_final07_16$action_taken_name <- NULL
#HMDA 1995-2001
load("C:/HMDA_Raw/hmda_final1995_2001.RData")
hmda_final95_01 <- as.data.frame(hmda_final)
hmda_final95_01$rate_spread <- hmda_final95_01$loan_purpose_name <- hmda_final95_01$action_taken_name <- NULL
#HMDA 2002-2006
##bad for some reason
library(haven)   # read_dta() comes from haven, so it must be attached before this call
hmda_final <- read_dta("C:/Users/billinsb/Dropbox/BBGL_Sealevels/Data/HMDA/hmda2002_2006.dta")
#load("C:/HMDA_Raw/hmda_final2002_2006.RData")
hmda_final02_06 <- as.data.frame(hmda_final)
hmda_final02_06$rate_spread <- hmda_final02_06$loan_purpose_name <- hmda_final02_06$action_taken_name <- NULL
state_code <- hmda_final95_01[c("state_code", "state_abbr") ]
state_code <- distinct(state_code)
hmda_final02_06 <- merge(hmda_final02_06, state_code, by.x = c("state_code"),
by.y = c("state_code") , allow.cartesian = FALSE, all.x = TRUE)
##need state code abbreviations
rm(hmda_final)
#RESET FINAL DATASET OF OBS
mergestats_all_hmda <- data.frame(
state = c(),
year = c(),
numb_trans = c(),
numb_trans_loans = c(),
numb_trans_loans_matched = c()
)
mergestats_all_hmda$state = as.factor(mergestats_all_hmda$state)
mergestats_all_hmda$year = as.numeric(mergestats_all_hmda$year)
mergestats_all_hmda$numb_trans = as.factor(mergestats_all_hmda$numb_trans)
mergestats_all_hmda$numb_trans_loans = as.factor(mergestats_all_hmda$numb_trans_loans)
mergestats_all_hmda$numb_trans_loans_matched = as.factor(mergestats_all_hmda$numb_trans_loans_matched)
#RESET FINAL DATASET
final <- data.frame(
state = c(),
year = c()
)
# MD - no lender names but can do other matches
#No loan amounts - VT
##Missing loan amount in some years/few loan amount values - GA (2016 - none), WY
#state.id <- c("AL", "AZ", "AR", "CA", "CO", "CT", "DE", "DC", "FL", "GA", "ID",
# "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD", "MA", "MI", "MN", "MS", "MO",
# "MT", "NE", "NV", "NH", "NJ", "NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA",
# "RI", "SC", "SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI", "WY")
# MD - no lender names but can do other matches
#No loan amounts - VT
##Missing loan amount in some years/few loan amount values - GA (2016 - none), WY
#Bring in older hmda data
memory.limit(size=5120000000)
library(haven)
state.id <- c("AL", "AZ", "AR", "CA", "CO", "CT", "DE", "DC", "FL", "GA", "ID",
"IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD", "MA", "MI", "MN" , "MS", "MO",
"MT", "NE", "NV", "NH", "NJ", "NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA",
"RI", "SC", "SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI", "WY")
#MD, MN has no lendername.
year.id <- c(1995:2016)
# Define list to store elements of final data.table
list <- vector("list", length(state.id)*length(year.id))
#---------------------------------------------------------------------------------------------------
# LOOP
#---------------------------------------------------------------------------------------------------
tigris_cache_dir("C:/HMDA_R_Files/")
readRenviron('~/.Renviron')
options(tigris_use_cache = TRUE)
#options(tigris_class = "sf" )
library(sqldf)
#options(tigris_use_cache = FALSE)
for (i in 1:length(state.id)) {
tracts2000 <- tracts(state.id[[i]], county = NULL, cb = TRUE, year = 2000, refresh = TRUE)
tracts1990 <- tracts(state.id[[i]], county = NULL, cb = TRUE, year = 1990, refresh = TRUE)
tracts2010 <- tracts(state.id[[i]], county = NULL, cb = TRUE, year = 2010, refresh = TRUE)
tracts1990$TRACTCE90 <- paste(tracts1990$TRACTBASE,tracts1990$TRACTSUF , sep="")
for (j in 1:length(year.id)) {
print("YEAR.ID and STATE.ID are")
print(paste(year.id[[j]], " and ", state.id[[i]], sep = ""))
# Pull in data needed to run code
# Zillow Transaction data (two parts, plus a merge)
zillowSQL1 <- (sqlQuery(database, paste("
SELECT
TransId, FIPS, RecordingDate, OccupancyStatusStndCode, LenderName,
SalesPriceAmount, LoanAmount, InitialInterestRate
FROM dbo.transMain
WHERE RecordingDate >= '", year.id[j], "-01-01'", "
AND RecordingDate <= '", year.id[j], "-12-31'
AND State = ", "'", state.id[i], "'",
sep = "")))
if(dim(zillowSQL1)[1] > 100 ){
#If Statement that loanamount is not all zeros
test1 <- zillowSQL1[which(zillowSQL1$LoanAmount > 0
& zillowSQL1$LoanAmount != "NA"), ]
if(dim(test1)[1] > 100){
zillowSQL3 <- (sqlQuery(database, paste("
SELECT PropertyAddressLatitude,PropertyAddressLongitude, ImportParcelID, TransId
FROM dbo.transPropertyInfo
WHERE PropertyState = ", "'", state.id[i], "'",
sep = "")))
rm(test1)
# Census tract data
#test33 <- na.omit(tracts1990$TRACTSUF)
zillowSQL <- merge(zillowSQL1, zillowSQL3, by.x = c("TransId"),
by.y = c("TransId") , allow.cartesian = FALSE, all.x = TRUE)
zillow.main <- zillowSQL[which(zillowSQL$PropertyAddressLatitude != "NA" &
((zillowSQL$SalesPriceAmount!= "NA" &
zillowSQL$SalesPriceAmount!= 0) |
(zillowSQL$LoanAmount > 0
& zillowSQL$LoanAmount != "NA"))) , ]
rm(zillowSQL3)
rm(zillowSQL1)
rm(zillowSQL)
zillow.main$year <- year.id[[j]]
# Zillow stuff
# Drop rows that are not in the looped year j and rows that don't meet other criteria
zillow <- zillow.main[which(zillow.main$PropertyAddressLatitude != "NA"), ]
zillow <- zillow[which(zillow$PropertyAddressLongitude != "NA"), ]
zillow <- distinct(zillow,
ImportParcelID, SalesPriceAmount, RecordingDate, LoanAmount,
.keep_all = TRUE)
# Keep positive, non-missing loan amounts and properties that have lat/long
zillow2 <- zillow[which(zillow$LoanAmount > 0
& zillow$LoanAmount != "NA"), ]
zillow2 <- distinct(zillow2,
ImportParcelID, SalesPriceAmount, RecordingDate, LoanAmount,
.keep_all = TRUE)
# Make loan amount comparable to HMDA syntax
zillow2$LoanAmount = round(zillow2$LoanAmount, digits=-3)
# Set up coordinates to match with Census Data
id <- as.data.frame(zillow2)
coords <- as.data.frame(cbind(zillow2$PropertyAddressLongitude, zillow2$PropertyAddressLatitude ))
cord.dec = SpatialPointsDataFrame(coords ,
id ,proj4string = CRS("+proj=longlat"))
#tracts2000 <- california.tract
# Get Census CRS projection
census.CRS <- proj4string(tracts2000)
#proj4string(tracts1990) = CRS("+proj=longlat")
# Transform data to match CRS projection from Census data
cord.UTM <- spTransform(cord.dec, census.CRS)
tracts1990 <- spTransform(tracts1990, census.CRS)
tracts2010 <- spTransform(tracts2010, census.CRS)
#tracts1990$TRACTSUF[is.na(tracts1990$TRACTSUF)] <- "00"
#for 1990 tracts. NA sub for 00
if(year.id[j] > 2011){
pts.poly <- over(cord.UTM, tracts2010[3:4])
pts.poly2 <- cbind(id, pts.poly)
pts.poly2$id_number <- paste(pts.poly2$COUNTY, pts.poly2$TRACT, sep = '')
}
else{
if(year.id[j] > 2002){
pts.poly <- over(cord.UTM, tracts2000[4:5])
pts.poly2 <- cbind(id, pts.poly)
pts.poly2$id_number <- paste(pts.poly2$COUNTY, pts.poly2$TRACT, sep = '')
#pts.poly2$id_number <- paste(pts.poly2$county, pts.poly2$tract, sep = '')
}
else{
pts.poly <- over(cord.UTM, tracts1990[8:10])
pts.poly2 <- cbind(id, pts.poly)
#pts.poly2$id_number <- paste(pts.poly2$COUNTYFP, pts.poly2$TRACTCE90, sep = '')
#just CA
pts.poly2$id_number <- paste(pts.poly2$COUNTYFP, pts.poly2$TRACTCE90, sep = '')
}
}
#testing <- merge(pts.poly2, pts.poly2A,
# by.x = c("TransId"),
# by.y = c("TransId"))
#r <- as.data.frame(tracts1990)
#rm(tracts2000)
#rm(tracts1990)
rm(coords)
rm(cord.UTM)
rm(cord.dec)
rm(id)
# Add decimal to make mergeable to HMDA data
pts.poly2$id_number <- sub("([[:digit:]]{2,2})$", ".\\1", pts.poly2$id_number)
# Keep just necessary variables
zillow2 <- pts.poly2
# Filter to unique transactions
count.zillow <- count_(zillow2, vars = c("id_number", "LoanAmount"))
zillow3 <- merge(zillow2, count.zillow,
by.x = c("id_number", "LoanAmount"),
by.y = c("id_number", "LoanAmount"))
rm(count.zillow)
zillow3 <- zillow3[which(zillow3$n == 1), ]
# HMDA Stuff
if(year.id[j] > 2006){
hmda <- hmda_final07_16[which(hmda_final07_16$as_of_year == year.id[j]
& hmda_final07_16$state_abbr == state.id[i] ) , ]
} else if(year.id[j] > 2001){
hmda <- hmda_final02_06[which(hmda_final02_06$as_of_year == year.id[j]
& hmda_final02_06$state_abbr == state.id[i] ) , ]
} else{
hmda <- hmda_final95_01[which(hmda_final95_01$as_of_year == year.id[j]
& hmda_final95_01$state_abbr == state.id[i] ) , ]
}
# Keep only home purchases and convert loan amount to match Zillow
hmda <- hmda[which(hmda$loan_purpose == 1
& hmda$action_taken == 1), ]
hmda$loan_purpose_name <- hmda$action_taken <- NULL
hmda$as_of_year <- hmda$denial_reason_1 <- NULL
hmda$LoanAmount <- hmda$loan_amount_000s * 1000
# Create location identifiers to match Zillow data
hmda$county_code <- as.numeric(hmda$county_code)
hmda$county_code <- paste(formatC(hmda$county_code, width = 3, flag = "0"), sep = "")
hmda$county_code <- as.character(hmda$county_code)
hmda$id_number <- paste(hmda$county_code, hmda$census_tract_number, sep = '')
# Drop loans without identifiers
hmda <- hmda[which(hmda$id_number != 'NA'
& hmda$LoanAmount != 'NA'), ]
# Filter to unique transactions
count.hmda <- count_(hmda, vars = c("id_number", "LoanAmount"))
hmda2 <- merge(hmda, count.hmda,
by.x = c("id_number", "LoanAmount"),
by.y = c("id_number", "LoanAmount"))
rm(count.hmda)
hmda2 <- hmda2[which(hmda2$n == 1), ]
# 4) MERGES
# FIRST: simple merge by id_number and LoanAmount
merge1 <- merge(zillow3, hmda2,
by.x = c("id_number", "LoanAmount"),
by.y = c("id_number", "LoanAmount"))
# SECOND: merge by respondent_id, LoanAmount, and LenderName
rm(hmda2)
# Create database of LenderName using respondent_id
lendername <- subset(merge1, select = c(LenderName, respondent_id))
lendername <- distinct(lendername, .keep_all = TRUE)
hmda3a <- inner_join(hmda, lendername,
                     by = "respondent_id")
# The merge by id_number, LoanAmount, and LenderName
hmda3 <- hmda3a[which(hmda3a$id_number != 'NA' &
hmda3a$LoanAmount != 'NA' &
hmda3a$LenderName != "NA"), ]
if(dim(hmda3)[1] > 100 ){
count.hmda.merge2 <- count_(hmda3,
vars = c("id_number", "LoanAmount", "LenderName"))
hmda3 <- merge(hmda3, count.hmda.merge2,
by.x = c("id_number", "LoanAmount", "LenderName"),
by.y = c("id_number", "LoanAmount", "LenderName"))
rm(count.hmda.merge2)
hmda3 <- hmda3[which(hmda3$n == 1), ]
count.zillow.merge2 <- count_(zillow2,
vars = c("id_number", "LoanAmount", "LenderName"))
zillow4 <- merge(zillow2, count.zillow.merge2,
by.x = c("id_number", "LoanAmount", "LenderName"),
by.y = c("id_number", "LoanAmount", "LenderName"))
zillow4 <- zillow4[which(zillow4$n == 1), ]
merge2 <- merge(zillow4, hmda3,
by.x = c("id_number", "LoanAmount", "LenderName"),
by.y = c("id_number", "LoanAmount", "LenderName"))
# THIRD: same as last but with "fuzzy" names
# Get rid of all observations without lender names
hmda3 <-hmda3[which(hmda3$id_number != 'NA'
& hmda3$LoanAmount != 'NA'
& hmda3$LenderName != "NA"), ]
# Get "fuzzy" names for HMDA and Zillow
hmda3a$LenderName2 <- soundex(hmda3a$LenderName, maxCodeLen = 4L)
hmda3a$LenderName <- NULL
zillow2$LenderName2 <- soundex(zillow2$LenderName, maxCodeLen = 4L)
zillow2$LenderName <- NULL
# Do the other stuff
count.hmda.merge3 <- count_(hmda3a,
vars = c("id_number", "LoanAmount", "LenderName2"))
hmda4 <- merge(hmda3a, count.hmda.merge3,
by.x = c("id_number", "LoanAmount", "LenderName2"),
by.y = c("id_number", "LoanAmount", "LenderName2"))
rm(hmda3a)
rm(hmda3)
rm(count.hmda.merge3)
hmda4 <- hmda4[which(hmda4$n == 1), ]
count.zillow.merge3 <- count_(zillow2,
vars = c("id_number", "LoanAmount", "LenderName2"))
zillow5 <- merge(zillow2, count.zillow.merge3,
by.x = c("id_number", "LoanAmount", "LenderName2"),
by.y = c("id_number", "LoanAmount", "LenderName2"))
zillow5 <- zillow5[which(zillow5$n == 1), ]
merge3 <- merge(zillow5, hmda4,
by.x = c("id_number", "LoanAmount", "LenderName2"),
by.y = c("id_number", "LoanAmount", "LenderName2"))
merge3$LenderName <- merge3$LenderName2
merge3$LenderName2 <- NULL
merge3$freq.y <- merge3$freq.x <- merge2$freq.y <- merge2$freq.x <- merge1$freq.y <- merge1$freq.x <- NULL
# 5) The final merge
merge1$match <- 1
merge2$match <- 2
merge3$match <- 3
final.merge <- bind_rows(merge1, merge2, merge3)
}
else{
final.merge <- merge1
}
rm(count.zillow.merge2)
rm(count.zillow.merge3)
rm(hmda4)
rm(hmda)
rm(pts.poly)
rm(pts.poly2)
rm(merge1)
rm(merge2)
rm(merge3)
rm(zillow3)
rm(zillow4)
rm(zillow5)
# Filter out repeated values
final.merge <- distinct(final.merge,
ImportParcelID, TransId, SalesPriceAmount,
RecordingDate, LoanAmount, .keep_all = TRUE)
final.merge <- as.data.table(final.merge)
final.merge$COUNTYFP10 <- final.merge$FIPS <- final.merge$id_number <- NULL
final.merge$action_taken_name <- final.merge$TRACTCE00 <- final.merge$STATEFP00 <- NULL
# Match to pre-defined list to merge all states and years into one data table
#list[[i]][[j]] <- final.merge
# NEED TO FIGURE OUT WHY THIS DATASET IS NOT RECORDING STATE
numbtrans <- sum(with(zillow, year.id[j] == year))
numbtransloans <- sum(with(zillow2, year.id[j] == year))
numbtransloansmatch <- sum(with(final.merge, year.id[j] == year))
mergestats_all_hmda2 <- data.frame(
state = c(state.id[[i]]),
year = c(year.id[[j]]),
numb_trans = c(numbtrans),
numb_trans_loans = c(numbtransloans ),
numb_trans_loans_matched = c(numbtransloansmatch )
)
mergestats_all_hmda <- rbind(mergestats_all_hmda, mergestats_all_hmda2)
final <- smartbind(final, final.merge )
#list <- do.call("rbind", list)
#final <- dplyr::bind_rows(list)
final$STATEFP00 <- final$COUNTYFP00 <- final$TRACTCE00 <- NULL
final$STATEFP <- final$COUNTYFP <- final$TRACTCE90 <- NULL
final$COUNTY <- final$TRACT <- final$n.x <- final$n.y <- NULL
}
}
else {
}
mergestats_hmda <- data.table(mergestats_all_hmda)
write.dta(mergestats_hmda, "C:/HMDA_R_Files/mergestats_hmda.dta")
final <- data.frame(final,stringsAsFactors = FALSE)
#final$RecordingDate2 = as.character(final$RecordingDate, "%Y/%m/%d")
final$rate_spread <- final$freq.x <- final$freq.y <- NULL
write_dta(final, "C:/HMDA_R_Files/final_hmda.dta")
}
}
|
374f3c055723352cf46a42521e78d9de6bbdf863 | dc1c20d5dea4e29634e9a9469025220c621c2575 | /R/listLinks.R | c189fc4bac9cc1e79ec2840bafa143ac18ed5c39 | [] | no_license | voltek62/oncrawlR | b90cee7c4985f01813fe8fcf41759ba50e6bdfaa | a3413465d94ad3cd3406751ee8e217ce2278c8e7 | refs/heads/master | 2020-06-01T05:01:25.932487 | 2020-01-31T16:16:12 | 2020-01-31T16:16:12 | 190,646,777 | 2 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,028 | r | listLinks.R | #' List all links from a crawl
#'
#' @param crawlId ID of your crawl
#' @param originFilter select a specific source
#' @param targetFilter select a specific target
#'
#' @details
#' <http://developer.oncrawl.com/#Data-types>
#'
#' ResCode
#' 400 : Returned when the request has incompatible values or does not match the API specification.
#' 401 : Returned when the request is not authenticated.
#' 403 : Returned the current quota does not allow the action to be performed.
#' 404 : Returned when any of resource(s) referred in the request is not found.
#' 403 : Returned when the request is authenticated but the action is not allowed.
#' 409 : Returned when the requested operation is not allowed for current state of the resource.
#' 500 : Internal error
#'
#' @examples
#' \dontrun{
#' links <- listLinks(YOURCRAWLID)
#' }
#'
#' @return Json
#' @author Vincent Terrasi
#' @export
listLinks <- function(crawlId, originFilter="", targetFilter="") {
KEY <- getOption('oncrawl_token')
DEBUG <- getOption('oncrawl_debug')
API <- getOption('oncrawl_api')
if(nchar(KEY)<=10) {
testConf <- initAPI()
if(testConf!="ok") stop("No API Key detected")
}
curl <- RCurl::getCurlHandle()
pageAPI <- paste0(API,"data/crawl/", crawlId,"/links", sep = "")
hdr <- c('Content-Type'="application/json"
,Authorization=paste("Bearer",KEY)
)
listJSON <- list("fields"=c(
"origin",
#"origin_ext",
#"origin_first_path",
#"origin_has_params",
#"origin_host",
#"origin_path",
#"origin_querystring_key",
#"origin_querystring_keyvalue",
"target",
#"target_ext",
"target_fetched",
#"target_first_path",
#"target_has_params",
#"target_host",
#"target_path",
#"target_querystring_key",
#"target_querystring_keyvalue",
"target_status",
"target_status_range",
"anchor",
"follow",
"intern",
"juice"
),
export="true")
if (originFilter!="" && targetFilter=="") {
listJSON <- c(listJSON, list(oql=list(field= c("origin","contains",originFilter))))
}
else if (originFilter=="" && targetFilter!="") {
listJSON <- c(listJSON, list(oql=list(field= c("target","contains",targetFilter))))
}
else if (originFilter!="" && targetFilter!="") {
#NOT IMPLEMENTED
#listFilters <- list(field= c("origin","equals",originFilter))
#listFilters <- c(listFilters, list(field= c("target","equals",targetFilter)))
#listJSON <- c(listJSON, list(oql=list(and=listFilters)))
}
jsonbody <- jsonlite::toJSON(listJSON)
reply <- RCurl::postForm(pageAPI,
.opts=list(httpheader=hdr, postfields=jsonbody),
curl = curl,
style = "POST"
)
info <- RCurl::getCurlInfo(curl)
if (info$response.code==200) {
# return ok if response.code==200
csv <- read.csv(text = readLines(textConnection(reply)), sep = ";", header = TRUE)
  } else {
    # warn and return NULL when response.code != 200; otherwise csv
    # would be undefined when return(csv) is reached below
    warning(reply)
    csv <- NULL
  }
return(csv)
}
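As a brief usage sketch (the crawl id "12345" and the filter string "blog" below are placeholders, not real values, and a valid API token is assumed to have been configured via `initAPI()`):

```r
# Hypothetical call: restrict links to those whose source URL contains "blog"
links <- listLinks("12345", originFilter = "blog")
# The returned data frame carries the exported fields requested above
head(links[, c("origin", "target", "target_status", "anchor")])
```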
|
78f1283f3e6f64802b4295c6940f456291bce071 | 142ff7e3bdbf2ff2c3b8b3ca892edc589134b7a9 | /GM-1.R | 7ec36d511711bc5069267b1221a576f54b53402f | [] | no_license | smenon72/DataScienceProjects | 3dcc094767977c6ad33170a5121d00b9da903b06 | 9df3fa9332cf756a2e667f939dddd8000730ba9d | refs/heads/master | 2020-05-20T02:12:12.881046 | 2019-05-18T05:25:00 | 2019-05-18T05:25:00 | 185,326,664 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 411 | r | GM-1.R | library(ggplot2)
library(dslabs)
data(gapminder)
gapminder %>%
filter(continent == "Africa" & year %in% c(1970,2010) & !is.na(gdp) & !is.na(year) & !is.na(infant_mortality)) %>%
mutate(dollars_per_day = gdp/population/365) %>%
ggplot(aes(dollars_per_day, infant_mortality, color = region,label = country)) +
geom_point() +
scale_x_continuous(trans = "log2") +
geom_text() +
facet_grid(year~.)
|
f25f22e82d61e1503728fcf5e25c141a7168f0b8 | ec40622f551b357d4a64ab08af683acbb1cfd38d | /data-raw/dados_telemetria_manaus.R | bb15dbee2a6c730b86470562db5e83b7c8fe47c6 | [
"MIT"
] | permissive | theoadepaula/tffdt | 93068762c7f47e526a83090f40da218c0cb3e08b | 68ba8062d9bceec768482af885790470c507adce | refs/heads/master | 2023-06-07T04:22:07.315375 | 2021-06-22T10:43:48 | 2021-06-22T10:43:48 | 376,614,770 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 704 | r | dados_telemetria_manaus.R | ## code to prepare `dados_telemetria_manaus` dataset goes here
# Para arrumar os dados, foi preciso transformar todos os dados em branco
# em NA, transformar as variáveis Nivel,Vazao e Chuva em númericos e
# DataHora em datetime.
dados_telemetria_manaus <- dados_brutos_telemetria_manaus %>%
janitor::clean_names() %>% # transforma os nomes das colunas
dplyr::na_if('')%>% # Substitui o '' por NA
dplyr::mutate(dplyr::across(c(nivel,chuva,vazao),as.numeric), # Transforma as variáveis em númericos
data_hora = stringr::str_squish(data_hora) %>% # Transforma em datetime
lubridate::as_datetime())
usethis::use_data(dados_telemetria_manaus, overwrite = TRUE)
|
b5b44520ec168211783315583813f0ff9c608708 | a1fd4aeda488f2c8abf454b6429a50c3d4a07d11 | /Plot6.R | 0a6a9a7cf2b137b5da41fe78ed384ff1678f8479 | [] | no_license | Monnappa/Exploratory-Data-Analysis-Project-2 | a91a94d87351165890fc78a15e57fdc9998f1608 | 527bd11abacfa29c2b8063f995b2a174a1f67242 | refs/heads/master | 2020-05-17T18:32:02.641709 | 2015-02-21T10:35:36 | 2015-02-21T10:35:36 | 31,122,116 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,765 | r | Plot6.R | # Assignment Question 6
# 6. Compare emissions from motor vehicle sources in Baltimore City
#with emissions from motor vehicle sources in Los Angeles County,
#California (fips == "06037"). Which city has seen greater changes
#over time in motor vehicle emissions?
#Read ggplot2 Library
library(ggplot2)
## Read files using readRDS() function
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
# Subset SCC Data using above Data
# Extract Vehicle Data from SCC
SCC.VEH <- subset(SCC, grepl("Vehicle", EI.Sector))
# Subset NEI Data using SCC # for Vehicles
NEI.VEH <- subset(NEI, NEI$SCC %in% SCC.VEH$SCC)
# Subset above NEI Data for Baltimore City & LA
BC.LA <- subset(NEI.VEH, fips == "24510"|fips == "06037")
# Use aggregate function to find the sum of Emissions by Vehicle
# & Type for Baltimore City
SUM.BCLA <- aggregate(BC.LA[c("Emissions")], list(fips = BC.LA$fips, year = BC.LA$year), sum)
# Add a new column to include name of the City
SUM.BCLA$city <- rep(NA, nrow(SUM.BCLA))
# Add City/County Names based on fips number
SUM.BCLA[SUM.BCLA$fips == "06037", ][, "city"] <- "Los Angles County"
SUM.BCLA[SUM.BCLA$fips == "24510", ][, "city"] <- "Baltimore City"
# Plot the Emission Data Vs Year
# Create Plot
library(ggplot2)
png('plot6.png', width=480, height=480)
p <- ggplot(SUM.BCLA, aes(x=year, y=Emissions, colour=city)) +
geom_point(alpha=.3) +
geom_smooth(alpha=.2, size=1, method="loess") +
ggtitle("Vehicle Emissions in Baltimore vs. LA")
print(p)
dev.off() |
4549d967572523f230b0b6f116c814f424c6790a | b97ad80d43353fa93abe448e57f4fce297c037e0 | /man/info_geno.Rd | 64879718809685a0b773d481f984cb70f58fb59c | [
"MIT"
] | permissive | DannyArends/GNapi | a4dab46ff2b9529bdb497487f19ef43ff43f2c60 | b613e2b61d69c8319e005866d760b14e9b68c5ee | refs/heads/master | 2020-12-24T10:58:08.187096 | 2016-11-10T21:22:43 | 2016-11-10T21:22:43 | 73,219,657 | 1 | 0 | null | 2016-11-08T19:25:46 | 2016-11-08T19:25:46 | null | UTF-8 | R | false | true | 519 | rd | info_geno.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_info.R
\name{info_geno}
\alias{info_geno}
\title{Get meta information for genotypes}
\usage{
info_geno(cross, url = "http://test-gn2.genenetwork.org/api_pre1/")
}
\arguments{
\item{cross}{Cross name as a single character string}
\item{url}{URL for GeneNetwork API}
}
\value{
Meta information for the genotypes of the cross
}
\description{
Get meta information for the genotypes for a cross
}
\examples{
info_geno("BXD")
}
|
6b24af477b7fede438bb08d851536826e6a8dbc1 | c904e7138320728cf05df1728d9875d87f4d5fa7 | /example/code/example.R | 80de162bd8625b19229ebde01f5f73f97c384452 | [
"MIT"
] | permissive | ryanedmundkessler/gaussian_empirical_bayes | c3d6186c64f06f7d897ff0f0e8361726e4240544 | 65b89ad3e84b99318da899fdee2e9dfe2610afbc | refs/heads/master | 2022-01-24T14:45:53.110478 | 2022-01-14T18:45:52 | 2022-01-14T18:45:52 | 168,087,260 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,182 | r | example.R | source('../../r/gaussian_empirical_bayes.R')
set.seed(19281)
main <- function() {
num_obs <- 5000
mu <- 1
sigma_sq <- 4
data <- simulate_data(num_obs = num_obs, mu = mu, sigma_sq = sigma_sq)
theta <- estimate_hyperparameters(beta = data$beta_hat, tau_sq = data$tau_sq)
posterior_params <- estimate_posterior_parameters(beta = data$beta_hat,
tau_sq = data$tau_sq,
mu = theta$mu,
sigma_sq = theta$sigma_sq)
png("../output/gaussian_eb_shrinkage.png")
plot(data$tau_sq, posterior_params$mu, ylim = c(-6, 6),
xlab = expression(hat(tau[i])^2), ylab = expression(tilde(mu)[i]))
dev.off()
}
simulate_data <- function(num_obs, mu, sigma_sq) {
beta <- rnorm(num_obs, mu, sqrt(sigma_sq))
tau_sq <- runif(num_obs, min= (sigma_sq / 100), max= (100 * sigma_sq))
beta_hat <- rnorm(num_obs, beta, sqrt(tau_sq))
return_list <- list(beta_hat = beta_hat,
tau_sq = tau_sq)
return(return_list)
}
main()
|
2152be62539722b9ab7aa468441d47e4a3332147 | 17b4fd4a2aa4d7741604e07f585bec783d558dde | /lessons/A_Monday/scripts/E_text_organization.R | 46289ac6da2f5d9e3e02a6ea4d776cc66175d261 | [
"MIT"
] | permissive | anhnguyendepocen/GSERM_TextMining | 41a04517212e5adde60865b6e5b3b1c149b7960d | 0bb2692b1130c9c9fe604e2e610fd057960a44ec | refs/heads/master | 2022-03-27T14:21:53.953619 | 2020-01-16T21:40:07 | 2020-01-16T21:40:07 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,622 | r | E_text_organization.R | #' Title: Text Organization for Bag of Words
#' Purpose: Learn some basic cleaning functions & term frequency
#' Author: Ted Kwartler
#' email: ehk116@gmail.com
#' License: GPL>=3
#' Date: Jan-12-2020
#'
# Set the working directory
setwd("/cloud/project/lessons/A_Monday/data/tweets")
# Libs
library(tm)
# Options & Functions
options(stringsAsFactors = FALSE)
Sys.setlocale('LC_ALL','C')
tryTolower <- function(x){
# return NA when there is an error
y = NA
# tryCatch error
try_error = tryCatch(tolower(x), error = function(e) e)
# if not an error
if (!inherits(try_error, 'error'))
y = tolower(x)
return(y)
}
cleanCorpus<-function(corpus, customStopwords){
corpus <- tm_map(corpus, content_transformer(qdapRegex::rm_url))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, content_transformer(tryTolower))
corpus <- tm_map(corpus, removeWords, customStopwords)
return(corpus)
}
# Create custom stop words
stops <- c(stopwords('english'), 'lol', 'smh')
# Data
text <- read.csv('coffee.csv', header=TRUE)
View(text)
# As of tm version 0.7-3 tabular was deprecated
names(text)[1] <- 'doc_id' #first 2 columns must be 'doc_id' & 'text'
# Make a volatile corpus
txtCorpus <- VCorpus(DataframeSource(text))
# Preprocess the corpus
txtCorpus <- cleanCorpus(txtCorpus, stops)
# Check Meta Data; brackets matter!!
txtCorpus[[4]]
meta(txtCorpus[[4]]) #double [[...]]
t(meta(txtCorpus[4])) #single [...]
content(txtCorpus[4]) #single [...]
content(txtCorpus[[4]]) #double [[...]]
# Need to plain text cleaned copy? Saves time on large corpora
df <- data.frame(text = unlist(sapply(txtCorpus, `[`, "content")),
stringsAsFactors=F)
# Or use lapply
cleanText <- lapply(txtCorpus, content)
cleanText <- do.call(rbind, cleanText)
# Compare a single tweet
text$text[4]
df[4,]
cleanText[4]
# Make a Document Term Matrix or Term Document Matrix depending on analysis
txtDtm <- DocumentTermMatrix(txtCorpus)
txtTdm <- TermDocumentMatrix(txtCorpus)
txtDtmM <- as.matrix(txtDtm)
txtTdmM <- as.matrix(txtTdm)
# Examine
txtDtmM[610:611,491:493]
txtTdmM[491:493,610:611]
#### Go back to PPT ####
# Get the most frequent terms
topTermsA <- colSums(txtDtmM)
topTermsB <- rowSums(txtTdmM)
# Add the terms
topTermsA <- data.frame(terms = colnames(txtDtmM), freq = topTermsA)
topTermsB <- data.frame(terms = rownames(txtTdmM), freq = topTermsB)
# Review
head(topTermsA)
head(topTermsB)
# Which term is the most frequent?
idx <- which.max(topTermsA$freq)
topTermsA[idx, ]
# End
|
bf2fe189cde31ab0c19b90a684ce33db4214b63d | b8229b3de589d2caca087cc71927e94bbac789b2 | /run_analysis.R | 7f0f859956bf60c8407d1ae4638f57b00ae11be0 | [] | no_license | AConundrum/GettingCleaning | a140d8bec128090b0d3b25d14eb6de2f8a4bcd15 | 4c55e775ad62cd1eff3a6137a12bd1c8a6f3dd6c | refs/heads/master | 2016-09-09T21:17:19.857453 | 2014-07-23T23:27:23 | 2014-07-23T23:27:23 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,779 | r | run_analysis.R | ##
## Description:
## 1. Merge the training and the test sets to create one data set.
## 2. Extract only the measurements on the mean and standard deviation for
## each measurement.
## 3. Use descriptive activity names to name the activities in the data set.
## 4. Appropriately label the data set with descriptive variable names.
## 5. Create a second, independent tidy data set with the average of each
## variable for each activity and each subject.
##
## AUTHOR : Edward J Hopkins
## $DATE : Thu Jul 10 01:02:11 2014 ## date()
## $Revision : 1.00 $
## DEVELOPED : Rstudio, Version 0.98.507 ## packageVersion("rstudio")
## : R version 3.1.0 (2014-04-10) ## R.Version.string
## Copyright : Copyright (c) 2014 E. J. Hopkins
## FILENAME : run_analysis.R
## Dependencies:
## See also:
###############################################################################
## BEGIN CODE
# Assume the folder "UCI HAR Dataset" is in the working directory
FILES <- list("UCI HAR Dataset/train/X_train.txt", # RAW[1], Data
"UCI HAR Dataset/test/X_test.txt", # RAW[2], Data
"UCI HAR Dataset/features.txt", # RAW[3], Data labels
"UCI HAR Dataset/activity_labels.txt", # RAW[4]
"UCI HAR Dataset/train/subject_train.txt", # RAW[5], Subjects
"UCI HAR Dataset/test/subject_test.txt", # RAW[6], Subjects
"UCI HAR Dataset/train/y_train.txt", # RAW[7], Activities
"UCI HAR Dataset/test/y_test.txt") # RAW[8], Activities
RAW <- sapply(FILES,read.table,simplify=FALSE) # Read in all files
## 1. Merge the training and the test sets to create one data set.
allDATA <- rbind(RAW[[1]],RAW[[2]]) # train over test
Subject <- rbind(RAW[[5]],RAW[[6]]) # train over test
ActNumber <- rbind(RAW[[7]],RAW[[8]]) # train over test
##3. Use descriptive activity names to name the activities in the data set.
ActNumFact <- as.factor(ActNumber$V1) # to rename, must be factor
# Rename Activity levels (keep correct order)
levels(ActNumFact) <- as.vector(RAW[[4]]$V2)
## 2. Extract only the measurements on the mean and standard deviation for
## each measurement.
VarNames <- RAW[[3]]$V2 # Create a variable containing labels
# Use "(" as part of the search so it doesn't find meanFreq
Nidx <- grep("(mean\\(|std\\())",VarNames)
names(allDATA)[Nidx] <- as.character(VarNames[Nidx]) # replace column names
tmpDATA <- allDATA[Nidx] # New data set with only "std()" and "mean()" columns
## 4. Appropriately label the data set with descriptive variable names.
VarNAMES <- names(tmpDATA)
VarNAMES <- gsub("[[:punct:]]","",VarNAMES) # Remove punctuation
VarNAMES <- gsub("std","Std",VarNAMES) # Capitalize std
VarNAMES <- gsub("mean","Mean",VarNAMES) # Capitalize mean
VarNAMES <- gsub("BodyBody","Body",VarNAMES) # Fix typo of "BodyBody" to "Body"
names(tmpDATA) <- VarNAMES
DATA <- cbind(Subject,ActNumFact,tmpDATA) # This is the working data set.
rm(list=setdiff(ls(),"DATA")) # Remove all other variables
names(DATA)[[1]] <- "Subject" # Name the 1st and 2nd columns
names(DATA)[[2]] <- "Activity"
## 5. Create a second, independent tidy data set with the average of each
## variable for each activity and each subject.
TIDY <- aggregate(DATA[,-1:-2],list(Subject=DATA$Subject,
Activity=DATA$Activity),mean)
write.table(TIDY,file="TIDY.txt")
#rm(TIDY)
#TIDY2 <- read.table("TIDY.txt")
## END CODE
###############################################################################
## MODIFICATIONS:
## date()
## Description...
## If this code does not work, then I don't know who wrote it. |
fd1cb3229349baadeb982252224a22d9cc6e6d39 | 3b9cd006ff3a25604b271c6d19219ed6be65fe07 | /R/autopoints.R | 1e31df35d6975078e1d6c906ac662d962a0a9b91 | [] | no_license | jfrench/autoimage | b9acf2e3e768354f7af7cec15603c5c895918e61 | 967890e8d56f384164b7fa58a8ac97f811b25d1b | refs/heads/master | 2023-03-21T03:50:13.520117 | 2021-03-16T02:50:47 | 2021-03-16T02:50:47 | 60,267,293 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 13,245 | r | autopoints.R | #' Automatic facetting of multiple projected scatterplots
#'
#' \code{autopoints} plots a sequence of scatterplots (with possibly
#' projected coordinates) while also automatically plotting a
#' color scale matching the image colors to the values of \code{z}.
#' Many options are available for customization. See the Examples
#' below or execute \code{vignette("autopoints")} to better
#' understand the possibilities.
#'
#' The \code{n} argument specifies the desired number of color
#' partitions, but may not be exact. This is used to create
#' "pretty" color scale tick labels using the
#' \code{\link[base]{pretty}} function.
#' If \code{zlim} or \code{breaks} are provided, then the
#' \code{\link[base]{pretty}} function is not used to determine
#' the color scale tick lables, and the user may need to
#' manually specify \code{breaks} to get the color scale to
#' have "pretty" tick labels. If \code{col} is specified,
#' then \code{n} is set to \code{length(col)}, but the
#' functions otherwise works the same (i.e., pretty tick
#' labels are not automatic if \code{zlim} or \code{breaks}
#' are specified.
#'
#' The \code{\link[mapproj]{mapproject}} function is used to project
#' the \code{x} and \code{y} coordinates when \code{proj != "none"}.
#'
#' If multiple scatterplots are to be plotted (i.e., if
#' \code{z} is a matrix with more than 1 column), then the
#' \code{main} argument can be a vector with length matching
#' \code{ncol(z)}, and each successive element of the vector will
#' be used to add a title to each successive scatterplot.
#' See the Examples.
#'
#' Additionally, if \code{common.legend = FALSE}, then separate
#' z limits and breaks for the z-axis of each image can be provided as a list.
#' Specifically, if \code{ncol(z) == k}, then \code{zlim} should
#' be a list of length \code{k}, and each element of the list should
#' be a 2-dimensional vector providing the lower and upper limit,
#' respectively, of the legend for each image. Similarly,
#' the \code{breaks} argument should be a list of length \code{k},
#' and each element of the list should be a vector specifying
#' the breaks for the color scale for each plot. Note that
#' the length of each element of breaks should be 1 greater
#' than the number of colors in the color scale.
#'
#' The range of \code{zlim} is cut into \eqn{n} partitions,
#' where \code{n} is the length of \code{col}.
#'
#' It is generally desirable to increase \code{lratio} when
#' more images are plotted simultaneously.
#'
#' The multiple plots are constructed using the
#' \code{\link[autoimage]{autolayout}} function, which
#' is incompatible with the \code{mfrow} and \code{mfcol} arguments
#' in the \code{\link[graphics]{par}} function and is also
#' incompatible with the \code{\link[graphics]{split.screen}} function.
#'
#' The \code{mtext.args} argument can be passed through \code{...}
#' in order to customize the outer title. This should be a named
#' list with components matching the arguments of
#' \code{\link[graphics]{mtext}}.
#'
#' Lines can be added to each image by passing the \code{lines}
#' argument through \code{...}. In that case, \code{lines} should be
#' a list with components \code{x} and \code{y} specifying the
#' locations to draw the lines. The appearance of the plotted lines
#' can be customized by passing a named list called \code{lines.args}
#' through \code{...}. The components of \code{lines.args} should match
#' the arguments of \code{\link[graphics]{lines}}. See Examples.
#'
#' Points can be added to each image by passing the \code{points}
#' argument through \code{...}. In that case, \code{points} should be
#' a list with components \code{x} and \code{y} specifying the
#' locations to draw the points. The appearance of the plotted points
#' can be customized by passing a named list called \code{points.args}
#' through \code{...}. The components of \code{points.args} should match
#' the components of \code{\link[graphics]{points}}. See Examples.
#'
#' Text can be added to each image by passing the \code{text}
#' argument through \code{...}. In that case, \code{text} should be
#' a list with components \code{x} and \code{y} specifying the
#' locations to draw the text, and \code{labels}, a component
#' specifying the actual text to write. The appearance of the plotted text
#' can be customized by passing a named list called \code{text.args}
#' through \code{...}. The components of \code{text.args} should match
#' the components of \code{\link[graphics]{text}}. See Examples.
#'
#' The legend scale can be modified by passing \code{legend.axis.args}
#' through \code{...}. The argument should be a named list
#' corresponding to the arguments of the \code{\link[graphics]{axis}}
#' function. See Examples.
#'
#' The axes can be modified by passing \code{axis.args}
#' through \code{...}. The argument should be a named list
#' corresponding to the arguments of the \code{\link[graphics]{axis}}
#' function. The exception to this is that arguments \code{xat}
#' and \code{yat} can be specified (instead of \code{at}) to specify
#' the location of the x and y ticks. If \code{xat} or \code{yat}
#' are specified, then this overrides the \code{xaxt} and \code{yaxt}
#' arguments, respectively. See the \code{\link[autoimage]{paxes}}
#' function to see how \code{axis.args} can be used.
#'
#' The legend margin can be customized by passing \code{legend.mar}
#' to \code{autopoints} through \code{...}. This should be a numeric
#' vector indicating the margins of the legend, identical to how
#' \code{par("mar")} is specified.
#'
#' The various options of the labeling, axes, and legend are largely
#' independent; e.g., passing \code{col.axis} through \code{...}
#' will not affect the axis unless it is passed as part of the
#' named list \code{axis.args}. However, one can set the various
#' \code{par} options prior to plotting to simultaneously
#' affect the appearance of multiple aspects of the plot. See
#' Examples for \code{\link[autoimage]{pimage}}. After plotting,
#' \code{reset.par()} can be used to reset
#' the graphics device options to their default values.
#'
#' @inheritParams heat_ppoints
#' @inheritParams autolayout
#' @inheritParams autoimage
#' @param ... Additional arguments passed to the
#' \code{\link[graphics]{plot}}. e.g., \code{xlab}, \code{ylab},
#' \code{xlim}, \code{ylim}, \code{zlim}, etc.
#' @seealso \code{\link[autoimage]{autoimage}}, \code{\link[autoimage]{heat_ppoints}}
#' @return NULL
#' @export
#' @examples
#' data(co, package = "gear")
#' easting = co$easting
#' northing = co$northing
#' # heated scatterplot for Aluminum and Cadmium
#' autopoints(easting, northing, co[,c("Al", "Ca")],
#' common.legend = FALSE, map = "state",
#' main = c("Al", "Ca"), lratio = 0.2,
#' legend.mar = c(0.3, 0.1, 0.1, 0.1))
#'
#' # more complicated heat scatterplot for Aluminum and
#' # Cadmium used more advanced options
#' autopoints(co$lon, co$lat, co[,c("Al", "Ca")],
#' common.legend = FALSE,
#' map = "county", main = c("Aluminum", "Cadmium"),
#' proj = "bonne", parameters = 40,
#' text = list(x = c(-104.98, -104.80), y = c(39.74, 38.85),
#' labels = c("Denver", "Colorado Springs")),
#' text.args = list(col = "blue"))
autopoints <- function(x, y, z, legend = "horizontal",
proj = "none", parameters,
orientation, common.legend = TRUE,
map = "none", size, lratio,
outer.title, n = 5, ...) {
# obtain elements of ...
arglist <- list(...)
mtext.args <- arglist$mtext.args
legend <- match.arg(legend, c("none", "horizontal", "vertical"))
# set default for missing arguments
if (missing(outer.title)) outer.title <- NULL
if (missing(parameters)) parameters <- NULL
if (missing(orientation)) orientation <- NULL
verbose <- FALSE # some debugging stuff
# setup x, y, z information
xyz.list <- autopoints_xyz_setup(x, y, z,
tx = deparse(substitute(x)),
ty = deparse(substitute(y)),
arglist = arglist,
verbose = verbose,
common.legend = common.legend,
legend = legend,
n = n)
ng <- length(xyz.list) # number of grids
# additional argument checking
if (missing(size))
size <- autosize(length(xyz.list))
if (missing(lratio)) {
if (legend == "horizontal") {
lratio = 0.1 + 0.1 * size[1]
} else {
lratio = 0.1 + 0.1 * size[2]
}
}
# check other argument, specify outer arguments
outer.args <- arg.check.autoimage(common.legend, size, outer.title, ng, mtext.args)
curpar <- par(no.readonly = TRUE)
curmar <- curpar$mar # current mar values
autolayout(size, legend = legend, common.legend = common.legend, lratio = lratio,
outer = outer.args$outer, show = FALSE, reverse = FALSE)
for (i in seq_along(xyz.list)) {
par(mar = curmar)
arglisti <- xyz.list[[i]]
arglisti$legend <- "none"
arglisti$proj <- proj
arglisti$parameters <- parameters
arglisti$orientation <- orientation
arglisti$map <- map
do.call("heat_ppoints", arglisti)
if (!common.legend & legend != "none") {
autolegend()
}
}
# add blank plots, if necessary
deficit <- prod(size) - ng
if (!common.legend & legend != "none")
deficit <- 2 * deficit
for (i in seq_len(deficit)) {
blank.plot()
}
# add common legend, if necessary
if (common.legend & legend != "none") {
autolegend()
}
# plot outer title, if necessary
if (outer.args$outer) {
do.call("mtext", outer.args$mtext.args)
}
# restore previous par() settings
on.exit(par(curpar))
}
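The roxygen details above explain that the color scale is partitioned using "pretty" tick labels derived from `z`. A standalone base-R sketch of that derivation (illustrative only, not the internal implementation of `autopoints`):

```r
# Sketch: deriving "pretty" color-scale breaks from z, as the details describe.
# Illustrative only; not the internal implementation of autopoints.
z <- c(0.2, 1.7, 3.4, 5.9, 8.8)
breaks <- pretty(range(z), n = 5)             # "pretty" tick locations
bins <- cut(z, breaks, include.lowest = TRUE) # map each z value to a color bin
# length(breaks) tick labels imply length(breaks) - 1 colors
```

Note that `pretty()` treats `n` as a suggestion, which is why the documentation warns that the number of partitions "may not be exact".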
# sorts out x, y, and z for autoimage function
autopoints_xyz_setup <- function(x, y, z, tx, ty, arglist,
verbose, common.legend = FALSE,
legend = "none",
n) {
arglist$mtext.args <- NULL
  # sanity checking
if (length(verbose) != 1) {
stop("verbose must be a single logical value")
}
if (!is.logical(verbose)) {
stop("verbose must be a single logical value")
}
if (length(common.legend) != 1) {
stop("common.legend should be a single value")
}
if (!is.logical(common.legend)) {
stop("common.legend should be a logical value")
}
if (length(legend) != 1)
stop("legend should be a single value")
# set axis labels
if (is.null(arglist$xlab)) {
arglist$xlab <- tx
}
if (is.null(arglist$ylab)) {
arglist$ylab <- ty
}
if (is.null(arglist$xlim)) {
arglist$xlim <- range(x, na.rm = TRUE)
}
if (is.null(arglist$ylim)) {
arglist$ylim <- range(y, na.rm = TRUE)
}
if (!is.null(arglist$col)) {
n <- length(arglist$col)
}
  if (length(n) != 1 | !is.numeric(n) | n <= 1) {
    stop("n must be a single numeric value greater than 1")
  }
}
# check x, y, z
if (!is.vector(x) | !is.numeric(x)) {
stop("x must be a numeric vector")
}
if (!is.vector(y) | !is.numeric(y)) {
stop("y must be a numeric vector")
}
if (length(x) != length(y)) {
stop("x and y must have the same length")
}
if (!is.vector(z) & is.null(dim(z))) {
stop("z must be a vector or matrix-like (!is.null(dim(z)))")
}
if (is.vector(z)) {
if (length(x) != length(z)) {
stop("length(z) must equal length(x) when z is a vector")
}
}
# convert z to a matrix
if (is.vector(z)) {
z <- matrix(z, ncol = 1)
}
# make sure z is a matrix
if (is.data.frame(z)) {
z <- as.matrix(z)
}
if (length(dim(z)) != 2) {
    stop("z can have only two dimensions if matrix-like")
}
if (nrow(z) != length(x)) {
stop("nrow(z) must equal length(x) when z is matrix-like")
}
# set plotting options for zlim, breaks
arglist <- auto_zlim_breaks_setup(arglist, n, z, common.legend)
# set legend margins
if (is.null(arglist$legend.mar) & legend != "none") {
arglist$legend.mar <- automar(legend)
}
# set main and zlim
nz = ncol(z)
main <- auto_main_setup(arglist_main = arglist$main, nz)
zlim <- arglist$zlim
arglist$zlim <- NULL
breaks <- arglist$breaks
arglist$breaks <- NULL
# construct list component for each column of z
# column of z
xyz.list <- vector("list", nz)
# replicate each set of information into list
for (i in seq_along(xyz.list)) {
arglist0 <- arglist
arglist0$main <- main[i]
arglist0$zlim <- zlim[[i]]
arglist0$breaks <- breaks[[i]]
arglist0$x <- x
arglist0$y <- y
arglist0$z <- z[, i]
xyz.list[[i]] <- arglist0
}
return(invisible(xyz.list))
}
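The z-coercion performed in `autopoints_xyz_setup` above (vector to one-column matrix, data.frame to matrix, then the `nrow(z) == length(x)` requirement) can be reproduced standalone:

```r
# Sketch of the z-coercion steps in autopoints_xyz_setup (standalone illustration).
x <- c(1, 2, 3, 4)
z <- c(10, 20, 30, 40)                      # one response per (x, y) coordinate
if (is.vector(z)) z <- matrix(z, ncol = 1)  # vector -> one-column matrix
if (is.data.frame(z)) z <- as.matrix(z)     # data.frame -> matrix
stopifnot(nrow(z) == length(x))             # the check the setup code enforces
ncol(z)                                     # number of scatterplot panels
```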
|
41510f532a6f6661ac20c4a7f4326e4d8ac68240 | 3e3cbe2c54db5ac4267d17764877d0e824e08be5 | /R/TRPFunctions.R | b7b835e27e65978e67c70baef581db986d172583 | [] | no_license | avilesd/productConfig | 7444e5d04ce526654b8554a1d6af3adb71d7e8d5 | 516dd365c7f2c1733de2644a180a9098013a4125 | refs/heads/master | 2020-04-06T07:02:03.512297 | 2016-08-18T23:14:38 | 2016-08-18T23:14:38 | 37,199,135 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 29,299 | r | TRPFunctions.R | #New set of functions for calculating the value matrix of the Tri Reference Point theory and the overall prospect values
## DOCU: Mention that it is a practical problem that trp doesn't use normalized values for its value function, so
## you may have different (mr, sq, g) for each attribute, so you have to run the above function with the attribute
## that have the same values and then manually bind them.
## DOCU cost_ids still works pretty well, but the tri.refps also have to be
#changed, explain it on BA # with a nice diagram, mr and g exchange values and
#change from positive/negative sign. cost_ids, enter normal reference points, we
#will convert them. cost_ids has to equal the attribute you are inputting
## DOCU: you can enter cost_ids normally, program will recognize for which attr it should use the cost_ids
#Main Interface function as in P.10 from Notes
#' (Deprecated) Returns a Value Matrix using three reference points
#'
#' *Unless you have a special reason to do so, you should use
#' \code{\link{trp.valueMatrix}}
#'
#' This function is based on the value function of the tri-reference point (trp)
#' theory. It first builds a decision matrix for each user and then applies the
#' trp-value function over each value using the three given reference points
#' (MR, SQ, G) and other four free parameters from the value function. See
#' references.
#'
#' @param dataset data.frame with the user generated data from a product
#' configurator. See \code{decisionMatrix} for specifications of the dataset.
#' @param userid a vector of integers that gives the information of which users
#' the matrix should be calculated. Vectorised.
#' @param attr attributes IDs, vector of integer numbers corresponding to the
#' attributes you desire to use; attr are assumed to be 1-indexed.
#' @param rounds integer vector or text option. Which steps of the configuration
#' process should be shown? Defaults are first and last step. Text options are
#' \code{all, first, last}.
#' @param cost_ids argument used to convert selected cost attributes into
#' benefit attributes. Integer vector.
#' @param mr numeric - Minimum Requirements is the lowest reference point
#' @param sq numeric - Status Quo reference point
#' @param g numeric - Goal reference point
#' @param beta(s) numeric arguments representing the psychological impact of an
#'   outcome equaling failure (_f), loss (_l), gain (_g) or success (_s). Default
#'   values are taken from our reference paper \code{(5, 1.5, 1, 3)}.
#'
#'
#' @details This function only makes sense to use with multiple attributes if
#' those attributes have exactly the same three reference points (mr, sq, g).
#' Therefore you will have to manually calculate all the value matrices for
#' the different attributes (with different values) and cbind them together
#' using mapply. The full matrix can then be given as an input to the
#'   \code{\link{overallPV_interface}} function to calculate the overall
#' prospect values for each round.
#'
#' General: The value matrix has ncol = number of attributes you selected or
#' all(default) and nrow = number of rounds you selected or the first and
#' last(default) for all selected users.
#'
#' \code{dataset} We assume the input data.frame has following columns usid =
#' User IDs, round = integers indicating which round the user is in (0-index
#' works best for 'round'), atid = integer column for referring the attribute
#' ID (1 indexed), selected = numeric value of the attribute for a specific,
#' given round, selectable = amount of options the user can chose at a given
#' round, with the current configuration.
#'
#' \code{userid} is a necessary parameter, without it you'll get a warning.
#' Default is NULL.
#'
#' \code{attr} Default calculates with all attributes. Attributes are
#' automatically read from provided dataset, it is important you always
#' provide the complete data so that the package functions properly. Moreover,
#' \code{userid} and \code{attr} will not be sorted and will appear in the
#' order you input them.
#'
#' \code{rounds} Default calculates with first and last rounds (initial and
#' final product configuration). You can give a vector of arbitrarily chosen
#' rounds as well.
#'
#' \code{cost_ids} Default assumes all your attributes are of benefit type,
#' that is a higher value in the attribute means the user is better off than
#' with a lower value. If one or more of the attributes in your data is of
#' cost type, e.g. price, so that lower is better then you should identify
#' this attributes as such, providing their id, they'll be converted to
#' benefit type (higher amount is better).
#'
#' About reference points with cost_ids: For a cost attribute it should be
#' true, that a lower value is better for the user, this should also hold for
#' the three reference points. So contrary to normal/benefit attributes \code{
#' for cost attributes} reference points should follow that: \code{mr > sq >
#' g}.
#'
#' Note: When converting a cost attribute to a benefit attribute its three
#' reference points change as well, enter the unconverted refps, the function
#' transforms them automatically when it detects a \code{cost_ids != NULL}
#'
#' @return a list of value matrices for each user.
#'
#' @references [1]Wang, X. T.; Johnson, Joseph G. (2012) \emph{A tri-reference
#' point theory of decision making under risk. }Journal of Experimental
#' Psychology
#'
#' @examples
#' trpValueMatrix(pc_config_data, 9:11, mr = 0.5, sq = 2, g = 4.7)
#' trpValueMatrix(aDataFrame, userid = 100, rounds = "all", mr = 0.5, sq = 1.8, g = 2.5)
#' trpValueMatrix(my_data, userid = 11, attr = c(1,3,5), cost_ids = 2) #Input accepted but cost_ids = 2 will be ignored
#' trpValueMatrix(my_data, userid = 11, attr = 1, cost_ids = 1, mr = 10, sq = 5, g =3) # Note that for cost attributes: MR > SQ > G
#' trpValueMatrix(keyboard_data, 60, rounds = "first", attr=1, mr = 0.5, sq = 1.8, g = 2.5, beta_f = 6)
#' trpValueMatrix(data1, 2) # Returns an error since no reference points entered (mr, sq, g)
#'
#' @export
trpValueMatrix <- function(dataset, userid = NULL, attr = NULL, rounds = NULL, cost_ids = NULL,
mr = NULL, sq = NULL, g = NULL, beta_f = 5, beta_l = 1.5, beta_g = 1, beta_s = 3) {
counter <- 0
if (length(attr) == 1) {
trp.list <- trpValueMatrix.oneAttr(dataset, userid, attr, rounds, cost_ids,
mr, sq, g, beta_f, beta_l, beta_g, beta_s)
}
else {
if (is.null(attr)) attr <- get_attrs_ID(dataset)
for(i in attr) {
if (counter == 0) {
if(i %in% cost_ids) cost_ids_help <- i else cost_ids_help <- NULL
trp.list <- trpValueMatrix.oneAttr(dataset, userid, attr=i, rounds, cost_ids_help,
mr, sq, g, beta_f, beta_l, beta_g, beta_s)
counter <- 1
}
else {
if(i %in% cost_ids) cost_ids_help <- i else cost_ids_help <- NULL
tempVariable <- trpValueMatrix.oneAttr(dataset, userid, attr=i, rounds, cost_ids_help,
mr, sq, g, beta_f, beta_l, beta_g, beta_s)
trp.list <- mapply(cbind, trp.list, tempVariable, SIMPLIFY = F)
}
}
}
trp.list
}
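The details above say per-attribute value-matrix lists must be "cbind"-ed together "using mapply"; a standalone sketch of that binding pattern on two lists of one-column matrices (illustrative data, not package objects):

```r
# Sketch: column-binding two per-user lists of one-column value matrices,
# the mapply(cbind, ...) pattern this function uses internally.
list.attr1 <- list(user9 = matrix(c(0.1, 0.4), ncol = 1))
list.attr2 <- list(user9 = matrix(c(0.7, 0.9), ncol = 1))
bound <- mapply(cbind, list.attr1, list.attr2, SIMPLIFY = FALSE)
dim(bound$user9)   # 2 rounds x 2 attributes
```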
#' Returns a Value Matrix using three reference points
#'
#' This function is based on the value function of the tri-reference point (trp)
#' theory. It first builds a decision matrix for each user and then applies the
#' trp-value function over each value using the three given reference points
#' (MR, SQ, G) and other four free parameters from the value function. See
#' references.
#'
#' @param dataset data.frame with the user generated data from a product
#' configurator. See \code{decisionMatrix} for specifications of the dataset.
#' @param userid a vector of integers that gives the information of which users
#' the matrix should be calculated. Vectorised.
#' @param attr attributes IDs, vector of integer numbers corresponding to the
#' attributes you desire to use; attr are assumed to be 1-indexed.
#' @param rounds integer vector or text option. Which steps of the configuration
#' process should be shown? Defaults are first and last step. Text options are
#' \code{all, first, last}.
#' @param cost_ids argument used to convert selected cost attributes into
#' benefit attributes. Integer vector.
#' @param tri.refps numeric matrix or vector - three numbers per attribute,
#' indicating the minimum requirements, status-quo and the goals for a user
#' (MR, SQ, G).
#' @param beta(s) numeric arguments representing the psychological impact of an
#'   outcome equaling failure (_f), loss (_l), gain (_g) or success (_s). Default
#'   values are taken from our reference paper \code{(5, 1.5, 1, 3)}.
#'
#' @details This function is an improvement over \code{\link{trpValueMatrix}} and
#'   \code{\link{trpValueMatrix.oneAttr}} since it allows a matrix to be given through
#' \code{tri.refps}. The matrix should have three columns, first column is for
#' the minimum requirements, second for the status-quo, and third should be
#' for the Goal (MR, SQ, G). It should have as many rows as attributes, i.e.
#' one set of reference points for each attribute.
#'
#' General: The value matrix has ncol = number of attributes you selected or
#' all(default) and nrow = number of rounds you selected or the first and
#' last(default) for all selected users.
#'
#' \code{dataset} We assume the input data.frame has following columns usid =
#' User IDs, round = integers indicating which round the user is in (0-index
#' works best for 'round'), atid = integer column for referring the attribute
#' ID (1 indexed), selected = numeric value of the attribute for a specific,
#' given round, selectable = amount of options the user can chose at a given
#' round, with the current configuration.
#'
#' \code{userid} is a necessary parameter, without it you'll get a warning.
#' Default is NULL.
#'
#' \code{attr} Default calculates with all attributes. Attributes are
#' automatically read from provided dataset, it is important you always
#' provide the complete data so that the package functions properly. Moreover,
#' \code{userid} and \code{attr} will not be sorted and will appear in the
#' order you input them.
#'
#' \code{rounds} Default calculates with first and last rounds (initial and
#' final product configuration). You can give a vector of arbitrarily chosen
#' rounds as well.
#'
#' \code{cost_ids} Default assumes all your attributes are of benefit type,
#' that is a higher value in the attribute means the user is better off than
#' with a lower value. If one or more of the attributes in your data is of
#' cost type, e.g. price, so that lower is better then you should identify
#' this attributes as such, providing their id, they'll be converted to
#' benefit type (higher amount is better).
#'
#' About reference points with cost_ids: For a cost attribute it should be
#' true, that a lower value is better for the user, this should also hold for
#' the three reference points. So contrary to normal/benefit attributes \code{
#' for cost attributes} reference points should follow that: \code{mr > sq >
#' g}.
#'
#' Note: When converting a cost attribute to a benefit attribute its three
#'   reference points change as well; enter the unconverted refps and the function
#'   transforms them automatically when it detects \code{cost_ids != NULL}.
#' But since for cost attributes, lower is better, unconverted they should
#' follow (G < SQ < MR).
#'
#' @return a list of value matrices for each user.
#'
#' @references [1]Wang, X. T.; Johnson, Joseph G. (2012) \emph{A tri-reference
#' point theory of decision making under risk. }Journal of Experimental
#' Psychology
#'
#' @examples #Not runnable yet
#' trp.valueMatrix(pc_config_data, 9:11, attr = 1, tri.refps = c(0.5, 2, 4.7))
#' trp.valueMatrix(my_data, userid = 11, attr = 1, cost_ids = 1, tri.refps = c(10, 5, 3)) # Note that for cost attributes: MR > SQ > G
#' trp.valueMatrix(keyboard_data, 60, rounds = "first", attr = 1, tri.refps = c(0.5, 1.8, 2.5), beta_f = 6)
#' trp.valueMatrix(data1, 2) # Returns an error since no reference points entered in tri.refps
#'
#' @export
trp.valueMatrix <- function(dataset, userid = NULL, attr = NULL, rounds = NULL, cost_ids = NULL,
tri.refps = NULL, beta_f = 5, beta_l = 1.5, beta_g = 1, beta_s = 3) {
if(is.null(attr)) attr <- get_attrs_ID(dataset)
if(length(attr) != 1 & !is.matrix(tri.refps)) stop("For more than one attribute you must enter a matrix in 'tri.refps'")
if(is.vector(tri.refps) & !is.list(tri.refps) & length(attr) == 1) {
if(length(tri.refps)==3) {
mr <- tri.refps[1]
sq <- tri.refps[2]
g <- tri.refps[3]
trp.list <- trpValueMatrix.oneAttr(dataset, userid, attr, rounds, cost_ids, mr, sq, g, beta_f, beta_l, beta_g, beta_s)
}
else {
stop("You must enter three reference points for this attribute in 'tri.refps'")
}
}
else {
rows.attrLength <- length(attr)
cols.refps <- 3
    if(all(dim(tri.refps) == c(rows.attrLength, cols.refps))) {
counter <- 0
attrCounter <- 1
for(i in attr) {
mr <- tri.refps[attrCounter, 1]
sq <- tri.refps[attrCounter, 2]
g <- tri.refps[attrCounter, 3]
if (counter == 0) {
if(i %in% cost_ids) cost_ids_help <- i else cost_ids_help <- NULL
trp.list <- trpValueMatrix.oneAttr(dataset, userid, attr=i, rounds, cost_ids_help,
mr, sq, g, beta_f, beta_l, beta_g, beta_s)
counter <- 1
attrCounter <- attrCounter + 1
}
else {
if(i %in% cost_ids) cost_ids_help <- i else cost_ids_help <- NULL
tempVariable <- trpValueMatrix.oneAttr(dataset, userid, attr=i, rounds, cost_ids_help,
mr, sq, g, beta_f, beta_l, beta_g, beta_s)
trp.list <- mapply(cbind, trp.list, tempVariable, SIMPLIFY = F)
attrCounter <- attrCounter + 1
}
}
}
else {
      eMessage <- paste("The number of rows in the input tri.refps doesn't match the number of attributes you entered: ",
nrow(tri.refps)," != ", rows.attrLength, "->length(attr) [No recycling allowed]")
stop(eMessage)
}
}
trp.list
}
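As the details above require, with several attributes `tri.refps` must hold one row of (MR, SQ, G) per attribute. A minimal sketch of building such a matrix and running the same dimension check the function performs (values and names are illustrative):

```r
# Sketch: a 2 x 3 tri.refps matrix for two attributes (illustrative values).
attr <- c(1, 3)
tri.refps <- rbind(c(0.5, 2.0, 4.7),   # MR, SQ, G for attribute 1
                   c(1.0, 1.8, 2.5))   # MR, SQ, G for attribute 3
colnames(tri.refps) <- c("mr", "sq", "g")
# the dimension check trp.valueMatrix performs (no recycling allowed):
stopifnot(all(dim(tri.refps) == c(length(attr), 3)))
```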
#' Returns a Value Matrix using three reference points (one attribute only)
#'
#' This function is a more basic function than \code{trpValueMatrix}; for a
#' detailed description, go to \code{\link{trpValueMatrix}}. This function is
#' based on the value function of the tri-reference point (trp) theory. It first
#' builds a decision matrix for each user and then applies the trp-value function
#' over each value using the three given reference points (MR, SQ, G) and other
#' four free parameters from the value function. See references.
#'
#' @param dataset data.frame with the user generated data from a product
#' configurator. See \code{decisionMatrix} for specifications of the dataset.
#' @param userid a vector of integers that gives the information of which users
#' the matrix should be calculated. Vectorised.
#' @param attr attributes ID, \emph{one integer} corresponding to the attribute
#' you desire to use; attr are assumed to be 1-indexed.
#' @param rounds integer vector or text option. Which steps of the configuration
#' process should be shown? Defaults are first and last step. Text options are
#' \code{all, first, last}.
#' @param cost_ids argument used to convert selected cost attributes into
#' benefit attributes. In this function, should be the same as \code{attr}.
#' For a cost attribute it should be true, that a lower value is better for
#' the user, this should also hold for the three reference points. So contrary
#' to normal/benefit attributes \code{ for cost attributes} reference points
#' should follow that: \code{mr > sq > g}.
#' @param mr numeric - Minimum Requirements is the lowest reference point
#' @param sq numeric - Status Quo reference point
#' @param g numeric - Goal reference point
#' @param beta(s) numeric arguments representing the psychological impact of an
#'   outcome equaling failure (_f), loss (_l), gain (_g) or success (_s). Default
#'   values are taken from our reference paper \code{(5, 1.5, 1, 3)}.
#'
#' @details This function does the same as \code{\link{trpValueMatrix}} but only
#' for one attribute, for more details please see the mentioned function.
#'
#' Note: When converting a cost attribute to a benefit attribute its three
#'   reference points change as well; enter the unconverted refps and the function
#'   transforms them automatically when it detects \code{cost_ids != NULL}.
#'
#' @return a list of value matrices with one attribute for each user.
#'
#' @references Wang, X. T.; Johnson, Joseph G. (2012) \emph{A tri-reference
#' point theory of decision making under risk. }Journal of Experimental
#' Psychology
#'
#' @examples
#' trpValueMatrix.oneAttr(pc_config_data, 9:15, attr = 15, mr = -1, sq = 0, g = 2.5)
#' trpValueMatrix.oneAttr(aDataFrame, userid = 100, rounds = "all", attr = 1, mr = 0.5, sq = 1.8, g = 2.5)
#' trpValueMatrix.oneAttr(myData, 10, attr = 3, cost_ids = 3, mr=4, sq=2, g=0.5) # Note for cost_ids mr > sq > g
#'
#' # Returns an error: 1. too many attributes, or 2. none entered
#' trpValueMatrix.oneAttr(keyboard_data, 8:9 , attr = c(10,12,14,16), mr = 0.5, sq = 1.8, g = 2.5)
#' trpValueMatrix.oneAttr(data1, 2) # 2. No attribute entered
#'
#' @export
trpValueMatrix.oneAttr <- function(dataset, userid = NULL, attr = NULL, rounds = NULL, cost_ids = NULL,
mr = NULL, sq = NULL, g = NULL, beta_f = 5, beta_l = 1.5, beta_g = 1, beta_s = 3) {
if(length(attr)!= 1) stop("Please insert (only) one attribute ID.")
if(!is.null(cost_ids)) {
    if(mr <= sq | mr <= g) stop("For cost attributes, since lower is better: the initial MR must be strictly greater than both SQ and G")
    if(sq <= g) stop("For cost attributes, since lower is better: SQ cannot be smaller than or equal to G")
mr <- (-1)*mr
g <- (-1)*g
sq <- (-1)*sq
}
  # First transformation: monotonic shift such that SQ = 0. This is an important step
  # (see appendix) because the result acts as the gain-and-loss matrix. Write it in the documentation.
# Transform decision Matrix
list.decMatrices <- decisionMatrix(dataset, userid, attr, rounds, cost_ids)
list.decMatrices <- lapply(list.decMatrices, function(t) apply(t, 1:2, substract_sq, sq))
#Transform reference points (substract SQ)
mr <- substract_sq(mr, sq)
g <- substract_sq(g, sq)
sq <- substract_sq(sq, sq)
if(sq != 0) stop("After first transform, sq != 0, sq = ", sq)
tri.refps <- c(mr,sq,g)
  # Second transformation: normalize, first the matrices, then the reference points (a. matrices, b. refps)
hmaxVector <- lapply(list.decMatrices, function(temp) apply(temp, 2, abs))
hmaxVector <- lapply(hmaxVector, function(temp1) if(is.null(ncol(temp1))) {temp1} else {apply(temp1, 2, max)})
hmaxVector <- lapply(hmaxVector, function(temp2) replace(temp2, temp2==0.0, 1.00)) #remove 0 to avoid NA when dividing
list.decMatrices <- mapply(function(aMatrix, aVector) aMatrix / aVector[col(aMatrix)], list.decMatrices, hmaxVector, SIMPLIFY = F)
  # Second transformation of the reference points. DOCU: doesn't affect the user, just how we calculate it.
tri.refps <- lapply(hmaxVector, function(temp, temp2) temp2/temp, tri.refps)
valueMatrix <- mapply(trpValueFunction, list.decMatrices, tri.refps, beta_f, beta_l, beta_g, beta_s, SIMPLIFY = F)
valueMatrix
}
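The two transformations inside `trpValueMatrix.oneAttr` can be reproduced standalone. This sketch assumes `substract_sq(x, sq)` simply computes `x - sq`, and mirrors the zero-replacement trick used to avoid dividing by zero:

```r
# Standalone sketch of the two transforms in trpValueMatrix.oneAttr
# (assumes substract_sq(x, sq) == x - sq; not a call into the package).
dm <- matrix(c(1, 3, 5), ncol = 1)  # one attribute's values over three rounds
sq <- 2
dm <- dm - sq                       # 1st transform: the status quo becomes 0
hmax <- apply(abs(dm), 2, max)      # column-wise maximum absolute value
hmax[hmax == 0] <- 1                # avoid NA from dividing by zero, as in the source
dm <- dm / hmax[col(dm)]            # 2nd transform: values now lie in [-1, 1]
```

After these steps the reference points are rescaled by the same `hmax`, so the value function operates on comparable magnitudes.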
#' Transform a decision Matrix into a trp value matrix
#'
#' This function is based on the value function of the tri-reference point (trp)
#' theory. It is an auxiliary function, which intends to facilitate the work and
#' readability of \code{\link{trpValueMatrix.oneAttr}} and \code{\link{trpValueMatrix}}. It takes
#' a matrix and the three given reference points (MR, SQ, G) as a vector
#' \code{tri.refps} and applies the trp value function \code{trpValueFunction} to
#' each element of the matrix. It also takes into account the four free \code{beta}
#' parameters of the function.
#'
#' @param aMatrix a non-empty matrix, tipically with one column since this
#' function is called one attribute at a time by
#' \code{trpValueMatrix.oneAttr}.
#' @param triRefps numeric vector of length three giving the reference points
#'   \code{c(mr, sq, g)}: Minimum Requirements, Status Quo and Goal.
#' @param beta(s) numeric arguments representing the psychological impact of an
#'   outcome equaling failure (_f), loss (_l), gain (_g) or success (_s). Default
#'   values are taken from our reference paper \code{(5, 1.5, 1, 3)}. See details.
#'
#' @details The function tests that MR < SQ < G.
#'
#' The beta arguments are important arguments that give form to the value function proposed in [1].
#' A higher number represents a higher relative psychological impact on the decision maker. Since in [1] it is assumed that the
#' reference point 'Minimum Requirement' has a greater impact when it is not reached (failure aversion), it should have a higher beta, so in general
#' \code{beta_f > beta_s > beta_l > beta_g} (matching the defaults). See our reference paper for a detailed theoretical background.
#' @return returns a matrix with the outputs of the trp value function for each of its elements
#'
#' @references [1] Wang, X. T.; Johnson, Joseph G. (2012) \emph{A tri-reference
#' point theory of decision making under risk. }Journal of Experimental
#' Psychology
#'
#' [2]Wang, X. T.; Johnson, Joseph G. (2012) \emph{Supplemental Material for: A tri-reference
#' point theory of decision making under risk. }Journal of Experimental
#' Psychology
#'
#' @examples # Runnable
#' trpValueFunction(aMatrix = matrix(1:6, 2, 3), triRefps = c(2,3,4.5))
#' trpValueFunction(matrix(1:16, 16, 1), triRefps = c(4, 8.9, 12.5), beta_f = 7)
#'
#' @export
trpValueFunction <- function(aMatrix, triRefps, beta_f = 5, beta_l = 1.5, beta_g = 1, beta_s = 3) {
mr <- triRefps[1]
sq <- triRefps[2]
g <- triRefps[3]
result <- apply(aMatrix, 1:2, trpValueFunction_extend, mr, sq, g, beta_f, beta_l, beta_g, beta_s)
result
}
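`trpValueFunction_extend` (defined next in the source) implements the per-element value function of [1]. As a hypothetical illustration of the four-region, step-shaped form suggested by the four beta parameters, one possible formulation is sketched below; the region boundaries and return values are assumptions, not necessarily the package's exact implementation:

```r
# Hypothetical step-shaped TRP value sketch: one psychological weight per
# region (failure / loss / gain / success). Boundary conventions are assumed.
trp_step <- function(x, mr, sq, g,
                     beta_f = 5, beta_l = 1.5, beta_g = 1, beta_s = 3) {
  stopifnot(mr < sq, sq < g)     # the ordering the documentation requires
  if (x < mr) return(-beta_f)    # failure: below the minimum requirements
  if (x < sq) return(-beta_l)    # loss: between MR and SQ
  if (x < g)  return(beta_g)     # gain: between SQ and the goal
  beta_s                         # success: goal reached or exceeded
}
```

With the default betas this sketch reproduces the ordering beta_f > beta_s > beta_l > beta_g (failure aversion strongest, then success seeking).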
#' Tri-Reference Point value function for one element
#'
#' Auxiliary function: it is based on the value function of the tri-reference
#' point (trp) theory. It's called by \code{\link{trpValueFunction}}, it takes
#' one element and puts it through the trp value function as seen in reference
#' [1]. Not vectorised.
#'
#' @param x one numeric value
#' @param mr numeric - Minimum Requirements is the lowest reference point
#' @param sq numeric - Status Quo reference point
#' @param g numeric - Goal reference point
#' @param beta_f,beta_l,beta_g,beta_s numeric arguments representing the psychological
#' impact of an outcome equaling failure (_f), loss (_l), gain (_g) or success (_s).
#' The defaults \code{(5, 1.5, 1, 3)} are based on our reference paper. See references.
#'
#' @details The function tests that \code{MR < SQ < G}.
#'
#' The beta arguments give form to the value
#' function proposed in [1]. A higher number represents a higher relative
#' psychological impact on the decision maker. Since [1] assumes that
#' the reference point 'Minimum Requirement' has a greater impact when it is not
#' reached (failure aversion), it should have a higher beta, so in general
#' \code{beta_f > beta_l > beta_g > beta_s}. See our reference paper for a
#' detailed theoretical background.
#'
#' On reference points for cost-type attributes: for a cost attribute a lower
#' value is better for the user, and this should also hold for the three
#' reference points. So, contrary to normal/benefit attributes, for cost
#' attributes the reference points should satisfy \code{mr > sq > g}.
#' @return the output of v(x), where v is the trp value function of [1].
#'
#' @references [1] Wang, X. T.; Johnson, Joseph G. (2012) \emph{A tri-reference
#' point theory of decision making under risk. }Journal of Experimental
#' Psychology
#'
#' [2] Wang, X. T.; Johnson, Joseph G. (2012) \emph{Supplemental Material for: A tri-reference
#' point theory of decision making under risk. }Journal of Experimental
#' Psychology
#'
#' @examples # Runnable
#' trpValueFunction_extend(0.18, mr = 0.15, sq = 0.55, g = 1.10)
#' trpValueFunction_extend(4, mr = 1, sq = 3, g = 8, beta_f = 7, beta_s = 4)
#'
#' @export
trpValueFunction_extend <- function(x, mr = NULL, sq = NULL, g = NULL , beta_f = 5, beta_l = 1.5, beta_g = 1, beta_s = 3) {
if(mr >= sq | mr >= g) stop("MR cannot be greater or equal to SQ or G")
if(sq >= g) stop("SQ cannot be greater or equal to G")
if (x < mr) result <- mr*beta_f
if (x >= mr & x < sq) result <- x*beta_l
if (x >= sq & x < g) result <- x*beta_g
if (x >= g) result <- g*beta_s
result
}
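# The piecewise logic above is language-agnostic. Below is a minimal Python sketch of the
# same value function (a hypothetical mirror for illustration, not part of this package):
# failures are clamped at mr, successes at g, and the middle regions scale x itself.

```python
def trp_value(x, mr, sq, g, beta_f=5, beta_l=1.5, beta_g=1, beta_s=3):
    # Piecewise impact-weighted value, mirroring trpValueFunction_extend.
    if not (mr < sq < g):
        raise ValueError("require mr < sq < g")
    if x < mr:
        return mr * beta_f   # failure region: clamped at mr
    if x < sq:
        return x * beta_l    # loss region
    if x < g:
        return x * beta_g    # gain region
    return g * beta_s        # success region: clamped at g

print(trp_value(4, mr=1, sq=3, g=8))  # gain region -> 4
```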
# Developer notes:
# - cost_ids: enter normal reference points and they will be converted; cost_ids has to
#   match the attribute you are inputting. tri.refps need no conversion if the attribute
#   is of type cost.
# - This function is the interface between the weights and your trp.ValueMatrix. Unlike
#   overallPV for prospect theory, here the attribute weights must be computed manually
#   with your desired function (e.g. ww <- getAttrWeights(...)), and the trp.ValueMatrix
#   separately as well (e.g. trp.VM <- trpValueMatrix(...)); both are then passed to this
#   function to obtain the desired result.
# - The "_extend" suffix marks major functions that take other functions' results as
#   input rather than the normal user inputs.
#' Runs a simple additive weighting function over matrices
#'
#' Auxiliary function: takes a matrix and a numeric vector and returns the
#' overall weighted values for each row of the matrix by means of a simple
#' additive weighting function.
#'
#' @param trp.ValueMatrix generally a \emph{list} of matrices from different
#' users, such as the output of \code{\link{trpValueMatrix}}. One matrix is
#' accepted as input but it will be coerced to a list.
#' @param weight generally a \emph{list} of weights from different users, such
#' as the output of \code{\link{getAttrWeights}}. One vector is also accepted;
#' if there is more than one matrix, the function will try to recycle the
#' weight vector.
#'
#'
#' @details The columns of the matrix should be different attributes of a
#' product or setup, and the weight vector should contain a numeric value for
#' each attribute, so that \code{ncol(trp.ValueMatrix) = length(weight)}. Both
#' parameters are vectorised, so you can enter a list of matrices in
#' \code{trp.ValueMatrix} and a list of vectors in \code{weight}. A matrix in
#' the first argument or a vector in the second will be coerced into a list.
#'
#' If some elements of the output list are called \code{$<NA>}, then try to
#' avoid recycling by checking your \code{weight} input.
#'
#' @return a (list of) vector(s) of overall prospect values
#'
#' @examples #Runnable
#' overallPV_interface(trp.ValueMatrix = matrix(1:8, 2, 4), weight = c(0.25, 0.3, 0.15, 0.3))
#' overallPV_interface(matrix(1:32, 16, 2), c(0.72, 0.25))
#' overallPV_interface(list(m1 = matrix(1:32, 16, 2), m2 = matrix(1:14, 7, 2)),
#' weight = c(100, 200)) # weight will be recycled: used on both matrices
#' overallPV_interface(list(m1 = matrix(1:32, 16, 2), m2 = matrix(1:14, 7, 2)),
#' list(weight1 = c(100, 200), weight2 = c(20, 50)))
#'
#' #Not Runnable
#' overallPV_interface(aLargeListOfMatrices, weight = c(0.1, 0.2, 0.62, 0.05, 0.03))
#' overallPV_interface(aLargeListOfMatrices, aLargeListOfVectors) #both arguments should have equal length
#' @export
overallPV_interface <- function (trp.ValueMatrix, weight = NULL) {
if(is.null(weight) | is.null(trp.ValueMatrix)) {
stop("You need to provide both arguments: trp.ValueMatrix and their weights")
}
if(is.vector(weight) & !is.list(weight)) {
weight <- list("oneVector" = weight)
}
if(is.matrix(trp.ValueMatrix) & !is.list(trp.ValueMatrix)) {
trp.ValueMatrix <- list(oneMatrix = trp.ValueMatrix)
}
tryCatchResult = tryCatch({
trp.overallPV <- mapply(overall_pv_extend, trp.ValueMatrix, weight, SIMPLIFY = F) ##Perhaps mapply when data.frame, make weights as list?!
}, warning = function(condition) {
message("Probably amount of users differs from amount of weightVectors and they cannot be recycled.")
}, error = function(condition) {
errorText <- paste0("Number of columns on your matrices:", ncol(trp.ValueMatrix[[1]])," differs from the length of at least one weight vector")
message("Also possible: amount of matrices (users) differs from amount of weightVectors and the latter could not be recycled.")
stop(errorText)
}, finally={
})
trp.overallPV
}
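# The per-matrix step that this interface dispatches to is a simple additive weighting:
# one weighted row sum per alternative. A small Python sketch of that core computation
# (illustration only; the function name is made up):

```python
def overall_value(value_matrix, weights):
    # Simple additive weighting: one overall score per row.
    if any(len(row) != len(weights) for row in value_matrix):
        raise ValueError("each row must have one value per weight")
    return [sum(v * w for v, w in zip(row, weights)) for row in value_matrix]

# R's matrix(1:8, 2, 4) is column-major, so its rows are (1,3,5,7) and (2,4,6,8)
print(overall_value([[1, 3, 5, 7], [2, 4, 6, 8]], [1, 2, 3, 4]))  # [50, 60]
```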
# Does not require documentation: auxiliary function that shifts tri.refps monotonically so that sq = 0
substract_sq <- function(x, status_quo) {
res <- (-status_quo + x)
res
}
|
e799031cd0badc7b86631c6be45cb8334456e08e | 5d28ffcf29ea0f190a41010fac518e7cf5a9727b | /man/ez.whos.Rd | 6ccf1fa222508a8c0b939ad0a0b298908527b4f7 | [
"MIT"
] | permissive | rmtsoa/ezmisc | c300dea8d68299f62307bfc71990c9c4dba46aa8 | 70ef4e0efc83ba22081dca87bb43081d5f58ff96 | refs/heads/master | 2021-01-12T05:16:50.924349 | 2016-12-15T16:12:39 | 2016-12-15T16:12:39 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 406 | rd | ez.whos.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/os.R
\name{ez.whos}
\alias{ez.whos}
\title{alias of \code{\link{sessionInfo}}, \code{\link{ez.who}}}
\usage{
ez.whos(package = NULL)
}
\value{
Print version information about R, the OS and attached or loaded packages.
}
\description{
alias of \code{\link{sessionInfo}}, \code{\link{ez.who}}
}
\seealso{
\code{\link{objects}}
}
|
d8ca8f0f75a890b9b25458f99e8979284d0aad87 | 2e661bab58e5d83fb7a54fd97a6d180e1a30a3f1 | /R/auxiliary.R | 84f43f176e38e2e03e358e224877da78c43775fa | [] | no_license | cran/InspectChangepoint | cb63f7dc5168cd2fbf2e86ca8c55162e50c4489c | 789b6a84eb1fc9e06dc97ee7e5ed0085e173229d | refs/heads/master | 2022-05-19T19:31:46.071576 | 2022-05-03T06:00:31 | 2022-05-03T06:00:31 | 61,978,349 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 3,406 | r | auxiliary.R | ########### Matrices and vectors ##############
#' Norm of a vector
#' @description Calculate the entrywise L_q norm of a vector or a matrix
#' @param v a vector of real numbers
#' @param q a nonnegative real number or Inf
#' @param na.rm boolean, whether to remove NA before calculation
#' @return the entrywise L_q norm of a vector or a matrix
vector.norm <- function(v, q = 2, na.rm = FALSE){
    if (na.rm) v <- na.omit(v)
    M <- max(abs(v))
    if (M == 0) return(0)
    if (q == 0) return(sum(v != 0)) # L0 count: must not be rescaled by M
    v <- v/M
    if (q == Inf) {
        nm <- max(abs(v))
    } else if (q > 0) {
        nm <- (sum(abs(v)^q))^(1/q)
    } else {
        return(NaN)
    }
    return(nm * M) # undo the rescaling applied for numerical stability
}
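# The rescale-by-the-maximum trick above guards abs(v)^q against floating-point
# overflow for large entries. A rough Python equivalent (hypothetical helper,
# not part of the package):

```python
import math

def lq_norm(v, q=2.0):
    # Divide by the largest magnitude first so abs(x)**q cannot overflow,
    # then scale the norm of the rescaled vector back up by that magnitude.
    m = float(max(abs(x) for x in v))
    if m == 0:
        return 0.0
    if q == 0:
        return float(sum(x != 0 for x in v))  # L0 "norm": count of nonzeros
    if q < 0:
        return float("nan")
    w = [abs(x) / m for x in v]
    if math.isinf(q):
        return m * max(w)
    return m * sum(x ** q for x in w) ** (1.0 / q)

print(lq_norm([3.0, 4.0]))  # 5.0
```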
#' Normalise a vector
#' @param v a vector of real numbers
#' @param q a nonnegative real number or Inf
#' @param na.rm boolean, whether to remove NA before calculation
#' @return normalised version of this vector
vector.normalise <- function(v, q = 2, na.rm = FALSE){
return(v / vector.norm(v, q, na.rm))
}
#' Clipping a vector from above and below
#' @description Clipping vector or matrix x from above and below
#' @param x a vector of real numbers
#' @param upper clip above this value
#' @param lower clip below this value
#' @return the clipped vector or matrix
vector.clip <- function(x, upper = Inf, lower = -upper){
if (upper < lower) stop("upper limit cannot be below lower limit")
x[x<lower]<-lower;
x[x>upper]<-upper;
x
}
#' Soft thresholding a vector
#' @param x a vector of real numbers
#' @param lambda soft thresholding value
#' @return a vector of the same length
#' @description entries of v are moved towards 0 by the amount lambda until they hit 0.
vector.soft.thresh <- function(x, lambda){
sign(x)*pmax(0,(abs(x)-lambda))
}
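# Soft thresholding shrinks every entry toward zero by lambda and sets anything
# within lambda of zero exactly to zero. A scalar Python sketch (illustrative,
# hypothetical name):

```python
def soft_threshold(x, lam):
    # Move x toward zero by lam; values within lam of zero become exactly 0.
    s = max(0.0, abs(x) - lam)
    if s == 0:
        return 0.0
    return s if x > 0 else -s

print([soft_threshold(v, 1.5) for v in [-3.0, -1.0, 0.0, 2.0]])  # [-1.5, 0.0, 0.0, 0.5]
```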
#' Generate a random unit vector in R^n
#' @param n length of the random vector
random.UnitVector <- function(n){
v = rnorm(n)
v/vector.norm(v)
}
#' Noise standardisation for multivariate time series.
#' @description Each row of the input matrix is normalised by the estimated standard deviation computed through the median absolute deviation of increments.
#' @param x An input matrix of real values.
#' @details This is an auxiliary function used by the \code{InspectChangepoint} package.
#' @return A rescaled matrix of the same size is returned.
#' @examples
#' x <- matrix(rnorm(40),5,8) * (1:5)
#' x.rescaled <- rescale.variance(x)
#' x.rescaled
#' @export
rescale.variance <- function(x){
p <- dim(x)[1]
n <- dim(x)[2]
for (j in 1:p){
v <- x[j,]
v <- v[!is.na(v)]
scale <- mad(diff(v))/sqrt(2)
x[j,] <- x[j,] / scale
}
return(x)
}
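# The per-row scale used above is mad(diff(v))/sqrt(2): R's mad() multiplies the
# median absolute deviation by 1.4826 to be consistent for Gaussian noise, and the
# sqrt(2) accounts for differencing doubling the noise variance. A Python sketch of
# that estimate (hypothetical helper, assuming the 1.4826 Gaussian constant):

```python
import math
import statistics

def mad_noise_scale(series):
    # Robust noise-sd estimate from first differences, mirroring mad(diff(v))/sqrt(2).
    d = [b - a for a, b in zip(series, series[1:])]
    med = statistics.median(d)
    mad = statistics.median([abs(x - med) for x in d]) * 1.4826
    return mad / math.sqrt(2)
```

A full rescaling would then divide each row of the data matrix by this value.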
#' Print percentage
#' @param ind a vector of for-loop iterators
#' @param tot a vector of for loop lengths
#' @return on screen output of percentage
printPercentage <- function (ind, tot){
ind <- as.vector(ind); tot <- as.vector(tot)
if ((length(tot) > 1) & (length(ind) == 1)) {ind <- match(ind, tot); tot <- length(tot)}
len <- length(ind)
contrib <- rep(1,len)
if (len > 1) {
for (i in (len-1):1) contrib[i] <- contrib[i+1] * tot[i+1]
}
grand_tot <- contrib[1] * tot[1]
count <- (sum(contrib * (ind - 1)) + 1)
out <- ""
if (sum(ind-1)>0) out <- paste0(rep("\b", nchar(round((count-1)/grand_tot * 100))+1), collapse = "")
out <- paste0(out, round(count/grand_tot*100), "%")
if (identical(ind, tot)) out <- paste0(out, '\n')
cat(out)
return(NULL)
}
|
6d47be695b3c61ad3d336b67933febd4cc78af17 | bb9c41ac9228f6927dfa865c042cf8d24b7a8c86 | /cachematrix.R | 989b9a8d73c12a36f265ed2fcc97a5dfac05b4b2 | [] | no_license | akshatsprakash/ProgrammingAssignment2 | d7919fa57072210749089057e928114685cc9e8f | cfc3ce3b5aa8a39bb11229c983bf421284c9625c | refs/heads/master | 2020-12-25T13:23:51.063944 | 2016-01-23T10:34:16 | 2016-01-23T10:34:16 | 50,231,823 | 0 | 0 | null | 2016-01-23T09:17:54 | 2016-01-23T09:17:52 | null | UTF-8 | R | false | false | 1,407 | r | cachematrix.R | ## Author: ASP Date: 23-Jan-2016
## File created as part of Coursera R-Programming, Week 3
## Assignment to be assessed by course peers
## Creation and submission via GitHub
## function creates a special "matrix" object that can cache its inverse
makeCacheMatrix <- function(x = matrix()) {
inv <- NULL # nothing in beginning
set <- function(y) {
x <<- y
inv <<- NULL # reset
}
get <- function() x
setInverse <- function(solve) inv <<- solve
getInverse <- function() inv
list(set = set, get = get,
setInverse = setInverse,
getInverse = getInverse)
}
## function computes the inverse of the special "matrix" returned by
## makeCacheMatrix above. If the inverse has already been calculated
## it retrieves the inverse from the cache.
cacheSolve <- function(x, ...) {
## Return a matrix that is the inverse of 'x'
inv <- x$getInverse() # does it exist in cache?
if(!is.null(inv)) {
message("getting cached inverse data")
        return(inv)                  # it does exist, look no further. Hurray!
}
data <- x$get() # is not cached...
inv <- solve(data, ...) # work to do here and ...
x$setInverse(inv) # save the result in x's cache
message("caching inverse data")
inv # return
} |
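# The same compute-once/cache pattern can be sketched in Python with a closure-backed
# object. This is a hypothetical illustration, not part of the assignment; inv2x2 is a
# made-up stand-in for R's solve().

```python
def make_cache_matrix(m):
    # Closure-style object: the matrix plus a one-slot cache for its inverse.
    cache = {"inv": None}
    def get():
        return m
    def get_inverse():
        return cache["inv"]
    def set_inverse(inv):
        cache["inv"] = inv
    return {"get": get, "get_inverse": get_inverse, "set_inverse": set_inverse}

def cache_solve(cm, invert):
    inv = cm["get_inverse"]()
    if inv is not None:
        return inv                  # cache hit: skip recomputation
    inv = invert(cm["get"]())
    cm["set_inverse"](inv)
    return inv

def inv2x2(a):
    # Tiny 2x2 inverse, illustration only.
    (p, q), (r, s) = a
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

m = make_cache_matrix([[4.0, 7.0], [2.0, 6.0]])
first = cache_solve(m, inv2x2)    # computed
second = cache_solve(m, inv2x2)   # served from the cache
print(first is second)  # True
```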
be75b5700df52f721ee91dcff97f8fd7ed1bdcc6 | 0da4f9195a7e1c4651b709463a66308dc7cf771e | /R/approach_ctree.R | 3c73c0d5adca46b4d67cce87da80470c758119d3 | [
"MIT"
] | permissive | NorskRegnesentral/shapr | 6437ab6d4bb2de004dc5937397ab8e15ab2cf43c | 25f207d2e20835cb114ca13224fac49b91443bdb | refs/heads/master | 2023-08-17T22:12:15.012082 | 2023-08-17T14:02:38 | 2023-08-17T14:02:38 | 133,624,629 | 128 | 27 | NOASSERTION | 2023-09-11T11:13:53 | 2018-05-16T07:05:13 | R | UTF-8 | R | false | false | 10,216 | r | approach_ctree.R | #' @rdname setup_approach
#'
#' @param ctree.mincriterion Numeric scalar or vector. (default = 0.95)
#' Either a scalar or vector of length equal to the number of features in the model.
#' Value is equal to 1 - \eqn{\alpha} where \eqn{\alpha} is the nominal level of the conditional independence tests.
#' If it is a vector, this indicates which value to use when conditioning on various numbers of features.
#'
#' @param ctree.minsplit Numeric scalar. (default = 20)
#' Determines the minimum sum of weights in a node required for the node to be considered for splitting.
#'
#' @param ctree.minbucket Numeric scalar. (default = 7)
#' Determines the minimum sum of weights in a terminal node required for a split
#'
#' @param ctree.sample Boolean. (default = TRUE)
#' If TRUE, then the method always samples `n_samples` observations from the leaf nodes (with replacement).
#' If FALSE and the number of observations in the leaf node is less than `n_samples`,
#' the method will take all observations in the leaf.
#' If FALSE and the number of observations in the leaf node is more than `n_samples`,
#' the method will sample `n_samples` observations (with replacement).
#' This means that there will always be sampling in the leaf unless
#' `sample` = FALSE AND the number of obs in the node is less than `n_samples`.
#'
#' @inheritParams default_doc_explain
#'
#' @export
setup_approach.ctree <- function(internal,
ctree.mincriterion = 0.95,
ctree.minsplit = 20,
ctree.minbucket = 7,
ctree.sample = TRUE, ...) {
defaults <- mget(c("ctree.mincriterion", "ctree.minsplit", "ctree.minbucket", "ctree.sample"))
internal <- insert_defaults(internal, defaults)
return(internal)
}
#' @inheritParams default_doc
#'
#' @rdname prepare_data
#' @export
#' @keywords internal
prepare_data.ctree <- function(internal, index_features = NULL, ...) {
x_train <- internal$data$x_train
x_explain <- internal$data$x_explain
n_explain <- internal$parameters$n_explain
n_samples <- internal$parameters$n_samples
n_features <- internal$parameters$n_features
ctree.mincriterion <- internal$parameters$ctree.mincriterion
ctree.minsplit <- internal$parameters$ctree.minsplit
ctree.minbucket <- internal$parameters$ctree.minbucket
ctree.sample <- internal$parameters$ctree.sample
labels <- internal$objects$feature_specs$labels
X <- internal$objects$X
dt_l <- list()
if (is.null(index_features)) {
features <- X$features
} else {
features <- X$features[index_features]
}
# this is a list of all 2^M trees (where number of features = M)
all_trees <- lapply(
X = features,
FUN = create_ctree,
x_train = x_train,
mincriterion = ctree.mincriterion,
minsplit = ctree.minsplit,
minbucket = ctree.minbucket
)
for (i in seq_len(n_explain)) {
l <- lapply(
X = all_trees,
FUN = sample_ctree,
n_samples = n_samples,
x_explain = x_explain[i, , drop = FALSE],
x_train = x_train,
n_features = n_features,
sample = ctree.sample
)
dt_l[[i]] <- data.table::rbindlist(l, idcol = "id_combination")
dt_l[[i]][, w := 1 / n_samples]
dt_l[[i]][, id := i]
if (!is.null(index_features)) dt_l[[i]][, id_combination := index_features[id_combination]]
}
dt <- data.table::rbindlist(dt_l, use.names = TRUE, fill = TRUE)
dt[id_combination %in% c(1, 2^n_features), w := 1.0]
# only return unique dt
dt2 <- dt[, sum(w), by = c("id_combination", labels, "id")]
setnames(dt2, "V1", "w")
return(dt2)
}
#' Make all conditional inference trees
#'
#' @param given_ind Numeric value. Indicates which features are conditioned on.
#'
#' @inheritParams default_doc
#'
#' @param mincriterion Numeric scalar or vector. (default = 0.95)
#' Either a scalar or vector of length equal to the number of features in the model.
#' Value is equal to 1 - \eqn{\alpha} where \eqn{\alpha} is the nominal level of the conditional independence tests.
#' If it is a vector, this indicates which value to use when conditioning on various numbers of features.
#'
#' @param minsplit Numeric scalar. (default = 20)
#' Determines the minimum sum of weights in a node required for the node to be considered for splitting.
#'
#' @param minbucket Numeric scalar. (default = 7)
#' Determines the minimum sum of weights in a terminal node required for a split
#'
#' @param use_partykit String. In some semi-rare cases `partyk::ctree` runs into an error related to the LINPACK
#' used by R. To get around this problem, one may fall back to using the newer (but slower) `partykit::ctree`
#' function, which is a reimplementation of the same method. Setting this parameter to `"on_error"` (default)
#' falls back to `partykit::ctree`, if `party::ctree` fails. Other options are `"never"`, which always
#' uses `party::ctree`, and `"always"`, which always uses `partykit::ctree`. A warning message is
#' created whenever `partykit::ctree` is used.
#'
#' @return List with conditional inference tree and the variables conditioned/not conditioned on.
#'
#' @keywords internal
#' @author Annabelle Redelmeier, Martin Jullum
create_ctree <- function(given_ind,
x_train,
mincriterion,
minsplit,
minbucket,
use_partykit = "on_error") {
dependent_ind <- seq_len(ncol(x_train))[-given_ind]
if (length(given_ind) %in% c(0, ncol(x_train))) {
datact <- list()
} else {
y <- x_train[, dependent_ind, with = FALSE]
x <- x_train[, given_ind, with = FALSE]
df <- data.table::data.table(cbind(y, x))
colnames(df) <- c(paste0("Y", seq_len(ncol(y))), paste0("V", given_ind))
ynam <- paste0("Y", seq_len(ncol(y)))
fmla <- as.formula(paste(paste(ynam, collapse = "+"), "~ ."))
# Run party:ctree if that works. If that fails, run partykit instead
if (use_partykit == "on_error") {
datact <- tryCatch(expr = {
party::ctree(fmla,
data = df,
controls = party::ctree_control(
minbucket = minbucket,
mincriterion = mincriterion
)
)
}, error = function(ex) {
warning("party::ctree ran into the error: ", ex, "Using partykit::ctree instead!")
partykit::ctree(fmla,
data = df,
control = partykit::ctree_control(
minbucket = minbucket,
mincriterion = mincriterion,
splitstat = "maximum"
)
)
})
} else if (use_partykit == "never") {
datact <- party::ctree(fmla,
data = df,
controls = party::ctree_control(
minbucket = minbucket,
mincriterion = mincriterion
)
)
} else if (use_partykit == "always") {
warning("Using partykit::ctree instead of party::ctree!")
datact <- partykit::ctree(fmla,
data = df,
control = partykit::ctree_control(
minbucket = minbucket,
mincriterion = mincriterion,
splitstat = "maximum"
)
)
} else {
stop("use_partykit needs to be one of 'on_error', 'never', or 'always'. See ?create_ctree for details.")
}
}
return(list(tree = datact, given_ind = given_ind, dependent_ind = dependent_ind))
}
#' Sample ctree variables from a given conditional inference tree
#'
#' @param tree List. Contains tree which is an object of type ctree built from the party package.
#' Also contains given_ind, the features to condition upon.
#'
#' @param n_samples Numeric. Indicates how many samples to use for MCMC.
#'
#' @param x_explain Matrix, data.frame or data.table with the features of the observation whose
#' predictions ought to be explained (test data). Dimension \eqn{1 \times p} or \eqn{p \times 1}.
#'
#' @param x_train Matrix, data.frame or data.table with training data.
#'
#' @param n_features Positive integer. The number of features.
#'
#' @param sample Boolean. True indicates that the method samples from the terminal node
#' of the tree whereas False indicates that the method takes all the observations if it is
#' less than n_samples.
#'
#' @return data.table with `n_samples` (conditional) Gaussian samples
#'
#' @keywords internal
#'
#' @author Annabelle Redelmeier
sample_ctree <- function(tree,
n_samples,
x_explain,
x_train,
n_features,
sample) {
datact <- tree$tree
using_partykit <- (class(datact)[1] != "BinaryTree")
cnms <- colnames(x_explain)
if (length(tree$given_ind) %in% c(0, n_features)) {
ret <- x_explain
} else {
given_ind <- tree$given_ind
dependent_ind <- tree$dependent_ind
x_explain_given <- x_explain[,
given_ind,
drop = FALSE,
with = FALSE
] #
xp <- x_explain_given
colnames(xp) <- paste0("V", given_ind) # this is important for where() below
if (using_partykit) {
fit.nodes <- predict(
object = datact,
type = "node"
)
# newdata must be data.frame + have the same colnames as x
pred.nodes <- predict(
object = datact, newdata = xp,
type = "node"
)
} else {
fit.nodes <- party::where(object = datact)
# newdata must be data.frame + have the same colnames as x
pred.nodes <- party::where(object = datact, newdata = xp)
}
rowno <- seq_len(nrow(x_train))
use_all_obs <- !sample & (length(rowno[fit.nodes == pred.nodes]) <= n_samples)
if (use_all_obs) {
newrowno <- rowno[fit.nodes == pred.nodes]
} else {
newrowno <- sample(rowno[fit.nodes == pred.nodes], n_samples,
replace = TRUE
)
}
depDT <- data.table::data.table(x_train[newrowno,
dependent_ind,
drop = FALSE,
with = FALSE
])
givenDT <- data.table::data.table(x_explain[1,
given_ind,
drop = FALSE,
with = FALSE
])
ret <- cbind(depDT, givenDT)
data.table::setcolorder(ret, colnames(x_train))
colnames(ret) <- cnms
}
return(data.table::as.data.table(ret))
}
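# The leaf-matching step in sample_ctree boils down to: collect the training rows whose
# terminal node equals the query point's node, then either take the whole leaf (when
# sampling is off and the leaf is small) or resample it with replacement. A hypothetical
# Python sketch of just that step:

```python
import random

def sample_same_leaf(fit_nodes, pred_node, n_samples, always_sample=True, rng=None):
    # fit_nodes: terminal-node id per training row; pred_node: node of the query point.
    rng = rng or random.Random(1)
    pool = [i for i, node in enumerate(fit_nodes) if node == pred_node]
    if not always_sample and len(pool) <= n_samples:
        return pool                                  # use every observation in the leaf
    return [rng.choice(pool) for _ in range(n_samples)]

fit = [1, 2, 1, 2, 1, 3]
print(sample_same_leaf(fit, 1, n_samples=10, always_sample=False))  # [0, 2, 4]
```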
|
122a3dbb01017910afc39357b9f398c6990e5928 | 0ae311da8066c14fefe5fdb5eb2a66f8fc7852b7 | /results/analyze2.R | f15c3126d97b0d977983963d098c595991829512 | [] | no_license | multimediaeval/2017-AcousticBrainz-Genre-Task | ca86fca4d0a97a4aa47a81c45819456f31fe927a | 625e47f0e86e27bbf78cefb07a0dae0420f202e6 | refs/heads/master | 2022-04-29T10:13:00.038971 | 2022-03-20T12:15:20 | 2022-03-20T12:15:20 | 89,104,213 | 3 | 1 | null | null | null | null | UTF-8 | R | false | false | 953 | r | analyze2.R | a <- read.csv("allscores.csv")
b <- read.csv("allscores2.csv")
w <- 320
h <- 240
for(x in c("Ftrackall", "Ftrackgen", "Ptrackall", "Ptrackgen", "Rtrackall", "Rtrackgen",
"Flabelall", "Flabelgen", "Plabelall", "Plabelgen", "Rlabelall", "Rlabelgen")) {
main = paste0(substr(x, 1, 1), " - ",
ifelse(grepl("track", x), "per track", "per label"), ", ",
ifelse(grepl("all$", x), "all labels",
ifelse(grepl("gen$", x), "genre labels", "subgenre labels")))
png(filename = paste0(w,"x",h,"_", x, ".png"), width = w, height = h)
par(mar=c(3.1,2.9,2,6), mgp=c(1.9,.7,0), xpd = FALSE)
plot(NA, xlim=0:1, ylim=0:1, xlab = "Without expansion", ylab = "With expansion", main = main)
abline(0:1)
points(a[,x], b[,x], col = a$team, pch = gsub("task", "", a$task))
par(xpd = TRUE)
legend(1+30/w, 1, col = seq(levels(a$team)), legend = levels(a$team), pch = 19, bty = "n")
dev.off()
}
|
665accd2da01c90a2dfb015b10d38ea01d1a3480 | 3285eec39395ad2545d850c8ef6cadb6837e2b39 | /R/read_data.R | 8f63fb03a276d3faaa286602e291d37b0cfbeb4d | [] | no_license | Allisterh/MPHM | be0d08d0045072cd755750f17df76011901653fb | 9c78b07920fe7af828eef8bfb7dffc37bfa8ffba | refs/heads/master | 2023-03-15T21:23:19.441289 | 2019-08-28T19:20:09 | 2019-08-28T19:20:09 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,227 | r | read_data.R | #' This function reads in the CPI data from two datasets.
#'
#' @param x is the filepath to the cp1904 spreadsheet
#' @param y is the filepath to the MacroVar spreadsheet
#' @import readxl
#' @import zoo
#' @import dplyr
#' @export
read_cpi <- function(x, y) {
x <- read_xlsx(x, sheet = "Monthly Series", skip = 3) %>%
rename(date = Period) %>%
select(date, ends_with("628")) %>%
mutate(date = as.yearqtr(date)) %>%
# Take quarterly averages for cpi
group_by(date) %>%
summarise_at(.vars = names(.)[-1], mean)
# The column names were formatted as M:(name):628. Here we only keep the names
country_list <- substr(names(x)[-1], 3, 4)
# rename columns by prepending "cpi_"
names(x)[-1] <- paste("cpi", country_list, sep = "_")
# Choosing columns containing AR and CO from MacroVar
y <- read_xlsx(y, sheet = "cpi") %>%
rename(date = 1) %>%
select(date, union(ends_with("AR"), ends_with("CO"))) %>%
mutate(date = as.yearqtr(date))
# Combine the two to make a complete CPI dataset
x %>%
select(-ends_with("AR")) %>%
full_join(y, by = "date")
}
#' This function reads in the housing market data and cleans
#' the date formatting
#'
#' @param x is the filepath to the credit data
#' @param cpi is the cpi data we deflate with
#' @import readxl
#' @import zoo
#' @import dplyr
#' @export
read_hc <- function(x, cpi) {
dat <- read_excel(x, sheet = "HouseCredit_NTcurr") %>%
rename(date = 1) %>%
mutate(date = as.yearqtr(date, format = "%Y-%m-%d"))
dat[dat == 0] <- NA
country_list <- colnames(dat)[colnames(dat) != "date"]
lapply(country_list, deflate, x = dat, cpi = cpi) %>%
reduce(full_join, by = "date") %>%
take_log()
}
#' This function reads in the property price data and cleans
#' the date formatting
#'
#' The returned data frame is deflated and in log form.
#'
#' @param x is the filepath to the price data
#' @param cpi is the cpi data
#' @param country_list is the list of countries
#' @import readxl
#' @import zoo
#' @import dplyr
#' @export
read_pp <- function(pps, ppl, pp_saar, cpi, country_list) {
pps <- read_excel(pps, sheet = "Quarterly Series", skip = 3) %>%
rename(date = Period) %>%
select(date, ends_with("R:628")) %>%
mutate(date = as.yearqtr(date))
names(pps)[-1] <- substr(names(pps)[-1], 3, 4)
ppl <- read_excel(ppl, sheet = "Quarterly Series", skip = 3) %>%
rename(date = Period) %>%
mutate(date = as.yearqtr(date))
# Reformat strings to only contain country names
names(ppl)[-1] <- substr(names(ppl)[-1], 3, 4)
ppl_complete <- read_excel(pp_saar, sheet = "HousePriceIndex") %>%
rename(date = 1) %>%
mutate(date = as.yearqtr(date)) %>%
# Merge SA&AR with ppl since they are both in nominal terms
full_join(ppl, by = "date")
pp <- lapply(country_list, select_pp,
pp = list("pps" = pps, "ppl" = ppl_complete, "cpi" = cpi)) %>%
reduce(full_join, by = "date") %>%
take_log()
  # Prepend RPP (real property price) to all columns except date
names(pp)[-1] <- paste("RPP", names(pp)[-1], sep = "_")
return(pp)
}
#' This function selects the appropriate price data for the given country
#'
#' @param country is the country name
#' @param pp is the pp data returned by `read_pp`
#' @import dplyr
#' @export
select_pp <- function(country, pp) {
# Load ppl and pps
ppl <- pp$ppl %>%
select(date, ends_with(country)) %>%
na.omit()
pps <- pp$pps %>%
select(date, ends_with(country)) %>%
na.omit()
# If pp-long has no data or pps has longer and valid data, use pps
if (ncol(ppl) == 1 || (length(pps$date) > length(ppl$date)) && ncol(pps) != 1) {
pps
} else{
# deflate ppl
deflate(ppl, cpi, country)
}
}
#' This function reads in the macroprudential policy actions and cleans
#' the date formatting
#'
#' @param x is the specific prudential policy
#' @param path is the file path
#' @import readxl
#' @import zoo
#' @import dplyr
#' @import stringr
#' @export
read_mp <- function(x, path){
mp <- read_excel(path, sheet = x, skip = 3) %>%
# Transpose the table to have countries as col names
t %>%
as.data.frame()
  # Use the first row (the country/action labels) as column names
  names(mp) <- as.character(unlist(mp[1,]))
  mp <- mp[-1, ]
# Reformat as mpaction_country
colnames(mp) <- paste(x, colnames(mp), sep = "_")
mp %>%
tibble::rownames_to_column("date") %>%
mutate(date = as.Date(as.numeric(date), origin = "1899-12-30")) %>%
na.omit() %>%
mutate(date = as.yearqtr(date)) %>%
select(-starts_with("Grand Total")) %>%
mutate_if(is.character, str_trim)
}
#' This function reads in coverage years for macroprudential actions.
#' For end dates that are not the last day of the quarter, we set them
#' to the quarter before.
#'
#' @param path is the file path
#' @import readxl
#' @import zoo
#' @import dplyr
#' @import timeDate
#' @export
read_cy <- function(path) {
read_excel(path, sheet = "Coverage years") %>%
# Filter out excessive rows
filter(!is.na(Country)) %>%
# Reformat start
mutate(mp_start = as.yearqtr(paste(Start, "1", sep = "-"))) %>%
# Check if date is last day in quarter. If not, move back to last quarter.
rename(mp_end = `Position (End) Date`) %>%
mutate(mp_end = if_else(mp_end == as.Date(timeLastDayInQuarter(mp_end)),
as.yearqtr(mp_end),
as.yearqtr(as.Date(timeFirstDayInQuarter(mp_end)) - 1))) %>% # Use last quarter as last date
select(Country, mp_start, mp_end)
}
#' @import tidyr
#' @export
read_rates <- function(path) {
read_excel(path, sheet = "Data") %>%
rename(date = 1) %>%
gather(., 'key', 'value', - date) %>%
mutate(q = as.yearqtr(date)) %>%
group_by(key) %>%
arrange(date) %>%
na.omit %>%
mutate(change_in_rates = c(NA, diff(value)))%>%
group_by(key, q) %>%
summarize(change_in_rates = sum(change_in_rates, na.rm = TRUE)) %>%
mutate(policy = ifelse(change_in_rates > 0, 'tighten',
ifelse(change_in_rates < 0, 'loosen',
ifelse(change_in_rates == 0, 'neither', NA)))) %>%
rename(date = q) %>%
select(date, key, policy) %>%
spread(., 'key', policy)
}
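#' The classification logic above sums the within-quarter rate changes and labels the
#' net move. A small Python sketch of that step (hypothetical names, illustration only):

```python
from collections import defaultdict

def quarterly_policy(dated_changes):
    # dated_changes: iterable of ((year, quarter), rate_change) pairs.
    # Sum the changes within each quarter, then label the net move.
    totals = defaultdict(float)
    for quarter, change in dated_changes:
        totals[quarter] += change
    return {q: ("tighten" if v > 0 else "loosen" if v < 0 else "neither")
            for q, v in totals.items()}

changes = [((2020, 1), 0.25), ((2020, 1), -0.25), ((2020, 2), 0.5), ((2020, 3), -0.25)]
print(quarterly_policy(changes))
```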
|
cb7b809f8a1d0e42876b10e32958531eefd34c8d | e706a7d76a4548173f1e09f957cfa262be330c7a | /1_population_dyn_fit/Analysis.R | 5b33a72ec3d0d51a80e06267097843462c8788f8 | [] | no_license | dieraz/prov-theo | 4dab0d9ae5b87e6109c6fd84996c832f25499da7 | 381fd281cf8f000b5aee9d9d62f124e9bcc0bcd7 | refs/heads/main | 2023-04-13T01:10:22.475014 | 2022-05-23T12:54:31 | 2022-05-23T12:54:31 | 369,817,986 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,827 | r | Analysis.R | library('fitR')
library('MASS')
library(tidyr)
library(ggplot2)
library('coda')
library(lattice)
library(reshape2)
library(plyr)
library(stringr)
library(gridExtra)
source('popdyn_det.R')
source('Functions.R')
load("POP_Haddon.RData")
load("POPDYN.ch1.RData")
my_mcmc.TdC2$acceptance.rate
trace1 <- mcmc(my_mcmc.TdC2$trace)
trace.burn1 <- burnAndThin(trace1, burn = 5000)
trace.burn.thin1 <- burnAndThin(trace.burn1, thin = 100)
theta1 <- colMeans(trace.burn.thin1)
load("POPDYN.ch2.RData")
my_mcmc.TdC2$acceptance.rate
trace2 <- mcmc(my_mcmc.TdC2$trace)
trace.burn2 <- burnAndThin(trace2, burn = 5000)
trace.burn.thin2 <- burnAndThin(trace.burn2, thin = 100)
theta2<- colMeans(trace.burn.thin2)
load("POPDYN.ch3.RData")
my_mcmc.TdC2$acceptance.rate
trace3 <- mcmc(my_mcmc.TdC2$trace)
trace.burn3 <- burnAndThin(trace3, burn = 5000)
trace.burn.thin3 <- burnAndThin(trace.burn3, thin = 100)
theta3 <- colMeans(trace.burn.thin3)
load("POPDYN.ch4.RData")
my_mcmc.TdC2$acceptance.rate
trace4 <- mcmc(my_mcmc.TdC2$trace)
trace.burn4 <- burnAndThin(trace4, burn = 5000)
trace.burn.thin4 <- burnAndThin(trace.burn4, thin = 100)
theta4 <- colMeans(trace.burn.thin4)
trace.info <- mcmc.list(trace.burn.thin1,trace.burn.thin2,trace.burn.thin3,trace.burn.thin4)
gelman.diag(trace.info)
trace.combined <- ldply(trace.info)
theta.bar <- colMeans(trace.combined[Popdyn_det$theta.names])
POP_Haddon <- fix_POP_Haddon
POP_Haddon<-POP_Haddon[complete.cases(POP_Haddon), ]
init.state <- c(N=50)
log.like.theta.bar <- dTrajObs_POPDYN(Popdyn_det, theta.bar, init.state, data = POP_Haddon,
log = TRUE)
D.theta.bar <- -2 * log.like.theta.bar
p.D <- var(-2 * trace.combined$log.density)/2
DIC <- D.theta.bar + 2 * p.D
plotFit_DE(Popdyn_det,theta.bar,init.state,data = POP_Haddon,n.replicates = 20)
|
c213faf323c55a59657af802c9194037f5adee05 | a2005588e3714a70314a2e748e5a3e2084262c56 | /plot2.R | e85004da9f8f36e3fa13740c7939444f28e34b6c | [] | no_license | Twilight-Hacker/ExPlot_Coursera2 | 8355c907103762ba1f82d45cb52fc5114bb20049 | 43a0e497f4c1c0ba7962b98704de0489d7f0cb48 | refs/heads/master | 2016-09-06T14:29:35.562145 | 2014-09-13T18:16:48 | 2014-09-13T18:16:48 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 404 | r | plot2.R | fulldata<-readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
# Baltimore City, Maryland has fips code "24510"; exact matching on the
# fips column is safer than grep(), which would also match substrings
data <- fulldata[fulldata$fips == "24510", ]
byyear <- aggregate(data$Emissions, list(data$year), FUN = sum)
png("plot2.png")
plot(byyear[,1],byyear[,2], type='b', main="Total Emissions by Year in Baltimore City, Maryland", xlab="Year", ylab="PM2.5 Emissions in tons", xaxt="n")
axis(1, at = seq(1999, 2008, by = 3))
dev.off() |
b9a3b0d4bfed97cf59accdddac43b64501aa6ee3 | 0b61fdadaaafb28829e1d7eccc07972f53f3aa3d | /man/write_landclim_maps.Rd | bae7e8c7a306c44ab07f3be08a05a2d6ee4ff039 | [] | no_license | mhhsanim/LandClimTools | 1e165ae435d4e4e043866bfc054b9fa8d651464d | 6c1a3c782990e2b4ebda51b0d37396f8868b2fe6 | refs/heads/master | 2020-12-11T09:04:07.487287 | 2016-04-26T19:05:51 | 2016-04-26T19:05:51 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,607 | rd | write_landclim_maps.Rd | \name{write_LandClim_maps}
\alias{write_LandClim_maps}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
%% ~~function to do ... ~~
Write landClim maps
}
\description{
%% ~~ A concise (1-5 lines) description of what the function does. ~~
Takes a raster stack in the appropriate resolution and writes LandClim maps in the required format.
}
\usage{
write_LandClim_maps(LandClimRasterStack, nodata_value = "-9999", lcResolution = 25, ex)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{LandClimRasterStack}{
A \code{RasterStack} containing the input maps (one layer per map) to be written in LandClim format.
}
  \item{nodata_value}{
Value written to the \code{nodata_value} field of the ESRI ASCII grid header, marking missing data. Defaults to \code{"-9999"}.
}
  \item{lcResolution}{
Cell size written to the \code{cellsize} field of the ESRI ASCII grid header. Defaults to 25.
}
  \item{ex}{
An \code{extent} object to which all layers are cropped before writing.
}
}
\details{
%% ~~ If necessary, more details than the description above ~~
}
\value{
No return value; called for its side effects. Writes \code{landClimMaps.tif} and one ESRI ASCII grid (\code{.asc}) file per layer to the current working directory.
}
\references{
%% ~put references to the literature/web site here ~
}
\author{
%% ~~who you are~~
}
\note{
%% ~~further notes~~
}
%% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
%% ~~objects to See Also as \code{\link{help}}, ~~~
}
\examples{
##---- Should be DIRECTLY executable !! ----
##-- ==> Define data, use random,
##-- or do help(data=index) for the standard data sets.
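##-- A minimal usage sketch (hypothetical layer name; assumes the
##-- 'raster' package is attached and the map already has the target
##-- resolution):
## stck <- stack(raster(matrix(runif(100), 10, 10)))
## names(stck) <- "dem"
## write_LandClim_maps(stck, lcResolution = 25, ex = extent(stck))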
## The function is currently defined as
function (LandClimRasterStack, nodata_value = "-9999", lcResolution = 25,
ex)
{
LandClimRasterStack_list <- lapply(unstack(LandClimRasterStack),
function(x) crop(x, ex))
rs <- stack(LandClimRasterStack_list)
names(rs) <- names(LandClimRasterStack)
writeRaster(rs, "landClimMaps.tif", overwrite = T)
rm(rs)
foo <- function(x) {
sink(paste(names(x), ".asc", sep = ""))
writeLines(c(paste("ncols", ncol(x)), paste("nrows",
nrow(x)), paste("xllcorner", xmin(x)), paste("yllcorner",
ymin(x)), paste("cellsize", lcResolution), paste("nodata_value ",
nodata_value)))
sink()
write.table(matrix(round(x[]), nrow = nrow(x), ncol = ncol(x),
byrow = T), file = paste(names(x), ".asc", sep = ""),
append = T, quote = FALSE, row.names = FALSE, col.names = FALSE)
}
lapply(LandClimRasterStack_list, function(x) foo(x))
}
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{ ~kwd1 }
\keyword{ ~kwd2 }% __ONLY ONE__ keyword per line
|
57fc1014ce6f50d8c5d89b402051f3f51d8006b8 | 06f709c0ab47580ce21c56bcef7c229b353fff6c | /homework/dieroller/tests/tests.R | 692128f80bb75b0023065454f432c6b9cb9c1864 | [] | no_license | irlandab/stat133 | e5058f9980feca305614db75b8352182f67d157e | c794c4147f177bef305c45dd2db24f5b985778e2 | refs/heads/master | 2021-04-06T20:48:54.352399 | 2018-05-16T20:39:26 | 2018-05-16T20:39:26 | 125,313,385 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 81 | r | tests.R | # load the source code of the functions to be tested
source("../R/functions.R")
|
07ba7bffe3b9548c087556f6aec29472bba56e5f | 693943ad8edc799b5be90daefd21c833d550ee61 | /L10_principal_component_analysis_06.12.2016.R | 48f5538671da290f4251d3d1a8e1d23c04a5dc97 | [
"MIT"
] | permissive | LucieCBurgess/Data_analytics_R_exercises | c63ee29347461d392c8f4a7932b9a7817514a08a | 91de76ca86f77214d5a1acd6e93ee6089fb90b92 | refs/heads/master | 2021-06-10T05:07:40.641411 | 2017-01-06T11:57:03 | 2017-01-06T11:57:03 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,165 | r | L10_principal_component_analysis_06.12.2016.R | # Lecture 10 Principal Component Analysis
x=c(2.5,0.5,2.2,1.9,3.1,2.3,2,1,1.5,1.1)
mean(x)
xbar=x-mean(x)
y=c(2.4,0.7,2.9,2.2,3.0,2.7,1.6,1.1,1.6,0.9)
mean(y)
ybar=y-mean(y)
cov(xbar,xbar)
cov(xbar,ybar)
cov(ybar,xbar)
cov(ybar,ybar)
CoMatrix=matrix(c(cov(xbar,xbar),cov(xbar,ybar),cov(ybar,xbar),cov(ybar,ybar)),2,2)
CoMatrix
eigen(CoMatrix)
plot(xbar,ybar)
glm.fit=glm(y~x)
abline(glm.fit)
# This is all very complicated and R can do it for you using the prcomp function - see below.
# Example from the internet calculating the eigenvectors and eigvalues of a matrix:
aMatrix = matrix(c(3,1,1,1,1,1,1,1,4),3,3)
aMatrix
eigen(aMatrix)
# Example: marks
Marks=data.frame(Maths=c(80,90,95),Science=c(85,85,80), English=c(60,70,40), Music=c(55,45,50))
Marks
pr.marks=prcomp(Marks,scale=FALSE)
pr.marks$rotation
biplot(pr.marks,scale=FALSE)
apply(Marks,2,var) # apply variance by column of the dataset
apply(Marks,1,var)
pr.marks.s=prcomp(Marks,scale=TRUE)
biplot(pr.marks.s,scale=0)
pr.marks.var=pr.marks.s$sdev^2
pve=pr.marks.var/sum(pr.marks.var)
pve
plot(pve,xlab="Principal component",ylab="Proportion of variance explained", type="b",ylim=c(0,1),main="scree plot")
plot(cumsum(pve),xlab="Principal component",ylab="Proportion of variance explained", type="b",ylim=c(0,1),main="not a scree plot")
#LAB example: USArrest
library(ISLR)
?USArrests
pr.usarrest=prcomp(USArrests,scale=FALSE)
pr.usarrest$rotation
biplot(pr.usarrest,scale=FALSE) # unscaled biplot of the first two principal components
pr.usarrest.s=prcomp(USArrests,scale=TRUE)
biplot(pr.usarrest.s,scale=0) # scaled biplot of the first two principal components
pr.usarrest.var=pr.usarrest.s$sdev^2 # calculated the scaled standard deviation squared = the variance
# Proportion of variance explained by each component
# Is the variance of each principal component/ total variance
pve=pr.usarrest.var/sum(pr.usarrest.var)
pve # [1] 0.62006039 0.24744129 0.08914080 0.04335752
plot(pve,xlab="Principal component",ylab="Proportion of variance explained", type="b",ylim=c(0,1))
plot(cumsum(pve),xlab="Principal component",ylab="Proportion of variance explained", type="b",ylim=c(0,1))
|
e5987a7378b8f85a4e79a7e26577c92cf8ac7a15 | b6b550212e660111fec2af4280dc34bfc2ba6f00 | /shiny/app.R | 211861bd04c782c90d8b941d9fc10acdc206230f | [] | no_license | TZstatsADS/Spr2017-proj5-grp10 | 2de16590909effd8a7c90f4e37dd58d8fcafacd0 | be661d328e3c1e47a3ef2eee315599dce68ad902 | refs/heads/master | 2021-01-19T20:55:15.751385 | 2017-04-28T17:20:22 | 2017-04-28T17:20:22 | 88,578,838 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,550 | r | app.R | library(shiny)
library(shinydashboard)
library(omdbapi)
library(dplyr)
library(rvest)
library(RCurl)
library(jpeg)
library(shinyBS)
library(pbapply)
library(readr)
library(omdbapi)
library(lda)
library(recharts)
source("../lib/information.R")
source("../lib/rec.R")
load("../output/dat2.RData")
dat <- dat2
load("../output/wang_workspace.RData")
load("../output/data_cloud.RData")
load("../data/edges.RData")
load("../data/nodes.RData")
load("../data/nodes.RData")
load("../data/MovieRec.Rdata")
load("../data/label.Rdata")
ui <- dashboardPage(
dashboardHeader(title = "Movie Recommend"),
dashboardSidebar(
sidebarMenu(
menuItem("About This App",tabName = "info", icon=icon("id-card")),
menuItem("Top Movie",tabName = "top",icon = icon("thumbs-up")),
menuItem("Recommend by Type", icon = icon("th"), tabName = "words"),
menuItem("Recommend by Movie Name", tabName = "MovieSearch", icon = icon("film"))
))
,
dashboardBody(
tabItems(
tabItem(tabName = "info",h2(info)),
tabItem(tabName = "top",
fluidRow(
box(width = NULL,
title = "Top 8 Movies",
status = "danger",
collapsible = TRUE,
solidHeader = FALSE,
background = "black",
uiOutput("tiles")),
box(title = "Top 22",
status = "primary",
solidHeader = TRUE,
collapsible = TRUE,
uiOutput("top50")),
box(title = "Popular but bad movies",
status = "success",
solidHeader = T,
uiOutput("pop_lrated")),
box(title = "Not popular but great movies",
status = "warning",
solidHeader = T,
uiOutput("nopop_hrated"))
)##end of fluidRow
)##end of tabItem
,
tabItem(tabName = "words",fluidRow(
eWordcloud(data.cloud[[1]],
namevar = ~movie_name,
datavar = ~Freq,
size = c(600, 600),
title = "frequently rated movies - Comedy",
rotationRange = c(0, 0)),
eWordcloud(data.cloud[[2]], namevar = ~movie_name, datavar = ~Freq,size = c(600, 600),title = "frequently rated movies - Romance",rotationRange = c(0, 0)),
eWordcloud(data.cloud[[3]], namevar = ~movie_name, datavar = ~Freq,size = c(600, 600),title = "frequently rated movies - Thriller & Adventure",rotationRange = c(0, 0)),
eWordcloud(data.cloud[[4]], namevar = ~movie_name, datavar = ~Freq,size = c(600, 600),title = "frequently rated movies - Action & Crime",rotationRange = c(0, 0))
))
,
tabItem(tabName = "MovieSearch",
fluidRow(
column(
10, solidHeader = TRUE,
textInput("text", "Give us a movie!", value ="Star Wars ")
)
,
#graphOutput("MovieNetwork"),
column(10,
tableOutput("table")),
column(10,graphOutput("MovieNetwork"))
# , box(title="Network",graphOutput("MovieNetwork"))
)
)
#tabItem(tabName = "MovieSearch",graphOutput("MovieNetwork"))
)
)
)
server <- function(input, output) {
output$tiles <- renderUI({
fluidRow(
lapply(top50_df$poster[1:8], function(i) {
a(box(width=3,
title = img(src = i, height = 350, width = 250),
footer = top50_df$top50[top50_df$poster == i]
), href= top50_df$imbdLink[top50_df$poster == i] , target="_blank")
}) ##end of lappy
)
})##end of renderUI
output$top50 <- renderUI({
lapply(top50_df$top50[1:22], function(j){
br(a(top50_df$top50[j], href = top50_df$imbdLink[top50_df$top50 == j], target = "_blank"))
})
})##end of renderUI
output$pop_lrated <- renderUI({
lapply(pop_lrated$x[1:10], function(m){
br(a(pop_lrated$x[m], href = pop_lrated$link[pop_lrated$x == m], target = "_blank"))
})
})
output$nopop_hrated <- renderUI({
lapply(nopop_hrated$x[1:10], function(n){
br(a(nopop_hrated$x[n], href = nopop_hrated$link[nopop_hrated$x == n], target = "_blank"))
})
})
moviename <- reactive({
input$text
})
# output$movie <- renderText({
# as.vector(unlist(recon(moviename())))
# })
output$table<-renderTable({
as.vector(unlist(recon(moviename())))
})
output$MovieNetwork<-renderGraph({
Movie.index<-grep(substr(moviename(),start = 1,stop=(nchar(moviename())-1)),label)
Rele.Movie.index<-MovieRec[Movie.index,]
Movie.Net<-c(Movie.index,Rele.Movie.index)
Rele.Movie.index<-MovieRec[Movie.Net,]
Rele.Movie.index<-unique(as.vector(Rele.Movie.index))
suppressPackageStartupMessages(library(threejs, quietly=TRUE))
nodes.rec<-nodes[Rele.Movie.index,]
num<-paste("^",Movie.Net,"$",sep="")
ind<-sapply(num,grep,edges$from)
ind<-unique(as.vector(ind))
edges.rec<-edges[ind,]
graphjs(edges.rec,nodes.rec)
})
}
shinyApp(ui, server)
|
4d1eecfe67593f35e2b442517b3efbc61e8ab1dd | 3f415c71532e3f1c68027c052d399c56ffbd8d92 | /tests/testthat/test_discrete.R | ca3c4560c292b4120dabb925fc4283f8ee239652 | [] | no_license | dereksonderegger/truncdist2 | 6b3a643773e3a33360762ca8036c917396234db4 | 6531407ac277ac4ef111ba936dbcd05911aa140e | refs/heads/master | 2020-04-21T22:40:26.672996 | 2019-04-17T18:47:41 | 2019-04-17T18:47:41 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,833 | r | test_discrete.R | # library(truncdist2)
# library(testthat)
# Check to see if I get the same values in Poisson distribution
test_that('Unrestricted Poisson values correct', {
expect_equal(dpois( 2, lambda=3 ), dtrunc(2, 'pois', lambda=3) )
expect_equal(ppois( 2, lambda=3 ), ptrunc(2, 'pois', lambda=3) )
expect_equal(qpois( .8, lambda=3 ), qtrunc(.8, 'pois', lambda=3) )
})
# Check to see if I get the same values in Exponential distribution
test_that('Unrestricted Exponential values correct', {
expect_equal(dexp( 2, rate=3 ), dtrunc(2, 'exp', rate=3) )
expect_equal(pexp( 2, rate=3 ), ptrunc(2, 'exp', rate=3) )
expect_equal(qexp( .8, rate=3 ), qtrunc(.8, 'exp', rate=3) )
})
# Do I get the correct probabilities in truncated poisson case?
# Here we will consider removing 0 and 1
test_that('Restricted Poisson Correct Values',{
expect_equal(dpois(3, lambda=2)/(1-ppois(1,lambda=2)), dtrunc(3, 'pois', a=1, lambda=2))
expect_equal(dpois(3, lambda=5)/(1-ppois(1,lambda=5)), dtrunc(3, 'pois', a=1, lambda=5))
expect_equal(dpois(5, lambda=5)/(1-ppois(4,lambda=5)), dtrunc(5, 'pois', a=4, lambda=5))
})
# Do we get the correct probabilities in the Restricted Normal case?
test_that('Restricted Normal Case', {
expect_equal( 2*dnorm(1), dtrunc(1, 'norm', a=0) )
expect_equal( 2*dnorm(1), dtrunc(1, 'norm', a=0, mean=0, sd=1) )
expect_equal( 2*dnorm(1), dtrunc(1, 'norm', a=0, params=list(mean=0, sd=1)) )
expect_equal( 2*dnorm(-1), dtrunc(-1, 'norm', b=0) )
expect_equal( 2*dnorm(-1), dtrunc(-1, 'norm', b=0, mean=0, sd=1) )
expect_equal( 2*dnorm(-1), dtrunc(-1, 'norm', b=0, params=list(mean=0, sd=1)) )
expect_equal( 0, dtrunc(0, 'norm', a=0) )
expect_equal( 0, dtrunc(0, 'norm', b=0) )
})
# Can I send in vectors?
test_that('Sending multiple values to evaluate', {
expect_equal( dnorm( -1:1 ), dtrunc( -1:1, 'norm' ) )
expect_equal( dnorm( -1:1 ), dtrunc( -1:1, 'norm', mean=0, sd=1) )
expect_equal( dnorm( -1:1 ), dtrunc( -1:1, 'norm', params=list(mean=0, sd=1)) )
expect_equal( pnorm( -1:1 ), ptrunc( -1:1, 'norm' ) )
expect_equal( pnorm( -1:1 ), ptrunc( -1:1, 'norm', mean=0, sd=1) )
expect_equal( pnorm( -1:1 ), ptrunc( -1:1, 'norm', params=list(mean=0, sd=1)) )
expect_equal( qnorm( 1:3/4 ), qtrunc( 1:3/4, 'norm' ) )
expect_equal( qnorm( 1:3/4 ), qtrunc( 1:3/4, 'norm', mean=0, sd=1 ) )
expect_equal( qnorm( 1:3/4 ), qtrunc( 1:3/4, 'norm', params=list(mean=0, sd=1)) )
})
# Does the Hypergeometric distribution work?
test_that( 'Hypergeometric distribution', {
expect_equal( dhyper(3, m=5, n=10, k=4), dtrunc(3, 'hyper', m=5, n=10, k=4))
expect_equal( dhyper(3, m=5, n=10, k=4), dtrunc(3, 'hyper', params=list(m=5, n=10, k=4)) )
expect_equal( dhyper(c(3,4), m=5, n=10, k=4), dtrunc(c(3,4), 'hyper', params=list(m=5, n=10, k=4)) )
expect_equal( phyper(3, m=5, n=10, k=4), ptrunc(3, 'hyper', m=5, n=10, k=4))
expect_equal( phyper(3, m=5, n=10, k=4), ptrunc(3, 'hyper', params=list(m=5, n=10, k=4)) )
expect_equal( phyper(c(3,4), m=5, n=10, k=4), ptrunc(c(3,4), 'hyper', params=list(m=5, n=10, k=4)) )
expect_equal( qhyper(.8, m=5, n=10, k=4), qtrunc(.8, 'hyper', m=5, n=10, k=4))
expect_equal( qhyper(.8, m=5, n=10, k=4), qtrunc(.8, 'hyper', params=list(m=5, n=10, k=4)) )
expect_equal( qhyper(c(.7,.8), m=5, n=10, k=4), qtrunc(c(.7,.8), 'hyper', params=list(m=5, n=10, k=4)) )
})
# Testing the random number generation
test_that( 'Testing random number generation - right length:', {
expect_equal( length(rtrunc(1000, 'norm')), 1000 )
expect_equal( length(rtrunc(1000, 'norm', mean=5, sd=2)), 1000)
expect_equal( length(rtrunc(1000, 'norm', params=list(mean=5, sd=2))), 1000)
expect_equal( length(rtrunc(1000, 'norm', a=0, params=list(mean=5, sd=2))), 1000)
expect_equal( length(rtrunc(1000, 'hyper', params=list(m=5, n=10, k=4))), 1000 )
expect_equal( length(rtrunc(1000, 'hyper', a=0, params=list(m=5, n=10, k=4))), 1000 )
})
# Testing the random number generation
p <- seq(.2, .8, by=.2)
test_that( 'Testing random number generation - right distribution:', {
expect_lt( sum( abs( quantile(rtrunc(100000, 'norm'), p) - qtrunc(p, 'norm') ) )/4, 0.1)
expect_lt( sum( abs( quantile(rtrunc(100000, 'norm', a=0), p) - qtrunc(p, 'norm', a=0) ) )/4, 0.1)
expect_lt( sum( abs( quantile(rtrunc(100000, 'hyper', params=list(m=5,n=10,k=4)), p) - qtrunc(p, 'hyper', params=list(m=5,n=10,k=4))) )/4, 0.1)
})
# Testing can we send in vectors of parameters
test_that( 'Testing sending vectors of parameters', {
expect_equal(dnorm(c(-1,0,1), mean=c(-.5, 0, .5), sd=c(4,2,1) ), dtrunc(x=c(-1,0,1), spec='norm', mean=c(-.5, 0, .5), sd=c(4,2,1)))
expect_equal(dnorm(c(-1,0,1), mean=c(-.5, 0, .5), sd=1), dtrunc(c(-1,0,1), 'norm', params=list(mean=c(-.5,0,.5), sd=1)))
expect_equal(pnorm(c(-1,0,1), mean=c(-.5, 0, .5), sd=c(4,2,1) ), ptrunc(c(-1,0,1), 'norm', mean=c(-.5, 0, .5), sd=c(4,2,1)))
expect_equal(pnorm(c(-1,0,1), mean=c(-.5, 0, .5), sd=1), ptrunc(c(-1,0,1), 'norm', params=list(mean=c(-.5,0,.5), sd=1)))
expect_equal(qnorm(c(.1, .5, .8), mean=c(-.5, 0, .5), sd=c(4,2,1) ), qtrunc(c(.1, .5, .8), 'norm', mean=c(-.5, 0, .5), sd=c(4,2,1)))
expect_equal(qnorm(c(.1, .5, .8), mean=c(-.5, 0, .5), sd=1), qtrunc(c(.1, .5, .8), 'norm', params=list(mean=c(-.5,0,.5), sd=1)))
expect_equal(rtrunc(5, 'norm', params=list(mean=1:5, sd=0)), 1:5)
expect_equal(rtrunc(3, 'hyper', params=list(m=c(2,2,2), n=c(0,0,0), k=2)), c(2,2,2) )
expect_equal(dnorm( 3, mean=-1:1), dtrunc(3, 'norm', params=list(mean=-1:1)))
expect_equal(pnorm( 3, mean=-1:1), ptrunc(3, 'norm', params=list(mean=-1:1)))
expect_equal(qnorm( .2, mean=-1:1), qtrunc(.2, 'norm', params=list(mean=-1:1)))
})
# Testing the Expectation and Variance functions
test_that('Testing expectation and variance', {
extrunc(spec='pois', a=0, params=c(lambda=2))
})
|
5424c036add853646c2c4abc10f6376a2ee2ae90 | b2efd59b4a9e1a2d0bfecbed07bde737527377b2 | /testCode.R | f8a62b1d95bc0928e33215290be7e46095084488 | [] | no_license | dadaeuneun/- | e56ed9d77c883c66dbf901bfd9e7d85857f33d17 | c9e4dff35d8804e050ab77a5c998f6ef53bb24e2 | refs/heads/master | 2016-08-11T14:05:53.930929 | 2015-10-06T13:08:26 | 2015-10-06T13:08:26 | 43,750,709 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 52 | r | testCode.R | ## this is the test script
a<- rnorm(100)
plot(a)
|
c184ed539b37311b7913f6cb521ce8314ee875e8 | 1d4d05654b954cbf53d8fd886b38794acda9e020 | /R/fit_bias.R | 6f08990decab5c71e324d22ffe90a8705c9d302f | [] | no_license | mikelove/alpine | 2ca97b67d0650d55448b7aefcb2a77c88255c97a | 12c94f2e70981a520b8f98a2c7aeb35e041d333f | refs/heads/master | 2020-05-30T05:10:53.231643 | 2018-04-20T10:37:41 | 2018-04-20T10:37:41 | 14,243,756 | 29 | 6 | null | null | null | null | UTF-8 | R | false | false | 12,467 | r | fit_bias.R | #' Fit bias models over single-isoform genes
#'
#' This function estimates parameters for one or more bias models
#' for a single sample over a set of single-isoform
#' genes. ~100 medium to highly expressed genes should be sufficient to
#' estimate the parameters robustly.
#'
#' @param genes a GRangesList with the exons of different
#' single-isoform genes
#' @param bam.file a character string pointing to an indexed BAM file
#' @param fragtypes the output of \link{buildFragtypes}. must contain
#' the potential fragment types for the genes named in \code{genes}
#' @param genome a BSgenome object
#' @param models a list of lists: the outer list describes multiple models.
#' each element of the inner list has two elements: \code{formula} and \code{offset}.
#' \code{formula} should be a character strings of an R formula
#' describing the bias models, e.g. \code{"count ~ ns(gc) + gene"}.
#' The end of the string is required to be \code{"+ gene"}.
#' \code{offset} should be a character vector
#' listing possible bias offsets to be used (\code{"fraglen"} or \code{"vlmm"}).
#' Either \code{offset} or \code{formula} can be NULL for a model.
#' See vignette for recommendations and details.
#' @param readlength the read length
#' @param minsize the minimum fragment length to model
#' @param maxsize the maximum fragment length to model
#' @param speedglm logical, whether to use speedglm to estimate the coefficients.
#' Default is TRUE.
#' @param gc.knots knots for the GC splines
#' @param gc.bk boundary knots for the GC splines
#' @param relpos.knots knots for the relative position splines
#' @param relpos.bk boundary knots for the relative position splines
#'
#' @return a list with elements: coefs, summary, models, model.params,
#' and optional offsets: fraglen.density, vlmm.fivep,
#' and vlmm.threep.
#' \itemize{
#' \item \strong{coefs} gives the estimated coefficients
#' for the different models that specified formula.
#' \item \strong{summary} gives the tables with coefficients, standard
#' errors and p-values,
#' \item \strong{models} stores the incoming \code{models} list,
#' \item \strong{model.params} stores parameters for the
#' models, such as knot locations
#' \item \strong{fraglen.density} is a
#' estimated density object for the fragment length distribution,
#' \item \strong{vlmm.fivep} and \strong{vlmm.threep}
#' store the observed and expected tabulations for the different
#' orders of the VLMM for read start bias.
#' }
#'
#' @references
#'
#' The complete bias model including fragment sequence bias
#' is described in detail in the Supplemental Note of the
#' following publication:
#'
#' Love, M.I., Hogenesch, J.B., and Irizarry, R.A.,
#' Modeling of RNA-seq fragment sequence bias reduces
#' systematic errors in transcript abundance estimation.
#' Nature Biotechnology (2016) doi: 10.1038/nbt.3682
#'
#' The read start variable length Markov model (VLMM) for
#' addressing bias introduced by random hexamer priming
#' was introduced in the following publication (the sequence
#' bias model used in Cufflinks):
#'
#' Roberts, A., Trapnell, C., Donaghey, J., Rinn, J.L., and Pachter, L.,
#' Improving RNA-Seq expression estimates by correcting for fragment bias.
#' Genome Biology (2011) doi: 10.1186/gb-2011-12-3-r22
#'
#' @examples
#'
#' # see vignette for a more realistic example
#'
#' # these next lines just write out a BAM file from R
#' # typically you would already have a BAM file
#' library(alpineData)
#' library(GenomicAlignments)
#' library(rtracklayer)
#' gap <- ERR188088()
#' dir <- system.file(package="alpineData", "extdata")
#' bam.file <- c("ERR188088" = file.path(dir,"ERR188088.bam"))
#' export(gap, con=bam.file)
#'
#' library(GenomicRanges)
#' library(BSgenome.Hsapiens.NCBI.GRCh38)
#' data(preprocessedData)
#'
#' readlength <- 75
#' minsize <- 125 # see vignette how to choose
#' maxsize <- 175 # see vignette how to choose
#'
#' # here a very small subset, should be ~100 genes
#' gene.names <- names(ebt.fit)[6:8]
#' names(gene.names) <- gene.names
#' fragtypes <- lapply(gene.names, function(gene.name) {
#' buildFragtypes(ebt.fit[[gene.name]],
#' Hsapiens, readlength,
#' minsize, maxsize)
#' })
#' models <- list(
#' "GC" = list(formula = "count ~ ns(gc,knots=gc.knots, Boundary.knots=gc.bk) + gene",
#' offset=c("fraglen","vlmm"))
#' )
#'
#' fitpar <- fitBiasModels(genes=ebt.fit[gene.names],
#' bam.file=bam.file,
#' fragtypes=fragtypes,
#' genome=Hsapiens,
#' models=models,
#' readlength=readlength,
#' minsize=minsize,
#' maxsize=maxsize)
#'
#' @export
fitBiasModels <- function(genes, bam.file, fragtypes, genome,
models, readlength, minsize, maxsize,
speedglm=TRUE,
gc.knots=seq(from=.4, to=.6, length=3),
gc.bk=c(0,1),
relpos.knots=seq(from=.25, to=.75, length=3),
relpos.bk=c(0,1)) {
stopifnot(file.exists(bam.file))
stopifnot(file.exists(paste0(as.character(bam.file),".bai")))
stopifnot(is(genes, "GRangesList"))
stopifnot(all(!is.na(sapply(models, function(x) x$formula))))
stopifnot(is.numeric(readlength) & length(readlength) == 1)
stopifnot(all(names(genes) %in% names(fragtypes)))
if (any(sapply(models, function(m) "vlmm" %in% m$offset))) {
stopifnot("fivep" %in% colnames(fragtypes[[1]]))
}
for (m in models) {
if (!is.null(m$formula)) {
stopifnot(is.character(m$formula))
if (!grepl("+ gene$",m$formula)) {
stop("'+ gene' needs to be at the end of the formula string")
}
}
}
exon.dna <- getSeq(genome, genes)
gene.seqs <- as(lapply(exon.dna, unlist), "DNAStringSet")
# FPBP needed to downsample to a target fragment per kilobase
fpbp <- getFPBP(genes, bam.file)
# TODO check these downsampling parameters now that subset
# routine is not related to number of positive counts
# want ~1000 rows per gene, so ~300 reads per gene
# so ~300/1500 = 0.2 fragments per basepair
target.fpbp <- 0.4
fitpar.sub <- list()
fitpar.sub[["coefs"]] <- list()
fitpar.sub[["summary"]] <- list()
# create a list over genes, populated with read info from this 'bam.file'
# so we create a new object, and preserve the original 'fragtypes' object
fragtypes.sub.list <- list()
for (i in seq_along(genes)) {
gene.name <- names(genes)[i]
gene <- genes[[gene.name]]
l <- sum(width(gene))
# add counts per sample and subset
generange <- range(gene)
strand(generange) <- "*" # not necessary
if (!as.character(seqnames(generange)) %in% seqlevels(BamFile(bam.file))) next
# this necessary to avoid hanging on highly duplicated regions
## roughNumFrags <- countBam(bam.file, param=ScanBamParam(which=generange))$records/2
## if (roughNumFrags > 10000) next
suppressWarnings({
ga <- readGAlignAlpine(bam.file, generange)
})
if (length(ga) < 20) next
ga <- keepSeqlevels(ga, as.character(seqnames(gene)[1]))
# downsample to a target FPBP
nfrags <- length(ga)
this.fpbp <- nfrags / l
if (this.fpbp > target.fpbp) {
ga <- ga[sample(nfrags, round(nfrags * target.fpbp / this.fpbp), FALSE)]
}
fco <- findCompatibleOverlaps(ga, GRangesList(gene))
# message("-- ",round(length(fco)/length(ga),2)," compatible overlaps")
# as.numeric(table(as.character(strand(ga))[queryHits(fco)])) # strand balance
reads <- gaToReadsOnTx(ga, GRangesList(gene), fco)
# fraglist.temp is a list of length 1
# ...(matchReadsToFraglist also works for multiple transcripts)
# it will only last for a few lines...
fraglist.temp <- matchReadsToFraglist(reads, fragtypes[gene.name])
# remove first and last bp for fitting the bias terms
not.first.or.last.bp <- !(fraglist.temp[[1]]$start == 1 | fraglist.temp[[1]]$end == l)
fraglist.temp[[1]] <- fraglist.temp[[1]][not.first.or.last.bp,]
if (sum(fraglist.temp[[1]]$count) < 20) next
# randomly downsample and up-weight
fragtypes.sub.list[[gene.name]] <- subsetAndWeightFraglist(fraglist.temp,
downsample=200,
minzero=700)
}
if (length(fragtypes.sub.list) == 0) stop("not enough reads to model: ",bam.file)
# collapse the list over genes into a
# single DataFrame with the subsetted and weighted
# potential fragment types from all genes
# message("num genes w/ suf. reads: ",length(fragtypes.sub.list))
if (length(fragtypes.sub.list) < 2) stop("requires at least two genes to fit model")
gene.nrows <- sapply(fragtypes.sub.list, nrow)
# message("mean rows per gene: ", round(mean(gene.nrows)))
# a DataFrame of the subsetted fragtypes
fragtypes.sub <- do.call(rbind, fragtypes.sub.list)
# check the FPBP after downsampling:
## gene.counts <- sapply(fragtypes.sub.list, function(x) sum(x$count))
## gene.lengths <- sum(width(genes))
## round(unname(gene.counts / gene.lengths[names(gene.counts)]), 2)
# save the models and parameters
fitpar.sub[["models"]] <- models
fitpar.sub[["model.params"]] <- list(
readlength=readlength,
minsize=minsize,
maxsize=maxsize,
gc.knots=gc.knots,
gc.bk=gc.bk,
relpos.knots=relpos.knots,
relpos.bk=relpos.bk
)
if (any(sapply(models, function(m) "fraglen" %in% m$offset))) {
## -- fragment bias --
pos.count <- fragtypes.sub$count > 0
fraglens <- rep(fragtypes.sub$fraglen[pos.count], fragtypes.sub$count[pos.count])
fraglen.density <- density(fraglens)
fragtypes.sub$logdfraglen <- log(matchToDensity(fragtypes.sub$fraglen, fraglen.density))
# with(fragtypes.sub, plot(fraglen, exp(logdfraglen), cex=.1))
fitpar.sub[["fraglen.density"]] <- fraglen.density
}
if (any(sapply(models, function(m) "vlmm" %in% m$offset))) {
## -- random hexamer priming bias with VLMM --
pos.count <- fragtypes.sub$count > 0
fivep <- fragtypes.sub$fivep[fragtypes.sub$fivep.test & pos.count]
threep <- fragtypes.sub$threep[fragtypes.sub$threep.test & pos.count]
vlmm.fivep <- fitVLMM(fivep, gene.seqs)
vlmm.threep <- fitVLMM(threep, gene.seqs)
## par(mfrow=c(2,1))
## plotOrder0(vlmm.fivep$order0)
## plotOrder0(vlmm.threep$order0)
# now calculate log(bias) for each fragment based on the VLMM
fragtypes.sub <- addVLMMBias(fragtypes.sub, vlmm.fivep, vlmm.threep)
fitpar.sub[["vlmm.fivep"]] <- vlmm.fivep
fitpar.sub[["vlmm.threep"]] <- vlmm.threep
}
# allow a gene-specific intercept (although mostly handled already with downsampling)
fragtypes.sub$gene <- factor(rep(seq_along(gene.nrows), gene.nrows))
for (modeltype in names(models)) {
if (is.null(models[[modeltype]]$formula)) {
next
}
# message("fitting model type: ",modeltype)
f <- models[[modeltype]]$formula
offset <- numeric(nrow(fragtypes.sub))
if ("fraglen" %in% models[[modeltype]]$offset) {
# message("-- fragment length correction")
offset <- offset + fragtypes.sub$logdfraglen
}
if ("vlmm" %in% models[[modeltype]]$offset) {
# message("-- VLMM fragment start/end correction")
offset <- offset + fragtypes.sub$fivep.bias + fragtypes.sub$threep.bias
}
if (!all(is.finite(offset))) stop("offset needs to be finite")
fragtypes.sub$offset <- offset
if ( speedglm ) {
# mm.small <- sparse.model.matrix(f, data=fragtypes.sub)
mm.small <- model.matrix(formula(f), data=fragtypes.sub)
stopifnot(all(colSums(abs(mm.small)) > 0))
fit <- speedglm.wfit(fragtypes.sub$count, mm.small,
family=poisson(),
weights=fragtypes.sub$wts,
offset=fragtypes.sub$offset)
} else {
fit <- glm(formula(f),
family=poisson,
data=fragtypes.sub,
weights=fragtypes.sub$wts,
offset=fragtypes.sub$offset)
}
fitpar.sub[["coefs"]][[modeltype]] <- fit$coefficients
fitpar.sub[["summary"]][[modeltype]] <- summary(fit)$coefficients
}
fitpar.sub
}
|
3f99e9ef766594868aa8799245f0069745ec885f | 2271a5faab43855132dd5bea92031b5433932bbc | /man/gl.dist.pop.Rd | f4565173c19a9694cf1031003faca7fbea0867cf | [] | no_license | Konoutan/dartR | 26252e126e5f38589e21f726e3777360390a8005 | aa35a02121aff4fb092b3f88e5200d343938656b | refs/heads/master | 2022-06-30T13:41:35.734109 | 2019-12-05T08:58:16 | 2019-12-05T08:58:16 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 2,008 | rd | gl.dist.pop.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/gl.dist.pop.r
\name{gl.dist.pop}
\alias{gl.dist.pop}
\title{Calculate a distance matrix for populations defined in an \{adegenet\} genlight object}
\usage{
gl.dist.pop(x, method = "euclidean", binary = FALSE, diag = FALSE,
upper = FALSE, p = NULL, verbose = 2)
}
\arguments{
\item{x}{-- name of the genlight containing the SNP genotypes [required]}
\item{method}{-- Specify distance measure [method=euclidean]}
\item{binary}{-- Perform presence/absence standardization before analysis using decostand [binary=FALSE]}
\item{diag}{-- Compute and display diagonal [FALSE]}
\item{upper}{-- Return also upper triangle [FALSE]}
\item{p}{-- The power of the Minkowski distance (typically a value ranging from 0.25 to infinity) [0.5]}
\item{verbose}{-- verbosity: 0, silent or fatal errors; 1, begin and end; 2, progress log ; 3, progress and results summary; 5, full report [default 2]}
}
\value{
A matrix of distances between populations (class dist)
}
\description{
This script calculates various distances between populations based on allele frequencies. The distances are
calculated by scripts in the {stats} or {vegan} libraries, with the exception of the pcfixed (percent fixed
differences) and pa (total private alleles) distances.
}
\details{
The distance measure can be one of "manhattan", "euclidean", "pcfixed", "pa", "canberra", "bray",
"kulczynski", "jaccard", "gower", "morisita", "horn", "mountford", "raup",
"binomial", "chao", "cao", "mahalanobis", "maximum", "binary" or "minkowski". Refer to the documentation
for \code{dist} (package stats) or \code{vegdist} (package vegan) for definitions.
Distance pcfixed calculates the pair-wise count of fixed allelic differences between populations. Distance pa
tallies the total number of private alleles possessed by one or the other population.
}
\examples{
gl.dist.pop(testset.gl, method="euclidean", diag=TRUE)
}
\author{
Arthur Georges (Post to \url{https://groups.google.com/d/forum/dartr})
}
|
eb2342f1644a6301e966cd28cbe7061b72689e50 | cba27e4e22d2206bd8e77ed89ddcda9058c40032 | /03_linear_model_exercise.R | 65bda794817674d44fa7068b3806e1281e225166 | [] | no_license | kristineorten/machine_learning_with_R | bd39fd24570cc250e06248a9258d61b0d8e02628 | 2745e349c4d070482174793e7296f0a72b5d1067 | refs/heads/main | 2023-08-14T01:58:14.036331 | 2021-09-15T07:44:39 | 2021-09-15T07:44:39 | 406,658,153 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,665 | r | 03_linear_model_exercise.R | ################### #
# Attach packages ----
################### #
library(tidyverse)
library(tidymodels)
library(readxl)
#################### #
# Exercise 1 - Import data ----
#################### #
# Insert the appropriate function to import the `houses.xlsx` dataset to R
houses_raw <- read_excel("temp/houses.xlsx")
#################### #
# Exercise 2 - Transform data ----
#################### #
# Select variables for sqm, expense, total price, latitude, longitude, and municipality (kommune) from the houses_raw data set
# Also mutate a new column `log_sqm` which is the logarithm of `sqm`
houses <- houses_raw %>%
select(sqm, expense, tot_price,
lat,
lng,
kommune_name) %>%
mutate(kommune_name = fct_lump_n(kommune_name, 220),
log_sqm = log(sqm))
#################### #
# Exercise 3 - Split data ----
#################### #
set.seed(42)
# Split 75 % of the data into the training set, and the rest into the test set
split <- initial_split(houses, prop = 3/4)
train <- training(split)
test <- testing(split)
#################### #
# Exercise 4 - Fit a linear model ----
#################### #
# Set the target variable as the logarithm of total price and fit the linear model
model <- linear_reg() %>%
set_engine("lm") %>%
fit(log(tot_price) ~ ., data = train)
# Run the next line to view summary of fit
summary(model$fit)
#################### #
# Exercise 5 - Use model for predictions ----
#################### #
# Use the fitted model to predict the total price for observations in the test set
# Remember to take the exponential of the .pred to return meaningful prediction values
# Also find the appropriate function to rename columns
model_preds <-
predict(model, new_data = test) %>%
bind_cols(test) %>%
mutate(.pred = exp(.pred)) %>%
rename(estimate = .pred,
truth = tot_price) %>%
mutate(abs_dev = abs(truth - estimate),
abs_dev_perc = abs_dev/truth)
#################### #
# Exercise 6 - Evaluate model ----
#################### #
# Evaluate the mean absolute percentage error of our linear model predictions
mape(data = model_preds,
truth = truth,
estimate = estimate)
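# Optional sketch (not part of the original exercise): yardstick's metric_set()
# bundles several metrics, so MAPE, RMSE and R-squared can be scored in one call
reg_metrics <- metric_set(mape, rmse, rsq)
reg_metrics(data = model_preds,
            truth = truth,
            estimate = estimate)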
##################################### #
# Excerise 7 - Plot results ----
##################################### #
# Run the next lines to study the distribution of the percentage error
model_preds %>%
ggplot(aes(x = abs_dev_perc)) +
geom_histogram(fill = "dodgerblue3", color = "white") +
theme_minimal() +
  labs(x = "Percentage error",
       y = "Number of observations") +
scale_x_continuous(limits = c(0, 1.5), labels = scales::percent)
|
40319c526c4eedfd3f0b55042b207472d94c9eee | 67f1d6770a6dba97777c92969e03ecbfbc637d80 | /fixing svar.R | 96cb7c5776010e721e68bf6d1e4f22d761068955 | [] | no_license | bazoc/Masters-Thesis | 0505efd409ba9803a80fda7c2040f34a78159927 | 5149882ea71940a95f0eec0bf0cae11cd5748587 | refs/heads/master | 2022-12-04T23:51:22.974881 | 2020-08-19T17:13:55 | 2020-08-19T17:13:55 | 266,795,151 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 13,818 | r | fixing svar.R | SVAR
x = fevar
estmethod = c("scoring", "direct")
Amat = amat
Bmat = NULL
start = NULL
max.iter = 100
conv.crit = 1e-07
maxls = 1
lrtest = TRUE
fesvar <- SVAR(fevar, estmethod = "direct", Amat = amat)
SVAR <- function (x, estmethod = c("scoring", "direct"),
Amat = NULL, Bmat = NULL, start = NULL, max.iter = 100, conv.crit = 1e-07,
maxls = 1, lrtest = TRUE, ...)
{
if (!class(x) == "varest") {
stop("\nPlease, provide an object of class 'varest',\n generated by function 'VAR()' as input for 'x'.\n")
}
call <- match.call()
estmethod <- match.arg(estmethod)
if ((is.null(Amat)) && (is.null(Bmat))) {
stop("\nAt least one matrix, either 'Amat' or 'Bmat', must be non-null.\n")
}
if ((is.null(Amat)) && !(is.null(Bmat))) {
Amat <- diag(x$K)
svartype <- "B-model"
}
else if ((is.null(Bmat)) && !(is.null(Amat))) {
Bmat <- diag(x$K)
svartype <- "A-model"
}
else {
svartype <- "AB-model"
diag(Amat) <- 1
freeA <- which(is.na(c(Amat)))
freeB <- which(is.na(c(Bmat)))
if (any(freeA %in% freeB)) {
stop("\nSVAR not identified. Free parameters at the same positions in Amat and Bmat.\n")
}
}
if (!any(is.na(cbind(Amat, Bmat)))) {
stop("\nNo parameters provided for optimisation, i.e.\nneither 'Amat' nor 'Bmat' do contain NA elements.\n")
}
K <- x$K
obs <- x$obs
df <- summary(x$varresult[[1]])$df[2]
sigma <- crossprod(resid(x))/df
params.A <- sum(is.na(Amat))
params.B <- sum(is.na(Bmat))
params <- params.A + params.B
if ((svartype == "B-model") || (svartype == "A-model")) {
if (K^2 - params < K * (K - 1)/2) {
stop("\nModel is not identified,\nchoose different settings for 'Amat' and/or 'Bmat'.\n")
}
}
else if (svartype == "AB-model") {
if (2 * K^2 - params < K^2 + K * (K - 1)/2) {
stop("\nModel is not identified,\nchoose different settings for 'Amat' and/or 'Bmat'.\n")
}
}
if (is.null(start))
start <- rep(0.1, params)
start <- as.vector(start)
if (!(length(start) == params)) {
stop("\nWrong count of starting values provided in 'start'.\nLength of 'start' must be equal to the count of 'na' in 'Amat' and 'Bmat'.\n")
}
if (estmethod == "direct") {
param.Aidx <- which(is.na(Amat), arr.ind = TRUE)
param.Bidx <- which(is.na(Bmat), arr.ind = TRUE)
logLc <- function(coef) {
if (svartype == "B-model") {
Bmat[param.Bidx] <- coef
}
else if (svartype == "A-model") {
Amat[param.Aidx] <- coef
}
else if (svartype == "AB-model") {
if (length(param.Aidx) > 0) {
Amat[param.Aidx] <- coef[c(1:nrow(param.Aidx))]
if (length(param.Bidx) > 0) {
Bmat[param.Bidx] <- coef[-c(1:nrow(param.Aidx))]
}
}
else if (length(param.Aidx) == 0) {
Bmat[param.Bidx] <- coef
}
}
logLc <- -1 * (K * obs/2) * log(2 * pi) + obs/2 *
log(det(Amat)^2) - obs/2 * log(det(Bmat)^2) -
obs/2 * sum(diag(t(Amat) %*% solve(t(Bmat)) %*%
solve(Bmat) %*% Amat %*% sigma))
return(-logLc)
}
opt <- optim(start, logLc, ...)
iter <- opt$counts[1]
Asigma <- matrix(0, nrow = K, ncol = K)
Bsigma <- matrix(0, nrow = K, ncol = K)
if (!(is.null(opt$hessian))) {
Sigma <- sqrt(diag(solve(opt$hessian)))
}
if (svartype == "B-model") {
Bmat[param.Bidx] <- opt$par
if (!(is.null(opt$hessian))) {
Bsigma[param.Bidx] <- Sigma
}
}
else if (svartype == "A-model") {
Amat[param.Aidx] <- opt$par
if (!(is.null(opt$hessian))) {
Asigma[param.Aidx] <- Sigma
}
}
else if (svartype == "AB-model") {
if (length(param.Aidx) > 0) {
Amat[param.Aidx] <- head(opt$par, nrow(param.Aidx))
if (!(is.null(opt$hessian))) {
Asigma[param.Aidx] <- head(Sigma, nrow(param.Aidx))
}
}
else {
Amat <- Amat
}
if (length(param.Bidx) > 0) {
Bmat[param.Bidx] <- tail(opt$par, nrow(param.Bidx))
if (!(is.null(opt$hessian))) {
Bsigma[param.Bidx] <- tail(Sigma, nrow(param.Bidx))
}
}
else {
Bmat <- Bmat
}
}
}
if (estmethod == "scoring") {
gamma <- start
Ksq <- K^2
if (svartype == "A-model") {
rb <- c(diag(K))
ra <- c(Amat)
pos <- which(is.na(ra))
cols <- length(pos)
Ra <- matrix(0, nrow = Ksq, ncol = cols)
for (i in 1:cols) Ra[pos[i], i] <- 1
ra[pos] <- 0
}
if (svartype == "B-model") {
ra <- c(diag(K))
rb <- c(Bmat)
pos <- which(is.na(rb))
cols <- length(pos)
Rb <- matrix(0, nrow = Ksq, ncol = cols)
for (i in 1:cols) Rb[pos[i], i] <- 1
rb[pos] <- 0
}
if (svartype == "AB-model") {
ra <- c(Amat)
pos <- which(is.na(ra))
cols <- length(pos)
Ra <- matrix(0, nrow = Ksq, ncol = cols)
for (i in 1:cols) Ra[pos[i], i] <- 1
ra[pos] <- 0
rb <- c(Bmat)
pos <- which(is.na(rb))
cols <- length(pos)
Rb <- matrix(0, nrow = Ksq, ncol = cols)
for (i in 1:cols) Rb[pos[i], i] <- 1
rb[pos] <- 0
}
R <- matrix(0, nrow = 2 * Ksq, ncol = params)
if (identical(as.integer(params.A), as.integer(0))) {
R[(Ksq + 1):(2 * Ksq), 1:params] <- Rb
}
else if (identical(as.integer(params.B), as.integer(0))) {
R[1:Ksq, 1:params] <- Ra
}
else if ((!(is.null(params.A)) && (!(is.null(params.B))))) {
R[1:Ksq, 1:params.A] <- Ra
R[(Ksq + 1):(2 * Ksq), (params.A + 1):params] <- Rb
}
r <- c(ra, rb)
Kkk <- diag(Ksq)[, c(sapply(1:K, function(i) seq(i, Ksq,
K)))]
IK2 <- diag(Ksq)
IK <- diag(K)
iters <- 0
cvcrit <- conv.crit + 1
while (cvcrit > conv.crit) {
z <- gamma
vecab <- R %*% gamma + r
Amat <- matrix(vecab[1:Ksq], nrow = K, ncol = K)
Bmat <- matrix(vecab[(Ksq + 1):(2 * Ksq)], nrow = K,
ncol = K)
Binv <- solve(Bmat)
Btinv <- solve(t(Bmat))
BinvA <- Binv %*% Amat
infvecab.mat1 <- rbind(kronecker(solve(BinvA), Btinv),
-1 * kronecker(IK, Btinv))
infvecab.mat2 <- IK2 + Kkk
infvecab.mat3 <- cbind(kronecker(t(solve(BinvA)),
Binv), -1 * kronecker(IK, Binv))
infvecab <- obs * (infvecab.mat1 %*% infvecab.mat2 %*%
infvecab.mat3)
infgamma <- t(R) %*% infvecab %*% R
infgammainv <- solve(infgamma)
scorevecBinvA <- obs * c(solve(t(BinvA))) - obs *
(kronecker(sigma, IK) %*% c(BinvA))
scorevecAB.mat <- rbind(kronecker(IK, Btinv), -1 *
kronecker(BinvA, Btinv))
scorevecAB <- scorevecAB.mat %*% scorevecBinvA
scoregamma <- t(R) %*% scorevecAB
direction <- infgammainv %*% scoregamma
length <- max(abs(direction))
ifelse(length > maxls, lambda <- maxls/length, lambda <- 1)
gamma <- gamma + lambda * direction
iters <- iters + 1
z <- z - gamma
cvcrit <- max(abs(z))
if (iters >= max.iter) {
warning(paste("Convergence not achieved after",
iters, "iterations. Convergence value:",
cvcrit, "."))
break
}
}
iter <- iters - 1
abSigma <- sqrt(diag((R %*% solve(infgamma) %*% t(R))))
Asigma <- matrix(abSigma[1:Ksq], nrow = K, ncol = K)
Bsigma <- matrix(abSigma[(Ksq + 1):(2 * Ksq)], nrow = K,
ncol = K)
opt <- NULL
}
colnames(Amat) <- colnames(x$y)
rownames(Amat) <- colnames(Amat)
colnames(Bmat) <- colnames(Amat)
rownames(Bmat) <- colnames(Amat)
colnames(Asigma) <- colnames(Amat)
rownames(Asigma) <- colnames(Amat)
colnames(Bsigma) <- colnames(Amat)
rownames(Bsigma) <- colnames(Amat)
if (svartype == "AB-model") {
if (any(diag(Amat) < 0)) {
ind <- which(diag(Amat) < 0)
Amat[, ind] <- -1 * Amat[, ind]
}
if (any(diag(Bmat) < 0)) {
ind <- which(diag(Bmat) < 0)
Bmat[, ind] <- -1 * Bmat[, ind]
}
}
if (svartype == "B-model") {
if (any(diag(solve(Amat) %*% Bmat) < 0)) {
ind <- which(diag(solve(Amat) %*% Bmat) < 0)
Bmat[, ind] <- -1 * Bmat[, ind]
}
}
if (svartype == "A-model") {
if (any(diag(solve(Amat) %*% Bmat) < 0)) {
ind <- which(diag(solve(Amat) %*% Bmat) < 0)
Amat[, ind] <- -1 * Amat[, ind]
}
}
Sigma.U <- solve(Amat) %*% Bmat %*% t(Bmat) %*% t(solve(Amat))
LRover <- NULL
if (lrtest) {
degrees <- 2 * K^2 - params - 2 * K^2 + 0.5 * K * (K +
1)
if (identical(degrees, 0)) {
warning(paste("The", svartype, "is just identified. No test possible."))
}
else {
STATISTIC <- obs * (log(det(Sigma.U)) - log(det(sigma)))
names(STATISTIC) <- "Chi^2"
PARAMETER <- 2 * K^2 - params - 2 * K^2 + 0.5 * K *
(K + 1)
names(PARAMETER) <- "df"
PVAL <- 1 - pchisq(STATISTIC, df = PARAMETER)
METHOD <- "LR overidentification"
LRover <- list(statistic = STATISTIC, parameter = PARAMETER,
p.value = PVAL, method = METHOD, data.name = x$call$y)
class(LRover) <- "htest"
}
}
result <- list(A = Amat, Ase = Asigma, B = Bmat, Bse = Bsigma,
LRIM = NULL, Sigma.U = Sigma.U * 100, LR = LRover, opt = opt,
start = start, type = svartype, var = x, iter = iter,
call = call)
class(result) <- "svarest"
return(result)
}
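## Hedged usage sketch, adapted from the example in the vars package
## documentation (?SVAR); it exercises the function whose body is pasted above.
## Kept commented so this scratch file still sources without the Canada data.
# library(vars)
# data(Canada)
# var.2c <- VAR(Canada, p = 2, type = "const")
# amat <- diag(4)
# diag(amat) <- NA
# amat[2, 1] <- NA
# amat[4, 1] <- NA
# SVAR(var.2c, estmethod = "scoring", Amat = amat, Bmat = NULL,
#      max.iter = 100, maxls = 1000, conv.crit = 1e-8)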
# Summary: pasted bodies of vars:::summary.varest (returns class "varsum") and stats:::summary.lm, for inspection
{
ynames <- colnames(object$y)
obs <- nrow(object$datamat)
if (is.null(equations)) {
names <- ynames
}
else {
names <- as.character(equations)
if (!(all(names %in% ynames))) {
warning("\nInvalid variable name(s) supplied, using first variable.\n")
names <- ynames[1]
}
}
eqest <- lapply(object$varresult[names], summary)
resids <- resid(object)
covres <- cov(resids) * (obs - 1)/(obs - (ncol(object$datamat) -
object$K))
corres <- cor(resids)
logLik <- as.numeric(logLik(object))
roots <- roots(object)
result <- list(names = names, varresult = eqest, covres = covres,
corres = corres, logLik = logLik, obs = obs, roots = roots,
type = object$type, call = object$call)
class(result) <- "varsum"
return(result)
}
{
z <- object
p <- z$rank
rdf <- z$df.residual
if (p == 0) {
r <- z$residuals
n <- length(r)
w <- z$weights
if (is.null(w)) {
rss <- sum(r^2)
}
else {
rss <- sum(w * r^2)
r <- sqrt(w) * r
}
resvar <- rss/rdf
ans <- z[c("call", "terms", if (!is.null(z$weights)) "weights")]
class(ans) <- "summary.lm"
ans$aliased <- is.na(coef(object))
ans$residuals <- r
ans$df <- c(0L, n, length(ans$aliased))
ans$coefficients <- matrix(NA_real_, 0L, 4L, dimnames = list(NULL,
c("Estimate", "Std. Error", "t value",
"Pr(>|t|)")))
ans$sigma <- sqrt(resvar)
ans$r.squared <- ans$adj.r.squared <- 0
ans$cov.unscaled <- matrix(NA_real_, 0L, 0L)
if (correlation)
ans$correlation <- ans$cov.unscaled
return(ans)
}
if (is.null(z$terms))
stop("invalid 'lm' object: no 'terms' component")
if (!inherits(object, "lm"))
warning("calling summary.lm(<fake-lm-object>) ...")
Qr <- qr.lm(object)
n <- NROW(Qr$qr)
if (is.na(z$df.residual) || n - p != z$df.residual)
warning("residual degrees of freedom in object suggest this is not an \"lm\" fit")
r <- z$residuals
f <- z$fitted.values
w <- z$weights
if (is.null(w)) {
mss <- if (attr(z$terms, "intercept"))
sum((f - mean(f))^2)
else sum(f^2)
rss <- sum(r^2)
}
else {
mss <- if (attr(z$terms, "intercept")) {
m <- sum(w * f/sum(w))
sum(w * (f - m)^2)
}
else sum(w * f^2)
rss <- sum(w * r^2)
r <- sqrt(w) * r
}
resvar <- rss/rdf
if (is.finite(resvar) && resvar < (mean(f)^2 + var(c(f))) *
1e-30)
warning("essentially perfect fit: summary may be unreliable")
p1 <- 1L:p
R <- chol2inv(Qr$qr[p1, p1, drop = FALSE])
se <- sqrt(diag(R) * resvar)
est <- z$coefficients[Qr$pivot[p1]]
tval <- est/se
ans <- z[c("call", "terms", if (!is.null(z$weights)) "weights")]
ans$residuals <- r
ans$coefficients <- cbind(Estimate = est, `Std. Error` = se,
`t value` = tval, `Pr(>|t|)` = 2 * pt(abs(tval),
rdf, lower.tail = FALSE))
ans$aliased <- is.na(z$coefficients)
ans$sigma <- sqrt(resvar)
ans$df <- c(p, rdf, NCOL(Qr$qr))
if (p != attr(z$terms, "intercept")) {
df.int <- if (attr(z$terms, "intercept"))
1L
else 0L
ans$r.squared <- mss/(mss + rss)
ans$adj.r.squared <- 1 - (1 - ans$r.squared) * ((n -
df.int)/rdf)
ans$fstatistic <- c(value = (mss/(p - df.int))/resvar,
numdf = p - df.int, dendf = rdf)
}
else ans$r.squared <- ans$adj.r.squared <- 0
ans$cov.unscaled <- R
dimnames(ans$cov.unscaled) <- dimnames(ans$coefficients)[c(1,
1)]
if (correlation) {
ans$correlation <- (R * resvar)/outer(se, se)
dimnames(ans$correlation) <- dimnames(ans$cov.unscaled)
ans$symbolic.cor <- symbolic.cor
}
if (!is.null(z$na.action))
ans$na.action <- z$na.action
class(ans) <- "summary.lm"
ans
} |
aa4b079917a0a843d5599cc079f3f65ad1c5bc90 | 0a906cf8b1b7da2aea87de958e3662870df49727 | /detectRUNS/inst/testfiles/genoConvertCpp/libFuzzer_genoConvertCpp/genoConvertCpp_valgrind_files/1609875417-test.R | f22ec1c291050a5f12e679918145727136c3b370 | [] | no_license | akhikolla/updated-only-Issues | a85c887f0e1aae8a8dc358717d55b21678d04660 | 7d74489dfc7ddfec3955ae7891f15e920cad2e0c | refs/heads/master | 2023-04-13T08:22:15.699449 | 2021-04-21T16:25:35 | 2021-04-21T16:25:35 | 360,232,775 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 357 | r | 1609875417-test.R | testlist <- list(genotype = c(-1266580863L, 8487553L, -2122218809L, -737627007L, 0L, 128L, 3866624L, 989855955L, NA, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L))
result <- do.call(detectRUNS:::genoConvertCpp,testlist)
str(result) |
b92ca3bd97932e86474f166df880eb027c1efe56 | 8a687f7449214a3a986c539e9ec616d19402730a | /data-raw/bc.R | 76d65699cef3ae9142a4d6cedf940665cc1d31d3 | [] | no_license | Moustapha-A/autopolate | c3e24ebe6ab3c8cf9664a2fa4546c2ba509c60da | c3275c9263792db124dd0b5f023a736c9f04e8ca | refs/heads/master | 2021-07-11T04:20:25.092767 | 2017-10-15T22:04:59 | 2017-10-15T22:04:59 | 106,672,172 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 84 | r | bc.R | BC <- data.table::fread('data-raw/BC.csv')
devtools::use_data(BC, overwrite = TRUE)
|
7836ceb7dd59d43a986910deed418ad77280886d | a06ad0b5797e82bde9ae94c15216aaddc654f214 | /R/quiet_join.R | 4653dc0e198026575b44c52ea142935304385a9b | [
"Artistic-2.0"
] | permissive | khemlalnirmalkar/HMP16SData | 5fb436a4aa6638ed1ec07e0f267f7a68bea9e8d9 | 11795265b43cb5eea1c1de57fd6a9936fe54577a | refs/heads/master | 2023-09-02T18:12:38.092116 | 2021-10-26T16:29:57 | 2021-10-26T16:29:57 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 234 | r | quiet_join.R | #' @keywords internal
#'
#' @importFrom magrittr %>%
#' @importFrom dplyr full_join
quiet_join <- function(x, y) {
list(x, y) %>%
lapply(colnames) %>%
Reduce(intersect, x = .) %>%
full_join(x, y, by = .)
}
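# Hedged usage sketch (hypothetical data frames; kept commented so the package
# file stays free of top-level side effects). The join key is every column the
# two inputs share -- here just "id".
# df1 <- data.frame(id = 1:3, a = letters[1:3])
# df2 <- data.frame(id = 2:4, b = LETTERS[2:4])
# quiet_join(df1, df2)  # full join by the shared "id" column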
|
6d46388e11f2abd5c8bcafa2e8a16c4c53bad2cb | 54a13f9f44a46c0ac23ceb5b051027bbd2fa0a24 | /Matrix.R | b61b1ac268a5fe1ac2d9facabccae6a1cb0bbbec | [] | no_license | rupinjairaj/R | b5fb75e70ed67e8f5d6d68e744d1b536e515fac2 | 5d8df1e532faa487e50d7320ac77600d939d35d9 | refs/heads/master | 2020-04-14T20:38:29.952930 | 2016-09-13T19:54:11 | 2016-09-13T19:54:11 | 68,141,140 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,465 | r | Matrix.R | ## MATRIX ##
?matrix
# creating a matrix; by default it fills the matrix column wise
tab <- matrix(c("a", "b", "c", "d"), 2, 2)
# using rownames() and colnames() to define names for the matrix
rownames(tab) <- c("Exposed", "unExposed")
colnames(tab) <- c("Disease", "noDisease")
tab
# setting byrow flag as true so as to fill the matrix row wise
tab <- matrix(c("a", "b", "c", "d"), 2, 2, byrow = T)
rownames(tab) <- c("Exposed", "unExposed")
colnames(tab) <- c("Disease", "noDisease")
tab
# considering the University Group Diabetes Program (UGDP): a placebo-controlled, multi-center
## randomized clinical trial established to assess the efficacy of tolbutamide in treating type 2 diabetes.
## Total of 409 patients: 204 patients received tolbutamide, 205 patients received placebo.
## 30 of the 204 tolbutamide patients died; 21 of the placebo patients died.
ugpd <- matrix(c(30, 174, 21, 184), nrow = 2, ncol = 2, byrow = T, dimnames = list(c("Exposed", "Not Exposed"), c("Disease", "No Disease")))
ugpd
# using the apply()
## 1st parameter: the matrix/table you wish to apply the operation on
## 2nd parameter: the dimension you wish to operate on (1: rows, 2: columns)
## 3rd parameter: the operation you wish to perform: sum, mean, max, min, etc...
## row totals give the size of each exposure group (204 and 205)
rowTotal <- apply(ugpd, 1, sum)
rowTotal
risks <- ugpd[, "Disease"]/rowTotal
risks
odds <- risks/(1-risks)
odds
risks.ratio <- risks/risks[2]
risks.ratio
odds.ratio <- odds/odds[2]
odds.ratio
rbind(risks, risks.ratio, odds, odds.ratio)
|
567b9bda11b580ddc4fce2e561404df5e6367369 | c76fa6dd4dc45f29aafe74e77f5edfe8e17abfb8 | /tests/testthat.R | 31c5646bbac361ab55a971c942241c763c2aec1e | [
"MIT"
] | permissive | alexkurganski/GGenemy | 2a3be4978ee73b43222cf241447e33763931e32e | 4cf380ae3beb4b428044a0e36b8d83116b66a6af | refs/heads/master | 2022-03-03T23:26:09.776736 | 2019-09-23T12:18:33 | 2019-09-23T12:18:33 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 58 | r | testthat.R | library(testthat)
library(GGenemy)
test_check("GGenemy")
|
3cb8d1d52926ecfd9f85b5c55caca4ef9cac9e37 | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/LPWC/examples/prep.data.Rd.R | 9547c83066560c1f2d355740bad38e714a95db38 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 359 | r | prep.data.Rd.R | library(LPWC)
### Name: prep.data
### Title: Preparing Data
### Aliases: prep.data
### ** Examples
prep.data(array(rnorm(20), c(5, 4)), c(0, 0, 0, -1, 1),
timepoints = c(0, 5, 15, 30))
prep.data(array(runif(100, 0, 10), c(10, 10)), sample((-2:2), size = 10, replace = TRUE),
timepoints = c(0, 5, 15, 30, 45, 60, 75, 80, 100, 120))
|
e788437ba695eb85b267d08e6d73a5116b9687eb | 6c39e14c011a445d91616cfbbcd43a735d97389f | /plot2.R | e6f1805cd2bad21a7ee83c4b8ced9d66307fa02e | [] | no_license | alexmarandon/ExData_Plotting1 | 91ed2faf71acffc5f7012f508f790e86f83584d1 | 31890d91d48b314e06f968cd4e02ff5cce02e516 | refs/heads/master | 2020-04-01T16:57:33.406447 | 2015-04-10T15:45:04 | 2015-04-10T15:45:04 | 33,735,832 | 0 | 0 | null | 2015-04-10T15:29:12 | 2015-04-10T15:29:11 | null | UTF-8 | R | false | false | 1,090 | r | plot2.R | library("dplyr")
dataset <- tbl_df(read.table("household_power_consumption.txt",
sep =";", header = TRUE, na.strings = "?",
#nrows=1000,
colClass = c("character", "character", "numeric",
"numeric", "numeric", "numeric", "numeric",
"numeric", "numeric" )))
dataset0 <- dataset %>%
mutate(datetime = as.POSIXct(paste(Date, Time), tz="UTC", format="%d/%m/%Y %H:%M:%S")) %>%
select(10,3:9) %>%
filter(datetime>=as.POSIXct("2007-02-01 00:00:00", tz = "UTC")) %>%
filter(datetime<as.POSIXct("2007-02-03 00:00:00", tz = "UTC")) %>%
arrange(datetime)
remove(dataset)
png(filename = "plot2.png",
width = 480, height = 480, units = "px")
par(mfrow= c(1,1))
Sys.setlocale("LC_TIME", 'en_GB.UTF-8')
plot(dataset0$datetime, dataset0$Global_active_power, "n"
, xlab="", ylab="Global Active Power (kilowatts)")
lines(dataset0$datetime, dataset0$Global_active_power)
dev.off() |
4c20575cd27784e8e0d168dde82bb7f5687c87c2 | 0d558825a61837ff21367842628ef7c3e31db925 | /R/batsmanRunsFreqPerf.R | 04a994a84a734f1770c18d19ac703a2cd0091d08 | [] | no_license | tvganesh/ckt15 | e81313a063f357298d53bf6e13f05d2c0246298c | 924522b7ac86f9b2db785f1c202746fab3f9f787 | refs/heads/master | 2021-01-23T13:44:55.189756 | 2015-07-03T04:05:54 | 2015-07-03T04:05:54 | 38,437,805 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,120 | r | batsmanRunsFreqPerf.R | ##########################################################################################
# Designed and developed by Tinniam V Ganesh
# Date : 30 Jun 2015
# Function: batsmanRunsFreqPerf
# This function computes and plots the Moving Average of the batsman across his career
#
###########################################################################################
# Plot the performance of the batsman as a continous graph
# Create a performance plot between Runs and RunsFrequency
batsmanRunsFreqPerf <- function(file, name="A Hookshot") {
df <- clean(file)
# Create breaks in intervals of 10
maxi <- (max(df$Runs/10) + 1) *10
v <- seq(0,maxi,by=10)
a <- hist(df$Runs,breaks=v,plot=FALSE)
# Create mid points
Runs <- a$mids
RunFrequency <- a$counts
df1 <- data.frame(Runs,RunFrequency)
# Create a ggplot
atitle <- paste(name,"'s", " batting performance")
g <- qplot(df1$Runs,df1$RunFrequency, data=df1,geom=c("point","smooth"),method="loess",
xlab="Runs",ylab="Run Frequency")
p <-g + ggtitle(atitle)
print(p)
} |
0b6201ad85bd67698ee4f0c3eaefdcf7bcaac88e | dd0f5a54e5b93f8c64a25ee1763f25be10992048 | /bin/call-nuclei-atac.R | 6335501e79c619a292a38419684378c562f8c38e | [] | no_license | porchard/2021-03-sn-muscle | 2e457676026739f6983e4719d579f9e1a63fb4b1 | a850d88ea4b7573e1d330d44456f5c14e8c69eec | refs/heads/master | 2023-08-11T01:58:22.310770 | 2021-09-13T15:53:04 | 2021-09-13T15:53:04 | 343,512,362 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 5,118 | r | call-nuclei-atac.R | #!/usr/bin/env Rscript
library(dplyr)
library(tidyr)
library(ggplot2)
library(glue)
parse_nucleus <- function(x) {
RE <- '(.*)-(.*)'
lib <- gsub(RE, '\\1', x)
barcode <- gsub(RE, '\\2', x)
return(list('library'=lib, 'barcode'=barcode))
}
args <- commandArgs(T)
METRICS <- args[1]
THRESHOLDS <- args[2]
#thresholds <- data.frame(
# library=c('125589-hg19', '125589-rn6', '133151-hg19', '133152-hg19', '133153-hg19', '133154-hg19', '63_20-hg19', '63_40-hg19'),
# min_hqaa=c(50000, 50000, 70000, 20000, 50000, 20000, 25000, 15000),
# max_hqaa=c(170000, 170000, 200000, 130000, 200000, 130000, 120000, 80000),
# max_max_fraction_reads_from_single_autosome=c(0.13, 0.15, rep(0.13, 6)),
# min_tss_enrichment=c(3, 3, 4.5, 3, 4.5, 3, 4.5, 4.5))
thresholds <- read.table(THRESHOLDS, stringsAsFactors = F, sep='\t', header = T)
ataqv <- read.table(METRICS, head = F, as.is = T, sep = '\t', col.names = c('nucleus', 'metric', 'value'), colClasses = c('character', 'character', 'character'))
ataqv$library <- parse_nucleus(ataqv$nucleus)$library
ataqv$value[ataqv$value=='None' | ataqv$value=='NA' | is.na(ataqv$value)] <- NA
ataqv$value <- as.numeric(ataqv$value)
ataqv <- ataqv[ataqv$metric %in% c('tss_enrichment', 'hqaa', 'max_fraction_reads_from_single_autosome'),]
ataqv <- ataqv %>%
tidyr::spread(key = metric, value = value) %>%
left_join(thresholds)
# assign species where appropriate...
assign_species <- ataqv
assign_species$genome <- gsub('(.*)-(.*)', '\\2', assign_species$library)
assign_species$library <- gsub('(.*)-(.*)', '\\1', assign_species$library)
number_species_per_library <- assign_species %>%
dplyr::group_by(library) %>%
dplyr::summarize(number_species=n_distinct(genome))
libraries_with_multiple_species <- number_species_per_library$library[number_species_per_library$number_species>1]
assign_species <- assign_species[assign_species$library %in% libraries_with_multiple_species,]
assign_species$barcode <- gsub('.*-', '', assign_species$nucleus)
assign_species <- assign_species %>%
dplyr::select(library, genome, barcode, hqaa) %>%
tidyr::spread(key=genome, value=hqaa)
genomes <- colnames(assign_species)[3:ncol(assign_species)]
assign_species$ratio <- apply(assign_species[,genomes], 1, function(x){max(x/sum(x))})
assign_species$best <- apply(assign_species[,genomes], 1, function(x){genomes[x==max(x)][1]})
assign_species$worst <- apply(assign_species[,genomes], 1, function(x){genomes[x==min(x)][1]})
assign_species$assignment <- 'none'
assign_species$assignment[assign_species$ratio>=0.87] <- assign_species$best[assign_species$ratio>=0.87]
p <- ggplot(assign_species) +
geom_point(aes(x=hg19+rn6, y=ratio), alpha=0.2, stroke=0) +
scale_x_log10() +
ylab('Max(hg19/(rn6+hg19), rn6/(rn6+hg19))') +
xlab('hg19+rn6') +
theme_bw() +
geom_hline(yintercept = 0.87, linetype='dashed', color='red')
png('hg19-rn6-ratio-threshold.png', height=5, width=6)
p
dev.off()
assign_species$drop <- apply(assign_species[,c('library', 'barcode', 'best', 'worst', 'assignment')], 1, function(x){
if (x[5]=='none') {
return(paste(paste(x[1], genomes, x[2], sep='-'), collapse=','))
} else {
return(paste(paste(x[1], genomes[genomes!=x[3]], x[2], sep='-'), collapse=','))
}
})
drop <- unlist(strsplit(assign_species$drop, ','))
ataqv <- ataqv[!ataqv$nucleus %in% drop,]
# will do this on a per-library basis...
p <- ggplot(ataqv) + geom_point(aes(x = hqaa, y = tss_enrichment), alpha=0.05, stroke=0) +
facet_wrap(~library) +
scale_x_log10() +
scale_y_log10() +
theme_bw() +
geom_vline(aes(xintercept = min_hqaa), col='red', linetype='dashed', data = thresholds) +
geom_vline(aes(xintercept = max_hqaa), col='red', linetype='dashed', data = thresholds) +
geom_hline(aes(yintercept = min_tss_enrichment), col='red', linetype='dashed') +
xlab('Final high-quality autosomal reads') +
ylab('TSS enrichment')
png('hqaa-vs-tss-enrichment.png', height = 10, width = 10, units='in', res=300)
p
dev.off()
p <- ggplot(ataqv) + geom_point(aes(x = hqaa, y = max_fraction_reads_from_single_autosome), alpha=0.05, stroke=0) +
facet_wrap(~library) +
scale_x_log10() +
theme_bw() +
geom_vline(aes(xintercept = min_hqaa), col='red', linetype='dashed', data = thresholds) +
geom_vline(aes(xintercept = max_hqaa), col='red', linetype='dashed', data = thresholds) +
geom_hline(aes(yintercept = max_max_fraction_reads_from_single_autosome), col='red', linetype='dashed') +
xlab('Final high-quality autosomal reads') +
ylab('Max. fraction reads from single autosome')
png('hqaa-vs-max-fraction-reads-from-single-autosome.png', height = 10, width = 10, res=300, units='in')
p
dev.off()
survivors <- ataqv %>%
dplyr::filter(hqaa>=min_hqaa) %>%
dplyr::filter(hqaa<=max_hqaa) %>%
dplyr::filter(tss_enrichment>=min_tss_enrichment) %>%
dplyr::filter(max_fraction_reads_from_single_autosome<=max_max_fraction_reads_from_single_autosome)
survivors$barcode <- parse_nucleus(survivors$nucleus)$barcode
write.table(survivors %>% dplyr::select(library, barcode), 'atac-nuclei.txt', append = F, quote = F, sep='\t', row.names = F, col.names = F)
|
edc88865612643181188b68ce0e7fd266d2a3710 | a50422dd9c18f72071e4bc11a9e372abbd342555 | /scripts/setup.R | 996b63a3c0a06c347cf606315e5874524e0c760f | [] | no_license | NoushinN/covid_19_analysis | da981d1ec6c4d53c1dee7495f535587a74598e6b | f70ef351bccd0e1dbe1dc560acd8ee2a8111e445 | refs/heads/master | 2021-04-11T18:07:15.821021 | 2020-09-02T01:12:26 | 2020-09-02T01:12:26 | 249,042,683 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 236 | r | setup.R | # list of libraries
libraries <- c("tidyverse", "data.table", "here", "table1",
"fpp2", "lubridate", "httr")
# load libraries
lapply(libraries, library, character.only = TRUE)
# source the script
.setup_sourced <- TRUE
|
1a73a4d145983140793cc602432daa3b1c6f99c9 | 53a803d4957c26a8cb41e6b9e249ce3b5ebe72da | /01_parse_GFF_files.R | 62adedb1c0d693895160094c990e465216eaf4f7 | [] | no_license | tnaake/PKS_synteny | dd414275eb6d2cf22a28e664e60d0c01a6379315 | 5a674b5a55e7c518e0e37bac1a609cce4f600bfe | refs/heads/master | 2021-07-24T17:00:08.702829 | 2020-08-26T08:31:32 | 2020-08-26T08:31:32 | 212,771,848 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 21,870 | r | 01_parse_GFF_files.R | ## a script to create a file with genes and their orientations compatible with i-ADHore
setwd("H:/Documents/03_Syntenic_linkage/01_Data/OrthoFinder/input/gff_files")
setwd("W:/Thomas/Data/synteny/synteny_iadhore")
## define the functions write_file, write_file_gff and
## write_file_combine_different_entries
write_file <- function(file, column_name = 9, column_scaffold = 1,
column_orientation = 7, first_pattern = "ID=", second_pattern = ";",
keep_pattern_cut = "aaaaaaa", add_pattern = "", filter_take = "mRNA",
replace_pattern = FALSE, pattern = "abc", replacement = "abc") {
species <- unlist(lapply(strsplit(file, split = ".gff"), "[", 1))
gff <- read.table(file, sep = "\t", stringsAsFactors = FALSE, quote = "")
## filter for genes/mRNA, etc.
gff <- gff[gff[, 3] == filter_take, ]
name_tr <- unlist(lapply(strsplit(as.character(gff[, column_name]), first_pattern), "[", 2))
name_tr <- unlist(lapply(strsplit(name_tr, second_pattern), "[", 1))
# ind_remove <- grep(name_tr, pattern = exclude_pattern)
# gff <- gff[-ind_remove,]
# name_tr <- unlist(lapply(strsplit(gff[, column_name], first_pattern), "[", 2))
# name_tr <- unlist(lapply(strsplit(name_tr, ";"), "[", 1)
#ind_keep <- grep(name_tr, pattern = keep_pattern)
## truncate the gff that it contains only the kept features
#gff <- gff[ind_keep,]
## truncate name_tr that it contains only the kept features
#name_tr <- name_tr[ind_keep]
name_tr <- unlist(lapply(strsplit(
name_tr, split = keep_pattern_cut), "[", 1))
name_tr <- paste(name_tr, add_pattern, sep = "")
## paste species in front of gene name
name_tr <- paste(species, "_", name_tr, sep = "")
if (replace_pattern) {
name_tr <- gsub(x = name_tr, replacement = replacement, pattern = pattern)
}
## read fasta file
fasta <- read.table(file = paste("../", species, ".fasta", sep = ""),
header = FALSE, stringsAsFactors = FALSE)
ind_fasta <- grep(fasta[, 1], pattern = ">")
fasta <- fasta[ind_fasta, ]
fasta <- as.character(gsub(">", "", fasta))
## match values in fasta with the ones in name_tr and get indices of the matching ones
inds <- match(name_tr, fasta)
inds_na <- which(!is.na(inds))
## index name_tr and gff
gff <- gff[inds_na, ]
name_tr <- name_tr[inds_na]
value_scaffold <- as.character(gff[, column_scaffold])
unique_scaffold <- unique(value_scaffold)
for (i in 1:length(unique_scaffold)) {
inds <- which(value_scaffold == unique_scaffold[i])
name_tr_i <- name_tr[ inds ]
orientation_i <- gff[inds, column_orientation]
m_i <- matrix(paste(name_tr_i, orientation_i, sep = ""), ncol = 1)
m_i <- m_i[order(gff[inds, 4]), ]
        ## write the matrix of gene + orientation, one .lst file per scaffold
        out_dir <- file.path("lst_files", species)
        if (!dir.exists(out_dir)) dir.create(out_dir, recursive = TRUE)
        file_name <- gsub("[|]", "_", unique_scaffold[i])
        write.table(m_i, quote = FALSE, 
            file = paste(out_dir, "/", file_name, ".lst", sep = ""), 
            row.names = FALSE, col.names = FALSE)
}
}
write_file_gff <- function(file, fasta_file = file, column_name = 9,
column_orientation = 7, column_scaffold = 1, filter_take, first_pattern,
second_pattern) {
species <- strsplit(file, split = ".gff")[[1]][1]
species2 <- strsplit(fasta_file, split = ".gff|.fasta")[[1]][1]
fasta <- read.table(paste("../", species2, ".fasta", sep = ""))
inds <- grep(fasta[, 1], pattern = ">")
fasta <- gsub(">", "", fasta[inds, ])
gff <- read.csv(file, comment.char = "#", header = FALSE,
stringsAsFactors = FALSE, sep = "\t")
gff <- gff[gff[, 3] == filter_take, ]
name_tr <- unlist(lapply(strsplit(
gff[,column_name], split = first_pattern), "[", 2))
name_tr <- unlist(lapply(strsplit(
name_tr, split = second_pattern), "[", 1))
## paste species in front of gene name
name_tr <- paste(species, "_", name_tr, sep = "")
inds <- match(name_tr, fasta)
inds_na <- which(!is.na(inds))
## index name_tr and gff
gff <- gff[inds_na, ]
name_tr <- name_tr[inds_na]
scaffold <- gff[, column_scaffold]
scaffold_unique <- unique(scaffold)
for (i in 1:length(scaffold_unique)) {
inds <- which(scaffold == scaffold_unique[i])
        ## append the strand orientation to each gene name
m_i <- matrix(paste(
name_tr[inds], gff[inds, column_orientation], sep = ""), ncol = 1)
m_i <- m_i[order(gff[inds, 4]), ]
write.table(m_i,
file = paste("lst_files/", species, "/", scaffold_unique[i], ".lst", sep=""),
col.names = FALSE, row.names = FALSE, quote = FALSE)
}
}
write_file_combine_different_entries <- function(file, fasta_file, first_pattern1 = "Name=",
first_pattern2 = ";", second_pattern1 = "pacid=", second_pattern2 = ";",
connector1 = "Csu", connector2 = "|PACid_", column_scaffold = 1,
column_orientation = 7, column_name = 9, filter_take = "mRNA") {
species <- unlist(strsplit(file, split = ".gff"))[1]
## load gff
gff <- read.table(file, comment.char = "#", header = FALSE,
stringsAsFactors = FALSE, sep = "\t")
gff <- gff[gff[, 3] == filter_take, ]
gff_name <- as.character(gff[, column_name])
## strsplit the patterns and combine
pattern1 <- unlist(lapply(
strsplit(gff_name, split = first_pattern1), "[", 2))
pattern1 <- unlist(lapply(
strsplit(pattern1, split = first_pattern2), "[", 1))
pattern2 <- unlist(lapply(
strsplit(gff_name, split = second_pattern1), "[", 2))
pattern2 <- unlist(lapply(
strsplit(pattern2, split = second_pattern2), "[", 1))
combinedName <- paste(connector1, pattern1, connector2, pattern2, sep="")
## paste species in front of gene name
combinedName <- paste(species, "_", combinedName, sep = "")
## read fasta
fasta <- read.table(fasta_file)
fasta <- fasta[grep(fasta[, 1], pattern = ">"), 1]
fasta <- gsub(">", "", fasta)
## match fasta and combinedName
inds <- match(combinedName, fasta)
inds_na <- which(!is.na(inds))
    ## truncate gff and combinedName
gff <- gff[inds_na, ]
combinedName <- combinedName[inds_na]
scaffold <- gff[, column_scaffold]
scaffold_unique <- unique(scaffold)
## iterate through scaffold_unique
for (i in 1:length(scaffold_unique)) {
inds <- scaffold == scaffold_unique[i]
        ## append the strand orientation to each gene name
m_i <- matrix(
unique(paste(combinedName[inds], gff[inds, column_orientation], sep="")), ncol=1)
m_i <- m_i[order(gff[inds, 4]), ]
write.table(m_i,
file = paste("lst_files/", species, "/", scaffold_unique[i], ".lst", sep=""),
col.names=FALSE, row.names=FALSE, quote = FALSE)
}
}
## apply the functions for all species
## actch
write_file("actch.gff3", first_pattern = "Parent=", second_pattern = ";",
keep_pattern_cut = "aaa")
## amahy
write_file("amahy.gff3", first_pattern = "ID=", second_pattern = ";",
keep_pattern_cut = ".v2.1")
## amtri
write_file("amtri.gff3", first_pattern = "ID=", second_pattern = ";",
keep_pattern_cut = "aaa")
## anaba ## no gff file
## anaco
write_file("anaco.gff3", first_pattern = "ID=", second_pattern = ";",
keep_pattern_cut = ".v3")
## aquco
write_file("aquco.gff3", first_pattern = "ID=", second_pattern = ";",
keep_pattern_cut = ".v3.1", add_pattern = ".p")
## aradu
write_file_gff("aradu.gff", filter_take = "CDS", first_pattern = "Genbank:",
second_pattern = ";")
## araip
write_file_gff("araip.gff", filter_take = "CDS", first_pattern = "Genbank:",
second_pattern = ";")
## araly
write_file("araly.gff3", first_pattern = "Name=", second_pattern = ";")
## arath
write_file("arath.gff", first_pattern = "Parent=", second_pattern = ";")
## artan
write_file("artan.gff", filter_take = "CDS", first_pattern = "Name=",
second_pattern = ";")
## aspof
write_file_gff("aspof.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## azofi
write_file_gff("azofi.gff", first_pattern = "ID=", second_pattern = ";",
filter_take = "gene")
## auran
write_file("auran.gff", filter_take = "CDS", first_pattern = "Name=",
second_pattern = ";")
## betpe
write_file_gff("betpe.gff", first_pattern = "ID=", second_pattern = ";",
filter_take = "mRNA")
## betvu
write_file("betvu.gff3", first_pattern = "ID=", second_pattern = ";",
keep_pattern_cut = "aaaa")
## bradi
write_file("bradi.gff3", first_pattern = "ID=", second_pattern = ";",
keep_pattern_cut = ".v3.1", add_pattern = ".p")
## brana
write_file("brana.gff3", first_pattern = "Alias=", second_pattern = ";",
filter_take = "mRNA")
## braol
write_file("braol.gff3", first_pattern = "transcript_id=", second_pattern = ";",
filter_take = "mRNA")
## brara
write_file("brara.gff3", first_pattern = "ID=", second_pattern = ";",
keep_pattern_cut = ".v1.3", add_pattern = ".p")
## cajca
write_file_gff("cajca.gff", first_pattern = "ID=", second_pattern = ";",
filter_take = "mRNA")
## camsa
write_file_gff("camsa.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## camsi
write_file("camsi.gff3", first_pattern = "ID=", second_pattern = ";",
filter_take = "mRNA")
## cansa
write_file_gff("cansa.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## capan
write_file("capan.gff", first_pattern = "=", second_pattern = ";",
filter_take = "mRNA")
## capgr
write_file("capgr.gff3", first_pattern = "Name=", second_pattern = ";",
add_pattern = ".p")
## capru
write_file("capru.gff3", first_pattern = "Name=", second_pattern = ";")
## carpa
write_file("carpa.gff3", first_pattern = "Name=", second_pattern = ";")
## chabr
write_file_gff("chabr.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## chequ
write_file("chequ.gff3", first_pattern = "Name=", second_pattern = ";",
filter_take = "mRNA")
## chlre
write_file_gff("chlre.gff3", filter_take = "mRNA", first_pattern = "Name=",
second_pattern = ";")
## cicar
write_file_gff("cicar.gff", first_pattern = "ID=", second_pattern = ";",
filter_take = "mRNA")
## citcl
write_file_gff("citcl.gff", fasta_file = "citcl.fasta", filter_take = "CDS",
first_pattern = "Name=", second_pattern = ";")
## citla
write_file("citla.gff", first_pattern = "ID=", second_pattern = ";")
## citsi
write_file("citsi.gff3", first_pattern = "ID=", second_pattern = ";")
## cofca
write_file("cofca.gff3", first_pattern = "Name=", second_pattern = ";",
filter_take = "mRNA")
## corca
write_file("corca.gff", filter_take = "CDS", first_pattern = "Name=",
second_pattern = ";")
## corol
write_file("corol.gff", filter_take = "CDS", first_pattern = "Name=",
second_pattern = ";")
## covsu
write_file_combine_different_entries("covsu.gff3", "../covsu.fasta",
first_pattern1 = "Name=", first_pattern2 = ";", second_pattern1 = "pacid=",
second_pattern2 = ";", connector1 = "Csu", connector2 = "|PACid_",
filter_take = "mRNA")
## cucme
write_file("cucme.gff3", first_pattern = "Target=", second_pattern = " ",
filter_take = "CDS", replace_pattern = TRUE, pattern = "T",
replacement = "P")
## cucsa
write_file_gff(file = "cucsa.gff", fasta_file = "cucsa.fasta",
filter_take = "CDS", first_pattern = "Genbank:", second_pattern = ";")
## cyame
write_file("cyame.gff3", first_pattern = "ID=transcript:", second_pattern = ";")
## cyapa
write_file_combine_different_entries("cyapa.gff", "../cyapa.fasta",
first_pattern1 = "ID=", first_pattern2 = ";", second_pattern1 = "ID=",
second_pattern2 = "e", connector1 = "Cpa|", connector2 = "",
filter_take = "mRNA")
## cycmi no gff
## dauca
write_file("dauca.gff3", first_pattern = "ID=", second_pattern = ";",
keep_pattern_cut = ".v1.0.388")
## denof
write_file_gff("denof.gff", filter_take = "CDS", first_pattern = "Genbank:",
second_pattern = ";")
## dunsa
write_file("dunsa.gff3", first_pattern = "ID=", second_pattern = ";",
keep_pattern_cut = ".v1.0")
## ectsi
write_file("ectsi.gff3", first_pattern = "ID=", second_pattern = ";",
filter_take = "gene")
## elagu
write_file_gff("elagu.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## equgi no gff
## eucgr
write_file("eucgr.gff3", first_pattern = "Name=", second_pattern = ";",
add_pattern = ".p")
## eutsa
write_file("eutsa.gff3", first_pattern = "Name=", second_pattern = ";")
## frave
write_file("frave.gff3", first_pattern = "Name=", second_pattern = ";")
## genau
write_file_gff("genau.gff", filter_take = "CDS", first_pattern = "Name=",
second_pattern = ";")
## ginbi
write_file("ginbi.gff", first_pattern = "ID=", second_pattern = ";")
## glyma
write_file("glyma.gff3", first_pattern = "Name=", second_pattern = ";",
add_pattern = ".p")
## glyur
write_file("glyur.gff3", first_pattern = "ID=", second_pattern = ";")
## gnemo
write_file("gnemo.gff", first_pattern = "ID=", second_pattern = ";")
## gosra
write_file("gosra.gff3", first_pattern = "Name=", second_pattern = ";")
## helan
write_file("helan.gff3", first_pattern = "ID=", second_pattern = ";")
## horvu
write_file("horvu.gff3", first_pattern = "ID=transcript:", second_pattern = ";",
filter_take = "mRNA")
## humlu
write_file("humlu.gff3", first_pattern = "ID=", second_pattern = ";")
## iponi
write_file_gff("iponi.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## jatcu
write_file("jatcu.gff", filter_take = "CDS", first_pattern = "JCDB_ID=",
second_pattern = ";")
## jugre
write_file("jugre.gff", filter_take = "gene", first_pattern = "J",
second_pattern = ";", replace_pattern = TRUE, pattern = "ure",
replacement = "Jure")
## kalfe
write_file("kalfe.gff3", first_pattern = "Name=", second_pattern = ";",
add_pattern = ".p")
## kleni
write_file("kleni.gff", first_pattern = "ID=", second_pattern = ";")
## leepe
write_file("leepe.gff3", first_pattern = "transcript_id=", second_pattern = ";",
filter_take = "mRNA")
## lepme
write_file("lepme.gff3", first_pattern = "D=", second_pattern = ";")
## linus
write_file("linus.gff3", first_pattern = "Name=", second_pattern = ";")
## lotja
write_file("lotja.gff3", first_pattern = "ID=", second_pattern = ";")
## lupan
write_file_gff("lupan.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## maldo
write_file_gff(file = "maldo.gff", filter_take = "CDS",
first_pattern = "Genbank:", second_pattern = ";")
## manes
write_file("manes.gff", first_pattern = "Dbxref=Phytozome:",
second_pattern = ",", filter_take = "CDS")
## marpo
write_file("marpo.gff3", first_pattern = "Name=", second_pattern = ";")
## medtr
write_file_gff(file = "medtr.gff", filter_take = "CDS",
first_pattern = "Genbank:", second_pattern = ";")
## mimgu
write_file("mimgu.gff3", first_pattern = "Name=", second_pattern = ";",
filter_take = "mRNA", add_pattern=".p")
## morno
write_file_gff("morno.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## musac
write_file("musac.gff3", first_pattern = "ID=", second_pattern = ";",
replace_pattern = TRUE, pattern = "_t", replacement = "_p")
## momch
write_file("momch.gff", filter_take = "CDS", first_pattern = "Name=",
second_pattern = ";")
## nelnu
write_file_gff("nelnu.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## nicat
write_file_gff("nicat.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## nicbe
write_file("nicbe.gff", first_pattern = "ID=", second_pattern = ";")
## nicsy
write_file_gff("nicsy.gff", fasta_file = "nicsy.fasta", filter_take = "CDS",
first_pattern = "Name=", second_pattern = ";")
## nicta
write_file("nicta.gff", first_pattern = "ID=", second_pattern = ";")
## oleeu
write_file("oleeu.gff", first_pattern = "Name=", second_pattern = ";")
## oroth
write_file("oroth.gff3", first_pattern = "Name=", second_pattern = ";")
## orysa
write_file("orysa.gff3", first_pattern = "Parent=", second_pattern = ".1;",
filter_take = "CDS")
## oryru
write_file("oryru.gff3", first_pattern = "transcript_id=", second_pattern = ";",
filter_take = "mRNA")
## ostlu
write_file_combine_different_entries("ostlu.gff3", "../ostlu.fasta",
first_pattern1 = "Name=", first_pattern2 = ";", second_pattern1 = "pacid=",
second_pattern2 = ";", connector1 = "Olu", connector2 = "|PACid_")
## papso
write_file_gff("papso.gff", filter_take = "CDS", first_pattern = "Name=",
second_pattern = ";")
## petax
write_file("petax.gff", first_pattern = "ID=", second_pattern = ";")
## petin
write_file("petin.gff", first_pattern = "ID=", second_pattern = ";")
## phaeq
write_file_gff("phaeq.gff", filter_take = "CDS", first_pattern = "Genbank:",
second_pattern = ";")
## phavu
write_file("phavu.gff3", first_pattern = "transcript_id=", second_pattern = ";",
filter_take = "mRNA")
## phoda
write_file_gff("phoda.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## phypa
write_file("phypa.gff3", first_pattern = "Name=", second_pattern = ";",
add_pattern = ".p")
## picab
write_file("picab.gff", filter_take = "gene", first_pattern = "ID=",
second_pattern = " ")
## pinpi no gff
## pinsy no gff
## pinta
write_file("pinta.gff", filter_take = "gene", first_pattern = "P",
second_pattern = ";", replace_pattern = TRUE, pattern = "ITA",
replacement = "PITA")
## poptr
write_file("poptr.gff3", first_pattern = "Name=", second_pattern = ";")
## porpu
write_file_combine_different_entries("porpu.gff3", "../porpu.fasta",
first_pattern1 = "ID=", first_pattern2 = ";", second_pattern1 = "ID=",
second_pattern2 = "e", connector1 = "Popu|", connector2 = "")
## prupe
write_file("prupe.gff3", first_pattern = "Name=", second_pattern = ";",
add_pattern = ".p")
## pseme
write_file("pseme.gff3", first_pattern = "ID=", second_pattern = ";")
## pungr
write_file("pungr.gff", filter_take = "CDS", first_pattern = "Name=",
second_pattern = ";")
## pyrbr
write_file_gff("pyrbr.gff", first_pattern = "ID=", second_pattern = ";",
filter_take = "mRNA")
## quero
write_file("quero.gff", first_pattern = "Name=", second_pattern = ";",
replace_pattern = TRUE, pattern = "T", replacement = "P")
## ricco
write_file("ricco.gff", first_pattern = "mRNA ", second_pattern = "; ")
## ruboc
write_file("ruboc.gff3", first_pattern = "ID=", second_pattern = ";",
filter_take = "gene")
## salcu
write_file_gff("salcu.gff", first_pattern = "ID=", second_pattern = ";",
filter_take = "gene")
## salmi
write_file("salmi.gff3", first_pattern = "ID=", second_pattern = ";",
add_pattern = "")
## selmo
write_file_combine_different_entries("selmo.gff3", "../selmo.fasta",
first_pattern1 = "Name=", first_pattern2 = ";", second_pattern1 = "pacid=",
second_pattern2 = ";", connector1 = "Smo", connector2 = "|PACid_")
## setit
write_file("setit.gff", first_pattern = "ID=", second_pattern = ";")
## solpe
write_file("solpe.gff3", first_pattern = "ID=", second_pattern = ";")
## soltu
write_file("soltu.gff", first_pattern = "Parent=", second_pattern = ";")
## solyc
write_file("solyc.gff", first_pattern = "ID=mRNA:", second_pattern = ";")
## sorbi
write_file("sorbi.gff3", first_pattern = "Name=", second_pattern = ";",
add_pattern = ".p")
## spipo
write_file("spipo.gff3", first_pattern = "Name=", second_pattern = ";")
## synec
write_file("synec.gff", filter_take = "CDS", first_pattern = "Name=",
second_pattern = ";")
## tarha
write_file_gff("tarha.gff", first_pattern = "Name=", second_pattern = ";",
filter_take = "CDS")
## taxba no gff
## theca
write_file("theca.gff3", first_pattern = "Name=", second_pattern = ";")
## thepa
write_file_combine_different_entries("thepa.gff", "../thepa.fasta",
first_pattern1 = "Tp", first_pattern2 = ";", second_pattern1 = "T",
second_pattern2 = "p", connector1 = "Tp", connector2 = "")
## triae
write_file("triae.gff3", first_pattern = "Name=", second_pattern = ";",
filter_take = "mRNA")
## tripr
write_file("tripr.gff3", first_pattern = "Name=", second_pattern = ";")
## vacco
write_file_gff("vacco.gff3", first_pattern = "ID=", second_pattern = ";",
filter_take = "mRNA")
## vitvi
write_file("vitvi.gff", first_pattern = "mRNA ", second_pattern = " ;")
## volca
write_file("volca.gff3", first_pattern = "Name=", second_pattern = ";")
## zeama
write_file("zeama.gff3", first_pattern = "transcript:", second_pattern = "_",
filter_take = "CDS")
## zizju
write_file_gff("zizju.gff", filter_take = "CDS", first_pattern = "Genbank:",
second_pattern = ";")
## zosma
write_file("zosma.gff3", first_pattern = "Name=", second_pattern = ";")
|
f0166d85f9e1baa239fbe753320d457bedbfc4e4 | 7a95abd73d1ab9826e7f2bd7762f31c98bd0274f | /metafolio/inst/testfiles/est_beta_params/libFuzzer_est_beta_params/est_beta_params_valgrind_files/1612988609-test.R | c8b1db2122fa0c9f3c5cf08a7831121892a303ee | [] | no_license | akhikolla/updatedatatype-list3 | 536d4e126d14ffb84bb655b8551ed5bc9b16d2c5 | d1505cabc5bea8badb599bf1ed44efad5306636c | refs/heads/master | 2023-03-25T09:44:15.112369 | 2021-03-20T15:57:10 | 2021-03-20T15:57:10 | 349,770,001 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 141 | r | 1612988609-test.R | testlist <- list(mu = -1.96154140337828e+23, var = -1.96154140339432e+23)
result <- do.call(metafolio:::est_beta_params,testlist)
str(result) |
270386fb70baff76f9848a93229a442c1d29b0b3 | 3660dc3609b4475230efba029ff7ae2e8c888773 | /transposon_annotation/r_scripts/plot_te_similarity.r | 16db16bafe7277e04c743ecdfe301493ac10ae5b | [
"MIT"
] | permissive | sestaton/sesbio | 16b98d14fe1d74e1bb37196eb6d2c625862e6430 | b222b3f9bf7f836c6987bfbe60965f2e7a4d88ec | refs/heads/master | 2022-11-12T17:06:14.124743 | 2022-10-26T00:26:22 | 2022-10-26T00:26:22 | 13,496,065 | 15 | 6 | null | null | null | null | UTF-8 | R | false | false | 707 | r | plot_te_similarity.r | library("ggplot2")
sims <- read.table("all_te_similarity_0106.txt",sep="\t",header=F)
names(sims) <- c("type","element","length","similarity")
sims$type <- factor(sims$type,
levels = c("copia","gypsy","unclassified-ltr","trim","hAT",
"mutator","tc1-mariner","unclassified-tir"),
labels = c("Copia","Gypsy","Unclassified-LTR","TRIM","hAT",
"Mutator","Tc1-Mariner","Unclassified-TIR"))
ggplot(sims, aes(similarity, y=..density..)) +
geom_histogram(binwidth=.3,fill="cornsilk",colour="grey60",size=.2) +
geom_density(size=1,alpha=.3) +
scale_x_reverse() +
theme_bw() +
facet_grid(type ~ .) +
ylab("Density") +
xlab("LTR/TIR similarity") |
c2bd952da133cc7809f7f3ff0626030a7e115a06 | 8aae37f4b346daa3afcc85e507bcaaa8af68de52 | /r analysis.R | 42f41b31be1061ec73276318b4421193846e1106 | [] | no_license | ElvinFox/R-scripts | d80ba5ef93ad4ca4952d5707793549e93fd7618a | 2dc3e12b409478c49bc37371fa6f077d6d397e97 | refs/heads/master | 2020-03-17T13:27:16.855652 | 2019-04-16T11:14:36 | 2019-04-16T11:14:36 | 133,632,330 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 9,926 | r | r analysis.R | df <- as.data.frame(HairEyeColor)
df1 <- as.data.frame(HairEyeColor["Brown",,"Female"])
chisq.test(df1)
library(ggplot2)
df <- diamonds
head(df)
df <- diamonds
main_stat <- chisq.test(df$cut, df$color)[[1]]
df <- diamonds
df$factor_price <- as.factor( df$price > mean(df$price) )
levels(df$factor_price) <- rep( c(0,1) )
df$factor_carat <- as.factor( df$carat > mean(df$carat) )
levels(df$factor_carat) <- rep( c(0,1) )
main_stat <- chisq.test(df$factor_price, df$factor_carat)[[1]]
fisher_test <- fisher.test(mtcars$am, mtcars$vs)$p.value
plot(mtcars$am, mtcars$vs)
df <- iris
ggplot(df, aes(x = Sepal.Length)) +
geom_histogram(fill = "white", col = "black", binwidth = 0.4) +
facet_grid(Species~.)
ggplot(df, aes(Sepal.Length, fill = Species)) +
geom_density(alpha = 0.6)
ggplot(df, aes(Species,Sepal.Length )) +
geom_boxplot(show.legend = T, outlier.size = 5, outlier.alpha = 0.7, col = "blue")
df <- ToothGrowth
one <- subset(df, supp == "OJ" & dose == 0.5)
two <- subset(df, supp == "VC" & dose == 2)
t_stat <- t.test(one$len, two$len )$statistic
df <- read.csv("https://stepic.org/media/attachments/lesson/11504/lekarstva.csv")
head(df)
t.test(df$Pressure_before, df$Pressure_after, paired = T)
df <- read.table("C:\\Users\\ElvinFox\\Downloads\\dataset_11504_15 (6).txt")
bartlett.test(df$V1 ~ df$V2)
t.test(df$V1 ~ df$V2, var.equal = TRUE)
wilcox.test(df$V1 ~ df$V2)
df <- read.table("C:\\Users\\ElvinFox\\Downloads\\dataset_11504_16 (1).txt")
pv <- t.test(df$V1, df$V2)$p.value
mean(df$V1)
mean(df$V2)
pv
df <- npk
fit <- aov(yield ~ N + P + K, data = df)
summary(fit)
df <- iris
fit <- aov(df$Sepal.Width ~ df$Species)
summary(fit)
TukeyHSD(fit)
df <- read.csv("https://stepik.org/media/attachments/lesson/11505/Pillulkin.csv")
df$patient <- as.factor(df$patient)
fit <- aov(temperature ~ pill + Error(patient/pill), data = df)
summary( fit)
fit1 <- aov(temperature ~ doctor * pill + Error( patient/(pill * doctor) ), data = df)
summary( fit1)
library(Hmisc)
library(ggplot2)
obj <- ggplot(ToothGrowth, aes(x = as.factor(dose), y = len, col = supp, group = supp)) +
stat_summary(fun.data = mean_cl_boot, geom = 'errorbar', width = 0.1, position = position_dodge(0.2)) +
stat_summary(fun.data = mean_cl_boot, geom = 'point', size = 3, position = position_dodge(0.2)) +
stat_summary(fun.data = mean_cl_boot, geom = 'line', position = position_dodge(0.2))
obj
my_vector <- c(1, 2, 3, NA, NA, NA, NA)
NA.position(my_vector)
# 4 5 6 7
NA.position <- function(x){
which(is.na(x))
}
NA.position(my_vector)
NA.counter <- function(x){
  sum(is.na(x))  # count the NAs in x directly instead of indexing summary() output
}
NA.counter(my_vector)
summary(my_vector)[[7]]
filtered.sum(c(1, -2, 3, NA, NA))
# [1] 4
x <- c(-1, -2, -3, NA, NA)
filtered.sum <- function(x){
z <- which(x > 0)
sum(x[z])
}
x <- c(0.13, 0.03, -0.11, 39.76, 0.14, 10.02, -0.25,
-1.21, -0.34, 2.12, 0.82, 1.78, -1.06, 1.3, -0.19, 0.88,
-4.49, -0.21, 0.43, 0.69, 8.47, 0.09, 0.83, 0.16, -2.52, -0.15, -0.9, -2.17, -17.68, -1.05)
boxplot(x)
length(x)
outliers.rm <- function(x){
  qa <- quantile(x, probs = c(0.25, 0.75)) # first and third quartiles of x
  qr <- IQR(x)                             # interquartile range of x
  rm <- which(x < (qa[1] - 1.5 * qr) | x > (qa[2] + 1.5 * qr))
  # guard the empty case: x[-integer(0)] would drop every element
  if (length(rm) > 0) x <- x[-rm]
  return(x)
}
outliers.rm(x)
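The Tukey fences can be checked on a tiny vector where the outlier is obvious:

```r
## Toy check: Q1 = 2, Q3 = 4, IQR = 2, so the fences are [-1, 7] and 100 is dropped
outliers.rm(c(1, 2, 3, 4, 100))  # 1 2 3 4
```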
x <- mtcars[, c(1,5)]
corr.calc <- function(x){
vals <- cor.test(x[,1], x[,2])
return(c(vals$estimate, vals$p.value))
}
corr.calc( iris[,1:2] )
x <- read.csv("https://stepik.org/media/attachments/lesson/11504/step6.csv")
filtered.cor <- function(x){
nm <- colnames(x)
c <- as.vector( c() )
for (i in (1:length(nm)) ) {
if ( is.numeric(x[,i]) == T) c <- rbind(c,i)
}
cleared <- x[, c]
stat <- cor(cleared)
stat[lower.tri(stat,diag = T)] <- 0
stat <- as.vector( stat )
stat[ which.max( abs(stat) ) ]
}
filtered.cor(iris)
filtered.cor(x)
filtered.cor(test_data)
iris$Petal.Length <- -iris$Petal.Length
filtered.cor(iris)
## leftover draft of the factor-detection loop used inside filtered.cor()
## (nm and c are not defined at top level, so it is kept commented out)
# for (i in (1:length(nm)) ) {
#   if ( is.null( levels(x[,nm[i]] )) == T) c <- rbind(c,i)
# }
is.numeric( sum(x[,1]) )
is.numeric(x[,2])
test_data <- as.data.frame(list(V5 = c("j", "j", "j", "j", "j", "j", "j", "j"),
V1 = c(1.3, -0.8, -0.7, 0.6, -0.6, 0.4, 0.6, -0.3),
V4 = c("k", "k", "k", "k", "k", "k", "k", "k"),
V6 = c("e", "e", "e", "e", "e", "e", "e", "e"),
V3 = c(-0.3, -2.3, 0.7, -1, 0.8, -0.4, -0.1, -1.3),
V2 = c(-0.3, -2.3, 0.7, -1, 0.8, -0.4, -0.1, -1.3)))
mtx <- cor(test_data[,c(2,5:6)])
mtx[lower.tri(mtx,diag = T)] <- 0
mtx
test <- sapply(x, function(x) is.numeric(x))
test_data <- read.csv("https://stepik.org/media/attachments/course/129/test_data.csv")
smart_cor(test_data)
#[1] -0.1031003
smart_cor <- function(x){
  p1 <- shapiro.test(x[[1]])$p.value
  p2 <- shapiro.test(x[[2]])$p.value
  # Spearman if either variable deviates from normality, Pearson otherwise
  method <- if (p1 < 0.05 | p2 < 0.05) "spearman" else "pearson"
  cor.test(x[[1]], x[[2]], method = method)$estimate
}
test_data[,2]
x <- read.csv("https://stepik.org/media/attachments/course/129/test_data.csv")
z <- cor.test(x[[1]], x[[2]], method = "spearman") # spearman.test() lives in the pspearman package
z$estimate
test_data <- as.data.frame(list(col1 = c(-0.78, 1.74, 2, -0.3, 0.44, 0.39, -0.82, 0.88, 1.48, -1.79, 0.08, 1.24, -0.47, 1.68, -1.16, -1.79, -0.51, -1.72, -0.4, 0.43, -0.66, 0.88, 1.93, 1.3, 1.79, -0.78, 1.59, 0.04, -1.27, -1.25),
col2 = c(-0.19, -0.14, -0.35, -0.49, 1.6, -0.19, 0.15, 0.77, 0.02, -0.54, -0.85, 1.18, -0.23, 0.58, -1.26, 0.36, -0.78, -1.04, -0.49, -3.62, -0.63, 0.44, 1.17, 0.56, 0.45, 0.65, 0.16, 1.03, 0.46, 1.41)))
library(dplyr)
library(ggplot2)
df <- subset(diamonds, cut == "Ideal" & carat == 0.46 )
fit <- lm(price ~ depth, df)
fit_coef <- fit$coefficients
levels(df$cut)
x <- iris[,1:2]
my_df = iris[,1:2] # на вход подаем данные iris только с переменными Sepal.Length и Sepal.Width
regr.calc(iris[,1:2]) # переменные значимо не коррелируют
#[1] "There is no sense in prediction"
regr.calc <- function(x){
  # the original body referenced a global df instead of the argument x
  if (cor.test(x[[1]], x[[2]], method = "pearson")$p.value < 0.05){
    x$fit <- lm(x[[1]] ~ x[[2]])$fitted.values
    return(x)
  } else {
    return("There is no sense in prediction")
  }
}
x <- iris[,c(1,4)]
my_df = iris[,c(1,4)] # на вход подаем данные iris только с переменными Sepal.Length и Petal.Width
regr.calc(my_df) # переменные значимо коррелируют
rm(x)
length(test_data[,1])
test_data <- read.csv("https://stepic.org/media/attachments/course/129/fill_na_test.csv")
fill_na <- function(df){
model <- lm(y ~ x_1 + x_2, df)
df$y_full = ifelse( is.na(df$y) , predict(model, df), df$y )
return(df)
}
fill_na(test_data)
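A self-contained check on toy data (hypothetical values, columns named to match what the function expects):

```r
## Toy check: the NA in y is replaced by the regression prediction in y_full
toy <- data.frame(x_1 = c(1, 2, 3, 4, 5),
                  x_2 = c(2, 1, 4, 3, 5),
                  y   = c(3, 3, NA, 7, 10))
fill_na(toy)
```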
x <- read.csv("https://stepic.org/media/attachments/course/129/fill_na_test.csv")
x1 <- x[complete.cases(x),]
model <- lm(y ~ x_1 + x_2, data = x1)
summary(model)
new.df <- data.frame(x_1 = 10, x_2 = 45 )
predict(model, new.df)
predict(model, x[10,])
mtcars$am <- factor(mtcars$am, labels = c('Automatic', 'Manual'))
mtcars$wt_centered <- mtcars$wt - mean(mtcars$wt)
df <- lm(mtcars$mpg ~ mtcars$wt_centered * mtcars$am)
summary(df)
library(ggplot2)
# first convert the am variable to a factor
mtcars$am <- factor(mtcars$am)
# now build the plot
my_plot <- ggplot(data = mtcars, aes(x = wt, y = mpg, col = am) ) +
geom_smooth(method = 'lm')
my_plot
df <- mtcars[c('wt', 'mpg', 'disp', 'drat', 'hp')]
test <- lm(df$wt ~ df$mpg + df$disp + df$drat + df$hp)
test <- lm(df$wt ~ df$mpg + df$disp + df$hp)
summary(test)
df <- attitude
summary(lm(df$rating ~ df$complaints * df$critical))
my_vector <- c(0.027, 0.079, 0.307, 0.098, 0.021, 0.091, 0.322, 0.211, 0.069,
0.261, 0.241, 0.166, 0.283, 0.041, 0.369, 0.167, 0.001, 0.053,
0.262, 0.033, 0.457, 0.166, 0.344, 0.139, 0.162, 0.152, 0.107,
0.255, 0.037, 0.005, 0.042, 0.220, 0.283, 0.050, 0.194, 0.018,
0.291, 0.037, 0.085, 0.004, 0.265, 0.218, 0.071, 0.213, 0.232,
0.024, 0.049, 0.431, 0.061, 0.523)
hist(my_vector)
shapiro.test(log(my_vector))
shapiro.test(1/my_vector)
beta.coef <- function(x){
df <- as.data.frame( scale( x ) )
fit <- lm(df[,1] ~ df[,2], df)
return(fit$coefficients)
}
beta.coef(mtcars[,c(1,3)])
df <- as.data.frame( scale( mtcars[,c(1,3)] ) )
z <- lm(df[,1] ~ df[,2], df)
beta.coef(mtcars[,c(1,3)])
beta.coef(swiss[,c(1,4)])
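With a single predictor, the standardized slope returned by `beta.coef` is exactly Pearson's correlation, which gives a quick sanity check:

```r
## Sanity check: standardized slope == Pearson's r in simple regression
all.equal(unname(beta.coef(mtcars[, c(1, 3)])[2]),
          cor(mtcars$mpg, mtcars$disp))  # TRUE
```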
model_full <- lm(rating ~ ., data = attitude)
model_null <- lm(rating ~ 1, data = attitude)
scope = list(lower = model_null, upper = model_full)
ideal_model <- step(model_null, scope = list(lower = model_null, upper = model_full), direction = "forward")
summary(ideal_model)
anova(ideal_model, model_full)
df <- LifeCycleSavings
## all main effects and two-way interactions
summary( lm(sr ~ (.)^2, df) )
|
65a87acd8feee6629b281605126ab910a65afd82 | 747b678e951172195f621fddc63d44c8a7da447e | /Lifespan_Expectancy_NL.R | f8b7b334d4dda887386d813a849003296d420129 | [] | no_license | MiaFia/NewPirates | a2654c706e910ef5bb44bc93feddb2cbfba9e0b1 | 2767e38fd4c8e716454b75a203c1621328aab3b1 | refs/heads/main | 2023-02-21T13:02:00.759533 | 2021-01-22T13:25:42 | 2021-01-22T13:25:42 | 330,686,526 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,474 | r | Lifespan_Expectancy_NL.R | library(tidyverse)
library(readr)
library('ggthemes')
Sys.setenv(LANG= "en")
#With the data file from the github we get life expectancy approximations from before 1800
early_life_expectancy <- read_delim("./Data/Social_High_Class_Life_expectancy_Middle.csv",
";", escape_double = FALSE, trim_ws = TRUE)
early_life_expect_combined <- early_life_expectancy%>%
mutate("Total"=((`Expectation of life at birth(years), Males`)+
(`Expectation of life at birth(years), Females`))/2)%>%
group_by(`Middle of Period of Birth`)
#A dataset by the dutch government to supply more data
Levensverwachting <- read_delim("./Data/LevensverwachtingStatline.csv",
";", escape_double = FALSE, trim_ws = TRUE)
#We filter the set so it only uses the data for cohorts and life expectancy at age 21
Levens2<- Levensverwachting %>%
filter(grepl('tot', Perioden, fixed = TRUE))%>%
filter(grepl('21 jaar', `Leeftijd (op 31 december)`, fixed = FALSE))%>%
filter(Geslacht == "Mannen" | Geslacht == "Vrouwen")%>%
select(-`Sterftekans (kans)`, -`Levenden (tafelbevolking) (aantal)`,
-`Overledenen (tafelbevolking) (aantal)`, -`Leeftijd (op 31 december)`)
#We then found the average life expectancy for each group and across both genders, assuming their split to be 50/50
Levens3 <- Levens2%>%
group_by(Geslacht)%>%
pivot_wider(names_from = Geslacht, values_from= `Levensverwachting (jaar )`)%>%
mutate("Lifeexpectance_combined"
=(as.integer(Mannen)+as.integer(Vrouwen))/2)%>%
mutate("Levensverwacht"= (as.integer(Lifeexpectance_combined)+21))
#As the data deals in groups of 5 years, we determined the middle of the period and assigned the value of the period to it
numeric_years <- Levens3 %>%
separate(Perioden,into= c("PeriodStart", "PeriodEnd"), sep= " tot ", remove = TRUE, convert = TRUE)%>%
mutate(PeriodMiddle= ((((PeriodStart)+(PeriodEnd)))/2)-21)#%>%
#filter (PeriodEnd<=1931)
#We get the Life expectancy at birth
Levens_0<-Levensverwachting %>%
filter(grepl('tot', Perioden, fixed = TRUE))%>%
filter(grepl('0 jaar', `Leeftijd (op 31 december)`, fixed = FALSE))%>%
filter(Geslacht == "Mannen" | Geslacht == "Vrouwen")%>%
select(-`Sterftekans (kans)`, -`Levenden (tafelbevolking) (aantal)`,
-`Overledenen (tafelbevolking) (aantal)`, -`Leeftijd (op 31 december)`)
Levens0 <- Levens_0%>%
group_by(Geslacht)%>%
pivot_wider(names_from = Geslacht, values_from= `Levensverwachting (jaar )`)%>%
mutate("Lifeexpectance_combined"
=(as.integer(Mannen)+as.integer(Vrouwen))/2)
#As the data deals in groups of 5 years, we determined the middle of the period and assigned the value of the period to it
numeric0_years <- Levens0 %>%
separate(Perioden,into= c("PeriodStart", "PeriodEnd"), sep= " tot ", remove = TRUE, convert = TRUE)%>%
  mutate(PeriodMiddle = (PeriodStart + PeriodEnd) / 2)%>%
filter (PeriodEnd<=1931)
#Then we find the life span for dutch Wikipedia people
#Creating a variable for the csv file
people_dutch <- read_csv("people_dutch_real.csv",col_types = "ii")
#Piping the data to get valid entries for complete cohorts, as well as computing a lifespan for each person
people_dutch_new <- people_dutch %>%
mutate("lifespan" = deathYear - birthYear) %>%
filter(lifespan > 0 & lifespan < 120) %>%
filter(birthYear <= 1931)%>%
arrange(birthYear)
#Grouping the entries by year and calculating the mean life
people_dutch_grouped <- people_dutch_new %>%
group_by(birthYear)%>%
summarise(average_life = mean(lifespan))
#Graph 1: Dutch government old and new:
ggplot(data = people_dutch_grouped)+
geom_line( aes(x = birthYear, y = average_life, color ='DBpedia Lifespans'),size=0.5) +
geom_smooth(data= people_dutch_grouped, aes(x = birthYear, y = average_life, color ='DBpedia Lifespans'),size =1.5, method='lm') +
geom_line (data=numeric_years, aes(x=PeriodMiddle, y=Levensverwacht, color='Life expectancy at age 21'))+
geom_line (data=numeric0_years, aes(x=PeriodMiddle, y=Lifeexpectance_combined, color='Life expectancy at birth'))+
labs(title = "Lifespans and Life Expectancy in Netherlands",
subtitle= 'Lifespan of well-known Dutch people compared to
the average life expectancy at birth and at age 21',
x="Birth Year",
y="Average Lifetime",
color = 'Sources') +
theme_light(base_size = 16)+
scale_y_continuous(limits = c(0,90))+
scale_x_continuous(limits=c(1860, 1930))
#Graph 2: Antonovsky data
ggplot(data = people_dutch_grouped)+
geom_line( aes(x = birthYear, y = average_life, color ='DBpedia Lifespans'),size=0.5) +
geom_smooth(data= people_dutch_grouped, aes(x = birthYear, y = average_life, color ='DBpedia Lifespans'),size = 1.5, method='lm') +
#and a line graph for the older times data
geom_line(data=early_life_expect_combined, aes(x= `Middle of Period of Birth`, y= Total, color='Antonovsky Life expectancy'))+
geom_point(data=early_life_expect_combined, aes(x= `Middle of Period of Birth`, y= Total, color='Antonovsky Life expectancy'))+
labs(title = "Average lifespans and life expectancy in the Netherlands",
subtitle= 'Lifespan of well-known Dutch people compared to
the average life expectancy supplied by Antonovsky',
x="Birth Year",
y="Average Lifetime",
color = 'Sources') +
theme_light(base_size = 16)+
scale_y_continuous(limits = c(0,90))+
scale_x_continuous(limits=c(1330, 1930)) |
f02b9ac1c9e8208d1aece53d718080a522f9796c | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/spatstat/examples/dilated.areas.Rd.R | 83a14bf9099961367fff8c62421863020ff8d771 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 246 | r | dilated.areas.Rd.R | library(spatstat)
### Name: dilated.areas
### Title: Areas of Morphological Dilations
### Aliases: dilated.areas
### Keywords: spatial math
### ** Examples
X <- runifpoint(10)
a <- dilated.areas(X, c(0.1,0.2), W=square(1), exact=TRUE)
|
0a3d3beee08a6bc87768f8b6270dd4211bf0684e | 76b91c7efb58fc8f1aecbb55f96cfe90793e70d3 | /2018.data.for.analyses.R/2018 behavioral analyses/Scripts/2019.11.13.behavior.figures.MS.R | ebabe814981785d21db66f8f796b007307c97180 | [] | no_license | gcjarvis/Goby_reproduction_by_risk | ddcc0c5323a4242b5c7cff41fa7a9220a49e53b9 | 2e6a824d73af6ade268abee6e26f16315c4c3a90 | refs/heads/master | 2021-11-23T19:15:12.847560 | 2021-11-04T04:14:47 | 2021-11-04T04:14:47 | 168,737,188 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 13,135 | r | 2019.11.13.behavior.figures.MS.R | # Description: figures for MS, b&w
# Author: George C Jarvis
# Date: Wed Nov 13 21:19:41 2019
# Notes: will lay this out as a 5-paneled figure in PPT, aligning the x axes
# but having different scales on the y axes
# --------------
#clear workspace
rm(list=ls())
#library
library(ggplot2)
#loading data
behave<-read.csv("Data/2019.10.25.behavior.includes.recollections.csv")
behave<-na.omit(behave)
behave$avg.inhab<-(ceiling((behave$Recollection+20)/2))
behave$Treatment<-ordered(behave$Treatment,levels=c("Low","Medium","High"))
#plotting
#have to set up different df for each behavior, shouldn't be too hard
#exposure time####
exp<-with(behave, aggregate((proportion.exposed), list(Treatment=Treatment), mean))
exp$se<-with(behave, aggregate((proportion.exposed), list(Treatment=Treatment),
function(x) sd(x)/sqrt(length(x))))[,2]
png("Output/2019.2.1.exposure.9.5x5.5.300dpi.png", width = 6.5, height = 5.5, units = 'in', res = 300)
reco.plot<- ggplot(exp, aes(x=Treatment, y=x, fill=Treatment)) +
geom_bar(stat="identity", colour= "black", width = 0.5, position="dodge")+
scale_x_discrete(limits=c("Low","Medium","High"))+
theme_classic() +
labs(x="Risk Treatment", y="Proportion of Time Exposed") +
theme(legend.position="none") +
scale_fill_manual(values=c("grey", "grey", "grey")) +
theme(axis.text.x=element_text(size=20, colour="black"),
axis.text.y=element_text(size=20, colour="black"),
axis.title=element_text(size=20))+
theme(axis.title.y = element_text(margin = margin(t = 0, r = 20, b = 0, l = 0)),
axis.title.x = element_text(margin = margin(t = 12, r = 0, b = 0, l = 0)),
axis.text.x = element_text(margin = margin(t = 10, r = 0, b = 0, l = 0))) +
theme(legend.text=element_text(size=18)) +
theme(legend.title =element_text(size=20))+
theme(axis.ticks.x = element_blank()) + scale_y_continuous(expand = c(0,0),limits = c(0,0.807))
reco.plot + geom_linerange(aes(ymin=x-se, ymax=x+se), size=0.5,
position=position_dodge(.85)) + theme(text = element_text(family="Arial"))
#not sure why it's off the x axis now...has something to do with breaks
dev.off()
#this was the only behavior with a sig. trt. effect
#so it's likely the only one I will report percentages for in text
#total distance moved####
td<-with(behave, aggregate((total.dist.moved), list(Treatment=Treatment), mean))
td$se<-with(behave, aggregate((total.dist.moved), list(Treatment=Treatment),
function(x) sd(x)/sqrt(length(x))))[,2]
png("Output/2019.11.13.td.moved.9.5x5.5.300dpi.png", width = 6.5, height = 5.5, units = 'in', res = 300)
reco.plot<- ggplot(td, aes(x=Treatment, y=x, fill=Treatment)) +
geom_bar(stat="identity", colour= "black", width = 0.5, position="dodge")+
scale_x_discrete(limits=c("Low","Medium","High"))+
theme_classic() +
labs(x="Risk Treatment", y="Linear Distance Traveled (mm)") +
theme(legend.position="none") +
scale_fill_manual(values=c("grey", "grey", "grey")) +
theme(axis.text.x=element_text(size=20, colour="black"),
axis.text.y=element_text(size=20, colour="black"),
axis.title=element_text(size=20))+
theme(axis.title.y = element_text(margin = margin(t = 0, r = 20, b = 0, l = 0)),
axis.title.x = element_text(margin = margin(t = 12, r = 0, b = 0, l = 0)),
axis.text.x = element_text(margin = margin(t = 10, r = 0, b = 0, l = 0))) +
theme(legend.text=element_text(size=18)) +
theme(legend.title =element_text(size=20))+
theme(axis.ticks.x = element_blank()) + scale_y_continuous(expand = c(0,0),limits = c(0,205))
reco.plot + geom_linerange(aes(ymin=x-se, ymax=x+se), size=0.5,
position=position_dodge(.85)) + theme(text = element_text(family="Arial"))
#not sure why it's off the x axis now...has something to do with breaks
dev.off()
#foraging (bites per minute)####
fr<-with(behave, aggregate((bites.min), list(Treatment=Treatment), mean))
fr$se<-with(behave, aggregate((bites.min), list(Treatment=Treatment),
function(x) sd(x)/sqrt(length(x))))[,2]
png("Output/2019.11.14.Bite.rate.no.parentheses.9.5x5.5.300dpi.png", width = 6.5, height = 5.5, units = 'in', res = 300)
reco.plot<- ggplot(fr, aes(x=Treatment, y=x, fill=Treatment)) +
geom_bar(stat="identity", colour= "black", width = 0.5, position="dodge")+
scale_x_discrete(limits=c("Low","Medium","High"))+
theme_classic() +
labs(x="Risk Treatment", y = "Bites per Minute")+
theme(legend.position="none") +
scale_fill_manual(values=c("grey", "grey", "grey")) +
theme(axis.text.x=element_text(size=20, colour="black"),
axis.text.y=element_text(size=20, colour="black"),
axis.title=element_text(size=20))+
theme(axis.title.y = element_text(margin = margin(t = 0, r = 20, b = 0, l = 0)),
axis.title.x = element_text(margin = margin(t = 12, r = 0, b = 0, l = 0)),
axis.text.x = element_text(margin = margin(t = 10, r = 0, b = 0, l = 0))) +
theme(legend.text=element_text(size=18)) +
theme(legend.title =element_text(size=20))+
theme(axis.ticks.x = element_blank()) + scale_y_continuous(expand = c(0,0),limits = c(0,1.1))
reco.plot + geom_linerange(aes(ymin=x-se, ymax=x+se), size=0.5,
position=position_dodge(.85)) + theme(text = element_text(family="Arial"))
dev.off()
#movements per minute with rate in parentheses
png("Output/2019.11.14.bite.rate.with.parentheses.9.5x5.5.300dpi.png", width = 6.5, height = 5.5, units = 'in', res = 300)
reco.plot<- ggplot(fr, aes(x=Treatment, y=x, fill=Treatment)) +
geom_bar(stat="identity", colour= "black", width = 0.5, position="dodge")+
scale_x_discrete(limits=c("Low","Medium","High"))+
theme_classic() +
labs(x="Risk Treatment",y=(expression(atop("Foraging",
paste((bites~min^-1))))))+
theme(legend.position="none") +
scale_fill_manual(values=c("grey", "grey", "grey")) +
theme(axis.text.x=element_text(size=20, colour="black"),
axis.text.y=element_text(size=20, colour="black"),
axis.title=element_text(size=20))+
theme(axis.title.y = element_text(margin = margin(t = 0, r = 20, b = 0, l = 0)),
axis.title.x = element_text(margin = margin(t = 12, r = 0, b = 0, l = 0)),
axis.text.x = element_text(margin = margin(t = 10, r = 0, b = 0, l = 0))) +
theme(legend.text=element_text(size=18)) +
theme(legend.title =element_text(size=20))+
theme(axis.ticks.x = element_blank()) + scale_y_continuous(expand = c(0,0),limits = c(0,1.01))
reco.plot + geom_linerange(aes(ymin=x-se, ymax=x+se), size=0.5,
position=position_dodge(.85)) + theme(text = element_text(family="Arial"))
dev.off()
#courtship rate (displays per minute)####
cr<-with(behave, aggregate((courtship.min), list(Treatment=Treatment), mean))
cr$se<-with(behave, aggregate((courtship.min), list(Treatment=Treatment),
function(x) sd(x)/sqrt(length(x))))[,2]
png("Output/2019.11.14.courtship.rate.no.parentheses.9.5x5.5.300dpi.png", width = 6.5, height = 5.5, units = 'in', res = 300)
reco.plot<- ggplot(cr, aes(x=Treatment, y=x, fill=Treatment)) +
geom_bar(stat="identity", colour= "black", width = 0.5, position="dodge")+
scale_x_discrete(limits=c("Low","Medium","High"))+
theme_classic() +
labs(x="Risk Treatment", y = "Courtship Displays per Minute")+
theme(legend.position="none") +
scale_fill_manual(values=c("grey", "grey", "grey")) +
theme(axis.text.x=element_text(size=20, colour="black"),
axis.text.y=element_text(size=20, colour="black"),
axis.title=element_text(size=20))+
theme(axis.title.y = element_text(margin = margin(t = 0, r = 20, b = 0, l = 0)),
axis.title.x = element_text(margin = margin(t = 12, r = 0, b = 0, l = 0)),
axis.text.x = element_text(margin = margin(t = 10, r = 0, b = 0, l = 0))) +
theme(legend.text=element_text(size=18)) +
theme(legend.title =element_text(size=20))+
theme(axis.ticks.x = element_blank()) + scale_y_continuous(expand = c(0,0),limits = c(0,1.5))
reco.plot + geom_linerange(aes(ymin=x-se, ymax=x+se), size=0.5,
position=position_dodge(.85)) + theme(text = element_text(family="Arial"))
dev.off()
#movements per minute with rate in parentheses
# NOTE: two png() calls were stacked here, which opens a second graphics
# device that the single dev.off() below never closes; the stale call is
# commented out and the filename matching the plot labels is kept
# png("Output/2019.11.14.courtship.rate.with.parentheses.9.5x5.5.300dpi.png", width = 6.5, height = 5.5, units = 'in', res = 300)
png("Output/2019.2.1.interactions.with.parentheses.9.5x5.5.300dpi.png", width = 6.5, height = 5.5, units = 'in', res = 300)
reco.plot<- ggplot(cr, aes(x=Treatment, y=x, fill=Treatment)) +
geom_bar(stat="identity", colour= "black", width = 0.5, position="dodge")+
scale_x_discrete(limits=c("Low","Medium","High"))+
theme_classic() +
labs(x="Risk Treatment",y=(expression(atop("Interactions with Conspecifics",
paste((displays~min^-1))))))+
theme(legend.position="none") +
scale_fill_manual(values=c("grey", "grey", "grey")) +
theme(axis.text.x=element_text(size=20, colour="black"),
axis.text.y=element_text(size=20, colour="black"),
axis.title=element_text(size=20))+
theme(axis.title.y = element_text(margin = margin(t = 0, r = 20, b = 0, l = 0)),
axis.title.x = element_text(margin = margin(t = 12, r = 0, b = 0, l = 0)),
axis.text.x = element_text(margin = margin(t = 10, r = 0, b = 0, l = 0))) +
theme(legend.text=element_text(size=18)) +
theme(legend.title =element_text(size=20))+
theme(axis.ticks.x = element_blank()) + scale_y_continuous(expand = c(0,0),limits = c(0,0.151))
reco.plot + geom_linerange(aes(ymin=x-se, ymax=x+se), size=0.5,
position=position_dodge(.85)) + theme(text = element_text(family="Arial"))
dev.off()
#movement rate (movements per minute), includes code for superscript####
mm<-with(behave, aggregate((movements.min), list(Treatment=Treatment), mean))
mm$se<-with(behave, aggregate((movements.min), list(Treatment=Treatment),
function(x) sd(x)/sqrt(length(x))))[,2]
png("Output/2019.11.13.movement.swims.no.parentheses.9.5x5.5.300dpi.png", width = 6.5, height = 5.5, units = 'in', res = 300)
reco.plot<- ggplot(mm, aes(x=Treatment, y=x, fill=Treatment)) +
geom_bar(stat="identity", colour= "black", width = 0.5, position="dodge")+
scale_x_discrete(limits=c("Low","Medium","High"))+
theme_classic() +
labs(x="Risk Treatment", y ="Movements per Minute")+
theme(legend.position="none") +
scale_fill_manual(values=c("grey", "grey", "grey")) +
theme(axis.text.x=element_text(size=20, colour="black"),
axis.text.y=element_text(size=20, colour="black"),
axis.title=element_text(size=20))+
theme(axis.title.y = element_text(margin = margin(t = 0, r = 20, b = 0, l = 0)),
axis.title.x = element_text(margin = margin(t = 12, r = 0, b = 0, l = 0)),
axis.text.x = element_text(margin = margin(t = 10, r = 0, b = 0, l = 0))) +
theme(legend.text=element_text(size=18)) +
theme(legend.title =element_text(size=20))+
theme(axis.ticks.x = element_blank()) + scale_y_continuous(expand = c(0,0),limits = c(0,1.51),
labels = scales::number_format(accuracy = 0.01)) #changed to 2 decimal places for movement rate
reco.plot + geom_linerange(aes(ymin=x-se, ymax=x+se), size=0.5,
position=position_dodge(.85)) + theme(text = element_text(family="Arial"))
dev.off()
#movements per minute with rate in parentheses
png("Output/2019.11.14.movement.swims.with.rate.parentheses.9.5x5.5.300dpi.png", width = 6.5, height = 5.5, units = 'in', res = 300)
reco.plot<- ggplot(mm, aes(x=Treatment, y=x, fill=Treatment)) +
geom_bar(stat="identity", colour= "black", width = 0.5, position="dodge")+
scale_x_discrete(limits=c("Low","Medium","High"))+
theme_classic() +
labs(x="Risk Treatment",y=(expression(atop("Movements min"^-1))))+
# paste((movements~min^-1))))))+
theme(legend.position="none") +
scale_fill_manual(values=c("grey", "grey", "grey")) +
theme(axis.text.x=element_text(size=20, colour="black"),
axis.text.y=element_text(size=20, colour="black"),
axis.title=element_text(size=20))+
theme(axis.title.y = element_text(margin = margin(t = 0, r = 20, b = 0, l = 0)),
axis.title.x = element_text(margin = margin(t = 12, r = 0, b = 0, l = 0)),
axis.text.x = element_text(margin = margin(t = 10, r = 0, b = 0, l = 0))) +
theme(legend.text=element_text(size=18)) +
theme(legend.title =element_text(size=20))+
theme(axis.ticks.x = element_blank()) + scale_y_continuous(expand = c(0,0),limits = c(0,1.51),
labels = scales::number_format(accuracy = 0.01)) #changed to 2 decimal places for movement rate)
reco.plot + geom_linerange(aes(ymin=x-se, ymax=x+se), size=0.5,
position=position_dodge(.85)) + theme(text = element_text(family="Arial"))
dev.off()
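The five bar charts above repeat nearly identical ggplot boilerplate. A sketch of a helper that factors it out is below — the function name and output path are hypothetical; `summ` stands for a summary data frame with columns `Treatment`, `x` (mean) and `se`, as produced by the `aggregate()` calls in this script.

```r
# Sketch of a helper to factor out the repeated bar-chart boilerplate above;
# only the summary data, y-axis label, y limit and output file vary per plot.
plot_behaviour <- function(summ, ylab, ymax, outfile) {
  p <- ggplot(summ, aes(x = Treatment, y = x, fill = Treatment)) +
    geom_bar(stat = "identity", colour = "black", width = 0.5) +
    geom_linerange(aes(ymin = x - se, ymax = x + se), size = 0.5) +
    scale_x_discrete(limits = c("Low", "Medium", "High")) +
    scale_fill_manual(values = rep("grey", 3)) +
    labs(x = "Risk Treatment", y = ylab) +
    theme_classic(base_size = 20) +
    theme(legend.position = "none", axis.ticks.x = element_blank()) +
    scale_y_continuous(expand = c(0, 0), limits = c(0, ymax))
  ggsave(outfile, p, width = 6.5, height = 5.5, dpi = 300)
  p
}
# e.g. plot_behaviour(exp, "Proportion of Time Exposed", 0.807,
#                     "Output/exposure.png")
```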
|
ee768ed59d16a42e4445dbb1e4bc1c7bc17d43e9 | d997593085e0a5a90b8c24347c1cdb34364feb08 | /Packages_HMK3/fish.summary/tests/testthat/test-cod_growth-values.R | c6b24c26d3b2db4288fe44c4ba9136db581f7491 | [] | no_license | nparker2016/Computing_Env_Sci | 859551fd00cf5d15f488b71d6609a0b760c30a51 | bc7239457fc82f2ac5a1189f123509d4c52269bb | refs/heads/master | 2020-03-15T01:21:54.122056 | 2018-06-07T04:07:26 | 2018-06-07T04:07:26 | 131,892,061 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 279 | r | test-cod_growth-values.R | context('Output validity')
test_that('cod_growth generates reasonable values',{
a <- -0.497
b <- .1656
c <- .08588
d <- -0.004266
expect_that(cod_growth(a,b,c,d)$Growth_Rate[10]<6, is_true())
expect_that(max(cod_growth(a,b,c,d)$Temp_C)<30, is_true())
})
|
5b92fdd3e50981c1c46608aa87e017adfefe92a7 | b4fb002b30e76436c5f4f1b5e0d6017bc5073067 | /R/lotame_hierarchy_nodes.R | ab675371ea15ecbe0f15e7faad8c25070739b8e6 | [] | no_license | jackpalmer/LotameR | 2f71aef459c69b4722adc41226fa8588c2732a42 | 222f3762e12a88d0910f1d57fe718dca08ce69a7 | refs/heads/master | 2021-01-11T22:30:04.744220 | 2017-12-18T21:28:13 | 2017-12-18T21:28:13 | 78,973,996 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 1,148 | r | lotame_hierarchy_nodes.R | #'@title Lotame Hierachy Search
#'@description
#'This function returns a data frame of hierarchy nodes matching the search term.
#'
#'@param search_query search term to match against hierarchy nodes
#'@param client_id Lotame client id
#'@param hierarchy_category hierarchy category to search (default "STANDARD")
#'@param date_range date range for the report (default "LAST_30_DAYS")
#'@param universe_id Lotame universe id (default 1)
#'@param page_size maximum number of results to return (default 25000)
#'
#'@export
lotame_hierarchies_nodes <- function(search_query,
client_id,
hierarchy_category = "STANDARD",
date_range = "LAST_30_DAYS",
universe_id = 1,
page_size = 25000){
options(scipen=999)
path <- paste0("https://api.lotame.com/2/hierarchies/nodes?search=",
search_query,
"&client_id=",
client_id,
"&hierarchy_category=",
hierarchy_category,
"&date_range=",
date_range,
"&universe_id=",
universe_id,
"&page_size=",
page_size)
service_ticket <- lotame_service_ticket(path)
req <- jsonlite::fromJSON(paste0(path,"&ticket=",service_ticket),flatten = T)
output <- as.data.frame(req$groupings$hierarchies)
  result <- dplyr::bind_rows(output$nodes)
  result
} |
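A hypothetical usage sketch — it assumes valid Lotame API credentials and the package's `lotame_service_ticket()` helper (defined elsewhere in LotameR); the search term and client id below are placeholders, so the call is left commented out.

```r
# Hypothetical usage (placeholder values; requires real Lotame credentials):
# nodes <- lotame_hierarchies_nodes(search_query = "automotive",
#                                   client_id    = 1234)
# head(nodes)
```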
4275ed5956b3317c36721b87c2df2e7dd16f08af | b0b04900b0d2cded0f9d0eaddd832a94b62f92c8 | /scripts/36_cp_municipio.R | a6562e93bb669f1af84d99a1e5a73b0ab7750818 | [
"CC-BY-4.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | lsy617004926/BachasGertlerHigginsSeira_JF_replication | 85976f7202683a2ba10ff0fc623a3615eb483248 | bf71158ff85e148ac8e6d4b6b44258d2f7625026 | refs/heads/master | 2023-03-18T21:04:35.569326 | 2021-03-14T20:07:29 | 2021-03-14T20:07:29 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,222 | r | 36_cp_municipio.R | # Get data set with postal code and corresponding municipality
# PACKAGES
library(tidyverse)
library(data.table)
library(dtplyr) # Requires old version 0.0.3; code breaks with 1.0.0
# Install with remotes::install_version("dtplyr", version = "0.0.3", repos = "http://cran.us.r-project.org")
library(magrittr)
library(here)
# FUNCTIONS
source(here::here("scripts", "myfunctions.R"))
# DATA
cp <- read_delim(here::here("data", "SEPOMEX", "CPdescarga.txt"),
delim = "|", skip = 1)
cp %>% nrow() # 144,958
cp <- cp %>%
mutate(municipio = str_c(c_estado, c_mnpio))
cp %>% distinct(municipio) %>% nrow() # 2455
n_cp <- cp %>% distinct(d_codigo) %>% nrow()
n_cp # 32,120
# Take a look
cp %>% arrange(municipio) %>% select(d_codigo, municipio, everything())
cp_mun <- cp %>% distinct(d_codigo, municipio) %>% as.data.table()
cp_mun[, count := .N, by = "d_codigo"][count>1] # one zip code spans two municipios
# For now drop it since only one
cp_mun <- cp_mun %>% arrange(d_codigo, municipio) %>% distinct(d_codigo, .keep_all = TRUE)
stopifnot(n_cp==nrow(cp_mun))
cp_mun %<>% rename(cp = d_codigo) # for merge
# SAVE
cp_mun %>% saveRDS(here::here("proc", "cp_mun.rds"))
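A sketch of the intended downstream use of the saved lookup (object names are hypothetical): attach the municipality code to any table keyed by postal code.

```r
# Hypothetical downstream join (df is any table with a `cp` column):
# cp_mun <- readRDS(here::here("proc", "cp_mun.rds"))
# df_with_mun <- dplyr::left_join(df, cp_mun, by = "cp")
```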
|
006ee508a1b0bbc58e807c822846192a33d31733 | 29585dff702209dd446c0ab52ceea046c58e384e | /Weighted.Desc.Stat/R/w.cv.R | 469f3a3d2a9c26ae59e305af210a5a172e05371b | [] | no_license | ingted/R-Examples | 825440ce468ce608c4d73e2af4c0a0213b81c0fe | d0917dbaf698cb8bc0789db0c3ab07453016eab9 | refs/heads/master | 2020-04-14T12:29:22.336088 | 2016-07-21T14:01:14 | 2016-07-21T14:01:14 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 53 | r | w.cv.R | w.cv <-
function(x, mu) w.sd(x,mu) / w.mean(x,mu)
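A minimal usage sketch. `w.mean` and `w.sd` are defined elsewhere in Weighted.Desc.Stat; the stand-in definitions below only illustrate the assumed behaviour (weighted mean and SD with non-negative weight vector `mu`) and may differ from the package's exact formulas, e.g. in the variance denominator.

```r
# Illustrative stand-ins for the package helpers (assumed behaviour only):
w.mean <- function(x, mu) sum(mu * x) / sum(mu)
w.sd   <- function(x, mu) sqrt(sum(mu * (x - w.mean(x, mu))^2) / sum(mu))
w.cv   <- function(x, mu) w.sd(x, mu) / w.mean(x, mu)

x  <- c(10, 12, 15, 20)   # observations
mu <- c(1, 2, 2, 1)       # weights
w.cv(x, mu)               # weighted coefficient of variation (unitless)
```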
|
b915274adb3d205dc6e9f032323361f6eadd6ba6 | 599232817a2b3f186c3a8c1637dbc3f344c2be90 | /R/citygeocoder-package.r | 75ac9417402df34b46bcd8082ca39a217f9ab21d | [
"MIT"
] | permissive | cjtexas/citygeocoder | 01dc42d919228cc291ebbca28274b9785e22136d | 32b548a84def36f2ad6bad196a786f9e9dcc752b | refs/heads/master | 2021-07-06T23:19:11.132151 | 2021-01-14T17:15:36 | 2021-01-14T17:15:36 | 222,630,201 | 6 | 0 | null | null | null | null | UTF-8 | R | false | false | 203 | r | citygeocoder-package.r | #' Standard and reverse geocoding for US cities/populated places.
#' @name citygeocoder
#' @docType package
#' @author Caleb Jenkins (@@cjtexas)
#' @import data.table
# @exportPattern ^[[:alpha:]]+
NULL
|
080ffc65b61a35ca5e483cc86778c569f580ba38 | 150d6d6b2d533356b6305fda45f5881d6d87923e | /Project1/Code/DataClean.R | 7de95ec4b374c3756443ba18ba6ec933b9daa7d2 | [] | no_license | zhwr7125/BIOS6624-zhwr7125 | 87eb947a83981f2b691de8c996c0881390a954d7 | 6052dbd00ffe78ee9781400dc9a26e845400d104 | refs/heads/master | 2020-09-27T01:37:55.309861 | 2018-12-10T22:11:37 | 2018-12-10T22:11:37 | 226,391,947 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,493 | r | DataClean.R | library(Hmisc)
library(knitr)
library(devtools)
library(Table1)
#To get Table1, please visit
#https://rdrr.io/github/palmercl/Table1/
#install_github("palmercl/Table1")
#But I made some revisions to Table1, therefore Macro.R is used to mask it.
setwd("C:/repository/bios6624-zhwr7125/Project1/Code")
source('Macro.R')
#Reading data
data<-read.csv("C:/repository/bios6624-zhwr7125/Project1/Data/hiv_6624_final.csv",header=T,na.strings = "NA")
#table(data$EDUCBAS[data$years==0])
#Recode variables; some levels have low counts
data$RACE_ori<-data$RACE
data$RACE[data$RACE_ori==1|data$RACE_ori==2]=1#white
data$RACE[data$RACE_ori==3|data$RACE_ori==4|data$RACE_ori==7|data$RACE_ori==8]=0#nON-WHITE
data$SMOKE_ori<-data$SMOKE
data$SMOKE[data$SMOKE_ori==1|data$SMOKE_ori==2]=0#No current smoke
data$SMOKE[data$SMOKE_ori==3]=1#current smoke
data$ADH_ori<-data$ADH
data$ADH[data$ADH_ori==1|data$ADH_ori==2]=1#Adh
data$ADH[data$ADH_ori==3|data$ADH_ori==4]=0#Not adh
data$EDUCBAS_ori<-data$EDUCBAS
data$EDUCBAS[data$EDUCBAS_ori==1|data$EDUCBAS_ori==2|data$EDUCBAS_ori==3|data$EDUCBAS_ori==4]=1#LESS COLLEGE
data$EDUCBAS[data$EDUCBAS_ori==5]=2#COLLEGE
data$EDUCBAS[data$EDUCBAS_ori==6|data$EDUCBAS_ori==7]=3#GREATER THAN COLLEGE
#factorize;
data$ART<-as.factor(data$ART)
data$SMOKE<-as.factor(data$SMOKE)
data$RACE<-as.factor(data$RACE)
data$hard_drugs<-as.factor(data$hard_drugs)
data$ADH<-as.factor(data$ADH)
data$EDUCBAS<-as.factor(data$EDUCBAS)
#set lables;
label(data$newid)="Subject ID Number"
label(data$AGG_MENT)='SF36 MCS score'
label(data$AGG_PHYS)='SF36 PCS score'
label(data$years)="years since initiating ART"
label(data$hard_drugs)="Hard drug use since last visit"
label(data$LEU3N)="# of CD4 positive cells (helpers)"
label(data$ADH)="If Adherence to meds since last visit"
label(data$ART)="Take ART at visit"
label(data$SMOKE)="Smoke"
label(data$BMI)="BMI(kg/m2)"
label(data$VLOAD)="Standardized viral load (copies/ml)"
label(data$age)="Age, years"
label(data$RACE)="Race"
label(data$EDUCBAS)="Education"
#levels;
levels(data$ART)=c("No ART",'ART')
levels(data$SMOKE)=c('Non-Current smoker','Current smoker')
table(data$SMOKE_ori,data$SMOKE)
table(is.na(data$SMOKE_ori))
table(is.na(data$SMOKE))
levels(data$RACE)=c('Non-White','White')
table(data$RACE_ori,data$RACE)
table(is.na(data$RACE_ori))
table(is.na(data$RACE))
levels(data$ADH)=c("No",'Yes')
table(data$ADH_ori,data$ADH)
table(is.na(data$ADH_ori))
table(is.na(data$ADH))
levels(data$EDUCBAS)=c("less than college",'college','greater than college')
table(data$EDUCBAS_ori,data$EDUCBAS)
table(is.na(data$EDUCBAS_ori))
table(is.na(data$EDUCBAS))
#create a new subset with only year 0 and 2
data_easy<-subset(data[,c("newid","AGG_MENT","AGG_PHYS","LEU3N","VLOAD","ART","years","hard_drugs","age","BMI","ADH","RACE","SMOKE","EDUCBAS")],data$years==0|data$years==2)
#Check mising hard drug status at baseline
missing_hard_drugs_count<-dim(subset(data_easy,data_easy$years==0&is.na(data_easy$hard_drugs)==TRUE))[1]
missing_hard_drugs_rate<-dim(subset(data_easy,data_easy$years==0&is.na(data_easy$hard_drugs)==TRUE))[1]/dim(subset(data_easy,data_easy$years==0))[1]
#Create baseline data with hard drug status
data_base<-subset(data_easy,data_easy$years==0&is.na(data_easy$hard_drugs)==FALSE)
colnames(data_base)<-c("newid",paste(colnames(data_base)[2:length(colnames(data_base))],"_0",sep=""))
data_base$group[data_base$hard_drugs_0==1]=1
data_base$group[data_base$hard_drugs_0==0]=0
data_base$group<-as.factor(data_base$group)
levels(data_base$group)<-c("No hard drug use at baseline","Hard drug use at baseline")
table(data_base$group)
#factor years to compare as two groups
data_base$years_0<-as.factor(data_base$years_0)
label(data_base$group)="hard drug at baseline"
#Create year two data
data_three<-merge(data_base[,c("newid","group")],subset(data_easy,data_easy$years==2),by="newid")
colnames(data_three)<-c("newid","group",paste(colnames(data_three)[3:length(colnames(data_three))],"_2",sep=""))
table(data_three$group)
#factor years to compare as two groups
data_three$years_2<-as.factor(data_three$years_2)
label(data_three$hard_drugs_2)="Hard drug use since last visit"
#Make wide table for Baseline and Two years after
data_wide<-merge(data_base[,c("newid","AGG_MENT_0","AGG_PHYS_0","LEU3N_0","VLOAD_0","years_0","age_0","BMI_0","RACE_0","SMOKE_0","EDUCBAS_0","group")],data_three[,c("newid","AGG_MENT_2","AGG_PHYS_2","LEU3N_2","VLOAD_2","years_2","ADH_2")],by="newid")
label(data_wide$group)="hard drug at baseline"
|
48d09d1775c714c5c2c9a5db589441e04bb4cfb9 | 89d23903ad3daf4bb7258091dacba54d3a12105b | /stats/MLMC-stats/server.R | b1b104b555ae55a3cfb9254112195f6dc2821ed0 | [
"ISC"
] | permissive | gringer/MLMC | 204502bb17137199cdd30e225fda1216aa8e4c33 | 20d24031ddda9115d974a85c3f98eaa49e94fa14 | refs/heads/master | 2021-01-20T18:54:55.932785 | 2017-03-24T01:57:36 | 2017-03-24T01:57:36 | 64,516,376 | 2 | 1 | null | null | null | null | UTF-8 | R | false | false | 3,114 | r | server.R | ## MLMC statistics
## - Backend server -
library(shiny);
library(tidyr);
library(digest);
logOutput <- function(input, requestID){
if(!file.exists("../../logs")){
return();
}
## Don't destroy the App just because logging fails
tryCatch({
timeStr <- as.character(Sys.time());
if(!file.exists("../../logs/usageusagelog.csv")){
## add file header (append=TRUE for the rare case of race conditions)
cat("requestID,time,inputCategory,value\n",
file = "../../logs/usageusagelog.csv", append=TRUE);
}
# for(n in names(input)){
# if(is.character(input[[n]]) || (is.numeric(input[[n]]) && (length(input[[n]]) == 1))){
# cat(file = "../../logs/usageusagelog.csv",
# append=TRUE, sprintf("%s,%s,\"%s\",\"%s\"\n",
# requestID, timeStr, n,
# substring(paste(input[[n]], collapse=";"),1,100)));
# }
# }
cat(file = "../../logs/usageusagelog.csv",
append=TRUE, sprintf("%s,%s,\"%s\",\"%s\"\n",
requestID, timeStr, "Access",1));
}, error=function(cond){
cat("Error:\n");
#message(cond);
}, warning=function(cond){
cat("Warning:\n");
#message(cond);
});
}
# Define server logic required to draw a histogram
shinyServer(function(input, output) {
data.df <- read.csv("../../logs/accesslog.csv", stringsAsFactors=FALSE);
data.df$time <- as.POSIXct(data.df$time);
usage.df <- read.csv("../../logs/usageusagelog.csv", stringsAsFactors=FALSE);
usage.df$time <- as.POSIXct(usage.df$time);
data.wide.df <- spread(data = data.df, key = "inputCategory", value="value");
data.wide.df$hours <- round(data.wide.df$time, "hours");
data.wide.df$viewButton <- as.numeric(data.wide.df$viewButton);
output$timePlot <- renderPlot({
htable <- table(as.character(data.wide.df$hours));
hutable <- table(as.character(round(usage.df$time,"hours")));
plot(x=as.POSIXct(names(htable)), y=htable, xlab="Time",
ylab = "Number of visits", yaxt="n", type="b", pch=16, col="blue");
points(x=as.POSIXct(names(hutable)), y=hutable, type="b",
pch=17, col="darkgreen");
legend("topright",legend=c("MLMC App","Usage App"), pch=c(16,17),
col=c("blue", "darkgreen"), inset=0.05, bg="#FFFFFFC0");
axis(2);
});
output$catPlot <- renderPlot({
par(mar=c(5,10,1,1));
barplot(sort(table(data.wide.df$cat)), xlab="Graphs Viewed",
cex.names=0.75, horiz = TRUE, las=1);
});
output$councilPlot <- renderPlot({
par(mar=c(5,10,1,1));
barplot(sort(table(data.wide.df$council)), xlab="Graphs Viewed",
cex.names=0.75, horiz = TRUE, las=1);
})
output$dwellPlot <- renderPlot({
dwellCount <- table(tapply(data.wide.df$viewButton, data.wide.df$requestID,
max));
barplot(dwellCount, xlab = "Graphs viewed per visit",
ylab = "Number of visits");
})
## record the data request in a log file
requestID <- substr(digest(Sys.time()),1,8);
logOutput(input, requestID = requestID);
})
|
934095c8604feacde0f73ee86eae4a26cb42ba42 | c0b41f061f55ec6454ce72b4d803993a33f0c067 | /cs112 class-work 14.1.R | 5582db4c8f4d838685474667619b7eb8a360aff1 | [] | no_license | jccf12/r-minerva | b32e4a929abdcbf8001f70dc5a2ff1391ca97413 | 0936a28562448c9bd1b7fd06ac9a6f9a1f6c1130 | refs/heads/master | 2020-03-28T08:58:16.863587 | 2018-12-31T13:54:52 | 2018-12-31T13:54:52 | 148,003,673 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 213 | r | cs112 class-work 14.1.R | x <- runif(1000,-1,1)
y <- 5 + 3 * x + 2*(x >= 0) + rnorm(1000)
d <- x > 0
reg1 <- lm(y ~ x + d)
summary(reg1)
confint(reg1)
install.packages('rdrobust')
library(rdrobust); rdplot(y,x); summary(rdrobust(y,x))
|
4ad153d4e5c5f28685f3a4f0fe5d07964bab64de | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/unrtf/examples/unrtf.Rd.R | 604cdc94f4b691a2a20aa06f15d09b80d5720ef2 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 283 | r | unrtf.Rd.R | library(unrtf)
### Name: unrtf
### Title: Convert rtf Documents
### Aliases: unrtf
### ** Examples
library(unrtf)
text <- unrtf("https://jeroen.github.io/files/sample.rtf", format = "text")
html <- unrtf("https://jeroen.github.io/files/sample.rtf", format = "html")
cat(text)
|
5bd4bc02cfb6df1c31e49afbb03baed36da2ab6e | 2431870ca07be400b3c82dcc790c4397182c77c0 | /run_stats_plots.R | c0940a6e77da941c21a704872c5527f1d29b0b7f | [] | no_license | misken/ebird_oaklandtwp | 4cf35e5aa1132afa947338da742812683e58c1a3 | 23eed7cfcb7f976d8ab3d9bd4f6d92a9f9fc256f | refs/heads/master | 2020-09-26T01:08:19.932618 | 2019-12-09T02:39:40 | 2019-12-09T02:39:40 | 226,129,092 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 313 | r | run_stats_plots.R | otbird_top_species(obs, 2017, top = 30, plotfile = './images/top30_2017.png')
otbird_top_species(obs, 2016, top = 30, plotfile = './images/top30_2016.png')
otbird_top_species(obs, 2015, top = 30, plotfile = './images/top30_2015.png')
otbird_top_species(obs, 2019, top = 30, plotfile = './images/top30_2019.png')
|
13cdaabf659a172c0e123e8d21b0f50dd901ed0c | eb1b93a0408d14d34774cec71ac193c727bddadc | /misc/isglobal.R | ee26a68d129cfdb77d59a1390f24cd252c660d94 | [
"MIT"
] | permissive | databrew/covid19 | cdd3fc196d2da5e078e0bc832308dfa083774fc7 | 816258d686fac6468f5468b96568ca55ad82a724 | refs/heads/master | 2021-02-27T13:58:50.624076 | 2020-08-22T14:31:20 | 2020-08-22T14:31:20 | 245,609,799 | 17 | 4 | NOASSERTION | 2020-03-29T15:03:47 | 2020-03-07T10:11:14 | HTML | UTF-8 | R | false | false | 4,073 | r | isglobal.R | library(covid19)
library(dplyr)
library(ggplot2)
a = plot_day_zero(countries = c('Italy', 'Spain'),
day0 = 150,
cumulative = TRUE,
time_before = 0,
max_date = as.Date('2020-03-09'))
a
ggsave('~/Desktop/isglobal/a.png')
b = plot_day_zero(countries = c('Italy', 'Spain'),
day0 = 150,
cumulative = TRUE,
time_before = -10,
max_date = as.Date('2020-03-09'),
add_markers = T)
b
ggsave('~/Desktop/isglobal/b.png')
cc = plot_day_zero(countries = c('Italy', 'Spain'),
day0 = 150,
cumulative = TRUE,
time_before = -10,
add_markers = T)
cc
ggsave('~/Desktop/isglobal/c.png')
d = plot_day_zero(countries = c('Italy', 'Spain', 'France', 'Germany', 'Switzerland', 'Norway',
'US', 'UK'),
day0 = 150,
cumulative = TRUE,
time_before = -10,
add_markers = T)
d
ggsave('~/Desktop/isglobal/d.png')
e = plot_day_zero(countries = c('Italy', 'Spain', 'France', 'Germany',
'US', 'UK', 'Switzerland'),
day0 = 150,
cumulative = TRUE,
time_before = -10,
add_markers = T)
e
ggsave('~/Desktop/isglobal/e.png')
day0 = 150
time_before = -5
pd <- df %>%
filter(country %in% c('China',
'Italy',
'Spain',
'Germany',
'France',
'US',
'UK',
'Switzerland')) %>%
filter(!district %in% 'Hubei') %>%
mutate(district = ifelse(country == 'China', district, country)) %>%
group_by(date, district, country) %>%
summarise(confirmed_cases = sum(confirmed_cases)) %>%
mutate(day = date) %>%
mutate(country = ifelse(country == 'China', 'Provinces of China', country)) %>%
ungroup %>%
group_by(district) %>%
mutate(first_case = min(date[confirmed_cases >= day0])) %>%
ungroup %>%
mutate(days_since_first_case = date - first_case) %>%
filter(days_since_first_case >= time_before)
library(RColorBrewer)
cols <- colorRampPalette(brewer.pal(n = 8, 'Set1'))(length(unique(pd$country)))
cols[4] <- adjustcolor('grey', alpha.f = 0.6)
f = ggplot() +
theme_simple() +
geom_line(data = pd,
aes(x = days_since_first_case,
y = confirmed_cases,
color = country,
group = district),
lwd = 1) +
scale_y_log10() +
scale_color_manual(name = '',
values = cols) +
labs(x = 'Days since first case in country/province',
y = 'Cumulative cases (log scale)',
title = 'COVID-19: Western countries vs. (non-Hubei) Chinese provinces',
subtitle = '"DAY 0": First day with > 150 cases. Grey lines: Chinese provinces',
caption = 'Data from: https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data\nChart by Joe Brew, @joethebrew, www.databrew.cc') +
xlim(-5, 20) +
geom_hline(yintercept = day0, lty = 2, alpha = 0.7) +
geom_vline(xintercept = 0, lty = 2, alpha = 0.7) +
geom_point(data = tibble(days_since_first_case = 0,
confirmed_cases = day0),
aes(x = days_since_first_case,
y = confirmed_cases),
color = 'red',
pch = 1,
size = 20)
f
ggsave('~/Desktop/isglobal/f.png')
g = plot_day_zero(countries = c('Italy', 'Spain', 'France', 'Germany',
'US', 'UK', 'Switzerland', 'Qatar',
'Japan', 'Korea, South', 'Singapore'),
day0 = 150,
cumulative = TRUE,
time_before = -10,
add_markers = T)
g
ggsave('~/Desktop/isglobal/g.png')
pdf('~/Desktop/isglobal/all.pdf',
height = 33,
width = 24)
Rmisc::multiplot(a, b, cc, d, e, f, g,
cols = 2)
dev.off()
|
3dad4a7166c403749714b44f16cedb4cf581a938 | 43116d59e21907eadcbdac87f283c502eb7e9863 | /ansible-playbook/roles/setup_ansible_tower/README.rd | 96248a90d13e1ef8bc7d2f7974d9f74eda16f4d1 | [] | no_license | fossouo/ansible_docker | baa96b4721795404e341e84730297337c8985492 | 73e6b23a3776cb06734c53fda17ca405f2b20444 | refs/heads/master | 2021-01-13T09:59:09.971956 | 2017-05-26T16:04:27 | 2017-05-26T16:04:27 | 76,407,374 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,176 | rd | README.rd | ############### INSTALLATION OF ANSIBLE TOWER ###########
1. This role will install wget
2. it will download Ansible Tower (last release)
3. unarchive it in /etc/ansible-tower-setup-{{version}}
################## NEXT STEPS ##########
1. Edit the inventory file located in the folder /etc/ansible-tower-setup-{{version}}, replacing each empty value with a password of your choice
Note: Redis passwords must not contain spaces or any of the following characters: @, :, -, \, /, #
2. If you need an external database that also requires installation, use:
[primary]
node1@example.com ansible_connection=local
[secondary]
node2@example.com
[database]
database@example.com
[all:vars]
admin_password='password'
redis_password='password'
pg_host='database@example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='password'
3. Installation of Ansible Tower using an existing external database
[primary]
node1@example.com ansible_connection=local
[secondary]
node2@example.com
[database]
[all:vars]
admin_password='password'
redis_password='password'
pg_host='database@example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='password'
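As a rough illustration of the steps above (a sketch, not official documentation: the minimal inventory keys shown and the `setup.sh -i` flag are assumptions based on a typical Tower 3.x setup bundle), the edited inventory can be prepared and handed to the bundled installer like this:

```shell
# Sketch only: write a minimal single-node inventory, then run the bundled
# installer from the unpacked setup directory (as root).
cat > /tmp/inventory <<'EOF'
[primary]
node1@example.com ansible_connection=local

[secondary]

[database]

[all:vars]
admin_password='password'
redis_password='password'
pg_password='password'
EOF
grep -c "password" /tmp/inventory    # sanity check: the 3 password vars are set
# cd /etc/ansible-tower-setup-{{version}}
# sudo ./setup.sh -i /tmp/inventory  # '-i' flag assumed from typical setup.sh usage
```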
|
c715213aa3c8c96d22d0a1b8fd9021072482542e | 734139adf1cf7d01bba76488f16992913b8d9180 | /Data Wrangling Exercise 1.R | 359d9b9bda7f379c94db55c35c6a3b071450a2a3 | [] | no_license | mseeley3/Data_Wrangling_1.5 | 0d614bfed62b3f194f7d52f60c2afc50a2cd4ee2 | c57b36c25f5feba20f65daa806b5da9937e111b1 | refs/heads/master | 2021-01-20T18:52:23.417802 | 2016-07-26T03:10:37 | 2016-07-26T03:10:37 | 64,184,277 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,535 | r | Data Wrangling Exercise 1.R | #setup
getwd()
setwd("/Users/monicaseeley/Documents/R Files")
library(dplyr)
library(tidyr)
library(readr)
electronics <- read_csv("refine_original.csv")
View(electronics)
#spelling correction of "company" column
electronics$company[electronics$company == "akzo"] <- "AKZO7"
electronics$company[electronics$company == "ak zo"] <- "AKZO7"
electronics$company[electronics$company == "Akzo"] <- "AKZO7"
electronics$company[electronics$company == "AKZO"] <- "AKZO7"
electronics$company[electronics$company == "fillips"] <- "phillips"
electronics$company[electronics$company == "philips"] <- "phillips"
electronics$company[electronics$company == "phillipS"] <- "phillips"
electronics$company[electronics$company == "Phillips"] <- "phillips"
electronics$company[electronics$company == "phlips"] <- "phillips"
electronics$company[electronics$company == "Phillps"] <- "phillips"
electronics$company[electronics$company == "Phllps"] <- "phillips"
electronics$company[electronics$company == "Unilever"] <- "unilever"
electronics$company[electronics$company == "unilver"] <- "unilever"
electronics$company[electronics$company == "Van Houten"] <- "van houten"
electronics$company[electronics$company == "van Houten"] <- "van houten"
electronics$company[electronics$company == "akzO"] <- "AKZO7"
electronics$company[electronics$company == "akz0"] <- "AKZO7"
electronics$company[electronics$company == "phllips"] <- "phillips"
View(electronics)
electronics$company[electronics$company == "phillps"] <- "phillips"
#separate "Product code / number" column into "product_code" and "product_number"
electronics <- separate(electronics, "Product code / number", c("product_code", "product_number"), sep = "-")
#create "product" column
electronics <- electronics %>% mutate(products = product_code)
#recode "products" column to "smartphone, laptop, TV, tablet"
electronics$products[electronics$products == "p"] <- "smartphone"
electronics$products[electronics$products == "x"] <- "laptop"
electronics$products[electronics$products == "v"] <- "TV"
electronics$products[electronics$products == "q"] <- "tablet"
#create "full_address" from "address, city, country"
electronics <- unite_(electronics, "full_address", c("address", "city", "country"), sep = ",", remove = FALSE)
#create dummy variables, binary columns for "company_phillips, company_akzo, company_van_houten, company_unilever"
electronics <- electronics %>% mutate(company_phillips = (ifelse(electronics$company == "phillips", 1, 0)))
electronics <- electronics %>% mutate(company_akzo = (ifelse(electronics$company == "AKZO7", 1, 0)))
electronics <- electronics %>% mutate(company_van_houten = (ifelse(electronics$company == "van houten", 1, 0)))
electronics <- electronics %>% mutate(company_unilever = (ifelse(electronics$company == "unilever", 1, 0)))
#create dummy variables, binary columns for "product_smartphone, product_TV, product_laptop, product_tablet"
electronics <- electronics %>% mutate(product_smartphone = (ifelse(electronics$products == "smartphone", 1, 0)))
electronics <- electronics %>% mutate(product_TV = (ifelse(electronics$products == "TV", 1, 0)))
electronics <- electronics %>% mutate(product_laptop = (ifelse(electronics$products == "laptop", 1, 0)))
electronics <- electronics %>% mutate(product_tablet = (ifelse(electronics$products == "tablet", 1, 0)))
#turn "electronics" dataframe into "refine.clean.csv"
write.csv(x = electronics, file = "refine.clean.csv", row.names = FALSE)
|
c56d1812b429bd3298e71ad0519b191c8decd98f | 7daf72d1abe4b13d1e26dc46abddfebcfc42d9e8 | /man/first.Rd | 139f57604af031fc743ff533b5a11160a2ce58fa | [
"MIT"
] | permissive | farcego/rbl | 6c39a7f2e63564c75860aa6a7887b2b49ffb73fb | b1cfa946b978dae09bf4d4b79267c4269e067627 | refs/heads/master | 2020-03-21T15:25:49.368438 | 2017-06-15T09:22:11 | 2017-06-15T09:22:11 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 363 | rd | first.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils_pkgtools.r
\name{first}
\alias{first}
\alias{last}
\title{Return first or last element of a list or vector}
\usage{
first(x)
last(x)
}
\arguments{
\item{x}{a list or vector}
}
\description{
Return first or last element of a list or vector
}
\examples{
first(1:10)
last(1:10)
}
|
5ce1971d61fd915d2419aba243c39168961d0319 | 5779497cbc0d0a735d9e67989f2342cacf811e13 | /magda/magda.R | b91b25edd7ae18c847589f421386e746affaa961 | [] | no_license | davidmokos/cuni-biostatistics-2 | a90cff18b3cd42d919c37e95046f500c2e050015 | 28bdc0343268a2c31e70cb58acfae85f3af5e04a | refs/heads/master | 2020-12-18T10:38:36.867205 | 2020-01-21T14:32:26 | 2020-01-21T14:32:26 | 235,350,201 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,632 | r | magda.R | #jednoducha analyza hlavnich komponent (PCA) na prikladu biometrickych dat o Spergula morrisonii
magda<-read.table("magda.txt",header=T)
head(magda)
mstand = decostand(magda,method="stand")
# note: decostand works on both matrix and data.frame objects
# importantly, the objects must not contain missing values - such rows must be excluded before the analysis (see the section at the end of this script)
# let's take a look at what we got
mstand
summary(mstand)
towork = mstand[,5:13]
#*****************************************************
# the PCA computation itself
rda(towork) # PCA is computed with the rda function
rdout=rda(towork) # identical, but the results are stored in the object rdout
summary(rdout)
# proportion of variability explained by the individual axes
eigenvals(rdout)
# normalized proportion of explained variability (i.e., with the sum of all eigenvalues equal to one)
eigenvals(rdout) / sum(eigenvals(rdout))
# graphical display of the proportion of explained variability
barplot(as.numeric(eigenvals(rdout) / sum(eigenvals(rdout))),names.arg=paste("Axis",1:9))
#*****************************************************
# display of the ordination result itself
plot(rdout)
biplot(rdout,scaling=-1)
# generic plot
xval = scores(rdout)$species[,1]
yval = scores(rdout)$species[,2]
xl = c(min(scores(rdout)$sites[,1],scores(rdout)$species[,1]),max(scores(rdout)$sites[,1],scores(rdout)$species[,1])+0.2)
yl = c(min(scores(rdout)$sites[,2],scores(rdout)$species[,2]),max(scores(rdout)$sites[,2],scores(rdout)$species[,2])+0.2)
plot(xval,yval,xlim = xl,ylim=yl,type="p",col="white",
xlab=paste("Axis 1, Explvar = ",formatC(summary(rdout)$cont$imp[2,1],format="f",digits=3)),
ylab=paste("Axis 2, Explvar = ",formatC(summary(rdout)$cont$imp[2,2],format="f",digits=3)))
arrows(0,0,xval,yval,lwd=2,length=0.1)
lines(c(xl[1]-0.5,xl[2]+0.5),c(0,0),lty=2)
lines(c(0,0),c(yl[1]-0.5,yl[2]+0.5),lty=2)
text(xval,yval,names(towork),pos=ifelse(scores(rdout)$species[,1] > 0, 4,2))
#points(scores(rdout)$sites[,1],scores(rdout)$sites[,2],pch="+",col="red")
points(scores(rdout)$sites[,1],scores(rdout)$sites[,2],pch=c(19,21)[magda$mrav+1],col="red")
legend("topleft",pch=c(19,21),col="red",legend=c("Louka","Mraveniste"))
# display the values of one of the input variables graphically
plot(rdout, disp="sites", type="n")
points(rdout, disp="sites",pch=c(19,21)[magda$mrav+1]) # point size corresponds to the number of flowers
# divide/multiply cleverly so this produces sensible cex values, roughly in the 0.3-5 range
legend("topright",pch=c(19,21),legend=c("Louka","Mraveniste"))
ordispider(rdout,magda$mrav)
mrav= as.factor(magda$mrav)
for (i in 1:nlevels(mrav))
ordihull(rdout,mrav,show.groups=levels(mrav)[i],col=rainbow(nlevels(mrav))[i])
plot(rdout, disp="sites", type="n")
points(rdout, disp="sites",pch=c(19,21)[magda$mrav+1],cex=(magda$hloubka+3)/6)
legend("topright",pch=c(19,21),legend=c("Louka","Mraveniste"))
ordispider(rdout,magda$mrav)
mrav= as.factor(magda$mrav)
hl = as.factor(magda$hloubka)
for (i in 1:nlevels(hl))
ordihull(rdout,hl,show.groups=levels(hl)[i],col=rainbow(nlevels(hl))[i])
rdout = rda(towork~magda$hloubka+mrav)
RsquareAdj(rdout)
plot(rdout,display=c("sp","cn"))
barplot(as.numeric(eigenvals(rdout)))
anova(rdout,by="terms")
anova(rdout,by="margin") # results are the same because the factors are orthogonal
xx=varpart(towork,~hloubka,~mrav,data=magda, transfo="stand")
xx
plot(xx)
showvarparts(2)
rdout = rda(towork~magda$hloubka*magda$mrav) # test with interaction
plot(rdout,display=c("sp","cn"))
anova(rdout,by="terms")
|
890931272ea774a56c74f90417075a42d1e90d45 | 36772829414c869ce25c07fa81b833bce48b0bbe | /Tutorials/Shiny/shiny_base_1.R | c31b92815ca490b42c211063a30f6ded0ae71d35 | [] | no_license | pole-national-donnees-biodiversite/metadata-training | a29d13a7d2a3cf80b95589c2d65362d312fd4a38 | 24974cc05bb09700770e15da8352642135c6cd51 | refs/heads/master | 2021-07-10T22:06:26.666039 | 2020-10-19T14:22:14 | 2020-10-19T14:22:14 | 211,280,538 | 1 | 0 | null | 2019-11-05T08:52:56 | 2019-09-27T09:09:44 | R | UTF-8 | R | false | false | 703 | r | shiny_base_1.R | library(shiny)
# Let's start simple! ----
# Shiny ships with several example apps that we can dissect.
# For example:
shiny::runExample("01_hello")
# This app is fairly simple, and the UI and server parts are easy to identify
# More unusual: a real-time display.
shiny::runExample("11_timer")
# Vous pouvez vous amuser à explorer l'ensemble des exemples contenus dans Shiny
# via la fonction
shiny::runExample()
# Par convention, nous maintiendrons l'appel à un espace de nom avant tout
# emploi de fonction: espacedenom::fonction()
# SUITE ----
# Quand vous êtes prêts, nous pourrons procéder à la suite:
rstudioapi::navigateToFile("shiny_base_2.R")
|
c2ca759fd3d81c4a8d2ea163858710d952e9e5eb | cd9cd8e93507bc17b2f08a2184e5fedfff55fbf7 | /03_analise_preditiva/aula03/AULA03_Arvore_Decisao.R | 2be7da6588264fb72793c976b280e8b5f9a93ce0 | [] | no_license | Matheusvfvitor/MBA | b7641dc995d98afb38101cf045cce8fc155817ca | d9b348f8f180629e6b2049a34bceee17a4a12886 | refs/heads/main | 2023-01-29T16:48:41.982857 | 2020-12-05T21:06:53 | 2020-12-05T21:06:53 | 318,882,844 | 0 | 0 | null | null | null | null | IBM852 | R | false | false | 12,158 | r | AULA03_Arvore_Decisao.R | #--------------------------------------------------------------------------------#
#--------------------------------------------------------------------------------#
# MBA IN BUSINESS ANALYTICS AND BIG DATA
# PREDICTIVE ANALYTICS
# CLASS 4 - DECISION TREES
#--------------------------------------------------------------------------------#
#--------------------------------------------------------------------------------#
library(caret)
library(Metrics)
library(dplyr)
#--------------------------------------------------------------------------------#
# REGRESSION TREE
#--------------------------------------------------------------------------------#
#--------------------------------------------------------------------------------#
# 0) Reading the dataset
# Setting the working directory
setwd('D:/AULAS FGV/2. ANÁLISE PREDITIVA/0. Desenvolvimento/DADOS')
# Reading the files in .csv (comma separated values) format
DATA <- read.csv("dataset_reg_proc.csv", sep = ",")
str(DATA)
#--------------------------------------------------------------------------------#
# 1) Splitting the modeling dataset into train and test via sampling
set.seed(123) # ensuring reproducibility of the sample
INDEX_TRAIN <- createDataPartition(DATA$AMOUNT, p = 0.7, list = F)
TRAIN_SET <- DATA[INDEX_TRAIN, ] # development set: 70%
TEST_SET <- DATA[-INDEX_TRAIN,] # test set: 30%
# Assessing the distribution of the response variable
summary(TRAIN_SET$AMOUNT); summary(TEST_SET$AMOUNT)
#--------------------------------------------------------------------------------#
# 2) Training the regression tree algorithm
library(rpart)
# here we start with the fullest possible tree
MDL_FIT <- rpart(AMOUNT ~.,
data = TRAIN_SET,
method = 'anova',
control = rpart.control(minbucket = 10, cp = -1))
??rpart.control
# tree output
MDL_FIT
summary(MDL_FIT)
# assessing whether the tree needs pruning
printcp(MDL_FIT)
plotcp(MDL_FIT)
# here we prune the tree by setting cp, a control parameter, to the value
# that minimizes the cross-validated error
PARM_CTRL <- MDL_FIT$cptable[which.min(MDL_FIT$cptable[,"xerror"]),"CP"]
MDL_FIT.PRUNE <- prune(MDL_FIT, cp = PARM_CTRL)
# tree output
MDL_FIT.PRUNE
summary(MDL_FIT.PRUNE)
#--------------------------------------------------------------------------------#
# 3) Plotting the tree
library(rattle)
fancyRpartPlot(MDL_FIT.PRUNE)
library(rpart.plot)
# https://www.rdocumentation.org/packages/rpart.plot/versions/3.0.8/topics/rpart.plot
rpart.plot(MDL_FIT.PRUNE,
cex = 0.5,
type = 3,
box.palette = "BuRd",
branch.lty = 3,
shadow.col ="gray",
nn = TRUE,
main = 'Regression Trees')
rpart.plot(MDL_FIT.PRUNE,
type = 3,
cex = 0.5,
clip.right.labs = FALSE,
branch = .4,
box.palette = "BuRd", # override default GnBu palette
main = 'Regression Trees')
#--------------------------------------------------------------------------------#
# 4) Making predictions
# AMOUNT value from the most fully grown (then pruned) regression tree
Y_VAL_TRAIN <- predict(MDL_FIT.PRUNE)
Y_VAL_TEST <- predict(MDL_FIT.PRUNE, newdata = TEST_SET)
#--------------------------------------------------------------------------------#
# 5) Assessing model performance and checking for overfitting
# Tree
postResample(pred = Y_VAL_TRAIN, obs = TRAIN_SET$AMOUNT)
postResample(pred = Y_VAL_TEST, obs = TEST_SET$AMOUNT)
# signs of overfitting between the train and test samples?
MDL_FINAL <- MDL_FIT.PRUNE
#--------------------------------------------------------------------------------#
# 6) Variable importance (final model)
varImp(MDL_FINAL)
# the tree algorithm also provides an output with variable importance
round(MDL_FINAL$variable.importance, 3)
#--------------------------------------------------------------------------------#
# 7) Inspecting predicted vs. observed values (final model)
# Converting the variable back to its original unit
RESULT_TRAIN <- data.frame(AMOUNT_OBS = TRAIN_SET$AMOUNT**2,
AMOUNT_PRED = Y_VAL_TRAIN**2) %>%
mutate(RESIDUO = AMOUNT_PRED - AMOUNT_OBS)
RESULT_TEST <- data.frame(AMOUNT_OBS = TEST_SET$AMOUNT**2,
AMOUNT_PRED = Y_VAL_TEST**2) %>%
mutate(RESIDUO = AMOUNT_PRED - AMOUNT_OBS)
# Plotting the results
layout(matrix(c(1,2,3,4,3,4), nrow = 3, ncol = 2, byrow = TRUE))
par(oma = c(1,1,0,0), mar = c(5,5,2,1))
hist(RESULT_TRAIN$RESIDUO, breaks = 12, xlim = c(-2000,2000),
main = 'Amostra de Treino', cex.main = 1.2,
xlab = 'RESIDUAL', ylab = 'FREQUENCIA (#)', cex.axis = 1.2,
col = 'darkorange', border = 'brown')
hist(RESULT_TEST$RESIDUO, breaks = 12, xlim = c(-2000,2000),
main = 'Amostra de Teste', cex.main = 1.2,
xlab = 'RESIDUAL', ylab = 'FREQUENCIA (#)', cex.axis = 1.2,
col = 'darkorange', border = 'brown')
plot(RESULT_TRAIN$AMOUNT_PRED,RESULT_TRAIN$AMOUNT_OBS,
#main = 'Amostra de Treino', cex.main = 1.2,
xlab = 'AMOUNT US$ (previsto)', ylab = 'AMOUNT US$ (observado)',
cex.axis = 1.2, pch = 19, cex = 0.5, ylim = c(0,6000),
col = 'darkorange')
abline(lm(AMOUNT_OBS ~ AMOUNT_PRED, data = RESULT_TRAIN),
col = 'firebrick', lwd = 3)
abline(0, 1, col = 'blue', lwd = 3, lty = "dashed")
plot(RESULT_TEST$AMOUNT_PRED, RESULT_TEST$AMOUNT_OBS,
#main = 'Amostra de Teste', cex.main = 1.2,
xlab = 'AMOUNT US$ (previsto)', ylab = 'AMOUNT US$ (observado)',
cex.axis = 1.2, pch = 19, cex = 0.5, ylim = c(0,6000),
col = 'darkorange')
abline(lm(AMOUNT_OBS ~ AMOUNT_PRED, data = RESULT_TEST),
col = 'firebrick', lwd = 3)
abline(0, 1, col = 'blue', lwd = 3, lty = "dashed")
graphics.off()
rm(list = ls())
#--------------------------------------------------------------------------------#
# CLASSIFICATION TREE
#--------------------------------------------------------------------------------#
#--------------------------------------------------------------------------------#
# 0) Reading the dataset
# Setting the working directory
setwd('D:/AULAS FGV/2. ANÁLISE PREDITIVA/0. Desenvolvimento/DADOS')
# Reading the files in .csv (comma separated values) format
DATA <- read.csv("dataset_clas_proc.csv",sep = ",",dec = '.',stringsAsFactors = T)
#--------------------------------------------------------------------------------#
# 1) Splitting the modeling dataset into train and test via sampling
set.seed(123) # ensuring reproducibility of the sample
INDEX_TRAIN <- createDataPartition(DATA$CHURN, p = 0.7, list = F)
TRAIN_SET <- DATA[INDEX_TRAIN, ] # development set: 70%
TEST_SET <- DATA[-INDEX_TRAIN,] # test set: 30%
# Assessing the distribution of the response variable
summary(TRAIN_SET$CHURN);summary(TEST_SET$CHURN)
prop.table(table(TRAIN_SET$CHURN));prop.table(table(TEST_SET$CHURN))
#--------------------------------------------------------------------------------#
# 2) Training the classification tree algorithm
# here we start with the fullest possible tree
MDL_FIT <- rpart(CHURN ~.,
data = TRAIN_SET,
method = 'class',
control = rpart.control(minbucket= 20, cp = -1))
# tree output
MDL_FIT
summary(MDL_FIT)
# assessing whether the tree needs pruning
printcp(MDL_FIT)
plotcp(MDL_FIT)
# here we prune the tree by setting cp, a control parameter, to the value
# that minimizes the cross-validated error
PARM_CTRL <- MDL_FIT$cptable[which.min(MDL_FIT$cptable[,"xerror"]),"CP"]
MDL_FIT.PRUNE <- prune(MDL_FIT, cp = PARM_CTRL)
# tree output
MDL_FIT.PRUNE
summary(MDL_FIT.PRUNE)
#--------------------------------------------------------------------------------#
# 3) Plotting the tree
library(rattle)
fancyRpartPlot(MDL_FIT.PRUNE)
# https://www.rdocumentation.org/packages/rpart.plot/versions/3.0.8/topics/rpart.plot
rpart.plot(MDL_FIT.PRUNE,
cex = 1,
type = 3,
box.palette = "BuRd",
branch.lty = 3,
shadow.col ="gray",
nn = TRUE,
main = 'Classification Trees')
rpart.plot(MDL_FIT.PRUNE,
type = 3,
cex = 1,
clip.right.labs = FALSE,
branch = .4,
box.palette = "BuRd", # override default GnBu palette
main = 'Classification Trees')
#--------------------------------------------------------------------------------#
# 4) Making predictions
# Predicted probability of CHURN
Y_PROB_TRAIN <- predict(MDL_FIT.PRUNE, type = 'prob')[,2]
Y_PROB_TEST <- predict(MDL_FIT.PRUNE, newdata = TEST_SET, type = 'prob')[,2]
head(Y_PROB_TRAIN)
#--------------------------------------------------------------------------------#
# 5) Assessing model performance and checking for overfitting
library(hmeasure)
HMeasure(TRAIN_SET$CHURN,Y_PROB_TRAIN)$metrics
HMeasure(TEST_SET$CHURN, Y_PROB_TEST)$metrics
# signs of overfitting between the train and test samples?
MDL_FINAL <- MDL_FIT.PRUNE
#--------------------------------------------------------------------------------#
# 6) Variable importance (final model)
varImp(MDL_FINAL)
# the tree algorithm also provides an output with variable importance
round(MDL_FINAL$variable.importance, 3)
#--------------------------------------------------------------------------------#
# 7) Inspecting predicted vs. observed values (final model)
# Generating the confusion matrix for different cutoff points (test sample)
# Observed label
Y_OBS <- TEST_SET$CHURN
levels(Y_OBS)
# Predicted label using:
# if PROB > 50% -> 1 (Yes)
# if PROB > 30% -> 1 (Yes)
Y_CLAS1 <- factor(ifelse(Y_PROB_TEST > 0.5,1,0),
levels = c(0,1),
labels = c('No','Yes'))
Y_CLAS2 <- factor(ifelse(Y_PROB_TEST > 0.3,1,0),
                  levels = c(0,1),  # match Y_CLAS1: 0 -> 'No', 1 -> 'Yes'
                  labels = c('No','Yes'))
confusionMatrix(data = Y_CLAS1, reference = Y_OBS, positive = 'Yes')
confusionMatrix(data = Y_CLAS2, reference = Y_OBS, positive = 'Yes')
# Score distribution
graphics.off()
par(mfrow = c(1,2))
# auxiliary data frame
AUX <- data.frame(Y_PROB = Y_PROB_TEST,
Y_OBS = TEST_SET$CHURN)
boxplot(Y_PROB ~ Y_OBS, data = AUX,
main = 'Boxplot probs', cex.main = 1.2, cex.axis = 1.2,
xlab = 'PROBABILITIES', ylab = 'TARGET',
ylim = c(0,1), horizontal = T,
col = c('darkorange','darkorange4'), border = 'gray20')
hist(AUX$Y_PROB, breaks = 12, xlim = c(0,1),
main = 'Histogram probs', cex.main = 1.2,
xlab = 'PROBABILITIES', ylab = 'FREQUENCIA (#)', cex.axis = 1.2,
col = 'darkorange', border = 'brown')
graphics.off()
#--------------------------------------------------------------------------------#
# 8) ROC curve
library(pROC)
ROC1 <- roc(TRAIN_SET$CHURN,Y_PROB_TRAIN)
Y1 <- ROC1$sensitivities
X1 <- 1 - ROC1$specificities
ROC2 <- roc(TEST_SET$CHURN,Y_PROB_TEST)
Y2 <- ROC2$sensitivities
X2 <- 1 - ROC2$specificities
plot(X1,Y1, type ="n", cex.axis = 1.2, cex = 0.5,
xlab = '1 - ESPECIFICIDADE', ylab = 'SENSITIVIDADE')
lines(X1, Y1, lwd = 3, lty = 1, col = 'tomato3')
lines(X2, Y2, lwd = 3, lty = 1, col = 'cyan3')
abline(0, 1, lty = 2)
legend('bottomright',c('TRAIN SET','TEST SET'), lty = 1, col = c('tomato3','cyan3'))
#--------------------------------------------------------------------------------#
#--------------------------------------------------------------------------------#
# END
#--------------------------------------------------------------------------------#
#--------------------------------------------------------------------------------# |
027d87e9f68609cfd621528610805c5336dcc19d | 57c5f5c154d88b1ad2f12c9bcfe049a29922f045 | /Brent Young_PREDICT 413_Final_Script.R | eee4e38309260799f2398e24fa00274300f02e13 | [] | no_license | brentdyoung/Dengue-Fever-Forecasting | 4f57b0d6aa3ec8ea752dffda8467e7ec5544d7a3 | 5cdac790432a710f2c75133a8a0f6251b6abf97e | refs/heads/master | 2020-06-01T05:29:45.586683 | 2019-06-06T22:18:53 | 2019-06-06T22:18:53 | 190,656,833 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 16,236 | r | Brent Young_PREDICT 413_Final_Script.R | #Final Project Code
#R Code
#Load the data & create time series
library(fpp)
library(tseries)
library(forecast)
library(ggplot2)
library(forecastHybrid)
library(opera)
library(dplyr)
library(corrplot)
library(zoo)
library(MASS)
library(reshape2)
library(mice)
library(VIM)
library(Hmisc)
library(forecastxgb)
library(fGarch)
library(rugarch)
memory.size(max=TRUE)
#Clear Global Environment
#Load Data
setwd("~/R/PREDICT 413/Final")
submissions <- read.csv("submission_format.csv")
#Full Training Data (Unclean)
de <- read.csv("dengue_features_train_combined.csv")
View(de)
summary(de)
str(de)
describe(de)
#Data Preparation
library(mice)
sju= de[1:936,]
summary(sju)
str(sju)
describe(sju)
iqu= de[937:1456,]
summary(iqu)
str(iqu)
describe(iqu)
#Missing Data EDA
#Check for missing values
sapply(de, function(x) sum(is.na(x)))
sum(is.na(de))
sapply(sju, function(x) sum(is.na(x)))
sum(is.na(sju))
sapply(iqu, function(x) sum(is.na(x)))
sum(is.na(iqu))
#Check missing data percentage
pMiss <- function(x){sum(is.na(x))/length(x)*100}
apply(de,2,pMiss)
pMiss <- function(x){sum(is.na(x))/length(x)*100}
apply(sju,2,pMiss)
pMiss <- function(x){sum(is.na(x))/length(x)*100}
apply(iqu,2,pMiss)
library(VIM)
aggr_plot <- aggr(de, col=c('navyblue','red'), numbers=TRUE, sortVars=TRUE, labels=names(de), cex.axis=.5, gap=2, ylab=c("Histogram of missing data","Pattern"))
aggr_plot <- aggr(sju, col=c('navyblue','red'), numbers=TRUE, sortVars=TRUE, labels=names(de), cex.axis=.5, gap=2, ylab=c("Histogram of missing data","Pattern"))
aggr_plot <- aggr(iqu, col=c('navyblue','red'), numbers=TRUE, sortVars=TRUE, labels=names(de), cex.axis=.5, gap=2, ylab=c("Histogram of missing data","Pattern"))
#Split datasets into numerical and categorical
#Numeric
subdatnum <- subset(de, select=c(
"INDEX",
"total_cases",
"ndvi_ne",
"ndvi_nw",
"ndvi_se",
"ndvi_sw",
"precipitation_amt_mm",
"reanalysis_air_temp_k",
"reanalysis_avg_temp_k",
"reanalysis_dew_point_temp_k",
"reanalysis_max_air_temp_k",
"reanalysis_min_air_temp_k",
"reanalysis_precip_amt_kg_per_m2",
"reanalysis_relative_humidity_percent",
"reanalysis_sat_precip_amt_mm",
"reanalysis_specific_humidity_g_per_kg",
"reanalysis_tdtr_k",
"station_avg_temp_c",
"station_diur_temp_rng_c",
"station_max_temp_c",
"station_min_temp_c",
"station_precip_mm"))
subdatnum.df <- data.frame(subdatnum)
#Categorical
subdatcat <- subset(de, select=c(
"INDEX",
"city",
"year",
"weekofyear",
"week_start_date"))
subdatcat.df <- data.frame(subdatcat)
#Run imputation
tempData <- mice(subdatnum.df,m=5,maxit=50,meth='pmm',seed=500)
summary(tempData)
#Check N/A values have been removed
subdatnumimp <- complete(tempData,1)
apply(subdatnumimp,2,pMiss)
summary(subdatnumimp)
sapply(subdatnumimp, function(x) sum(is.na(x)))
#Manually filled reanalysis_sat_precip_amt_mm and precipitation_amt_mm in Excel to 0. Imputation was not picking it up.
#Merge Numeric and Categorical datasets back
DE <- merge(subdatcat.df,subdatnumimp, by=c("INDEX"))
#Check data
str (DE)
summary (DE)
#Full Training Data for SJ
sj= DE[1:936,]
summary(sj)
#Full Training Data for IQ
iq= DE[937:1456,]
summary(iq)
sjtimeseries <- ts(sj$total_cases)
iqtimeseries <- ts(iq$total_cases)
iqtimeseries
#Test Data for SJ
sj_test= DE[1457:1716,]
summary(sj_test)
#Test Data for IQ
iq_test= DE[1717:1872,]
summary(iq_test)
#EDA
#Time plot for SJ and IQ
par(mfrow=c(2,1))
plot(sjtimeseries, main='SJ Total Cases', xlab = "Week", ylab= "Cases")
plot(iqtimeseries, main='IQ Total Cases', xlab = "Week", ylab= "Cases")
par(mfrow=c(1,1))
#Barplot of SJ and IQ of Total Cases
p1<- ggplot(sj, aes(x = total_cases)) +
geom_vline(xintercept = 29, color = "red") +
theme_bw() +
ggtitle('Cases of Dengue in San Juan') +
geom_histogram(fill = "dark blue", bins=30) +
theme(plot.title = element_text(hjust = 0.5))
p1 + annotate("text", x = 77, y = 250, label = "Median Cases is 19")
p2<- ggplot(iq, aes(x = total_cases)) +
geom_vline(xintercept = 5, color = "red") +
theme_bw() +
ggtitle('Cases of Dengue in Iquitos') +
geom_histogram(fill = "light blue", bins = 30) +
theme(plot.title = element_text(hjust = 0.5))
p2 + annotate("text", x = 17, y = 190, label = "Median Cases is 5")
#Correlation Plots
m_sj_clean <- data.matrix(sj)
m_sj_clean <- cor(x = m_sj_clean [,c(6:26)], use = 'complete.obs', method = 'pearson')
m_iq_clean <- data.matrix(iq)
m_iq_clean <- cor(x = m_iq_clean [,c(6:26)], use = 'everything', method = 'pearson')
# Correlation Heatmap
corrplot(m_sj_clean, type = 'full', tl.col = 'black', method="shade", shade.col=NA, tl.cex=0.5)
corrplot(m_iq_clean, type = 'full', tl.col = 'black', method="shade", shade.col=NA,tl.cex=0.5)
# Correlation Bar plot
df_m_sj_clean <- data.frame(m_sj_clean)[2:21,]
df_m_sj_clean <- dplyr::select(df_m_sj_clean, total_cases)
df_m_iq_clean <- data.frame(m_iq_clean)[2:21,]
df_m_iq_clean <- dplyr::select(df_m_iq_clean, total_cases)
ggplot(df_m_sj_clean, aes(x= reorder(rownames(df_m_sj_clean), -total_cases), y = total_cases)) +
geom_bar(stat = 'identity') +
theme_bw() +
ggtitle('Correlation of variables in San Juan') +
ylab('Correlation') +
xlab('Variables') +
coord_flip()
ggplot(df_m_iq_clean, aes(x= reorder(rownames(df_m_iq_clean), -total_cases), y = total_cases)) +
geom_bar(stat = 'identity') +
theme_bw() +
ggtitle('Correlation of variables in Iquitos') +
ylab('Correlation') +
xlab('Variables') +
coord_flip()
#Split Data into Train/Test for both SJ and IQ using train (clean) dataset
#SJ - 260 for test to match test dataset
SJtimeseries_train = DE[1:675,]
SJtimeseries_test = DE[676:936,]
sjtimeseries_train <- ts(SJtimeseries_train$total_cases)
sjtimeseries_test <-ts(SJtimeseries_test$total_cases)
#IQ - 156 for test to match test dataset
IQtimeseries_train = DE[937:1299,]
IQtimeseries_test = DE[1300:1456,]
iqtimeseries_train <- ts(IQtimeseries_train$total_cases)
iqtimeseries_test <-ts(IQtimeseries_test$total_cases)
#Model Selection for SJ
#ETS Model
fit1_ets <- ets(sjtimeseries_train)
summary(fit1_ets)
#Auto.Arima Pre-work
tsdisplay(sjtimeseries_train)
#Unit Root Tests
adf.test(sjtimeseries_train, alternative = "stationary")
kpss.test(sjtimeseries_train)
#Differencing
kpss.test(diff(sjtimeseries_train))
tsdisplay(diff(sjtimeseries_train))
#Auto.Arima Model
fit2_arima <- auto.arima(sjtimeseries_train, stepwise=FALSE, approximation=FALSE)
summary(fit2_arima)
tsdisplay(residuals(fit2_arima))
Box.test(residuals(fit2_arima), fitdf=3, lag=10, type="Ljung")
tsdiag(fit2_arima)
#Auto.Arima Model w/Regressors
fit2_arimaR <- auto.arima(SJtimeseries_train [,6], xreg= SJtimeseries_train [,c(12:16,20,22,24:25)], stepwise=FALSE, approximation=FALSE)
summary(fit2_arimaR)
tsdisplay(residuals(fit2_arimaR))
Box.test(residuals(fit2_arimaR), fitdf=3, lag=10, type="Ljung")
tsdiag(fit2_arimaR)
#Neural Net
fit3_nn <- nnetar(sjtimeseries_train)
summary(fit3_nn)
#Neural Net w/Regressors
fit3_nnR <- nnetar(SJtimeseries_train [,6], xreg= SJtimeseries_train [,c(4,16,20)])
summary(fit3_nnR)
#Benchmark
fit6_naive<-naive(sjtimeseries_train)
#Training Set Accuracy - Summary
summary(fit1_ets) # training set
summary(fit2_arima) # training set
summary(fit2_arimaR) # training set
summary(fit3_nn) # training set
summary(fit3_nnR) # training set
summary(fit6_naive) # training set
#Training Set Accuracy - Goodness-of-fit
accuracy(fit1_ets) # training set
accuracy(fit2_arima) # training set
accuracy(fit2_arimaR) # training set
accuracy(fit3_nn) # training set
accuracy(fit3_nnR) # training set
accuracy(fit6_naive) # training set
#Forecast on Test Set
par(mfrow=c(3,2))
ETS <-forecast(fit1_ets, h=length(sjtimeseries_test))
plot(ETS, ylab="Total Cases")
lines(sjtimeseries, col="red",ylab="Actual")
ETS
Auto.ARIMA <-forecast(fit2_arima, h=length(sjtimeseries_test))
plot(Auto.ARIMA, ylab="Total Cases")
lines(sjtimeseries, col="red",ylab="Actual")
Auto.ARIMA
Auto.ARIMAR <- forecast(object=fit2_arimaR, xreg =SJtimeseries_train[,c(12:16,20,22,24:25)],h=260)
plot(Auto.ARIMAR,
main="Forecasts from regression ", ylab="Total Cases")
lines(sjtimeseries, col="red",ylab="Actual")
Auto.ARIMAR
NN <-forecast(fit3_nn, h=length(sjtimeseries_test))
plot(NN, ylab="Total Cases")
lines(sjtimeseries, col="red",ylab="Actual")
NN
NNR <-forecast(object=fit3_nnR, xreg =SJtimeseries_train[,c(4,16,20)],h=260)
plot(NNR, ylab="Total Cases")
lines(sjtimeseries, col="red",ylab="Actual")
NNR
NAIVE <-naive(sjtimeseries_train, h=length(sjtimeseries_test))
plot(NAIVE)
lines(sjtimeseries, col="red",ylab="Actual")
par(mfrow=c(1,1))
print(accuracy(ETS, sjtimeseries))
print(accuracy(Auto.ARIMA, sjtimeseries))
print(accuracy(Auto.ARIMAR, sjtimeseries))
print(accuracy(NN, sjtimeseries))
print(accuracy(NNR, sjtimeseries))
print(accuracy(NAIVE, sjtimeseries))
#Model Selection for IQ
#ETS Model
fit1_ets <- ets(iqtimeseries_train)
summary(fit1_ets)
#Auto.Arima Pre-work
tsdisplay(iqtimeseries_train)
#Unit Root Tests
adf.test(iqtimeseries_train, alternative = "stationary")
kpss.test(iqtimeseries_train)
#Differencing
kpss.test(diff(iqtimeseries_train))
tsdisplay(diff(iqtimeseries_train))
#Auto.Arima Model
fit2_arima <- auto.arima(iqtimeseries_train, stepwise=FALSE, approximation=FALSE)
summary(fit2_arima)
tsdisplay(residuals(fit2_arima))
Box.test(residuals(fit2_arima), fitdf=3, lag=10, type="Ljung")
tsdiag(fit2_arima)
#Auto.Arima Model w/Regressors
fit2_arimaR <- auto.arima(IQtimeseries_train [,6], xreg= IQtimeseries_train [,c(14,16,20,25)], stepwise=FALSE, approximation=FALSE)
summary(fit2_arimaR)
tsdisplay(residuals(fit2_arimaR))
Box.test(residuals(fit2_arimaR), fitdf=3, lag=10, type="Ljung")
tsdiag(fit2_arimaR)
#Neural Net
fit3_nn <- nnetar(iqtimeseries_train)
summary(fit3_nn)
#Neural Net w/Regressors
fit3_nnR <- nnetar(IQtimeseries_train [,6], xreg= IQtimeseries_train [,c(4,16,20)])
summary(fit3_nnR)
#Benchmark
fit6_naive<-naive(iqtimeseries_train)
#Training Set Accuracy - Summary
summary(fit1_ets) # training set
summary(fit2_arima) # training set
summary(fit2_arimaR) # training set
summary(fit3_nn) # training set
summary(fit3_nnR) # training set
summary(fit6_naive) # training set
#Training Set Accuracy - Goodness-of-fit
accuracy(fit1_ets) # training set
accuracy(fit2_arima) # training set
accuracy(fit2_arimaR) # training set
accuracy(fit3_nn) # training set
accuracy(fit3_nnR) # training set
accuracy(fit6_naive) # training set
#Forecast on Test Set
par(mfrow=c(3,2))
ETS_ANN <-forecast(fit1_ets, h=length(iqtimeseries_test))
plot(ETS_ANN, ylab="Total Cases")
lines(iqtimeseries, col="red",ylab="Actual")
ETS_ANN
Auto.ARIMA <-forecast(fit2_arima, h=length(iqtimeseries_test))
plot(Auto.ARIMA, ylab="Total Cases")
lines(iqtimeseries, col="red",ylab="Actual")
Auto.ARIMA
Auto.ARIMAR <- forecast(object=fit2_arimaR, xreg =IQtimeseries_train[,c(14,16,20,25)],h=156)
plot(Auto.ARIMAR,
main="Forecasts from regression ", ylab="Total Cases")
lines(iqtimeseries, col="red",ylab="Actual")
Auto.ARIMAR
NN <-forecast(fit3_nn, h=length(iqtimeseries_test))
plot(NN, ylab="Total Cases")
lines(iqtimeseries, col="red",ylab="Actual")
NN
NNR <-forecast(object=fit3_nnR, xreg = IQtimeseries_train [,c(4,16,20)],h=156)
plot(NNR, ylab="Total Cases")
lines(iqtimeseries, col="red",ylab="Actual")
NNR
NAIVE <-naive(iqtimeseries_train, h=length(iqtimeseries_test))
plot(NAIVE)
lines(iqtimeseries, col="red",ylab="Actual")
par(mfrow=c(1,1))
print(accuracy(ETS_ANN, iqtimeseries))
print(accuracy(Auto.ARIMA, iqtimeseries))
print(accuracy(Auto.ARIMAR, iqtimeseries))
print(accuracy(NN, iqtimeseries))
print(accuracy(NNR, iqtimeseries))
print(accuracy(NAIVE, iqtimeseries))
#Final Submission: Auto.ARIMA
fit_sj <- auto.arima(sjtimeseries)
fit_Arima_sj=forecast(fit_sj, h = 260)
plot(forecast(fit_sj))
tsdisplay(residuals(fit_sj))
Box.test(residuals(fit_Arima_sj),fitdf=2, lag=151, type="Ljung")
fit_iq <- auto.arima(iqtimeseries)
fit_Arima_iq=forecast(fit_iq, h = 156)
plot(forecast(fit_Arima_iq))
tsdisplay(residuals(fit_Arima_iq))
Box.test(residuals(fit_Arima_iq), fitdf=3, lag=151, type="Ljung")
arima_sj_sol <- data.frame(submissions[1:260,-4], total_cases = round(fit_Arima_sj$mean))
arima_iq_sol <- data.frame(submissions[261:416,-4], total_cases =round(fit_Arima_iq$mean))
arima_solution <- bind_rows(arima_sj_sol,arima_iq_sol)
write.csv(arima_solution, file = 'submission.csv', row.names = F)
#Final Submission: ETS
fit_sj <- ets(sjtimeseries)
fit_ETS_sj=forecast(fit_sj, h = 260)
plot(forecast(fit_sj))
tsdisplay(residuals(fit_sj))
Box.test(residuals(fit_ETS_sj),fitdf=2, lag=151, type="Ljung")
fit_iq <- ets(iqtimeseries)
fit_ETS_iq=forecast(fit_iq, h = 156)
plot(forecast(fit_ETS_iq))
tsdisplay(residuals(fit_ETS_iq))
Box.test(residuals(fit_ETS_iq), fitdf=3, lag=151, type="Ljung")
ETS_sj_sol <- data.frame(submissions[1:260,-4], total_cases = round(fit_ETS_sj$mean))
ETS_iq_sol <- data.frame(submissions[261:416,-4], total_cases =round(fit_ETS_iq$mean))
ETS_solution <- bind_rows(ETS_sj_sol,ETS_iq_sol)
write.csv(ETS_solution, file = 'submission.csv', row.names = F)
#Final Submission: NNETAR 10,6 and 5,3
set.seed(101)
fit_sj <- nnetar(sjtimeseries)
fit_Nnetar_sj=forecast(fit_sj, h = 260)
plot(forecast(fit_sj))
tsdisplay(residuals(fit_Nnetar_sj))
Box.test(residuals(fit_Nnetar_sj),fitdf=2, lag=151, type="Ljung")
fit_iq <- nnetar(iqtimeseries)
fit_Nnetar_iq=forecast(fit_iq, h = 156)
plot(forecast(fit_Nnetar_iq))
tsdisplay(residuals(fit_Nnetar_iq))
Box.test(residuals(fit_Nnetar_iq), fitdf=3, lag=151, type="Ljung")
nnetar_sj_sol <- data.frame(submissions[1:260,-4], total_cases = round(fit_Nnetar_sj$mean))
nnetar_iq_sol <- data.frame(submissions[261:416,-4], total_cases =round(fit_Nnetar_iq$mean))
nnetar_solution <- bind_rows(nnetar_sj_sol,nnetar_iq_sol)
write.csv(nnetar_solution, file = 'submission.csv', row.names = F)
#Final Submission: ARIMAR
fit_sj <- auto.arima(sj[,6], xreg= sj[,c(12:16,20,22,24:25)], stepwise=FALSE, approximation=FALSE)
fit_Arima_sj=forecast(object=fit_sj, xreg =sj_test[,c(12:16,20,22,24:25)],h=260)
plot(forecast(fit_Arima_sj))
tsdisplay(residuals(fit_sj))
Box.test(residuals(fit_Arima_sj),fitdf=2, lag=151, type="Ljung")
fit_iq <- auto.arima(iq[,6], xreg= iq[,c(14,16,20,25)], stepwise=FALSE, approximation=FALSE)
fit_Arima_iq=forecast(object=fit_iq, xreg =iq_test[,c(14,16,20,25)],h=156)
plot(forecast(fit_Arima_iq))
tsdisplay(residuals(fit_Arima_iq))
Box.test(residuals(fit_Arima_iq), fitdf=3, lag=151, type="Ljung")
arima_sj_sol <- data.frame(submissions[1:260,-4], total_cases = round(fit_Arima_sj$mean))
arima_iq_sol <- data.frame(submissions[261:416,-4], total_cases =round(fit_Arima_iq$mean))
arima_solution <- bind_rows(arima_sj_sol,arima_iq_sol)
write.csv(arima_solution, file = 'submission.csv', row.names = F)
#Final Submission: NNETAR with Regressors (BEST)
set.seed(101)
fit_sj <- nnetar(sj[,6], xreg= sj[,c(4,16,20)])
fit_Nnetar_sj=forecast(object=fit_sj, xreg =sj_test[,c(4,16,20)],h=260)
plot(forecast(fit_Nnetar_sj))
tsdisplay(residuals(fit_sj))
Box.test(residuals(fit_Nnetar_sj),fitdf=2, lag=151, type="Ljung")
fit_iq <- nnetar(iq[,6], xreg= iq[,c(4,16,20)])
fit_Nnetar_iq= forecast(object=fit_iq, xreg =iq_test[,c(4,16,20)],h=156)
plot(forecast(fit_Nnetar_iq))
tsdisplay(residuals(fit_Nnetar_iq))
Box.test(residuals(fit_Nnetar_iq), fitdf=3, lag=151, type="Ljung")
nnetar_sj_sol <- data.frame(submissions[1:260,-4], total_cases = round(fit_Nnetar_sj$mean))
nnetar_iq_sol <- data.frame(submissions[261:416,-4], total_cases =round(fit_Nnetar_iq$mean))
nnetar_solution <- bind_rows(nnetar_sj_sol,nnetar_iq_sol)
write.csv(nnetar_solution, file ='submission.csv', row.names = F)
|
25359f0a786e0fe61eed227bfd615fb78a1e9443 | 263cac0e728dbf7322afbef74aa8bdc556e58da1 | /lib/hpc/resource_usage/resource_usage_lib.R | 06e813ca0e0f8476952984f39736450d7e876917 | [] | no_license | Snitkin-Lab-Umich/prewas_manuscript_analysis | 2a354d83e543085c3ffe75322e4219790b57558e | 8bb999f6c180a83bddb7565b68ec0dfeb3dc6d33 | refs/heads/master | 2020-11-27T03:57:17.678196 | 2020-04-12T18:22:10 | 2020-04-12T18:22:10 | 229,294,473 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 2,499 | r | resource_usage_lib.R | # ------------------------------------------------------------------------------
# GetResourceUsage
# 2020/01/29
# Ryan D. Crawford
# ------------------------------------------------------------------------------
# Take a job id and return the Job Wall-clock time in hours and the Memory
# Utilized in GB as a numeric vector
# ------------------------------------------------------------------------------
# 2020/03/24 Katie Saund added two catches for edge cases:
# (1) When the memory usage is basically 0MB
# (2) When the run time is more than 1 day
GetResourceUsage = function( jobId )
{
# Run the system command to get the seff result for this job ID
seff = system( paste("seff", jobId ), intern = TRUE)
# Find the position of the time and the memory utilized in the
# seff result
wallClockStr = "Job Wall-clock time: "
wallTime = gsub(wallClockStr, '', seff[ grep(wallClockStr, seff) ])
memoryUseStr = "Memory Utilized: "
memoryUsage = gsub(memoryUseStr, '', seff[ grep(memoryUseStr, seff) ])
# Catch the case where, if memory usage from seff is essentially 0 it says
# "0.00 MB (estimated maximum)"
if (grepl("(estimated maximum)", memoryUsage)) {
memoryUsage <- "0.00 MB"
}
  # If memory usage from seff is returned in MB, convert to GB
if ( grepl("MB", memoryUsage) )
{
memoryUsage = gsub("MB", '', memoryUsage)
memoryUsage = as.numeric( trimws(memoryUsage) ) / 1000
} else if ( grepl("GB", memoryUsage) ) {
memoryUsage = gsub("GB", '', memoryUsage)
memoryUsage = as.numeric( trimws(memoryUsage) )
} else {
memoryUsage = gsub("KB", '', memoryUsage)
memoryUsage = as.numeric( trimws(memoryUsage) ) * 1000
}
return( c(ConvertTimeToHours(wallTime), memoryUsage) )
}
# Take the seff time result in hours:minutes:seconds and convert the
# results to hours
# KS modified to take data of the form: 1-15:04:51, where 1 == 1 day, so it's actually 24 + 15 = 39 hours.
ConvertTimeToHours = function(timeStr)
{
days <- 0
if (grepl("-", timeStr)) {
days <- as.numeric(unlist(strsplit(timeStr, "-"))[1])
}
timeStr_without_days <- gsub(".*[-]", "", timeStr)
times = as.numeric( unlist(strsplit(timeStr_without_days, ':') ) )
expVals = c(0, 1, 2)
hours = sum( sapply( seq(3), function(i) times[i] / 60**expVals[i] ) )
days_in_hours <- days * 24
total <- sum(hours, days_in_hours)
return( round(total, 3) )
}
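
# A quick sanity check of the conversion (hypothetical inputs; assumes the
# functions above have been sourced):
# "1-15:04:51" carries a day prefix: 24 h + 15 h + 4/60 h + 51/3600 h
# ConvertTimeToHours("1-15:04:51")  # 39.081
# No day prefix: plain hours:minutes:seconds
# ConvertTimeToHours("02:30:00")    # 2.5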
# ------------------------------------------------------------------------------
|
8c806edda4f6da45367ff7b54422410c08254364 | 80fa52e009448e8c2edbc692efc0e190635720c7 | /man/spectrumTable.Rd | ab17acf796fa5826998524721fcc23a028962f43 | [] | no_license | cpanse/plantGlycoMS | d0245b8fb260d6c22b2f7e35d3a5dab4d41b4498 | cdf07bb0649dec81b60c38e82442a262e15ca457 | refs/heads/master | 2020-03-21T09:10:40.641124 | 2018-02-16T02:10:58 | 2018-02-16T02:10:58 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 454 | rd | spectrumTable.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plantGlycoMSFunctions.R
\name{spectrumTable}
\alias{spectrumTable}
\title{A function to calculate monoisotopic precursor mass}
\usage{
spectrumTable(input)
}
\arguments{
\item{input}{a csv file with one column named precursorMZ}
}
\description{
this function calculates the monoisotopic precursor mass from then precursor mz
}
\examples{
spectrumTable()
}
\keyword{calculate}
|
f113c79f6508ddf1b3de1489b42b9d07f5ace5d7 | 743fd0ca2f91341e70af3e703ad31592175d15d3 | /man/score.Rd | 07e6d2f8906222e9ca46e5c7e9c9049357337362 | [
"MIT"
] | permissive | jlaffy/scrabble | 3a518c119212b37571d1c4f843ea35a29ba8f5d1 | 77087a6becac12677ca5ba6d2a6a9f85a5afe226 | refs/heads/master | 2022-03-10T21:18:32.646221 | 2019-11-14T10:16:50 | 2019-11-14T10:16:50 | 191,061,663 | 3 | 5 | NOASSERTION | 2022-02-20T18:34:00 | 2019-06-09T22:29:07 | R | UTF-8 | R | false | true | 3,311 | rd | score.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/score.R
\name{score}
\alias{score}
\title{Score matrix columns for groups of rows}
\usage{
score(mat, groups, binmat = NULL, bins = NULL, controls = NULL,
bin.control = F, center = F, nbin = 30, n = 100, replace = F)
}
\arguments{
\item{mat}{an expression matrix of gene rows by cell columns.}
\item{groups}{a character vector or list of character vectors. Each character vector is a group or signature to score each column against and should match subsets of rownames in <mat>.}
\item{binmat}{an expression matrix of gene rows by cell columns that will be used to create the gene bins and ultimately the control signatures for correction of the cell scores. For our use cases, <mat> and <binmat> are identical except that the former is row-centered and used to generate cell scores and the latter is not row-centered and used to correct the cell scores. If NULL, and bin.control = T (and neither <bins> nor <controls> were provided), <mat> will be used. Careful that in this use case <mat> should not be row-centered for the correction to be meaningful. Default: NULL}
\item{bins}{a named character vector with as names the rownames and as values the ID of each bin. You can provide the bins directly (e.g. with bin()) rather than these being generated from <binmat>. Default: NULL}
\item{bin.control}{boolean value. If your controls can be generated straight from <mat> (i.e. if mat is not row-centered and you do not provide <binmatch>, <bins>, or <controls>), then you can just call score(mat, groups, bin.control = TRUE). Default: F}
\item{center}{boolean value. Should the resulting score matrix be column-centered? This option should be considered if binned controls are not used. Default: F}
\item{nbin}{numeric value specifying the number of bins. Not relevant if <bins> or <controls> are provided on input. Default is 30, but please be aware that we chose 30 bins for ~ 8500 genes and if your # of genes is very different you should consider changing this. Default: 30}
\item{n}{numeric value for the number of control genes to be sampled per gene in a signature. Not relevant if <controls> is provided on input. Default: 100}
\item{replace}{boolean value. Allow bin sampling to be done with replacement. Default: F}
\item{controls}{a character vector (if <groups> is a character vector) or a list of character vectors of the same length as <groups>. Each character vector is a control signature whose genes should have expression levels similar to those in the corresponding real signature, but be otherwise biologically meaningless. You can provide the control signatures directly (e.g. with binmatch()) rather than these being generated from <binmatch> / <bins>. Default: NULL}
}
\value{
a matrix with as rows the columns of the input matrix and as columns the scores of each group provided in groups. If one group is provided, the matrix returned will have 1 column.
}
\description{
Score a matrix by groups of rows. Scores are column averages for the subset of rows specified in a group. Option to generate control group scores to subtract from the real scores. Control groups can either be user-provided or generated from binning of the rows. Similarly, the bins themselves can be user-provided or computed.
}
|
243f7272a38dbf620654896ff7c3599b4d760b47 | 6e00706c34f29f0aab07719b3fee50eec28abc5c | /R/delta_rel.R | f92d697f3e6f89a8dddc76b51257738e3d5d0f8e | [] | no_license | th-zelniker/TIMI | 21f53786b5ee6e0d2ffefc19cd8dc36c424340e6 | 53ce5a7e195b6646b70424a80829a32734a41a0e | refs/heads/master | 2022-10-13T23:26:24.055586 | 2022-09-23T14:11:35 | 2022-09-23T14:11:35 | 135,347,152 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 459 | r | delta_rel.R | #' Interaction effect relative to the effect size for the total population
#'
#' Based on Alosh, Huque, Koch (2015)
#' @param HR_s HR in the Subgroup of Interest
#' @param HR_c HR in the Comparative Subgroup
#' @param HR_overall HR of the Overall Cohort
#' @return Interaction Effect Relative to the Effect Size for the Total Population
#' @export
delta_rel <- function(HR_s, HR_c, HR_overall){
(log(HR_s) - log(HR_c)) / log(HR_overall)
}
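
# A minimal numeric illustration of the ratio above (hypothetical hazard
# ratios, chosen for illustration only):
#
#   # subgroup HR 0.8 vs complement HR 1.0, overall HR 0.9
#   delta_rel(0.8, 1.0, 0.9)  # ~2.12
#
# i.e. the subgroup/complement contrast is about twice the overall log-HR.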
|
147bd74e49ae85604f220aa6f62360bf0a16169f | f00f81057cbb435cfe42244ed367c07f04b5b452 | /plot2.R | db252e5d20ef2761771cd579db8061f13ba29099 | [] | no_license | sschanel/ExData_Plotting1 | 54c99ab2b8e86cbec98ea37723d2a295b6ccac97 | 4f517ef27660f8c28a7a20baa0ea490b5222e316 | refs/heads/master | 2021-01-16T19:52:35.413995 | 2015-09-13T17:21:52 | 2015-09-13T17:21:52 | 42,385,811 | 0 | 0 | null | 2015-09-13T06:10:47 | 2015-09-13T06:10:46 | null | UTF-8 | R | false | false | 420 | r | plot2.R | source("ExData_Plotting1/readTheData.R") # reads the data into household_power_consumption
# start a png file
png(filename="ExData_Plotting1/plot2.png", width=480, height=480)
# plot datetime vs global_active_power
plot(household_power_consumption$DateTime, household_power_consumption$Global_active_power, type='l', xlab='', ylab='Global Active Power (kilowatts)')
# close the device, creating the png file
dev.off() |
1a29e906dcfafa46f9a72afbccfdcbc468448912 | 401689f219dd46d826e534f7f275e90641223cd2 | /man/reconstructGRN.Rd | 329b4e4716222f8e0fd2450c2de95337d71efc3e | [
"MIT"
] | permissive | epaaso/epoch | d47a8acacf3e21bc269955dcc7b8740ea3163cfd | a59ece16a43b347f3ae176404f8fb909746fed85 | refs/heads/master | 2023-06-19T19:02:28.103690 | 2021-07-13T18:34:03 | 2021-07-13T18:34:03 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 609 | rd | reconstructGRN.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/1_GRN.R
\name{reconstructGRN}
\alias{reconstructGRN}
\title{reconstruct GRN ala CLR}
\usage{
reconstructGRN(expDat, tfs, dgenes = NULL, method = "pearson", zThresh = 2)
}
\arguments{
\item{expDat}{unsmoothed expression matrix}
\item{tfs}{vector of transcription factor names}
\item{dgenes}{dynamic genes, found using findDynGenes}
\item{method}{either 'pearson' or 'MI' to be used in CLR}
\item{zThresh}{zscore threshold default 2}
}
\value{
data frame with columns TF, TG, zscore, and corr
}
\description{
Reconstructs a GRN a la CLR (context likelihood of relatedness), using the minet package.
}
|
45999557e06aa79ac838fe0292d4b3f7a06b1287 | 9f85c1d7673d586d38475405c58abb9bd633a230 | /read.cps.R | 60595bfe26043a1f96a3dec4d4546e120ba776e3 | [] | no_license | hheelllloo/eitc_replication | c5b7f3087985ca0e0be13b9c8b3bc9110f5b52d5 | c1af5af8b1d501a893e054f5b2a21dbd8bdd6832 | refs/heads/master | 2020-03-27T23:47:06.017527 | 2018-09-04T21:46:56 | 2018-09-04T21:46:56 | 147,346,193 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,278 | r | read.cps.R | read.cps = function(cps) {
# Record type is coded in columns 7-8 of each fixed-width CPS record:
# 0 = household, 1-39 = family, 41-79 = person.
is_household = as.numeric(substr(cps[,1], 7, 8))
h_seq = substr(cps[which(is_household==0),], 2, 6)
hnumfam = as.numeric(substr(cps[which(is_household==0),], 23, 24))
hg_st60 = as.numeric(substr(cps[which(is_household==0),], 40, 41))
hccc_r = as.numeric(substr(cps[which(is_household==0),], 58, 58))
htotval = as.numeric(substr(cps[which(is_household==0),], 248, 255))
h = data.frame(h_seq, hnumfam, hg_st60, hccc_r, htotval)
f_seq = substr(cps[which(is_household>0 & is_household<40),], 2, 6)
ffpos = as.numeric(substr(cps[which(is_household>0 & is_household<40),], 7, 8))
f_seq_pos = substr(cps[which(is_household>0 & is_household<40),], 2, 8)
fkind = as.numeric(substr(cps[which(is_household>0 & is_household<40),], 9, 9))
ftype = as.numeric(substr(cps[which(is_household>0 & is_household<40),], 10, 10))
frelu18 = as.numeric(substr(cps[which(is_household>0 & is_household<40),], 29, 29))
frelu6 = as.numeric(substr(cps[which(is_household>0 & is_household<40),], 28, 28))
ftotval = as.numeric(substr(cps[which(is_household>0 & is_household<40),], 205, 212))
f = data.frame(f_seq, ffpos, f_seq_pos, fkind, ftype, frelu18, ftotval)
colnames(f)[1] = "h_seq"
p_seq = substr(cps[which(is_household>40 & is_household<80),], 2, 6)
pppos = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 7, 8))
p_f_pos = paste(p_seq, substr(cps[which(is_household>40 & is_household<80),], 44, 45), sep="")
a_exprrp = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 13, 14))
a_famtyp = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 31, 31))
a_famnum = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 29, 30))
a_famrel = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 32, 32))
a_maritl = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 17, 17))
a_sex = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 20, 20))
a_race = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 25, 25))
a_age = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 15, 16))
a_reorgn = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 27, 28))
a_hga = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 22, 23))
a_hgc = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 24, 24))
marsupwt = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 66, 73))
wkswork = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 171, 172))
hrswk = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 181, 182))
rsnnotw = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 170, 170))
wsal_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 243, 248))
semp_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 256, 261))
frse_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 263, 267))
uc_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 278, 282))
wc_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 285, 289))
ss_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 291, 295))
ssi_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 297, 300))
paw_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 305, 309))
vet_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 317, 321))
srvs_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 337, 342))
dsab_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 360, 365))
rtm_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 379, 384))
int_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 386, 390))
div_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 393, 397))
rnt_val = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 399, 403))
ptotval = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 440, 447))
pearnval = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 448, 455))
pothval = as.numeric(substr(cps[which(is_household>40 & is_household<80),], 457, 464))
hours = wkswork * hrswk
youth = as.numeric(a_famrel>2 & (a_age<19 | (a_age>=19 & a_age<24 & rsnnotw==4)))
not_ind_child = a_famrel < 3 | youth==1
taxunit = rep(NA, NROW(cps[which(is_household>40),]))
kl6 = youth==1 & a_age<6
p = data.frame(pppos, p_f_pos, a_exprrp, a_famtyp, a_famnum, a_famrel, a_maritl, a_sex, a_race, a_age, a_reorgn,
a_hga, a_hgc, marsupwt, wkswork, hrswk, rsnnotw, wsal_val, semp_val, frse_val, uc_val, wc_val,
ss_val, ssi_val, paw_val, vet_val, srvs_val, dsab_val, rtm_val, int_val, div_val, ptotval, pearnval, pothval, hours, youth, taxunit, not_ind_child, kl6)
#colnames(p)[1] = "h_seq"
colnames(p)[2] = "f_seq_pos"
#print(dim(h))
#print(dim(f))
#print(dim(p))
first = merge(p, f, by="f_seq_pos")
#print(dim(first))
second = merge(first, h, by="h_seq")
#print(dim(second))
return(second)
} |
f63fc54134fb3ea1565ef0e329221af2e2b19859 | 0a906cf8b1b7da2aea87de958e3662870df49727 | /eive/inst/testfiles/cga_generate_chromosome/AFL_cga_generate_chromosome/cga_generate_chromosome_valgrind_files/1609871832-test.R | 05a7e822c80bbb695de0f354266057362f1e558a | [] | no_license | akhikolla/updated-only-Issues | a85c887f0e1aae8a8dc358717d55b21678d04660 | 7d74489dfc7ddfec3955ae7891f15e920cad2e0c | refs/heads/master | 2023-04-13T08:22:15.699449 | 2021-04-21T16:25:35 | 2021-04-21T16:25:35 | 360,232,775 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 503 | r | 1609871832-test.R | testlist <- list(vec = NULL, prob_vec = c(3.64479385580773e+88, 1.06525411223672e-306, 2.76394938498379e-306, 1.33490523347378e+241, -5.26664487232521e+173, 1.00532437556728e-231, -1.3867803071569e+234, 2.84286997340396e-306, 9.20102403264023e-311, 5.68221329302224e-304, 3.46847845393035e-308, 9.54172978829304e-314, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0))
result <- do.call(eive:::cga_generate_chromosome,testlist)
str(result) |
b882b437d0017b6532c03df303bdcc428479dc3a | ed2892ae0541e9d56f3b234edab712a33a281fe4 | /R/k_kienerES.R | 5c74d5aef39ae58b9aa6f3e977f8246e6f96950b | [] | no_license | cran/FatTailsR | c88ccd7d54de67723afb36a652fca1767b4a4caa | 82796785ae65af40ea504c05710f447788afc88a | refs/heads/master | 2021-07-14T21:46:26.585015 | 2021-03-12T08:00:02 | 2021-03-12T08:00:02 | 21,838,335 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 6,534 | r | k_kienerES.R |
#' @include j_kiener7.R
#' @title Quantile (VaR) and Expected Shortfall Corrective Functions
#'
#' @description
#' Quantile functions (or VaR) and Expected Shortfall of Kiener distributions
#' K1, K2, K3 and K4, usually calculated at pprobs2 = c(0.01, 0.025, 0.05, 0.95, 0.975, 0.99),
#' can be expressed as:
#' \enumerate{
#' \item Quantile of the logit function multiplied by a fat tail
#' (c)orrective function \code{ckiener1234};
#' \item Expected s(h)ortfall of the logistic function multiplied
#' by a corrective function \code{hkiener1234}.
#' }
#' Both functions \code{ckiener1234} and \code{hkiener1234} are independent of
#' the scale parameter \code{g} and are indirect measures of the tail curvature.
#' A value close to \code{1} indicates a model similar to the logistic function with
#' almost no curvature and probably parameter \code{k > 8}. When \code{k} (or \code{a,w})
#' decreases, the values of \code{c} and \code{h} increase and indicate some more
#' pronounced symmetric or asymmetric curvature, depending on values of \code{d,e}.
#' Note that if \code{(min(a,k,w) <= 1)}, \code{ckiener1234} still exists but
#' the expected shortfall and \code{hkiener1234} become undefined (\code{NA}).
#'
#' Some financial applications use threshold values on \code{ckiener1234} or
#' \code{hkiener1234} to select or discard stocks over time as they become
#' less or more risky.
#'
#' @param p numeric or vector of probabilities.
#' @param m numeric. parameter m used in model K1, K2, K3 and K4.
#' @param g numeric. parameter g used in model K1, K2, K3 and K4.
#' @param k numeric. parameter k used in model K1, K3 and K4.
#' @param a numeric. parameter a used in model K2.
#' @param w numeric. parameter w used in model K2.
#' @param d numeric. parameter d used in model K3.
#' @param e numeric. parameter e used in model K4.
#' @param lower.tail logical. If TRUE, use p. If FALSE, use 1-p.
#' @param log.p logical. If TRUE, probabilities p are given as log(p).
#' @param coefk vector or 7-column matrix representing the parameters
#' \code{c(m,g,a,k,w,d,e)} obtained from \code{\link{paramkienerX}}.
#'
#'
#' @seealso
#' \code{\link{logit}}, \code{\link{qkiener1}}, \code{\link{qkiener2}},
#' \code{\link{qkiener3}}, \code{\link{qkiener4}}, \code{\link{fitkienerX}}.
#'
#' @name ckiener1234
NULL
#' @export
#' @rdname ckiener1234
hkiener1 <- function(p, m = 0, g = 1, k = 3.2,
lower.tail = TRUE, log.p = FALSE) {
if (log.p) {p <- exp(p)}
if (!lower.tail) {p <- 1-p}
h <- p
for (i in seq_along(p)) {
h[i] <- ifelse(p[i] <= 0.5,
(ltmkiener1(p[i], m, g, k) - m) / (ltmlogis(p[i], m, g) - m),
(rtmkiener1(p[i], m, g, k) - m) / (rtmlogis(p[i], m, g) - m))
}
return(h)
}
#' @export
#' @rdname ckiener1234
hkiener2 <- function(p, m = 0, g = 1, a = 3.2, w = 3.2,
lower.tail = TRUE, log.p = FALSE) {
if (log.p) {p <- exp(p)}
if (!lower.tail) {p <- 1-p}
h <- p
for (i in seq_along(p)) {
h[i] <- ifelse(p[i] <= 0.5,
(ltmkiener2(p[i], m, g, a, w) - m) / (ltmlogis(p[i], m, g) - m),
(rtmkiener2(p[i], m, g, a, w) - m) / (rtmlogis(p[i], m, g) - m))
}
return(h)
}
#' @export
#' @rdname ckiener1234
hkiener3 <- function(p, m = 0, g = 1, k = 3.2, d = 0,
lower.tail = TRUE, log.p = FALSE) {
if (log.p) {p <- exp(p)}
if (!lower.tail) {p <- 1-p}
h <- p
for (i in seq_along(p)) {
h[i] <- ifelse(p[i] <= 0.5,
(ltmkiener3(p[i], m, g, k, d) - m) / (ltmlogis(p[i], m, g) - m),
(rtmkiener3(p[i], m, g, k, d) - m) / (rtmlogis(p[i], m, g) - m))
}
return(h)
}
#' @export
#' @rdname ckiener1234
hkiener4 <- function(p, m = 0, g = 1, k = 3.2, e = 0,
lower.tail = TRUE, log.p = FALSE) {
if (log.p) {p <- exp(p)}
if (!lower.tail) {p <- 1-p}
h <- p
for (i in seq_along(p)) {
h[i] <- ifelse(p[i] <= 0.5,
(ltmkiener4(p[i], m, g, k, e) - m) / (ltmlogis(p[i], m, g) - m),
(rtmkiener4(p[i], m, g, k, e) - m) / (rtmlogis(p[i], m, g) - m))
}
return(h)
}
#' @export
#' @rdname ckiener1234
hkiener7 <- function(p, coefk = c(0, 1, 3.2, 3.2, 3.2, 0, 0),
lower.tail = TRUE, log.p = FALSE) {
checkcoefk(coefk)
dck <- dimdimc(coefk)
m <- switch(dck, "1" = coefk[1], "2" = coefk[,1])
g <- switch(dck, "1" = coefk[2], "2" = coefk[,2])
a <- switch(dck, "1" = coefk[3], "2" = coefk[,3])
w <- switch(dck, "1" = coefk[5], "2" = coefk[,5])
if (log.p) {p <- exp(p)}
if (!lower.tail) {p <- 1-p}
h <- p
names(h) <- getnamesk(p)$nhesk
for (i in seq_along(p)) {
h[i] <- ifelse(p[i] <= 0.5,
(ltmkiener2(p[i], m, g, a, w) - m) / (ltmlogis(p[i], m, g) - m),
(rtmkiener2(p[i], m, g, a, w) - m) / (rtmlogis(p[i], m, g) - m))
}
return(h)
}
#' @export
#' @rdname ckiener1234
ckiener1 <- function(p, k = 3.2, lower.tail = TRUE, log.p = FALSE) {
if (log.p) {p <- exp(p)}
if (!lower.tail) {p <- 1-p}
l <- qlogis(p)
z <- k/l * sinh(l/k)
z[is.nan(z)] <- 1
return(z)
}
#' @export
#' @rdname ckiener1234
ckiener2 <- function(p, a = 3.2, w = 3.2, lower.tail = TRUE, log.p = FALSE) {
if (log.p) {p <- exp(p)}
if (!lower.tail) {p <- 1-p}
l <- qlogis(p)
k <- aw2k(a, w)
e <- aw2e(a, w)
z <- k/l * sinh(l/k) * exp(l/k * e)
z[is.nan(z)] <- 1
return(z)
}
#' @export
#' @rdname ckiener1234
ckiener3 <- function(p, k = 3.2, d = 0, lower.tail = TRUE, log.p = FALSE) {
if (log.p) {p <- exp(p)}
if (!lower.tail) {p <- 1-p}
l <- qlogis(p)
z <- k/l * sinh(l/k) * exp(l * d)
z[is.nan(z)] <- 1
return(z)
}
#' @export
#' @rdname ckiener1234
ckiener4 <- function(p, k = 3.2, e = 0, lower.tail = TRUE, log.p = FALSE) {
if (log.p) {p <- exp(p)}
if (!lower.tail) {p <- 1-p}
l <- qlogis(p)
z <- k/l * sinh(l/k) * exp(l/k * e)
z[is.nan(z)] <- 1
return(z)
}
#' @export
#' @rdname ckiener1234
ckiener7 <- function(p, coefk = c(0, 1, 3.2, 3.2, 3.2, 0, 0),
lower.tail = TRUE, log.p = FALSE) {
checkcoefk(coefk)
k <- switch(dimdimc(coefk), "1" = coefk[4], "2" = coefk[,4])
e <- switch(dimdimc(coefk), "1" = coefk[7], "2" = coefk[,7])
if (log.p) {p <- exp(p)}
if (!lower.tail) {p <- 1-p}
l <- qlogis(p)
z <- k/l * sinh(l/k) * exp(l/k * e)
z[is.nan(z)] <- 1
return(z)
}
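# A small numerical sketch (an illustration added here, not part of the package)
# of why the NaN produced at p = 0.5 is replaced by 1 in the c-functions above:
# at p = 0.5 we have l = qlogis(0.5) = 0, and sinh(x)/x -> 1 as x -> 0, so
# setting the value to 1 is the continuous extension of k/l * sinh(l/k).

```r
# Evaluate k/l * sinh(l/k) for probabilities approaching 0.5
k <- 3.2
l <- qlogis(c(0.49, 0.499, 0.4999))
k/l * sinh(l/k)  # values approach 1 as p -> 0.5
```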
|
5d8e91753be68a42e922a7950e678119c40715a1 | a2486374afa095172e596aed81b16ea9788aa4ea | /machine-learning-ex3/ex3R/oneVsAll.R | 0916e28278454397f35f3b0ffa14cc0f327e2d52 | [] | no_license | jerryhsieh/Stanfor-Machine-Learning | 41bd0dbd95f5f2484d3b9967de451a38d909332a | f35658e5465596ff810536e1e822ae7c76b458b7 | refs/heads/master | 2021-01-10T09:06:18.649084 | 2017-02-27T01:42:19 | 2017-02-27T01:42:19 | 36,797,928 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 2,391 | r | oneVsAll.R | oneVsAll <- function (X, y, num_labels, lambda) {
#ONEVSALL trains multiple logistic regression classifiers and returns all
#the classifiers in a matrix all_theta, where the i-th row of all_theta
#corresponds to the classifier for label i
# [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels
# logisitc regression classifiers and returns each of these classifiers
# in a matrix all_theta, where the i-th row of all_theta corresponds
# to the classifier for label i
# Some useful variables
m = NROW(X)
n = NCOL(X)
# You need to return the following variables correctly
all_theta = matrix(0, num_labels, n + 1)
# Add ones to the X data matrix
X = cbind(rep(1, m), X)
# ====================== YOUR CODE HERE ======================
# Instructions: You should complete the following code to train num_labels
# logistic regression classifiers with regularization
# parameter lambda.
#
# Hint: theta(:) will return a column vector.
#
# Hint: You can use y == c to obtain a vector of 1's and 0's that tell use
# whether the ground truth is true/false for this class.
#
# Note: For this assignment, we recommend using fmincg to optimize the cost
# function. It is okay to use a for-loop (for c = 1:num_labels) to
# loop over the different classes.
#
# fmincg works similarly to fminunc, but is more efficient when we
# are dealing with large number of parameters.
#
# Example Code for fmincg:
#
# % Set Initial theta
# initial_theta = zeros(n + 1, 1);
#
# % Set options for fminunc
# options = optimset('GradObj', 'on', 'MaxIter', 50);
#
# % Run fmincg to obtain the optimal theta
# % This function will return theta and the cost
# [theta] = ...
# fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
# initial_theta, options);
#
source("fmincg.R")
source("lrCostFunction.R")
for (c in 1:num_labels) {
initial_theta = matrix(0, n + 1, 1)
# options = optimset('GradObj', 'on', 'MaxIter', 50);
# theta = fmincg(@(t)(lrCostFunction(t, X, (y == c), lambda)), initial_theta, options);
JandG <- fmincg(lrCostFunction, initial_theta, Maxiter = 50, X, (y==c) + 0, lambda)
theta <- JandG$par
all_theta[c,] = as.vector(theta)
}
return(all_theta)
# =========================================================================
} |
b535ff441d708db594e6188fc45c6b20bbea41b6 | c874e55ec73043f6b837601cc58d855d37649e59 | /mlcenzer/stats/morph_stats.R | 77467ab67c41942c6c4ec5545ccc6a0f30251731 | [] | no_license | mlcenzer/SBB-dispersal | 85c54c924b399834a798d700cabf0b2702ae0755 | 1a777370986f83186180552a09149dfba72b96d0 | refs/heads/master | 2022-12-11T10:13:32.416530 | 2022-12-03T16:23:52 | 2022-12-03T16:23:52 | 229,098,494 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,947 | r | morph_stats.R | rm(list=ls())
setwd("~/Documents/Florida soapberry project/2019 Dispersal/SBB-dispersal git/Meredith is working on/")
data_all<-read.csv("data/full_data3.csv", header=TRUE, sep=",", quote="", stringsAsFactors=FALSE)
data_morph<-read.csv("data/bug_morphology_flight-trials-Autumn2019.csv", header=TRUE, sep=",", quote="", stringsAsFactors=FALSE)
#data_all<-data_all[data_all$flew!="",]
#data_all$flew_b<-0
#data_all$flew_b[data_all$flew=="Y"]<-1
data_morph$host_c[data_morph$pophost=="K.elegans"]<-1
data_morph$host_c[data_morph$pophost=="C.corindum"]<- -1
data_morph$sex_c<--1
data_morph$sex_c[data_morph$sex=="F"]<-1
data_morph$lat_c<-data_morph$lat-mean(data_morph$lat)
data_morph$sym_dist<-abs(data_morph$lat-25.49197)
data_morph$thor_c<-data_morph$thorax-mean(data_morph$thorax, na.rm=TRUE)
data_morph$w_morph_b<-NA
data_morph$w_morph_b[data_morph$w_morph=="L"]<-1
data_morph$w_morph_b[data_morph$w_morph=="S"]<-0
###Found one sex conflict, double-check sex between data_all and data_morph
test_data<-data.frame(ID=data_morph$ID, morph_sex=data_morph$sex, trial_sex=NA)
for(row in 1:length(test_data$ID)){
test_data$trial_sex[row]<-data_all$sex[data_all$ID==test_data$ID[row]][1]
}
test_data$ID[which(test_data$trial_sex!=test_data$morph_sex)]
#in addition to the identified 253, possible problems with 68, 116, 118, and 399
#data_all$eggs_b<-0
#data_all$eggs_b[data_all$eggs=="Y"]<-1
#data_all$ID<-as.factor(data_all$ID)
#data_all$min_from_start<-0
min_start_fun<-function(){
for(row in 1:length(data_all$days_from_start)){
minute<-as.numeric(strsplit(strsplit(data_all$time_start[row], ":")[[1]][2], '[ap]m'))
hour<-as.numeric(strsplit(data_all$time_start[row], ":")[[1]][1])
if(hour>=7){
data_all$min_from_start[row]<-60*(hour-8)+minute
}
if(hour<7){
data_all$min_from_start[row]<-60*(hour+4)+minute
}
}
#the function modifies a local copy, so return it; run as: data_all <- min_start_fun()
return(data_all)
}
##########Start with thorax width
data<-data.frame(R=data_morph$thorax, A=data_morph$host_c, B=data_morph$sex_c, C=data_morph$sym_dist)
library(lme4)
#run AICprobs script
setwd("/Users/Meredith/Desktop/Documents/R/generic models/")
source("AICprobabilities.R")
source("generic models-gaussian glm 3-FF.R")
sort(summary$AIC)
sort(P, decreasing=TRUE, index.return=TRUE)
###top 6: m12, m4, m14, m16, m7, m8
anova(m4, m7, test="Chisq")##No improvement from adding sym_dist
anova(m7, m12, test="Chisq")##Significant improvement from adding host*sym_dist interaction
anova(m4, m8, test="Chisq")##No improvement from adding host*sex interaction
anova(m12, m14, test="Chisq")##No improvement from adding host*sex
anova(m12, m16, test="Chisq")##No improvement from adding sex*sym_dist
anova(m4, m8, test="Chisq")##No improvement from adding host*sex
model_thorax<-glm(thorax~host_c*sym_dist + sex_c, data=data_morph, family=gaussian)
summary(model_thorax)
###Results:
#Females are bigger
#Individuals get smaller moving away from the sympatric zone
#But bugs further from the sympatric zone shrink less if they are from K. elegans
summary_thorax<-aggregate(thorax~sex*pophost*sym_dist, data=data_morph, FUN=mean)
plot(thorax~sym_dist, col=c(rgb(1,0.5,0.5),rgb(0,1,0.8,0.5))[as.factor(pophost)], pch=c(19,21)[as.factor(sex)], data=summary_thorax)
##########Beak length, must also include body size as a covariate
data<-data.frame(R=data_morph$beak, A=data_morph$host_c, B=data_morph$sex_c, C=data_morph$sym_dist, D=data_morph$thor_c)
library(lme4)
#run AICprobs script
setwd("/Users/Meredith/Desktop/Documents/R/generic models/")
source("AICprobabilities.R")
source("generic models-gaussian glm 4-FF.R")
sort(summary$AIC)
sort(P, decreasing=TRUE, index.return=TRUE)
###top 5: m72, m50, m88, m62, m92
##These are unfortunately complex
anova(m72, m50, test="Chisq")##Marginal improvement from inclusion of sex*host interaction (so should examine both m72 and m50)
anova(m72, m88, test="Chisq")##No improvement from adding sym_dist
anova(m88, m92, test="Chisq")##No improvement from adding host*sym_dist interaction
anova(m50, m62, test="Chisq")##No improvement from adding sym_dist
anova(m88, m62, test="Chisq")##Marginal improvement from adding sex*host interaction
model_beak1<-glm(beak~host_c*sex_c + host_c*thor_c + sex_c*thor_c, data=data_morph, family=gaussian)
summary(model_beak1)
model_beak2<-glm(beak~host_c*thor_c + sex_c*thor_c, data=data_morph, family=gaussian)
summary(model_beak2)
#Given these three interactions, it seems highly possible that a three-way interaction would improve this model; much as I might regret this, I'm going to try that model too.
model_beak_3<-glm(beak~host_c*sex_c*thor_c, data=data_morph, family=gaussian)
anova(model_beak1, model_beak_3, test="Chisq")#Thank goodness it doesn't improve it
###There is a lot going on here. In both models:
#Bugs from K.elegans have shorter beaks
#Females have longer beaks
#bigger bugs have longer beaks (duh)
#The negative effect of being from K.elegans on beak length is marginally weaker for females (MODEL 1 ONLY)
#The positive effect of body size on beak length is weaker for bugs from K.elegans
#The positive effect of body size on beak length is stronger for females
#visualize it:
boxplot(beak~pophost*sex, data=data_morph) #without body size
plot(beak~thorax, data=data_morph, col=c(rgb(1,0.5,0.5),rgb(0,1,0.8,0.5))[as.factor(pophost)], pch=c(19,21)[as.factor(sex)])
##########Wing morph; because most short-wing morphs were excluded a priori, this doesn't really make sense to ask with this subset of individuals.
##########Wing length (L only), must also include body size as a covariate
data_L<-data_morph[data_morph$w_morph_b==1,]
data<-data.frame(R=data_L$wing, A=data_L$host_c, B=data_L$sex_c, C=data_L$sym_dist, D=data_L$thor_c)
library(lme4)
#run AICprobs script
setwd("/Users/Meredith/Desktop/Documents/R/generic models/")
source("AICprobabilities.R")
source("generic models-gaussian glm 4-FF.R")
sort(summary$AIC)
sort(P, decreasing=TRUE, index.return=TRUE)
#top 4 models: m32, m51, m39, m48
anova(m32, m39, test="Chisq")##No improvement from sex (?!)
anova(m32, m51, test="Chisq")##No improvement from adding host*thorax
anova(m32, m48, test="Chisq")##No improvement from adding host*sym_dist
model_wing<-glm(wing~host_c + sym_dist*thor_c, data=data_L, family=gaussian)
summary(model_wing)
##
#Bugs from K. elegans have longer wings
#Bugs further from the sympatric zone have shorter wings
#Larger bugs have longer wings
#Closer to the sympatric zone, the effect of thorax width on wing size is stronger
####The correlation between thorax and wing length is incredibly high, so interpret other effect sizes carefully
#visualize
plot(wing~thorax, data=data_L, col=c(rgb(1,0.5,0.5),rgb(0,1,0.8,0.5))[as.factor(pophost)], pch=c(19,21)[as.factor(sex)])
plot(wing~sym_dist, data=data_L, col=c(rgb(1,0.5,0.5),rgb(0,1,0.8,0.5))[as.factor(pophost)], pch=c(19,21)[as.factor(sex)])
wing_length_summary<-aggregate(wing~sex*pophost*sym_dist, data=data_L, FUN=mean)
wing_length_summary
|
f945158c6f9a742e69d8a4e1c5cb79d796c5d074 | f8fb2645b2bb7624b7c37c1adc7e456955dde8cd | /man/ly_image.Rd | f183c03eb79ff76e774727b2b3ab06b72a20689b | [
"MIT"
] | permissive | fxcebx/rbokeh | 94608202871cca2cb9271173ceb56be879e09a5e | 251a7f72832451ffceaad8a8d061d52c9b069049 | refs/heads/master | 2021-01-18T13:10:17.529497 | 2016-01-05T00:10:41 | 2016-01-05T00:10:41 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,001 | rd | ly_image.Rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/layer_image.R
\name{ly_image}
\alias{ly_image}
\title{Add an "image" layer to a Bokeh figure}
\usage{
ly_image(fig, z, rows, cols, x = 0, y = 0, dw = 1, dh = 1,
palette = "Spectral10", dilate = FALSE, lname = NULL, lgroup = NULL)
}
\arguments{
\item{fig}{figure to modify}
\item{z}{matrix or vector of image values}
\item{rows}{if \code{z} is a vector, how many rows should be used in treating it as a matrix}
\item{cols}{if \code{z} is a vector, how many columns should be used in treating it as a matrix}
\item{x}{lower left x coordinates}
\item{y}{lower left y coordinates}
\item{dw}{image width distances}
\item{dh}{image height distances}
\item{palette}{name of color palette to use for color ramp (see \href{http://bokeh.pydata.org/en/latest/docs/reference/palettes.html}{here} for acceptable values)}
\item{dilate}{logical - whether to dilate pixel distance computations when drawing}
\item{lname}{layer name}
\item{lgroup}{layer group}
}
\description{
Draws a grid of rectangles with colors corresponding to the values in \code{z}
}
\examples{
\donttest{
p <- figure(xlim = c(0, 1), ylim = c(0, 1), title = "Volcano") \%>\%
ly_image(volcano) \%>\%
ly_contour(volcano)
p
}
}
\seealso{
Other layer functions: \code{\link{ly_abline}};
\code{\link{ly_annular_wedge}}; \code{\link{ly_annulus}};
\code{\link{ly_arc}}; \code{\link{ly_bezier}};
\code{\link{ly_boxplot}}; \code{\link{ly_contour}};
\code{\link{ly_crect}}; \code{\link{ly_curve}};
\code{\link{ly_density}}; \code{\link{ly_hist}};
\code{\link{ly_image_url}}; \code{\link{ly_lines}};
\code{\link{ly_map}}; \code{\link{ly_multi_line}};
\code{\link{ly_oval}}; \code{\link{ly_patch}};
\code{\link{ly_points}}; \code{\link{ly_polygons}};
\code{\link{ly_quadratic}}; \code{\link{ly_quantile}};
\code{\link{ly_ray}}; \code{\link{ly_rect}};
\code{\link{ly_segments}}; \code{\link{ly_text}};
\code{\link{ly_wedge}}
}
|
a24a1f54b530617b849c20fc9fa7a034a6c09d75 | 8ae75ba8be59e47b20103c536b22e5d6df5ebbe4 | /man/srmsymc_read_setup.Rd | f3c54f4fbedbbb2cbd7112f443ef0683f09e8e1c | [] | no_license | AndyCampbell/msy | 361c9fbb82310611b5fbeaa14a998984f3711dbe | 107a0321e84edab191f668ec14cc392bd1da11e5 | refs/heads/master | 2020-12-03T05:19:51.796250 | 2014-11-13T18:37:57 | 2014-11-13T18:37:57 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 316 | rd | srmsymc_read_setup.Rd | % Generated by roxygen2 (4.0.1): do not edit by hand
\name{srmsymc_read_setup}
\alias{srmsymc_read_setup}
\title{Read the srmsymc setup}
\usage{
srmsymc_read_setup(path)
}
\arguments{
\item{path}{The path to the directory that stores the srmsymc results}
}
\description{
Reads the setup from the srmsymc.dat file
}
|
f3cee86981e60534740a593ec87aede32291c8b0 | 84bc15d548ca9a86696cf329fb037d7909eab88c | /tests/testthat/test-crosswalks.R | 2cea07cdd682f72fc279795b400a36f4b698f61f | [
"MIT"
] | permissive | tdjames1/dataspice | b89d84006497979017b2c9fe715c4d66da81c8a3 | 26f76857c7cda7471c13a89465b6b4fcd8fa03de | refs/heads/main | 2023-04-24T01:30:27.690311 | 2021-03-25T07:03:35 | 2021-03-25T07:03:35 | 364,992,372 | 0 | 0 | NOASSERTION | 2021-05-06T17:47:26 | 2021-05-06T17:47:25 | null | UTF-8 | R | false | false | 609 | r | test-crosswalks.R | context("crosswalks")
test_that("test crosswalk_datetime", {
expect_equal(
crosswalk_datetime("FOO"),
list(calendarDate = "FOO", time = NULL))
expect_equal(
crosswalk_datetime("1990"),
list(calendarDate = "1990", time = NULL))
expect_equal(
crosswalk_datetime("1990-01-01"),
list(calendarDate = "1990-01-01", time = NULL))
expect_equal(
crosswalk_datetime("1990-01-01 00:00:00"),
list(calendarDate = "1990-01-01", time = "00:00:00"))
expect_equal(
crosswalk_datetime("1990-01-01T00:00:00.000Z"),
list(calendarDate = "1990-01-01", time = "00:00:00.000Z"))
})
|
85f21cce9f87d544c28e66d2959af66f46910acd | 0d1fec9f827ee26b046d387689a5042e8adc8c76 | /run_analysis.R | 19083498049ab29a6fcde3982009eeb818aa67ac | [] | no_license | mzamyadi/Coursera_Getting-and--Cleaning--Data | 4e3a28ac28decda75636608243318aad74929712 | 3c1d729440070aa2dd9c4b6968624995487a2f2b | refs/heads/master | 2023-04-21T02:46:30.981975 | 2021-05-05T21:31:24 | 2021-05-05T21:31:24 | 364,586,355 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,091 | r | run_analysis.R | # Coursera's "Getting and Cleaning Data" Course Project
# Author: Mojdeh Zamyadi
# This R script performs the following:
# 1. Merges the training and the test sets to create one data set.
# 2. Extracts only the measurements on the mean and standard deviation for each measurement.
# 3. Uses descriptive activity names to name the activities in the data set
# 4. Appropriately labels the data set with descriptive variable names.
# 5. From the data set in step 4, creates a second, independent tidy data set with the average of each variable for each activity and each subject.
# First, load required packages and get the data
library(data.table)
url <- "https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
destfile<-file.path(getwd(),"ProjectData.zip")
download.file(url, destfile)
unzip(zipfile = "ProjectData.zip")
# Load activity labels and features files
activityLabels <- fread(file.path(getwd(), "UCI HAR Dataset/activity_labels.txt"))
colnames(activityLabels) = c("index", "activityLabels")
features <- fread(file.path(getwd(), "UCI HAR Dataset/features.txt"))
colnames(features) = c("index", "features")
# Extracts only the measurements on the mean and standard deviation for each measurement.
featuresExtracted <- grep("(mean|std)\\(\\)", features[, features])
measurements_wanted <- features[featuresExtracted, features]
measurements_wanted <- gsub('[()]', '', measurements_wanted)
# Load train datasets
train <- fread(file.path(getwd(), "UCI HAR Dataset/train/X_train.txt"))[, ..featuresExtracted]
colnames(train) = measurements_wanted
trainLabels <- fread(file.path(getwd(), "UCI HAR Dataset/train/Y_train.txt")
                     , col.names = c("activityLabel"))
trainSubjects <- fread(file.path(getwd(), "UCI HAR Dataset/train/subject_train.txt")
                       , col.names = c("SubjectNum"))
train <- cbind(trainSubjects, trainLabels, train)
# Load test datasets
test <- fread(file.path(getwd(), "UCI HAR Dataset/test/X_test.txt"))[, ..featuresExtracted]
colnames(test) = measurements_wanted
testLabels <- fread(file.path(getwd(), "UCI HAR Dataset/test/Y_test.txt")
                    , col.names = c("activityLabel"))
testSubjects <- fread(file.path(getwd(), "UCI HAR Dataset/test/subject_test.txt")
                      , col.names = c("SubjectNum"))
test <- cbind(testSubjects, testLabels, test)
# merge datasets
combined <- rbind(train, test)
# Convert class lables (1-6) to activity names based on info in "activity_labels" file
combined[["activityLabel"]] <- factor(combined[, activityLabel]
, levels = activityLabels[["classLabels"]]
, labels = activityLabels[["activityName"]])
# conver subject names into factors
combined[["SubjectNum"]] <- as.factor(combined[, SubjectNum])
combined_new <- melt(data = combined, id = c("SubjectNum", "activityLabel"))
combined_final <- dcast(data = combined_new, SubjectNum + activityLabel ~ variable, fun.aggregate = mean)
data.table::fwrite(x = combined_final, file = "tidyData.txt", quote = FALSE) |
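# A quick read-back check could be appended here (a sketch, not part of the
# original course submission): the tidy file should contain exactly one row
# per subject/activity combination.

```r
# Re-read the tidy output and confirm one row per subject/activity pair
tidy <- data.table::fread("tidyData.txt")
nrow(tidy) == length(unique(tidy$SubjectNum)) * length(unique(tidy$activityLabel))
```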
c3754efd4926c03050b38de3ff79655a5760178c | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/HMP/examples/MC.Xmcupo.statistics.Rd.R | 84732ff033c0fc8c48c7751011e1f98d7fae0e34 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,257 | r | MC.Xmcupo.statistics.Rd.R | library(HMP)
### Name: MC.Xmcupo.statistics
### Title: Size and Power of Several Sample RAD-Probability Mean Test
### Comparisons: Unknown Vector of Proportion
### Aliases: MC.Xmcupo.statistics
### ** Examples
data(saliva)
data(throat)
data(tonsils)
### Get a list of dirichlet-multinomial parameters for the data
fit.saliva <- DM.MoM(saliva)
fit.throat <- DM.MoM(throat)
fit.tonsils <- DM.MoM(tonsils)
### Set up the number of Monte-Carlo experiments
### We use 1 for speed, should be at least 1,000
numMC <- 1
### Generate the number of reads per sample
### The first number is the number of reads and the second is the number of subjects
Nrs1 <- rep(12000, 10)
Nrs2 <- rep(12000, 19)
group.Nrs <- list(Nrs1, Nrs2)
group.theta <- c(fit.throat$theta, fit.tonsils$theta)
pi0 <- fit.saliva$pi
### Computing size of the test statistics (Type I error)
group.theta <- c(fit.throat$theta, fit.tonsils$theta)
pval1 <- MC.Xmcupo.statistics(group.Nrs, numMC, pi0, group.theta=group.theta, type="hnull")
pval1
### Computing Power of the test statistics (Type II error)
group.pi <- rbind(fit.throat$pi, fit.tonsils$pi)
pval2 <- MC.Xmcupo.statistics(group.Nrs, numMC, group.pi=group.pi, group.theta=group.theta)
pval2
|
e26cf0c3d8e5358a8c5aa4ad47387d387f126e0e | 201398772b3822744c6fb77529880ca974a795fb | /man/coloc-class.Rd | b558fd1f5a71ae2e9eae81162927d578fc51139d | [] | no_license | Hui-Guo/coloc | bbcfe99c2a38015f47e8bd537279af638b12b790 | 4904d3b2a9fb674759ba7ab7b54395698ecbbd58 | refs/heads/master | 2020-12-11T03:25:56.651519 | 2013-12-09T11:34:09 | 2013-12-09T11:34:09 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,240 | rd | coloc-class.Rd | \docType{class}
\name{coloc-class}
\alias{coloc-class}
\alias{colocBMA}
\alias{colocBayes-class}
\alias{colocBayesBMA}
\title{Classes \code{"coloc"} and \code{"colocBayes"}}
\description{
Classes designed to hold objects returned by function
\code{\link{coloc.test}} which performs a test of the
null hypothesis that two genetic traits colocalise - that
they share a common causal variant.
}
\section{Objects from the Class}{
Objects can be created by calls to the function
\code{\link{coloc.test}()}. Class \code{colocBayes}
extends class \code{coloc}.
}
\examples{
showClass("coloc")
showClass("colocBayes")
}
\author{
Chris Wallace.
}
\references{
Wallace et al (2012). Statistical colocalisation of
monocyte gene expression and genetic risk variants for
type 1 diabetes. Hum Mol Genet 21:2815-2824.
\url{http://europepmc.org/abstract/MED/22403184}
Plagnol et al (2009). Statistical independence of the
colocalized association signals for type 1 diabetes and
RPS26 gene expression on chromosome 12q13. Biostatistics
10:327-34.
\url{http://www.ncbi.nlm.nih.gov/pubmed/19039033}
}
\seealso{
\code{\link{coloc.test}},
\code{\link{coloc.test.summary}}, \code{\link{coloc.bma}}
}
\keyword{classes}
|
7b651a07d2c9ac95995beb84c4ce9e7f28ac3ca4 | 519578112ec38f95e7c9ec28543b1963b8639f1b | /man/filter_dups_.Rd | 3387fef65e5bf0022a0f75bf8cda7d2463e81349 | [] | no_license | pachevalier/tricky | 62416b2686493c569967c76d98deb2f5215d60a7 | b9e58e962d773984b8bc389fe1c3a8e4f53944c1 | refs/heads/master | 2020-05-29T12:28:26.514050 | 2018-04-10T12:00:26 | 2018-04-10T12:00:26 | 44,686,532 | 0 | 2 | null | null | null | null | UTF-8 | R | false | true | 575 | rd | filter_dups_.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tibbles.R
\name{filter_dups_}
\alias{filter_dups_}
\title{Filter duplicates}
\usage{
filter_dups_(.data, .groups)
}
\arguments{
\item{.data}{a data frame}
\item{.groups}{a grouping variable}
}
\value{
a data frame
}
\description{
filter_dups_() takes a data frame and returns only duplicated rows
}
\examples{
table_test <- tibble::tibble(v1 = c("A", "A", "B", "C"), v2 = c("a", "b", "b", "c"))
filter_dups_(.data = table_test, .groups = ~ v1)
filter_dups_(.data = table_test, .groups = ~ v2)
}
|
afda36307eb061fe69acfc2f1fa36d95e9061db1 | 997294803275aa3441464fe804e30e616f765651 | /man/transfer_bin_raster.Rd | 38bb2a369e5f5a6cc59c6038c77a3fe0e9c04acd | [] | no_license | cran/geoTS | ce24c3ebc3b6bc40ee9897da9483b4f151d9ecba | f66e073e8182bbbde505767b0bcaa9ff5fb4e442 | refs/heads/master | 2022-11-24T13:58:19.461113 | 2022-11-17T17:10:18 | 2022-11-17T17:10:18 | 245,601,719 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 2,008 | rd | transfer_bin_raster.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/transfer_bin_raster.R
\name{transfer_bin_raster}
\alias{transfer_bin_raster}
\title{Transfer values from a binary image file to a raster file}
\usage{
transfer_bin_raster(
inputPath,
outputPath,
master,
what = integer(),
signed = TRUE,
endian = "little",
size = 2,
format = "GTiff",
dataType = "INT2S",
overwrite = TRUE
)
}
\arguments{
\item{inputPath}{character with full path name of input file(s).}
\item{outputPath}{character with full path name (where the raster files will be saved).}
\item{master}{character with full path name of a raster file; extent and projection
of this file are applied to this function output.}
\item{what}{See \code{\link[base]{readBin}}. Default \code{integer()}.}
\item{signed}{See \code{\link[base]{readBin}}. Default \code{TRUE}.}
\item{endian}{See \code{\link[base]{readBin}}. Default \code{"little"}.}
\item{size}{integer, number of bytes per element in the byte stream, default 2. See \code{\link[base]{readBin}}.}
\item{format}{character, output file type. See \code{\link[raster]{writeFormats}}.}
\item{dataType}{character, output data type. See \code{\link[raster]{dataType}}.}
\item{overwrite}{logical, default \code{TRUE}, should the resulting raster be overwritten.}
}
\value{
At the designated path (\code{outputPath}) the user will find \code{TIF} file(s).
}
\description{
Get the values of a binary file (in integer format) and transfer them to a raster file. All formats
considered in \code{\link[raster]{writeRaster}} are allowed.
}
\examples{
\donttest{
inputPath = system.file("extdata", package = "geoTS")
masterFile = system.file("extdata", "master.tif", package = "geoTS")
transfer_bin_raster(inputPath = inputPath, outputPath = inputPath,
master = masterFile, what = integer(),
signed = TRUE, endian = "little", size = 2,
format = "GTiff", dataType = "INT2S", overwrite = TRUE)
}
}
|
d2f1eea5926b0de5e53e308abc81f20112ebc42f | f8302d2843f1cc1f831c9c040b4ef992418d4a2b | /ClubLectura_R4DS_Sesion4_2021-06-24/Scripts/ClubLectura_R4DS_Sesion4_Cap21.R | 3b4d15e3b29d7b2e7d659b71b3f71c5d429a39eb | [] | no_license | rladies/meetup-presentations_galapagos-islands | fc640d9d3c92382baceeb0f1224eef9ca2904bfb | 04e10005274139102ba71e85545e791580f2bdce | refs/heads/master | 2022-07-25T17:44:06.779368 | 2022-06-21T22:47:13 | 2022-06-21T22:47:13 | 231,669,176 | 7 | 5 | null | null | null | null | UTF-8 | R | false | false | 11,694 | r | ClubLectura_R4DS_Sesion4_Cap21.R | ###########################################################################
#Reading Club:
#R for Data Science (Wickham and Grolemund)
#R-Ladies Barranquilla, Galapagos, Guayaquil and Milagro
#
#Session 4: Chapter 21 (Iteration)
#Script by: Denisse Fierro Arcos (R-Ladies Galapagos)
###########################################################################
# Loading libraries --------------------------------------------------------
library(tidyverse)
library(palmerpenguins)
library(magrittr)
# For loops ----------------------------------------------------------------
#Load the Palmer penguins dataset
ping <- penguins
#Let's check what this dataset looks like
glimpse(ping)
#Use a loop to calculate the mean of the bill, flipper and body mass
#measurements
#First we create an empty vector to store the results
promedios1 <- vector()
#We build our loop over the names of the columns of interest
for(i in names(ping[3:6])){
#Store the results in the empty vector we created
promedios1 <- append(promedios1, mean(ping[[i]], na.rm = T))
}
#Check the results
promedios1
#Other useful options for for loops
for(i in 1:ncol(ping)){
print(i)
}
#This option makes the loop very flexible because the number of iterations
#depends on the size of the input
for(i in seq_along(ping)){
print(i)
}
for(i in names(ping)){
print(i)
}
#Using for loops to change data in existing variables
#Suppose we want to standardise the penguins' morphometric data
for(i in 3:6){
ping[[i]] <- as.numeric(scale(ping[[i]]))
}
ping
#Recompute the means with the loop we built earlier
promedios <- vector()
for(i in names(ping[3:6])){
#Store the results in the empty vector we created
promedios <- append(promedios, mean(ping[[i]], na.rm = T))
}
#Check the results
promedios
#The values are much closer to zero. Our loop worked.
#It is worth highlighting that we can also use a list to store
#results
medidas <- ping %>%
select(bill_length_mm:body_mass_g)
lista_promedios <- list()
for(i in seq_along(medidas)){
#Store the results in the empty list we created
lista_promedios[[i]] <- mean(medidas[[i]], na.rm = T)
}
#Check the results
lista_promedios
#We can turn it into a vector
promedios1 <- unlist(lista_promedios)
#Compare with the previous results
promedios; promedios1
# While loops ---------------------------------------------------------------
#A while loop keeps running for as long as a condition holds
i <- 10
while(i > 5){
i <- i-1
print(i)
}
#We can mix several control structures
for(i in 1:10){
if(i %% 2 == 0){
print(sprintf("%1.0f is an even number", i))
}else(print(sprintf("%1.0f is an odd number", i)))}
#Equivalent
i <- 1
while(i <= 10){
if(i %% 2 == 0){
print(sprintf("%1.0f is an even number", i))
}else(print(sprintf("%1.0f is an odd number", i)))
i <- i+1
}
#Loops with more than one condition
medidas <- medidas %>% drop_na()
for(i in 1:nrow(medidas)){
if(medidas$bill_depth_mm[i] > 20 & medidas$body_mass_g[i] >= 3500){
print(medidas[i,])
}else(print("No"))
}
# Functional programming ----------------------------------------------------
#Apply functions
#Load the penguins dataset again
ping <- penguins
glimpse(ping)
#Calculate the mean of the morphometric columns with apply
prom <- apply(ping[,3:6], 2, mean, na.rm = T)
prom
#lapply offers an equivalent option, but instead of a vector it returns
#a list
prom1 <- lapply(ping[,3:6], mean, na.rm = T)
prom1
#sapply is another option, but it returns a vector
prom2 <- sapply(ping[,3:6], mean, na.rm = T)
#Compare all the results
prom2; unlist(prom1); prom
# Map functions --------------------------------------------------------------
#Part of the purrr package, which is included in the tidyverse
#If we want to calculate the same means as before we can use the
#map family
prom3 <- map_dbl(ping[,3:6], mean, na.rm = T)
#We can also combine it with other tidyverse packages
ping %>%
select(bill_length_mm:body_mass_g) %>%
map_dbl(mean, na.rm = T)
#Another option is to select every column containing doubles
ping %>%
select_if(is.double) %>%
map_dbl(mean, na.rm = T)
#Compare with the previous results
prom3; prom2; unlist(prom1); prom
#What happens if we use map_dbl on a non-numeric vector?
map_dbl(ping, mean, na.rm = T)
#We simply get no results for the non-numeric columns, plus a warning
#Asumamos que quisieramos calcular un modelo lineal que nos permita establecer si
#el largo de la aleta esta relacionada al peso de los pinguinos, pero queremos
#aplicar este modelo a cada especie de pinguino
modelos_especies <- ping %>%
#Dividimos nuestros datos de acuerdo a la especie
split(.$species) %>%
#Aplicamos los modelos lineales a cada especie por separado
map(~lm(flipper_length_mm ~ body_mass_g, data = .x))
#Recuerda que map siempre espera una lista y regresa una lista
#Veamos los resultados - utilizemos un for loop para automatizarlo
for(i in names(modelos_especies)){
print(i)
print(summary(modelos_especies[[i]]))
}
#O incluso mejor, utilizamos map
modelos_especies %>%
map(summary) %>%
#Podemos anadir una linea mas y extraemos el r^2 que nos da la correlacion entre
#variables
map_dbl("r.squared") #equivalente a map_dbl(~.x$r.squared)
#Esto seria equivalente a utilizar el siguiente codigo por cada especie
ping %>%
filter(species == "Adelie") %$%
  lm(flipper_length_mm ~ body_mass_g) %>%
summary
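#Added aside (a sketch, not part of the original lesson): coefficients can be
#pulled out the same way as r.squared above, e.g. the slope of body_mass_g
#for each species
modelos_especies %>%
  map(coef) %>%
  map_dbl("body_mass_g")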
# Dealing with errors --------------------------------------------------------
#The safely function - let's use it together with the square root
raizCuadrada <- safely(sqrt)
#Apply the function to a number
raizCuadrada(10)
#We get the result plus an error component, which in this case is empty
#If we apply it to a character string
raizCuadrada("10")
#We get an empty result and an explanation of the error in the operation
glimpse(raizCuadrada("10"))
#We can use safely with the map functions
miLista <- list(1:3, 4, "x")
miLista %>%
map(raizCuadrada)
#The code runs through and shows us where the errors occurred
#Compare with the result without safely
miLista %>%
map(sqrt)
#Not very convenient, because we simply get a failure
#But we can improve a bit how the errors are displayed
resultados <- miLista %>%
map(raizCuadrada) %>%
  #This way we show the results first and then the errors
transpose()
#Identify where there are no errors - error = NULL
no_errores <- resultados$error %>%
map_lgl(is_null)
#Show the list elements where no errors were detected
resultados$result[no_errores] %>%
flatten_dbl()
#Other relevant functions include possibly and quietly - let's compare
#Safely
miLista %>%
  map(safely(sqrt))
#Possibly
miLista %>%
  #With possibly we have to supply the default value returned on error
  map(possibly(sqrt, "not a number"))
#Quietly - does not capture errors, but does capture warnings
list(1, -1) %>%
map(quietly(sqrt)) %>%
transpose()
# Multiple arguments with map -----------------------------------------------
#Suppose we go back to the linear regression
grupos <- ping %>%
split(.$species)
resultados <- grupos %>%
map(~lm(body_mass_g ~ flipper_length_mm, data = .))
#What if we want predictions? We take the regression results
#and apply them to each group
predicciones <- map2(resultados, grupos, predict)
predicciones
#This is equivalent to
seq_along(resultados) %>%
map(~predict(resultados[[.x]], grupos[[.x]]))
#Or equivalent to repeating the following line of code for each group
predict(resultados$Gentoo, grupos$Gentoo)
#Or in a for loop
for(i in seq_along(grupos)){
print(names(grupos)[i])
print(predict(resultados[[i]], grupos[[i]]))
}
#If we have more than two arguments, then we use pmap
df <- data.frame(
x = c("manzana", "banana", "cereza"),
pattern = c("m", "n", "z"),
replacement = c("M", "N", "Z"),
stringsAsFactors = FALSE
)
pmap(df, gsub) %>% flatten_chr()
#In this case, gsub takes three arguments, so we use pmap
#This is equivalent to
for(i in 1:nrow(df)){
print(gsub(df$pattern[i], df$replacement[i], df$x[i]))
}
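#Added aside (a sketch, not part of the original lesson): with an explicit
#anonymous function the argument matching becomes visible, instead of relying
#on the column names silently matching gsub's argument names
pmap_chr(df, function(x, pattern, replacement) gsub(pattern, replacement, x))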
# Applying different functions to different data -----------------------------
#Suppose we want to get the mean of the bill length, the median of the
#flipper length, and the maximum of the body mass
#Create a list with the data of interest
datos_morfo <- list(list(x = ping$bill_length_mm, na.rm = T),
list(x = ping$flipper_length_mm, na.rm = T),
list(x = ping$body_mass_g, na.rm = T))
#Then we create a vector or list with the functions of interest
func <- list(mean, median, max)
#The function list must contain the same number of elements as the data
#list, so that each function is paired with its own arguments
#Apply the operations
invoke_map(func, datos_morfo)
#Compare with the manual option
mean(ping$bill_length_mm, na.rm = T)
median(ping$flipper_length_mm, na.rm = T)
max(ping$body_mass_g, na.rm = T)
#Alternatively, we can create a single tribble
datos_morfo2 <- tribble(~func, ~datos,
mean, list(x = ping$bill_length_mm, na.rm = T),
median, list(x = ping$flipper_length_mm, na.rm = T),
max, list(x = ping$body_mass_g, na.rm = T))
#Within the tribble we create a new column where we store the results
#of the operations
datos_morfo2 %>%
mutate(datos_morfo = invoke_map(func, datos)) %>%
  #Select the column we just created
select(datos_morfo) %>%
  #Extract the information from the list to show it as a vector
flatten() %>%
flatten_dbl()
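#Added aside (a sketch, not part of the original lesson): invoke_map() was
#later retired in purrr; the same result can be obtained with map2() plus
#exec(), splicing each stored argument list into its function
map2(func, datos_morfo, function(f, args) exec(f, !!!args))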
# The walk function ----------------------------------------------------------
#Create a scatterplot for each penguin species
fig <- ping %>%
split(.$species) %>%
map(~ggplot(.x, aes(bill_length_mm, body_mass_g, col = sex)) + geom_point()+
theme_bw()+
labs(x = "Largo del pico (mm)", y = "Peso del animal (g)"))
#Create the figure names with the pdf extension
nombres <- str_c(names(fig), ".pdf")
#Save the figures in the Figuras folder
pwalk(list(nombres, fig), ggsave, path = "ClubLectura_R4DS_Sesion4_2021-06-24/Figuras/", height = 8.9)
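#Added aside (a sketch, not part of the original lesson): walk2() is the
#two-input variant and is equivalent to the pwalk() call above; the output
#folder must already exist, e.g. via dir.create(). Kept commented out here
#to avoid writing files again.
# walk2(nombres, fig, ggsave, path = "ClubLectura_R4DS_Sesion4_2021-06-24/Figuras/", height = 8.9)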
# Other useful functions ------------------------------------------------------
#keep retains the columns that satisfy a condition
ping %>%
keep(is.double)
#Equivalent
ping %>%
select_if(is.double)
#discard drops the columns that satisfy a condition
ping %>%
discard(is.integer)
#some checks whether any of the elements satisfy a condition
ping %>%
flatten() %>%
some(is.double)
#every checks whether all of the elements satisfy a condition
ping %>%
flatten() %>%
every(is.double)
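#Added aside (a sketch, not part of the original lesson): detect returns the
#first element that satisfies a predicate, and detect_index its position
miLista2 <- list(1, "a", 2.5)
detect(miLista2, is.character) # "a"
detect_index(miLista2, is.character) # 2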
#Reducing lists based on shared elements
#Create a unique identifier
ping2 <- ping %>%
rowid_to_column("id") %>%
unite("id", species, id)
#Create a list with the unique identifier and some extra columns
df <- list(isla = tibble(id = ping2$id, isla = ping$island),
sexo = tibble(id = ping2$id, sexo = ping$sex),
peso = tibble(id = ping2$id, peso = ping$body_mass_g))
df
#Build a single data frame where the data for each unique identifier are merged
df <- df %>%
reduce(full_join)
df
#If we want to find the body masses that occur in both females and males
pesos_comunes <- list(peso_hembras = ping$body_mass_g[ping$sex == "female"],
peso_machos = ping$body_mass_g[ping$sex == "male"])
#Find the common masses
pesos_comunes %>%
reduce(intersect)
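#Added aside (a sketch, not part of the original lesson): accumulate() works
#like reduce() but keeps every intermediate result, e.g. a running sum
accumulate(1:5, `+`) # 1 3 6 10 15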
e0dc3a4f92a96d36474433abacf698d2f6e0899d | 2bec5a52ce1fb3266e72f8fbeb5226b025584a16 | /segmenTier/R/RcppExports.R | d0c965f0385181c14e4c64f9da3b220eacfcb075 | [] | no_license | akhikolla/InformationHouse | 4e45b11df18dee47519e917fcf0a869a77661fce | c0daab1e3f2827fd08aa5c31127fadae3f001948 | refs/heads/master | 2023-02-12T19:00:20.752555 | 2020-12-31T20:59:23 | 2020-12-31T20:59:23 | 325,589,503 | 9 | 2 | null | null | null | null | UTF-8 | R | false | false | 4,449 | r | RcppExports.R | # Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
#' Pearson product-moment correlation coefficient
#'
#' Incremental calculation of the Pearson correlation coefficient between
#' two vectors for calculation within Rcpp functions
#' \code{\link{clusterCor_c}}.
#' @param x numeric vector
#' @param y numeric vector
#' @details Simply calculates Pearson's product-moment correlation
#' between vectors \code{x} and \code{y}.
myPearson <- function(x, y) {
.Call('_segmenTier_myPearson', PACKAGE = 'segmenTier', x, y)
}
#' Calculates position-cluster correlations for scoring function "icor".
#'
#' Calculates Pearson's product-moment correlation coefficients
#' between rows in \code{data} and \code{clusters}, and is used to calculate
#' the position-cluster similarity matrix for the scoring function "icor".
#' This is implemented in Rcpp for calculation speed, using
#' \code{\link{myPearson}} to calculate correlations.
#' @param data original data matrix
#' @param clusters cluster centers
#' @return Returns a position-cluster correlation matrix as used in
#' scoring function "icor".
#'@export
clusterCor_c <- function(data, clusters) {
.Call('_segmenTier_clusterCor_c', PACKAGE = 'segmenTier', data, clusters)
}
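# Illustrative usage sketch (added; not part of the generated bindings, and
# kept as comments since this file is auto-generated). Rows of 'dat' are
# positions and rows of 'cents' are cluster centers; the object names are
# made up for this example.
# dat <- matrix(rnorm(40), nrow = 8) # 8 positions, 5 features
# cents <- matrix(rnorm(10), nrow = 2) # 2 cluster centers, 5 features
# clusterCor_c(dat, cents) # 8 x 2 position-cluster correlation matrix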
icor <- function(k, j, c, seq, M, csim) {
.Call('_segmenTier_icor', PACKAGE = 'segmenTier', k, j, c, seq, M, csim)
}
ccor <- function(k, j, c, seq, M, csim) {
.Call('_segmenTier_ccor', PACKAGE = 'segmenTier', k, j, c, seq, M, csim)
}
#' segmenTier's core dynamic programming routine in Rcpp
#'
#' @details This is \code{\link{segmenTier}}'s core dynamic programming
#' routine. It constructs the total score matrix S(i,c), based on
#' the passed scoring function ("icor" or "ccor"), and length penalty
#' \code{M}. "Nuisance" cluster "0" can have a smaller penalty \code{Mn}
#' to allow for shorter distances between "real" segments.
#'
#' Scoring function "icor" calculates the sum of similarities of
#' data at positions k:i to cluster centers c over all k and i.
#' The similarities are calculated e.g., as a (Pearson) correlation between
#' the data at individual positions and the tested cluster c center.
#'
#' Scoring function "ccor" calculates the sum of similarities
#' between the clusters at positions k:i to cluster c over all k and i.
#'
#' Scoring function "ccls" is a special case of "ccor" and is NOT handled
#' here, but is reflected in the cluster similarity matrix \code{csim}. It
#' is handled and automatically constructed in the R wrapper
#' \code{\link{segmentClusters}}, and merely counts the
#' number of clusters in sequence k:i, over all k and i, that are identical
#' to the tested cluster \code{c}, and subtracts
#' a penalty for the count of non-identical clusters.
#' @param seq the cluster sequence (where clusters at positions k:i are
#' considered). Note, that unlike the R wrapper, clustering numbers
#' here are 0-based, where 0 is the nuisance cluster.
#' @param C the list of clusters, including nuisance cluster '0', see
#' \code{seq}
#' @param score the scoring function to be used, one of "ccor" or "icor",
#' an apt similarity matrix must be supplied via option \code{csim}
#' @param M minimal sequence length; Note, that this is not a strict
#' cut-off but defined as an accumulating penalty that must be
#' "overcome" by good score
#' @param Mn minimal sequence length for nuisance cluster, Mn<M will allow
#' shorter distances between segments
#' @param csim a matrix, providing either the cluster-cluster (scoring
#' function "ccor") or the position-cluster similarity function
#' (scoring function "icor")
#' @param multi if multiple \code{k} are found which return the same maximal
#' score, should the "max" (shorter segment) or "min" (longer segment) be used?
#' This has little effect on real-life large data sets, since the situation
#' will rarely occur. Default is "max".
#' @return Returns the total score matrix \code{S(i,c)} and the matrix
#' \code{K(i,c)} which stores the position \code{k} which delivered
#' the maximal score at position \code{i}. This is used in the back-tracing
#' phase.
#' @references Machne, Murray & Stadler (2017)
#' <doi:10.1038/s41598-017-12401-8>
#' @export
calculateScore <- function(seq, C, score, csim, M, Mn, multi = "max") {
.Call('_segmenTier_calculateScore', PACKAGE = 'segmenTier', seq, C, score, csim, M, Mn, multi)
}
da63a15117ce8febe27bc8dffc7fb7aff6dd1672 | 49ff0bc7c07087584b907d08e68d398e7293d910 | /mbg/mbg_core_code/mbg_central/LBDCore/R/plot_other_params.R | 77ae40d5ebc78e296f14c6c2f84a2eef677389cf | [] | no_license | The-Oxford-GBD-group/typhi_paratyphi_modelling_code | db7963836c9ce9cec3ca8da3a4645c4203bf1352 | 4219ee6b1fb122c9706078e03dd1831f24bdaa04 | refs/heads/master | 2023-07-30T07:05:28.802523 | 2021-09-27T12:11:17 | 2021-09-27T12:11:17 | 297,317,048 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,106 | r | plot_other_params.R | #' @title Plot other parameters
#' @description Create a plot of other parameters from INLA fit
#'
#' @param model_results table from [whatever]_model_results_table.csv
#' @param other_params names of other parameters (not stackers)
#' @return ggplot object plotting parametersby region
#' @examples
#' \dontrun{
#' # Plot stacker betas
#' # Plot other parameters
#' gg_other_params <- plot_other_params(model_results = model_results,
#' other_params = other_params)
#' }
#' @export
plot_other_params <- function(model_results, other_params) {
other_param_table <- subset(model_results, parameter %in% other_params)
dodge <- position_dodge(width = 0.4)
ggplot(data = other_param_table, aes(x = region, y = q0.5, color = region)) +
geom_point(position = dodge) +
geom_pointrange(aes(ymin = q0.025, ymax = q0.975), position = dodge) +
labs(x = "Region", y = "Value", color = "Region") +
facet_wrap(~parameter, scales = "free_y") +
scale_color_manual(values = get_color_scheme("carto_discrete")) +
theme_classic() +
theme(axis.text.x = element_text(angle = 45, hjust = 1))
}
|
c37e2fc374cc1aa777b24bc72e70eee1c9c9fe39 | 29585dff702209dd446c0ab52ceea046c58e384e | /Stem/R/d2_Sigmastar_logb.exp.R | 052278dc8f0446c25324225cea08bfcea13a632d | [] | no_license | ingted/R-Examples | 825440ce468ce608c4d73e2af4c0a0213b81c0fe | d0917dbaf698cb8bc0789db0c3ab07453016eab9 | refs/heads/master | 2020-04-14T12:29:22.336088 | 2016-07-21T14:01:14 | 2016-07-21T14:01:14 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 93 | r | d2_Sigmastar_logb.exp.R | `d2_Sigmastar_logb.exp` <-
function(logb,d) {
d2 = diag(exp(logb),d)
return(d2)
}
|
1530b6b68d013b2dad97c8a6d178e3255fb42714 | c8562cbb51164d7c1d3c50ed3b8455fb60c7f90a | /R/data.R | f7966432fa637eb9d69cc654e39e128d6379403d | [
"Apache-2.0"
] | permissive | maciejsmolka/solvergater.solvers | 5de6f7e96c2d3a48bba414968a9c170735bf80cd | a95a58235faaed7a2973856a0792a3b6bffad0f3 | refs/heads/master | 2023-02-12T23:08:25.241772 | 2021-01-15T18:30:56 | 2021-01-15T18:30:56 | 307,970,112 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 757 | r | data.R | #' Data for MT problem
#'
#' @format A list with the following components:
#' \describe{
#' \item{nparams}{number of parameters;}
#' \item{nqoi}{number of quantities of interest;}
#' \item{max_iter}{maximum (sensible) number of iterations;}
#' \item{precision_levels}{available precision levels sorted from the most
#' inaccurate; each next level increases significantly the time of
#' computations;}
#' \item{exact_qoi}{complex, QOI at 'exact' solution, i.e.
#' `c(1, 2, 10, 3)`, computed with precision `1.2`.}
#' }
"mt_data"
#' Data for L-shape heat transfer problem
#'
#' @format A list with the following components:
#' \describe{
#' \item{exact_qoi}{numeric, QOI at 'exact' solution, i.e.
#' `c(0.1, 1.5, 2.9)`.}
#' }
"heat_data"
|
b9d572ea8130bd32493ae0b9727862b1502e0e4e | 9e8936a8cc7beae524251c8660fa755609de9ce5 | /R/data.R | 41b778416b92fc1662af07c1350bbdd3589808b0 | [
"MIT"
] | permissive | tidymodels/parsnip | bfca10e2b58485e5b21db64517dadd4d3c924648 | 907d2164a093f10cbbc1921e4b73264ca4053f6b | refs/heads/main | 2023-09-05T18:33:59.301116 | 2023-08-17T23:45:42 | 2023-08-17T23:45:42 | 113,789,613 | 451 | 93 | NOASSERTION | 2023-08-17T23:43:21 | 2017-12-10T22:48:42 | R | UTF-8 | R | false | false | 355 | r | data.R | #' parsnip model specification database
#'
#' This is used in the RStudio add-in and captures information about mode
#' specifications in various R packages.
#'
#' @name model_db
#' @aliases model_db
#' @docType data
#' @return \item{model_db}{a data frame}
#' @keywords datasets internal
#' @examplesIf !parsnip:::is_cran_check()
#' data(model_db)
NULL
|
75ab7ad56e6a5c8dc2edb8263572b358320a8ad8 | 53de60b485e5642bd1cc7634eaa8b7bf04caeb27 | /code/network_visualization/global_plots/g_create_ggplot.R | e34ebb7fe73e742618991c23bf8f7d9147a27018 | [] | no_license | wallingTACC/sdfb_network | 4061ae0fafef695d189fe13b97b14c66863b8ecf | 38649a628c1f834767b1c14bdb4402863f5f325a | refs/heads/master | 2020-04-05T23:11:51.984616 | 2017-01-30T17:52:27 | 2017-01-30T17:52:27 | 60,351,123 | 0 | 1 | null | 2016-06-03T13:48:46 | 2016-06-03T13:48:45 | R | UTF-8 | R | false | false | 7,565 | r | g_create_ggplot.R | #@S Code that extracts data from given coordinate file (graphvis text output)
#@S and generates ggplot object.
# TODO: Eventually split this into multiple parts?
# must load in vector of names from some source, that match IDs.
# Old settings?
# edge.weight.range <- c(0.5,1.5) ##Controls amount of edge ink
# name.size.range <- c(15,30) ##Controls amount of text ink
# name.size.range <- c(10,20) ##Controls amount of text ink - FOR TOPIC MODEL
G_PLOTCORD = NULL
G_SETTINGS = NULL
g_create_ggplot <- function(node_names, node_sizes, graphviz_outfile,
node_size_range = c(5,8), color_function) {
#@F ----------------------------------------
#@F Function 'g_create_ggplot'
#@F Args (Input):
#@F node_names = vector of node names
#@F node_sizes = vector of node sizes
#@F graphviz_outfile = output from graphviz algorithm (node placements)
#@F node_size_range = size to rescale node sizes to
#@F color_function = function (of node_names) that provides sizes of each node.
  #@F Purpose: Generates the needed global variables in order to plot a specific graph
#@F visualization, based on given node names/sizes, and graphviz node locations.
#@F Output: none
#@F Global Variables Needed: G_PLOTCORD, G_SETTINGS. These are assigned to.
#@F ----------------------------------------
# TODO: Add actual ggplot code inside here?
# TODO: Play with node_size_range
# this ignores edge drawing for now
NODES = length(node_names)
graphdata = readLines(graphviz_outfile)
graphsize = graphdata[1]
nodelocs = strsplit(graphdata[2:(NODES + 1)], split = " ")
edgelines = strsplit(graphdata[(NODES + 2):length(graphdata)], split = " ")
plotcord = as.data.frame(t(
sapply(nodelocs, function(x) {as.numeric(x[3:4])})
))
edglist = as.data.frame(t(
sapply(edgelines, function(x) {as.numeric(x[2:3])})
))
colnames(plotcord) = c("x", "y") # NOTE NOTE NOTE NOTE Changed from X1/X2 to x,y
plotcord$size <- node_sizes
plotcord$names <- node_names
# Rescale node sizes
# plotcord$size <- (plotcord$size/max(plotcord$size)) * diff(range(node_size_range)) +
# node_size_range[1]
plotcord = cbind(plotcord, color_function(node_names))
G_PLOTCORD <<- plotcord
G_SETTINGS <<- list(node_size_range = node_size_range)
# Ignore edges?
# edges1 = rep(-1, times = dim(edges)[1])
# e1 = rep("grey", times = length(edges1))
return(NULL)
}
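# Added sketch (not from the original source): one way to consume the globals
# that g_create_ggplot() assigns. Column names follow 'plotcord' above, and
# scale_colour_identity() uses the precomputed color strings; kept commented
# out so sourcing this file has no side effects.
# g_create_ggplot(node_names, node_sizes, "graphviz_out.txt",
#                 color_function = generate_birthbased_colors)
# library(ggplot2)
# ggplot(G_PLOTCORD, aes(x = x, y = y, size = size, colour = color)) +
#   geom_point() +
#   geom_text(aes(label = names), size = 3) +
#   scale_colour_identity() +
#   theme_void()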
generate_birthbased_colors = function(node_names) {
#@F ----------------------------------------
#@F Function 'generate_birthbased_colors'
#@F Args (Input): node_names: identifies which names to look for dates of
#@F Purpose: Provides information as to what colors to use
#@F Output: data frame, columns: color, year of birth
#@F ----------------------------------------
load("network_visualization/est.birthdates.Rdata")
ind = match(node_names, dm.names)
breaks = 150:170 * 10
# give large #s to small dates
cols = sapply(dmname.bdate[ind], function(x) {return(sum(x < breaks, na.rm = 1))})
#cols_touse = topo.colors(n = 7)
cols_touse = rainbow(n = (length(breaks) + 1), start = 3/6, e = 4/6)
res = as.character(cols)
for(j in 0:length(breaks)) {
res[cols == j] = cols_touse[j+1]
}
return(data.frame(color = res, year = dmname.bdate[ind], stringsAsFactors = FALSE))
}
generate_graphdist_colors = function() {
#@F ----------------------------------------
#@F Function 'generate_graphdist_colors'
#@F Args (Input):
#@F Purpose:
#@F Example:
#@F Output:
#@F Notes:
#@F Global Variables Needed:
#@F ----------------------------------------
# TODO: Write this based on existing code
# #################################
# # separate for many nodes: coloration, etc.
#
#
# ###Create plotting matrix for edges
# edglist <- as.matrix.network.edgelist(net)
# edges <- data.frame(plotcord[edglist[,1],c(1,2)], plotcord[edglist[,2],c(1,2)])
# colnames(edges) <- c("X1","Y1","X2","Y2")
# edges$weight <- rep(1, times = nrow(edges))
#
#
# ## Find color setting 1, Charles I dist.
# colors1 = rep(-1, times = N)
# edges1 = rep(-1, times = dim(edges)[1])
# start1 = 487 #Charles I
#
# used.nodes = NULL
# nodes.left = 1:N
# dist = 0
# set = start1
# nodes.left = setdiff(nodes.left, set)
# while(length(set) > 0) {
# colors1[set] = dist
# new.edges = NULL
# for(j in 1:length(set)) {
# new.edges = c(new.edges, which(edglist[,1] == set[j]),
# which(edglist[,2] == set[j]))
# }
# new.edges = intersect(new.edges, which(edges1 == -1))
# edges1[new.edges] = dist
#
# new.nodes = intersect(nodes.left, unique(as.numeric(edglist[new.edges,1:2])))
# used.nodes = unique(c(used.nodes, set))
# set = new.nodes
# nodes.left = setdiff(nodes.left, new.nodes)
# dist = dist + 1
# }
# colors1[colors1 == -1] = 5
# edges1 = edges1 + 1
#
# c1 = rep("", times = length(colors1))
# e1 = rep("", times = length(edges1))
#
# #use heat colors
# cols.touse = topo.colors(n = 6)
#
# for(j in 0:5) {
# c1[colors1 == j] = cols.touse[j+1]
# e1[edges1 == j] = cols.touse[j+1]
# }
#
#
#
#
# ## Find color setting 1, James Harington dist.
# colors1 = rep(-1, times = N)
# edges1 = rep(-1, times = dim(edges)[1])
# start1 = 2054 #Charles I
#
# used.nodes = NULL
# nodes.left = 1:N
# dist = 0
# set = start1
# nodes.left = setdiff(nodes.left, set)
# while(length(set) > 0) {
# colors1[set] = dist
# new.edges = NULL
# for(j in 1:length(set)) {
# new.edges = c(new.edges, which(edglist[,1] == set[j]),
# which(edglist[,2] == set[j]))
# }
# new.edges = intersect(new.edges, which(edges1 == -1))
# edges1[new.edges] = dist
#
# new.nodes = intersect(nodes.left, unique(as.numeric(edglist[new.edges,1:2])))
# used.nodes = unique(c(used.nodes, set))
# set = new.nodes
# nodes.left = setdiff(nodes.left, new.nodes)
# dist = dist + 1
# }
# colors1[colors1 == -1] = 5
# edges1 = edges1 + 1
#
# c1 = rep("", times = length(colors1))
# e1 = rep("", times = length(edges1))
#
# #use heat colors
# cols.touse = topo.colors(n = 6)
# cols.touse = rainbow(n = 6, start = 3/6, e = 4/6)
#
# for(j in 0:5) {
# c1[colors1 == j] = cols.touse[j+1]
# e1[edges1 == j] = cols.touse[j+1]
# }
#
#
#
}
generate_topicmodel_colors = function() {
#@F ----------------------------------------
#@F Function ''
#@F Args (Input):
#@F Purpose:
#@F Example:
#@F Output:
#@F Notes:
#@F Global Variables Needed:
#@F ----------------------------------------
# TODO: Write this based on existing code [ FIND MORE CODE RELATED TO THIS...
# TODO: -- in some file...]
#
#
# ## Using topicmodel topics
# load("topicmodel.fit.Rdata")
#
# cols.touse = c('red', 'blue', 'green', 'black', 'brown')
# cols.touse = c('red', 'beige', 'beige', 'beige', 'beige')
# cols.touse = c('beige', 'blue', 'beige', 'beige', 'beige')
# cols.touse = c('beige', 'beige', 'green', 'beige', 'beige')
# cols.touse = c('beige', 'beige', 'beige', 'black', 'beige')
# cols.touse = c('beige', 'beige', 'beige', 'beige', 'brown')
#
# #new
#
# load("tms.Rdata")
#
# k = 10
# cols.touse = rep('beige', times = 10)
# cols.touse[k] = 'blue'
# c1 = cols.touse[topics(topic.models)] #need to load topicmodel
#
#
}
|
ebca3274b66da3f61698c5bbb1bdfc3f69d49c56 | b4df3231200267fcaac105d9a554ac489d2a3984 | /man/datecols.Rd | b42c21bf1973d72df5b7c8ad704ced51ebede1a7 | [] | no_license | byadu/libcubmeta | 9e9736347db7f95504b5696b595fdafd93bac19c | 166040c8bf8b7af586f3d35d43af689b64c17b66 | refs/heads/master | 2022-11-19T18:12:37.639065 | 2020-07-20T11:22:02 | 2020-07-20T11:22:02 | 279,065,846 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 394 | rd | datecols.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/db.R
\name{datecols}
\alias{datecols}
\title{datecols}
\usage{
datecols(dbconn, tab)
}
\arguments{
\item{dbconn}{is the database handle}
\item{tab}{is the table for which columns are required}
}
\description{
Get the date columns in a table in a 'MySQL' database
}
\details{
Returns the date columns of the table
}
c5ec3561d40ca154e64445534a392fe68bd660dc | 8c9a6e668b6f6ce94167a21554c4175cb57465fd | /WAV_timeval_7292018.R | 680646edf30cec22fd176a4a61f4f9ef9147c64a | [] | no_license | eafinchum/wav | 9576016ee7f30aa2e5efb3f68a1e0873b93eaa10 | 8ccc4ace1c0bd31ba33fccfb55d662ead91b5783 | refs/heads/master | 2020-04-22T05:51:24.368397 | 2019-02-11T17:20:11 | 2019-02-11T17:20:11 | 170,169,786 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,986 | r | WAV_timeval_7292018.R | library(readstata13)
library(ggplot2)
library(dplyr)
library(knitr)
library(tidyr)
xd <- read.dta13('WAV_HH_PANEL_MOD_8.02.17.dta')
xd1<- xd %>%
filter(in_both_waves=="panel") %>%
mutate(treatcondition = ifelse(survey_treatment==0 & in_kbk=="Non-KbK", "Control",
ifelse(survey_treatment==1 & in_kbk=="Non-KbK", "Treat CV, non-KbK", "Treat CV, KbK")))
table(xd1$treatcondition)
xd_timeval <- xd1 %>%
group_by(base_or_end, treatcondition) %>%
summarize(mean_time_val_house = mean(time_val_all_house, na.rm=T),
sd_time_val_house = sd(time_val_all_house, na.rm=T),
mean_time_val_farm = mean(time_val_all_farm, na.rm=T),
sd_time_val_farm = sd(time_val_all_farm, na.rm=T),
mean_time_val_biz = mean(time_val2_end_biz, na.rm=T),
sd_time_val_biz = sd(time_val2_end_biz, na.rm=T))
write.csv(xd_timeval, "timeval_7292018.csv")
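#Added sketch (not part of the original analysis): with a recent tidyr the
#wide summary above can be reshaped long for plotting; kept commented out
# xd_timeval_long <- xd_timeval %>%
#   pivot_longer(-c(base_or_end, treatcondition),
#                names_to = c(".value", "domain"),
#                names_pattern = "(mean|sd)_time_val_(.*)")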
#TOT is the group indicator for treatment on the treated
xd1 <- xd1 %>%
mutate(TOT = ifelse(treatcondition=="Control", 0,
ifelse(treatcondition=="Treat CV, KbK", 1, NA)))
table(xd1$treatcondition, xd1$TOT)
#ITT is the group indicator for intent to treat
xd1 <- xd1 %>%
mutate(ITT = ifelse(treatcondition=="Control", 0, 1))
table(xd1$treatcondition, xd1$ITT)
xd1 <- xd1 %>%
  mutate(endline = as.factor(ifelse(base_or_end=="endline", 1, 0))) #indicator: 1 = endline wave
xd_timeval_endline <- xd1 %>%
filter(base_or_end == "endline")
xd_timeval_baseline <- xd1 %>%
filter(base_or_end == "baseline")
##Farm plot area descriptives
xd1 <- xd1 %>%
mutate(totalplotarea = ifelse(base_or_end=="baseline", farm_plot_area_total_2014lrs, farm_plot_area_total_2016lrs))
summary(xd1$totalplotarea)
xd_cultiv_TOT <- xd1 %>%
group_by(TOT, base_or_end) %>%
summarize(mean_farmplot = mean(totalplotarea, na.rm=T),
sd_farmplot = sd(totalplotarea, na.rm=T))
ggplot()
xd_cultiv_ITT <- xd1 %>%
group_by(ITT, base_or_end) %>%
summarize(mean_farmplot = mean(totalplotarea, na.rm=T),
sd_farmplot = sd(totalplotarea, na.rm=T))
write.csv(xd_cultiv_TOT, "xd_cultiv_TOT.csv")
write.csv(xd_cultiv_ITT, "xd_cultiv_ITT.csv")
xd_base <- xd_timeval_baseline
xd_base <- xd_base %>%
mutate(totalplotarea = ifelse(base_or_end=="baseline", farm_plot_area_total_2014lrs, farm_plot_area_total_2016lrs))
xd_end <- xd_timeval_endline
xd_end <- xd_end %>%
mutate(totalplotarea = ifelse(base_or_end=="baseline", farm_plot_area_total_2014lrs, farm_plot_area_total_2016lrs))
#looking at resp risk preference
summary(xd1$resp_pref_risk) #answers question about how much they would bet on something
xd_risk <- xd1 %>%
group_by(treatcondition, panel_wave) %>%
summarize(N=n(),
mean_risk = mean(resp_pref_risk, na.rm=T),
sd_risk = sd(resp_pref_risk, na.rm=T))
write.csv(xd_risk, "riskpreference_812018.csv")
p1 <- ggplot(xd_risk, aes(x=panel_wave, y=mean_risk, color=treatcondition))+
geom_line()+
scale_color_discrete()
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.