#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#
# This software was authored by Zhian N. Kamvar and Javier F. Tabima, graduate
# students at Oregon State University; Jonah C. Brooks, undergraduate student at
# Oregon State University; and Dr. Nik Grünwald, an employee of USDA-ARS.
#
# Permission to use, copy, modify, and distribute this software and its
# documentation for educational, research and non-profit purposes, without fee,
# and without a written agreement is hereby granted, provided that the statement
# above is incorporated into the material, giving appropriate attribution to the
# authors.
#
# Permission to incorporate this software into commercial products may be
# obtained by contacting USDA ARS and OREGON STATE UNIVERSITY Office for
# Commercialization and Corporate Development.
#
# The software program and documentation are supplied "as is", without any
# accompanying services from the USDA or the University. USDA ARS or the
# University do not warrant that the operation of the program will be
# uninterrupted or error-free. The end-user understands that the program was
# developed for research purposes and is advised not to rely exclusively on the
# program for any reason.
#
# IN NO EVENT SHALL USDA ARS OR OREGON STATE UNIVERSITY BE LIABLE TO ANY PARTY
# FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
# LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION,
# EVEN IF THE OREGON STATE UNIVERSITY HAS BEEN ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE. USDA ARS OR OREGON STATE UNIVERSITY SPECIFICALLY DISCLAIMS ANY
# WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AND ANY STATUTORY
# WARRANTY OF NON-INFRINGEMENT. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS"
# BASIS, AND USDA ARS AND OREGON STATE UNIVERSITY HAVE NO OBLIGATIONS TO PROVIDE
# MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#==============================================================================#
#' Produce a basic summary table for population genetic analyses.
#'
#' @description
#'
#' For the \pkg{poppr} package description, please see
#' \code{\link[=poppr-package]{package?poppr}}
#'
#' This function allows the user to quickly view indices of heterozygosity,
#' evenness, and linkage to aid in the decision of a path to further analyze
#' a specified dataset. It natively takes \code{\linkS4class{genind}} and
#' \code{\linkS4class{genclone}} objects, but can convert any raw data formats
#' that adegenet can take (fstat, structure, genetix, and genpop) as well as
#' genalex files exported into a csv format (see \code{\link{read.genalex}} for
#' details).
#'
#'
#' @param dat a \code{\linkS4class{genind}} object OR a
#' \code{\linkS4class{genclone}} object OR any fstat, structure, genetix,
#' genpop, or genalex formatted file.
#'
#' @param total When \code{TRUE} (default), indices will be calculated for the
#' pooled populations.
#'
#' @param sublist a list of character strings or integers to indicate specific
#' population names (accessed via \code{popNames()}).
#' Defaults to "ALL".
#'
#' @param exclude a \code{vector} of population names or indices that the user
#'   wishes to discard. Defaults to \code{NULL}.
#'
#' @param blacklist DEPRECATED, use exclude.
#'
#' @param sample an integer indicating the number of permutations desired to
#' obtain p-values. Sampling will shuffle genotypes at each locus to simulate
#' a panmictic population using the observed genotypes. Calculating the
#' p-value includes the observed statistics, so set your sample number to one
#'   off for a round p-value (e.g., \code{sample = 999} will give you p = 0.001
#' and \code{sample = 1000} will give you p = 0.000999001).
#'
#' @param method an integer from 1 to 4 indicating the method of sampling
#'   desired. See \code{\link{shufflepop}} for details.
#'
#' @param missing how should missing data be treated? \code{"zero"} and
#' \code{"mean"} will set the missing values to those documented in
#' \code{\link{tab}}. \code{"loci"} and \code{"geno"} will remove any loci or
#' genotypes with missing data, respectively (see \code{\link{missingno}} for
#'   more information).
#'
#' @param cutoff \code{numeric} a number from 0 to 1 indicating the percent
#' missing data allowed for analysis. This is to be used in conjunction with
#'   the flag \code{missing} (see \code{\link{missingno}} for details).
#'
#' @param quiet \code{FALSE} (default) will display a progress bar for each
#' population analyzed.
#'
#' @param clonecorrect default \code{FALSE}. Must be used with the \code{strata}
#'   parameter, or the user will potentially get undesired results. See
#'   \code{\link{clonecorrect}} for details.
#'
#' @param strata a \code{formula} indicating the hierarchical levels to be used.
#' The hierarchies should be present in the \code{strata} slot. See
#' \code{\link{strata}} for details.
#'
#' @param keep an \code{integer}. This indicates which strata you wish to keep
#'   after clone correcting your data sets. To combine strata, set keep from 1
#'   to the number of stratifications set in strata. See
#'   \code{\link{clonecorrect}} for details.
#'
#' @param plot \code{logical} if \code{TRUE} (default) and \code{sample > 0},
#' a histogram will be produced for each population.
#'
#' @param hist \code{logical} Deprecated. Use plot.
#'
#' @param index \code{character} Either "Ia" or "rbarD". If \code{plot = TRUE},
#'   this will determine the index used for the visualization.
#'
#' @param minsamp an \code{integer} indicating the minimum number of individuals
#' to resample for rarefaction analysis. See \code{\link[vegan]{rarefy}} for
#' details.
#'
#' @param legend \code{logical}. When this is set to \code{TRUE}, a legend
#' describing the resulting table columns will be printed. Defaults to
#'   \code{FALSE}.
#'
#' @param ... arguments to be passed on to \code{\link{diversity_stats}}
#'
#' @return A data frame with populations in rows and the following columns:
#' \item{Pop}{A vector indicating the population factor}
#' \item{N}{An integer vector indicating the number of individuals/isolates in
#' the specified population.}
#' \item{MLG}{An integer vector indicating the number of multilocus genotypes
#' found in the specified population, (see: \code{\link{mlg}})}
#' \item{eMLG}{The expected number of MLG at the lowest common sample size
#' (set by the parameter \code{minsamp}).}
#' \item{SE}{The standard error for the rarefaction analysis}
#'   \item{H}{Shannon-Wiener Diversity index}
#' \item{G}{Stoddard and Taylor's Index}
#' \item{lambda}{Simpson's index}
#' \item{E.5}{Evenness}
#' \item{Hexp}{Nei's gene diversity (expected heterozygosity)}
#' \item{Ia}{A numeric vector giving the value of the Index of Association for
#' each population factor, (see \code{\link{ia}}).}
#' \item{p.Ia}{A numeric vector indicating the p-value for Ia from the number
#' of reshufflings indicated in \code{sample}. Lowest value is 1/n where n is
#' the number of observed values.}
#' \item{rbarD}{A numeric vector giving the value of the Standardized Index of
#' Association for each population factor, (see \code{\link{ia}}).}
#' \item{p.rD}{A numeric vector indicating the p-value for rbarD from the
#' number of reshuffles indicated in \code{sample}. Lowest value is 1/n where
#' n is the number of observed values.}
#' \item{File}{A vector indicating the name of the original data file.}
#'
#' @details This table is intended to be a first look into the dynamics of
#'   multilocus genotype diversity. Many of the statistics (except for the
#'   index of association) are simply based on counts of multilocus genotypes
#' and do not take into account the actual allelic states.
#' \strong{Descriptions of the statistics can be found in the Algorithms and
#' Equations vignette}: \code{vignette("algo", package = "poppr")}.
#' \subsection{sampling}{The sampling procedure is explicitly for testing the
#' index of association. None of the other diversity statistics (H, G, lambda,
#' E.5) are tested with this sampling due to the differing data types. To
#' obtain confidence intervals for these statistics, please see
#' \code{\link{diversity_ci}}.}
#' \subsection{rarefaction}{Rarefaction analysis is performed on the number of
#' multilocus genotypes because it is relatively easy to estimate (Grünwald et
#' al., 2003). To obtain rarefied estimates of diversity, it is possible to
#'   use \code{\link{diversity_ci}} with the argument \code{rarefy = TRUE}.}
#' \subsection{graphic}{This function outputs a \pkg{ggplot2} graphic of
#' histograms. These can be manipulated to be visualized in another manner by
#' retrieving the plot with the \code{\link{last_plot}} command from
#' \pkg{ggplot2}. A useful manipulation would be to arrange the graphs into a
#' single column so that the values of the statistic line up: \cr \code{p <-
#' last_plot(); p + facet_wrap(~population, ncol = 1, scales = "free_y")}\cr
#' The name for the groupings is "population" and the name for the x axis is
#' "value".}
#'
#' @note The calculation of \code{Hexp} has changed from \pkg{poppr} 1.x. It was
#' previously calculated based on the diversity of multilocus genotypes,
#' resulting in a value of 1 for sexual populations. This was obviously not
#' Nei's 1978 expected heterozygosity. We have thus changed the statistic to
#'   be the true value of Hexp by calculating \eqn{\left(\frac{n}{n-1}\right)
#'   \left(1 - \sum_{i = 1}^k{p^{2}_{i}}\right)}{(n/(n - 1))*(1 - sum(p^2))},
#'   where \eqn{p_i}{p_i} is the frequency of the ith of k alleles and n is the
#'   number of observed alleles at each locus (Nei, 1978), and then returning
#'   the average over loci. Caution should be
#' exercised in interpreting the results of Hexp with polyploid organisms with
#' ambiguous ploidy. The lack of allelic dosage information will cause rare
#' alleles to be over-represented and artificially inflate the index. This is
#' especially true with small sample sizes.
#'
#' @seealso \code{\link{clonecorrect}},
#' \code{\link{poppr.all}},
#' \code{\link{ia}},
#' \code{\link{missingno}},
#' \code{\link{mlg}},
#' \code{\link{diversity_stats}},
#' \code{\link{diversity_ci}}
#'
#' @export
#' @author Zhian N. Kamvar
#' @references Paul-Michael Agapow and Austin Burt. Indices of multilocus
#' linkage disequilibrium. \emph{Molecular Ecology Notes}, 1(1-2):101-102,
#' 2001
#'
#' A.H.D. Brown, M.W. Feldman, and E. Nevo. Multilocus structure of natural
#' populations of \emph{Hordeum spontaneum}. \emph{Genetics}, 96(2):523-536,
#' 1980.
#'
#' Niklaus J. Grünwald, Stephen B. Goodwin, Michael G. Milgroom, and William
#' E. Fry. Analysis of genotypic diversity data for populations of
#' microorganisms. Phytopathology, 93(6):738-46, 2003
#'
#' Bernhard Haubold and Richard R. Hudson. Lian 3.0: detecting linkage
#' disequilibrium in multilocus data. Bioinformatics, 16(9):847-849, 2000.
#'
#' Kenneth L.Jr. Heck, Gerald van Belle, and Daniel Simberloff. Explicit
#' calculation of the rarefaction diversity measurement and the determination
#'   of sufficient sample size. Ecology, 56(6):1459-1461, 1975
#'
#' Masatoshi Nei. Estimation of average heterozygosity and genetic distance
#' from a small number of individuals. Genetics, 89(3):583-590, 1978.
#'
#' S H Hurlbert. The nonconcept of species diversity: a critique and
#' alternative parameters. Ecology, 52(4):577-586, 1971.
#'
#' J.A. Ludwig and J.F. Reynolds. Statistical Ecology. A Primer on Methods and
#' Computing. New York USA: John Wiley and Sons, 1988.
#'
#' Simpson, E. H. Measurement of diversity. Nature 163: 688, 1949
#' doi:10.1038/163688a0
#'
#' Good, I. J. (1953). On the Population Frequency of Species and the
#' Estimation of Population Parameters. \emph{Biometrika} 40(3/4): 237-264.
#'
#' Lande, R. (1996). Statistics and partitioning of species diversity, and
#' similarity among multiple communities. \emph{Oikos} 76: 5-13.
#'
#' Jari Oksanen, F. Guillaume Blanchet, Roeland Kindt, Pierre Legendre, Peter
#' R. Minchin, R. B. O'Hara, Gavin L. Simpson, Peter Solymos, M. Henry H.
#' Stevens, and Helene Wagner. vegan: Community Ecology Package, 2012. R
#' package version 2.0-5.
#'
#' E.C. Pielou. Ecological Diversity. Wiley, 1975.
#'
#' Claude Elwood Shannon. A mathematical theory of communication. Bell Systems
#' Technical Journal, 27:379-423,623-656, 1948
#'
#' J M Smith, N H Smith, M O'Rourke, and B G Spratt. How clonal are bacteria?
#' Proceedings of the National Academy of Sciences, 90(10):4384-4388, 1993.
#'
#' J.A. Stoddart and J.F. Taylor. Genotypic diversity: estimation and
#' prediction in samples. Genetics, 118(4):705-11, 1988.
#'
#'
#' @examples
#' data(nancycats)
#' poppr(nancycats)
#'
#' \dontrun{
#' # Sampling
#' poppr(nancycats, sample = 999, total = FALSE, plot = TRUE)
#'
#' # Customizing the plot
#' library("ggplot2")
#' p <- last_plot()
#' p + facet_wrap(~population, scales = "free_y", ncol = 1)
#'
#' # Turning off diversity statistics (see ?diversity_stats)
#' poppr(nancycats, total=FALSE, H = FALSE, G = FALSE, lambda = FALSE, E5 = FALSE)
#'
#' # The previous version of poppr contained a definition of Hexp, which was
#' # calculated as (N/(N - 1))*lambda. This turns out to be an unbiased
#' # Simpson's diversity metric (Lande, 1996; Good, 1953). The statistic was
#' # originally included in poppr because it was included in the program
#' # multilocus.
#'
#' data(Aeut)
#'
#' uSimp <- function(x){
#' lambda <- vegan::diversity(x, "simpson")
#' x <- drop(as.matrix(x))
#' if (length(dim(x)) > 1){
#' N <- rowSums(x)
#' } else {
#' N <- sum(x)
#' }
#' return((N/(N-1))*lambda)
#' }
#' poppr(Aeut, uSimp = uSimp)
#'
#'
#' # Demonstration with viral data
#' # Note: this is a larger data set that could take a couple of minutes to run
#' # on slower computers.
#' data(H3N2)
#' strata(H3N2) <- data.frame(other(H3N2)$x)
#' setPop(H3N2) <- ~country
#' poppr(H3N2, total = FALSE, sublist=c("Austria", "China", "USA"),
#' clonecorrect = TRUE, strata = ~country/year)
#' }
#==============================================================================#
#' @import adegenet ggplot2 vegan
poppr <- function(dat, total = TRUE, sublist = "ALL", exclude = NULL, blacklist = NULL,
sample = 0, method = 1, missing = "ignore", cutoff = 0.05,
quiet = FALSE, clonecorrect = FALSE, strata = 1, keep = 1,
plot = TRUE, hist = TRUE, index = "rbarD", minsamp = 10,
legend = FALSE, ...){
if (inherits(dat, c("genlight", "snpclone"))){
msg <- "The poppr function will not work with genlight or snpclone objects"
msg <- paste0(msg, "\nIf you want to calculate genotypic diversity, use ",
"the function diversity_stats().")
stop(msg)
}
quiet <- should_poppr_be_quiet(quiet)
x <- process_file(dat, missing = missing, cutoff = cutoff,
clonecorrect = clonecorrect, strata = strata,
keep = keep, quiet = TRUE)
# The namelist will contain information such as the filename and population
# names so that they can easily be ported around.
namelist <- NULL
hist <- plot
callpop <- match.call()
if (!is.null(blacklist)) {
warning(
option_deprecated(
callpop,
"blacklist",
"exclude",
"2.8.7.",
"Please use `exclude` in the future"
),
immediate. = TRUE
)
exclude <- blacklist
}
if (!is.na(grep("system.file", callpop)[1])){
popsplt <- unlist(strsplit(dat, "/"))
namelist$File <- popsplt[length(popsplt)]
} else if (is.genind(dat)){
namelist$File <- as.character(callpop[2])
} else {
namelist$File <- basename(x$X)
}
if (toupper(sublist[1]) == "TOTAL" & length(sublist) == 1){
dat <- x$GENIND
pop(dat) <- rep("Total", nInd(dat))
poplist <- NULL
poplist$Total <- dat
} else {
dat <- popsub(x$GENIND, sublist = sublist, exclude = exclude)
if (any(levels(pop(dat)) == "")) {
levels(pop(dat))[levels(pop(dat)) == ""] <- "?"
warning("missing population factor replaced with '?'")
}
pdrop <- if (dat$type == "PA") FALSE else TRUE
poplist <- if (is.null(pop(dat))) NULL else seppop(dat, drop = pdrop)
}
# Creating the genotype matrix for vegan's diversity analysis.
pop.mat <- mlg.matrix(dat)
if (total == TRUE & !is.null(poplist) & length(poplist) > 1){
poplist$Total <- dat
pop.mat <- rbind(pop.mat, colSums(pop.mat))
}
sublist <- names(poplist)
Iout <- NULL
total <- toupper(total)
missing <- toupper(missing)
# For presence/absences markers, a different algorithm is applied.
if (legend) poppr_message()
MLG.vec <- rowSums(ifelse(pop.mat > 0, 1, 0))
N.vec <- rowSums(pop.mat)
datploid <- unique(ploidy(dat))
Hexp_correction <- 1
if (length(datploid) > 1 || any(datploid > 2)){
datploid <- NULL
Hexp_correction <- N.vec/(N.vec - 1)
}
divmat <- diversity_stats(pop.mat, ...)
if (!is.matrix(divmat)){
divmat <- matrix(divmat, nrow = 1, dimnames = list(NULL, names(divmat)))
}
if (!is.null(poplist)){
# rarefaction giving the standard errors. This will use the minimum pop size
# above a user-defined threshold.
raremax <- ifelse(is.null(nrow(pop.mat)), sum(pop.mat),
ifelse(min(rowSums(pop.mat)) > minsamp,
min(rowSums(pop.mat)), minsamp))
Hexp <- vapply(lapply(poplist, pegas::as.loci), FUN = get_hexp_from_loci,
FUN.VALUE = numeric(1), ploidy = datploid, type = dat@type)
Hexp <- data.frame(Hexp = Hexp)
N.rare <- suppressWarnings(vegan::rarefy(pop.mat, raremax, se = TRUE))
IaList <- lapply(sublist, function(x){
namelist <- list(file = namelist$File, population = x)
.ia(poplist[[x]],
sample = sample,
method = method,
quiet = quiet,
missing = missing,
hist = FALSE,
namelist = namelist)
})
names(IaList) <- sublist
if (sample > 0){
classtest <- summary(IaList)
classless <- !classtest[, "Class"] %in% "ialist"
if (any(classless)){
no_class_pops <- paste(names(IaList[classless]), collapse = ", ")
msg <- paste0("values for ", no_class_pops,
" could not be plotted.\n")
IaList[classless] <- lapply(IaList[classless], function(x) list(index = x))
warning(msg, call. = FALSE)
}
if (plot){
try(print(poppr.plot(sample = IaList[!classless], file = namelist$File)))
}
IaList <- data.frame(t(vapply(IaList, "[[", numeric(4), "index")))
} else {
IaList <- t(as.data.frame(IaList))
}
Iout <- as.data.frame(
list(
Pop = sublist,
N = N.vec,
MLG = MLG.vec,
eMLG = N.rare[1, ],
SE = N.rare[2, ],
divmat,
Hexp,
IaList,
File = namelist$File
),
stringsAsFactors = FALSE
)
rownames(Iout) <- NULL
} else {
# rarefaction giving the standard errors. No population structure means that
# the sample is equal to the number of individuals.
N.rare <- rarefy(pop.mat, sum(pop.mat), se = TRUE)
Hexp <- get_hexp_from_loci(pegas::as.loci(dat),
ploidy = datploid, type = dat@type)
Hexp <- data.frame(Hexp = Hexp)
IaList <- .ia(dat,
sample = sample,
method = method,
quiet = quiet,
missing = missing,
namelist = list(File = namelist$File, population = "Total"),
hist = plot
)
IaList <- if (sample > 0) IaList$index else IaList
Iout <- as.data.frame(list(
Pop = "Total",
N = N.vec,
MLG = MLG.vec,
eMLG = N.rare[1, ],
SE = N.rare[2, ],
divmat,
Hexp,
as.data.frame(t(IaList)),
File = namelist$File
), stringsAsFactors = FALSE)
rownames(Iout) <- NULL
}
class(Iout) <- c("popprtable", "data.frame")
return(Iout)
}
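# The eMLG and SE values computed above come from vegan::rarefy(). As a rough,
# hedged illustration of what that call does, here is a minimal base-R sketch
# of Hurlbert's (1971) rarefaction formula applied to hypothetical MLG
# abundance counts; the emlg() helper and the counts are invented for this
# example and are not part of poppr.

```r
# Expected number of MLGs in a random subsample of n individuals, following
# Hurlbert (1971): E(S_n) = sum_i [1 - choose(N - x_i, n) / choose(N, n)],
# where x_i are the per-MLG abundances and N = sum(x_i).
emlg <- function(counts, n) {
  N <- sum(counts)
  sum(1 - choose(N - counts, n) / choose(N, n))
}

x <- c(5, 3, 1, 1)  # hypothetical abundances: four MLGs among 10 individuals
emlg(x, 10)         # at the full sample size this recovers the observed 4 MLGs
emlg(x, 5)          # expected MLG count when rarefied down to 5 individuals
```

# In poppr() the subsample size is the smallest population size at or above
# minsamp, so eMLG is comparable across populations of unequal size.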
#==============================================================================#
#' Process a list of files with poppr
#'
#' poppr.all is a wrapper function that will loop through a list of files from
#' the working directory, execute \code{\link{poppr}}, and concatenate the
#' output into one data frame.
#'
#' @param filelist a list of files in the current working directory
#'
#' @param ... arguments passed on to poppr
#'
#' @return see \code{\link{poppr}}
#'
#' @seealso \code{\link{poppr}}, \code{\link{getfile}}
#' @export
#' @author Zhian N. Kamvar
#' @examples
#' \dontrun{
#' # Obtain a list of fstat files from a directory.
#' x <- getfile(multi=TRUE, pattern="^.+?dat$")
#'
#' # run the analysis on each file.
#' poppr.all(file.path(x$path, x$files))
#' }
#==============================================================================#
poppr.all <- function(filelist, ...){
result <- NULL
for (a in seq_along(filelist)){
cat(" \\ \n")
input <- filelist[[a]]
if (is.genind(input)){
file <- names(filelist)[a]
if (is.null(file)){
file <- a
}
cat(" | Data: ")
} else {
file <- basename(input)
cat(" | File: ")
}
cat(file, "\n / \n")
res <- poppr(input, ...)
res$File <- file
result <- rbind(result, res)
}
return(result)
}
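# The incremental rbind() in the loop above is fine for a handful of files; the
# same stacking can also be written as one concatenation. A minimal sketch of
# that idiom, using a hypothetical summarize_file() stand-in for poppr() so the
# snippet is self-contained.

```r
# Hypothetical per-file summary function standing in for poppr(file).
summarize_file <- function(f) data.frame(File = f, N = nchar(f))

files <- c("popA.dat", "popB.dat")

# Build one data frame per file, then stack them all in a single call.
result <- do.call(rbind, lapply(files, summarize_file))
result  # one row per input file, in input order
```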
#==============================================================================#
#' Index of Association
#'
#' Calculate the Index of Association and Standardized Index of Association.
#' \itemize{
#' \item \code{ia()} calculates the index of association over all loci in
#' the data set.
#' \item \code{pair.ia()} calculates the index of association in a pairwise
#' manner among all loci.
#' \item \code{resample.ia()} calculates the index of association on a
#' reduced data set multiple times to create a distribution, showing the
#' variation of values observed at a given sample size (previously
#' \code{jack.ia}).
#' }
#'
#' @param gid a \code{\link{genind}} or \code{\link{genclone}} object.
#'
#' @param sample an integer indicating the number of permutations desired (e.g.
#'   999).
#'
#' @param method an integer from 1 to 4 indicating the sampling method desired.
#'   See \code{\link{shufflepop}} for details.
#'
#' @param quiet Should the function print anything to the screen while it is
#' performing calculations?
#'
#' \code{TRUE} prints nothing.
#'
#' \code{FALSE} (default) will print the population name and progress bar.
#'
#' @param missing a character string. see \code{\link{missingno}} for details.
#'
#' @param plot When \code{TRUE} (default), a heatmap of the values per locus
#'   pair will be plotted (for \code{pair.ia()}). When \code{sample > 0},
#' different things happen with \code{ia()} and \code{pair.ia()}. For
#' \code{ia()}, a histogram for the data set is plotted. For \code{pair.ia()},
#' p-values are added as text on the heatmap.
#'
#' @param hist \code{logical} Deprecated. Use plot.
#'
#' @param index \code{character} either "Ia" or "rbarD". If \code{plot = TRUE},
#'   this indicates which index you want represented in the plot (default:
#' "rbarD").
#'
#' @param valuereturn \code{logical} if \code{TRUE}, the index values from the
#'   reshuffled data are returned. If \code{FALSE} (default), the index is
#'   returned with associated p-values in a 4 element numeric vector.
#'
#' @return
#' \subsection{for \code{pair.ia}}{
#' A matrix with two columns and choose(nLoc(gid), 2) rows representing the
#' values for Ia and rbarD per locus pair.
#' }
#' \subsection{If no sampling has occurred:}{
#' A named number vector of length 2 giving the Index of Association, "Ia";
#' and the Standardized Index of Association, "rbarD"
#' }
#' \subsection{If there is sampling:}{ A named numeric vector of length 4
#' with the following values:
#' \itemize{
#' \item{Ia - }{numeric. The index of association.}
#' \item{p.Ia - }{A number indicating the p-value resulting from a
#' one-sided permutation test based on the number of samples indicated in
#' the original call.}
#' \item{rbarD - }{numeric. The standardized index of association.}
#'     \item{p.rD - }{A number indicating the p-value resulting from a
#' one-sided permutation test based on the number of samples indicated in
#' the original call.}
#' }
#' }
#'   \subsection{If there is sampling and valuereturn = TRUE}{
#' A list with the following elements:
#' \itemize{
#' \item{index }{The above vector}
#'     \item{samples }{An s by 2 data frame, where s is the number of samples
#'     defined. The columns hold the values of Ia and rbarD, respectively.}
#' }
#' }
#'
#' @note \code{jack.ia()} is deprecated as the name was misleading. Please use
#' \code{resample.ia()}
#' @details The index of association was originally developed by A.H.D. Brown
#'   analyzing population structure of wild barley (Brown, 1980). It has been
#'   widely used as a tool to detect clonal reproduction within populations.
#' Populations whose members are undergoing sexual reproduction, whether it be
#' selfing or out-crossing, will produce gametes via meiosis, and thus have a
#' chance to shuffle alleles in the next generation. Populations whose members
#' are undergoing clonal reproduction, however, generally do so via mitosis.
#' This means that the most likely mechanism for a change in genotype is via
#' mutation. The rate of mutation varies from species to species, but it is
#' rarely sufficiently high to approximate a random shuffling of alleles. The
#' index of association is a calculation based on the ratio of the variance of
#' the raw number of differences between individuals and the sum of those
#'   variances over each locus. You can also think of it as the observed
#' variance over the expected variance. If they are the same, then the index
#'   is zero after subtracting one (from Maynard Smith, 1993): \deqn{I_A =
#' \frac{V_O}{V_E}-1}{Ia = (Vo/Ve) - 1} Since the distance is more or less a binary
#' distance, any sort of marker can be used for this analysis. In the
#' calculation, phase is not considered, and any difference increases the
#' distance between two individuals. Remember that each column represents a
#' different allele and that each entry in the table represents the fraction
#'   of the genotype made up by that allele at that locus. Notice also that
#'   each row sums to one. Poppr uses this to calculate distances by
#' simply taking the sum of the absolute values of the differences between
#' rows.
#'
#' The calculation for the distance between two individuals at a single locus
#' with \emph{a} allelic states and a ploidy of \emph{k} is as follows (except
#' for Presence/Absence data): \deqn{ d = \displaystyle
#' \frac{k}{2}\sum_{i=1}^{a} \mid A_{i} - B_{i}\mid }{d(A,B) = (k/2)*sum(abs(Ai - Bi))}
#' To find the total number of differences
#' between two individuals over all loci, you just take \emph{d} over \emph{m}
#' loci, a value we'll call \emph{D}:
#'
#' \deqn{D = \displaystyle \sum_{i=1}^{m} d_i }{D = sum(di)}
#'
#' These values are calculated over all possible combinations of individuals
#' in the data set, \eqn{{n \choose 2}}{choose(n, 2)} after which you end up
#' with \eqn{{n \choose 2}\cdot{}m}{choose(n, 2) * m} values of \emph{d} and
#' \eqn{{n \choose 2}}{choose(n, 2)} values of \emph{D}. Calculating the
#' observed variances is fairly straightforward (modified from Agapow and
#' Burt, 2001):
#'
#' \deqn{ V_O = \frac{\displaystyle \sum_{i=1}^{n \choose 2} D_{i}^2 -
#' \frac{(\displaystyle\sum_{i=1}^{n \choose 2} D_{i})^2}{{n \choose 2}}}{{n
#' \choose 2}}}{Vo = var(D)}
#'
#' Calculating the expected variance is the sum of each of the variances of
#' the individual loci. The calculation at a single locus, \emph{j} is the
#' same as the previous equation, substituting values of \emph{D} for
#' \emph{d}:
#'
#' \deqn{ var_j = \frac{\displaystyle \sum_{i=1}^{n \choose 2} d_{i}^2 -
#' \frac{(\displaystyle\sum_{i=1}^{n \choose 2} d_i)^2}{{n \choose 2}}}{{n
#' \choose 2}} }{Varj = var(dj)}
#'
#' The expected variance is then the sum of all the variances over all
#' \emph{m} loci:
#'
#' \deqn{ V_E = \displaystyle \sum_{j=1}^{m} var_j }{Ve = sum(var(dj))}
#'
#' Agapow and Burt showed that \eqn{I_A}{Ia} increases steadily with the
#' number of loci, so they came up with an approximation that is widely used,
#' \eqn{\bar r_d}{rbarD}. For the derivation, see the manual for
#' \emph{multilocus}.
#'
#' \deqn{ \bar r_d = \frac{V_O - V_E} {2\displaystyle
#' \sum_{j=1}^{m}\displaystyle \sum_{k \neq j}^{m}\sqrt{var_j\cdot{}var_k}}
#' }{rbarD = (Vo - Ve)/(2*sum(sum(sqrt(var(dj)*var(dk))))}
#'
#' @references Paul-Michael Agapow and Austin Burt. Indices of multilocus
#' linkage disequilibrium. \emph{Molecular Ecology Notes}, 1(1-2):101-102,
#' 2001
#'
#' A.H.D. Brown, M.W. Feldman, and E. Nevo. Multilocus structure of natural
#' populations of \emph{Hordeum spontaneum}. \emph{Genetics}, 96(2):523-536, 1980.
#'
#' J M Smith, N H Smith, M O'Rourke, and B G Spratt. How clonal are bacteria?
#' Proceedings of the National Academy of Sciences, 90(10):4384-4388, 1993.
#'
#' @seealso \code{\link{poppr}}, \code{\link{missingno}},
#' \code{\link{import2genind}}, \code{\link{read.genalex}},
#' \code{\link{clonecorrect}}, \code{\link{win.ia}}, \code{\link{samp.ia}}
#'
#' @export
#' @rdname ia
#' @author Zhian N. Kamvar
#' @examples
#' data(nancycats)
#' ia(nancycats)
#'
#' # Pairwise over all loci:
#' data(partial_clone)
#' res <- pair.ia(partial_clone)
#' plot(res, low = "black", high = "green", index = "Ia")
#'
#' # Resampling
#' data(Pinf)
#' resample.ia(Pinf, reps = 99)
#'
#' \dontrun{
#'
#' # Pairwise IA with p-values (this will take about a minute)
#' res <- pair.ia(partial_clone, sample = 999)
#' head(res)
#'
#' # Plot the results of resampling rbarD.
#' library("ggplot2")
#' Pinf.resamp <- resample.ia(Pinf, reps = 999)
#' ggplot(Pinf.resamp[2], aes(x = rbarD)) +
#' geom_histogram() +
#' geom_vline(xintercept = ia(Pinf)[2]) +
#' geom_vline(xintercept = ia(clonecorrect(Pinf))[2], linetype = 2) +
#' xlab(expression(bar(r)[d]))
#'
#' # Get the indices back and plot the distributions.
#' nansamp <- ia(nancycats, sample = 999, valuereturn = TRUE)
#'
#' plot(nansamp, index = "Ia")
#' plot(nansamp, index = "rbarD")
#'
#' # You can also adjust the parameters for how large to display the text
#' # so that it's easier to export it for publication/presentations.
#' library("ggplot2")
#' plot(nansamp, labsize = 5, linesize = 2) +
#' theme_bw() + # adding a theme
#' theme(text = element_text(size = rel(5))) + # changing text size
#' theme(plot.title = element_text(size = rel(4))) + # changing title size
#' ggtitle("Index of Association of nancycats") # adding a new title
#'
#' # Get the index for each population.
#' lapply(seppop(nancycats), ia)
#' # With sampling
#' lapply(seppop(nancycats), ia, sample = 999)
#'
#' # Plot pairwise ia for all populations in a grid with cowplot
#' # Set up the library and data
#' library("cowplot")
#' data(monpop)
#' splitStrata(monpop) <- ~Tree/Year/Symptom
#' setPop(monpop) <- ~Tree
#'
#' # Need to set up a list in which to store the plots.
#' plotlist <- vector(mode = "list", length = nPop(monpop))
#' names(plotlist) <- popNames(monpop)
#'
#' # Loop through the populations, calculate pairwise ia, plot, and then
#' # capture the plot in the list
#' for (i in popNames(monpop)){
#' x <- pair.ia(monpop[pop = i], limits = c(-0.15, 1)) # subset, calculate, and plot
#' plotlist[[i]] <- ggplot2::last_plot() # save the last plot
#' }
#'
#' # Use the plot_grid function to plot.
#' plot_grid(plotlist = plotlist, labels = paste("Tree", popNames(monpop)))
#'
#' }
#==============================================================================#
ia <- function(gid, sample = 0, method = 1, quiet = FALSE, missing = "ignore",
plot = TRUE, hist = TRUE, index = "rbarD", valuereturn = FALSE){
namelist <- list(population = ifelse(nPop(gid) > 1 | is.null(gid@pop),
"Total", popNames(gid)),
File = as.character(match.call()[2])
)
hist <- plot
popx <- gid
missing <- toupper(missing)
type <- gid@type
quiet <- should_poppr_be_quiet(quiet)
if (type == "PA"){
.Ia.Rd <- .PA.Ia.Rd
} else {
popx <- seploc(popx)
}
  # If there are fewer than three individuals in the population, the
  # calculation does not proceed.
if (nInd(gid) < 3){
IarD <- stats::setNames(as.numeric(c(NA, NA)), c("Ia", "rbarD"))
if (sample == 0){
return(IarD)
} else {
IarD <- stats::setNames(as.numeric(rep(NA, 4)), c("Ia","p.Ia","rbarD","p.rD"))
return(IarD)
}
}
IarD <- .Ia.Rd(popx, missing)
names(IarD) <- c("Ia", "rbarD")
  # With no sampling, the function simply returns two named numbers.
if (sample == 0){
Iout <- IarD
result <- NULL
} else {
# sampling will perform the iterations and then return a data frame indicating
# the population, index, observed value, and p-value. It will also produce a
# histogram.
Iout <- NULL
# idx <- data.frame(Index = names(IarD))
if (quiet) {
oh <- progressr::handlers()
on.exit(progressr::handlers(oh))
progressr::handlers("void")
}
progressr::with_progress({
samp <- .sampling(
popx, sample, missing, quiet = quiet, type = type, method = method
)
})
p.val <- sum(IarD[1] <= c(samp$Ia, IarD[1]))/(sample + 1)
p.val[2] <- sum(IarD[2] <= c(samp$rbarD, IarD[2]))/(sample + 1)
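    # Worked example of this convention: with sample = 999 and 12 permuted
    # values at least as large as the observed index, p = (12 + 1)/(999 + 1)
    # = 0.013. The observed value is included in the null distribution, so
    # the smallest attainable p-value is 1/(sample + 1).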
if (hist == TRUE){
the_plot <- poppr.plot(samp, observed = IarD, pop = namelist$population,
index = index, file = namelist$File, pval = p.val, N = nrow(gid@tab)
)
print(the_plot)
}
result <- stats::setNames(vector(mode = "numeric", length = 4),
c("Ia","p.Ia","rbarD","p.rD"))
result[c(1, 3)] <- IarD
result[c(2, 4)] <- p.val
if (valuereturn == TRUE){
iaobj <- list(index = final(Iout, result), samples = samp)
class(iaobj) <- "ialist"
return(iaobj)
}
}
return(final(Iout, result))
}
#==============================================================================#
#' @rdname ia
#' @param low (for pair.ia) a color to use for low values when \code{plot =
#' TRUE}
#' @param high (for pair.ia) a color to use for high values when \code{plot =
#'   TRUE}
#' @param limits (for pair.ia) the limits to be used for the color scale.
#' Defaults to \code{NULL}. If you want to use a custom range, supply two
#' numbers between -1 and 1, (e.g. \code{limits = c(-0.15, 1)})
#' @export
#==============================================================================#
pair.ia <- function(gid, sample = 0L, quiet = FALSE, plot = TRUE, low = "blue",
high = "red", limits = NULL, index = "rbarD", method = 1L){
N <- nInd(gid)
numLoci <- nLoc(gid)
lnames <- locNames(gid)
np <- choose(N, 2)
nploci <- choose(numLoci, 2)
shuffle <- sample > 0L
# quiet <- should_poppr_be_quiet(quiet)
# QUIET <- if (shuffle) TRUE else quiet
if (quiet) {
oh <- progressr::handlers()
on.exit(progressr::handlers(oh))
progressr::handlers("void")
}
progressr::with_progress({
p <- make_progress((1 + sample) * nploci, 50)
res <- pair_ia_internal(gid, N, numLoci, lnames, np, nploci, p, sample = 0)
if (shuffle) {
# Initialize with 1 to account for the observed data.
counts <- matrix(1L, nrow = nrow(res), ncol = ncol(res))
for (i in seq_len(sample)) {
tmp <- shufflepop(gid, method = method)
tmpres <- pair_ia_internal(tmp, N, numLoci, lnames, np, nploci, p, i)
counts <- counts + as.integer(tmpres >= res)
}
p <- counts/(sample + 1)
res <- cbind(Ia = res[, 1],
p.Ia = p[, 1],
rbarD = res[, 2],
p.rD = p[, 2])
}
})
class(res) <- c("pairia", "matrix")
if (plot) {
tryCatch(plot(res, index = index, low = low, high = high, limits = limits),
error = function(e) e)
}
res
}
pair_ia_internal <- function(gid, N, numLoci, lnames, np, nploci, p, sample = NULL) {
# Calculate pairwise distances for each locus. This will be a matrix of
# np rows and numLoci columns.
if (gid@type == "codom") {
V <- pair_matrix(seploc(gid), numLoci, np)
} else { # P/A case
V <- apply(tab(gid), 2, function(x) as.vector(dist(x)))
# checking for missing data and imputing the comparison to zero.
if (any(is.na(V))) {
V[which(is.na(V))] <- 0
}
}
colnames(V) <- lnames
# calculate I_A and \bar{r}_d for each combination of loci
loci_pairs <- combn(lnames, 2)
ia_pairs <- matrix(NA_real_, nrow = 2, ncol = nploci)
for (i in seq(nploci)) {
if ((nploci * sample + i) %% p$step == 0) p$rog()
the_pair <- loci_pairs[, i, drop = TRUE]
newV <- V[, the_pair, drop = FALSE]
ia_pairs[, i] <- ia_from_d_and_D(
V = list(
d.vector = colSums(newV),
d2.vector = colSums(newV * newV),
D.vector = rowSums(newV)
),
np = np
)
}
colnames(ia_pairs) <- apply(loci_pairs, 2, paste, collapse = ":")
rownames(ia_pairs) <- c("Ia", "rbarD")
ia_pairs <- t(ia_pairs)
ia_pairs
}
#==============================================================================#
#' Create a table of summary statistics per locus.
#'
#' @param x a [genind-class] or [genclone-class]
#' object.
#'
#' @param index Which diversity index to use. Choices are \itemize{ \item
#' `"simpson"` (Default) to give Simpson's index \item `"shannon"`
#' to give the Shannon-Wiener index \item `"invsimpson"` to give the
#'   Inverse Simpson's index, aka the Stoddard and Taylor index.}
#'
#' @param lev At what level do you want to analyze diversity? Choices are
#' `"allele"` (Default) or `"genotype"`.
#'
#' @param population Select the populations to be analyzed. This is the
#' parameter `sublist` passed on to the function [popsub()].
#' Defaults to `"ALL"`.
#'
#' @param information When `TRUE` (Default), this will print out a header
#' of information to the R console.
#'
#' @return a table with 4 columns indicating the Number of alleles/genotypes
#' observed, Diversity index chosen, Nei's 1978 gene diversity (expected
#' heterozygosity), and Evenness.
#'
#' @seealso [vegan::diversity()], [poppr()]
#' @md
#'
#' @note The calculation of `Hexp` is \eqn{\left(\frac{n}{n-1}\right)\left(1 -
#'   \sum_{i = 1}^k p^{2}_{i}\right)}{(n/(n - 1))*(1 - sum(p^2))} where p is
#'   the vector of allele frequencies at a given locus and n is the number of
#'   observed alleles (Nei, 1978); the reported value is the average over all
#'   loci. Caution should be
#' exercised in interpreting the results of Hexp with polyploid organisms with
#' ambiguous ploidy. The lack of allelic dosage information will cause rare
#' alleles to be over-represented and artificially inflate the index. This is
#' especially true with small sample sizes.
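#'
#' As a sketch with hypothetical allele frequencies (not drawn from any
#' data set), the per-locus computation is:
#'
#' ```
#' p <- c(0.5, 0.3, 0.2) # allele frequencies at one locus
#' n <- 20               # number of observed alleles
#' (n / (n - 1)) * (1 - sum(p^2))
#' #> 0.6526316
#' ```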
#'
#' If `lev = "genotype"`, then all statistics reflect **genotypic** diversity
#' within each locus. This includes the calculation for `Hexp`, which turns
#' into the unbiased Simpson's index.
#'
#' @author Zhian N. Kamvar
#'
#' @references
#' Jari Oksanen, F. Guillaume Blanchet, Roeland Kindt, Pierre Legendre, Peter
#' R. Minchin, R. B. O'Hara, Gavin L. Simpson, Peter Solymos, M. Henry H.
#' Stevens, and Helene Wagner. vegan: Community Ecology Package, 2012. R
#' package version 2.0-5.
#'
#' Niklaus J. Gr\"unwald, Stephen B. Goodwin, Michael G. Milgroom, and William
#' E. Fry. Analysis of genotypic diversity data for populations of
#' microorganisms. Phytopathology, 93(6):738-46, 2003
#'
#' J.A. Ludwig and J.F. Reynolds. Statistical Ecology. A Primer on Methods and
#' Computing. New York USA: John Wiley and Sons, 1988.
#'
#' E.C. Pielou. Ecological Diversity. Wiley, 1975.
#'
#' J.A. Stoddart and J.F. Taylor. Genotypic diversity: estimation and
#' prediction in samples. Genetics, 118(4):705-11, 1988.
#'
#' Masatoshi Nei. Estimation of average heterozygosity and genetic distance
#' from a small number of individuals. Genetics, 89(3):583-590, 1978.
#'
#' Claude Elwood Shannon. A mathematical theory of communication. Bell Systems
#' Technical Journal, 27:379-423,623-656, 1948
#'
#' @export
#' @examples
#'
#' data(nancycats)
#' locus_table(nancycats[pop = 5])
#' \dontrun{
#' # Analyze locus statistics for the North American population of P. infestans.
#' # Note that due to the unknown dosage of alleles, many of these statistics
#' # will be artificially inflated for polyploids.
#' data(Pinf)
#' locus_table(Pinf, population = "North America")
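#'
#' # As a sketch, compute the same table on genotypic rather than allelic
#' # diversity within each locus:
#' locus_table(Pinf, lev = "genotype", population = "North America")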
#' }
#==============================================================================#
locus_table <- function(x, index = "simpson", lev = "allele",
population = "ALL", information = TRUE){
ploid <- unique(ploidy(x))
type <- x@type
INDICES <- c("shannon", "simpson", "invsimpson")
index <- match.arg(index, INDICES)
x <- popsub(x, population, drop = FALSE)
x.loc <- summary(as.loci(x))
outmat <- vapply(x.loc, locus_table_pegas, numeric(4), index, lev, ploid, type)
loci <- colnames(outmat)
divs <- rownames(outmat)
res <- matrix(0.0, nrow = ncol(outmat) + 1, ncol = nrow(outmat))
dimlist <- list(`locus` = c(loci, "mean"), `summary` = divs)
res[-nrow(res), ] <- t(outmat)
res[nrow(res), ] <- colMeans(res[-nrow(res), ], na.rm = TRUE)
attr(res, "dimnames") <- dimlist
if (information){
if (index == "simpson"){
msg <- "Simpson index"
} else if (index == "shannon"){
msg <- "Shannon-Wiener index"
} else {
msg <- "Stoddard and Taylor index"
}
message("\n", divs[1], " = Number of observed ", paste0(divs[1], "s"), appendLF = FALSE)
message("\n", divs[2], " = ", msg, appendLF = FALSE)
message("\n", divs[3], " = Nei's 1978 gene diversity\n", appendLF = FALSE)
message("------------------------------------------\n", appendLF = FALSE)
}
class(res) <- c("locustable", "matrix")
return(res)
}
#==============================================================================#
#' Tabulate alleles that occur in only one population.
#'
#' @param gid a [genind-class] or [genclone-class]
#' object.
#'
#' @param form a [formula()] giving the levels of markers and
#' hierarchy to analyze. See Details.
#'
#' @param report one of `"table", "vector",` or `"data.frame"`. Tables
#' (Default) and data frame will report counts along with populations or
#' individuals. Vectors will simply report which populations or individuals
#' contain private alleles. Tables are matrices with populations or
#' individuals in rows and alleles in columns. Data frames are long form.
#'
#' @param level one of `"population"` (Default) or `"individual"`.
#'
#' @param count.alleles `logical`. If `TRUE` (Default), The report
#' will return the observed number of alleles private to each population. If
#' `FALSE`, each private allele will be counted once, regardless of
#' dosage.
#'
#' @param drop `logical`. if `TRUE`, populations/individuals without
#' private alleles will be dropped from the result. Defaults to `FALSE`.
#'
#' @return a matrix, data.frame, or vector defining the populations or
#' individuals containing private alleles. If vector is chosen, alleles are
#' not defined.
#'
#' @details The argument `form` allows for control over the strata at which
#'   private alleles should be computed. It takes a formula whose left hand
#'   side can be either "allele", "locus", or "loci". The right hand side is
#'   "." by default; if you change it, it must correspond to strata located
#'   in the [adegenet::strata()] slot. Note that the right hand side is
#'   disabled for genpop objects.
#'
#' @export
#' @author Zhian N. Kamvar
#' @md
#' @examples
#'
#' data(Pinf) # Load P. infestans data.
#' private_alleles(Pinf)
#'
#' \dontrun{
#' # Analyze private alleles based on the country of interest:
#' private_alleles(Pinf, alleles ~ Country)
#'
#' # Number of observed alleles per locus
#' private_alleles(Pinf, locus ~ Country, count.alleles = TRUE)
#'
#' # Get raw number of private alleles per locus.
#' (pal <- private_alleles(Pinf, locus ~ Country, count.alleles = FALSE))
#'
#' # Get percentages.
#' sweep(pal, 2, nAll(Pinf)[colnames(pal)], FUN = "/")
#'
#' # An example of how these data can be displayed.
#' library("ggplot2")
#' Pinfpriv <- private_alleles(Pinf, report = "data.frame")
#' ggplot(Pinfpriv) + geom_tile(aes(x = population, y = allele, fill = count))
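#'
#' # As a sketch, report only which populations contain private alleles,
#' # dropping populations without any:
#' private_alleles(Pinf, report = "vector", drop = TRUE)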
#' }
#==============================================================================#
private_alleles <- function(gid, form = alleles ~ ., report = "table",
level = "population", count.alleles = TRUE,
drop = FALSE){
REPORTARGS <- c("table", "vector", "data.frame")
LEVELARGS <- c("individual", "population")
LHS_ARGS <- c("alleles", "locus", "loci")
showform <- utils::capture.output(print(form))
marker <- pmatch(as.character(form[[2]]), LHS_ARGS, nomatch = 0L,
duplicates.ok = FALSE)
if (all(marker == 0L)){
stop("Left hand side of ", showform, " must be one of:\n ",
paste(LHS_ARGS, collapse = " "))
} else {
marker <- LHS_ARGS[marker]
}
strataform <- form[c(1, 3)]
the_strata <- all.vars(strataform[[2]])
if (length(the_strata) > 1 || the_strata[1] != "."){
if (!is.genpop(gid)){
setPop(gid) <- strataform
} else {
warning("cannot set strata for a genpop object.")
}
}
report <- match.arg(report, REPORTARGS)
level <- match.arg(level, LEVELARGS)
if (!is.genind(gid) & !is.genpop(gid)){
stop(paste(gid, "is not a genind or genpop object."))
}
if (is.genind(gid) && !is.null(pop(gid)) | is.genpop(gid) && nPop(gid) > 1){
if (is.genind(gid)){
gid.pop <- tab(genind2genpop(gid, quiet = TRUE))
} else {
gid.pop <- tab(gid)
}
private_columns <- colSums(ifelse(gid.pop > 0, 1, 0), na.rm = TRUE) < 2
privates <- gid.pop[, private_columns, drop = FALSE]
if (level == "individual" & is.genind(gid)){
gid.tab <- tab(gid)
privates <- gid.tab[, private_columns, drop = FALSE]
} else if (!count.alleles){
privates <- ifelse(privates > 0, 1, 0)
}
if (drop){
privates <- privates[rowSums(privates, na.rm = TRUE) > 0, , drop = FALSE]
}
if (marker != "alleles"){
private_fac <- locFac(gid)[private_columns]
privates <- vapply(unique(private_fac), function(l){
rowSums(privates[, private_fac == l, drop = FALSE], na.rm = TRUE)
}, FUN.VALUE = numeric(nrow(privates))
)
colnames(privates) <- locNames(gid)[unique(private_fac)]
}
if (length(privates) == 0){
privates <- NULL
cat("No private alleles detected.")
return(invisible(NULL))
}
if (report == "vector"){
privates <- rownames(privates)
} else if (report == "data.frame"){
marker <- if (marker == "alleles") "allele" else "locus"
names(dimnames(privates)) <- c(level, marker)
privates <- as.data.frame.table(privates,
responseName = "count",
stringsAsFactors = FALSE)
}
return(privates)
} else {
stop("There are no populations detected")
}
}
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#
# This software was authored by Zhian N. Kamvar and Javier F. Tabima, graduate
# students at Oregon State University; Jonah C. Brooks, undergraduate student at
# Oregon State University; and Dr. Nik Grünwald, an employee of USDA-ARS.
#
# Permission to use, copy, modify, and distribute this software and its
# documentation for educational, research and non-profit purposes, without fee,
# and without a written agreement is hereby granted, provided that the statement
# above is incorporated into the material, giving appropriate attribution to the
# authors.
#
# Permission to incorporate this software into commercial products may be
# obtained by contacting USDA ARS and OREGON STATE UNIVERSITY Office for
# Commercialization and Corporate Development.
#
# The software program and documentation are supplied "as is", without any
# accompanying services from the USDA or the University. USDA ARS or the
# University do not warrant that the operation of the program will be
# uninterrupted or error-free. The end-user understands that the program was
# developed for research purposes and is advised not to rely exclusively on the
# program for any reason.
#
# IN NO EVENT SHALL USDA ARS OR OREGON STATE UNIVERSITY BE LIABLE TO ANY PARTY
# FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
# LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION,
# EVEN IF THE OREGON STATE UNIVERSITY HAS BEEN ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE. USDA ARS OR OREGON STATE UNIVERSITY SPECIFICALLY DISCLAIMS ANY
# WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AND ANY STATUTORY
# WARRANTY OF NON-INFRINGEMENT. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS"
# BASIS, AND USDA ARS AND OREGON STATE UNIVERSITY HAVE NO OBLIGATIONS TO PROVIDE
# MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
#==============================================================================#
#' Produce a basic summary table for population genetic analyses.
#'
#' @description
#'
#' For the \pkg{poppr} package description, please see
#' \code{\link[=poppr-package]{package?poppr}}
#'
#' This function allows the user to quickly view indices of heterozygosity,
#' evenness, and linkage to aid in the decision of a path to further analyze
#' a specified dataset. It natively takes \code{\linkS4class{genind}} and
#' \code{\linkS4class{genclone}} objects, but can convert any raw data formats
#' that adegenet can take (fstat, structure, genetix, and genpop) as well as
#' genalex files exported into a csv format (see \code{\link{read.genalex}} for
#' details).
#'
#'
#' @param dat a \code{\linkS4class{genind}} object OR a
#' \code{\linkS4class{genclone}} object OR any fstat, structure, genetix,
#' genpop, or genalex formatted file.
#'
#' @param total When \code{TRUE} (default), indices will be calculated for the
#' pooled populations.
#'
#' @param sublist a list of character strings or integers to indicate specific
#' population names (accessed via \code{popNames()}).
#' Defaults to "ALL".
#'
#' @param exclude a \code{vector} of population names or indexes that the user
#' wishes to discard. Default to \code{NULL}.
#'
#' @param blacklist DEPRECATED, use exclude.
#'
#' @param sample an integer indicating the number of permutations desired to
#' obtain p-values. Sampling will shuffle genotypes at each locus to simulate
#' a panmictic population using the observed genotypes. Calculating the
#' p-value includes the observed statistics, so set your sample number to one
#' off for a round p-value (eg. \code{sample = 999} will give you p = 0.001
#' and \code{sample = 1000} will give you p = 0.000999001).
#'
#' @param method an integer from 1 to 4 indicating the method of sampling
#' desired. see \code{\link{shufflepop}} for details.
#'
#' @param missing how should missing data be treated? \code{"zero"} and
#' \code{"mean"} will set the missing values to those documented in
#' \code{\link{tab}}. \code{"loci"} and \code{"geno"} will remove any loci or
#'   genotypes with missing data, respectively (see \code{\link{missingno}} for
#'   more information).
#'
#' @param cutoff \code{numeric} a number from 0 to 1 indicating the percent
#' missing data allowed for analysis. This is to be used in conjunction with
#' the flag \code{missing} (see \code{\link{missingno}} for details)
#'
#' @param quiet \code{FALSE} (default) will display a progress bar for each
#' population analyzed.
#'
#' @param clonecorrect default \code{FALSE}. must be used with the \code{strata}
#' parameter, or the user will potentially get undesired results. see
#' \code{\link{clonecorrect}} for details.
#'
#' @param strata a \code{formula} indicating the hierarchical levels to be used.
#' The hierarchies should be present in the \code{strata} slot. See
#' \code{\link{strata}} for details.
#'
#' @param keep an \code{integer}. This indicates which strata you wish to keep
#' after clone correcting your data sets. To combine strata, just set keep
#'   from 1 to the number of stratifications set in strata. See
#' \code{\link{clonecorrect}} for details.
#'
#' @param plot \code{logical} if \code{TRUE} (default) and \code{sampling > 0},
#' a histogram will be produced for each population.
#'
#' @param hist \code{logical} Deprecated. Use plot.
#'
#' @param index \code{character} Either "Ia" or "rbarD". If \code{hist = TRUE},
#' this will determine the index used for the visualization.
#'
#' @param minsamp an \code{integer} indicating the minimum number of individuals
#' to resample for rarefaction analysis. See \code{\link[vegan]{rarefy}} for
#' details.
#'
#' @param legend \code{logical}. When this is set to \code{TRUE}, a legend
#' describing the resulting table columns will be printed. Defaults to
#' \code{FALSE}
#'
#' @param ... arguments to be passed on to \code{\link{diversity_stats}}
#'
#' @return A data frame with populations in rows and the following columns:
#' \item{Pop}{A vector indicating the population factor}
#' \item{N}{An integer vector indicating the number of individuals/isolates in
#' the specified population.}
#' \item{MLG}{An integer vector indicating the number of multilocus genotypes
#' found in the specified population, (see: \code{\link{mlg}})}
#' \item{eMLG}{The expected number of MLG at the lowest common sample size
#' (set by the parameter \code{minsamp}).}
#' \item{SE}{The standard error for the rarefaction analysis}
#' \item{H}{Shannon-Wiener diversity index}
#' \item{G}{Stoddard and Taylor's Index}
#' \item{lambda}{Simpson's index}
#' \item{E.5}{Evenness}
#' \item{Hexp}{Nei's gene diversity (expected heterozygosity)}
#' \item{Ia}{A numeric vector giving the value of the Index of Association for
#' each population factor, (see \code{\link{ia}}).}
#' \item{p.Ia}{A numeric vector indicating the p-value for Ia from the number
#' of reshufflings indicated in \code{sample}. Lowest value is 1/n where n is
#' the number of observed values.}
#' \item{rbarD}{A numeric vector giving the value of the Standardized Index of
#' Association for each population factor, (see \code{\link{ia}}).}
#' \item{p.rD}{A numeric vector indicating the p-value for rbarD from the
#' number of reshuffles indicated in \code{sample}. Lowest value is 1/n where
#' n is the number of observed values.}
#' \item{File}{A vector indicating the name of the original data file.}
#'
#' @details This table is intended to be a first look into the dynamics of
#'   multilocus genotype diversity. Many of the statistics (except for the
#'   index of association) are simply based on counts of multilocus genotypes
#' and do not take into account the actual allelic states.
#' \strong{Descriptions of the statistics can be found in the Algorithms and
#' Equations vignette}: \code{vignette("algo", package = "poppr")}.
#' \subsection{sampling}{The sampling procedure is explicitly for testing the
#' index of association. None of the other diversity statistics (H, G, lambda,
#' E.5) are tested with this sampling due to the differing data types. To
#' obtain confidence intervals for these statistics, please see
#' \code{\link{diversity_ci}}.}
#' \subsection{rarefaction}{Rarefaction analysis is performed on the number of
#' multilocus genotypes because it is relatively easy to estimate (Grünwald et
#' al., 2003). To obtain rarefied estimates of diversity, it is possible to
#'   use \code{\link{diversity_ci}} with the argument \code{rarefy = TRUE}.}
#' \subsection{graphic}{This function outputs a \pkg{ggplot2} graphic of
#' histograms. These can be manipulated to be visualized in another manner by
#' retrieving the plot with the \code{\link{last_plot}} command from
#' \pkg{ggplot2}. A useful manipulation would be to arrange the graphs into a
#' single column so that the values of the statistic line up: \cr \code{p <-
#' last_plot(); p + facet_wrap(~population, ncol = 1, scales = "free_y")}\cr
#' The name for the groupings is "population" and the name for the x axis is
#' "value".}
#'
#' @note The calculation of \code{Hexp} has changed from \pkg{poppr} 1.x. It was
#' previously calculated based on the diversity of multilocus genotypes,
#' resulting in a value of 1 for sexual populations. This was obviously not
#' Nei's 1978 expected heterozygosity. We have thus changed the statistic to
#'   be the true value of Hexp by calculating \eqn{\left(\frac{n}{n-1}\right)
#'   \left(1 - \sum_{i = 1}^k p^{2}_{i}\right)}{(n/(n - 1))*(1 - sum(p^2))}
#'   where p is the vector of allele frequencies at a given locus and n is the
#'   number of observed alleles (Nei, 1978); the reported value is the average
#'   over all loci. Caution should be
#' exercised in interpreting the results of Hexp with polyploid organisms with
#' ambiguous ploidy. The lack of allelic dosage information will cause rare
#' alleles to be over-represented and artificially inflate the index. This is
#' especially true with small sample sizes.
#'
#' @seealso \code{\link{clonecorrect}},
#' \code{\link{poppr.all}},
#' \code{\link{ia}},
#' \code{\link{missingno}},
#' \code{\link{mlg}},
#' \code{\link{diversity_stats}},
#' \code{\link{diversity_ci}}
#'
#' @export
#' @author Zhian N. Kamvar
#' @references Paul-Michael Agapow and Austin Burt. Indices of multilocus
#' linkage disequilibrium. \emph{Molecular Ecology Notes}, 1(1-2):101-102,
#' 2001
#'
#' A.H.D. Brown, M.W. Feldman, and E. Nevo. Multilocus structure of natural
#' populations of \emph{Hordeum spontaneum}. \emph{Genetics}, 96(2):523-536,
#' 1980.
#'
#' Niklaus J. Gr\"unwald, Stephen B. Goodwin, Michael G. Milgroom, and William
#' E. Fry. Analysis of genotypic diversity data for populations of
#' microorganisms. Phytopathology, 93(6):738-46, 2003
#'
#' Bernhard Haubold and Richard R. Hudson. Lian 3.0: detecting linkage
#' disequilibrium in multilocus data. Bioinformatics, 16(9):847-849, 2000.
#'
#' Kenneth L.Jr. Heck, Gerald van Belle, and Daniel Simberloff. Explicit
#' calculation of the rarefaction diversity measurement and the determination
#' of sufficient sample size. Ecology, 56(6):pp. 1459-1461, 1975
#'
#' Masatoshi Nei. Estimation of average heterozygosity and genetic distance
#' from a small number of individuals. Genetics, 89(3):583-590, 1978.
#'
#' S H Hurlbert. The nonconcept of species diversity: a critique and
#' alternative parameters. Ecology, 52(4):577-586, 1971.
#'
#' J.A. Ludwig and J.F. Reynolds. Statistical Ecology. A Primer on Methods and
#' Computing. New York USA: John Wiley and Sons, 1988.
#'
#' Simpson, E. H. Measurement of diversity. Nature 163: 688, 1949
#' doi:10.1038/163688a0
#'
#' Good, I. J. (1953). On the Population Frequency of Species and the
#' Estimation of Population Parameters. \emph{Biometrika} 40(3/4): 237-264.
#'
#' Lande, R. (1996). Statistics and partitioning of species diversity, and
#' similarity among multiple communities. \emph{Oikos} 76: 5-13.
#'
#' Jari Oksanen, F. Guillaume Blanchet, Roeland Kindt, Pierre Legendre, Peter
#' R. Minchin, R. B. O'Hara, Gavin L. Simpson, Peter Solymos, M. Henry H.
#' Stevens, and Helene Wagner. vegan: Community Ecology Package, 2012. R
#' package version 2.0-5.
#'
#' E.C. Pielou. Ecological Diversity. Wiley, 1975.
#'
#' Claude Elwood Shannon. A mathematical theory of communication. Bell Systems
#' Technical Journal, 27:379-423,623-656, 1948
#'
#' J M Smith, N H Smith, M O'Rourke, and B G Spratt. How clonal are bacteria?
#' Proceedings of the National Academy of Sciences, 90(10):4384-4388, 1993.
#'
#' J.A. Stoddart and J.F. Taylor. Genotypic diversity: estimation and
#' prediction in samples. Genetics, 118(4):705-11, 1988.
#'
#'
#' @examples
#' data(nancycats)
#' poppr(nancycats)
#'
#' \dontrun{
#' # Sampling
#' poppr(nancycats, sample = 999, total = FALSE, plot = TRUE)
#'
#' # Customizing the plot
#' library("ggplot2")
#' p <- last_plot()
#' p + facet_wrap(~population, scales = "free_y", ncol = 1)
#'
#' # Turning off diversity statistics (see get_stats)
#' poppr(nancycats, total=FALSE, H = FALSE, G = FALSE, lambda = FALSE, E5 = FALSE)
#'
#' # The previous version of poppr contained a definition of Hexp, which
#' # was calculated as (N/(N - 1))*lambda. This statistic was carried over
#' # from the program multilocus and was later recognized as an unbiased
#' # Simpson's diversity metric (Lande, 1996; Good, 1953).
#'
#' data(Aeut)
#'
#' uSimp <- function(x){
#' lambda <- vegan::diversity(x, "simpson")
#' x <- drop(as.matrix(x))
#' if (length(dim(x)) > 1){
#' N <- rowSums(x)
#' } else {
#' N <- sum(x)
#' }
#' return((N/(N-1))*lambda)
#' }
#' poppr(Aeut, uSimp = uSimp)
#'
#'
#' # Demonstration with viral data
#' # Note: this is a larger data set that could take a couple of minutes to run
#' # on slower computers.
#' data(H3N2)
#' strata(H3N2) <- data.frame(other(H3N2)$x)
#' setPop(H3N2) <- ~country
#' poppr(H3N2, total = FALSE, sublist=c("Austria", "China", "USA"),
#' clonecorrect = TRUE, strata = ~country/year)
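#'
#' # As a sketch, print a legend describing each column of the table:
#' poppr(nancycats, legend = TRUE)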
#' }
#==============================================================================#
#' @import adegenet ggplot2 vegan
poppr <- function(dat, total = TRUE, sublist = "ALL", exclude = NULL, blacklist = NULL,
sample = 0, method = 1, missing = "ignore", cutoff = 0.05,
quiet = FALSE, clonecorrect = FALSE, strata = 1, keep = 1,
plot = TRUE, hist = TRUE, index = "rbarD", minsamp = 10,
legend = FALSE, ...){
if (inherits(dat, c("genlight", "snpclone"))){
msg <- "The poppr function will not work with genlight or snpclone objects"
msg <- paste0(msg, "\nIf you want to calculate genotypic diversity, use ",
"the function diversity_stats().")
stop(msg)
}
quiet <- should_poppr_be_quiet(quiet)
x <- process_file(dat, missing = missing, cutoff = cutoff,
clonecorrect = clonecorrect, strata = strata,
keep = keep, quiet = TRUE)
# The namelist will contain information such as the filename and population
# names so that they can easily be ported around.
namelist <- NULL
hist <- plot
callpop <- match.call()
if (!is.null(blacklist)) {
warning(
option_deprecated(
callpop,
"blacklist",
"exclude",
"2.8.7.",
"Please use `exclude` in the future"
),
immediate. = TRUE
)
exclude <- blacklist
}
if (!is.na(grep("system.file", callpop)[1])){
popsplt <- unlist(strsplit(dat, "/"))
namelist$File <- popsplt[length(popsplt)]
} else if (is.genind(dat)){
namelist$File <- as.character(callpop[2])
} else {
namelist$File <- basename(x$X)
}
if (toupper(sublist[1]) == "TOTAL" & length(sublist) == 1){
dat <- x$GENIND
pop(dat) <- rep("Total", nInd(dat))
poplist <- NULL
poplist$Total <- dat
} else {
dat <- popsub(x$GENIND, sublist = sublist, exclude = exclude)
if (any(levels(pop(dat)) == "")) {
levels(pop(dat))[levels(pop(dat)) == ""] <- "?"
warning("missing population factor replaced with '?'")
}
pdrop <- if (dat$type == "PA") FALSE else TRUE
poplist <- if (is.null(pop(dat))) NULL else seppop(dat, drop = pdrop)
}
# Creating the genotype matrix for vegan's diversity analysis.
pop.mat <- mlg.matrix(dat)
if (total == TRUE & !is.null(poplist) & length(poplist) > 1){
poplist$Total <- dat
pop.mat <- rbind(pop.mat, colSums(pop.mat))
}
sublist <- names(poplist)
Iout <- NULL
total <- toupper(total)
missing <- toupper(missing)
  # For presence/absence markers, a different algorithm is applied.
if (legend) poppr_message()
MLG.vec <- rowSums(ifelse(pop.mat > 0, 1, 0))
N.vec <- rowSums(pop.mat)
datploid <- unique(ploidy(dat))
Hexp_correction <- 1
if (length(datploid) > 1 || any(datploid > 2)){
datploid <- NULL
Hexp_correction <- N.vec/(N.vec - 1)
}
divmat <- diversity_stats(pop.mat, ...)
if (!is.matrix(divmat)){
divmat <- matrix(divmat, nrow = 1, dimnames = list(NULL, names(divmat)))
}
if (!is.null(poplist)){
# rarefaction giving the standard errors. This will use the minimum pop size
# above a user-defined threshold.
raremax <- ifelse(is.null(nrow(pop.mat)), sum(pop.mat),
ifelse(min(rowSums(pop.mat)) > minsamp,
min(rowSums(pop.mat)), minsamp))
Hexp <- vapply(lapply(poplist, pegas::as.loci), FUN = get_hexp_from_loci,
FUN.VALUE = numeric(1), ploidy = datploid, type = dat@type)
Hexp <- data.frame(Hexp = Hexp)
N.rare <- suppressWarnings(vegan::rarefy(pop.mat, raremax, se = TRUE))
IaList <- lapply(sublist, function(x){
namelist <- list(file = namelist$File, population = x)
.ia(poplist[[x]],
sample = sample,
method = method,
quiet = quiet,
missing = missing,
hist = FALSE,
namelist = namelist)
})
names(IaList) <- sublist
if (sample > 0){
classtest <- summary(IaList)
classless <- !classtest[, "Class"] %in% "ialist"
if (any(classless)){
no_class_pops <- paste(names(IaList[classless]), collapse = ", ")
msg <- paste0("values for ", no_class_pops,
" could not be plotted.\n")
IaList[classless] <- lapply(IaList[classless], function(x) list(index = x))
warning(msg, call. = FALSE)
}
if (plot){
try(print(poppr.plot(sample = IaList[!classless], file = namelist$File)))
}
IaList <- data.frame(t(vapply(IaList, "[[", numeric(4), "index")))
} else {
IaList <- t(as.data.frame(IaList))
}
Iout <- as.data.frame(
list(
Pop = sublist,
N = N.vec,
MLG = MLG.vec,
eMLG = N.rare[1, ],
SE = N.rare[2, ],
divmat,
Hexp,
IaList,
File = namelist$File
),
stringsAsFactors = FALSE
)
rownames(Iout) <- NULL
} else {
# rarefaction giving the standard errors. No population structure means that
# the sample is equal to the number of individuals.
N.rare <- rarefy(pop.mat, sum(pop.mat), se = TRUE)
Hexp <- get_hexp_from_loci(pegas::as.loci(dat),
ploidy = datploid, type = dat@type)
Hexp <- data.frame(Hexp = Hexp)
    IaList <- .ia(dat,
sample = sample,
method = method,
quiet = quiet,
missing = missing,
namelist = list(File = namelist$File, population = "Total"),
hist = plot
)
IaList <- if (sample > 0) IaList$index else IaList
Iout <- as.data.frame(list(
Pop = "Total",
N = N.vec,
MLG = MLG.vec,
eMLG = N.rare[1, ],
SE = N.rare[2, ],
divmat,
Hexp,
as.data.frame(t(IaList)),
File = namelist$File
), stringsAsFactors = FALSE)
rownames(Iout) <- NULL
}
class(Iout) <- c("popprtable", "data.frame")
return(Iout)
}
#==============================================================================#
#' Process a list of files with poppr
#'
#' poppr.all is a wrapper function that will loop through a list of files from
#' the working directory, execute \code{\link{poppr}}, and concatenate the
#' output into one data frame.
#'
#' @param filelist a list of files in the current working directory
#'
#' @param ... arguments passed on to poppr
#'
#' @return see \code{\link{poppr}}
#'
#' @seealso \code{\link{poppr}}, \code{\link{getfile}}
#' @export
#' @author Zhian N. Kamvar
#' @examples
#' \dontrun{
#' # Obtain a list of fstat files from a directory.
#' x <- getfile(multi=TRUE, pattern="^.+?dat$")
#'
#' # run the analysis on each file.
#' poppr.all(file.path(x$path, x$files))
#' }
#==============================================================================#
poppr.all <- function(filelist, ...){
result <- NULL
  for (a in seq_along(filelist)){
cat(" \\ \n")
input <- filelist[[a]]
if (is.genind(input)){
file <- names(filelist)[a]
if (is.null(file)){
file <- a
}
cat(" | Data: ")
} else {
file <- basename(input)
cat(" | File: ")
}
cat(file, "\n / \n")
res <- poppr(input, ...)
res$File <- file
result <- rbind(result, res)
}
return(result)
}
#==============================================================================#
#' Index of Association
#'
#' Calculate the Index of Association and Standardized Index of Association.
#' \itemize{
#' \item \code{ia()} calculates the index of association over all loci in
#' the data set.
#' \item \code{pair.ia()} calculates the index of association in a pairwise
#' manner among all loci.
#' \item \code{resample.ia()} calculates the index of association on a
#' reduced data set multiple times to create a distribution, showing the
#' variation of values observed at a given sample size (previously
#' \code{jack.ia}).
#' }
#'
#' @param gid a \code{\link{genind}} or \code{\link{genclone}} object.
#'
#' @param sample an integer indicating the number of permutations desired (eg
#' 999).
#'
#' @param method an integer from 1 to 4 indicating the sampling method desired.
#' see \code{\link{shufflepop}} for details.
#'
#' @param quiet Should the function print anything to the screen while it is
#' performing calculations?
#'
#' \code{TRUE} prints nothing.
#'
#' \code{FALSE} (default) will print the population name and progress bar.
#'
#' @param missing a character string. see \code{\link{missingno}} for details.
#'
#' @param plot When \code{TRUE} (default), a heatmap of the values per locus
#'   pair will be plotted (for \code{pair.ia()}). When \code{sample > 0},
#' different things happen with \code{ia()} and \code{pair.ia()}. For
#' \code{ia()}, a histogram for the data set is plotted. For \code{pair.ia()},
#' p-values are added as text on the heatmap.
#'
#' @param hist \code{logical} Deprecated. Use plot.
#'
#' @param index \code{character} either "Ia" or "rbarD". If \code{hist = TRUE},
#' this indicates which index you want represented in the plot (default:
#' "rbarD").
#'
#' @param valuereturn \code{logical} if \code{TRUE}, the index values from the
#'   reshuffled data are returned. If \code{FALSE} (default), the index is
#' returned with associated p-values in a 4 element numeric vector.
#'
#' @return
#' \subsection{for \code{pair.ia}}{
#' A matrix with two columns and choose(nLoc(gid), 2) rows representing the
#' values for Ia and rbarD per locus pair.
#' }
#' \subsection{If no sampling has occurred:}{
#' A named number vector of length 2 giving the Index of Association, "Ia";
#' and the Standardized Index of Association, "rbarD"
#' }
#' \subsection{If there is sampling:}{ A named number vector of length 4
#' with the following values:
#' \itemize{
#' \item{Ia - }{numeric. The index of association.}
#' \item{p.Ia - }{A number indicating the p-value resulting from a
#' one-sided permutation test based on the number of samples indicated in
#' the original call.}
#' \item{rbarD - }{numeric. The standardized index of association.}
#'     \item{p.rD - }{A number indicating the p-value resulting from a
#' one-sided permutation test based on the number of samples indicated in
#' the original call.}
#' }
#' }
#' \subsection{If there is sampling and valuereturn = TRUE}{
#' A list with the following elements:
#' \itemize{
#' \item{index }{The above vector}
#'     \item{samples }{An s by 2 data frame, where s is the
#' number of samples defined. The columns are for the values of Ia and
#' rbarD, respectively.}
#' }
#' }
#'
#' @note \code{jack.ia()} is deprecated as the name was misleading. Please use
#' \code{resample.ia()}
#' @details The index of association was originally developed by A.H.D. Brown
#'   analyzing population structure of wild barley (Brown, 1980). It has been
#'   widely used as a tool to detect clonal reproduction within populations.
#' Populations whose members are undergoing sexual reproduction, whether it be
#' selfing or out-crossing, will produce gametes via meiosis, and thus have a
#' chance to shuffle alleles in the next generation. Populations whose members
#' are undergoing clonal reproduction, however, generally do so via mitosis.
#' This means that the most likely mechanism for a change in genotype is via
#' mutation. The rate of mutation varies from species to species, but it is
#' rarely sufficiently high to approximate a random shuffling of alleles. The
#' index of association is a calculation based on the ratio of the variance of
#' the raw number of differences between individuals and the sum of those
#'   variances over each locus. You can also think of it as the observed
#' variance over the expected variance. If they are the same, then the index
#' is zero after subtracting one (from Maynard-Smith, 1993): \deqn{I_A =
#' \frac{V_O}{V_E}-1}{Ia = (Vo/Ve) - 1} Since the distance is more or less a binary
#' distance, any sort of marker can be used for this analysis. In the
#' calculation, phase is not considered, and any difference increases the
#' distance between two individuals. Remember that each column represents a
#' different allele and that each entry in the table represents the fraction
#' of the genotype made up by that allele at that locus. Notice also that the
#' sum of the rows all equal one. Poppr uses this to calculate distances by
#' simply taking the sum of the absolute values of the differences between
#' rows.
#'
#' The calculation for the distance between two individuals at a single locus
#' with \emph{a} allelic states and a ploidy of \emph{k} is as follows (except
#' for Presence/Absence data): \deqn{ d = \displaystyle
#' \frac{k}{2}\sum_{i=1}^{a} \mid A_{i} - B_{i}\mid }{d(A,B) = (k/2)*sum(abs(Ai - Bi))}
#'   To find the total number of differences
#'   between two individuals over all loci, you sum \emph{d} over all \emph{m}
#'   loci, a value we'll call \emph{D}:
#'
#' \deqn{D = \displaystyle \sum_{i=1}^{m} d_i }{D = sum(di)}
#'
#' These values are calculated over all possible combinations of individuals
#' in the data set, \eqn{{n \choose 2}}{choose(n, 2)} after which you end up
#' with \eqn{{n \choose 2}\cdot{}m}{choose(n, 2) * m} values of \emph{d} and
#' \eqn{{n \choose 2}}{choose(n, 2)} values of \emph{D}. Calculating the
#' observed variances is fairly straightforward (modified from Agapow and
#' Burt, 2001):
#'
#' \deqn{ V_O = \frac{\displaystyle \sum_{i=1}^{n \choose 2} D_{i}^2 -
#' \frac{(\displaystyle\sum_{i=1}^{n \choose 2} D_{i})^2}{{n \choose 2}}}{{n
#' \choose 2}}}{Vo = var(D)}
#'
#' Calculating the expected variance is the sum of each of the variances of
#' the individual loci. The calculation at a single locus, \emph{j} is the
#' same as the previous equation, substituting values of \emph{D} for
#' \emph{d}:
#'
#' \deqn{ var_j = \frac{\displaystyle \sum_{i=1}^{n \choose 2} d_{i}^2 -
#' \frac{(\displaystyle\sum_{i=1}^{n \choose 2} d_i)^2}{{n \choose 2}}}{{n
#' \choose 2}} }{Varj = var(dj)}
#'
#' The expected variance is then the sum of all the variances over all
#' \emph{m} loci:
#'
#' \deqn{ V_E = \displaystyle \sum_{j=1}^{m} var_j }{Ve = sum(var(dj))}
#'
#' Agapow and Burt showed that \eqn{I_A}{Ia} increases steadily with the
#' number of loci, so they came up with an approximation that is widely used,
#' \eqn{\bar r_d}{rbarD}. For the derivation, see the manual for
#' \emph{multilocus}.
#'
#' \deqn{ \bar r_d = \frac{V_O - V_E} {2\displaystyle
#' \sum_{j=1}^{m}\displaystyle \sum_{k \neq j}^{m}\sqrt{var_j\cdot{}var_k}}
#'   }{rbarD = (Vo - Ve)/(2*sum(sum(sqrt(var(dj)*var(dk)))))}
#'
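#' As an illustrative R sketch (not the package's internal code): given a
#' matrix \code{d} of per-locus distances with choose(n, 2) rows and \emph{m}
#' columns, and \code{D <- rowSums(d)}, the quantities above amount to
#' (the denominator of \eqn{\bar r_d}{rbarD} sums over unordered locus pairs,
#' i.e. j < k):
#'
#' \preformatted{
#' n2    <- nrow(d)                                # choose(n, 2) pairs
#' Vo    <- (sum(D^2) - sum(D)^2/n2)/n2            # observed variance
#' varj  <- (colSums(d^2) - colSums(d)^2/n2)/n2    # per-locus variances
#' Ia    <- Vo/sum(varj) - 1
#' vv    <- outer(varj, varj)                      # var_j * var_k
#' rbarD <- (Vo - sum(varj))/(2 * sum(sqrt(vv[lower.tri(vv)])))
#' }
#'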
#' @references Paul-Michael Agapow and Austin Burt. Indices of multilocus
#' linkage disequilibrium. \emph{Molecular Ecology Notes}, 1(1-2):101-102,
#' 2001
#'
#' A.H.D. Brown, M.W. Feldman, and E. Nevo. Multilocus structure of natural
#' populations of \emph{Hordeum spontaneum}. \emph{Genetics}, 96(2):523-536, 1980.
#'
#' J M Smith, N H Smith, M O'Rourke, and B G Spratt. How clonal are bacteria?
#' Proceedings of the National Academy of Sciences, 90(10):4384-4388, 1993.
#'
#' @seealso \code{\link{poppr}}, \code{\link{missingno}},
#' \code{\link{import2genind}}, \code{\link{read.genalex}},
#' \code{\link{clonecorrect}}, \code{\link{win.ia}}, \code{\link{samp.ia}}
#'
#' @export
#' @rdname ia
#' @author Zhian N. Kamvar
#' @examples
#' data(nancycats)
#' ia(nancycats)
#'
#' # Pairwise over all loci:
#' data(partial_clone)
#' res <- pair.ia(partial_clone)
#' plot(res, low = "black", high = "green", index = "Ia")
#'
#' # Resampling
#' data(Pinf)
#' resample.ia(Pinf, reps = 99)
#'
#' \dontrun{
#'
#' # Pairwise IA with p-values (this will take about a minute)
#' res <- pair.ia(partial_clone, sample = 999)
#' head(res)
#'
#' # Plot the results of resampling rbarD.
#' library("ggplot2")
#' Pinf.resamp <- resample.ia(Pinf, reps = 999)
#' ggplot(Pinf.resamp[2], aes(x = rbarD)) +
#' geom_histogram() +
#' geom_vline(xintercept = ia(Pinf)[2]) +
#' geom_vline(xintercept = ia(clonecorrect(Pinf))[2], linetype = 2) +
#' xlab(expression(bar(r)[d]))
#'
#' # Get the indices back and plot the distributions.
#' nansamp <- ia(nancycats, sample = 999, valuereturn = TRUE)
#'
#' plot(nansamp, index = "Ia")
#' plot(nansamp, index = "rbarD")
#'
#' # You can also adjust the parameters for how large to display the text
#' # so that it's easier to export it for publication/presentations.
#' library("ggplot2")
#' plot(nansamp, labsize = 5, linesize = 2) +
#' theme_bw() + # adding a theme
#' theme(text = element_text(size = rel(5))) + # changing text size
#' theme(plot.title = element_text(size = rel(4))) + # changing title size
#' ggtitle("Index of Association of nancycats") # adding a new title
#'
#' # Get the index for each population.
#' lapply(seppop(nancycats), ia)
#' # With sampling
#' lapply(seppop(nancycats), ia, sample = 999)
#'
#' # Plot pairwise ia for all populations in a grid with cowplot
#' # Set up the library and data
#' library("cowplot")
#' data(monpop)
#' splitStrata(monpop) <- ~Tree/Year/Symptom
#' setPop(monpop) <- ~Tree
#'
#' # Need to set up a list in which to store the plots.
#' plotlist <- vector(mode = "list", length = nPop(monpop))
#' names(plotlist) <- popNames(monpop)
#'
#' # Loop through the populations, calculate pairwise ia, plot, and then
#' # capture the plot in the list
#' for (i in popNames(monpop)){
#' x <- pair.ia(monpop[pop = i], limits = c(-0.15, 1)) # subset, calculate, and plot
#' plotlist[[i]] <- ggplot2::last_plot() # save the last plot
#' }
#'
#' # Use the plot_grid function to plot.
#' plot_grid(plotlist = plotlist, labels = paste("Tree", popNames(monpop)))
#'
#' }
#==============================================================================#
ia <- function(gid, sample = 0, method = 1, quiet = FALSE, missing = "ignore",
plot = TRUE, hist = TRUE, index = "rbarD", valuereturn = FALSE){
namelist <- list(population = ifelse(nPop(gid) > 1 | is.null(gid@pop),
"Total", popNames(gid)),
File = as.character(match.call()[2])
)
hist <- plot
popx <- gid
missing <- toupper(missing)
type <- gid@type
quiet <- should_poppr_be_quiet(quiet)
if (type == "PA"){
.Ia.Rd <- .PA.Ia.Rd
} else {
popx <- seploc(popx)
}
# if there are less than three individuals in the population, the calculation
# does not proceed.
if (nInd(gid) < 3){
IarD <- stats::setNames(as.numeric(c(NA, NA)), c("Ia", "rbarD"))
if (sample == 0){
return(IarD)
} else {
IarD <- stats::setNames(as.numeric(rep(NA, 4)), c("Ia","p.Ia","rbarD","p.rD"))
return(IarD)
}
}
IarD <- .Ia.Rd(popx, missing)
names(IarD) <- c("Ia", "rbarD")
# no sampling, it will simply return two named numbers.
if (sample == 0){
Iout <- IarD
result <- NULL
} else {
# sampling will perform the iterations and then return a data frame indicating
# the population, index, observed value, and p-value. It will also produce a
# histogram.
Iout <- NULL
# idx <- data.frame(Index = names(IarD))
if (quiet) {
oh <- progressr::handlers()
on.exit(progressr::handlers(oh))
progressr::handlers("void")
}
progressr::with_progress({
samp <- .sampling(
popx, sample, missing, quiet = quiet, type = type, method = method
)
})
p.val <- sum(IarD[1] <= c(samp$Ia, IarD[1]))/(sample + 1)
p.val[2] <- sum(IarD[2] <= c(samp$rbarD, IarD[2]))/(sample + 1)
if (hist == TRUE){
the_plot <- poppr.plot(samp, observed = IarD, pop = namelist$population,
index = index, file = namelist$File, pval = p.val, N = nrow(gid@tab)
)
print(the_plot)
}
result <- stats::setNames(vector(mode = "numeric", length = 4),
c("Ia","p.Ia","rbarD","p.rD"))
result[c(1, 3)] <- IarD
result[c(2, 4)] <- p.val
if (valuereturn == TRUE){
iaobj <- list(index = final(Iout, result), samples = samp)
class(iaobj) <- "ialist"
return(iaobj)
}
}
return(final(Iout, result))
}
#==============================================================================#
#' @rdname ia
#' @param low (for pair.ia) a color to use for low values when \code{plot =
#' TRUE}
#' @param high (for pair.ia) a color to use for high values when \code{plot =
#'   TRUE}
#' @param limits (for pair.ia) the limits to be used for the color scale.
#' Defaults to \code{NULL}. If you want to use a custom range, supply two
#' numbers between -1 and 1, (e.g. \code{limits = c(-0.15, 1)})
#' @export
#==============================================================================#
pair.ia <- function(gid, sample = 0L, quiet = FALSE, plot = TRUE, low = "blue",
high = "red", limits = NULL, index = "rbarD", method = 1L){
N <- nInd(gid)
numLoci <- nLoc(gid)
lnames <- locNames(gid)
np <- choose(N, 2)
nploci <- choose(numLoci, 2)
shuffle <- sample > 0L
# quiet <- should_poppr_be_quiet(quiet)
# QUIET <- if (shuffle) TRUE else quiet
if (quiet) {
oh <- progressr::handlers()
on.exit(progressr::handlers(oh))
progressr::handlers("void")
}
progressr::with_progress({
p <- make_progress((1 + sample) * nploci, 50)
res <- pair_ia_internal(gid, N, numLoci, lnames, np, nploci, p, sample = 0)
if (shuffle) {
# Initialize with 1 to account for the observed data.
counts <- matrix(1L, nrow = nrow(res), ncol = ncol(res))
for (i in seq_len(sample)) {
tmp <- shufflepop(gid, method = method)
tmpres <- pair_ia_internal(tmp, N, numLoci, lnames, np, nploci, p, i)
counts <- counts + as.integer(tmpres >= res)
}
p <- counts/(sample + 1)
res <- cbind(Ia = res[, 1],
p.Ia = p[, 1],
rbarD = res[, 2],
p.rD = p[, 2])
}
})
class(res) <- c("pairia", "matrix")
if (plot) {
tryCatch(plot(res, index = index, low = low, high = high, limits = limits),
error = function(e) e)
}
res
}
pair_ia_internal <- function(gid, N, numLoci, lnames, np, nploci, p, sample = NULL) {
# Calculate pairwise distances for each locus. This will be a matrix of
# np rows and numLoci columns.
if (gid@type == "codom") {
V <- pair_matrix(seploc(gid), numLoci, np)
} else { # P/A case
V <- apply(tab(gid), 2, function(x) as.vector(dist(x)))
# checking for missing data and imputing the comparison to zero.
if (any(is.na(V))) {
V[which(is.na(V))] <- 0
}
}
colnames(V) <- lnames
# calculate I_A and \bar{r}_d for each combination of loci
loci_pairs <- combn(lnames, 2)
ia_pairs <- matrix(NA_real_, nrow = 2, ncol = nploci)
for (i in seq(nploci)) {
if ((nploci * sample + i) %% p$step == 0) p$rog()
the_pair <- loci_pairs[, i, drop = TRUE]
newV <- V[, the_pair, drop = FALSE]
ia_pairs[, i] <- ia_from_d_and_D(
V = list(
d.vector = colSums(newV),
d2.vector = colSums(newV * newV),
D.vector = rowSums(newV)
),
np = np
)
}
colnames(ia_pairs) <- apply(loci_pairs, 2, paste, collapse = ":")
rownames(ia_pairs) <- c("Ia", "rbarD")
ia_pairs <- t(ia_pairs)
ia_pairs
}
#==============================================================================#
#' Create a table of summary statistics per locus.
#'
#' @param x a [genind-class] or [genclone-class]
#' object.
#'
#' @param index Which diversity index to use. Choices are \itemize{ \item
#' `"simpson"` (Default) to give Simpson's index \item `"shannon"`
#' to give the Shannon-Wiener index \item `"invsimpson"` to give the
#'   Inverse Simpson's index, a.k.a. the Stoddard and Taylor index.}
#'
#' @param lev At what level do you want to analyze diversity? Choices are
#' `"allele"` (Default) or `"genotype"`.
#'
#' @param population Select the populations to be analyzed. This is the
#' parameter `sublist` passed on to the function [popsub()].
#' Defaults to `"ALL"`.
#'
#' @param information When `TRUE` (Default), this will print out a header
#' of information to the R console.
#'
#' @return a table with 4 columns indicating the Number of alleles/genotypes
#' observed, Diversity index chosen, Nei's 1978 gene diversity (expected
#' heterozygosity), and Evenness.
#'
#' @seealso [vegan::diversity()], [poppr()]
#' @md
#'
#' @note The calculation of `Hexp` is \eqn{\frac{n}{n-1}\left(1 - \sum_{i =
#'   1}^k{p^{2}_{i}}\right)}{(n/(n - 1))*(1 - sum(p^2))} where p represents the
#'   allele frequencies at a given locus and n is the number of observed alleles
#'   (Nei, 1978); this is computed at each locus and then averaged. Caution should be
#' exercised in interpreting the results of Hexp with polyploid organisms with
#' ambiguous ploidy. The lack of allelic dosage information will cause rare
#' alleles to be over-represented and artificially inflate the index. This is
#' especially true with small sample sizes.
#'
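#' As a sketch, for a single locus with allele frequency vector \code{p}
#' estimated from \code{n} observed alleles, the per-locus term is:
#'
#' \preformatted{
#' hexp_locus <- (n/(n - 1)) * (1 - sum(p^2))
#' }
#'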
#' If `lev = "genotype"`, then all statistics reflect **genotypic** diversity
#' within each locus. This includes the calculation for `Hexp`, which turns
#' into the unbiased Simpson's index.
#'
#' @author Zhian N. Kamvar
#'
#' @references
#' Jari Oksanen, F. Guillaume Blanchet, Roeland Kindt, Pierre Legendre, Peter
#' R. Minchin, R. B. O'Hara, Gavin L. Simpson, Peter Solymos, M. Henry H.
#' Stevens, and Helene Wagner. vegan: Community Ecology Package, 2012. R
#' package version 2.0-5.
#'
#'   Niklaus J. Grünwald, Stephen B. Goodwin, Michael G. Milgroom, and William
#' E. Fry. Analysis of genotypic diversity data for populations of
#' microorganisms. Phytopathology, 93(6):738-46, 2003
#'
#' J.A. Ludwig and J.F. Reynolds. Statistical Ecology. A Primer on Methods and
#' Computing. New York USA: John Wiley and Sons, 1988.
#'
#' E.C. Pielou. Ecological Diversity. Wiley, 1975.
#'
#' J.A. Stoddart and J.F. Taylor. Genotypic diversity: estimation and
#' prediction in samples. Genetics, 118(4):705-11, 1988.
#'
#' Masatoshi Nei. Estimation of average heterozygosity and genetic distance
#' from a small number of individuals. Genetics, 89(3):583-590, 1978.
#'
#' Claude Elwood Shannon. A mathematical theory of communication. Bell Systems
#' Technical Journal, 27:379-423,623-656, 1948
#'
#' @export
#' @examples
#'
#' data(nancycats)
#' locus_table(nancycats[pop = 5])
#' \dontrun{
#' # Analyze locus statistics for the North American population of P. infestans.
#' # Note that due to the unknown dosage of alleles, many of these statistics
#' # will be artificially inflated for polyploids.
#' data(Pinf)
#' locus_table(Pinf, population = "North America")
#' }
#==============================================================================#
locus_table <- function(x, index = "simpson", lev = "allele",
population = "ALL", information = TRUE){
ploid <- unique(ploidy(x))
type <- x@type
INDICES <- c("shannon", "simpson", "invsimpson")
index <- match.arg(index, INDICES)
x <- popsub(x, population, drop = FALSE)
x.loc <- summary(as.loci(x))
outmat <- vapply(x.loc, locus_table_pegas, numeric(4), index, lev, ploid, type)
loci <- colnames(outmat)
divs <- rownames(outmat)
res <- matrix(0.0, nrow = ncol(outmat) + 1, ncol = nrow(outmat))
dimlist <- list(`locus` = c(loci, "mean"), `summary` = divs)
res[-nrow(res), ] <- t(outmat)
res[nrow(res), ] <- colMeans(res[-nrow(res), ], na.rm = TRUE)
attr(res, "dimnames") <- dimlist
if (information){
if (index == "simpson"){
msg <- "Simpson index"
} else if (index == "shannon"){
msg <- "Shannon-Wiener index"
} else {
msg <- "Stoddard and Taylor index"
}
message("\n", divs[1], " = Number of observed ", paste0(divs[1], "s"), appendLF = FALSE)
message("\n", divs[2], " = ", msg, appendLF = FALSE)
message("\n", divs[3], " = Nei's 1978 gene diversity\n", appendLF = FALSE)
message("------------------------------------------\n", appendLF = FALSE)
}
class(res) <- c("locustable", "matrix")
return(res)
}
#==============================================================================#
#' Tabulate alleles that occur in only one population.
#'
#' @param gid a [genind-class] or [genclone-class]
#' object.
#'
#' @param form a [formula()] giving the levels of markers and
#' hierarchy to analyze. See Details.
#'
#' @param report one of `"table", "vector",` or `"data.frame"`. Tables
#' (Default) and data frame will report counts along with populations or
#' individuals. Vectors will simply report which populations or individuals
#' contain private alleles. Tables are matrices with populations or
#' individuals in rows and alleles in columns. Data frames are long form.
#'
#' @param level one of `"population"` (Default) or `"individual"`.
#'
#' @param count.alleles `logical`. If `TRUE` (Default), the report
#' will return the observed number of alleles private to each population. If
#' `FALSE`, each private allele will be counted once, regardless of
#' dosage.
#'
#' @param drop `logical`. if `TRUE`, populations/individuals without
#' private alleles will be dropped from the result. Defaults to `FALSE`.
#'
#' @return a matrix, data.frame, or vector defining the populations or
#' individuals containing private alleles. If vector is chosen, alleles are
#' not defined.
#'
#' @details The argument `form` allows for control over the strata at which
#'   private alleles should be computed. It takes a formula where the left hand
#'   side can be either "allele", "locus", or "loci". The right hand
#'   side, by default, is ".". If you change it, it must
#' correspond to strata located in the [adegenet::strata()] slot.
#' Note, that the right hand side is disabled for genpop objects.
#'
#' @export
#' @author Zhian N. Kamvar
#' @md
#' @examples
#'
#' data(Pinf) # Load P. infestans data.
#' private_alleles(Pinf)
#'
#' \dontrun{
#' # Analyze private alleles based on the country of interest:
#' private_alleles(Pinf, alleles ~ Country)
#'
#' # Number of observed alleles per locus
#' private_alleles(Pinf, locus ~ Country, count.alleles = TRUE)
#'
#' # Get raw number of private alleles per locus.
#' (pal <- private_alleles(Pinf, locus ~ Country, count.alleles = FALSE))
#'
#' # Get percentages.
#' sweep(pal, 2, nAll(Pinf)[colnames(pal)], FUN = "/")
#'
#' # An example of how these data can be displayed.
#' library("ggplot2")
#' Pinfpriv <- private_alleles(Pinf, report = "data.frame")
#' ggplot(Pinfpriv) + geom_tile(aes(x = population, y = allele, fill = count))
#' }
#==============================================================================#
private_alleles <- function(gid, form = alleles ~ ., report = "table",
level = "population", count.alleles = TRUE,
drop = FALSE){
REPORTARGS <- c("table", "vector", "data.frame")
LEVELARGS <- c("individual", "population")
LHS_ARGS <- c("alleles", "locus", "loci")
showform <- utils::capture.output(print(form))
marker <- pmatch(as.character(form[[2]]), LHS_ARGS, nomatch = 0L,
duplicates.ok = FALSE)
if (all(marker == 0L)){
stop("Left hand side of ", showform, " must be one of:\n ",
paste(LHS_ARGS, collapse = " "))
} else {
marker <- LHS_ARGS[marker]
}
strataform <- form[c(1, 3)]
the_strata <- all.vars(strataform[[2]])
if (length(the_strata) > 1 || the_strata[1] != "."){
if (!is.genpop(gid)){
setPop(gid) <- strataform
} else {
warning("cannot set strata for a genpop object.")
}
}
report <- match.arg(report, REPORTARGS)
level <- match.arg(level, LEVELARGS)
if (!is.genind(gid) & !is.genpop(gid)){
stop(paste(gid, "is not a genind or genpop object."))
}
if (is.genind(gid) && !is.null(pop(gid)) | is.genpop(gid) && nPop(gid) > 1){
if (is.genind(gid)){
gid.pop <- tab(genind2genpop(gid, quiet = TRUE))
} else {
gid.pop <- tab(gid)
}
private_columns <- colSums(ifelse(gid.pop > 0, 1, 0), na.rm = TRUE) < 2
privates <- gid.pop[, private_columns, drop = FALSE]
if (level == "individual" & is.genind(gid)){
gid.tab <- tab(gid)
privates <- gid.tab[, private_columns, drop = FALSE]
} else if (!count.alleles){
privates <- ifelse(privates > 0, 1, 0)
}
if (drop){
privates <- privates[rowSums(privates, na.rm = TRUE) > 0, , drop = FALSE]
}
if (marker != "alleles"){
private_fac <- locFac(gid)[private_columns]
privates <- vapply(unique(private_fac), function(l){
rowSums(privates[, private_fac == l, drop = FALSE], na.rm = TRUE)
}, FUN.VALUE = numeric(nrow(privates))
)
colnames(privates) <- locNames(gid)[unique(private_fac)]
}
if (length(privates) == 0){
privates <- NULL
cat("No private alleles detected.")
return(invisible(NULL))
}
if (report == "vector"){
privates <- rownames(privates)
} else if (report == "data.frame"){
marker <- if (marker == "alleles") "allele" else "locus"
names(dimnames(privates)) <- c(level, marker)
privates <- as.data.frame.table(privates,
responseName = "count",
stringsAsFactors = FALSE)
}
return(privates)
} else {
stop("There are no populations detected")
}
}
|
c DCNF-Autarky [version 0.0.1].
c Copyright (c) 2018-2019 Swansea University.
c
c Input Clause Count: 81612
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 81604
c
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 81604
c
c Input Parameter (command line, file):
c input filename QBFLIB/Wintersteiger/LinearBitvectorRankingFunction/filesys_fastfat_allocsup.c.qdimacs
c output filename /tmp/dcnfAutarky.dimacs
c autarky level 1
c conformity level 0
c encoding type 2
c no.of var 24102
c no.of clauses 81612
c no.of taut cls 0
c
c Output Parameters:
c remaining no.of clauses 81604
c
c QBFLIB/Wintersteiger/LinearBitvectorRankingFunction/filesys_fastfat_allocsup.c.qdimacs 24102 81612 E1 [549 12351] 0 96 24003 81604 RED
| /code/dcnf-ankit-optimized/Results/QBFLIB-2018/E1/Experiments/Wintersteiger/LinearBitvectorRankingFunction/filesys_fastfat_allocsup.c/filesys_fastfat_allocsup.c.R | no_license | arey0pushpa/dcnf-autarky | R | false | false | 808 | r |
|
#!/usr/bin/env Rscript
" Generate a list of contigs to filter from the second assembly iteration
Usage:
find_contam_contigs.R BLOBTABLE
" -> doc
# This repeats the targets from the first filter, namely:
# 1. Contigs with high GC (> 0.6)
# 2. Contigs with bacterial and plant/algae taxonomic assignments.
# 3. Contigs with low coverage (< 10)
#
# Some of the above criteria were refined slightly from the first filtration
# step.
#
# In addition the following are also targeted:
# 1. Contigs with coverage < 5 in any single sample
# 2. Contigs from all fungal phyla except microsporidia
library(dplyr)
source("lib/R/blobtools_util.R")
main <- function(blobtable) {
exclusions1 <- filter(blobtable,
superkingdom.t %in% c("Bacteria", "Viruses", "Archaea") |
grepl("ophyta", phylum.t) |
phylum.t == "Eustigmatophyceae" |
grepl("mycota", phylum.t) |
GC > 0.6 |
cov_sum < 10 |
order.t == "Primates" |
order.t == "Rodentia" |
order.t == "Carnivora") %>%
select(name)
exclusions2 <- filter_at(blobtable,
vars(starts_with("cov")), any_vars(. < 5)) %>%
select(name)
exclusions <- c(noquote(unlist(exclusions1)), noquote(unlist(exclusions2)))
exclusions <- unique(exclusions)
exclusions_string <- paste(exclusions, collapse = "\n")
write(exclusions_string, stdout())
}
if (!interactive()) {
blobtable_file <- commandArgs(trailingOnly = TRUE)
blobtable <- read_blobtable(blobtable_file)
main(blobtable)
}
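
# Example invocation (hypothetical file names), matching the usage string at
# the top of this script:
#   Rscript find_contam_contigs.R assembly2.blobDB.table.txt > contigs_to_filter.txt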
| /pipe/scripts/find_contam_contigs2.R | permissive | EddieKHHo/megadaph | R | false | false | 1,494 | r | #!/usr/bin/env Rscript
" Generate a list of contigs to filter from the second assembly iteration
Usage:
find_contam_contigs.R BLOBTABLE
" -> doc
# This repeats the the targets from the first filter, namely:
# 1. Contigs with high GC (> 0.6)
# 2. Contigs with bacterial and plant/algae taxonomic assignments.
# 3. Contigs with low coverage (< 10)
#
# Some of the above criteria were refined slightly from the first filtration
# step.
#
# In addition the following are also targeted:
# 1. Contigs with coverage < 5 in any single sample
# 2. Contigs from fungal all fungal phyla except microsporidia
library(dplyr)
source("lib/R/blobtools_util.R")
main <- function(blobtable) {
exclusions1 <- filter(blobtable,
superkingdom.t %in% c("Bacteria", "Viruses", "Archaea") |
grepl("ophyta", phylum.t) |
phylum.t == "Eustigmatophyceae" |
grepl("mycota", phylum.t) |
GC > 0.6 |
cov_sum < 10 |
order.t == "Primates" |
order.t == "Rodentia" |
order.t == "Carnivora") %>%
select(name)
exclusions2 <- filter_at(blobtable,
vars(starts_with("cov")), any_vars(. < 5)) %>%
select(name)
exclusions <- c(noquote(unlist(exclusions1)), noquote(unlist(exclusions2)))
exclusions <- unique(exclusions)
exclusions_string <- paste(exclusions, collapse = "\n")
write(exclusions_string, stdout())
}
if (!interactive()) {
blobtable_file <- commandArgs(trailingOnly = TRUE)
blobtable <- read_blobtable(blobtable_file)
main(blobtable)
}
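
A toy illustration (made-up data, not from the pipeline) of the `filter_at()`/`any_vars()` pattern used above to flag contigs with low coverage in any single sample:

```r
library(dplyr)
toy <- data.frame(name   = c("c1", "c2", "c3"),
                  cov_s1 = c(12, 3, 20),
                  cov_s2 = c(15, 18, 4))
# Flag contigs whose coverage drops below 5 in ANY single sample:
filter_at(toy, vars(starts_with("cov")), any_vars(. < 5))
# -> c2 (cov_s1 = 3) and c3 (cov_s2 = 4) are selected for exclusion
```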
"writeTiff" <-
function(pixmap, fn) {
  if (inherits(pixmap, "pixmapRGB")) {
    .Call("C_writeTiff", pixmap@red, pixmap@green, pixmap@blue, fn)
  } else if (is.matrix(pixmap)) {
    # Promote a greyscale matrix to RGB by replicating it into all three channels.
    # Note: class(x) == "matrix" breaks in R >= 4.0.0, where class() on a matrix
    # returns c("matrix", "array"); is.matrix() is the robust test.
    pixmap <- newPixmapRGB(pixmap, pixmap, pixmap)
    .Call("C_writeTiff", pixmap@red, pixmap@green, pixmap@blue, fn)
  } else {
    stop(paste("writeTiff expects a pixmapRGB or matrix, got", class(pixmap)[1]))
}
gc();
return();
}
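
A minimal usage sketch for the function above (assumes the rtiff package and its pixmap dependency are installed; the filename is illustrative):

```r
library(rtiff)
# A greyscale image as a plain matrix with intensities in [0, 1];
# writeTiff promotes it to RGB by copying the matrix into each channel.
m <- matrix(runif(64), nrow = 8)
writeTiff(m, "grey.tif")
```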
| /R/writeTiff.R | no_license | VanAndelInstitute/rtiff | R | false | false | 459 | r |
?read.csv()
# Method 1: Select the File Manually
stats <- read.csv(file.choose())
stats
# Method 2: Set Working Directory (WD) and read data
getwd()
setwd("/Users/kimberlyzhu/documents/R Programming")
getwd()
rm(stats) #remove stats
stats <- read.csv("DemographicData.csv")
#----- Explore data
stats
nrow(stats) # number of rows
ncol(stats) #number of cols
head(stats) #gives you top 6 rows
tail(stats, n = 10) #set n for how many from bottom
str(stats) #str = structure
# runif = random variable distributed uniformly
summary(stats)
#--- using the $sign
stats
head(stats)
stats[3,3]
stats[3, "Birth.rate"]
stats$Internet.users # $ works for data frames, NOT matrices
# returns a vector of the given column
#above is same as below
stats[,"Internet.users"]
stats$Internet.users[2]
levels(stats$Income.Group)
levels(stats[,"Income.Group"])
#------- Basic Operations with a DF
stats[1:10,]
stats[c(4,100),]
#how does [] work:
stats[1,] #still a dataframe!
is.data.frame(stats[1,])
stats[,1]
is.data.frame(stats[,1,drop = F])
#multiply columns
head(stats)
stats$Birth.rate * stats$Internet.users
#add column
stats$MyCalc <- stats$Birth.rate * stats$Internet.users
head(stats)
# if the vector is shorter than the number of rows, it gets recycled
stats$xyz <- 1:5
head(stats)
stats$xyz <- NULL
stats$MyCalc <- NULL
#-------------------- Filtering Data Frames
stats$Internet.users < 2 #gives a vector of true/false
filter <- stats$Internet.users < 2
stats[filter,] #only displays rows that are true
stats[stats$Birth.rate >40,]
stats[stats$Birth.rate >40 & stats$Internet.users <2,]
stats[stats$Income.Group == "High income",]
stats[stats$Country.Name == "Malta",] #just malta row
#------------------- qplot()
install.packages("ggplot2")
library(ggplot2)
?qplot
qplot(data = stats, x = Internet.users)
qplot(data = stats, x = Income.Group, y = Birth.rate, size = I(3), color = I("red"))
qplot(data = stats, x = Income.Group, y = Birth.rate, geom = "boxplot")
#----------------- Visualizing what we need
qplot(data = stats, x = Internet.users, y = Birth.rate)
qplot(data = stats, x = Internet.users, y = Birth.rate, size = I(4),
color = I("blue"))
qplot(data = stats, x = Internet.users, y = Birth.rate, size = I(4),
color = Income.Group)
#-------------- Creating Data Frames
mydf <- data.frame(Countries_2012_Dataset, Codes_2012_Dataset, Regions_2012_Dataset)
head(mydf)
colnames(mydf) <- c("country", "codes", "regions")
# the above is the same as below
mydf <- data.frame(country = Countries_2012_Dataset, code = Codes_2012_Dataset,
region = Regions_2012_Dataset)
head(mydf)
tail(mydf)
#-------------Merging dataframes
head(stats)
head(mydf)
merged.data.frame <- merge(stats, mydf, by.x = "Country.Code", by.y = "code")
merged.data.frame$country <- NULL
ncol(merged.data.frame)
merged.data.frame
#-------Visualizing With new split
#SHAPES!
qplot(data = merged.data.frame, x = Internet.users, y = Birth.rate,
color = region, size = I(5), shape = I(23))
#Transparency
qplot(data = merged.data.frame, x = Internet.users, y = Birth.rate,
color = region, size = I(5), shape = I(19), alpha = I(0.6),
main = "Birth Rate vs Internet Users")
| /Demographic data .R | no_license | kimberlyzhu/R-projects | R | false | false | 3,182 | r |
#' Estimates the background model
#'
#' This function reads a DNA sequence from a given fasta file
#' and uses that sequence to estimate an order-m Markov model.
#'
#'
#' @param file Fasta-filename.
#' @param order Order of the Markov models that shall be used as the
#' background model. Default: order=1.
#'
#' @return None
#'
#' @examples
#'
#' # Estimate first order Markov model based on the sequence provided
#' # in seq.fasta
#'
#' file=system.file("extdata","seq.fasta", package="mdist")
#' readBackground(file,1)
#'
#' @export
readBackground=function(file, order=1) {
nseq=numSequences(file)
lseq=lenSequences(file)
dummy=.C("mdist_makebg", as.character(file), as.integer(order),
as.integer(nseq),as.integer(lseq),PACKAGE="mdist")
}
#' Fetch background model into R vectors
#'
#' This function returns the parameters of the current background model
#' as an R list.
#'
#' @return A list containing the stationary
#' and the transition probabilities, respectively:
#' \describe{
#' \item{stat}{Stationary distribution}
#' \item{trans}{Transition probabilities}
#' }
#'
#' @examples
#'
#' # Estimate first order Markov model based on the sequence provided
#' # in seq.fasta
#'
#' file=system.file("extdata","seq.fasta", package="mdist")
#' readBackground(file,1)
#' fetchBackground()
#'
#' @export
fetchBackground=function() {
stat=.Call("mdist_fetchStationBackground",PACKAGE="mdist");
trans=.Call("mdist_fetchTransBackground",PACKAGE="mdist");
return(list(stat=stat,trans=trans))
}
#' Prints the current background model
#'
#' This function prints the currently loaded background model.
#' The function is primarily used for debugging and testing.
#'
#' @return None
#'
#' @examples
#'
#' # Estimate first order Markov model based on the sequence provided
#' # in seq.fasta
#'
#' seqfile=system.file("extdata","seq.fasta", package="mdist")
#' readBackground(file=seqfile,1)
#' printBackground()
#'
#' @export
printBackground=function() {
dummy=.C("mdist_printBackground",PACKAGE="mdist");
}
#' Delete background model
#'
#' This function unloads the current background model and frees its allocated
#' memory.
#'
#' @return None
#'
#' @examples
#'
#' # Estimate first order Markov model based on the sequence provided
#' # in seq.fasta
#'
#' seqfile=system.file("extdata","seq.fasta", package="mdist")
#' readBackground(file=seqfile,1)
#' deleteBackground()
#'
#'
#' @export
deleteBackground=function() {
dummy=.C("mdist_deleteBackground",PACKAGE="mdist")
}
#' Estimates the background model for sampling
#'
#' This function reads a DNA sequence from a given fasta file
#' and uses that sequence to estimate an order-m Markov model.
#' \strong{Note}: This function is only used for generating the random DNA
#' sequence that is used for computing the empirical
#' distribution. When using \code{\link{compoundPoissonDist}},
#' \code{\link{combinatorialDist}} or \code{\link{motifEnrichmentTest}},
#' this function is not relevant. Instead, consult
#' \code{\link{readBackground}}.
#'
#'
#' @param file Fasta-filename.
#' @param order Order of the Markov models that shall be used as the
#' background model. Default: order=1.
#'
#' @return None
#'
#' @seealso \code{\link{readBackground}}
#' @examples
#'
#' # Estimate first order Markov model based on the sequence provided
#' # in seq.fasta
#'
#' file=system.file("extdata","seq.fasta", package="mdist")
#' readBackgroundForSampling(file,1)
#'
#'
#' @export
readBackgroundForSampling=function(file, order=1) {
nseq=numSequences(file)
lseq=lenSequences(file)
dummy=.C("mdist_makebgForSampling", as.character(file), as.integer(order),
as.integer(nseq),as.integer(lseq),PACKAGE="mdist")
}
#' Prints the current background model for sampling
#'
#' Similar to \code{\link{printBackground}}, but
#' prints parameters that were acquired using
#' \code{\link{readBackgroundForSampling}}.
#'
#'
#' @return None
#'
#'
#' @examples
#'
#' # Estimate first order Markov model based on the sequence provided
#' # in seq.fasta
#'
#' seqfile=system.file("extdata","seq.fasta", package="mdist")
#' readBackgroundForSampling(file=seqfile,1)
#' printBackgroundForSampling()
#'
#'
#' @seealso \code{\link{printBackground}}
#' @export
printBackgroundForSampling=function() {
dummy=.C("mdist_printBackgroundForSampling",PACKAGE="mdist");
}
#' Delete the current background model for sampling
#'
#' Similar to \code{\link{deleteBackground}}, but
#' deletes parameters that were acquired using
#' \code{\link{readBackgroundForSampling}}.
#'
#'
#' @return None
#'
#'
#' @examples
#'
#' # Estimate first order Markov model based on the sequence provided
#' # in seq.fasta
#'
#' seqfile=system.file("extdata","seq.fasta", package="mdist")
#' readBackgroundForSampling(file=seqfile,1)
#' deleteBackgroundForSampling()
#'
#' @seealso \code{\link{deleteBackground}}
#' @export
deleteBackgroundForSampling=function() {
dummy=.C("mdist_deleteBackgroundForSampling",PACKAGE="mdist")
}
| /R/background_wrapper.R | no_license | wkopp/mdist | R | false | false | 5,089 | r |
# file name has a number in front of it so it is loaded before the other files (import-resource) that use it
#' @import httr
#' @title Helper for functions making GET request to the Premier League API.
#'
#' @description
#' \code{get_premier_league} is a helper for functions making GET request to the Premier League API.
#'
#' @return
#' Response including status code, body, ...
#'
#' @param parameters list of parameters
#' @param resource type of resource you want to get
#' @param timeout_in_sec after this amount of time a time-out error is thrown
#' @param wait_in_sec idle time in seconds before actual request (as a matter of decency it's common not to flood the server with requests)
#'
#' @details
#' Function not called directly.
get_premier_league <- function(resource, parameters, timeout_in_sec = 5, wait_in_sec = 1) {
# wait
Sys.sleep(wait_in_sec)
# url
premier_league_url <- "https://footballapi.pulselive.com/football"
resource_url <- paste(premier_league_url, resource, sep = "/")
# headers
headers <- c(Origin = "https://www.premierleague.com")
# request
response <- GET(resource_url,
query = parameters,
add_headers(.headers = headers),
timeout(timeout_in_sec))
return(response)
}
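
A hypothetical call for illustration; the `teams` resource name and the `pageSize` parameter are assumptions about the Premier League API, not taken from this file:

```r
library(httr)
resp <- get_premier_league("teams", parameters = list(pageSize = 10))
status_code(resp)  # 200 on success
```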
| /R/1_import.R | no_license | IsaacVerm/penalty | R | false | false | 1,300 | r |
trim.trailing <- function (x) sub("\\s+$", "", x)
trim <- function (x) gsub("^\\s+|\\s+$", "", x)
get.data<-function(){
#Helper Functions
data_dir<<-'UCI HAR Dataset/'
train_dir<<-'~/repo/datasciencecoursera/3_gettingdata/week3/class_project/UCI HAR Dataset/train/'
test_dir<<-'~/repo/datasciencecoursera/3_gettingdata/week3/class_project/UCI HAR Dataset/test/'
#URLS
# amcom_url<-'https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Fss06hid.csv'
desc_url<-'http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones'
data_url<-'https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip'
#Downloads and Extractions
#ifelse(file.exists('amcomdata.csv'),identity,download.file(amcom_url,'amcomdata.csv', method = 'curl'))
ifelse(file.exists('data.zip'),identity,download.file(data_url,'data.zip', method = 'curl'))
ifelse(file.exists('UCI HAR Dataset/'),identity,unzip('data.zip'))
#File reads
###### amcom<<-read.csv('amcomdata.csv', stringsAsFactors=FALSE)
###### ins<<-readJPEG('ins.jpeg', native = TRUE)
###### edu<<-read.csv('edu.csv',stringsAsFactors=FALSE, na.strings='')
x_names_file<<-'~/repo/datasciencecoursera/3_gettingdata/week3/class_project/UCI HAR Dataset/features.txt'
train_files<<-paste(train_dir,list.files(train_dir), sep = '')
test_files<<-paste(test_dir,list.files(test_dir), sep = '')
### detail data
x_names<<-read.table(x_names_file)
### train data
y_train_file<-'~/repo/datasciencecoursera/3_gettingdata/week3/class_project/UCI HAR Dataset/train/y_train.txt'
x_train_file<-'~/repo/datasciencecoursera/3_gettingdata/week3/class_project/UCI HAR Dataset/train/X_train.txt'
y_train_data<<-read.table(y_train_file)
x_train_data<<-read.table(x_train_file)
### test data
y_test_file<-'~/repo/datasciencecoursera/3_gettingdata/week3/class_project/UCI HAR Dataset/test/y_test.txt'
x_test_file<-'~/repo/datasciencecoursera/3_gettingdata/week3/class_project/UCI HAR Dataset/test/X_test.txt'
y_test_data<<-read.table(y_test_file)
x_test_data<<-read.table(x_test_file)
activities<<-read.table('~/repo/datasciencecoursera/3_gettingdata/week3/class_project/UCI HAR Dataset/activity_labels.txt', stringsAsFactors = FALSE)
#File Cleaning: Subsetting, Naming, NA cleaning operations
###### gdp<<-gdp[c(1,2,4,5)]
###### gdp_titles<-c('code', 'ranking', 'Country', 'gdp')
###### names(gdp)<-gdp_titles
###### gdp<<-gdp[!is.na(gdp$code)&!is.na(gdp$ranking)&!is.na(gdp$Country)&!is.na(gdp$gdp),]
####### naming
names(x_test_data)<-x_names$V2
names(x_train_data)<-x_names$V2
names(activities)<-c('labint','labstr')
###### extract all mean and standard deviation
means<<-x_names[grep('mean', x_names$V2),]
std<<-x_names[grep('std', x_names$V2),]
means_std<<-x_names[grep('std|mean', x_names$V2),]
###### This data is the means and std subsetted out of the full dataframes
xtraindata<<-x_train_data[, means_std$V2]
xtestdata<<-x_test_data[, means_std$V2]
names(xtraindata)<-means_std$V2
names(xtestdata)<-means_std$V2
###### Add the training labels
xtestdata$labint<-y_test_data
xtraindata$labint<-y_train_data
fulldata<-rbind(xtestdata,xtraindata)
means<-c()
for(i in seq(length(fulldata))){
print(i)
n<-fulldata[,i]
means<-c(means,(mean(as.numeric(trim(gsub(fulldata[,i], pattern = ',', replace=''))))))
}
write.table(means,file='means.txt')
#lact<-sapply(seq(NROW(xtestdata$labels)), function(i) activities$V2[activities$V1==xtestdata[i,]$labels[[1]]])
#merge(x=xtestdata, y=lact)
#xall<<-
#File Mergine operations
###### gdp_edu<<-merge(x=edu, y=gdp, by.x='shortcode',by.y='code')
}
get.data()
| /03_gettingdata/week3/run_analysis.R | no_license | minoad/datasciencecoursera | R | false | false | 3,939 | r |
# smt help (environment)
#
#
###############################################################################
## -----------------------------------------------------------------------------
## smt Roxygen help
##' @name smt
##' @aliases smt
##' @title smt-simple parameters for single molecule tracking analysis
##' @rdname smt
##' @docType package
##' @description simple analysis on single molecule tracking data using parameters based on mean square displacement (MSD).
## @usage
## smt()
##' @details smt provides a simple analysis on single molecule tracking data using parameters based on mean square displacement (MSD). Currently includes:
##' - duration of the tracks (dwellTime),
##'
##' - square displacement (squareDisp),
##'
##' - mean square displacement as a function of time (msd),
##'
##' - diffusion coefficient (Dcoef) and
##'
##' - emperical cumulative distribution function (eCDF) of MSD over time.
## @seealso
##' @import ggplot2
##' @import dplyr
##' @import reshape2
## @import gridExtra
## @importFrom reshape2 melt
##' @importFrom scales cbreaks
##' @importFrom mixtools normalmixEM
##' @importFrom mixtools boot.se
##' @importFrom fitdistrplus fitdist
##' @importFrom fitdistrplus denscomp
##' @importFrom nls2 nls2
##' @importFrom minpack.lm nlsLM
##' @importFrom truncnorm rtruncnorm
##' @importFrom mclust mclustBootstrapLRT
##' @importFrom gridExtra grid.arrange
##' @importFrom gridExtra marrangeGrob
##' @importFrom rtiff readTiff
##' @importFrom EBImage readImage
##' @importFrom compiler cmpfun
##' @importFrom compiler enableJIT
##' @importFrom plyr rbind.fill
## dplyr has masked intersect, setdiff, setequal, union from base and other packages, try to use importFrom instead of import package
## @importFrom dplyr summarise group_by select %>%
##
##' @import shiny
##' @import shinyjs
##' @export smtGUI
smt=function(){}
smtGUI=function(){
    appPath=system.file("myapp",package="smt")
    shiny::runApp(appPath)
}
| /R/smt.R | no_license | snjy9182/smt-0.4.0 | R | false | false | 1,966 | r |
print("This file was created within RStudio")
print("And now this file lives on GitHub")
| /TestingR.R | no_license | quatier/datasciencecoursera | R | false | false | 90 | r |
tol <- sqrt(.Machine$double.eps)
# Inputs and Expected Outputs
exp_0_taylor <- 1 / factorial(seq_len(11) - 1)
exp_0_pade32numer <- c(1, 3 / 5, 3 / 20, 1 / 60)
exp_0_pade32denom <- c(1, -2 / 5, 1 / 20)
exp_0_pade32_est <- Pade(3, 2, exp_0_taylor)
exp_0_pade43numer <- c(1, 4 / 7, 1 / 7, 2 / 105, 1 / 840)
exp_0_pade43denom <- c(1, -3 / 7, 1 / 14, -1 / 210)
exp_0_pade43_est <- Pade(4, 3, exp_0_taylor)
log1p_0_taylor <- c(0, 1, -1 / 2, 1 / 3, -1 / 4, 1 / 5, -1 / 6)
log1p_0_pade33numer <- c(0, 1, 1, 11 / 60)
log1p_0_pade33denom <- c(1, 1.5, 0.6, 0.05)
log1p_0_pade33_est <- Pade(3, 3, log1p_0_taylor)
sin_taylor <- c(0, 1 / factorial(1), 0, -1 / factorial(3), 0, 1 / factorial(5),
0, -1 / factorial(7), 0, 1 / factorial(9), 0,
-1 / factorial(11))
sin_pade56numer <- c(0, 1, 0, -2363 / 18183, 0, 12671 / 4363920)
sin_pade56denom <- c(1, 0, 445 / 12122, 0, 601 / 872784, 0, 121 / 16662240)
sin_pade56_est <- Pade(5, 6, sin_taylor)
# Testing Accuracy
expect_equal(exp_0_pade32_est[[1]], exp_0_pade32numer, tolerance = tol)
expect_equal(exp_0_pade32_est[[2]], exp_0_pade32denom, tolerance = tol)
expect_equal(exp_0_pade43_est[[1]], exp_0_pade43numer, tolerance = tol)
expect_equal(exp_0_pade43_est[[2]], exp_0_pade43denom, tolerance = tol)
expect_equal(log1p_0_pade33_est[[1]], log1p_0_pade33numer, tolerance = tol)
expect_equal(log1p_0_pade33_est[[2]], log1p_0_pade33denom, tolerance = tol)
expect_equal(sin_pade56_est[[1]], sin_pade56numer, tolerance = tol)
expect_equal(sin_pade56_est[[2]], sin_pade56denom, tolerance = tol)
# Testing Errors
expect_error(Pade(4, 4, log1p_0_taylor),
"Not enough Taylor series coefficients provided.")
expect_error(Pade(7, 6, sin_taylor),
"Not enough Taylor series coefficients provided.")
expect_error(Pade(4.2, 4, log1p_0_taylor),
"Polynomial orders need to be integers.")
expect_error(Pade(4, 3.4, log1p_0_taylor),
"Polynomial orders need to be integers.")
## Test CITATION
expect_true(any(grepl(packageVersion("Pade"), toBibtex(citation("Pade")),
fixed = TRUE)))
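# Quick numeric sanity check (sketch; uses only the Pade() outputs tested
# above): evaluate an approximant as a ratio of polynomials and compare with
# the target function.
pade_eval <- function(numer, denom, x) {
  sum(numer * x^(seq_along(numer) - 1)) / sum(denom * x^(seq_along(denom) - 1))
}
expect_equal(pade_eval(exp_0_pade32numer, exp_0_pade32denom, 0.1), exp(0.1),
             tolerance = 1e-7)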
| /inst/tinytest/test_Pade.R | no_license | aadler/Pade | R | false | false | 2,109 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/env-special.R
\name{is_installed}
\alias{is_installed}
\title{Is a package installed in the library?}
\usage{
is_installed(pkg)
}
\arguments{
\item{pkg}{The name of a package.}
}
\value{
\code{TRUE} if the package is installed, \code{FALSE} otherwise.
}
\description{
This checks that a package is installed with minimal side effects.
If installed, the package will be loaded but not attached.
}
\examples{
is_installed("utils")
is_installed("ggplot5")
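## A typical guard built on this check (illustrative):
if (is_installed("utils")) utils::head(letters, 3)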
}
| /man/is_installed.Rd | no_license | COMODr/rlang | R | false | true | 533 | rd |
library(twitteR)
library(text2vec)
# create the dictionaries used for the classification
# setwd("...") # set the project directory here (path elided) before sourcing
# load in the function to clean the data
source("tweetCleanFunction.R")
# Build the donald term table (load takes a while!)
load("Donald1323.Rda")
donaldClean <- tweetCleanFunction(donald)
# clean out the workspace
rm(donald)
gc()
prep_fun <- tolower # makes lowercase
tok_fun <- word_tokenizer # look at words
# Create the vocabulary using itoken to iterate directly
donaldIt <- itoken(donaldClean,
preprocessor = prep_fun,
tokenizer = tok_fun,
progressbar = FALSE)
donaldVocab <- create_vocabulary(donaldIt)
# clean out the workspace
rm(donaldClean)
gc()
# Build the hillary term table (load takes a while!)
load("Hillary1323.Rda")
hillaryClean <- tweetCleanFunction(hillary)
# clean out the workspace
rm(hillary)
gc()
prep_fun <- tolower # makes lowercase
tok_fun <- word_tokenizer # look at words
# Create the vocabulary using itoken to iterate directly
hillaryIt <- itoken(hillaryClean,
preprocessor = prep_fun,
tokenizer = tok_fun,
progressbar = FALSE)
hillaryVocab <- create_vocabulary(hillaryIt)
# clean out the workspace
rm(hillaryClean)
gc()
# create an object saving the two vocabularies
tweetVocabs <- list(donaldVocab=donaldVocab,
hillaryVocab=hillaryVocab)
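# The itoken()/create_vocabulary() steps above are identical for both corpora;
# a small helper (sketch using the same text2vec calls as above) would avoid
# the duplication:
buildVocab <- function(cleanTweets) {
  it <- itoken(cleanTweets,
               preprocessor = tolower, # makes lowercase
               tokenizer = word_tokenizer, # look at words
               progressbar = FALSE)
  create_vocabulary(it)
}
# e.g. tweetVocabs <- list(donaldVocab = buildVocab(donaldClean),
#                          hillaryVocab = buildVocab(hillaryClean))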
#save(tweetVocabs,file="tweetVocabs.Rda")
| /reference/2_cleanDataExport.R | no_license | edwardwangxy/imdb_score_guess | R | false | false | 1,434 | r |
#### This script is for preparing our analysis of the Piketty Hypothesis in the Canadian context
library(broom)
library(stargazer)
library(tidyverse)
#### First cut Degree and Income gap for left-right block ####
ces %>%
filter(election<1993) %>%
select(election, left, region, male, age, income, degree, religion2) %>%
group_by(election) %>%
summary()
ces %>%
filter(election<2021) %>%
nest(variables=-election) %>%
mutate(model=map(variables, function(x) lm(left~region2+male+age+income+degree+as.factor(religion2), data=x)),
tidied=map(model, tidy))->ols_block_models
ces %>%
filter(election<2021) %>%
nest(variables=-election) %>%
mutate(model=map(variables, function(x) lm(left~region2+male+age+income_tertile+degree+as.factor(religion2), data=x)),
tidied=map(model, tidy))->ols_block_models2
# ces %>%
# filter(election>1979&election<2021) %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(left~region2+male+age+income_tertile+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy))->ols_block_models3
# ces %>%
# filter(election>1979&election<2021) %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(left~region2+male+age+income_tertile+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy))->ols_block_models4
ols_block_models2 %>%
unnest(tidied) %>%
filter(term=="degree"|term=="income_tertile") %>%
filter(election<2020) %>%
mutate(Measure=Recode(term, "'degree'='Degree' ; 'income_tertile'='Income'")) %>%
ggplot(., aes(x=election, y=estimate, col=Measure, group=Measure))+
geom_point()+
geom_line()+
#geom_smooth(se=F)+
labs(x="Election", y="Estimate")+
scale_color_grey()+
geom_hline(yintercept=0, linetype=2)+
geom_errorbar(width=0,aes(ymin=estimate-(1.96*std.error), ymax=estimate+(1.96*std.error)))
ggsave(here("Plots", "block_degree_income2.png"), width=8, height=6)
# ols_block_models3 %>%
# unnest(tidied) %>%
# filter(term=="degree"|term=="income3") %>%
# filter(election<2020) %>%
# mutate(Measure=Recode(term, "'degree'='Degree' ; 'income3'='Income'")) %>%
# ggplot(., aes(x=election, y=estimate, col=Measure, group=Measure))+
# geom_point()+
# geom_line()+
# #geom_smooth(se=F)+
# labs(x="Election", y="Estimate")+
# scale_color_grey()+
# geom_hline(yintercept=0, linetype=2)+labs(title="Income Categories 1:2, 3 and 4:5")
# # geom_errorbar(width=0,aes(ymin=estimate-(1.96*std.error), ymax=estimate+(1.96*std.error)))
# ggsave(here("Plots", "block_degree_income3.png"), width=8, height=6)
#### Decompose By Party
#### Basic Party vote models 1965-2021 ####
ces %>%
filter(election<2021) %>%
nest(variables=-election) %>%
mutate(model=map(variables, function(x) lm(ndp~region2+male+age+income_tertile+degree+as.factor(religion2), data=x)),
tidied=map(model, tidy),
vote=rep('NDP', nrow(.)))->ndp_models_complete1
ces %>%
filter(election<2021) %>%
nest(variables=-election) %>%
mutate(model=map(variables, function(x) lm(conservative~region2+male+age+income_tertile+degree+as.factor(religion2), data=x)),
tidied=map(model, tidy),
vote=rep('Conservative', nrow(.))
)->conservative_models_complete1
ces %>%
filter(election<2021) %>%
nest(variables=-election) %>%
mutate(model=map(variables, function(x) lm(liberal~region2+male+age+income_tertile+degree+as.factor(religion2), data=x)),
tidied=map(model, tidy),
vote=rep('Liberal', nrow(.))
)->liberal_models_complete1
# ces %>%
# filter(election>2003) %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(green~region2+male+age+income+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy),
# vote=rep('Green', nrow(.))
# )->green_models_complete1
#Join all parties and plot Degree coefficients
ndp_models_complete1 %>%
bind_rows(., liberal_models_complete1) %>%
bind_rows(., conservative_models_complete1) %>%
unnest(tidied) %>%
filter(term=="degree"|term=="income_tertile") %>%
filter(election<2021) %>%
mutate(term=Recode(term, "'degree'='Degree'; 'income_tertile'='Income'")) %>%
ggplot(., aes(x=election, y=estimate, col=vote, size=term, group=term))+
geom_point()+facet_grid(~vote, switch="y")+
scale_color_manual(values=c("navy blue", "red", "orange"), name="Vote")+
#scale_alpha_manual(values=c(0.4, .8))+
scale_size_manual(values=c(1,3), name="Coefficient")+
geom_smooth(method="loess", size=0.5, alpha=0.2, se=F) +
#scale_fill_manual(values=c("navy blue", "red", "orange"))+
labs( alpha="Variable", color="Vote", x="Election", y="Estimate")+
#geom_errorbar(aes(ymin=estimate-(1.96*std.error), ymax=estimate+(1.96*std.error)), width=0)+
ylim(c(-0.12,0.12))+
#Turn to greyscale for printing in the journal; also we don't actually need the legend because the labels are on the side
#scale_color_grey(guide="none")+
geom_hline(yintercept=0, alpha=0.5, linetype=2)+
theme(axis.text.x=element_text(angle=90))
ggsave(here("Plots", "ols_degree_party_income.png"), width=8, height=4)
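# The nest()/map()/tidy() pipeline above is repeated once per party; a generic
# helper (sketch, assuming the same ces columns used above) captures the
# pattern in one place:
fit_party_models <- function(data, outcome, label) {
  fml <- reformulate(c("region2", "male", "age", "income_tertile", "degree",
                       "as.factor(religion2)"), response = outcome)
  data %>%
    filter(election < 2021) %>%
    nest(variables = -election) %>%
    mutate(model = map(variables, function(x) lm(fml, data = x)),
           tidied = map(model, tidy),
           vote = label)
}
# e.g. ndp_models_complete1 <- fit_party_models(ces, "ndp", "NDP")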
#### Decompose By Party
# #### Basic Party vote models 1965-2021 ####
# #Income2
# ces %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(ndp~region2+male+age+income2+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy),
# vote=rep('NDP', nrow(.)))->ndp_models_complete2
#
# ces %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(conservative~region2+male+age+income2+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy),
# vote=rep('Conservative', nrow(.))
# )->conservative_models_complete2
#
# ces %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(liberal~region2+male+age+income2+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy),
# vote=rep('Liberal', nrow(.))
# )->liberal_models_complete2
#
# ces %>%
# filter(election>2003) %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(green~region2+male+age+income2+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy),
# vote=rep('Green', nrow(.))
# )->green_models_complete2
# #Join all parties and plot Degree coefficients
# ndp_models_complete2 %>%
# bind_rows(., liberal_models_complete2) %>%
# bind_rows(., conservative_models_complete2) %>%
# unnest(tidied) %>%
# filter(term=="degree"|term=="income2") %>%
# filter(election<2021) %>%
# mutate(term=Recode(term, "'degree'='Degree'; 'income2'='Income'")) %>%
# ggplot(., aes(x=election, y=estimate, col=vote, size=term, group=term))+
# geom_point()+facet_grid(~vote, switch="y")+
# scale_color_manual(values=c("navy blue", "red", "orange"), name="Vote")+
# #scale_alpha_manual(values=c(0.4, .8))+
# scale_size_manual(values=c(1,3), name="Coefficient")+
# geom_smooth(method="loess", size=0.5, alpha=0.2, se=F) +
# #scale_fill_manual(values=c("navy blue", "red", "orange"))+
# labs( alpha="Variable", color="Vote", x="Election", y="Estimate", title="Income2, 1, 2:4, 5")+
# #geom_errorbar(aes(ymin=estimate-(1.96*std.error), ymax=estimate+(1.96*std.error)), width=0)+
# ylim(c(-0.12,0.12))+
# #Turn to greyscale for printing in the journal; also we don't actually need the legend because the labels are on the side
# #scale_color_grey(guide="none")+
# geom_hline(yintercept=0, alpha=0.5, linetype=2)+
# theme(axis.text.x=element_text(angle=90))
# ggsave(here("Plots", "ols_degree_party_income2.png"), width=8, height=4)
#
# #Income3
# ces %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(ndp~region2+male+age+income3+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy),
# vote=rep('NDP', nrow(.)))->ndp_models_complete3
#
# ces %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(conservative~region2+male+age+income3+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy),
# vote=rep('Conservative', nrow(.))
# )->conservative_models_complete3
#
# ces %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(liberal~region2+male+age+income3+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy),
# vote=rep('Liberal', nrow(.))
# )->liberal_models_complete3
#
# ces %>%
# filter(election>2003) %>%
# nest(variables=-election) %>%
# mutate(model=map(variables, function(x) lm(green~region2+male+age+income3+degree+as.factor(religion2), data=x)),
# tidied=map(model, tidy),
# vote=rep('Green', nrow(.))
# )->green_models_complete3
# #Join all parties and plot Degree coefficients
# ndp_models_complete3 %>%
# bind_rows(., liberal_models_complete3) %>%
# bind_rows(., conservative_models_complete3) %>%
# unnest(tidied) %>%
# filter(term=="degree"|term=="income3") %>%
# filter(election<2021) %>%
# mutate(term=Recode(term, "'degree'='Degree'; 'income3'='Income'")) %>%
# ggplot(., aes(x=election, y=estimate, col=vote, size=term, group=term))+
# geom_point()+facet_grid(~vote, switch="y")+
# scale_color_manual(values=c("navy blue", "red", "orange"), name="Vote")+
# #scale_alpha_manual(values=c(0.4, .8))+
# scale_size_manual(values=c(1,3), name="Coefficient")+
# geom_smooth(method="loess", size=0.5, alpha=0.2, se=F) +
# #scale_fill_manual(values=c("navy blue", "red", "orange"))+
# labs( alpha="Variable", color="Vote", x="Election", y="Estimate", title="Income3, 1:2, 3, 4:5")+
# #geom_errorbar(aes(ymin=estimate-(1.96*std.error), ymax=estimate+(1.96*std.error)), width=0)+
# ylim(c(-0.12,0.12))+
# #Turn to greyscale for printing in the journal; also we don't actually need the legend because the labels are on the side
# #scale_color_grey(guide="none")+
# geom_hline(yintercept=0, alpha=0.5, linetype=2)+
# theme(axis.text.x=element_text(angle=90))
# ggsave(here("Plots", "ols_degree_party_income3.png"), width=8, height=4)
| /R_Scripts/3_piketty_analysis_canada_income_comparison.R | no_license | sjkiss/CES_Analysis | R | false | false | 10,181 | r |
install.packages("wikipediatrend")
| /wiki1.r | no_license | syntrade/r | R | false | false | 35 | r |
\name{decideTests}
\alias{decideTests}
\title{Multiple Testing Across Genes and Contrasts}
\description{
Classify a series of related t-statistics as up, down or not significant.
A number of different multiple testing schemes are offered which adjust for multiple testing down the genes as well as across contrasts for each gene.
}
\usage{
decideTests(object,method="separate",adjust.method="BH",p.value=0.05,lfc=0)
}
\arguments{
\item{object}{\code{MArrayLM} object output from \code{eBayes} from which the t-statistics may be extracted.}
\item{method}{character string specifying how probes and contrasts are to be combined in the multiple testing strategy. Choices are \code{"separate"}, \code{"global"}, \code{"hierarchical"}, \code{"nestedF"} or any partial string.}
\item{adjust.method}{character string specifying p-value adjustment method. Possible values are \code{"none"}, \code{"BH"}, \code{"fdr"} (equivalent to \code{"BH"}), \code{"BY"} and \code{"holm"}. See \code{\link[stats]{p.adjust}} for details.}
\item{p.value}{numeric value between 0 and 1 giving the desired size of the test}
\item{lfc}{minimum log2-fold-change required}
}
\value{
An object of class \code{\link[=TestResults-class]{TestResults}}.
This is essentially a numeric matrix with elements \code{-1}, \code{0} or \code{1} depending on whether each t-statistic is classified as significantly negative, not significant or significantly positive respectively.
If \code{lfc>0} then contrasts are judged significant only when the log2-fold change is at least this large in absolute value.
For example, one might choose \code{lfc=log2(1.5)} to restrict to 50\% changes or \code{lfc=1} for 2-fold changes.
In this case, contrasts must satisfy both the p-value and the fold-change cutoff to be judged significant.
}
\details{
These functions implement multiple testing procedures for determining whether each statistic in a matrix of t-statistics should be considered significantly different from zero.
Rows of \code{tstat} correspond to genes and columns to coefficients or contrasts.
The setting \code{method="separate"} is equivalent to using \code{topTable} separately for each coefficient in the linear model fit, and will give the same lists of probes if \code{adjust.method} is the same.
\code{method="global"} will treat the entire matrix of t-statistics as a single vector of unrelated tests.
\code{method="hierarchical"} adjusts down genes and then across contrasts.
\code{method="nestedF"} adjusts down genes and then uses \code{classifyTestsF} to classify contrasts as significant or not for the selected genes.
Please see the limma User's Guide for a discussion of the statistical properties of these methods.
}
\seealso{
An overview of multiple testing functions is given in \link{08.Tests}.
}
\author{Gordon Smyth}
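\examples{
## Hedged sketch of typical use downstream of lmFit/eBayes; not run because
## it needs an expression matrix M and a design matrix.
\dontrun{
fit <- eBayes(lmFit(M, design))
results <- decideTests(fit, method = "global", p.value = 0.01)
summary(results)
}
}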
\keyword{htest}
| /man/decideTests.Rd | no_license | SynBioTek/limma2 | R | false | false | 2,831 | rd | \name{decideTests}
\alias{decideTests}
\title{Multiple Testing Across Genes and Contrasts}
\description{
Classify a series of related t-statistics as up, down or not significant.
A number of different multiple testing schemes are offered which adjust for multiple testing down the genes as well as across contrasts for each gene.
}
\usage{
decideTests(object,method="separate",adjust.method="BH",p.value=0.05,lfc=0)
}
\arguments{
\item{object}{\code{MArrayLM} object output from \code{eBayes} from which the t-statistics may be extracted.}
\item{method}{character string specify how probes and contrasts are to be combined in the multiple testing strategy. Choices are \code{"separate"}, \code{"global"}, \code{"hierarchical"}, \code{"nestedF"} or any partial string.}
\item{adjust.method}{character string specifying p-value adjustment method. Possible values are \code{"none"}, \code{"BH"}, \code{"fdr"} (equivalent to \code{"BH"}), \code{"BY"} and \code{"holm"}. See \code{\link[stats]{p.adjust}} for details.}
\item{p.value}{numeric value between 0 and 1 giving the desired size of the test}
\item{lfc}{minimum log2-fold-change required}
}
\value{
An object of class \code{\link[=TestResults-class]{TestResults}}.
This is essentially a numeric matrix with elements \code{-1}, \code{0} or \code{1} depending on whether each t-statistic is classified as significantly negative, not significant or significantly positive respectively.
If \code{lfc>0} then contrasts are judged significant only when the log2-fold change is at least this large in absolute value.
For example, one might choose \code{lfc=log2(1.5)} to restrict to 50\% changes or \code{lfc=1} for 2-fold changes.
In this case, contrasts must satisfy both the p-value and the fold-change cutoff to be judged significant.
}
\details{
These functions implement multiple testing procedures for determining whether each statistic in a matrix of t-statistics should be considered significantly different from zero.
Rows of \code{tstat} correspond to genes and columns to coefficients or contrasts.
The setting \code{method="separate"} is equivalent to using \code{topTable} separately for each coefficient in the linear model fit, and will give the same lists of probes if \code{adjust.method} is the same.
\code{method="global"} will treat the entire matrix of t-statistics as a single vector of unrelated tests.
\code{method="hierarchical"} adjusts down genes and then across contrasts.
\code{method="nestedF"} adjusts down genes and then uses \code{classifyTestsF} to classify contrasts as significant or not for the selected genes.
Please see the limma User's Guide for a discussion of the statistical properties of these methods.
}
\seealso{
An overview of multiple testing functions is given in \link{08.Tests}.
}
\author{Gordon Smyth}
\keyword{htest}
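A minimal usage sketch of \code{decideTests} (hedged: this requires the Bioconductor package limma, and the simulated expression matrix and design are purely illustrative, not part of the documented interface):

```r
library(limma)

set.seed(1)
y <- matrix(rnorm(100 * 6), 100, 6)        # 100 genes, 6 arrays
y[1:5, 4:6] <- y[1:5, 4:6] + 3             # make 5 genes truly up in group 2
design <- cbind(Intercept = 1, Treat = rep(0:1, each = 3))

fit <- eBayes(lmFit(y, design))
res <- decideTests(fit, method = "separate", adjust.method = "BH",
                   p.value = 0.05)
summary(res)   # counts of -1 / 0 / 1 calls for each coefficient
```

The returned `TestResults` matrix can be passed to `vennDiagram()` or inspected directly, e.g. `which(res[, "Treat"] == 1)`.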
|
#' @rdname sample_int
#' @details `sample_int_expj()` and `sample_int_expjs()`
#' implement one-pass random sampling with a reservoir with exponential jumps
#' (Efraimidis and Spirakis, 2006, Algorithm A-ExpJ). Both functions are
#' implemented in `Rcpp`; `*_expj()` uses log-transformed keys,
#' `*_expjs()` implements the algorithm in the paper verbatim
#' (at the cost of numerical stability).
#' @examples
#' ## Algorithm A-ExpJ (with log-transformed keys)
#' s <- sample_int_expj(20000, 10000, runif(20000))
#' stopifnot(unique(s) == s)
#' p <- c(995, rep(1, 5))
#' n <- 1000
#' set.seed(42)
#' tbl <- table(replicate(sample_int_expj(6, 3, p),
#'                        n = n)) / n
#' stopifnot(abs(tbl - c(1, rep(0.4, 5))) < 0.04)
#'
"sample_int_expj"
| /R/sample_int_expj.R | no_license | alexkowa/wrswoR | R | false | false | 769 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ctd.R
\name{parseLatLon}
\alias{parseLatLon}
\title{Parse a Latitude or Longitude String}
\usage{
parseLatLon(line, debug = getOption("oceDebug"))
}
\arguments{
\item{line}{a character string containing an indication of latitude or
longitude.}
\item{debug}{a flag that turns on debugging. Set to 1 to get a moderate
amount of debugging information, or to 2 to get more.}
}
\value{
A numerical value of latitude or longitude.
}
\description{
Parse a latitude or longitude string, e.g. as in the header of a CTD file.
The following formats are understood (for, e.g., latitude): \preformatted{ *
NMEA Latitude = 47 54.760 N ** Latitude: 47 53.27 N }
}
\seealso{
Used by \code{\link{read.ctd}}.
}
\author{
Dan Kelley
}
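The conversion behind such parsing can be sketched in a few lines of base R. This is a hedged illustration of the degrees-minutes-hemisphere format only (e.g. the "47 54.760 N" core of the strings above), not oce's actual implementation, and `parse_deg_min` is a hypothetical helper name:

```r
# Convert a "deg min hemisphere" string like "47 54.760 N" to signed
# decimal degrees; S and W hemispheres are negated.
parse_deg_min <- function(s) {
  tok <- strsplit(trimws(s), "[[:space:]]+")[[1]]
  deg <- as.numeric(tok[1]) + as.numeric(tok[2]) / 60
  if (tok[3] %in% c("S", "W")) deg <- -deg
  deg
}

parse_deg_min("47 54.760 N")    # ~47.91267
parse_deg_min("122 30.000 W")   # -122.5
```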
| /pkgs/oce/man/parseLatLon.Rd | no_license | vaguiar/EDAV_Project_2017 | R | false | true | 792 | rd |
#' Binds arrays together disregarding names
#'
#' @param ... N-dimensional arrays, or a list thereof
#' @param along Along which axis to bind them together (default: new axis)
#' @return A joined array
#' @export
bind = function(..., along=length(dim(arrayList[[1]]))+1) {
arrayList = list(...)
if (length(arrayList) == 1 && is.list(arrayList[[1]]))
arrayList = arrayList[[1]]
arrayList = vectors_to_row_or_col(arrayList, along=along)
ndim = dim(arrayList[[1]])
ndim[along] = sum(sapply(arrayList, function(a) dim(a)[along]))
dimNames = dimnames(arrayList[[1]])
names_along = unlist(lapply(arrayList, dimnames, along=along))
if (length(names_along) != ndim[along])
dimNames[along] = list(NULL)
else
dimNames[[along]] = names_along
offset = 0
index = base::rep(list(TRUE), length(ndim))
result = array(NA, dim=ndim, dimnames=dimNames)
for (i in seq_along(arrayList)) {
cur = arrayList[[i]]
index[[along]] = offset + seq_len(dim(cur)[along])
result = do.call("[<-", c(list(result), index, list(cur)))
offset = offset + length(index[[along]])
}
if (dim(result)[along] == length(arrayList) && !is.null(names(arrayList)))
dimnames(result)[[along]] = names(arrayList)
restore_null_dimnames(result)
}
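A usage sketch (hedged: this assumes the narray package is attached, since `bind()` relies on internal helpers such as `vectors_to_row_or_col()` and `restore_null_dimnames()` that are not shown above):

```r
library(narray)  # assumed available

a <- matrix(1:4, 2, 2, dimnames = list(NULL, c("x", "y")))
b <- matrix(5:8, 2, 2, dimnames = list(NULL, c("x", "y")))

bind(list(a, b), along = 1)    # stacks rows, analogous to rbind(a, b)
bind(list(A = a, B = b))       # default: join along a new last axis,
                               # list names become dimnames of that axis
```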
| /R/bind.r | permissive | cran/narray | R | false | false | 1,339 | r |
###########################################################################
# VISUALISING THE BETA AND GAMMA DISTRIBUTIONS
#
# These functions plot the beta and gamma distribution and allow the user
# to vary the parameters with Tcl/Tk sliders.
#
# To run with default values, just type "vis.beta()" or "vis.gamma()".
# Initial values of parameters can be specified if desired.
#
# Note that R will freeze until the slider window is closed. The slider window
# may disappear behind other windows: look for the "TK" icon on the task bar.
#
# The functions return the last parameter values selected by the user.
#
# Code by Mike Meredith, updated 27 May 2007
# Inspired by functions by Greg Snow in the 'TeachingDemos' package.
# More updates by Bob Burn, 10 July 2007 and by Sandro Leidi, August 2014
###########################################################################
vis.beta <- function (Alpha=2, Beta=2)
{
## Preliminaries:
if (!exists("slider.env"))
slider.env <<- new.env()
library(tcltk)
assign("Alpha", tclVar(Alpha), env = slider.env)
assign("Beta", tclVar(Beta), env = slider.env)
## This draws the figure:
Beta.refresh <- function(...) {
Alpha <- as.numeric(evalq(tclvalue(Alpha), env = slider.env))
Beta <- as.numeric(evalq(tclvalue(Beta), env = slider.env))
mu <- Alpha / (Alpha+Beta)
md <- ifelse(Alpha+Beta != 2, max(0,min(1,(Alpha-1)/(Alpha+Beta-2))), 0.5)
SD<- sqrt((Alpha * Beta) / ((Alpha+Beta)^2 * (Alpha + Beta + 1)))
ylim <- c(0,4)
plot(0:1, ylim, type='n',main="", xlab = expression(theta), ylab = "Probability density")
curve(dbeta(x, Alpha, Beta), 0, 1, col='red', lwd=2, add=TRUE)
segments(mu,0,mu,dbeta(mu, Alpha, Beta), lty=2, col='blue')
segments(md,0,md,dbeta(md, Alpha, Beta), lty=2, col='green')
title(main='Beta Distribution')
x.pos <- ifelse(md>0.5,0,0.75)
x.pos <- ifelse((Alpha<1)|(Beta<1),0.4,x.pos)
text(0.4,4,bquote(alpha == .(Alpha)),pos=4)
text(0.4,3.8,bquote(beta == .(Beta)),pos=4)
text(x.pos,2,paste('Mean = ', round(mu,3)),pos=4,col='blue')
text(x.pos,1.75,paste('Mode = ', round(md,3)),pos=4,col='green')
text(x.pos,1.5,paste('S.D. = ', round(SD,3)),pos=4)
}
## Initial display of figure
Beta.refresh()
bringToTop()
## Set up and run the widget:
m <- tktoplevel()
tkwm.title(m, "Beta Distribution")
tkwm.geometry(m, "+0+0")
tkpack(fr <- tkframe(m), side = "top")
tkpack(tklabel(fr, text = "Alpha", width = "10"), side = "right")
tkpack(sc <- tkscale(fr, command = Beta.refresh, from = 0.1,
to = 10, orient = "horiz", resolution = 0.1, showvalue = T),
side = "left")
assign("sc", sc, env = slider.env)
evalq(tkconfigure(sc, variable = Alpha), env = slider.env)
tkpack(fr <- tkframe(m), side = "top")
tkpack(tklabel(fr, text = "Beta", width = "10"), side = "right")
tkpack(sc <- tkscale(fr, command = Beta.refresh, from = 0.1,
to = 10, orient = "horiz", resolution = 0.1, showvalue = T),
side = "left")
assign("sc", sc, env = slider.env)
evalq(tkconfigure(sc, variable = Beta), env = slider.env)
tkpack(fr <- tkframe(m), side = "top")
tkpack(tkbutton(m, text = "Refresh", command = Beta.refresh),
side = "left")
tkpack(tkbutton(m, text = "Exit", command = function() tkdestroy(m)),
side = "right")
cat("Waiting for TK slider window to close...") ; flush.console()
tkwait.window(m)
cat("okay.\n")
## When window is closed, return final values:
Alpha <- as.numeric(evalq(tclvalue(Alpha), env = slider.env))
Beta <- as.numeric(evalq(tclvalue(Beta), env = slider.env))
output <- c(Alpha, Beta)
names(output) <- c("Alpha", "Beta")
rm("slider.env", pos=1)
return(output)
}
###########################################################################
vis.gamma <- function (Alpha=2, Beta=2)
{
## Preliminaries:
if (!exists("slider.env"))
slider.env <<- new.env()
library(tcltk)
assign("Alpha", tclVar(Alpha), env = slider.env)
assign("Beta", tclVar(Beta), env = slider.env)
## This draws the figure:
Gamma.refresh <- function(...) {
Alpha <- as.numeric(evalq(tclvalue(Alpha), env = slider.env))
Beta <- as.numeric(evalq(tclvalue(Beta), env = slider.env))
mu <- Alpha / Beta
md <- max(0,(Alpha-1) / Beta)
SD<- sqrt(Alpha) / Beta
p25<-qgamma(0.025,Alpha,Beta)
p975<-qgamma(0.975,Alpha,Beta)
x.max <- 5
x.pos <- x.max-1.5
y.seq <- seq(0,2,length=length(0:x.max))
y.pos <- max(y.seq)-0.2
plot(0:x.max, y.seq, type='n', xlab = expression(theta), ylab = "Probability density")
curve(dgamma(x, shape=Alpha, rate=Beta), 0, x.max, col='red', lwd=2, add=TRUE)
segments(mu,0,mu,dgamma(mu, shape=Alpha, rate=Beta), lty=2, col='blue')
segments(md,0,md,dgamma(md, shape=Alpha, rate=Beta), lty=2, col='green')
segments(p25,0,p25,dgamma(p25, shape=Alpha, rate=Beta), lty=1, col='brown')
segments(p975,0,p975,dgamma(p975, shape=Alpha, rate=Beta), lty=1, col='brown')
title(main='Gamma Distribution')
text(x.pos,y.pos,bquote(alpha == .(Alpha)),pos=4)
text(x.pos,y.pos-0.1,bquote(beta == .(Beta)),pos=4)
text(x.pos,y.pos-0.3,paste('Mean = ', round(mu,3)),pos=4,col='blue')
text(x.pos,y.pos-0.4,paste('Mode = ', round(md,3)),pos=4,col='green')
text(x.pos,y.pos-0.5,paste('S.D. = ', round(SD,3)),pos=4)
text(x.pos,y.pos-0.6,paste('P2.5 = ', round(p25,3)),pos=4,col='brown')
text(x.pos,y.pos-0.7,paste('P97.5 = ', round(p975,3)),pos=4,col='brown')
}
## Initial display of figure
Gamma.refresh()
bringToTop()
## Set up and run the widget:
m <- tktoplevel()
tkwm.title(m, "Gamma Distribution")
tkwm.geometry(m, "+0+0")
tkpack(fr <- tkframe(m), side = "top")
tkpack(tklabel(fr, text = "Alpha", width = "10"), side = "right")
tkpack(sc <- tkscale(fr, command = Gamma.refresh, from = 0.1,
to = 10, orient = "horiz", resolution = 0.1, showvalue = T),
side = "left")
assign("sc", sc, env = slider.env)
evalq(tkconfigure(sc, variable = Alpha), env = slider.env)
tkpack(fr <- tkframe(m), side = "top")
tkpack(tklabel(fr, text = "Beta", width = "10"), side = "right")
tkpack(sc <- tkscale(fr, command = Gamma.refresh, from = 0.1,
to = 10, orient = "horiz", resolution = 0.1, showvalue = T),
side = "left")
assign("sc", sc, env = slider.env)
evalq(tkconfigure(sc, variable = Beta), env = slider.env)
tkpack(fr <- tkframe(m), side = "top")
tkpack(tkbutton(m, text = "Refresh", command = Gamma.refresh),
side = "left")
tkpack(tkbutton(m, text = "Exit", command = function() tkdestroy(m)),
side = "right")
cat("Waiting for TK slider window to close...") ; flush.console()
tkwait.window(m)
cat("okay.\n")
## When window is closed, return final values:
Alpha <- as.numeric(evalq(tclvalue(Alpha), env = slider.env))
Beta <- as.numeric(evalq(tclvalue(Beta), env = slider.env))
output <- c(Alpha, Beta)
names(output) <- c("Alpha", "Beta")
rm("slider.env", pos=1)
return(output)
}
###########################################################################
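The closed-form summaries displayed by `vis.beta()` can be checked without the Tcl/Tk interface. For example, with alpha = beta = 2 (the defaults), the same mean, mode and SD formulas give:

```r
# Beta(2, 2): mean, mode and standard deviation, as computed in Beta.refresh()
a <- 2; b <- 2
mu <- a / (a + b)                                 # mean: 0.5
md <- (a - 1) / (a + b - 2)                       # mode: 0.5 (a + b != 2 here? a+b=4, fine)
SD <- sqrt((a * b) / ((a + b)^2 * (a + b + 1)))   # sd: sqrt(0.05) ~ 0.2236
c(mean = mu, mode = md, sd = SD)
```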
| /datasets/visdist.R | no_license | vbuens/bayesianstats | R | false | false | 7,586 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/peaklist_annotation.R
\name{cmp_mz_filter}
\alias{cmp_mz_filter}
\title{Compound list filter}
\usage{
cmp_mz_filter(mz, ref_tbl, mz_col = "mz", ppm = 30)
}
\arguments{
\item{mz}{Query mz to filter by}
\item{ref_tbl}{A \code{\link{tibble}} with compound mz values. Can be generated with expand_adducts.}
\item{mz_col}{Character string. Column name of the column containing the mz values.}
\item{ppm}{Maximum ppm value between query mass and database mz.}
}
\value{
A \code{\link{tibble}} containing the same columns as the input table, with an added ppm column giving the deviation between the compound mz and the query mz.
}
\description{
This function filters a compound list based on mz value.
It is meant to be used to annotate a peaklist by running through (with \code{\link{map}})
each mz in the peaklist and adding a nested table of hits to the peaklist.
}
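The ppm-filtering idea can be illustrated in base R. This is a hedged sketch of the assumed behaviour (signed ppm deviation, filter by absolute tolerance), not PeakABro's actual implementation, and `mz_filter`/`ref` are hypothetical names:

```r
# Keep reference rows whose mz lies within `ppm` of the query mz and
# record the signed deviation in a new ppm column.
mz_filter <- function(mz, ref_tbl, mz_col = "mz", ppm = 30) {
  dev  <- (ref_tbl[[mz_col]] - mz) / mz * 1e6
  keep <- abs(dev) <= ppm
  out  <- ref_tbl[keep, , drop = FALSE]
  out$ppm <- dev[keep]
  out
}

ref <- data.frame(compound = c("A", "B"), mz = c(180.0634, 181.0000))
mz_filter(180.0650, ref, ppm = 30)   # keeps compound A only (~ -8.9 ppm)
```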
| /man/cmp_mz_filter.Rd | permissive | stanstrup/PeakABro | R | false | true | 938 | rd |
###############################
#initialize parameters
init.params <- function(action, period){
## initialize spinbox
spinbox.state()
#############
#homogenization CDT
if(action == 'homog'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Homogenization_CDT.json'))
ret.params <- c(list(action = action, period = period), ret.params, list(stn.user.choice = NULL, retpar = 0))
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
#RHtestsV4
if(action == 'rhtests'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Homogenization_RHtests.json'))
ret.params <- c(list(action = action, period = period), ret.params, list(stn.user.choice = NULL, getdata = FALSE))
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
##########
#qc.txtn
if(action == 'qc.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'QC_Temperatures.json'))
stnInfo <- data.frame("Station.ID" = NA, "Lower.Bounds" = ret.params$limits$Lower.Bounds,
"Upper.Bounds" = ret.params$limits$Upper.Bounds, "Lon" = NA, "Lat" = NA)
ret.params <- c(list(action = action, period = period), ret.params, list(stnInfo = stnInfo, retpar = 0))
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
############
#qc.rainfall
##Zeros check
if(action == 'zero.check'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'False_Zero_Check.json'))
ret.params <- c(list(action = action, period = period), ret.params, list(retpar = 0))
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
##Outliers check
if(action == 'qc.rain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'QC_Rainfall.json'))
stnInfo <- data.frame("Station.ID" = NA, "Upper.Bounds" = ret.params$limits$Upper.Bounds, "Lon" = NA, "Lat" = NA)
ret.params <- c(list(action = action, period = period), ret.params, list(stnInfo = stnInfo, retpar = 0))
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
#########################################################################
## Merging Rainfall
#### Simplified
if(action == 'merge.rain.one'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Precip_Merging_One.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#### Advanced
###Mean bias
if(action == 'coefbias.rain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Precip_Bias_Factor_Calc.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
###Remove bias
if(action == 'rmbias.rain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Precip_Bias_Correction.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
### Compute LM coef
if(action == 'coefLM.rain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Precip_LM_Coef_Calc.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
###Merging rainfall
if(action == 'merge.rain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Precip_Merging_Adv.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
####dekadal update
if(action == 'merge.dekrain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Update_dekadal_RR.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#################################################################
## Merging temperature
##compute regression parameters for downscaling
if(action == 'coefdown.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_downscalling_Coef.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
######downscaling
if(action == 'down.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_downscalling_reanalysis.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#### Simplified
if(action == 'merge.temp.one'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_Merging_One.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#### Advanced
##Bias coeff
if(action == 'coefbias.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_Bias_Factor_Calc.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
##Adjustment
if(action == 'adjust.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_Bias_Correction.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
### Compute LM coef
if(action == 'coefLM.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_LM_Coef_Calc.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
##Merging
if(action == 'merge.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_Merging_Adv.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#################################################################
## Scale merged data
if(action == 'scale.merged'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Scaling_MergedData.json'))
ret.params <- c(list(action = action), ret.params)
if(str_trim(ret.params$outdir) == "") ret.params$outdir <- getwd()
}
#############################################################################################3
if(action == 'chk.coords'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Check_STN_Coords.json'))
ret.params <- c(list(action = action, period = 'daily'), ret.params)
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
######################
if(action == 'agg.qc'){
file.io <- getwd()
ret.params <- list(action = action, period = 'daily', file.io = file.io)
}
if(action == 'agg.zc'){
file.io <- getwd()
ret.params <- list(action = action, period = 'daily', file.io = file.io)
}
if(action == 'agg.hom'){
file.io <- getwd()
ret.params <- list(action = action, period = 'daily', file.io = file.io)
}
################
if(action == 'cdtInput.stn'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Fomat_CDT_inputData.json'))
ret.params <- c(list(action = action, period = period), ret.params)
}
################
if(action == 'aggregate.ts'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Aggregate_time_series.json'))
ret.params <- c(list(action = action, in.tstep = period), ret.params)
}
################
if(action == 'aggregate.nc'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Aggregate_spatial_netcdf.json'))
ret.params <- c(list(action = action), ret.params)
}
################
if(action == 'fill.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Filling_CDT_Temperature.json'))
ret.params <- c(list(action = action, period = period), ret.params)
}
# ###########################################
# ## remove
# if(action == 'extrct.ts'){
# file.io <- data.frame(c('NetCDF.dir', 'Shp.file', 'file2save'), c('', '', getwd()))
# names(file.io) <- c('Parameters', 'Values')
# prefix <- data.frame(c('cdfFileFormat', 'tsPrefix'), c("rr_mrg_%s%s%s.nc", "rr_adj"))
# names(prefix) <- c('Parameters', 'Values')
| /functions/cdtInitialize_params_functions.R | no_license | rijaf/CDT | R | false | false | 12,405 | r |
###############################
#initialize parameters
init.params <- function(action, period){
## initialize spinbox
spinbox.state()
#############
#homogenization CDT
if(action == 'homog'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Homogenization_CDT.json'))
ret.params <- c(list(action = action, period = period), ret.params, list(stn.user.choice = NULL, retpar = 0))
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
#RHtestsV4
if(action == 'rhtests'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Homogenization_RHtests.json'))
ret.params <- c(list(action = action, period = period), ret.params, list(stn.user.choice = NULL, getdata = FALSE))
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
##########
#qc.txtn
if(action == 'qc.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'QC_Temperatures.json'))
stnInfo <- data.frame("Station.ID" = NA, "Lower.Bounds" = ret.params$limits$Lower.Bounds,
"Upper.Bounds" = ret.params$limits$Upper.Bounds, "Lon" = NA, "Lat" = NA)
ret.params <- c(list(action = action, period = period), ret.params, list(stnInfo = stnInfo, retpar = 0))
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
############
#qc.rainfall
##Zeros check
if(action == 'zero.check'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'False_Zero_Check.json'))
ret.params <- c(list(action = action, period = period), ret.params, list(retpar = 0))
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
##Outliers check
if(action == 'qc.rain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'QC_Rainfall.json'))
stnInfo <- data.frame("Station.ID" = NA, "Upper.Bounds" = ret.params$limits$Upper.Bounds, "Lon" = NA, "Lat" = NA)
ret.params <- c(list(action = action, period = period), ret.params, list(stnInfo = stnInfo, retpar = 0))
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
#########################################################################
## Merging Rainfall
#### Simplified
if(action == 'merge.rain.one'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Precip_Merging_One.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#### Advanced
###Mean bias
if(action == 'coefbias.rain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Precip_Bias_Factor_Calc.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
###Remove bias
if(action == 'rmbias.rain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Precip_Bias_Correction.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
### Compute LM coef
if(action == 'coefLM.rain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Precip_LM_Coef_Calc.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
###Merging rainfall
if(action == 'merge.rain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Precip_Merging_Adv.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
####dekadal update
if(action == 'merge.dekrain'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Update_dekadal_RR.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#################################################################
## Merging temperature
##compute regression parameters for downscaling
if(action == 'coefdown.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_downscalling_Coef.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
######downscaling
if(action == 'down.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_downscalling_reanalysis.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#### Simplified
if(action == 'merge.temp.one'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_Merging_One.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#### Advanced
##Bias coeff
if(action == 'coefbias.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_Bias_Factor_Calc.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
##Adjustment
if(action == 'adjust.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_Bias_Correction.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
### Compute LM coef
if(action == 'coefLM.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_LM_Coef_Calc.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
##Merging
if(action == 'merge.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Temp_Merging_Adv.json'))
ret.params <- c(list(action = action, period = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#################################################################
## Scale merged data
if(action == 'scale.merged'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Scaling_MergedData.json'))
ret.params <- c(list(action = action), ret.params)
if(str_trim(ret.params$outdir) == "") ret.params$outdir <- getwd()
}
#############################################################################################
if(action == 'chk.coords'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Check_STN_Coords.json'))
ret.params <- c(list(action = action, period = 'daily'), ret.params)
if(str_trim(ret.params$IO.files$dir2save) == "") ret.params$IO.files$dir2save <- getwd()
}
######################
if(action == 'agg.qc'){
file.io <- getwd()
ret.params <- list(action = action, period = 'daily', file.io = file.io)
}
if(action == 'agg.zc'){
file.io <- getwd()
ret.params <- list(action = action, period = 'daily', file.io = file.io)
}
if(action == 'agg.hom'){
file.io <- getwd()
ret.params <- list(action = action, period = 'daily', file.io = file.io)
}
################
if(action == 'cdtInput.stn'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Fomat_CDT_inputData.json'))
ret.params <- c(list(action = action, period = period), ret.params)
}
################
if(action == 'aggregate.ts'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Aggregate_time_series.json'))
ret.params <- c(list(action = action, in.tstep = period), ret.params)
}
################
if(action == 'aggregate.nc'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Aggregate_spatial_netcdf.json'))
ret.params <- c(list(action = action), ret.params)
}
################
if(action == 'fill.temp'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Filling_CDT_Temperature.json'))
ret.params <- c(list(action = action, period = period), ret.params)
}
# ###########################################
# ## remove
# if(action == 'extrct.ts'){
# file.io <- data.frame(c('NetCDF.dir', 'Shp.file', 'file2save'), c('', '', getwd()))
# names(file.io) <- c('Parameters', 'Values')
# prefix <- data.frame(c('cdfFileFormat', 'tsPrefix'), c("rr_mrg_%s%s%s.nc", "rr_adj"))
# names(prefix) <- c('Parameters', 'Values')
# dates.ts <- data.frame(c('istart.yrs', 'istart.mon', 'istart.dek', 'iend.yrs', 'iend.mon', 'iend.dek'),
# c('1983', '1', '1', '2014', '12', '3'))
# names(dates.ts) <- c('Parameters', 'Values')
# ret.params <- list(action = action, period = period, file.io = file.io, prefix = prefix, dates.ts = dates.ts)
# }
#################################################################
# create cdt dataset from ncdf files
if(action == 'create.CdtDataset'){
ret.params <- fromJSON(file.path(apps.dir, 'init_params', 'Create_CDT_Dataset.json'))
ret.params <- c(list(action = action, Tstep = period), ret.params)
if(str_trim(ret.params$output$dir) == "") ret.params$output$dir <- getwd()
}
#################################################################
## compute temperature variables
if(action == 'compute.dervTemp'){
# cdtstation, cdtdataset, cdtnetcdf
# Mean, Range
# ret.params <- list(action = action, Tstep = period,
# variable = "Mean", data.type = "cdtstation",
# station = list(tmin = "", tmax = ""),
# cdtdataset = list(tmin = "", tmax = ""),
# ncdf.tmin = list(dir = "", sample = "", format = "tmin_%s%s%s.nc"),
# ncdf.tmax = list(dir = "", sample = "", format = "tmax_%s%s%s.nc"),
# output = "")
ret.params <- list(action = action, Tstep = period,
variable = "Mean", data.type = "cdtstation",
cdtstation = list(tmin = "", tmax = ""),
cdtdataset = list(tmin = "", tmax = ""),
cdtnetcdf = list(tmin = list(dir = "", sample = "", format = "tmin_%s%s%s.nc"),
tmax = list(dir = "", sample = "", format = "tmax_%s%s%s.nc")),
output = "")
}
################
## compute potential/reference evapotranspiration
if(action == 'compute.PET'){
# cdtstation, cdtdataset, cdtnetcdf
# Hargreaves (HAR), Modified-Hargreaves (MHAR)
ret.params <- list(action = action, Tstep = period,
method = "HAR", data.type = "cdtstation",
cdtstation = list(tmin = "", tmax = "", prec = ""),
cdtdataset = list(tmin = "", tmax = "", prec = ""),
cdtnetcdf = list(tmin = list(dir = "", sample = "", format = "tmin_%s%s%s.nc"),
tmax = list(dir = "", sample = "", format = "tmax_%s%s%s.nc"),
prec = list(dir = "", sample = "", format = "precip_%s%s%s.nc")),
output = "")
}
################
## compute water balance
if(action == 'compute.WB'){
# cdtstation, cdtdataset
ret.params <- list(action = action, Tstep = period, data.type = "cdtstation",
cdtstation = list(etp = "", prec = ""),
cdtdataset = list(etp = "", prec = ""),
hdate = list(start.month = 1, start.day = 1, separate.year = FALSE),
wb = list(wb1 = 0, multi = FALSE, file = ""),
swhc = list(cap.max = 100, multi = FALSE, file = ""),
output = "")
}
#################################################################
## conversion to CPT data format
if(action == 'convert.CPTdata'){
ret.params <- list(action = action,
data.type = "cdtstation",
cdtstation = "",
cdtnetcdf = list(dir = "", sample = "", format = "onset_%Y%M%D.nc"),
cptinfo = list(name = "onset", units = "days since", missval = '-9999'),
output = "")
}
################
## conversion between ncdf, geotiff, esri .hdr labeled
if(action == 'convert.nc.tif.bil'){
ret.params <- list(action = action,
dir.in = "", dir.out = "",
type.in = "nc", type.out = "tif",
nc.opts = list(varname = "precip", varunit = "mm", missval = -9999,
longname = "Merged station-satellite precipitation"))
}
################
## create a GrADS Data Descriptor File
if(action == 'grads.ctl'){
ret.params <- list(action = action, tstep = period,
nc = list(dir = "", sample = "", format = "rr_mrg_%Y%M%D.nc"),
date = list(year1 = 1981, mon1 = 1, day1 = 1,
year2 = 2017, mon2 = 12, day2 = 31),
out.ctl = "")
}
#############
return(ret.params)
}
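Every branch of `init.params` follows the same shape: dispatch on `action`, seed a named list of defaults (inline or loaded from a JSON file under `init_params/`), then default any empty output path to `getwd()`. A minimal self-contained sketch of that pattern (illustrative only — it omits the JSON loading and the GUI spinbox state):

```r
init.params.sketch <- function(action, period) {
  ret.params <- NULL
  if (action == 'compute.PET') {
    # Inline defaults, mirroring the structure used above.
    ret.params <- list(action = action, Tstep = period,
                       method = "HAR", data.type = "cdtstation",
                       output = "")
  }
  # An empty output path falls back to the working directory.
  if (!is.null(ret.params$output) && ret.params$output == "")
    ret.params$output <- getwd()
  ret.params
}
p <- init.params.sketch('compute.PET', 'dekadal')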
|
###########################################################################################
# Routine: get.fx.spot
###########################################################################################
get.fx.spot <- function(
from.ccy,
to.ccy = NULL,
asOfDate = last.biz.date()
) {
sd = asOfDate %m-% months(1);
ed = asOfDate %m+% months(1);
fx.rate = get.exchange.rates( from.ccy = from.ccy, to.ccy = to.ccy, start.date = sd, end.date = ed );
fx.spot = as.numeric( fill.missing( fx.rate, start.date = sd, end.date = ed )[ asOfDate ] );
return( fx.spot );
};
###########################################################################################
# Routine: get.forward.points
###########################################################################################
get.forward.points <- function(
symbol,
forward.dates = NULL,
pricing.date = NULL,
BidAsk = 'Mid'
) {
all.fx.data = read.csv( FORWARD_RATES_RECENT_CSV, sep = '\t', header = TRUE, stringsAsFactors = FALSE );
current.data = all.fx.data[ all.fx.data$Symbol == symbol, ];
asOfDates = current.data$AsOfDate;
if( is.null(pricing.date ) ) {
fwd.points = current.data[ asOfDates == max( asOfDates ), ];
} else {
date.diffs = as.numeric(as.Date(asOfDates)) - as.numeric(as.Date(pricing.date ));
fwd.points = current.data[ abs(date.diffs) == min( abs( date.diffs ) ), ];
};
# Select appropriate data columns, and create Mid quote if necessary
if( BidAsk == 'Mid' ) {
fwd.points = data.frame( Forward.Date = fwd.points$Forward.Date, Mid = (fwd.points$Bid + fwd.points$Ask ) / 2 );
} else {
fwd.points = fwd.points[, c( 'Forward.Date', BidAsk ) ];
};
if( !is.null(forward.dates) ) {
interp = approx( as.Date( fwd.points$Forward.Date ), fwd.points[,BidAsk ], forward.dates );
fwd.points = data.frame( interp$x, interp$y );
colnames( fwd.points ) = c( 'Forward.Date', BidAsk );
};
return( fwd.points );
};
###########################################################################################
# Routine: get.fx.forward.points.rescale.factor
#
# Get amount by which forward points are rescaled, corresponding to quoting convention decimal places
###########################################################################################
get.fx.forward.points.rescale.factor <- function( symbol ) {
fx.info = read.csv( EXCHANGE_RATE_INFO_CSV, header = TRUE, stringsAsFactors = FALSE );
rescale = 10^as.numeric( fx.info[ fx.info$Symbol == symbol, 'Decimals' ] );
return( rescale );
};
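Forward points are quoted in pips, so the rescale factor is 10 raised to the symbol's decimal count. A toy conversion, assuming for illustration a pair quoted to 4 decimals (real decimal counts come from `EXCHANGE_RATE_INFO_CSV`):

```r
# Illustrative values only; decimal counts actually come from the CSV above.
decimals <- 4
rescale <- 10^decimals      # 10000
fx.spot <- 1.1000           # spot rate
fwd.points <- 25            # quoted forward points (pips)
fwd.rate <- fx.spot + fwd.points / rescale   # 1.1025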
###########################################################################################
# Routine: get.forward.rates
###########################################################################################
get.forward.rates <- function(
symbol,
forward.dates = NULL,
pricing.date = today(),
BidAsk = 'Ask'
) {
fwd.points = get.forward.points( symbol, forward.dates = forward.dates, pricing.date = pricing.date, BidAsk = BidAsk );
exchange.rates = get.exchange.rates( symbol, start.date = pricing.date %m-% months(1), end.date = pricing.date %m+% months(1) );
exchange.rates = fill.missing( exchange.rates );
date.diffs = as.numeric(pricing.date) - as.numeric(as.Date(row.names( exchange.rates ) ) );
loc = max( which( date.diffs >= 0 ) );
fx.spot = exchange.rates[loc];
rescale = get.fx.forward.points.rescale.factor(symbol);
fwd.rates = data.frame( fwd.points$Forward.Date, fx.spot + fwd.points[,BidAsk]/rescale );
colnames( fwd.rates ) = c( 'Forward.Date', BidAsk );
return( fwd.rates );
};
###########################################################################################
# Routine: calc.forward.prices
###########################################################################################
calc.forward.prices <- function(
symbol,
forward.dates,
fwd.contract.rates,
pricing.date = today(),
BidAsk = 'Ask'
) {
fwd.rates = get.forward.rates( symbol, forward.dates = forward.dates, pricing.date = pricing.date, BidAsk = BidAsk );
prices = numeric( length( forward.dates ) );
for( i in 1:nrow( fwd.rates ) ) {
loc = which( forward.dates == fwd.rates$Forward.Date[i] );
prices[loc] = fwd.rates[i,BidAsk] - fwd.contract.rates[loc];
};
return( prices );
};
###########################################################################################
# Routine: calc.fx.forward.prices.ts
#
# Takes a currency cross symbol (e.g., EUR/USD, USD/JPY) as input, and returns time series
# of the prices
###########################################################################################
calc.fx.forward.prices.ts <- function(
symbol,
forward.date,
fwd.contract.rate,
start.date = today() %m-% years(10),
end.date = today(),
BidAsk = 'Mid'
) {
current.data = read.csv( FORWARD_RATES_RECENT_CSV, sep = '\t', header = TRUE, stringsAsFactors = FALSE );
# Restrict data frame to relevant currency
current.data = current.data[ current.data$Symbol == symbol, ];
# Select appropriate data columns, and create Mid quote if necessary
if( BidAsk == 'Mid' ) {
current.data = cbind( current.data, data.frame( Mid = (current.data$Bid + current.data$Ask ) / 2 ) );
};
# Remove all but necessary columns
data = current.data[, c( 'AsOfDate', 'Forward.Date', BidAsk ) ];
colnames( data ) = c( 'AsOfDate', 'Forward.Date', 'Points' );
# Get rescale factor, based on number of decimals in forward point quote
rescale = get.fx.forward.points.rescale.factor(symbol);
data$Points = data$Points / rescale;
# Create time series where each column is a different pricing date
uniq.dates = unique( data$AsOfDate );
df = c();
for( i in 1:length(uniq.dates) ) {
row.data = data[ data$AsOfDate == uniq.dates[i], ];
interp = approx( as.Date( row.data$Forward.Date ), row.data$Points, forward.date );
new.df = data.frame( Date = uniq.dates[i], Value = interp$y );
df = rbind( df, new.df );
};
# Convert forward pt data frame to time series
ts = as.timeSeries(df);
# Get exchange rates, and merge time series with the forward points
exchange.rates = get.exchange.rates( symbol, start.date = start.date %m-% months(1), end.date = end.date );
mts = nanmerge( ts, exchange.rates, rm.na = F );
mts = fill.missing(mts, end.date = end.date );
first.val = mts[min( which(!is.na(mts[,1])) ),1];
mts[is.na(mts[,1]),1] = first.val;
# Forward rates = ( Spot FX rate ) + ( Forward Points ) / ( Rescale factor )
forward.rates = mts[,1] + mts[,2];
forward.prices = forward.rates - fwd.contract.rate;
# Only keep points between start and end dates
forward.prices = ts.range( forward.prices, start.date, end.date );
return( forward.prices );
};
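The mark-to-market computed above reduces to a simple identity: forward price = (spot + points / rescale) − contracted forward rate. A toy numeric check (values are illustrative, not market data):

```r
# Illustrative values only.
fx.spot <- 110.25            # e.g. a USD/JPY-style spot
fwd.points <- -15            # forward points as quoted
rescale <- 10^2              # assuming a 2-decimal quoting convention
fwd.contract.rate <- 110.00
forward.rate <- fx.spot + fwd.points / rescale   # 110.10
price <- forward.rate - fwd.contract.rate        # ~0.10
```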
###########################################################################################
# Routine: get.time.series.fx.forwards
###########################################################################################
get.time.series.fx.forwards <- function(
fx.fwd.symbols,
start.date = today() %m-% years(10),
end.date = today(),
BidAsk = 'Mid',
target.ccy = 'USD'
) {
if( is.empty( fx.fwd.symbols ) ) {
return( NULL )
};
all.fwd.info = parse.fx.forward.symbols( fx.fwd.symbols );
ts = timeSeries();
for( i in 1:nrow(all.fwd.info) ) {
fwd.info = all.fwd.info[i,];
# Get the FX forward price time series. This time series is denominated in the bottom currency of the pair.
# For example, the forward price for USD/JPY would be denominated in JPY
raw.ts = calc.fx.forward.prices.ts( symbol = fwd.info$Currency.Cross, forward.date = fwd.info$Settlement.Date,
fwd.contract.rate = fwd.info$Settlement.Rate, start.date = start.date, end.date = end.date, BidAsk = BidAsk );
# Figure out the long and short legs of the forward
fx.info = parse.currency.pair.symbols( fwd.info$Currency.Cross );
# Convert the currency of the forward time series to the target currency
new.ts = convert.ts.currency( raw.ts, from.ccy = fx.info$Long.Currency, to.ccy = target.ccy );
colnames( new.ts ) = fx.fwd.symbols[i];
ts = merge.ts( ts, new.ts );
};
return( ts );
};
###########################################################################################
# Routine: convert.currency
###########################################################################################
convert.currency <- function(
vals,
from.ccy,
to.ccy = 'USD',
asOfDate = last.biz.date()
) {
if( from.ccy == to.ccy ) {
return( vals );
} else {
# Create a time series on the asOfDate
ts = timeSeries( 1, asOfDate );
# Find time series in new currency
spot.rate = convert.ts.currency( ts, from.ccy = from.ccy, to.ccy = to.ccy );
# Convert result back to an array, and return the result
new.vals = vals * as.numeric(spot.rate);
return( new.vals );
};
};
###########################################################################################
# Routine: convert.ts.currency
###########################################################################################
convert.ts.currency <- function(
ts,
from.ccy,
to.ccy = 'USD'
) {
# Get Exchange rate
fx = get.exchange.rates( from.ccy = from.ccy, to.ccy = to.ccy, start.date = first.date(ts) - 7, end.date = last.date(ts) + 7 );
fx = fill.missing( fx, end.date = last.date(ts) + 7 );
fx = ts.range( fx, start.date = first.date(ts), end.date = last.date(ts) );
# Fill missing values
mts = fill.missing( merge.ts( ts, fx ) );
# Multiply FX rate by original time series
new.ts = mts[,1] * mts[,2];
# Keep the original column name and original dates
colnames(new.ts) = colnames(ts);
new.ts = new.ts[ rownames(ts), ];
return( new.ts );
};
| /finance/currency.R | no_license | cmm801/r-finance | R | false | false | 9,787 | r |
###########################################################################################
# Routine: get.fx.spot
###########################################################################################
get.fx.spot <- function(
from.ccy,
to.ccy = NULL,
asOfDate = last.biz.date()
) {
sd = asOfDate %m-% months(1);
ed = asOfDate %m+% months(1);
fx.rate = get.exchange.rates( from.ccy = from.ccy, to.ccy = to.ccy, start.date = sd, end.date = ed );
fx.spot = as.numeric( fill.missing( fx.rate, start.date = sd, end.date = ed )[ asOfDate ] );
return( fx.spot );
};
###########################################################################################
# Routine: get.forward.points
###########################################################################################
get.forward.points <- function(
symbol,
forward.dates = NULL,
pricing.date = NULL,
BidAsk = 'Mid'
) {
all.fx.data = read.csv( FORWARD_RATES_RECENT_CSV, sep = '\t', header = TRUE, stringsAsFactors = FALSE );
current.data = all.fx.data[ all.fx.data$Symbol == symbol, ];
asOfDates = current.data$AsOfDate;
if( is.null(pricing.date ) ) {
fwd.points = current.data[ asOfDates == max( asOfDates ), ];
} else {
date.diffs = as.numeric(as.Date(asOfDates)) - as.numeric(as.Date(pricing.date ));
fwd.points = current.data[ abs(date.diffs) == min( abs( date.diffs ) ), ];
};
# Select appropriate data columns, and create Mid quote if necessary
if( BidAsk == 'Mid' ) {
fwd.points = data.frame( Forward.Date = fwd.points$Forward.Date, Mid = (fwd.points$Bid + fwd.points$Ask ) / 2 );
} else {
fwd.points = fwd.points[, c( 'Forward.Date', BidAsk ) ];
};
if( !is.null(forward.dates) ) {
interp = approx( as.Date( fwd.points$Forward.Date ), fwd.points[,BidAsk ], forward.dates );
fwd.points = data.frame( interp$x, interp$y );
colnames( fwd.points ) = c( 'Forward.Date', BidAsk );
};
return( fwd.points );
};
###########################################################################################
# Routine: get.fx.forward.points.rescale.factor
#
# Get amount by which forward points are rescaled, corresponding to quoting convention decimal places
###########################################################################################
get.fx.forward.points.rescale.factor <- function( symbol ) {
fx.info = read.csv( EXCHANGE_RATE_INFO_CSV, header = TRUE, stringsAsFactors = FALSE );
rescale = 10^as.numeric( fx.info[ fx.info$Symbol == symbol, 'Decimals' ] );
return( rescale );
};
###########################################################################################
# Routine: get.forward.rates
###########################################################################################
get.forward.rates <- function(
symbol,
forward.dates = NULL,
pricing.date = today(),
BidAsk = 'Ask'
) {
fwd.points = get.forward.points( symbol, forward.dates = forward.dates, pricing.date = pricing.date, BidAsk = BidAsk );
exchange.rates = get.exchange.rates( symbol, start.date = pricing.date %m-% months(1), end.date = pricing.date %m+% months(1) );
exchange.rates = fill.missing( exchange.rates );
date.diffs = as.numeric(pricing.date) - as.numeric(as.Date(row.names( exchange.rates ) ) );
loc = max( which( date.diffs >= 0 ) );
fx.spot = exchange.rates[loc];
rescale = get.fx.forward.points.rescale.factor(symbol);
fwd.rates = data.frame( fwd.points$Forward.Date, fx.spot + fwd.points[,BidAsk]/rescale );
colnames( fwd.rates ) = c( 'Forward.Date', BidAsk );
return( fwd.rates );
};
###########################################################################################
# Routine: calc.forward.prices
###########################################################################################
calc.forward.prices <- function(
symbol,
forward.dates,
fwd.contract.rates,
pricing.date = today(),
BidAsk = 'Ask'
) {
fwd.rates = get.forward.rates( symbol, forward.dates = forward.dates, pricing.date = pricing.date, BidAsk = BidAsk );
prices = numeric( length( forward.dates ) );
for( i in 1:nrow( fwd.rates ) ) {
loc = which( forward.dates == fwd.rates$Forward.Date[i] );
prices = fwd.rates[i,BidAsk] - fwd.contract.rates[loc];
};
return( prices );
};
###########################################################################################
# Routine: calc.fx.forward.prices.ts
#
# Takes a currency cross symbol (e.g., EUR/USD, USD/JPY) as input, and returns time series
# of the prices
###########################################################################################
calc.fx.forward.prices.ts <- function(
symbol,
forward.date,
fwd.contract.rate,
start.date = today() %m-% years(10),
end.date = today(),
BidAsk = 'Mid'
) {
current.data = read.csv( FORWARD_RATES_RECENT_CSV, sep = '\t', header = TRUE, stringsAsFactors = FALSE );
# Restrict data frame to relevant currency
current.data = current.data[ current.data$Symbol == symbol, ];
# Select appropriate data columns, and create Mid quote if necessary
if( BidAsk == 'Mid' ) {
current.data = cbind( current.data, data.frame( Mid = (current.data$Bid + current.data$Ask ) / 2 ) );
};
# Remove all but necessary columns
data = current.data[, c( 'AsOfDate', 'Forward.Date', BidAsk ) ];
colnames( data ) = c( 'AsOfDate', 'Forward.Date', 'Points' );
# Get rescale factor, based on number of decimals in forward point quote
rescale = get.fx.forward.points.rescale.factor(symbol);
data$Points = data$Points / rescale;
# Create time series where each column is a different pricing date
uniq.dates = unique( data$AsOfDate );
df = c();
for( i in 1:length(uniq.dates) ) {
row.data = data[ data$AsOfDate == uniq.dates[i], ];
interp = approx( as.Date( row.data$Forward.Date ), row.data$Points, forward.date );
new.df = data.frame( Date = uniq.dates[i], Value = interp$y );
df = rbind( df, new.df );
};
# Convert forward pt data frame to time series
ts = as.timeSeries(df);
# Get exchange rates, and merge time series with the forward points
exchange.rates = get.exchange.rates( symbol, start.date = start.date %m-% months(1), end.date = end.date );
mts = nanmerge( ts, exchange.rates, rm.na = F );
mts = fill.missing(mts, end.date = end.date );
first.val = mts[min( which(!is.na(mts[,1])) ),1];
mts[is.na(mts[,1]),1] = first.val;
# Forward rates = ( Spot FX rate ) + ( Forward Points ) / ( Rescale factor )
forward.rates = mts[,1] + mts[,2];
forward.prices = forward.rates - fwd.contract.rate;
# Only keep points between start and end dates
forward.prices = ts.range( forward.prices, start.date, end.date );
return( forward.prices );
};
###########################################################################################
# Routine: get.time.series.fx.forwards
###########################################################################################
get.time.series.fx.forwards <- function(
fx.fwd.symbols,
start.date = today() %m-% years(10),
end.date = today(),
BidAsk = 'Mid',
target.ccy = 'USD'
) {
if( is.empty( fx.fwd.symbols ) ) {
return( NULL )
};
all.fwd.info = parse.fx.forward.symbols( fx.fwd.symbols );
ts = timeSeries();
for( i in 1:nrow(all.fwd.info) ) {
fwd.info = all.fwd.info[i,];
# Get the FX forward price time series. This time series is denominated in the bottom currency of the pair.
# For example, the forward price for USD/JPY would be denominated in JPY
raw.ts = calc.fx.forward.prices.ts( symbol = fwd.info$Currency.Cross, forward.date = fwd.info$Settlement.Date,
fwd.contract.rate = fwd.info$Settlement.Rate, start.date = start.date, end.date = end.date, BidAsk = BidAsk );
# Figure out the long and short legs of the forward
fx.info = parse.currency.pair.symbols( fwd.info$Currency.Cross );
# Convert the currency of the forward time series to the target currency
new.ts = convert.ts.currency( raw.ts, from.ccy = fx.info$Long.Currency, to.ccy = target.ccy );
colnames( new.ts ) = fx.fwd.symbols[i];
ts = merge.ts( ts, new.ts );
};
return( ts );
};
###########################################################################################
# Routine: convert.currency
###########################################################################################
convert.currency <- function(
vals,
from.ccy,
to.ccy = 'USD',
asOfDate = last.biz.date()
) {
if( from.ccy == to.ccy ) {
return( vals );
} else {
# Create a time series on the asOfDate
ts = timeSeries( 1, asOfDate );
# Find time series in new currency
spot.rate = convert.ts.currency( ts, from.ccy = from.ccy, to.ccy = to.ccy );
# Convert result back to an array, and return the result
new.vals = vals * as.numeric(spot.rate);
return( new.vals );
};
};
###########################################################################################
# Routine: convert.ts.currency
###########################################################################################
convert.ts.currency <- function(
ts,
from.ccy,
to.ccy = 'USD'
) {
# Get Exchange rate
fx = get.exchange.rates( from.ccy = from.ccy, to.ccy = to.ccy, start.date = first.date(ts) - 7, end.date = last.date(ts) + 7 );
fx = fill.missing( fx, end.date = last.date(ts) + 7 );
fx = ts.range( fx, start.date = first.date(ts), end.date = last.date(ts) );
# Fill missing values
mts = fill.missing( merge.ts( ts, fx ) );
# Multiply FX rate by original time series
new.ts = mts[,1] * mts[,2];
# Keep the original column name and original dates
colnames(new.ts) = colnames(ts);
new.ts = new.ts[ rownames(ts), ];
return( new.ts );
};
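A minimal usage sketch for the two converters above. The symbols, values, and dates are hypothetical, and it assumes the surrounding library (with `timeSeries`, `get.exchange.rates`, `fill.missing`, and the other helpers) is loaded:

```r
# Hypothetical example: convert EUR-denominated values to USD.
library(timeSeries)

# Scalar conversion at the latest business date
usd.vals = convert.currency(c(100, 250), from.ccy = 'EUR', to.ccy = 'USD');

# Time-series conversion: each observation is multiplied by the matching FX rate
eur.ts = timeSeries(c(100, 101, 99), as.Date('2020-01-01') + 0:2, units = 'PX');
usd.ts = convert.ts.currency(eur.ts, from.ccy = 'EUR', to.ccy = 'USD');
```

Note that `convert.currency` delegates to `convert.ts.currency` by wrapping the scalar in a one-point time series on `asOfDate`, so both paths use the same FX lookup.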
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/core.R
\name{h3_to_geo}
\alias{h3_to_geo}
\title{Get Hexagon Center Coordinates}
\usage{
h3_to_geo(data, ...)
}
\arguments{
\item{data}{A data.frame or character vector,
if the former is specified then the \code{hex}
argument \emph{must} be passed, specifying the
bare column name containing hexagon indices.}
\item{...}{Other arguments passed to methods.}
}
\value{
The \code{data} as a \link[tibble]{tibble} with the corresponding
hexagon centers added as \code{hex_center_lat} and \code{hex_center_lon} columns;
if \code{data} is a data.frame, the coordinates are bound to it.
}
\description{
Get Hexagon Center Coordinates
}
\examples{
geo_to_h3(quakes, lat, long) \%>\%
h3_to_geo(hex = hex)
}
| /man/h3_to_geo.Rd | permissive | JohnCoene/h3inr | R | false | true | 772 | rd |
#' @title Custom Dimensions
#' @description Lists custom dimensions to which the user has access.
#' @param accountId character. Account ID for the custom dimensions to retrieve.
#' @param webPropertyId character. Web property ID for the custom dimensions to retrieve.
#' @param max.results integer. The maximum number of custom dimensions to include in this response.
#' @param start.index integer. An index of the first entity to retrieve. Use this parameter as a pagination mechanism along with the max-results parameter.
#' @param token \code{\link[httr]{Token2.0}} class object with a valid authorization data.
#' @return
#' \item{kind}{Kind value for a custom dimension. Set to "analytics#customDimension". It is a read-only field.}
#' \item{id}{Custom dimension ID.}
#' \item{accountId}{Account ID.}
#' \item{webPropertyId}{Property ID.}
#' \item{name}{Name of the custom dimension.}
#' \item{index}{Index of the custom dimension.}
#' \item{scope}{Scope of the custom dimension: HIT, SESSION, USER or PRODUCT.}
#' \item{active}{Boolean indicating whether the custom dimension is active.}
#' \item{created}{Time the custom dimension was created.}
#' \item{updated}{Time the custom dimension was last modified.}
#' @references \href{https://developers.google.com/analytics/devguides/config/mgmt/v3/mgmtReference/management/customDimensions}{Management API - Custom Dimensions Overview}
#' @family Management API
| /man-roxygen/list_customDimensions.R | no_license | selesnow/RGA | R | false | false | 1,417 | r |
---
title: "R Notebook"
output: html_notebook
---
This is an [R Markdown](http://rmarkdown.rstudio.com) Notebook. When you execute code within the notebook, the results appear beneath the code.
Try executing this chunk by clicking the *Run* button within the chunk or by placing your cursor inside it and pressing *Cmd+Shift+Enter*.
```{r}
#BEST SUBSET SELECTION
library(ISLR)
names(Hitters)
#Redefine the data set, leaving out rows with missing values
Hitters=na.omit(Hitters)
dim(Hitters)
sum(is.na(Hitters))
```
```{r}
library(leaps)
regfit.full = regsubsets(Salary~., Hitters)
summary(regfit.full) #An asterisk indicates that a given variable is included in the corresponding model. This output indicates that the best two-variable model contains only Hits & CRBI. regsubsets() only reports results up to the best eight-variable model.
```
```{r}
regfit.full=regsubsets(Salary~.,data=Hitters, nvmax=19) #returns as many variables as are desired
reg.summary=summary(regfit.full)
names(reg.summary)
reg.summary$rsq
```
```{r}
par(mfrow=c(2,2))
plot(reg.summary$rss, xlab="Number of Variables", ylab="RSS", type="l")
plot(reg.summary$adjr2, xlab="Number of Variables", ylab="Adjusted RSq", type="l")
which.max(reg.summary$adjr2)
points(11, reg.summary$adjr2[11], col="red", cex=2, pch=20)
```
```{r}
par(mfrow=c(2,2))
plot(reg.summary$cp, xlab="Number of Variables", ylab="Cp", type="l")
which.min(reg.summary$cp)
points(10, reg.summary$cp[10], col="red", cex=2, pch=20)
which.min(reg.summary$bic)
plot(reg.summary$bic, xlab="Number of Variables", ylab="BIC", type="l")
points(6, reg.summary$bic[6], col="red", cex=2, pch=20)
```
```{r}
plot(regfit.full, scale="r2")
plot(regfit.full, scale="adjr2")
plot(regfit.full, scale="Cp")
plot(regfit.full, scale="bic")
coef(regfit.full,6)
```
Add a new chunk by clicking the *Insert Chunk* button on the toolbar or by pressing *Cmd+Option+I*.
When you save the notebook, an HTML file containing the code and output will be saved alongside it (click the *Preview* button or press *Cmd+Shift+K* to preview the HTML file).
The preview shows you a rendered HTML copy of the contents of the editor. Consequently, unlike *Knit*, *Preview* does not run any R code chunks. Instead, the output of the chunk when it was last run in the editor is displayed.
| /Linear Regression in High Dimensions/Best Subset Selection.R | permissive | shilpakancharla/machine-learning-tutorials | R | false | false | 2,290 | r |
testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.31584307882366e+77, 9.53818252170339e+295, 1.22810536108214e+146, 4.12396251261199e-221, 0), .Dim = c(5L, 1L)))
result <- do.call(CNull:::communities_individual_based_sampling_beta,testlist)
str(result) | /CNull/inst/testfiles/communities_individual_based_sampling_beta/AFL_communities_individual_based_sampling_beta/communities_individual_based_sampling_beta_valgrind_files/1615834750-test.R | no_license | akhikolla/updatedatatype-list2 | R | false | false | 270 | r |
#' visualize structure of assembled transcripts
#'
#' @param gene name of gene whose transcripts will be plotted. When using Cufflinks output, usually of the form \code{"XLOC_######"}
#' @param gown ballgown object containing experimental and phenotype data
#' @param samples vector of sample(s) to plot. Can be \code{'none'} if only one plot (showing transcript structure in gray) is desired. Use \code{sampleNames(gown)} to see sample names for \code{gown}. Defaults to \code{sampleNames(gown)[1]}.
#' @param colorby one of \code{"transcript"}, \code{"exon"}, or \code{"none"}, indicating which feature's abundances should dictate plot coloring. If \code{"none"}, all transcripts are drawn in gray.
#' @param meas which expression measurement to color features by, if any. Must match an available measurement for whatever feature you're plotting.
#' @param legend if \code{TRUE} (as it is by default), a color legend is drawn on top of the plot indicating scales for feature abundances.
#' @param labelTranscripts if \code{TRUE}, transcript ids are labeled on the left side of the plot. Default \code{FALSE}.
#' @param main optional string giving the desired plot title.
#' @param colorBorders if \code{TRUE}, exon borders are also drawn in color (instead of black, as they are by default). Useful for visualizing abundances for skinny exons and/or smaller plots, as often happens when \code{length(samples)} is large.
#' @param log if \code{TRUE}, color transcripts on the log scale. Default \code{FALSE}. To account for expression values of 0, we add 1 to all expression values before taking the log.
#' @param logbase log base to use if \code{log = TRUE}. default 2.
#' @param customCol an optional vector of custom colors to color transcripts by. there must be the same number of colors as transcripts in the gene being plotted.
#' @return produces a plot of the transcript structure for the specified gene in the current graphics device.
#' @seealso \code{\link{plotMeans}}
#' @author Alyssa Frazee
#' @export
plotTranscripts = function(gene, gown, samples = NULL,
colorby = 'transcript', meas = 'FPKM', legend = TRUE,
labelTranscripts = FALSE, main = NULL, colorBorders = FALSE,
log = FALSE, logbase = 2, customCol=NULL){
if(class(gown)!="ballgown") stop("gown must be a ballgown object")
if(is.null(samples)){
samples = sampleNames(gown)[1]
if(colorby!='none') message(paste('defaulting to sample',samples))
}
stopifnot(colorby %in% c('transcript', 'exon', 'none'))
if(colorby == 'transcript'){
stopifnot(meas %in% c('cov', 'FPKM'))
}
if(colorby == 'exon'){
stopifnot(meas %in% c('rcount', 'ucount', 'mrcount', 'cov', 'cov_sd', 'mcov', 'mcov_sd'))
}
if(colorby=="none") legend = FALSE
if(!is.null(customCol) & (colorby!="transcript")){
stop("Custom coloring is only available at transcript level currently")
}
n = length(samples)
westval = ifelse(labelTranscripts, 4, 2)
if(n > 1){
numrows = round(sqrt(n))
numcols = ceiling(n/numrows)
par(mfrow=c(numrows, numcols), mar=c(5, westval, 4, 2), oma = c(0, 0, 2, 0))
}else{
par(mar=c(5, westval, 4, 2))
}
ma = IRanges::as.data.frame(structure(gown)$trans)
thetranscripts = indexes(gown)$t2g$t_id[indexes(gown)$t2g$g_id==gene]
if(substr(ma$element[1],1,2) == "tx"){
warning('your ballgown object was built with a deprecated version of ballgown - would probably be good to re-build!')
thetranscripts = paste0('tx',thetranscripts)
}
if(!is.null(customCol) & (length(customCol)!=length(thetranscripts))){
stop("You must have the same number of custom colors as transcripts")
}
gtrans = subset(ma, element %in% thetranscripts)
xax = seq(min(gtrans$start), max(gtrans$end), by=1)
numtx = length(unique(thetranscripts))
ymax = ifelse(legend, numtx+1.5, numtx+1)
if(length(unique(gtrans$seqnames)) > 1) stop("Your gene appears to span multiple chromosomes, which is interesting but also kind of annoying, R-wise. Please choose another gene until additional functionality is added!")
if(length(unique(gtrans$strand)) > 1) stop("Your gene appears to contain exons from both strands, which is potentially interesting but also kind of confusing, so please choose another gene until we figure this sucker out.")
# set appropriate color scale:
if(colorby != 'none'){
g_id = texpr(gown, 'all')$gene_id
if(colorby == "transcript"){
smalldat = texpr(gown, meas)[which(g_id == gene),]
t_id = texpr(gown, 'all')$t_id[which(g_id == gene)]
}
if(colorby == "exon"){
e_id_full = eexpr(gown, 'all')$e_id
smalldat = eexpr(gown, meas)[which(e_id_full %in% gtrans$id),]
e_id = e_id_full[which(e_id_full %in% gtrans$id)]
}
if(numtx == 1){
snames = names(smalldat)
smalldat = matrix(smalldat, nrow=1)
names(smalldat) = snames
}
if(log){
smalldat = log(smalldat+1, base=logbase)
}
maxcol = quantile(as.matrix(smalldat), 0.99)
colscale = seq(0, maxcol, length.out=200)
introntypes = unique(as.character(sapply(names(data(gown)$intron)[-c(1:5)], gettype)))
color.introns = meas %in% introntypes
}else{
color.introns = FALSE
}
# make plot(s) (one for each sample):
for(s in 1:n){
## make base plot:
plot(xax, rep(0,length(xax)), ylim=c(0,ymax),
type="n", xlab="genomic position", yaxt = "n", ylab="")
if(n > 1){
title(samples[s])
}
colName = paste(meas, samples[s], sep='.')
# draw transcripts
for(tx in unique(gtrans$element)){
if(colorby == "transcript"){
colIndex = which(names(smalldat) == colName)
mycolor = closestColor(smalldat[,colIndex][which(t_id==tx)], colscale)
}
if(colorby == "none") mycolor = "gray70"
if(!is.null(customCol)){
mycolor=customCol[which(unique(gtrans$element)==tx)]
}
txind = which(unique(gtrans$element)==tx)
gtsub = gtrans[gtrans$element==tx,]
gtsub = gtsub[order(gtsub$start),]
for(exind in 1:dim(gtsub)[1]){
if(colorby == "exon") mycolor = closestColor(smalldat[,colIndex][which(e_id==gtsub$id[exind])], colscale)
borderCol = ifelse(colorBorders, mycolor, 'black')
polygon(x=c(gtsub$start[exind], gtsub$start[exind],
gtsub$end[exind], gtsub$end[exind]),
y=c(txind-0.4,txind+0.4,txind+0.4,txind-0.4),
col=mycolor, border=borderCol)
if(exind!=dim(gtsub)[1]){
if(!color.introns) lines(c(gtsub$end[exind],gtsub$start[exind+1]),c(txind, txind), lty=2, col="gray60")
if(color.introns){
intronindex = which(data(gown)$intron$start == gtsub$end[exind]+1 & data(gown)$intron$end == gtsub$start[exind+1]-1 & data(gown)$intron$chr==unique(gtsub$seqnames) & data(gown)$intron$strand == unique(gtsub$strand))
icolumnind = which(names(data(gown)$intron) == colName)
icol = closestColor(data(gown)$intron[intronindex,icolumnind], colscale)
lines(c(gtsub$end[exind]+10,gtsub$start[exind+1]-10),c(txind, txind), lwd=3, col=icol)
lines(c(gtsub$end[exind],gtsub$start[exind+1]),c(txind+0.1, txind+0.1), lwd=0.5, col="gray60")
lines(c(gtsub$end[exind],gtsub$start[exind+1]),c(txind-0.1, txind-0.1), lwd=0.5, col="gray60")
}#end if color.introns
}# end if exind != last exon
}# end loop over exons
}# end loop over transcripts
# draw the legend:
if(legend){
leglocs = seq(min(xax)+1, max(xax)-1, length=length(colscale)+1)
for(i in 1:length(colscale)){
polygon(x = c(leglocs[i], leglocs[i], leglocs[i+1], leglocs[i+1]), y = c(ymax-0.3, ymax, ymax, ymax-0.3), border=NA, col = rev(heat.colors(length(colscale)))[i])
}
text(x = seq(min(xax)+1, max(xax)-1, length = 10), y = rep(ymax+0.1, 10), labels = round(colscale,2)[seq(1,length(colscale), length=10)], cex=0.5 )
if(log){
text(x = median(xax), y = ymax-0.5, labels=paste("log expression, by", colorby), cex=0.5)
}else{
text(x = median(xax), y = ymax-0.5, labels=paste("expression, by", colorby), cex=0.5)
}
}
# label the transcripts on the y-axis (if asked for)
if(labelTranscripts){
axis(side=2, at=c(1:numtx), labels=unique(gtrans$element), cex.axis=0.75, las=1)
}
} #end loop over samples
# put title on plot
if(!is.null(main)){
title(main, outer=(n>1))
}else{
if(n==1){
title(paste0(gene,': ',samples))
}else{
title(gene, outer=TRUE)
}
}
}
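A hedged usage sketch for the function above. The ballgown object `bg`, sample names, and gene id are placeholders, not values from the source:

```r
# Hypothetical example: plot one gene's transcripts for two samples,
# colored by transcript-level FPKM on the log2 scale.
# Assumes a ballgown object `bg` built with ballgown().
plotTranscripts('XLOC_000454', bg, samples = c('sample01', 'sample02'),
    meas = 'FPKM', colorby = 'transcript',
    log = TRUE, labelTranscripts = TRUE)
```

With multiple samples the function draws one panel per sample (via `par(mfrow=...)`) and shares a single color scale, so abundances are comparable across panels.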
| /R/plotTranscripts.R | permissive | jtleek/ballgown | R | false | false | 9,252 | r |
#' Replace an NNTable object with an empty one if no data is present
#'
#' @param .NNTable An \code{NNTable} generated by \code{\link{NNTable}}
#'
#' @return An object of class \code{NNTable} without any data
#' @keywords internal
checkEmpty <- function(.NNTable){
old_NNTable <- .NNTable
if (nrow(.NNTable$data) == 0) {
empty_output <- as_tibble("There is no data for this output")
.NNTable <- NNTable(empty_output, " " = "value")
if ("wrapping" %in% names(old_NNTable))
.NNTable$wrapping <- old_NNTable$wrapping
if (isTRUE(.NNTable$wrapping$remove_empty_footnote))
.NNTable$wrapping$footer_orig <- character(0)
}
return(.NNTable)
}
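A hypothetical sketch of the behavior above; the zero-row input shown here is an assumption for illustration, and `NNTable()` must be available from the surrounding package:

```r
# Hypothetical sketch: an NNTable built from a zero-row data.frame is replaced
# by a one-cell placeholder table; a non-empty table passes through unchanged.
tbl <- NNTable(data.frame(value = character(0)), " " = "value")
tbl <- checkEmpty(tbl)   # now renders "There is no data for this output"
```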
| /R/checkEmpty.R | permissive | NNpackages/NNtable | R | false | false | 676 | r |
context("Azcopy with MD5")
tenant <- Sys.getenv("AZ_TEST_TENANT_ID")
app <- Sys.getenv("AZ_TEST_APP_ID")
cli_app <- Sys.getenv("AZ_TEST_NATIVE_APP_ID")
password <- Sys.getenv("AZ_TEST_PASSWORD")
subscription <- Sys.getenv("AZ_TEST_SUBSCRIPTION")
if(tenant == "" || app == "" || password == "" || subscription == "")
skip("Authentication tests skipped: ARM credentials not set")
rgname <- Sys.getenv("AZ_TEST_STORAGE_RG")
storname <- Sys.getenv("AZ_TEST_STORAGE_NOHNS")
if(rgname == "" || storname == "")
skip("Azcopy client tests skipped: resource names not set")
set_azcopy_path()
if(is.null(.AzureStor$azcopy) || is.na(.AzureStor$azcopy))
skip("Azcopy tests skipped: not detected")
if(Sys.getenv("_R_CHECK_CRAN_INCOMING_") != "")
skip("Azcopy tests skipped: tests being run from devtools::check")
opt_sil <- getOption("azure_storage_azcopy_silent")
options(azure_storage_azcopy_silent="TRUE")
stor <- AzureRMR::az_rm$new(tenant=tenant, app=app, password=password)$
get_subscription(subscription)$
get_resource_group(rgname)$
get_storage_account(storname)
sas <- stor$get_account_sas(permissions="rwdla")
bl <- stor$get_blob_endpoint(key=NULL, sas=sas, token=NULL)
test_that("azcopy works with put-md5 and check-md5",
{
contname <- make_name()
cont <- create_blob_container(bl, contname)
expect_silent(upload_blob(cont, "../resources/iris.csv", put_md5=TRUE, use_azcopy=TRUE))
md5 <- encode_md5(file("../resources/iris.csv"))
lst <- list_blobs(cont, info="all")
expect_identical(lst[["Content-MD5"]], md5)
dl_file <- file.path(tempdir(), make_name())
expect_silent(download_blob(cont, "iris.csv", dl_file, check_md5=TRUE, use_azcopy=TRUE))
dl_md5 <- encode_md5(file(dl_file))
expect_identical(md5, dl_md5)
})
teardown(
{
options(azure_storage_azcopy_silent=opt_sil)
conts <- list_blob_containers(bl)
lapply(conts, delete_blob_container, confirm=FALSE)
})
| /tests/testthat/test13a_azcopy_md5.R | permissive | Azure/AzureStor | R | false | false | 1,950 | r |
#' Expect args in the format of:
#' Rscript geocode.R <outputFile> <MAPQUEST_KEY>
args <- commandArgs(trailingOnly = TRUE)
if (length(args) >= 2){
options(`MAPQUEST_KEY` = args[2])
}
if (length(args) < 1){
stop ("Must provide the filename as the first argument to the script.")
}
scrapedFile <- args[1]
library(httr)
library(RJSONIO)
#' @param address A character or character vector of addresses to look up. Only
#' the unique addresses will be sent to the API.
#' @param key The key for the Mapquest API (NOT URL encoded).
#' @param open If TRUE will use Mapquest's open (OpenStreetMaps) API, if FALSE,
#' will use their commercial API (which has a more stringent quota, so we only
#' want to use it sparingly).
#' @return A data.frame with columns for the addresses (an in-order copy of the
#' vector provided as input complete with any redundant addresses), the zip
#' code, the latitude, and the longitude.
geocode <- function(address, key=getOption("MAPQUEST_KEY"), open=TRUE){
toReturn <- data.frame(address = address)
toReturn$zip <- NA
toReturn$lat <- NA
toReturn$long <- NA
if (is.null(key) || key == ""){
stop("You must provide a mapquest key.")
}
address <- unique(address)
# These JSON libraries are proving useless, so we'll just serialize it ourselves.
json <- paste0('{locations:[',
paste0(paste0('{street:"', address, '",city:"Dallas",state:"TX"}'),
collapse=",")
,']}')
prefix <- ifelse(open, "open", "www")
url <- paste0("http://",prefix,".mapquestapi.com/geocoding/v1/batch?&key=",
key, "&json=", URLencode(json))
#print(paste("Getting: ", url ))
info <- content(GET(url))
for (i in seq_along(info$results)){
res <- info$results[[i]]
rowInd <- which(res$providedLocation$street == toReturn$address)
if (length(res$locations) >= 1){
toReturn[rowInd,"zip"] <- res$locations[[1]]$postalCode
toReturn[rowInd,"lat"] <- res$locations[[1]]$latLng$lat
toReturn[rowInd,"long"] <- res$locations[[1]]$latLng$lng
}
}
toReturn
}
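# Example usage (illustrative only; the key and the addresses below are
# placeholders, not values from this project):
#   options(MAPQUEST_KEY = "your-key-here")
#   geo <- geocode(c("100 Main", "200 Elm and Commerce"))
#   geo   # data.frame with columns: address, zip, lat, long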
data <- read.table(scrapedFile, sep="\t")
# Currently we have one extra column in front.
data <- data[,-1]
colnames(data) <- c("Incident", "Division", "Nature", "Priority", "DateTime",
"UnitNum", "Block", "Street", "Beat", "ReportingArea",
"Status", "UpdateTime")
data$Lat <- NA
data$Long <- NA
data$Zip <- NA
addresses <- apply(data, 1, function(thisRow){
address <- NULL
if (is.na(thisRow["Block"])){
address <- thisRow["Street"]
} else{
address <- paste(thisRow["Block"], thisRow["Street"])
}
address <- sub(" / ", " and ", address)
address
})
if (length(unique(data$UpdateTime)) > 1){
# Data spans more than one update. Could have duplicates or missed data.
# Just omit.
stop("Data spans multiple updates.")
}
# Provide whatever addresses we can from the cache
try({
# return() at top level would error; guard with file.exists() instead.
if (file.exists("addressCache.Rds")){
addressCache <- readRDS("addressCache.Rds")
matchInd <- match(addresses, addressCache$addresses)
message(sum(!is.na(matchInd)), "/", nrow(data),
" addresses filled from the cache.")
data$Zip <- addressCache[matchInd, "Zip"]
data$Lat <- addressCache[matchInd, "Lat"]
data$Long <- addressCache[matchInd, "Long"]
}
}, silent=TRUE)
# Get whatever data we can from the open API
try({
naRows <- is.na(data$Lat)
geoOpen <- geocode(addresses[naRows], open=TRUE)
message(sum(!is.na(geoOpen$lat)), "/", sum(naRows),
" addresses filled via the open API.")
data[naRows, "Zip"] <- geoOpen$zip
data[naRows, "Lat"] <- geoOpen$lat
data[naRows, "Long"] <- geoOpen$long
}, silent=TRUE)
# Try to get any missing data from the commercial API
try({
naRows <- is.na(data$Lat)
geoComm <- geocode(addresses[naRows], open=FALSE)
message(sum(!is.na(geoComm$lat)), "/", sum(naRows),
" addresses filled via the commercial API.")
data[naRows,"Zip"] <- geoComm$zip
data[naRows,"Lat"] <- geoComm$lat
data[naRows,"Long"] <- geoComm$long
}, silent=TRUE)
# Cache the addresses on disk so that we don't have to look up the addresses
# that we already looked up in the previous minute.
addressCache <- data[,c("Block", "Street", "Lat", "Long", "Zip")]
addressCache <- cbind(addresses, addressCache)
addressCache <- unique(addressCache)
saveRDS(addressCache, file="addressCache.Rds")
write.csv(data, paste0("out-", as.integer(Sys.time()), ".csv"), row.names=FALSE)
| /scraper/geocode.R | permissive | smartinsightsfromdata/dallas-police | R | false | false | 4,521 | r |
fRegress.double <- function(y, xfdlist, betalist, wt=NULL,
y2cMap=NULL, SigmaE=NULL, returnMatrix=FALSE, ...)
{
# FREGRESS.DOUBLE Fits a scalar dependent variable using the concurrent
# functional regression model using inner products
# of functional covariates and functional regression
# functions.
#
# Arguments:
# Y ... an object for the dependent variable,
# which must be a numeric vector
# XFDLIST ... a list object of length p with each list
# containing an object for an independent variable.
# the object may be:
# a functional data object or
# a vector
# if XFDLIST is a functional data object or a vector,
# it is converted to a list of length 1.
# BETALIST ... a list object of length p with each list
# containing a functional parameter object for
# the corresponding regression function. If any of
# these objects is a functional data object, it is
# converted to the default functional parameter object.
# if BETALIST is a functional parameter object
# it is converted to a list of length 1.
# WT ... a vector of nonnegative weights for observations
# Y2CMAP ... the matrix mapping from the vector of observed values
# to the coefficients for the dependent variable.
# This is output by function SMOOTH_BASIS. If this is
# supplied, confidence limits are computed, otherwise not.
# SIGMAE ... Estimate of the covariances among the residuals. This
# can only be estimated after a preliminary analysis
# with fRegress.
# RETURNMATRIX ... If False, a matrix in sparse storage model can be returned
# from a call to function BsplineS. See this function for
# enabling this option.
#
# Returns LIST ... A list containing members with names:
# yfdobj ... first argument of fRegress
# xfdlist ... second argument of fRegress
# betalist ... third argument of fRegress
# betaestlist ... estimated regression functions
# yhatfdobj ... functional data object containing fitted functions
# Cmatinv ... inverse of the coefficient matrix, needed for
# function fRegress.stderr that computes standard errors
# wt ... weights for observations
# df ... degrees of freedom for fit
# This list object is assigned the class name "fRegress".
# Function predict.fRegress is an example of a method that can be called
# simply as predict(fRegressList), where fRegressList can be any object of
# class "fRegress".
# Last modified 16 December 2020 by Jim Ramsay
# check Y and compute sample size N
if (!inherits(y, "numeric")) stop("Y is not a numeric vector.")
# ----------------------------------------------------------------
# yfdobj is scalar or multivariate
# ----------------------------------------------------------------
# As of 2020, if yfd is an fdPar object, it is converted to an fd object.
# The added structure of the fdPar class is not used in any of the fRegress codes.
# The older versions of fda package used yfdPar as the name for the first member.
arglist <- fRegressArgCheck(y, xfdlist, betalist, wt)
yfdobj <- arglist$yfd
xfdlist <- arglist$xfdlist
betalist <- arglist$betalist
wt <- arglist$wt
ymat <- as.matrix(y)
N <- dim(ymat)[1]
p <- length(xfdlist)
Zmat <- NULL
Rmat <- NULL
pjvec <- rep(0,p)
ncoef <- 0
for (j in 1:p) {
xfdj <- xfdlist[[j]]
if (!inherits(xfdj, "fd")) {
stop(paste("Independent variable",j,"is not of class fd."))
}
xcoef <- xfdj$coefs
xbasis <- xfdj$basis
betafdParj <- betalist[[j]]
bbasis <- betafdParj$fd$basis
bnbasis <- bbasis$nbasis
pjvec[j] <- bnbasis
Jpsithetaj <- inprod(xbasis,bbasis)
Zmat <- cbind(Zmat,crossprod(xcoef,Jpsithetaj))
if (betafdParj$estimate) {
lambdaj <- betafdParj$lambda
if (lambdaj > 0) {
Lfdj <- betafdParj$Lfd
Rmatj <- lambdaj*eval.penalty(bbasis,Lfdj)
} else {
Rmatj <- matrix(0,bnbasis,bnbasis)
}
if (ncoef > 0) {
zeromat <- matrix(0,ncoef,bnbasis)
Rmat <- rbind(cbind(Rmat, zeromat),
cbind(t(zeromat), Rmatj))
ncoef <- ncoef + bnbasis
} else {
Rmat <- Rmatj
ncoef <- ncoef + bnbasis
}
}
}
# -----------------------------------------------------------
# set up the linear equations for the solution
# -----------------------------------------------------------
# solve for coefficients defining BETA
if (any(wt != 1)) {
rtwt <- sqrt(wt)
Zmatwt <- Zmat*rtwt
ymatwt <- ymat*rtwt
Cmat <- t(Zmatwt) %*% Zmatwt + Rmat
Dmat <- t(Zmatwt) %*% ymatwt
} else {
Cmat <- t(Zmat) %*% Zmat + Rmat
Dmat <- t(Zmat) %*% ymat
}
eigchk(Cmat)
Cmatinv <- solve(Cmat)
betacoef <- Cmatinv %*% Dmat
# df <- sum(diag(Zmat %*% Cmatinv %*% t(Zmat)))
hatvals = diag(Zmat %*% Cmatinv %*% t(Zmat))
df <- sum(hatvals)
# set up fdPar object for BETAESTFDPAR
betaestlist <- betalist
mj2 <- 0
for (j in 1:p) {
betafdParj <- betalist[[j]]
betafdj <- betafdParj$fd
ncoefj <- betafdj$basis$nbasis
mj1 <- mj2 + 1
mj2 <- mj2 + ncoefj
indexj <- mj1:mj2
betacoefj <- betacoef[indexj]
betaestfdj <- betafdj
betaestfdj$coefs <- as.matrix(betacoefj)
betaestfdParj <- betafdParj
betaestfdParj$fd <- betaestfdj
betaestlist[[j]] <- betaestfdParj
}
# set up fd object for predicted values
yhatmat <- matrix(0,N,1)
for (j in 1:p) {
xfdj <- xfdlist[[j]]
if (inherits(xfdj, "fd")) {
xbasis <- xfdj$basis
xnbasis <- xbasis$nbasis
xrng <- xbasis$rangeval
nfine <- max(501,10*xnbasis+1)
tfine <- seq(xrng[1], xrng[2], len=nfine)
deltat <- tfine[2]-tfine[1]
xmat <- eval.fd(tfine, xfdj, 0, returnMatrix)
betafdParj <- betaestlist[[j]]
betafdj <- betafdParj$fd
betamat <- eval.fd(tfine, betafdj, 0, returnMatrix)
fitj <- deltat*(crossprod(xmat,betamat) -
0.5*(outer(xmat[1, ],betamat[1, ]) +
outer(xmat[nfine,],betamat[nfine,])))
yhatmat <- yhatmat + fitj
} else{
betaestfdParj <- betaestlist[[j]]
betavecj <- betaestfdParj$fd$coefs
yhatmat <- yhatmat + xfdj %*% t(betavecj)
}
}
yhatfdobj <- yhatmat
# Calculate OCV and GCV scores
OCV = sum( (ymat-yhatmat)^2/(1-hatvals)^2 )
GCV = sum( (ymat-yhatmat)^2 )/( (sum(1-hatvals))^2 )
# -----------------------------------------------------------------------
# Compute pointwise standard errors of regression coefficients
# if both y2cMap and SigmaE are supplied.
# -----------------------------------------------------------------------
if (!(is.null(y2cMap) || is.null(SigmaE))) {
# check dimensions of y2cMap and SigmaE
y2cdim <- dim(y2cMap)
if (y2cdim[2] != dim(SigmaE)[1]) stop(
"Dimensions of Y2CMAP not correct.")
# compute linear mapping c2bMap taking coefficients for
# response into coefficients for regression functions
c2bMap <- Cmatinv %*% t(Zmat)
y2bmap <- c2bMap
bvar <- y2bmap %*% as.matrix(SigmaE) %*% t(y2bmap)
betastderrlist <- vector("list",p)
mj2 <- 0
for (j in 1:p) {
betafdParj <- betalist[[j]]
betabasisj <- betafdParj$fd$basis
ncoefj <- betabasisj$nbasis
mj1 <- mj2 + 1
mj2 <- mj2 + ncoefj
indexj <- mj1:mj2
bvarj <- bvar[indexj,indexj]
betarng <- betabasisj$rangeval
nfine <- max(c(501,10*ncoefj+1))
tfine <- seq(betarng[1], betarng[2], len=nfine)
bbasismat <- eval.basis(tfine, betabasisj, 0, returnMatrix)
bstderrj <- sqrt(diag(bbasismat %*% bvarj %*% t(bbasismat)))
bstderrfdj <- smooth.basis(tfine, bstderrj, betabasisj)$fd
betastderrlist[[j]] <- bstderrfdj
}
} else {
betastderrlist = NULL
bvar = NULL
c2bMap = NULL
}
# -----------------------------------------------------------------------
# Set up output list object
# -----------------------------------------------------------------------
fRegressList <-
list(yfdobj = y,
xfdlist = xfdlist,
betalist = betalist,
betaestlist = betaestlist,
yhatfdobj = yhatfdobj,
Cmat = Cmat,
Dmat = Dmat,
Cmatinv = Cmatinv,
wt = wt,
df = df,
GCV = GCV,
OCV = OCV,
y2cMap = y2cMap,
SigmaE = SigmaE,
betastderrlist = betastderrlist,
bvar = bvar,
c2bMap = c2bMap)
class(fRegressList) <- "fRegress"
return(fRegressList)
} | /R/fRegress.double.R | no_license | cran/fda | R | false | false | 9,487 | r |
xm <- data.frame(age=c(25,31,23),
hiight=c(177,163,190),
weight = c(57,69,83),
gender=c("F","F","M"))
row.names(xm)= c("Alex","Lilly","Mark")
xm
#Now Create second Data frame
xm2 <- data.frame(working = c("Yes","No","No"))
xm2
row.names(xm2) = c("Alex","Lilly","Mark")
xm2
# Now combine the data frames
rs<-cbind(xm,xm2)
rs
# Now find the data type of each column
class(rs$age)#Age is numeric type
class(rs$hiight)#Height is numeric type
class(rs$weight)#Weight is numeric type
class(rs$gender)#Gender is Factor type
class(rs$working)#Working is Factor type
# Find the BMI of all and add it as a new column
bmi <- rs$weight / ((rs$hiight)/100)^2
bmi
rs <- cbind(rs,bmi)
rs
# Now find the healthy persons (BMI below 25)
health <- bmi < 25
rs <- cbind(rs,health)
rs
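# Illustrative check (the exact rows depend on the data above):
#   rs[rs$health, ]   # rows for the people whose BMI is below 25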
#Now Read Data from File
getwd()
#/home/ai20/Desktop/common/R/Day4/Data
r1 <- read.table(file.choose())
r1
r2 <- read.table(file.choose())
r2
| /day5/dataFram.R | no_license | Sneha-Santhosh/R | R | false | false | 877 | r |
##' QGIS Algorithm provided by GRASS r.li.shape.ascii (grass7:r.li.shape.ascii)
##'
##' @title QGIS algorithm r.li.shape.ascii
##'
##' @param input `raster` - Name of input raster map. Path to a raster layer.
##' @param config_txt `string` - Landscape structure configuration. String value.
##' @param config `file` - Landscape structure configuration file. Path to a file.
##' @param output_txt `fileDestination` - Shape. Path for new file.
##' @param GRASS_REGION_PARAMETER `extent` - GRASS GIS 7 region extent. A comma delimited string of x min, x max, y min, y max. E.g. '4,10,101,105'. Path to a layer. The extent of the layer is used..
##' @param GRASS_REGION_CELLSIZE_PARAMETER `number` - GRASS GIS 7 region cellsize (leave 0 for default). A numeric value.
##' @param ... further parameters passed to `qgisprocess::qgis_run_algorithm()`
##' @param .complete_output logical specifying if the complete output of `qgisprocess::qgis_run_algorithm()` should be used (`TRUE`) or only the first output (most likely the main one) should be read (`FALSE`). Default value is `TRUE`.
##'
##' @details
##' ## Outputs description
##' * output_txt - outputFile - Shape
##'
##'
##' @export
##' @md
##' @importFrom qgisprocess qgis_run_algorithm qgis_default_value
grass7_r_li_shape_ascii <- function(input = qgisprocess::qgis_default_value(), config_txt = qgisprocess::qgis_default_value(), config = qgisprocess::qgis_default_value(), output_txt = qgisprocess::qgis_default_value(), GRASS_REGION_PARAMETER = qgisprocess::qgis_default_value(), GRASS_REGION_CELLSIZE_PARAMETER = qgisprocess::qgis_default_value(),..., .complete_output = TRUE) {
check_algorithm_necessities("grass7:r.li.shape.ascii")
output <- qgisprocess::qgis_run_algorithm("grass7:r.li.shape.ascii", `input` = input, `config_txt` = config_txt, `config` = config, `output_txt` = output_txt, `GRASS_REGION_PARAMETER` = GRASS_REGION_PARAMETER, `GRASS_REGION_CELLSIZE_PARAMETER` = GRASS_REGION_CELLSIZE_PARAMETER,...)
if (.complete_output) {
return(output)
}
else{
qgisprocess::qgis_output(output, "output_txt")
}
} | /R/grass7_r_li_shape_ascii.R | permissive | VB6Hobbyst7/r_package_qgis | R | false | false | 2,070 | r |
#' @export
dimnames.excel.range = function(x){
xl.dimnames(x())
}
#' @export
dim.excel.range = function(x){
xl.rng = x()
c(xl.nrow(xl.rng),xl.ncol(xl.rng))
}
#' @export
'dim<-.excel.range' = function(x, value){
stop("'dim' for excel.range is read-only.")
}
#' @export
'dimnames<-.excel.range' = function(x, value){
stop("'dimnames' for excel.range is read-only.")
}
xl.colnames.excel.range = function(xl.rng,...)
# return colnames of connected excel table
{
if (has.colnames(xl.rng)){
all.colnames = unlist(xl.process.list(xl.rng[['rows']][[1]][['Value2']]))
all.colnames = gsub("^([\\s]+)","",all.colnames,perl = TRUE)
all.colnames = gsub("([\\s]+)$","",all.colnames,perl = TRUE)
} else all.colnames = xl.colnames(xl.rng)
if (has.rownames(xl.rng)) all.colnames = all.colnames[-1]
return(all.colnames)
}
xl.dimnames = function(xl.rng,...)
### xl.rng - reference to an Excel range
{
list(xl.rownames.excel.range(xl.rng),xl.colnames.excel.range(xl.rng))
}
xl.colnames = function(xl.rng)
## returns vector of Excel colnames, such as A,B,C etc.
{
first.col = xl.rng[["Column"]]
columns.count = xl.rng[["Columns"]][['Count']]
columns = seq(first.col,length.out = columns.count)
# columns = index3*26*26+index2*26+index1
index1 = (columns-1) %% 26+1
index2 = ifelse(columns<27,0,((columns - index1) %/% 26 -1) %% 26 + 1)
index3 = ifelse(columns<(26*26+1),0,((columns-26*index2-index1) %/% (26 * 26) -1 ) %% 26 +1 )
letter1 = letters[index1]
letter2 = ifelse(columns<27,"",letters[index2])
letter3 = ifelse(columns<(26*26+1),"",letters[index3])
paste(letter3,letter2,letter1,sep = "")
}
xl.rownames.excel.range = function(xl.rng,...)
# return rownames of connected excel table
{
if (has.rownames(xl.rng)){
all.rownames = unlist(xl.process.list(xl.rng[['columns']][[1]][['Value2']]))
all.rownames = gsub("^([\\s]+)","",all.rownames,perl = TRUE)
all.rownames = gsub("([\\s]+)$","",all.rownames,perl = TRUE)
} else all.rownames = xl.rownames(xl.rng)
if (has.colnames(xl.rng)) all.rownames = all.rownames[-1]
return(all.rownames)
}
xl.rownames = function(xl.rng)
## returns vector of Excel rownumbers.
{
first.row = xl.rng[["Row"]]
rows.count = xl.rng[["Rows"]][['Count']]
seq(first.row,length.out = rows.count)
}
xl.nrow = function(xl.rng){
res = xl.rng[["Rows"]][["Count"]]
res-has.colnames(xl.rng)
}
xl.ncol = function(xl.rng){
res = xl.rng[["Columns"]][["Count"]]
res-has.rownames(xl.rng)
}
has.colnames = function(x){
UseMethod("has.colnames")
}
has.rownames = function(x){
UseMethod("has.rownames")
}
has.colnames.default = function(x)
# get attribute has.colnames
{
res = attr(x,'has.colnames')
if (is.null(res)) res = FALSE
res
}
has.rownames.default = function(x)
# get attribute has.rownames
{
res = attr(x,'has.rownames')
if (is.null(res)) res = FALSE
res
}
"has.colnames<-" = function(x,value)
# set attribute has.colnames
{
attr(x,'has.colnames') = value
x
}
"has.rownames<-" = function(x,value)
# set attribute has.rownames
{
attr(x,'has.rownames') = value
x
}
has.colnames.excel.range = function(x)
# get attribute has.colnames
{
res = attr(x(),'has.colnames')
if (is.null(res)) res = FALSE
res
}
has.rownames.excel.range = function(x)
# get attribute has.rownames
{
res = attr(x(),'has.rownames')
if (is.null(res)) res = FALSE
res
} | /R/dimnames.excel.range.R | no_license | dctb13/excel.link | R | false | false | 3,561 | r |
# Remove the elements at `remove_index` from `list_input`; an empty index returns the list unchanged.
list_remove <- function(list_input, remove_index) {
list_remove_inner <- function(list_input, remove_index) {
if (sjmisc::is_empty(remove_index)) {
list_output <-
list_input
} else {
list_output <-
list_input %>%
rlist::list.remove(remove_index)
}
return(list_output)
}
environment(list_remove_inner) <- rlang::global_env()
list_output_final <- list_remove_inner(list_input, remove_index)
return(list_output_final)
}
| /R/functions/list_remove.R | no_license | Tim-Holzapfel/DS500_data_science_project | R | false | false | 483 | r |
|
data=Datafinal
data$Puntuación=NULL
data$Continente=NULL
data$Alfabetizacion=NULL
data$`Gasto Publico`=NULL
data$`Maestros Capacitados`=NULL
data=na.omit(data)
# Add one more column
linkeducacion2="https://github.com/Giancarlo-Ramirez/BBDD-Educaci-n/raw/main/Duracion.csv"
Educación=import(linkeducacion2)
data=merge(data,Educación)
# UNIVARIATE analysis
# Inflation (inflación)
install.packages("DescTools")
library(DescTools)
Median(data$inflación)
mean(data$inflación)
summary(data$inflación)
sd(data$inflación)
var(data$inflación)
rango=max(data$inflación)-min(data$inflación)
rango
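The manual max-minus-min above is equivalent to base R's `diff(range(...))`; an illustrative check:

```r
x = c(2.5, 7.1, 4.0)
stopifnot(all.equal(max(x) - min(x), diff(range(x))))
```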
install.packages("e1071")
library(e1071)
skewness(data$inflación) # positive (right-skewed)
hist(data$inflación)
## GDP (pbi)
library(DescTools)
Median(data$pbi)
mean(data$pbi)
summary(data$pbi)
sd(data$pbi)
var(data$pbi)
rango=max(data$pbi)-min(data$pbi)
rango
library(e1071)
skewness(data$pbi) # negative (left-skewed)
boxplot(data$pbi)
## Cell phones (celulares)
library(DescTools)
Median(data$celulares)
mean(data$celulares)
summary(data$celulares)
sd(data$celulares)
var(data$celulares)
rango=max(data$celulares)-min(data$celulares)
rango
library(e1071)
skewness(data$celulares) # negative (left-skewed)
hist(data$celulares)
# Multivariate analysis
# Cell phones vs. rural population
plot(data$celulares, data$PobRural)
plot(data$celulares, data$PobRural, xlab = "Celulares", ylab = "Población rural")
# H0: the variables are statistically independent
# Ha: the variables are statistically dependent
cor.test(data$celulares, data$PobRural)
# GDP (PBI) vs. government functioning
plot(data$pbi, data$`Funcionamientodel gobierno`)
plot(data$pbi, data$`Funcionamientodel gobierno`, xlab = "PBI", ylab = "Funcionamiento del Gobierno")
# H0: the variables are statistically independent
# Ha: the variables are statistically dependent
cor.test(data$pbi, data$`Funcionamientodel gobierno`)
# Inflation vs. urban population growth
plot(data$inflación, data$CrecimientoPobUrb)
plot(data$inflación, data$CrecimientoPobUrb, xlab = "Inflación", ylab = "Crecimiento Poblacional Urbano")
# H0: the variables are statistically independent
# Ha: the variables are statistically dependent
cor.test(data$inflación, data$CrecimientoPobUrb)
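Each `cor.test` above is read against the H0/Ha pair: reject H0 when the p-value falls below the chosen significance level. A toy illustration with simulated data (alpha = 0.05 is an assumption, not from the script):

```r
set.seed(1)
toy = cor.test(rnorm(30), rnorm(30))  # two independent samples
decision = if (toy$p.value < 0.05) "reject H0 (dependent)" else "fail to reject H0 (independent)"
decision
```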
| /Script de análisis univariado y bivariado.R | no_license | Alisson2406/Trabajo-final-de-estadistica | R | false | false | 2,201 | r | data=Datafinal
data$Puntuación=NULL
data$Continente=NULL
data$Alfabetizacion=NULL
data$`Gasto Publico`=NULL
data$`Maestros Capacitados`=NULL
data=na.omit(data)
#Añadimos una columna mas
linkeducacion2="https://github.com/Giancarlo-Ramirez/BBDD-Educaci-n/raw/main/Duracion.csv"
Educación=import(linkeducacion2)
data=merge(data,Educación)
#Analisis UNIVARIADO
#Inflación
install.packages("DescTools")
library(DescTools)
Median(data$inflación)
mean(data$inflación)
summary(data$inflación)
sd(data$inflación)
var(data$inflación)
rango=max(data$inflación)-min(data$inflación)
rango
install.packages("e1071")
library(e1071)
skewness(data$inflación) #positiva
hist(data$inflación)
##Pbi
library(DescTools)
Median(data$pbi)
mean(data$pbi)
summary(data$pbi)
sd(data$pbi)
var(data$pbi)
rango=max(data$pbi)-min(data$pbi)
rango
library(e1071)
skewness(data$pbi)#negativa
boxplot(data$pbi)
##Celulares
library(DescTools)
Median(data$celulares)
mean(data$celulares)
summary(data$celulares)
sd(data$celulares)
var(data$celulares)
rango=max(data$celulares)-min(data$celulares)
rango
library(e1071)
skewness(data$celulares)#negativa
hist(data$celulares)
#Analisis multivariado
#Celulares Y Población Rural
plot(data$celulares, data$PobRural)
plot(data$celulares, data$PobRural, xlab = "Celulares", ylab = "Población arural")
*Ho: Las variables son estadísticamente independientes
*Ha: Las variables son estadísticamente dependientes
cor.test(data$celulares, data$PobRural)
#PBI y Funcionamiento del gobierno
plot(data$pbi, data$`Funcionamientodel gobierno`)
plot(data$pbi, data$`Funcionamientodel gobierno`, xlab = "PBI", ylab = "Funcionamineto del Gobierno")
*Ho: Las variables son estadísticamente independientes
*Ha: Las variables son estadísticamente dependientes
cor.test(data$pbi, data$`Funcionamientodel gobierno`)
#Inflación y Crecimiento poblacional Urbano
plot(data$inflación, data$CrecimientoPobUrb)
plot(data$inflación, data$CrecimientoPobUrb, xlab = "Inflación", ylab = "Creciemiento Poblacional Urbano")
*Ho: Las variables son estadísticamente independientes
*Ha: Las variables son estadísticamente dependientes
cor.test(data$inflación, data$CrecimientoPobUrb)
|
#-------------------------------------------------------------------------
# TCGA Search
#-------------------------------------------------------------------------
#--------------------- START controlling show/hide states ----------------
observeEvent(input$clinicalIndexed, {
if(input$clinicalIndexed){
shinyjs::hide("tcgaClinicalFilter")
} else {
shinyjs::show("tcgaClinicalFilter")
}
})
query.result <- reactive({
input$tcgaSearchBt # the trigger
#------------------- STEP 1: Argument-------------------------
# Set arguments for GDCquery, if value is empty we will set it to FALSE (same as empty)
tumor <- isolate({input$tcgaProjectFilter})
if(str_length(tumor) == 0) tumor <- NULL
data.category <- isolate({input$tcgaDataCategoryFilter})
if(str_length(data.category) == 0) data.category <- NULL
# Data type
data.type <- isolate({input$tcgaDataTypeFilter})
if(str_length(data.type) == 0) data.type <- FALSE
# platform
platform <- isolate({input$tcgaPlatformFilter})
if(str_length(platform) == 0) platform <- FALSE
# workflow
workflow.type <- isolate({input$tcgaWorkFlowFilter})
if(str_length(workflow.type) == 0) workflow.type <- FALSE
# file.type
file.type <- isolate({input$tcgaFileTypeFilter})
if(str_length(file.type) == 0) file.type <- FALSE
# access: we will only work with open access data
#access <- isolate({input$tcgaAcessFilter})
#if(str_length(access) == 0) access <- FALSE
access <- "open"
legacy <- isolate({as.logical(input$tcgaDatabase)})
# barcode
text.samples <- isolate({input$tcgaDownloadBarcode})
barcode <- FALSE  # default: no barcode filter (also covers the NULL input case)
if(!is.null(text.samples)){
    barcode <- parse.textarea.input(text.samples)
    if(length(barcode) == 0 || all(str_length(barcode) == 0)) barcode <- FALSE
}
# Samples type
sample.type <- isolate({input$tcgasamplestypeFilter})
if(is.null(sample.type)) {
sample.type <- FALSE
} else if(str_length(sample.type) == 0) {
sample.type <- FALSE
}
experimental.strategy <- isolate({input$tcgaExpStrategyFilter})
if(str_length(experimental.strategy) == 0) experimental.strategy <- FALSE
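The repeated "empty input becomes FALSE" pattern above could be factored into one helper; a sketch (`empty2false` is hypothetical, using base `nchar` in place of `str_length`):

```r
empty2false = function(x) if (is.null(x) || all(nchar(x) == 0)) FALSE else x
stopifnot(identical(empty2false(NULL), FALSE),
          identical(empty2false(""), FALSE),
          identical(empty2false("open"), "open"))
```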
if(is.null(tumor)){
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Data input error", style = "danger",
content = "Please select a project", append = FALSE)
return(NULL)
}
if(is.null(data.category)){
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Data input error", style = "danger",
content = "Please select a data.category", append = FALSE)
return(NULL)
}
withProgress(message = 'Accessing GDC',
detail = 'This may take a while...', value = 0, {
query = tryCatch({
query <- GDCquery(project = tumor,
data.category = data.category,
workflow.type = workflow.type,
legacy = legacy,
platform = platform,
file.type = file.type,
access = access,
barcode = barcode,
experimental.strategy = experimental.strategy,
sample.type = sample.type,
data.type = data.type)
incProgress(1, detail = "Completed")
return(query)
}, error = function(e) {
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Error", style = "danger",
content = "No results for this query", append = FALSE)
return(NULL)
})
})
if(is.null(query)) {
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Error", style = "danger",
content = "No results for this query", append = FALSE)
return(NULL)
}
not.found <- c()
tbl <- data.frame()
results <- query$results[[1]]
#------------------- STEP 2: Clinical features-------------------------
# In this step we will download indexed clinical data
# Get the barcodes that respect the user inputs
# and select only the results that match them
if(!isolate({input$tcgaMolecularFilterClinical})){
clinical <- NULL
} else {
clinical <- getClinical.info()
# filter clinical for samples
clinical <- subset(clinical, clinical$submitter_id %in% substr(results$cases,1,unique(str_length(clinical$submitter_id))))
stage <- isolate({input$tcgaClinicalTumorStageFilter})
stage.idx <- NA
if(!is.null(stage) & all(str_length(stage) > 0)){
stage.idx <- sapply(stage, function(y) clinical$tumor_stage %in% y)
stage.idx <- apply(stage.idx,1,any)
}
vital.status <- isolate({input$tcgaClinicalVitalStatusFilter})
vital.status.idx <- NA
if(!is.null(vital.status) & all(str_length(vital.status) > 0)){
vital.status.idx <- sapply(vital.status, function(y) clinical$vital_status %in% y)
vital.status.idx <- apply(vital.status.idx,1,any)
}
race <- isolate({input$tcgaClinicalRaceFilter})
race.idx <- NA
if(!is.null(race) & all(str_length(race) > 0)){
race.idx <- sapply(race, function(y) clinical$race %in% y)
race.idx <- apply(race.idx,1,any)
}
gender <- isolate({input$tcgaClinicalGenderFilter})
gender.idx <- NA
if(!is.null(gender) & all(str_length(gender) > 0)){
gender.idx <- sapply(gender, function(y) clinical$gender %in% y)
gender.idx <- apply(gender.idx,1,any)
}
if(any(!is.na(gender.idx),!is.na(race.idx),!is.na(vital.status.idx),!is.na(stage.idx))){
idx <- apply(data.frame(gender.idx,race.idx,vital.status.idx,stage.idx),1,function(x)all(x,na.rm = TRUE))
clinical <- clinical[idx,]
}
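The NA-tolerant AND above means a filter the user left unset (index NA) does not constrain the selection; a toy illustration:

```r
gender.idx = c(TRUE, FALSE, TRUE)
race.idx = NA  # filter the user left unset
idx = apply(data.frame(gender.idx, race.idx), 1, function(x) all(x, na.rm = TRUE))
unname(idx)  # TRUE FALSE TRUE
```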
cut <- ifelse(grepl("TARGET",query$project),16,12)
results <- results[substr(results$cases,1,cut) %in% clinical$bcr_patient_barcode,]
if(is.null(results) || nrow(results) == 0) {
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Error", style = "danger",
content = "No results for this query", append = FALSE)
return(NULL)
}
}
query$results[[1]] <- results
list(query,clinical)
})
#----------------------- END controlling show/hide states -----------------
observeEvent(input$tcgaSearchBt, {
closeAlert(session,"tcgaAlert")
updateTextInput(session, "tcgafilename", label = "File name",
value = paste0(
paste(isolate({input$tcgaProjectFilter}),
gsub(" ","_",isolate({input$tcgaDataCategoryFilter})),
gsub(" ","_",isolate({input$tcgaDataTypeFilter})),
ifelse(isolate({input$tcgaDatabase}),"hg19","hg38"),
sep = "_"),".rda"))
updateCollapse(session, "collapseTCGA", open = "GDC search results")
output$queryresutlstable <- DT::renderDataTable({
results <- getResults(query.result()[[1]])
if(!is.null(results)) createTable(results)
})
results <- isolate({getResults(query.result()[[1]])})
if(any(duplicated(results$cases)))
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Warning", style = "warning",
content = "There is more than one file for the same case.", append = FALSE)
if(is.null(results)){
} else {
suppressWarnings({
output$nb.samples <- renderPlotly({
results <- getResults(query.result()[[1]])
if(is.null(results) || nrow(results) == 0) return(plotly_empty())
df <- data.frame("Samples" = nrow(results),
"Project" = unique(results$project),
"size" = sum(results$file_size/(2^20)))
p <- plot_ly(data = df,
x = ~Project,
y = ~Samples,
name = "Files size (MB)",
type = "bar") %>%
layout(yaxis = list(title = "Number of samples")) %>%
config(displayModeBar = F)
})
output$file.size <- renderPlotly({
results <- getResults(query.result()[[1]])
if(is.null(results) || nrow(results) == 0) return(plotly_empty())
df <- data.frame("Samples" = nrow(results),
"Project" = unique(results$project),
"size" = sum(results$file_size/(2^20)))
p <- plot_ly(data = df,
x = ~Project,
y = ~size,
hoverinfo = 'text',
text=~paste0(size," (MB)"),
name = "Files size (MB)",
type = "bar") %>%
layout(yaxis = list(title = "Files size (MB)")) %>%
config(displayModeBar = F)
})
getPiePlot <- function(var,title, type = "results"){
results <- isolate({getResults(query.result()[[1]])})
if(is.null(results) || nrow(results) == 0) return(plotly_empty())
if(type == "results") {
results <- getResults(query.result()[[1]])
df <- as.data.frame(table(results[,var]))
} else {
clinical <- query.result()[[2]]
if(is.null(clinical) || nrow(clinical) == 0) return(plotly_empty())
df <- as.data.frame(table(clinical[,var]))
}
p <- plot_ly(df, labels = ~Var1, values = ~Freq, type = 'pie',
textposition = 'inside',
textinfo = 'label+percent',
insidetextfont = list(color = '#FFFFFF'),
hoverinfo = 'text',
text=~paste0(Var1,"\n",Freq),
marker = list(colors = colors,
line = list(color = '#FFFFFF', width = 1)),
#The 'pull' attribute can also be used to create space between the sectors
showlegend = FALSE) %>%
layout(title = title,
xaxis = list(showgrid = FALSE, zeroline = FALSE, showticklabels = FALSE),
yaxis = list(showgrid = FALSE, zeroline = FALSE, showticklabels = FALSE)) %>%
config(displayModeBar = F)
}
output$gender <- renderPlotly({
getPiePlot("gender","Gender", "clinical")
})
output$race <- renderPlotly({
getPiePlot("race","Race", "clinical")
})
output$vital.status <- renderPlotly({
getPiePlot("vital_status","Vital status", "clinical")
})
output$tumor.stage <- renderPlotly({
getPiePlot("tumor_stage","Tumor stage", "clinical")
})
output$data.type <- renderPlotly({
getPiePlot("data_type","Data type")
})
output$tissue.definition <- renderPlotly({
getPiePlot("tissue.definition","Tissue definition")
})
output$experimental.strategy <- renderPlotly({
getPiePlot("experimental_strategy","Experimental strategy")
})
})
}
})
observeEvent(input$tcgaPrepareBt,{
closeAlert(session,"tcgaAlert")
query <- isolate({query.result()[[1]]})
results <- isolate({query$results[[1]]})
# Dir to save the files
getPath <- parseDirPath(get.volumes(isolate({input$workingDir})), isolate({input$workingDir}))
if (length(getPath) == 0) getPath <- paste0(Sys.getenv("HOME"),"/TCGAbiolinksGUI")
filename <- file.path(getPath,isolate({input$tcgafilename}))
withProgress(message = 'Download in progress',
detail = 'This may take a while...', value = 0, {
trash = tryCatch({
n <- nrow(results)
step <- 20
for(i in 0:ceiling(n/step - 1)){
query.aux <- query
end <- ifelse(((i + 1) * step) > n, n,((i + 1) * step))
query.aux$results[[1]] <- query.aux$results[[1]][((i * step) + 1):end,]
GDCdownload(query.aux, method = "api",directory = getPath)
incProgress(1/ceiling(n/step), detail = paste("Completed ", i + 1, " of ",ceiling(n/step)))
}
n
}, error = function(e) {
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Error", style = "danger",
content = paste0("Error while downloading the files<br>",e), append = FALSE)
return(NULL)
})
})
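The download loop above batches `n` files into groups of `step = 20`; its start/end arithmetic is equivalent to this (illustrative values):

```r
n = 45; step = 20
batches = lapply(0:ceiling(n/step - 1),
                 function(i) ((i * step) + 1):min((i + 1) * step, n))
stopifnot(all(batches[[1]] == 1:20), all(batches[[3]] == 41:45))
```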
withProgress(message = 'Prepare progress',
detail = 'This may take a while...', value = 0, {
trash = tryCatch({
genes <- NULL
if(isolate({input$addGistic})) genes <- isolate({input$gisticGenes})
data <- GDCprepare(query,
save = TRUE,
save.filename = filename,
summarizedExperiment = as.logical(isolate({input$prepareRb})),
directory = getPath,
mut.pipeline = isolate({input$tcgaPrepareMutPipeline}),
add.gistic2.mut = genes)
if(as.logical(isolate({input$prepareRb}))){
aux <- gsub(".rda","_samples_information.csv",filename)
write_csv(x = as.data.frame(colData(data)),path = aux )
filename <- c(filename, aux)
}
data
}, error = function(e) {
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Error", style = "danger",
content = paste0("Error while preparing the files<br>",e), append = FALSE)
})
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Prepare completed", style = "success",
content = paste0("Saved in: ", "<br><ul>", paste(filename, collapse = "</ul><ul>"),"</ul>"), append = FALSE)
})
})
getClinical.info <- reactive({
project <- input$tcgaProjectFilter
if(is.null(project) || str_length(project) == 0) return(NULL)
baseURL <- "https://gdc-api.nci.nih.gov/cases/?"
options.pretty <- "pretty=true"
options.expand <- "expand=diagnoses,demographic"
option.size <- paste0("size=", TCGAbiolinks:::getNbCases(project, "Clinical"))
files.data_category <- "Clinical"
options.filter <- paste0("filters=", URLencode("{\"op\":\"and\",\"content\":[{\"op\":\"in\",\"content\":{\"field\":\"cases.project.project_id\",\"value\":[\""),
project, URLencode("\"]}},{\"op\":\"in\",\"content\":{\"field\":\"files.data_category\",\"value\":[\""),
files.data_category, URLencode("\"]}}]}"))
withProgress(message = 'Loading clinical data',
detail = 'This may take a while...', value = 0, {
json <- jsonlite::fromJSON(paste0(baseURL, paste(options.pretty, options.expand,
option.size, options.filter, sep = "&")), simplifyDataFrame = TRUE)
})
results <- json$data$hits
diagnoses <- rbindlist(results$diagnoses, fill = TRUE)
diagnoses$submitter_id <- results$submitter_id
results$demographic$submitter_id <- results$submitter_id
df <- merge(diagnoses, results$demographic, by = "submitter_id", all = TRUE)
df$bcr_patient_barcode <- df$submitter_id
df$disease <- gsub("TCGA-|TARGET-", "", project)
setDF(df)
return(df)
})
observe({
updateSelectizeInput(session, 'tcgaDataTypeFilter', choices = getDataType(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter), server = TRUE)
if(is.null(getDataType(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter))) {
shinyjs::hide("tcgaDataTypeFilter")
} else {
shinyjs::show("tcgaDataTypeFilter")
}
})
observe({
updateSelectizeInput(session, 'tcgaPlatformFilter', choices = getPlatform(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter), server = TRUE)
if(is.null(getPlatform(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter))) {
shinyjs::hide("tcgaPlatformFilter")
} else {
shinyjs::show("tcgaPlatformFilter")
}
})
observe({
updateSelectizeInput(session, 'tcgaWorkFlowFilter', choices = getWorkFlow(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter), server = TRUE)
if(is.null(getWorkFlow(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter))) {
shinyjs::hide("tcgaWorkFlowFilter")
} else {
shinyjs::show("tcgaWorkFlowFilter")
}
})
observe({
updateSelectizeInput(session, 'tcgaExpStrategyFilter', choices = getExpStrategy(as.logical(input$tcgaDatabase),input$tcgaPlatformFilter), server = TRUE)
if(is.null(getExpStrategy(as.logical(input$tcgaDatabase),input$tcgaPlatformFilter))) {
shinyjs::hide("tcgaExpStrategyFilter")
} else {
shinyjs::show("tcgaExpStrategyFilter")
}
})
observe({
updateSelectizeInput(session, 'tcgaProjectFilter', choices = getTCGAdisease(), server = TRUE)
updateSelectizeInput(session, 'tcgatumorClinicalFilter', choices = getTCGAdisease(), server = TRUE)
})
observe({
updateSelectizeInput(session, 'tcgaFileTypeFilter', choices = getFileType(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter), server = TRUE)
if(is.null(getFileType(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter))) {
shinyjs::hide("tcgaFileTypeFilter")
} else {
shinyjs::show("tcgaFileTypeFilter")
}
})
observe({
input$tcgaProjectFilter
tryCatch({
clin <- getClinical.info()
if(!is.null(clin)){
updateSelectizeInput(session, 'tcgaClinicalGenderFilter', choices = unique(clin$gender), server = TRUE)
updateSelectizeInput(session, 'tcgaClinicalVitalStatusFilter', choices = unique(clin$vital_status), server = TRUE)
updateSelectizeInput(session, 'tcgaClinicalRaceFilter', choices = unique(clin$race), server = TRUE)
updateSelectizeInput(session, 'tcgaClinicalTumorStageFilter', choices = unique(clin$tumor_stage), server = TRUE)
}
shinyjs::show("tcgaClinicalGenderFilter")
shinyjs::show("tcgaClinicalVitalStatusFilter")
shinyjs::show("tcgaClinicalRaceFilter")
shinyjs::show("tcgaClinicalTumorStageFilter")
}, error = function(e){
shinyjs::hide("tcgaClinicalGenderFilter")
shinyjs::hide("tcgaClinicalVitalStatusFilter")
shinyjs::hide("tcgaClinicalRaceFilter")
shinyjs::hide("tcgaClinicalTumorStageFilter")
})
})
observeEvent(input$prepareRb, {
if(isolate({input$prepareRb})) { # Summarized Experiment
shinyjs::show("addGistic")
if(isolate({input$addGistic})){
shinyjs::show("gisticGenes")
shinyjs::show("tcgaPrepareMutPipeline")
} else {
shinyjs::hide("gisticGenes")
shinyjs::hide("tcgaPrepareMutPipeline")
}
} else {
shinyjs::hide("addGistic")
shinyjs::hide("gisticGenes")
shinyjs::hide("tcgaPrepareMutPipeline")
}
})
observeEvent(input$addGistic, {
if(input$addGistic) {
shinyjs::show("gisticGenes")
} else {
shinyjs::hide("gisticGenes")
}
})
observe({
updateSelectizeInput(session, 'tcgaDataCategoryFilter', choices = getDataCategory(as.logical(input$tcgaDatabase)), server = TRUE)
updateSelectizeInput(session, 'gisticGenes', choices = as.character(sort(TCGAbiolinks:::gene.list)), server = TRUE)
})
| /inst/app/server/getmolecular.R | no_license | inambioinfo/TCGAbiolinksGUI | R | false | false | 21,140 | r | #-------------------------------------------------------------------------
# TCGA Search
#-------------------------------------------------------------------------
#--------------------- START controlling show/hide states ----------------
observeEvent(input$clinicalIndexed, {
if(input$clinicalIndexed){
shinyjs::hide("tcgaClinicalFilter")
} else {
shinyjs::show("tcgaClinicalFilter")
}
})
query.result <- reactive({
input$tcgaSearchBt # the trigger
#------------------- STEP 1: Argument-------------------------
# Set arguments for GDCquery, if value is empty we will set it to FALSE (same as empty)
tumor <- isolate({input$tcgaProjectFilter})
if(str_length(tumor) == 0) tumor <- NULL
data.category <- isolate({input$tcgaDataCategoryFilter})
if(str_length(data.category) == 0) data.category <- NULL
# Data type
data.type <- isolate({input$tcgaDataTypeFilter})
if(str_length(data.type) == 0) data.type <- FALSE
# platform
platform <- isolate({input$tcgaPlatformFilter})
if(str_length(platform) == 0) platform <- FALSE
# workflow
workflow.type <- isolate({input$tcgaWorkFlowFilter})
if(str_length(workflow.type) == 0) workflow.type <- FALSE
# file.type
file.type <- isolate({input$tcgaFileTypeFilter})
if(str_length(file.type) == 0) file.type <- FALSE
# access: we will only work with open access data
#access <- isolate({input$tcgaAcessFilter})
#if(str_length(access) == 0) access <- FALSE
access <- "open"
legacy <- isolate({as.logical(input$tcgaDatabase)})
# barcode
text.samples <- isolate({input$tcgaDownloadBarcode})
barcode <- FALSE  # default: no barcode filter (also covers the NULL input case)
if(!is.null(text.samples)){
    barcode <- parse.textarea.input(text.samples)
    if(length(barcode) == 0 || all(str_length(barcode) == 0)) barcode <- FALSE
}
# Samples type
sample.type <- isolate({input$tcgasamplestypeFilter})
if(is.null(sample.type)) {
sample.type <- FALSE
} else if(str_length(sample.type) == 0) {
sample.type <- FALSE
}
experimental.strategy <- isolate({input$tcgaExpStrategyFilter})
if(str_length(experimental.strategy) == 0) experimental.strategy <- FALSE
if(is.null(tumor)){
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Data input error", style = "danger",
content = "Please select a project", append = FALSE)
return(NULL)
}
if(is.null(data.category)){
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Data input error", style = "danger",
content = "Please select a data.category", append = FALSE)
return(NULL)
}
withProgress(message = 'Accessing GDC',
detail = 'This may take a while...', value = 0, {
query = tryCatch({
query <- GDCquery(project = tumor,
data.category = data.category,
workflow.type = workflow.type,
legacy = legacy,
platform = platform,
file.type = file.type,
access = access,
barcode = barcode,
experimental.strategy = experimental.strategy,
sample.type = sample.type,
data.type = data.type)
incProgress(1, detail = "Completed")
return(query)
}, error = function(e) {
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Error", style = "danger",
content = "No results for this query", append = FALSE)
return(NULL)
})
})
if(is.null(query)) {
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Error", style = "danger",
content = "No results for this query", append = FALSE)
return(NULL)
}
not.found <- c()
tbl <- data.frame()
results <- query$results[[1]]
#------------------- STEP 2: Clinical features-------------------------
# In this step we will download indexed clinical data
# Get the barcodes that respect the user inputs
# and select only the results that match them
if(!isolate({input$tcgaMolecularFilterClinical})){
clinical <- NULL
} else {
clinical <- getClinical.info()
# filter clinical for samples
clinical <- subset(clinical, clinical$submitter_id %in% substr(results$cases,1,unique(str_length(clinical$submitter_id))))
stage <- isolate({input$tcgaClinicalTumorStageFilter})
stage.idx <- NA
if(!is.null(stage) & all(str_length(stage) > 0)){
stage.idx <- sapply(stage, function(y) clinical$tumor_stage %in% y)
stage.idx <- apply(stage.idx,1,any)
}
vital.status <- isolate({input$tcgaClinicalVitalStatusFilter})
vital.status.idx <- NA
if(!is.null(vital.status) & all(str_length(vital.status) > 0)){
vital.status.idx <- sapply(vital.status, function(y) clinical$vital_status %in% y)
vital.status.idx <- apply(vital.status.idx,1,any)
}
race <- isolate({input$tcgaClinicalRaceFilter})
race.idx <- NA
if(!is.null(race) & all(str_length(race) > 0)){
race.idx <- sapply(race, function(y) clinical$race %in% y)
race.idx <- apply(race.idx,1,any)
}
gender <- isolate({input$tcgaClinicalGenderFilter})
gender.idx <- NA
if(!is.null(gender) & all(str_length(gender) > 0)){
gender.idx <- sapply(gender, function(y) clinical$gender %in% y)
gender.idx <- apply(gender.idx,1,any)
}
if(any(!is.na(gender.idx),!is.na(race.idx),!is.na(vital.status.idx),!is.na(stage.idx))){
idx <- apply(data.frame(gender.idx,race.idx,vital.status.idx,stage.idx),1,function(x)all(x,na.rm = TRUE))
clinical <- clinical[idx,]
}
cut <- ifelse(grepl("TARGET",query$project),16,12)
results <- results[substr(results$cases,1,cut) %in% clinical$bcr_patient_barcode,]
if(is.null(results) || nrow(results) == 0) {
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Error", style = "danger",
content = "No results for this query", append = FALSE)
return(NULL)
}
}
query$results[[1]] <- results
list(query,clinical)
})
#----------------------- END controlling show/hide states -----------------
observeEvent(input$tcgaSearchBt, {
closeAlert(session,"tcgaAlert")
updateTextInput(session, "tcgafilename", label = "File name",
value = paste0(
paste(isolate({input$tcgaProjectFilter}),
gsub(" ","_",isolate({input$tcgaDataCategoryFilter})),
gsub(" ","_",isolate({input$tcgaDataTypeFilter})),
ifelse(isolate({input$tcgaDatabase}),"hg19","hg38"),
sep = "_"),".rda"))
updateCollapse(session, "collapseTCGA", open = "GDC search results")
output$queryresutlstable <- DT::renderDataTable({
results <- getResults(query.result()[[1]])
if(!is.null(results)) createTable(results)
})
results <- isolate({getResults(query.result()[[1]])})
if(any(duplicated(results$cases)))
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Warning", style = "warning",
content = "There is more than one file for the same case.", append = FALSE)
if(is.null(results)){
} else {
suppressWarnings({
output$nb.samples <- renderPlotly({
results <- getResults(query.result()[[1]])
if(is.null(results) || nrow(results) == 0) return(plotly_empty())
df <- data.frame("Samples" = nrow(results),
"Project" = unique(results$project),
"size" = sum(results$file_size/(2^20)))
p <- plot_ly(data = df,
x = ~Project,
y = ~Samples,
name = "Files size (MB)",
type = "bar") %>%
layout(yaxis = list(title = "Number of samples")) %>%
config(displayModeBar = F)
})
output$file.size <- renderPlotly({
results <- getResults(query.result()[[1]])
if(is.null(results) || nrow(results) == 0) return(plotly_empty())
df <- data.frame("Samples" = nrow(results),
"Project" = unique(results$project),
"size" = sum(results$file_size/(2^20)))
p <- plot_ly(data = df,
x = ~Project,
y = ~size,
hoverinfo = 'text',
text=~paste0(size," (MB)"),
name = "Files size (MB)",
type = "bar") %>%
layout(yaxis = list(title = "Files size (MB)")) %>%
config(displayModeBar = F)
})
getPiePlot <- function(var,title, type = "results"){
results <- isolate({getResults(query.result()[[1]])})
if(is.null(results) || nrow(results) == 0) return(plotly_empty())
if(type == "results") {
results <- getResults(query.result()[[1]])
df <- as.data.frame(table(results[,var]))
} else {
clinical <- query.result()[[2]]
if(is.null(clinical) || nrow(clinical) == 0) return(plotly_empty())
df <- as.data.frame(table(clinical[,var]))
}
p <- plot_ly(df, labels = ~Var1, values = ~Freq, type = 'pie',
textposition = 'inside',
textinfo = 'label+percent',
insidetextfont = list(color = '#FFFFFF'),
hoverinfo = 'text',
text=~paste0(Var1,"\n",Freq),
marker = list(colors = colors,
line = list(color = '#FFFFFF', width = 1)),
#The 'pull' attribute can also be used to create space between the sectors
showlegend = FALSE) %>%
layout(title = title,
xaxis = list(showgrid = FALSE, zeroline = FALSE, showticklabels = FALSE),
yaxis = list(showgrid = FALSE, zeroline = FALSE, showticklabels = FALSE)) %>%
config(displayModeBar = F)
}
output$gender <- renderPlotly({
getPiePlot("gender","Gender", "clinical")
})
output$race <- renderPlotly({
getPiePlot("race","Race", "clinical")
})
output$vital.status <- renderPlotly({
getPiePlot("vital_status","Vital status", "clinical")
})
output$tumor.stage <- renderPlotly({
getPiePlot("tumor_stage","Tumor stage", "clinical")
})
output$data.type <- renderPlotly({
getPiePlot("data_type","Data type")
})
output$tissue.definition <- renderPlotly({
getPiePlot("tissue.definition","Tissue definition")
})
output$experimental.strategy <- renderPlotly({
getPiePlot("experimental_strategy","Experimental strategy")
})
})
}
})
observeEvent(input$tcgaPrepareBt,{
closeAlert(session,"tcgaAlert")
query <- isolate({query.result()[[1]]})
results <- isolate({query$results[[1]]})
# Dir to save the files
getPath <- parseDirPath(get.volumes(isolate({input$workingDir})), isolate({input$workingDir}))
if (length(getPath) == 0) getPath <- paste0(Sys.getenv("HOME"),"/TCGAbiolinksGUI")
filename <- file.path(getPath,isolate({input$tcgafilename}))
withProgress(message = 'Download in progress',
detail = 'This may take a while...', value = 0, {
trash = tryCatch({
n <- nrow(results)
step <- 20
for(i in 0:ceiling(n/step - 1)){
query.aux <- query
end <- ifelse(((i + 1) * step) > n, n,((i + 1) * step))
query.aux$results[[1]] <- query.aux$results[[1]][((i * step) + 1):end,]
GDCdownload(query.aux, method = "api",directory = getPath)
incProgress(1/ceiling(n/step), detail = paste("Completed ", i + 1, " of ",ceiling(n/step)))
}
n
}, error = function(e) {
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Error", style = "danger",
content = paste0("Error while downloading the files<br>",e), append = FALSE)
return(NULL)
})
})
withProgress(message = 'Prepare progress',
detail = 'This may take a while...', value = 0, {
trash = tryCatch({
genes <- NULL
if(isolate({input$addGistic})) genes <- isolate({input$gisticGenes})
data <- GDCprepare(query,
save = TRUE,
save.filename = filename,
summarizedExperiment = as.logical(isolate({input$prepareRb})),
directory = getPath,
mut.pipeline = isolate({input$tcgaPrepareMutPipeline}),
add.gistic2.mut = genes)
if(as.logical(isolate({input$prepareRb}))){
aux <- gsub(".rda","_samples_information.csv",filename)
write_csv(x = as.data.frame(colData(data)),path = aux )
filename <- c(filename, aux)
}
data
}, error = function(e) {
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Error", style = "danger",
content = paste0("Error while preparing the files<br>",e), append = FALSE)
})
createAlert(session, "tcgasearchmessage", "tcgaAlert", title = "Prepare completed", style = "success",
content = paste0("Saved in: ", "<br><ul>", paste(filename, collapse = "</ul><ul>"),"</ul>"), append = FALSE)
})
})
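The download step above splits the query results into blocks of 20 files before each `GDCdownload` call. The index arithmetic can be sketched and checked in isolation; `batch_indices` is a hypothetical helper name, not part of TCGAbiolinks:

```r
# Sketch of the batching arithmetic used in the GDCdownload loop above:
# split n results into consecutive index blocks of at most `step` rows.
batch_indices <- function(n, step = 20) {
  lapply(0:ceiling(n / step - 1), function(i) {
    end <- min((i + 1) * step, n)   # the last batch may be shorter
    ((i * step) + 1):end
  })
}
```

Each element of the returned list corresponds to one `query.aux$results[[1]][...] ` slice in the loop.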
getClinical.info <- reactive({
project <- input$tcgaProjectFilter
if(is.null(project) || str_length(project) == 0) return(NULL)
baseURL <- "https://gdc-api.nci.nih.gov/cases/?"
options.pretty <- "pretty=true"
options.expand <- "expand=diagnoses,demographic"
option.size <- paste0("size=", TCGAbiolinks:::getNbCases(project, "Clinical"))
files.data_category <- "Clinical"
options.filter <- paste0("filters=", URLencode("{\"op\":\"and\",\"content\":[{\"op\":\"in\",\"content\":{\"field\":\"cases.project.project_id\",\"value\":[\""),
project, URLencode("\"]}},{\"op\":\"in\",\"content\":{\"field\":\"files.data_category\",\"value\":[\""),
files.data_category, URLencode("\"]}}]}"))
withProgress(message = 'Loading clinical data',
detail = 'This may take a while...', value = 0, {
json <- jsonlite::fromJSON(paste0(baseURL, paste(options.pretty, options.expand,
option.size, options.filter, sep = "&")), simplifyDataFrame = TRUE)
})
results <- json$data$hits
diagnoses <- rbindlist(results$diagnoses, fill = TRUE)
diagnoses$submitter_id <- results$submitter_id
results$demographic$submitter_id <- results$submitter_id
df <- merge(diagnoses, results$demographic, by = "submitter_id", all = TRUE)
df$bcr_patient_barcode <- df$submitter_id
df$disease <- gsub("TCGA-|TARGET-", "", project)
setDF(df)
return(df)
})
observe({
updateSelectizeInput(session, 'tcgaDataTypeFilter', choices = getDataType(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter), server = TRUE)
if(is.null(getDataType(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter))) {
shinyjs::hide("tcgaDataTypeFilter")
} else {
shinyjs::show("tcgaDataTypeFilter")
}
})
observe({
updateSelectizeInput(session, 'tcgaPlatformFilter', choices = getPlatform(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter), server = TRUE)
if(is.null(getPlatform(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter))) {
shinyjs::hide("tcgaPlatformFilter")
} else {
shinyjs::show("tcgaPlatformFilter")
}
})
observe({
updateSelectizeInput(session, 'tcgaWorkFlowFilter', choices = getWorkFlow(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter), server = TRUE)
if(is.null(getWorkFlow(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter))) {
shinyjs::hide("tcgaWorkFlowFilter")
} else {
shinyjs::show("tcgaWorkFlowFilter")
}
})
observe({
updateSelectizeInput(session, 'tcgaExpStrategyFilter', choices = getExpStrategy(as.logical(input$tcgaDatabase),input$tcgaPlatformFilter), server = TRUE)
if(is.null(getExpStrategy(as.logical(input$tcgaDatabase),input$tcgaPlatformFilter))) {
shinyjs::hide("tcgaExpStrategyFilter")
} else {
shinyjs::show("tcgaExpStrategyFilter")
}
})
observe({
updateSelectizeInput(session, 'tcgaProjectFilter', choices = getTCGAdisease(), server = TRUE)
updateSelectizeInput(session, 'tcgatumorClinicalFilter', choices = getTCGAdisease(), server = TRUE)
})
observe({
updateSelectizeInput(session, 'tcgaFileTypeFilter', choices = getFileType(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter), server = TRUE)
if(is.null(getFileType(as.logical(input$tcgaDatabase),input$tcgaDataCategoryFilter))) {
shinyjs::hide("tcgaFileTypeFilter")
} else {
shinyjs::show("tcgaFileTypeFilter")
}
})
observe({
input$tcgaProjectFilter
tryCatch({
clin <- getClinical.info()
if(!is.null(clin)){
updateSelectizeInput(session, 'tcgaClinicalGenderFilter', choices = unique(clin$gender), server = TRUE)
updateSelectizeInput(session, 'tcgaClinicalVitalStatusFilter', choices = unique(clin$vital_status), server = TRUE)
updateSelectizeInput(session, 'tcgaClinicalRaceFilter', choices = unique(clin$race), server = TRUE)
updateSelectizeInput(session, 'tcgaClinicalTumorStageFilter', choices = unique(clin$tumor_stage), server = TRUE)
}
shinyjs::show("tcgaClinicalGenderFilter")
shinyjs::show("tcgaClinicalVitalStatusFilter")
shinyjs::show("tcgaClinicalRaceFilter")
shinyjs::show("tcgaClinicalTumorStageFilter")
}, error = function(e){
shinyjs::hide("tcgaClinicalGenderFilter")
shinyjs::hide("tcgaClinicalVitalStatusFilter")
shinyjs::hide("tcgaClinicalRaceFilter")
shinyjs::hide("tcgaClinicalTumorStageFilter")
})
})
observeEvent(input$prepareRb, {
if(isolate({input$prepareRb})) { # Summarized Experiment
shinyjs::show("addGistic")
if(isolate({input$addGistic})){
shinyjs::show("gisticGenes")
shinyjs::show("tcgaPrepareMutPipeline")
} else {
shinyjs::hide("gisticGenes")
shinyjs::hide("tcgaPrepareMutPipeline")
}
} else {
shinyjs::hide("addGistic")
shinyjs::hide("gisticGenes")
shinyjs::hide("tcgaPrepareMutPipeline")
}
})
observeEvent(input$addGistic, {
if(input$addGistic) {
shinyjs::show("gisticGenes")
} else {
shinyjs::hide("gisticGenes")
}
})
observe({
updateSelectizeInput(session, 'tcgaDataCategoryFilter', choices = getDataCategory(as.logical(input$tcgaDatabase)), server = TRUE)
updateSelectizeInput(session, 'gisticGenes', choices = as.character(sort(TCGAbiolinks:::gene.list)), server = TRUE)
})
# Exercise 1: working with data frames (review)
# Install devtools package: allows installations from GitHub
install.packages("devtools")
# Install "fueleconomy" dataset from GitHub
devtools::install_github("hadley/fueleconomy")
# Use the `library()` function to load the "fueleconomy" package
library(fueleconomy)
# You should now have access to the `vehicles` data frame
# You can use `View()` to inspect it
View(vehicles)
# Select the different manufacturers (makes) of the cars in this data set.
# Save this vector in a variable
manufacturer_vector <- vehicles$make
print(manufacturer_vector)
# Use the `unique()` function to determine how many different car manufacturers
# are represented by the data set
manufacturer_count <- length(unique(manufacturer_vector))
# Filter the data set for vehicles manufactured in 1997
vehicles_of_1997 <- vehicles[vehicles$year == 1997, ]
print(vehicles_of_1997)
# Arrange the 1997 cars by highway (`hwy`) gas mileage
# Hint: use the `order()` function to get a vector of indices in order by value
# See also:
# https://www.r-bloggers.com/r-sorting-a-data-frame-by-the-contents-of-a-column/
vehicles_1997_by_hwy <- vehicles_of_1997[order(vehicles_of_1997$hwy), ]
# Mutate the 1997 cars data frame to add a column `average` that has the average
# gas mileage (between city and highway mpg) for each car
vehicles_of_1997$average <- (vehicles_of_1997$hwy + vehicles_of_1997$cty) / 2
# Filter the whole vehicles data set for 2-Wheel Drive vehicles that get more
# than 20 miles/gallon in the city.
# Save this new data frame in a variable.
# Of the above vehicles, what is the vehicle ID of the vehicle with the worst
# hwy mpg?
# Hint: filter for the worst vehicle, then select its ID.
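A possible solution, written as a function so it can be checked on any data frame with the same columns (`drive`, `cty`, `hwy`, and `id` are the relevant columns of `fueleconomy::vehicles`; the `"2-Wheel Drive"` level is assumed):

```r
# Filter for 2-Wheel Drive vehicles with city mpg above 20, then return the
# id of the one with the worst highway mpg.
worst_city_2wd <- function(df) {
  sub <- df[df$drive == "2-Wheel Drive" & df$cty > 20, ]
  sub$id[which.min(sub$hwy)]
}
# e.g. worst_city_2wd(vehicles)
```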
# Write a function that takes a `year_choice` and a `make_choice` as parameters,
# and returns the vehicle model that gets the most hwy miles/gallon of vehicles
# of that make in that year.
# You'll need to filter more (and do some selecting)!
# What was the most efficient Honda model of 1995?
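One way to write the requested function: filter by year and make, then select the model with the maximum highway mpg (`most_efficient` is a hypothetical name; `make`, `model`, `year`, `hwy` are the assumed column names):

```r
# Return the model with the best highway mpg for a given make and year.
most_efficient <- function(df, year_choice, make_choice) {
  sub <- df[df$year == year_choice & df$make == make_choice, ]
  sub$model[which.max(sub$hwy)]
}
# most_efficient(vehicles, 1995, "Honda")  # most efficient Honda model of 1995
```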
library(maps)
library(mapdata)
library(maptools)
library(scales)
library(mapproj)
library(plotrix)
library(zoo)
library(lmodel2)
#user <- '/Users/mariaham/CMOP'
user <- '~/Documents/DATA/SeaFlow/CMOP/CMOP_git'
#user <- NULL
stat <- read.csv(paste0(user, "/CMOP_field/model/crypto_HD_CMOP_6.binned.csv"))
stat$h.time <- as.POSIXct(stat$h.time,origin='1970-01-01',tz='GMT')
i <- min(stat$h.time, na.rm=T)
f <- max(stat$h.time, na.rm=T)
# stat$h.dr.mean <- as.numeric(c('NA', na.approx(stat$h.dr.mean)))
# stat$h.dr.sd <- as.numeric(c('NA','NA', na.approx(stat$h.dr.sd), 'NA'))
crypto <- subset(stat, h.time > as.POSIXct("2013-09-10 16:50:00") & h.time < as.POSIXct("2013-10-04 00:00:00"))
crypto.week1 <- subset(crypto , h.time < as.POSIXct("2013-09-13 16:00:00",tz='GMT'))
crypto.week2 <- subset(crypto , h.time > as.POSIXct("2013-09-16 18:00:00", tz='GMT') & h.time < as.POSIXct("2013-09-20 1:00:00", tz='GMT'))
crypto.week3 <- subset(crypto , h.time > as.POSIXct("2013-09-23 22:00:00", tz='GMT') & h.time < as.POSIXct("2013-09-27 10:00:00", tz='GMT'))
crypto.week4 <- subset(crypto , h.time > as.POSIXct("2013-09-30 18:00:00", tz='GMT') & h.time < as.POSIXct("2013-10-04 00:00:00", tz='GMT'))
time.template <- seq(i, f+60*60*48, by=60*60*24)
time.res <- cut(crypto$h.time,time.template)
daily.abun.mean <- tapply(crypto$h2.conc.mean, time.res, function(x) mean(x, na.rm=T))
daily.abun.sd <- tapply(crypto$h2.conc.sd, time.res, function(x) mean(x, na.rm=T))
daily.dr.mean <- tapply(crypto$h.dr.mean, time.res, function(x) mean(x, na.rm=T))*24
daily.dr.sd <- tapply(crypto$h.dr.sd, time.res, function(x) mean(x, na.rm=T))*24
daily.prod.mean <- c(daily.dr.mean * daily.abun.mean)
daily.prod.sd <- c(daily.dr.sd * daily.abun.sd)
daily.dr.se <- daily.dr.sd / sqrt(24)
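The `cut()`/`tapply()` pattern above (bin timestamps into daily intervals, then average within each bin) can be illustrated on a toy series; `hourly`, `vals`, `bins`, and `daily.mean` are illustrative names, not part of the CMOP data:

```r
# Toy illustration of the daily binning above: cut() assigns each timestamp
# to a daily interval, tapply() averages the values within each interval.
t0 <- as.POSIXct("2013-09-10 00:00:00", tz = "GMT")
hourly <- t0 + 3600 * (0:47)       # 48 hourly timestamps over two days
vals <- rep(c(1, 3), each = 24)    # day 1 averages to 1, day 2 to 3
bins <- cut(hourly, seq(t0, t0 + 3600 * 48, by = 3600 * 24))
daily.mean <- tapply(vals, bins, mean, na.rm = TRUE)
```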
sal <- read.csv(paste0(user, "/auxillary_data/salinityCMOP_6"))
sal$time <- as.POSIXct(strptime(sal$time.YYYY.MM.DD.hh.mm.ss.PST., "%Y/%m/%d %H:%M:%S"), tz="GMT")
sal <- subset(sal, time > i & time < f)
id <- which(diff(sal$time) > 60*60*3)
sal.LPF <- smooth.spline(sal$time, sal$water_salinity, spar=0.002)
sal.LPF$y[id] <- NA
temp.LPF <- smooth.spline(sal$time, sal$water_temperature, spar=0.002)
temp.LPF$y[id] <- NA
par <- read.csv(paste0(user, "/CMOP_field/Par_CMOP_6"))
par$time <- as.POSIXct(par$time, format="%Y/%m/%d %H:%M:%S", tz= "GMT")
par <- subset(par, time > i & time < f)
par.LPF <- smooth.spline(as.POSIXct(par$time, origin="1970-01-01", tz='GMT'), par$par, spar=0.05)
par.LPF$y[which(par.LPF$y < 40)] <- 0
par.LPF$y[which(diff(par.LPF$x)>8000)] <- NA
# oxy <- read.csv(paste0(user, "/auxillary_data/oxygenCMOP_6"))
# oxy$time <- as.POSIXct(strptime(oxy$time.YYYY.MM.DD.hh.mm.ss.PST., "%Y/%m/%d %H:%M:%S"), tz="GMT")
# oxy<- subset(oxy, time > i & time < f)
fluo <- read.csv(paste0(user, "/auxillary_data/fvfmCMOP_6"))
fluo$time <- as.POSIXct(strptime(fluo$time.YYYY.MM.DD.hh.mm.ss.PST., "%Y/%m/%d %H:%M:%S"), tz="GMT")
fluo<- subset(fluo, time > i & time < f)
id <- which(diff(fluo$time) > 60*60*3)
fluo.LPF <- smooth.spline(fluo$time, fluo$fo)#, spar=0.01, df=2)
fluo.LPF$y[id] <- NA
# tide <- read.csv(paste0(user, "/auxillary_data/elevationCMOP_6"))
# tide$time <- as.POSIXct(strptime(tide$time.YYYY.MM.DD.hh.mm.ss.PST., "%Y/%m/%d %H:%M:%S"), tz="GMT")
# tide <- subset(tide, time > i & time < f)
# tide <- subset(tide, elevation > 1)
nut <- read.csv(paste0(user, "/auxillary_data/Ribalet_nutrients2.csv"))
nut$time <- as.POSIXct(strptime(nut$time, "%m/%d/%y %H:%M"))
nut$DIN <- nut$Nitrate+nut$Nitrite+nut$Ammonium
time.template3 <- seq(min(nut$time), max(nut$time), by=60*60*24)[-1]
time.res <- cut(nut$time,time.template3)
daily.DIN <- tapply(nut$DIN , time.res, function(x) mean(x, na.rm=T))
daily.DIN.sd <- tapply(nut$DIN , time.res, function(x) sd(x, na.rm=T))
daily.PO4 <- tapply(nut$Phosphate , time.res, function(x) mean(x, na.rm=T))
daily.PO4.sd <- tapply(nut$Phosphate , time.res, function(x) sd(x, na.rm=T))
time.nut <- tapply(nut$time , time.res, function(x) mean(x, na.rm=T))
ph <- read.csv(paste0(user, "/auxillary_data/pHCMOP_6"))
ph$time <- as.POSIXct(strptime(ph$time.YYYY.MM.DD.hh.mm.ss.PST., "%Y/%m/%d %H:%M:%S"), tz="GMT")
ph <- subset(ph, time > i & time < f)
id <- which(diff(ph$time) > 60*60*3)
ph.LPF <- smooth.spline(as.POSIXct(ph$time, origin="1970-01-01", tz='GMT'), ph$ph, spar=0.05)
ph.LPF$y[id] <- NA
influx <- read.csv(paste0(user,"/CMOP_INFLUX_Sept2013/results/summary_V2-FR.csv"))
info <- read.csv(paste0(user,"/CMOP_INFLUX_Sept2013/file_names.csv"))
fcm <- merge(influx, info, by="file")
fcm$time <- as.POSIXct(fcm$time, format="%m/%d/%y %I:%M:%S %p")#+ 8*60*60
fcm <- fcm[order(fcm$time),]
pop <- subset(fcm, i =='crypto' & depth=="S")[-c(1,10),]
pop$conc <- 10^-3*pop$n/pop$vol
id <- c(1, 18, 43, 66, 144, 169, 193, 217, 314, 339, 359, 381, 396, 403,454, 478, 502)
data.influx <- data.frame(cbind(fcm=pop$conc, tlc=stat$h2.conc.mean[id]))
cor.influx <- lmodel2(tlc~ fcm, data.influx,"relative", "relative", 99)
meso <- read.csv(paste0(user,"/mesodinium/Meso.csv"))
meso$Time <- as.POSIXct(meso$Time, format="%m/%d/%Y %H:%M", tz='') + 8*3600
meso[1:2,"Time"] <- meso[1:2,"Time"] -14*60*60
meso$Meso <- meso$Meso/1000
id <- c(25, 55, 143, 179, 217, 240, 324, 337, 359, 380, 396, 455, 483, 502)
#id <- findInterval(meso$Time, crypto$h.time)
data.field <- data.frame(cbind(meso=meso$Meso, crypto=crypto$h2.conc.mean[id]))#*crypto$h.dr.mean[id]))
reg <- lmodel2(meso ~ crypto, data.field,"relative", "relative", 99)
reg.log <- lmodel2(crypto ~ meso, log10(data.field),"interval", "interval", 99)
fsc <- read.csv(paste0(user, "/CMOP_field/FSCvsPAR.csv"))
night.fsc <- subset(fsc , par < 2)
fsc[which(fsc$fsc2 == "#VALUE!"),'fsc2'] <- NA
fsc$vol <- 10^(1.2384*log10(fsc$fsc/100)+1.003)
fsc$vol.sd <- 10^(1.2384*log10(fsc$fsc.sd/100)+1.003)
fsc$diam <- 2*((fsc$vol *3)/(pi*4))^(1/3)
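The last line above recovers an equivalent spherical diameter from cell volume by inverting V = (4/3)*pi*r^3, i.e. d = 2*(3V/(4*pi))^(1/3). A quick sanity check of that inversion (`sphere_diam` is an illustrative name):

```r
# Equivalent spherical diameter from volume, the same inversion applied
# to fsc$vol above: d = 2 * (3V / (4*pi))^(1/3).
sphere_diam <- function(vol) 2 * ((vol * 3) / (pi * 4))^(1/3)
```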
m <- read.csv(paste0(user,"/Rhodo_labExperiment/model_output-V2.csv"))
m$time <- as.POSIXct(m$time, origin="1970-01-01", tz="" )
cc <- read.csv(paste0(user,"/Rhodo_labExperiment/RHODO_div-rate.csv"))[-1,]
cc$time <- as.POSIXct(cc$time, origin="1970-01-01", tz="" )
id <- findInterval(cc$time,m$time)
data.cc <- data.frame(cbind(CC=cc$div, DR=m$div.ave[id], CC.sd=na.approx(cc$div.se), DR.sd=na.approx(m$div.se[id])))
c <- lmodel2(CC ~ DR, data.cc,"relative", "relative", 99)
lab <- read.delim(paste0(user,"/Rhodo_labExperiment/stat.tab"))
lab$time <- as.POSIXct(lab$time,tz='')-8*60*60
rhodo <- subset(lab, pop == 'crypto' & flag == 0)
time.template2 <- seq(min(rhodo$time), max(rhodo$time), by=60*60)
time.res <- cut(rhodo$time,time.template2)
h.abun.mean <- tapply(rhodo$abundance, time.res, function(x) mean(x, na.rm=T))
h.abun.sd <- tapply(rhodo$abundance, time.res, function(x) sd(x, na.rm=T))
h.fsc.mean <- tapply(rhodo$fsc_small, time.res, function(x) mean(x, na.rm=T))
h.fsc.sd <- tapply(rhodo$fsc_small, time.res, function(x) sd(x, na.rm=T))
h.time <- tapply(rhodo$time, time.res, function(x) min(x, na.rm=T))
#dark cycle: GMT 16:00-23:00
night.lab <- subset( rhodo , time >= as.POSIXct("2014-09-22 16:00:00")-8*60*60 & time <= as.POSIXct("2014-09-22 23:00:00")-8*60*60)
night2.lab <- subset( rhodo , time >= as.POSIXct("2014-09-23 16:00:00")-8*60*60 & time <= as.POSIXct("2014-09-23 23:00:00")-8*60*60)
time.res <- cut(par$time,time.template)
daily.par <- tapply(par$par, time.res, function(x) mean(x, na.rm=T))
time.res <- cut(sal$time, time.template)
daily.temp <- tapply(sal$water_temperature, time.res, function(x) mean(x, na.rm=T))
daily.sal <- tapply(sal$water_salinity, time.res, function(x) mean(x, na.rm=T))
time.res <- cut(ph$time, time.template)
daily.ph <- tapply(ph$ph, time.res, function(x) mean(x, na.rm=T))
id <- c(1,3,8,14,15,16,17,22)
data <- data.frame(cbind(time= time.template[-24],
DIN=daily.DIN, P04 =daily.PO4, TEMP=daily.temp, SAL=daily.sal, PH=daily.ph, PAR=daily.par,
PROD=daily.prod.mean, DR=daily.dr.mean,PROD.sd = daily.prod.sd, DR.se = daily.dr.se,N=daily.abun.mean))
data <- data[id,]
DIN.P <- lmodel2(PROD ~ DIN, data,"relative", "relative", 99)
P04.P <- lmodel2(PROD ~ P04, data,"relative", "relative", 99)
DIN.DR <- lmodel2(DR ~ DIN, data,"relative", "relative", 99)
P04.DR <- lmodel2(DR ~ P04, data,"relative", "relative", 99)
sal.DR <- lmodel2(DR ~ SAL, data,"relative", "relative", 99)
temp.DR <- lmodel2(DR ~ TEMP, data,"relative", "relative", 99)
ph.DR <- lmodel2(DR ~ PH, data,"relative", "relative", 99)
ph.P <- lmodel2(PROD ~ PH, data,"relative", "relative", 99)
DIN.ph <- lmodel2(PH ~ DIN, data,"relative", "relative", 99)
P04.ph <- lmodel2(PH ~ P04, data,"relative", "relative", 99)
DIN.N <- lmodel2(DIN ~ N, data,"relative", "relative", 99)
P04.N <- lmodel2(P04 ~ N, data,"relative", "relative", 99)
ph.N <- lmodel2(PH ~ N, data,"relative", "relative", 99)
DR.N <- lmodel2(DR ~ N, data,"relative", "relative", 99)
###########
### MAP ###
###########
png("FigureS1.png", width=114, height=114, pointsize=8, res=600, units="mm")
par(mfrow=c(2,1), mar=c(0,0,0,0), oma=c(0,0,0,0))
map("worldHires", 'usa', xlim=c(-140, -50), ylim=c(25,50),col="grey", interior=F, fill=T)
lat<-c(46.21)
lon<-c(-123.91)
polygon(c(-125,-122.5,-122.5,-125), c(45,45,47,47), border='black', lwd=1.5)
map("worldHires", xlim=c(-124.5,-123.15), ylim=c(45.9,46.5), col="lightgrey", fill=T)
lat<-c(46.21)
lon<-c(-123.91)
points(lon, lat, pch=16, col="black", cex=3)
map.scale(-123, 46, ratio=F)
text(-124.2, 46.2, "Pacific \nOcean", cex=1.5)
text(-123.5, 46.4, "Columbia River \nEstuary", cex=1.5)
box(col='black', lwd=1.5)
dev.off()
##################
### HYDROLOGY ###
##################
png("Figure1.png", width=114*2, height=114*1.5, pointsize=8, res=600, units="mm")
par(mfrow=c(3,1), mar=c(3,2,1,2), pty="m", cex=1.2, oma=c(1,3,1,3))
plot(sal.LPF$x, sal.LPF$y, xlab="", ylab="", xlim=c(i,f), type='l', xaxt='n', yaxt='n', lwd=1.5, ylim=c(0,30))
axis(2, at=c(0, 30, 15), las=1)
axis.POSIXct(1, at=seq(i, f, by=60*60*24*6), labels=c(1,7,14,21))
mtext("salinity (psu)",side=2, cex=1.2, line=3)
par(new=T)
plot(temp.LPF$x, temp.LPF$y, xlab="", ylab="", xlim=c(i,f), type='l', xaxt='n', yaxt='n', col='darkgrey', lwd=1.5, ylim=c(13,21))
axis(4, at=c(13,17,21),las=1)
mtext("A", side=3, cex=2, adj=0)
mtext(expression(paste("temperature (",degree,"C)")),side=4, cex=1.2, line=3)
plot(fluo.LPF$x, fluo.LPF$y/1000, xlab="", ylab="", xlim=c(i,f), type='l', xaxt='n', yaxt='n', lwd=1.5)
axis(2, at=c(0.6, 1.2, 1.8), las=1)
axis.POSIXct(1, at=seq(i, f, by=60*60*24*6), labels=c(1,7,14,21))
#mtext(substitute(paste("PAR (",mu, "E m"^{-1},"s"^{-1},')')),side=2, cex=1.2, line=3)
mtext("red fluo (rfu)",side=2, cex=1.2, line=3)
mtext("B", side=3, cex=2, adj=0)
par(new=T)
plot(ph.LPF$x, ph.LPF$y, xlab="", ylab="", xlim=c(i,f), type='l', xaxt='n', yaxt='n', col='darkgrey', lwd=1.5)
axis(4, at=c(7.8, 8.1, 8.4),las=1)
mtext('pH',side=4, cex=1.2, line=3)
plotCI(time.nut, daily.DIN, daily.DIN.sd, xlab="", ylab="", xlim=c(i,f), sfrac=0, xaxt='n', yaxt='n', lwd=1.5, ylim=c(5,35),pch=16)
points(time.nut, daily.DIN, lwd=1.5)
axis(2, at=c(5, 20,35), las=1)
mtext(substitute(paste("DIN (",mu, "M)")),side=2, cex=1.2, line=3)
par(new=T)
plotCI(time.nut, daily.PO4, daily.PO4.sd, sfrac=0, xlab="", ylab="", xlim=c(i,f), pch=16, xaxt='n', yaxt='n', lwd=1.5, ylim=c(0.4,1.6),col='darkgrey')
points(time.nut, daily.PO4, lwd=1.5, col='darkgrey')
axis(4, at=c(0.4, 1,1.6), las=1)
axis.POSIXct(1, at=seq(i, f, by=60*60*24*6), labels=c(1,7,14,21))
mtext(substitute(paste("DIP (",mu, "M)")),side=4, cex=1.2, line=3)
mtext("C", side=3, cex=2, adj=0)
mtext("time (d)", side=1, cex=1.2, outer=T, line=-1)
dev.off()
###################
### ABUNDANCES ###
###################
png("FigureS3.png", width=114*1.5, height=114*1.5, pointsize=8, res=600, units="mm")
par(mfrow=c(2,1), mar=c(3,2,2,2), pty="m", cex=1.2, oma=c(1,3,1,3))
plot(stat$h.time, stat$h2.conc.mean, type='l', xaxt='n', yaxt='n', ylim=c(0.01,3), log='y')
#points(stat$h.time[id], stat$h2.conc.mean[id],col=3,pch=16)
points(pop$time, pop$conc, col=2)
axis(1, at=seq(min(stat$h.time, na.rm=T), max(stat$h.time, na.rm=T), by=60*60*24*6), labels=c(1,7,14,21))
axis(2, at=c(0.02,0.2,2),las=1)
mtext("time (d)", side=1, cex=1.2, line= 2.5)
mtext(substitute(paste("abundance (10"^{6}, " cells L"^{-1},')')), side=2, cex=1.2, line=3)
mtext("A", side=3, cex=2, adj=0)
par(pty='s')
plot(data.influx, log='xy', xlim=c(0.02,2), xaxt='n', yaxt='n', ylim=c(0.02,2), ylab=NA, xlab=NA)
axis(2, at=c(0.02,0.2,2),las=1)
axis(1, at=c(0.02,0.2,2))
mtext(substitute(paste("SeaFlow - abundance (10"^{6}, " cells L"^{-1},')')), side=2, cex=1.2, line=3)
mtext(substitute(paste("Influx - abundance (10"^{6}, " cells L"^{-1},')')), side=1, cex=1.2, line=3)
mtext("B", side=3, cex=2, adj=0)
abline(b=cor.influx$regression.results[4,3],a=cor.influx$regression.results[4,2], lty=2)
text(0.05,1,substitute(paste("R"^{2}, "=0.83")), cex=1)
dev.off()
png("Figure3.png", width=114*2, height=114*1.5, pointsize=8, res=600, units="mm")
par(mfrow=c(4,1), mar=c(3,2,1,2), pty="m", cex=1.2, oma=c(1,3,1,3))
#week 1
df <- crypto.week1
i <- min(df$h.time, na.rm=T)
f <- i + 60*60*24*3.5
plotCI(df$h.time, df$h2.conc.mean, uiw=df$h2.conc.sd, sfrac=0, xaxt='n',xlab="", lwd=2, pch=16, ylab= "", cex.lab=1.5, log="y", col="darkgrey", las=1, ylim=c(0.010, 4), xlim=c(i,f), yaxt='n')
lines(df$h.time, df$h2.conc.mean)
axis(2, at=c(0.02,0.2,2), las=1)
axis(1, at=seq(i, f, by=60*60*24), labels=seq(1,4,by=1))
rect(as.POSIXct("2013-09-10 23:51:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-11 06:13:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-11 11:52:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-11 17:41:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-12 00:55:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-12 07:24:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-12 12:44:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-12 18:43:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-13 02:11:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-13 08:41:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-13 14:05:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-13 19:58:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-14 03:28:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-14 09:54:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-14 15:29:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-14 21:18:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
mtext("A", side=3, cex=2, line=0, adj=0)
mtext(substitute(paste("abundance (10"^{6}, " cells L"^{-1},')')), side=2, cex=1.2, outer=T, line=1)
mtext("time (d)", side=1, cex=1.2, outer=T, line=-1)
points(meso$Time, meso$Meso, pch=16, cex=1.5)
#week 2
df <- crypto.week2
i <- min(df$h.time, na.rm=T)
f <- i + 60*60*24*3.5
plotCI(df$h.time, df$h2.conc.mean, uiw=df$h2.conc.sd, sfrac=0, xaxt='n',xlab="", lwd=2, pch=16, ylab= "", cex.lab=1.5, log="y", col="darkgrey", las=1, ylim=c(0.010, 4), xlim=c(i,f), yaxt='n')
lines(df$h.time, df$h2.conc.mean)
axis(2, at=c(0.02,0.2,2), las=1)
axis(1, at=seq(i, f, by=60*60*24), labels=seq(1,4,by=1)+6)
mtext("B", side=3, cex=2, adj=0)
rect(as.POSIXct("2013-09-16 17:43:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-16 23:36:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-17 06:22:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-17 12:32:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-17 18:38:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-18 00:32:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-18 07:06:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-18 13:13:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-18 19:29:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-19 01:23:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-19 07:48:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-19 13:52:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-19 20:16:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-20 02:12:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
points(meso$Time, meso$Meso, pch=16, cex=1.5)
#week 3
df <- crypto.week3
i <- min(df$h.time, na.rm=T)
f <- i + 60*60*24*3.5
plotCI(df$h.time, df$h2.conc.mean, uiw=df$h2.conc.sd, sfrac=0, xaxt='n',xlab="", lwd=2, pch=16, ylab= "", cex.lab=1.5, log="y", col="darkgrey", las=1, ylim=c(0.010, 4), xlim=c(i,f), yaxt='n')
lines(df$h.time, df$h2.conc.mean)
axis(2, at=c(0.02,0.2,2), las=1)
axis(1, at=seq(i, f, by=60*60*24), labels=seq(1,4,by=1)+14)
mtext("C", side=3, cex=2, adj=0)
rect(as.POSIXct("2013-09-23 23:07:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-24 05:22:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-24 10:57:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-24 16:47:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-24 23:52:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-25 06:16:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-25 11:40:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-25 17:28:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-26 00:43:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-26 07:16:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-26 12:33:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-26 18:18:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-27 01:45:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-27 07:24:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
points(meso$Time, meso$Meso, pch=16, cex=1.5)
#week 4
df <- crypto.week4
i <- min(df$h.time, na.rm=T)
f <- i + 60*60*24*3.5
plotCI(df$h.time, df$h2.conc.mean, uiw=df$h2.conc.sd, sfrac=0, xaxt='n',xlab="", lwd=2, pch=16, ylab= "", cex.lab=1.5, log="y", col="darkgrey", las=1, ylim=c(0.010, 4), xlim=c(i,f), yaxt='n')
lines(df$h.time, df$h2.conc.mean)
axis(2, at=c(0.02,0.2,2), las=1)
axis(1, at=seq(i, f, by=60*60*24), labels=seq(1,4,by=1)+22)
mtext("D", side=3, cex=2, adj=0)
rect(as.POSIXct("2013-09-30 17:03:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-30 22:53:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-01 05:32:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-01 11:48:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-01 17:52:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-01 23:45:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-02 06:12:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-02 12:23:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-02 18:37:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-03 00:32:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-03 06:50:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-03 12:54:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-03 19:18:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-04 01:16:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-04 07:26:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-04 13:25:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
points(meso$Time, meso$Meso, pch=16, cex=1.5)
dev.off()
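# --- Editorial sketch (not part of the original script) ---------------------
# The night-time shading above repeats the same rect() call dozens of times.
# A small helper could express the same thing once; the name `shade_night`
# and its defaults are illustrative only, mirroring the hard-coded values
# used throughout (ymin = 1e-9, ymax = 60, 15% black, GMT timestamps).
shade_night <- function(start, end, ymin = 1e-9, ymax = 60) {
  rect(as.POSIXct(start, tz = "GMT"), ymin,
       as.POSIXct(end,   tz = "GMT"), ymax,
       col = adjustcolor("black", alpha = 0.15), border = NA)
}
# e.g. shade_night("2013-10-04 07:26:00", "2013-10-04 13:25:00")
# ----------------------------------------------------------------------------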
#####################
### DIVISION RATES ###
#####################
### CULTURE
png("Figure4.png", width=114*2, height=114*2, pointsize=8, res=600, units="mm")
par(mfrow=c(3,1), mar=c(3,2,1,2), pty="m", cex=1.2, oma=c(1,3,1,3))
plotCI(h.time,h.abun.mean,uiw=h.abun.sd, sfrac=0, lwd=2, col='darkgrey', xlim=c(min(cc$time, na.rm=T),max(cc$time, na.rm=T)), ylim=c(0,30), pch=NA, xaxt='n', xlab=NA, ylab=NA, yaxt='n')
lines(h.time,h.abun.mean)
rect(min(night.lab$time), -1, max(night.lab$time), 500, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(min(night2.lab$time), -1, max(night2.lab$time), 500, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
axis.POSIXct(1, at=seq(min(cc$time, na.rm=T), max(cc$time, na.rm=T), by=60*60*6),labels=seq(1,25, 6))
axis(2, at=c(0,15,30), las=1)
mtext("A", side=3, cex=2, line=0, adj=0)
mtext(substitute(paste("abundance (10"^{6}, " cells L"^{-1},')')), side=2, cex=1.2, line=3)
par(new=TRUE)
plotCI(cc$time,cc$mean.f.G1*100, uiw=na.approx(cc$sd.f.G1*100), sfrac=0, lwd=2, col='red3', xlim=c(min(cc$time, na.rm=T),max(cc$time, na.rm=T)), ylim=c(0,100), pch=NA, xaxt='n', xlab=NA, ylab=NA, yaxt='n')
lines(cc$time,cc$mean.f.G1*100, col='red3')
plotCI(cc$time,cc$mean.f.G2*100+cc$mean.f.S*100, uiw=na.approx(cc$sd.f.G2*100+cc$sd.f.S*100), sfrac=0, lwd=2, col='seagreen3', pch=NA,add=T)
lines(cc$time,cc$mean.f.G2*100+cc$mean.f.S*100, col='seagreen3')
# plotCI(cc$time,cc$mean.f.S*100, uiw=cc$sd.f.S*100, sfrac=0, lwd=2, col='darkturquoise', pch=NA,add=T)
# lines(cc$time,cc$mean.f.S*100, col='darkturquoise')
axis(4, at=c(0,50,100), las=1)
mtext('Cells in G1 or S+G2 (%)', side=4, cex=1.2, line=2.5)
plotCI(cc$time,cc$div, uiw=na.approx(cc$div.se), sfrac=0, lwd=2, col='red3', xlim=c(min(cc$time, na.rm=T),max(cc$time, na.rm=T)), ylim=c(0,0.06), pch=NA, xaxt='n', xlab=NA, ylab=NA, yaxt='n')
lines(cc$time,cc$div, col='red3')
plotCI(m$time, m$div.ave, uiw=m$div.se, col="darkgrey",sfrac=0, lwd=2, pch=NA, add=TRUE)
lines(m$time, m$div.ave)
rect(min(night.lab$time), -1, max(night.lab$time), 1, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(min(night2.lab$time), -1, max(night2.lab$time), 1, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
axis.POSIXct(1, at=seq(min(cc$time, na.rm=T),max(cc$time, na.rm=T), by=60*60*6),labels=seq(1,25, 6))
axis(2, at=c(0,0.03,0.06), las=1)
mtext(substitute(paste("division (h"^{-1},")")), side=2, line=3, cex=1.2)
mtext("B", side=3, cex=2, line=0, adj=0)
mtext("time (h)", side=1, cex=1.2, line=2.5)
dev.off()
### FIELD
png("Figure5.png", width=114*2, height=114*2, pointsize=8, res=600, units="mm")
par(mfrow=c(3,1), mar=c(2,2,1,2), pty="m", cex=1.2, oma=c(1,3,1,1))
plotCI(fsc$time, fsc$vol, uiw=fsc$vol.sd, pch=NA, sfrac=0, col='darkgrey', yaxt='n', ylab=NA, xaxt='n', xlab=NA, lwd=1)
abline(v=night.fsc$time, lwd=1, col=adjustcolor("black", alpha=0.15))
lines(fsc$time, fsc$vol)
axis(2, at=c(20,60,100), las=1)
mtext(substitute(paste("Cell volume (", mu, "m"^{3},")")), 2, line=3 )
axis(1, at=seq(min(fsc$time, na.rm=T), max(fsc$time, na.rm=T), by=60*60*24*6), labels=c(1,7,14,21))
mtext("A", side=3, cex=2, line=0, adj=0)
plotCI(data$time, data$DR, uiw=data$DR.se, sfrac=0, xlab="", lwd=2, pch=16, ylab= "", col="darkgrey", las=1, yaxt='n', xaxt='n', xlim=range(fsc$time))
lines(data$time, data$DR)
axis(2, at=seq(0,2.5,0.5), las=1)
axis(1, at=seq(min(data$time), max(data$time), by=60*60*24*6), labels=c(1,7,14,21))
mtext(substitute(paste("division (d"^{-1},')')), side=2, cex=1.2, line=3)
mtext("time (d)", side=1, cex=1.2, line=2)# par(new=T)
mtext("B", side=3, cex=2, line=0, adj=0)
dev.off()
#####################
### CORRELATIONS ###
#####################
### PH / DIN CORRELATION
png("FigureS2.png", width=114*2, height=114, pointsize=8, res=600, units="mm")
par(mfrow=c(1,2), mar=c(3,2,2,2), pty="s", cex=1.2, oma=c(1,3,1,0))
plot(data[,c(3,6)], ylim=c(7.8,8.4), xlim=c(0,1.6), yaxt='n', xaxt='n', xlab=NA, ylab=NA)
abline(b=P04.ph$regression.results[4,3],a=P04.ph$regression.results[4,2], lty=2)
axis(2, at=c(7.8,8.1,8.4),las=1)
axis(1, at=c(0,0.8,1.6))
text(y=8.3,x=1.4,substitute(paste("R"^{2}, "=0.29")), cex=1)
mtext("pH",side=2, cex=1.2, line=3)
mtext(substitute(paste("phosphate (",mu, "M)")),side=1, cex=1.2, line=2.5)
mtext("A", side=3, cex=2, line=0, adj=0)
plot(data[,c(2,6)], ylim=c(7.8,8.4), xlim=c(5,35), yaxt='n', xaxt='n', xlab=NA, ylab=NA)
abline(b=DIN.ph$regression.results[4,3],a=DIN.ph$regression.results[4,2], lty=2)
axis(2, at=c(7.8,8.1,8.4),las=1)
axis(1, at=c(5,20,35))
text(y=8.3,x=31.5,substitute(paste("R"^{2}, "=0.37")), cex=1)
mtext(substitute(paste("DIN (",mu, "M)")),side=1, cex=1.2, line=2.5)
mtext("B", side=3, cex=2, line=0, adj=0)
dev.off()
### ABUNDANCE CORRELATION
png("FigureS4.png", width=114*2, height=114*1.5, pointsize=8, res=600, units="mm")
par(mfrow=c(1,2), mar=c(3,2,1,2), pty="s", cex=1.2, oma=c(1,3,1,0))
plot(data.field[,c(1,2)], xlab=NA, ylab=NA, yaxt='n', xaxt='n',asp=1, xlim=c(0, 0.35),ylim=c(0, 0.35))
axis(2, at=c(0, 0.15,0.3), las=1)
axis(1, at=c(0, 0.15,0.3))
abline(b=reg$regression.results[4,3],a=reg$regression.results[4,2], lty=2)
text(0.07,0.3,substitute(paste("R"^{2}, "=0.24")), cex=1)
mtext("T. amphioxeia ", side=2, cex=1.2, line=3, font=3)
mtext(substitute(paste(" (10"^{6}, " cells L"^{-1},')')), side=2, cex=1.2, line=3)
mtext(substitute(paste(" (10"^{6}, " cells L"^{-1},')')), side=1, cex=1.2, line=3)
mtext("M. major ", side=1, cex=1.2, line=2.83, font=3)
points(log(data[1,c(1,2)]), col=2)
dev.off()
### CELL CYCLE / MODEL
png("FigureS5.png", width=114, height=114, pointsize=8, res=600, units="mm")
par(pty='s')
plotCI(data.cc[,1],data.cc[,2], uiw=data.cc[,3], xlim=c(0,0.06), ylim=c(0,0.06), xaxt='n', yaxt='n', err='x', sfrac=0,
col='darkgrey', lwd=2, pch=NA, xlab=NA, ylab=NA)
plotCI(data.cc[,1],data.cc[,2], uiw=data.cc[,4], err='y', sfrac=0, col='darkgrey', lwd=2, pch=NA,add=TRUE)
points(data.cc[,1],data.cc[,2], pch=16)
abline(b=c$regression.results[4,3],a=c$regression.results[4,2], lty=2)
axis(1, at=c(0,0.03,0.06))
axis(2, at=c(0,0.03,0.06), las=1)
text(0.0075, 0.05,substitute(paste("R"^{2}, "=0.60")), cex=1)
mtext(substitute(paste("DNA-based division (h"^{-1},")")), side=1, line=3, cex=1.2)
mtext(substitute(paste("Size-based division (h"^{-1},")")), side=2, line=3, cex=1.2)
dev.off()
### DIVISION RATES
png("FigureS6.png", width=114*3, height=114, pointsize=8, res=600, units="mm")
par(mfrow=c(1,3), mar=c(3,2,2,3), pty="s", cex=1.2, oma=c(1,3,1,0))
plot(data[,c(3,9)], ylim=c(0,1.6), yaxt='n', xaxt='n', xlab=NA, ylab=NA, xlim=c(0,1.6))
axis(2, at=c(0,0.8,1.6),las=1)
axis(1, at=c(0, 0.8, 1.6))
abline(b=P04.DR$regression.results[4,3],a=P04.DR$regression.results[4,2], lty=2)
text(1.4,0.2,substitute(paste("R"^{2}, "=0.61")), cex=1)
#text(10,0.6,"p < 0.01", cex=0.75)
mtext(substitute(paste("division (d"^{-1},')')), side=2, cex=1.2, line=3)
mtext(substitute(paste("phosphate (",mu, "M)")),side=1, cex=1.2, line=2.5)
mtext("A", side=3, cex=2, adj=0)
plot(data[,c(2,9)], ylim=c(0,1.6), yaxt='n', xaxt='n', xlab=NA, ylab=NA, xlim=c(5,35))
axis(2, at=c(0,0.8,1.6),las=1)
axis(1, at=c(5,20,35))
abline(b=DIN.DR$regression.results[4,3],a=DIN.DR$regression.results[4,2], lty=2)
text(30,0.2,substitute(paste("R"^{2}, "=0.42")), cex=1)
#text(10,0.6,"p < 0.01", cex=0.75)
mtext(substitute(paste("DIN (",mu, "M)")),side=1, cex=1.2, line=2.5)
mtext("B", side=3, cex=2, adj=0)
#mtext(substitute(paste("production (10"^{6},"cell L"^{-1},"d"^{-1},')')), side=2, cex=1.2, line=3)
plot(data[,c(6,9)], ylim=c(0,1.6), xlim=c(7.8,8.4), yaxt='n', xaxt='n', xlab=NA, ylab=NA)
axis(1, at=c(7.8,8.1,8.4))
axis(2, at=c(0,0.8,1.6),las=1)
mtext("pH",side=1, cex=1.2, line=2.5)
mtext("C", side=3, cex=2, adj=0)
abline(b=ph.DR$regression.results[4,3],a=ph.DR$regression.results[4,2], lty=2)
text(8.3,1.4,substitute(paste("R"^{2}, "=0.38")), cex=1)
#mtext(substitute(paste("division (d"^{-1},')')), side=2, cex=1.2, line=3)
dev.off()
library(maps)
library(mapdata)
library(maptools)
library(scales)
library(mapproj)
library(plotrix)
library(zoo)
library(lmodel2)
#user <- '/Users/mariaham/CMOP'
user <- '~/Documents/DATA/SeaFlow/CMOP/CMOP_git'
#user <- NULL
stat <- read.csv(paste0(user, "/CMOP_field/model/crypto_HD_CMOP_6.binned.csv"))
stat$h.time <- as.POSIXct(stat$h.time,origin='1970-01-01',tz='GMT')
i <- min(stat$h.time, na.rm=T)
f <- max(stat$h.time, na.rm=T)
# stat$h.dr.mean <- as.numeric(c('NA', na.approx(stat$h.dr.mean)))
# stat$h.dr.sd <- as.numeric(c('NA','NA', na.approx(stat$h.dr.sd), 'NA'))
crypto <- subset(stat, h.time > as.POSIXct("2013-09-10 16:50:00") & h.time < as.POSIXct("2013-10-03 24:00:00"))
crypto.week1 <- subset(crypto , h.time < as.POSIXct("2013-09-13 16:00:00",tz='GMT'))
crypto.week2 <- subset(crypto , h.time > as.POSIXct("2013-09-16 18:00:00", tz='GMT') & h.time < as.POSIXct("2013-09-20 1:00:00", tz='GMT'))
crypto.week3 <- subset(crypto , h.time > as.POSIXct("2013-09-23 22:00:00", tz='GMT') & h.time < as.POSIXct("2013-09-27 10:00:00", tz='GMT'))
crypto.week4 <- subset(crypto , h.time > as.POSIXct("2013-09-30 18:00:00", tz='GMT') & h.time < as.POSIXct("2013-10-03 24:00:00", tz='GMT'))
time.template <- seq(i, f+60*60*48, by=60*60*24)
time.res <- cut(crypto$h.time,time.template)
daily.abun.mean <- tapply(crypto$h2.conc.mean, time.res, function(x) mean(x, na.rm=T))
daily.abun.sd <- tapply(crypto$h2.conc.sd, time.res, function(x) mean(x, na.rm=T))
daily.dr.mean <- tapply(crypto$h.dr.mean, time.res, function(x) mean(x, na.rm=T))*24
daily.dr.sd <- tapply(crypto$h.dr.sd, time.res, function(x) mean(x, na.rm=T))*24
daily.prod.mean <- c(daily.dr.mean * daily.abun.mean)
daily.prod.sd <- c(daily.dr.sd * daily.abun.sd)
daily.dr.se <- daily.dr.sd / sqrt(24)
sal <- read.csv(paste0(user, "/auxillary_data/salinityCMOP_6"))
sal$time <- as.POSIXct(strptime(sal$time.YYYY.MM.DD.hh.mm.ss.PST., "%Y/%m/%d %H:%M:%S"), tz="GMT")
sal <- subset(sal, time > i & time < f)
id <- which(diff(sal$time) > 60*60*3)
sal.LPF <- smooth.spline(sal$time, sal$water_salinity, spar=0.002)
sal.LPF$y[id] <- NA
temp.LPF <- smooth.spline(sal$time, sal$water_temperature, spar=0.002)
temp.LPF$y[id] <- NA
par <- read.csv(paste0(user, "/CMOP_field/Par_CMOP_6"))
par$time <- as.POSIXct(par$time, format="%Y/%m/%d %H:%M:%S", tz= "GMT")
par <- subset(par, time > i & time < f)
par.LPF <- smooth.spline(as.POSIXct(par$time, origin="1970-01-01", tz='GMT'), par$par, spar=0.05)
par.LPF$y[which(par.LPF$y < 40)] <- 0
par.LPF$y[which(diff(par.LPF$x)>8000)] <- NA
# oxy <- read.csv(paste0(user, "/auxillary_data/oxygenCMOP_6"))
# oxy$time <- as.POSIXct(strptime(oxy$time.YYYY.MM.DD.hh.mm.ss.PST., "%Y/%m/%d %H:%M:%S"), tz="GMT")
# oxy<- subset(oxy, time > i & time < f)
fluo <- read.csv(paste0(user, "/auxillary_data/fvfmCMOP_6"))
fluo$time <- as.POSIXct(strptime(fluo$time.YYYY.MM.DD.hh.mm.ss.PST., "%Y/%m/%d %H:%M:%S"), tz="GMT")
fluo<- subset(fluo, time > i & time < f)
id <- which(diff(fluo$time) > 60*60*3)
fluo.LPF <- smooth.spline(fluo$time, fluo$fo)#, spar=0.01, df=2)
fluo.LPF$y[id] <- NA
# tide <- read.csv(paste0(user, "/auxillary_data/elevationCMOP_6"))
# tide$time <- as.POSIXct(strptime(tide$time.YYYY.MM.DD.hh.mm.ss.PST., "%Y/%m/%d %H:%M:%S"), tz="GMT")
# tide <- subset(tide, time > i & time < f)
# tide <- subset(tide, elevation > 1)
nut <- read.csv(paste0(user, "/auxillary_data/Ribalet_nutrients2.csv"))
nut$time <- as.POSIXct(strptime(nut$time, "%m/%d/%y %H:%M"))
nut$DIN <- nut$Nitrate+nut$Nitrite+nut$Ammonium
time.template3 <- seq(min(nut$time), max(nut$time), by=60*60*24)[-1]
time.res <- cut(nut$time,time.template3)
daily.DIN <- tapply(nut$DIN , time.res, function(x) mean(x, na.rm=T))
daily.DIN.sd <- tapply(nut$DIN , time.res, function(x) sd(x, na.rm=T))
daily.PO4 <- tapply(nut$Phosphate , time.res, function(x) mean(x, na.rm=T))
daily.PO4.sd <- tapply(nut$Phosphate , time.res, function(x) sd(x, na.rm=T))
time.nut <- tapply(nut$time , time.res, function(x) mean(x, na.rm=T))
ph <- read.csv(paste0(user, "/auxillary_data/pHCMOP_6"))
ph$time <- as.POSIXct(strptime(ph$time.YYYY.MM.DD.hh.mm.ss.PST., "%Y/%m/%d %H:%M:%S"), tz="GMT")
ph <- subset(ph, time > i & time < f)
id <- which(diff(ph$time) > 60*60*3)
ph.LPF <- smooth.spline(as.POSIXct(ph$time, origin="1970-01-01", tz='GMT'), ph$ph, spar=0.05)
ph.LPF$y[id] <- NA
influx <- read.csv(paste0(user,"/CMOP_INFLUX_Sept2013/results/summary_V2-FR.csv"))
info <- read.csv(paste0(user,"/CMOP_INFLUX_Sept2013/file_names.csv"))
fcm <- merge(influx, info, by="file")
fcm$time <- as.POSIXct(fcm$time, format="%m/%d/%y %I:%M:%S %p")#+ 8*60*60
fcm <- fcm[order(fcm$time),]
pop <- subset(fcm, i =='crypto' & depth=="S")[-c(1,10),]
pop$conc <- 10^-3*pop$n/pop$vol
id <- c(1, 18, 43, 66, 144, 169, 193, 217, 314, 339, 359, 381, 396, 403,454, 478, 502)
data.influx <- data.frame(cbind(fcm=pop$conc, tlc=stat$h2.conc.mean[id]))
cor.influx <- lmodel2(tlc~ fcm, data.influx,"relative", "relative", 99)
meso <- read.csv(paste0(user,"/mesodinium/Meso.csv"))
meso$Time <- as.POSIXct(meso$Time, format="%m/%d/%Y %H:%M", tz='') + 8*3600
meso[1:2,"Time"] <- meso[1:2,"Time"] -14*60*60
meso$Meso <- meso$Meso/1000
id <- c(25, 55, 143, 179, 217, 240, 324, 337, 359, 380, 396, 455, 483, 502)
#id <- findInterval(meso$Time, crypto$h.time)
data.field <- data.frame(cbind(meso=meso$Meso, crypto=crypto$h2.conc.mean[id]))#*crypto$h.dr.mean[id]))
reg <- lmodel2(meso ~ crypto, data.field,"relative", "relative", 99)
reg.log <- lmodel2(crypto ~ meso, log10(data.field),"interval", "interval", 99)
fsc <- read.csv(paste0(user, "/CMOP_field/FSCvsPAR.csv"))
night.fsc <- subset(fsc , par < 2)
fsc[which(fsc$fsc2 == "#VALUE!"),'fsc2'] <- NA
fsc$vol <- 10^(1.2384*log10(fsc$fsc/100)+1.003)
fsc$vol.sd <- 10^(1.2384*log10(fsc$fsc.sd/100)+1.003)
fsc$diam <- 2*((fsc$vol *3)/(pi*4))^(1/3)
m <- read.csv(paste0(user,"/Rhodo_labExperiment/model_output-V2.csv"))
m$time <- as.POSIXct(m$time, origin="1970-01-01", tz="" )
cc <- read.csv(paste0(user,"/Rhodo_labExperiment/RHODO_div-rate.csv"))[-1,]
cc$time <- as.POSIXct(cc$time, origin="1970-01-01", tz="" )
id <- findInterval(cc$time,m$time)
data.cc <- data.frame(cbind(CC=cc$div, DR=m$div.ave[id], CC.sd=na.approx(cc$div.se), DR.sd=na.approx(m$div.se[id])))
c <- lmodel2(CC ~ DR, data.cc,"relative", "relative", 99)
lab <- read.delim(paste0(user,"/Rhodo_labExperiment/stat.tab"))
lab$time <- as.POSIXct(lab$time,tz='')-8*60*60
rhodo <- subset(lab, pop == 'crypto' & flag == 0)
time.template2 <- seq(min(rhodo$time), max(rhodo$time), by=60*60)
time.res <- cut(rhodo$time,time.template2)
h.abun.mean <- tapply(rhodo$abundance, time.res, function(x) mean(x, na.rm=T))
h.abun.sd <- tapply(rhodo$abundance, time.res, function(x) sd(x, na.rm=T))
h.fsc.mean <- tapply(rhodo$fsc_small, time.res, function(x) mean(x, na.rm=T))
h.fsc.sd <- tapply(rhodo$fsc_small, time.res, function(x) sd(x, na.rm=T))
h.time <- tapply(rhodo$time, time.res, function(x) min(x, na.rm=T))
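# --- Editorial sketch (not part of the original script) ---------------------
# The tapply/cut pattern above computes hourly summaries, and the same idiom
# recurs throughout this file for daily binning. A generic helper
# (illustrative name `bin_stat`) would make each of those blocks a one-liner:
bin_stat <- function(x, time, breaks, fun = mean) {
  tapply(x, cut(time, breaks), function(v) fun(v, na.rm = TRUE))
}
# e.g. h.abun.mean <- bin_stat(rhodo$abundance, rhodo$time, time.template2)
#      h.abun.sd   <- bin_stat(rhodo$abundance, rhodo$time, time.template2, sd)
# ----------------------------------------------------------------------------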
#dark cycle: GMT 16:00-23:00
night.lab <- subset( rhodo , time >= as.POSIXct("2014-09-22 16:00:00")-8*60*60 & time <= as.POSIXct("2014-09-22 23:00:00")-8*60*60)
night2.lab <- subset( rhodo , time >= as.POSIXct("2014-09-23 16:00:00")-8*60*60 & time <= as.POSIXct("2014-09-23 23:00:00")-8*60*60)
time.res <- cut(par$time,time.template)
daily.par <- tapply(par$par, time.res, function(x) mean(x, na.rm=T))
time.res <- cut(sal$time, time.template)
daily.temp <- tapply(sal$water_temperature, time.res, function(x) mean(x, na.rm=T))
daily.sal <- tapply(sal$water_salinity, time.res, function(x) mean(x, na.rm=T))
time.res <- cut(ph$time, time.template)
daily.ph <- tapply(ph$ph, time.res, function(x) mean(x, na.rm=T))
id <- c(1,3,8,14,15,16,17,22)
data <- data.frame(cbind(time= time.template[-24],
DIN=daily.DIN, P04 =daily.PO4, TEMP=daily.temp, SAL=daily.sal, PH=daily.ph, PAR=daily.par,
PROD=daily.prod.mean, DR=daily.dr.mean,PROD.sd = daily.prod.sd, DR.se = daily.dr.se,N=daily.abun.mean))
data <- data[id,]
DIN.P <- lmodel2(PROD ~ DIN, data,"relative", "relative", 99)
P04.P <- lmodel2(PROD ~ P04, data,"relative", "relative", 99)
DIN.DR <- lmodel2(DR ~ DIN, data,"relative", "relative", 99)
P04.DR <- lmodel2(DR ~ P04, data,"relative", "relative", 99)
sal.DR <- lmodel2(DR ~ SAL, data,"relative", "relative", 99)
temp.DR <- lmodel2(DR ~ TEMP, data,"relative", "relative", 99)
ph.DR <- lmodel2(DR ~ PH, data,"relative", "relative", 99)
ph.P <- lmodel2(PROD ~ PH, data,"relative", "relative", 99)
DIN.ph <- lmodel2(PH ~ DIN, data,"relative", "relative", 99)
P04.ph <- lmodel2(PH ~ P04, data,"relative", "relative", 99)
DIN.N <- lmodel2(DIN ~ N, data,"relative", "relative", 99)
P04.N <- lmodel2(P04 ~ N, data,"relative", "relative", 99)
ph.N <- lmodel2(PH ~ N, data,"relative", "relative", 99)
DR.N <- lmodel2(DR ~ N, data,"relative", "relative", 99)
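# --- Editorial note (assumption, not part of the original script) -----------
# The abline() calls later in the file index the lmodel2 output as
# regression.results[4, 2:3]; with four methods fitted, row 4 is assumed to
# be the RMA (ranged major axis) fit, where column 2 holds the intercept and
# column 3 the slope. A small accessor (illustrative name `rma_coefs`) would
# keep that indexing in one place:
rma_coefs <- function(fit) {
  c(a = fit$regression.results[4, 2],  # intercept
    b = fit$regression.results[4, 3])  # slope
}
# e.g. abline(rma_coefs(DIN.DR)["a"], rma_coefs(DIN.DR)["b"], lty = 2)
# ----------------------------------------------------------------------------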
###########
### MAP ###
###########
png("FigureS1.png", width=114, height=114, pointsize=8, res=600, units="mm")
par(mfrow=c(2,1), mar=c(0,0,0,0), oma=c(0,0,0,0))
map("worldHires", 'usa', xlim=c(-140, -50), ylim=c(25,50),col="grey", interior=F, fill=T)
lat<-c(46.21)
lon<-c(-123.91)
polygon(c(-125,-122.5,-122.5,-125), c(45,45,47,47), border='black', lwd=1.5)
map("worldHires", xlim=c(-124.5,-123.15), ylim=c(45.9,46.5), col="lightgrey", fill=T)
lat<-c(46.21)
lon<-c(-123.91)
points(lon, lat, pch=16, col="black", cex=3)
map.scale(-123, 46, ratio=F)
text(-124.2, 46.2, "Pacific \nOcean", cex=1.5)
text(-123.5, 46.4, "Columbia River \nEstuary", cex=1.5)
box(col='black', lwd=1.5)
dev.off()
##################
### HYDROLOGY ###
##################
png("Figure1.png", width=114*2, height=114*1.5, pointsize=8, res=600, units="mm")
par(mfrow=c(3,1), mar=c(3,2,1,2), pty="m", cex=1.2, oma=c(1,3,1,3))
plot(sal.LPF$x, sal.LPF$y, xlab="", ylab="", xlim=c(i,f), type='l', xaxt='n', yaxt='n', lwd=1.5, ylim=c(0,30))
axis(2, at=c(0, 30, 15), las=1)
axis.POSIXct(1, at=seq(i, f, by=60*60*24*6), labels=c(1,7,14,21))
mtext("salinity (psu)",side=2, cex=1.2, line=3)
par(new=T)
plot(temp.LPF$x, temp.LPF$y, xlab="", ylab="", xlim=c(i,f), type='l', xaxt='n', yaxt='n', col='darkgrey', lwd=1.5, ylim=c(13,21))
axis(4, at=c(13,17,21),las=1)
mtext("A", side=3, cex=2, adj=0)
mtext(expression(paste("temperature (",degree,"C)")),side=4, cex=1.2, line=3)
plot(fluo.LPF$x, fluo.LPF$y/1000, xlab="", ylab="", xlim=c(i,f), type='l', xaxt='n', yaxt='n', lwd=1.5)
axis(2, at=c(0.6, 1.2, 1.8), las=1)
axis.POSIXct(1, at=seq(i, f, by=60*60*24*6), labels=c(1,7,14,21))
#mtext(substitute(paste("PAR (",mu, "E m"^{-1},"s"^{-1},')')),side=2, cex=1.2, line=3)
mtext("red fluo (rfu)",side=2, cex=1.2, line=3)
mtext("B", side=3, cex=2, adj=0)
par(new=T)
plot(ph.LPF$x, ph.LPF$y, xlab="", ylab="", xlim=c(i,f), type='l', xaxt='n', yaxt='n', col='darkgrey', lwd=1.5)
axis(4, at=c(7.8, 8.1, 8.4),las=1)
mtext('pH',side=4, cex=1.2, line=3)
plotCI(time.nut, daily.DIN, daily.DIN.sd, xlab="", ylab="", xlim=c(i,f), sfrac=0, xaxt='n', yaxt='n', lwd=1.5, ylim=c(5,35),pch=16)
points(time.nut, daily.DIN, lwd=1.5)
axis(2, at=c(5, 20,35), las=1)
mtext(substitute(paste("DIN (",mu, "M)")),side=2, cex=1.2, line=3)
par(new=T)
plotCI(time.nut, daily.PO4, daily.PO4.sd, sfrac=0, xlab="", ylab="", xlim=c(i,f), pch=16, xaxt='n', yaxt='n', lwd=1.5, ylim=c(0.4,1.6),col='darkgrey')
points(time.nut, daily.PO4, lwd=1.5, col='darkgrey')
axis(4, at=c(0.4, 1,1.6), las=1)
axis.POSIXct(1, at=seq(i, f, by=60*60*24*6), labels=c(1,7,14,21))
mtext(substitute(paste("DIP (",mu, "M)")),side=4, cex=1.2, line=3)
mtext("C", side=3, cex=2, adj=0)
mtext("time (d)", side=1, cex=1.2, outer=T, line=-1)
dev.off()
###################
### ABUNDANCES ###
###################
png("FigureS3.png", width=114*1.5, height=114*1.5, pointsize=8, res=600, units="mm")
par(mfrow=c(2,1), mar=c(3,2,2,2), pty="m", cex=1.2, oma=c(1,3,1,3))
plot(stat$h.time, stat$h2.conc.mean, type='l', xaxt='n', yaxt='n', ylim=c(0.01,3), log='y')
#points(stat$h.time[id], stat$h2.conc.mean[id],col=3,pch=16)
points(pop$time, pop$conc, col=2)
axis(1, at=seq(min(stat$h.time, na.rm=T), max(stat$h.time, na.rm=T), by=60*60*24*6), labels=c(1,7,14,21))
axis(2, at=c(0.02,0.2,2),las=1)
mtext("time (d)", side=1, cex=1.2, line= 2.5)
mtext(substitute(paste("abundance (10"^{6}, " cells L"^{-1},')')), side=2, cex=1.2, line=3)
mtext("A", side=3, cex=2, adj=0)
par(pty='s')
plot(data.influx, log='xy', xlim=c(0.02,2), xaxt='n', yaxt='n', ylim=c(0.02,2), ylab=NA, xlab=NA)
axis(2, at=c(0.02,0.2,2),las=1)
axis(1, at=c(0.02,0.2,2))
mtext(substitute(paste("SeaFlow - abundance (10"^{6}, " cells L"^{-1},')')), side=2, cex=1.2, line=3)
mtext(substitute(paste("Influx - abundance (10"^{6}, " cells L"^{-1},')')), side=1, cex=1.2, line=3)
mtext("B", side=3, cex=2, adj=0)
abline(b=cor.influx$regression.results[4,3],a=cor.influx$regression.results[4,2], lty=2)
text(0.05,1,substitute(paste("R"^{2}, "=0.83")), cex=1)
dev.off()
png("Figure3.png", width=114*2, height=114*1.5, pointsize=8, res=600, units="mm")
par(mfrow=c(4,1), mar=c(3,2,1,2), pty="m", cex=1.2, oma=c(1,3,1,3))
#week 1
df <- crypto.week1
i <- min(df$h.time, na.rm=T)
f <- i + 60*60*24*3.5
plotCI(df$h.time, df$h2.conc.mean, uiw=df$h2.conc.sd, sfrac=0, xaxt='n',xlab="", lwd=2, pch=16, ylab= "", cex.lab=1.5, log="y", col="darkgrey", las=1, ylim=c(0.010, 4), xlim=c(i,f), yaxt='n')
lines(df$h.time, df$h2.conc.mean)
axis(2, at=c(0.02,0.2,2), las=1)
axis(1, at=seq(i, f, by=60*60*24), labels=seq(1,4,by=1))
rect(as.POSIXct("2013-09-10 23:51:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-11 06:13:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-11 11:52:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-11 17:41:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-12 00:55:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-12 07:24:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-12 12:44:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-12 18:43:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-13 02:11:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-13 08:41:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-13 14:05:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-13 19:58:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-14 03:28:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-14 09:54:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-14 15:29:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-14 21:18:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
mtext("A", side=3, cex=2, line=0, adj=0)
mtext(substitute(paste("abundance (10"^{6}, " cells L"^{-1},')')), side=2, cex=1.2, outer=T, line=1)
mtext("time (d)", side=1, cex=1.2, outer=T, line=-1)
points(meso$Time, meso$Meso, pch=16, cex=1.5)
#week 2
df <- crypto.week2
i <- min(df$h.time, na.rm=T)
f <- i + 60*60*24*3.5
plotCI(df$h.time, df$h2.conc.mean, uiw=df$h2.conc.sd, sfrac=0, xaxt='n',xlab="", lwd=2, pch=16, ylab= "", cex.lab=1.5, log="y", col="darkgrey", las=1, ylim=c(0.010, 4), xlim=c(i,f), yaxt='n')
lines(df$h.time, df$h2.conc.mean)
axis(2, at=c(0.02,0.2,2), las=1)
axis(1, at=seq(i, f, by=60*60*24), labels=seq(1,4,by=1)+6)
mtext("B", side=3, cex=2, adj=0)
rect(as.POSIXct("2013-09-16 17:43:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-16 23:36:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-17 06:22:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-17 12:32:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-17 18:38:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-18 00:32:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-18 07:06:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-18 13:13:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-18 19:29:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-19 01:23:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-19 07:48:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-19 13:52:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-19 20:16:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-20 02:12:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
points(meso$Time, meso$Meso, pch=16, cex=1.5)
#week 3
df <- crypto.week3
i <- min(df$h.time, na.rm=T)
f <- i + 60*60*24*3.5
plotCI(df$h.time, df$h2.conc.mean, uiw=df$h2.conc.sd, sfrac=0, xaxt='n',xlab="", lwd=2, pch=16, ylab= "", cex.lab=1.5, log="y", col="darkgrey", las=1, ylim=c(0.010, 4), xlim=c(i,f), yaxt='n')
lines(df$h.time, df$h2.conc.mean)
axis(2, at=c(0.02,0.2,2), las=1)
axis(1, at=seq(i, f, by=60*60*24), labels=seq(1,4,by=1)+14)
mtext("C", side=3, cex=2, adj=0)
rect(as.POSIXct("2013-09-23 23:07:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-24 05:22:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-24 10:57:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-24 16:47:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-24 23:52:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-25 06:16:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-25 11:40:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-25 17:28:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-26 00:43:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-26 07:16:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-26 12:33:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-26 18:18:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-09-27 01:45:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-27 07:24:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
points(meso$Time, meso$Meso, pch=16, cex=1.5)
#week 4
df <- crypto.week4
i <- min(df$h.time, na.rm=T)
f <- i + 60*60*24*3.5
plotCI(df$h.time, df$h2.conc.mean, uiw=df$h2.conc.sd, sfrac=0, xaxt='n',xlab="", lwd=2, pch=16, ylab= "", cex.lab=1.5, log="y", col="darkgrey", las=1, ylim=c(0.010, 4), xlim=c(i,f), yaxt='n')
lines(df$h.time, df$h2.conc.mean)
axis(2, at=c(0.02,0.2,2), las=1)
axis(1, at=seq(i, f, by=60*60*24), labels=seq(1,4,by=1)+22)
mtext("D", side=3, cex=2, adj=0)
rect(as.POSIXct("2013-09-30 17:03:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-09-30 22:53:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-01 05:32:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-01 11:48:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-01 17:52:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-01 23:45:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-02 06:12:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-02 12:23:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-02 18:37:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-03 00:32:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-03 06:50:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-03 12:54:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-03 19:18:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-04 01:16:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(as.POSIXct("2013-10-04 07:26:00", origin="1970-01-01", tz='GMT'), 0.000000001, as.POSIXct("2013-10-04 13:25:00", origin="1970-01-01", tz='GMT'), 60.0, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
points(meso$Time, meso$Meso, pch=16, cex=1.5)
dev.off()
#####################
### DIVISION RATES ###
#####################
### CULTURE
png("Figure4.png", width=114*2, height=114*2, pointsize=8, res=600, units="mm")
par(mfrow=c(3,1), mar=c(3,2,1,2), pty="m", cex=1.2, oma=c(1,3,1,3))
plotCI(h.time,h.abun.mean,uiw=h.abun.sd, sfrac=0, lwd=2, col='darkgrey', xlim=c(min(cc$time, na.rm=T),max(cc$time, na.rm=T)), ylim=c(0,30), pch=NA, xaxt='n', xlab=NA, ylab=NA, yaxt='n')
lines(h.time,h.abun.mean)
rect(min(night.lab$time), -1, max(night.lab$time), 500, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(min(night2.lab$time), -1, max(night2.lab$time), 500, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
axis.POSIXct(1, at=seq(min(cc$time, na.rm=T), max(cc$time, na.rm=T), by=60*60*6),labels=seq(1,25, 6))
axis(2, at=c(0,15,30), las=1)
mtext("A", side=3, cex=2, line=0, adj=0)
mtext(substitute(paste("abundance (10"^{6}, " cells L"^{-1},')')), side=2, cex=1.2, line=3)
par(new=TRUE)
plotCI(cc$time,cc$mean.f.G1*100, uiw=na.approx(cc$sd.f.G1*100), sfrac=0, lwd=2, col='red3', xlim=c(min(cc$time, na.rm=T),max(cc$time, na.rm=T)), ylim=c(0,100), pch=NA, xaxt='n', xlab=NA, ylab=NA, yaxt='n')
lines(cc$time,cc$mean.f.G1*100, col='red3')
plotCI(cc$time,cc$mean.f.G2*100+cc$mean.f.S*100, uiw=na.approx(cc$sd.f.G2*100+cc$sd.f.S*100), sfrac=0, lwd=2, col='seagreen3', pch=NA,add=T)
lines(cc$time,cc$mean.f.G2*100+cc$mean.f.S*100, col='seagreen3')
# plotCI(cc$time,cc$mean.f.S*100, uiw=cc$sd.f.S*100, sfrac=0, lwd=2, col='darkturquoise', pch=NA,add=T)
# lines(cc$time,cc$mean.f.S*100, col='darkturquoise')
axis(4, at=c(0,50,100), las=1)
mtext('Cells in G1 or S+G2 (%)', side=4, cex=1.2, line=2.5)
plotCI(cc$time,cc$div, uiw=na.approx(cc$div.se), sfrac=0, lwd=2, col='red3', xlim=c(min(cc$time, na.rm=T),max(cc$time, na.rm=T)), ylim=c(0,0.06), pch=NA, xaxt='n', xlab=NA, ylab=NA, yaxt='n')
lines(cc$time,cc$div, col='red3')
plotCI(m$time, m$div.ave, uiw=m$div.se, col="darkgrey",sfrac=0, lwd=2, pch=NA, add=TRUE)
lines(m$time, m$div.ave)
rect(min(night.lab$time), -1, max(night.lab$time), 1, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
rect(min(night2.lab$time), -1, max(night2.lab$time), 1, density=NULL, col=adjustcolor("black", alpha=0.15), border=NA)
axis.POSIXct(1, at=seq(min(cc$time, na.rm=T),max(cc$time, na.rm=T), by=60*60*6),labels=seq(1,25, 6))
axis(2, at=c(0,0.03,0.06), las=1)
mtext(substitute(paste("division (h"^{-1},")")), side=2, line=3, cex=1.2)
mtext("B", side=3, cex=2, line=0, adj=0)
mtext("time (h)", side=1, cex=1.2, line=2.5)
dev.off()
### FIELD
png("Figure5.png", width=114*2, height=114*2, pointsize=8, res=600, units="mm")
par(mfrow=c(3,1), mar=c(2,2,1,2), pty="m", cex=1.2, oma=c(1,3,1,1))
plotCI(fsc$time, fsc$vol, uiw=fsc$vol.sd, pch=NA, sfrac=0, col='darkgrey', yaxt='n', ylab=NA, xaxt='n', xlab=NA, lwd=1)
abline(v=night.fsc$time, lwd=1, col=adjustcolor("black", alpha=0.15))
lines(fsc$time, fsc$vol)
axis(2, at=c(20,60,100), las=1)
mtext(substitute(paste("Cell volume (", mu, "m"^{3},")")), 2, line=3 )
axis(1, at=seq(min(fsc$time, na.rm=T), max(fsc$time, na.rm=T), by=60*60*24*6), labels=c(1,7,14,21))
mtext("A", side=3, cex=2, line=0, adj=0)
plotCI(data$time, data$DR, uiw=data$DR.se, sfrac=0, xlab="", lwd=2, pch=16, ylab= "", col="darkgrey", las=1, yaxt='n', xaxt='n', xlim=range(fsc$time))
lines(data$time, data$DR)
axis(2, at=seq(0,2.5,0.5), las=1)
axis(1, at=seq(min(data$time), max(data$time), by=60*60*24*6), labels=c(1,7,14,21))
mtext(substitute(paste("division (d"^{-1},')')), side=2, cex=1.2, line=3)
mtext("time (d)", side=1, cex=1.2, line=2)# par(new=T)
mtext("B", side=3, cex=2, line=0, adj=0)
dev.off()
#####################
### CORRELATIONS ###
#####################
### PH / DIN CORRELATION
png("FigureS2.png", width=114*2, height=114, pointsize=8, res=600, units="mm")
par(mfrow=c(1,2), mar=c(3,2,2,2), pty="s", cex=1.2, oma=c(1,3,1,0))
plot(data[,c(3,6)], ylim=c(7.8,8.4), xlim=c(0,1.6), yaxt='n', xaxt='n', xlab=NA, ylab=NA)
abline(b=P04.ph$regression.results[4,3],a=P04.ph$regression.results[4,2], lty=2)
axis(2, at=c(7.8,8.1,8.4),las=1)
axis(1, at=c(0,0.8,1.6))
text(y=8.3,x=1.4,substitute(paste("R"^{2}, "=0.29")), cex=1)
mtext("pH",side=2, cex=1.2, line=3)
mtext(substitute(paste("phosphate (",mu, "M)")),side=1, cex=1.2, line=2.5)
mtext("A", side=3, cex=2, line=0, adj=0)
plot(data[,c(2,6)], ylim=c(7.8,8.4), xlim=c(5,35), yaxt='n', xaxt='n', xlab=NA, ylab=NA)
abline(b=DIN.ph$regression.results[4,3],a=DIN.ph$regression.results[4,2], lty=2)
axis(2, at=c(7.8,8.1,8.4),las=1)
axis(1, at=c(5,20,35))
text(y=8.3,x=31.5,substitute(paste("R"^{2}, "=0.37")), cex=1)
mtext(substitute(paste("DIN (",mu, "M)")),side=1, cex=1.2, line=2.5)
mtext("B", side=3, cex=2, line=0, adj=0)
dev.off()
### ABUNDANCE CORRELATION
png("FigureS4.png", width=114*2, height=114*1.5, pointsize=8, res=600, units="mm")
par(mfrow=c(1,2), mar=c(3,2,1,2), pty="s", cex=1.2, oma=c(1,3,1,0))
plot(data.field[,c(1,2)], xlab=NA, ylab=NA, yaxt='n', xaxt='n',asp=1, xlim=c(0, 0.35),ylim=c(0, 0.35))
axis(2, at=c(0, 0.15,0.3), las=1)
axis(1, at=c(0, 0.15,0.3))
abline(b=reg$regression.results[4,3],a=reg$regression.results[4,2], lty=2)
text(0.07,0.3,substitute(paste("R"^{2}, "=0.24")), cex=1)
mtext("T. amphioxeia ", side=2, cex=1.2, line=3, font=3)
mtext(substitute(paste(" (10"^{6}, " cells L"^{-1},')')), side=2, cex=1.2, line=3)
mtext(substitute(paste(" (10"^{6}, " cells L"^{-1},')')), side=1, cex=1.2, line=3)
mtext("M. major ", side=1, cex=1.2, line=2.83, font=3)
points(log(data[1,c(1,2)]), col=2)
dev.off()
### CELL CYCLE / MODEL
png("FigureS5.png", width=114, height=114, pointsize=8, res=600, units="mm")
par(pty='s')
plotCI(data.cc[,1],data.cc[,2], uiw=data.cc[,3], xlim=c(0,0.06), ylim=c(0,0.06), xaxt='n', yaxt='n', err='x', sfrac=0,
col='darkgrey', lwd=2, pch=NA, xlab=NA, ylab=NA)
plotCI(data.cc[,1],data.cc[,2], uiw=data.cc[,4], err='y', sfrac=0, col='darkgrey', lwd=2, pch=NA,add=TRUE)
points(data.cc[,1],data.cc[,2], pch=16)
abline(b=c$regression.results[4,3],a=c$regression.results[4,2], lty=2)
axis(1, at=c(0,0.03,0.06))
axis(2, at=c(0,0.03,0.06), las=1)
text(0.0075, 0.05,substitute(paste("R"^{2}, "=0.60")), cex=1)
mtext(substitute(paste("DNA-based division (h"^{-1},")")), side=1, line=3, cex=1.2)
mtext(substitute(paste("Size-based division (h"^{-1},")")), side=2, line=3, cex=1.2)
dev.off()
### DIVISION RATES
png("FigureS6.png", width=114*3, height=114, pointsize=8, res=600, units="mm")
par(mfrow=c(1,3), mar=c(3,2,2,3), pty="s", cex=1.2, oma=c(1,3,1,0))
plot(data[,c(3,9)], ylim=c(0,1.6), yaxt='n', xaxt='n', xlab=NA, ylab=NA, xlim=c(0,1.6))
axis(2, at=c(0,0.8,1.6),las=1)
axis(1, at=c(0, 0.8, 1.6))
abline(b=P04.DR$regression.results[4,3],a=P04.DR$regression.results[4,2], lty=2)
text(1.4,0.2,substitute(paste("R"^{2}, "=0.61")), cex=1)
#text(10,0.6,"p < 0.01", cex=0.75)
mtext(substitute(paste("division (d"^{-1},')')), side=2, cex=1.2, line=3)
mtext(substitute(paste("phosphate (",mu, "M)")),side=1, cex=1.2, line=2.5)
mtext("A", side=3, cex=2, adj=0)
plot(data[,c(2,9)], ylim=c(0,1.6), yaxt='n', xaxt='n', xlab=NA, ylab=NA, xlim=c(5,35))
axis(2, at=c(0,0.8,1.6),las=1)
axis(1, at=c(5,20,35))
abline(b=DIN.DR$regression.results[4,3],a=DIN.DR$regression.results[4,2], lty=2)
text(30,0.2,substitute(paste("R"^{2}, "=0.42")), cex=1)
#text(10,0.6,"p < 0.01", cex=0.75)
mtext(substitute(paste("DIN (",mu, "M)")),side=1, cex=1.2, line=2.5)
mtext("B", side=3, cex=2, adj=0)
#mtext(substitute(paste("production (10"^{6},"cell L"^{-1},"d"^{-1},')')), side=2, cex=1.2, line=3)
plot(data[,c(6,9)], ylim=c(0,1.6), xlim=c(7.8,8.4), yaxt='n', xaxt='n', xlab=NA, ylab=NA)
axis(1, at=c(7.8,8.1,8.4))
axis(2, at=c(0,0.8,1.6),las=1)
mtext("pH",side=1, cex=1.2, line=2.5)
mtext("C", side=3, cex=2, adj=0)
abline(b=ph.DR$regression.results[4,3],a=ph.DR$regression.results[4,2], lty=2)
text(8.3,1.4,substitute(paste("R"^{2}, "=0.38")), cex=1)
#mtext(substitute(paste("division (d"^{-1},')')), side=2, cex=1.2, line=3)
dev.off()
|
mydata <- read.table("household_power_consumption.txt", header=TRUE, sep=";", na.strings="?")
sub <- subset(mydata, Date=="1/2/2007" | Date=="2/2/2007")
sub$DateTime <- paste(sub$Date, sub$Time, sep=" ")
library(lubridate)
sub$DateTime <- dmy_hms(sub$DateTime)
par(mfrow = c(2,2), mar=c(4.5,4.5,2,2))
with(sub, {
plot(DateTime, Global_active_power, type = "n",
ylab = "Global Active Power", xlab = "",
main = "", cex.lab = .9, cex.axis = .8)
lines(DateTime, Global_active_power, lwd=1)
plot(DateTime, Voltage, type = "n",
ylab = "Voltage", xlab = "datetime",
main = "", cex.lab = .9, cex.axis = .8)
lines(DateTime, Voltage, lwd=1)
plot(DateTime, Sub_metering_1, type = "n",
ylab = "Energy sub metering", xlab = "",
main = "", cex.lab = .9, cex.axis = .8)
lines(DateTime, Sub_metering_1, lwd=1)
lines(DateTime, Sub_metering_2, lwd=1, col = "red")
lines(DateTime, Sub_metering_3, lwd=1, col = "blue")
legend("topright", legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"),
lty = 1, cex = .8, col = c("black", "red", "blue"), bty = "n")
plot(DateTime, Global_reactive_power, type = "n",
ylab = "Global_reactive_power", xlab = "datetime",
main = "", cex.lab = .9, cex.axis = .8)
lines(DateTime, Global_reactive_power, lwd=1)
})
dev.copy(png, "plot4.png")
dev.off()
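# Note (assumption): dev.copy(png, ...) above opens the png device with its
# defaults (480 x 480 pixels); pass width/height explicitly if a different
# output size is needed, e.g.
#   dev.copy(png, "plot4.png", width = 480, height = 480)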
| /figure/plot4.R | no_license | vlikhvar/ExData_Plotting1 | R | false | false | 1,365 | r |
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.R
\name{.filter_component}
\alias{.filter_component}
\title{for models with zero-inflation component, return required component of model-summary}
\usage{
.filter_component(dat, component)
}
\description{
for models with zero-inflation component, return required component of model-summary
}
\keyword{internal}
| /man/dot-filter_component.Rd | no_license | cran/parameters | R | false | true | 406 | rd |
|
library(arm)
library(Matching)
data("lalonde")
#isolating control group
lalonde_control <- lalonde[which(lalonde$treat == 0),]
#linear model
predict_re78_control <- lm(re78 ~ age + educ + re74 + re75 + educ*re74 +
educ*re75 + age*re74 + age*re75 + re74*re75, data = lalonde_control)
#simulating the parameters
sim_params_re78 <- sim(predict_re78_control, n.sims=10000)
#age domain
ages <- c(min(lalonde_control$age):max(lalonde_control$age))
length(ages)
#calculating the medians
med_educ <- median(lalonde_control$educ)
med_re74 <- median(lalonde_control$re74)
med_re75 <- median(lalonde_control$re75)
set.seed(1)
#simulating the predicted values
sim_ys_a <- data.frame()
for (j in 1:39) {
xs_a <- c(1,ages[j],med_educ, med_re74, med_re75, med_educ*med_re74,
med_educ*med_re75, ages[j]*med_re74, ages[j]*med_re75, med_re74*med_re75)
for (i in 1:10000) {
betas_a <- sim_params_re78@coef[i,]
sim_ys_a[i,j] <- sum(xs_a*betas_a) + rnorm(1, 0, sim_params_re78@sigma[i])
}
}
pred_intervals_a <- apply(sim_ys_a,2,quantile,probs = c(0.025, 0.975))
pred_intervals_a
#plotting the intervals
plot(x = c(1:100), y = c(1:100), type = "n", xlim = c(17,55), ylim = c(1.25*min(pred_intervals_a[1,]),1.25*max(pred_intervals_a[2,])),
main = "", xlab = "age", ylab = "re78")
for (age in 17:55) {
segments(
x0 = age,
y0 = pred_intervals_a[1, age - 16],
x1 = age,
y1 = pred_intervals_a[2, age - 16],
lwd = 2)
}
#calculating the 90% quantiles
quant90_educ <- quantile(lalonde_control$educ, probs = 0.9)
quant90_re74 <- quantile(lalonde_control$re74, probs = 0.9)
quant90_re75 <- quantile(lalonde_control$re75, probs = 0.9)
set.seed(1)
#simulating the predicted values
sim_ys_b <- data.frame()
for (j in 1:39) {
  #covariate order must match coef(predict_re78_control): main effects, then
  #educ:re74, educ:re75, age:re74, age:re75, re74:re75 (as in part a above)
  xs_b <- c(1,ages[j],quant90_educ, quant90_re74, quant90_re75, quant90_educ*quant90_re74,
            quant90_educ*quant90_re75, ages[j]*quant90_re74, ages[j]*quant90_re75, quant90_re74*quant90_re75)
for (i in 1:10000) {
betas_b <- sim_params_re78@coef[i,]
sim_ys_b[i,j] <- sum(xs_b*betas_b) + rnorm(1, 0, sim_params_re78@sigma[i])
}
}
pred_intervals_b <- apply(sim_ys_b, 2, quantile, probs = c(0.025, 0.975))
pred_intervals_b
plot(x = c(1:100), y = c(1:100), type = "n", xlim = c(17,55), ylim = c(1.25*min(pred_intervals_b[1,]),1.25*max(pred_intervals_b[2,])),
main = "", xlab = "age", ylab = "re78")
for (age in 17:55) {
segments(
x0 = age,
y0 = pred_intervals_b[1, age - 16],
x1 = age,
y1 = pred_intervals_b[2, age - 16],
lwd = 2)
} | /q2mine.R | no_license | jccf12/r-minerva | R | false | false | 2,576 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/LANDSCAPE-Site.R
\name{sample_sugar_Site}
\alias{sample_sugar_Site}
\title{Site: Sample Sugar Feeding Resources}
\usage{
sample_sugar_Site()
}
\description{
Sample a sugar feeding resource at this site, returns a reference to a \code{\link{Sugar_Resource}} object.
\itemize{
\item binding: \code{Site$sample_sugar}
}
}
| /MASH-dev/SeanWu/MBITES/man/sample_sugar_Site.Rd | no_license | aucarter/MASH-Main | R | false | true | 397 | rd |
|
#histogram of raw VAFs
args<-commandArgs(trailingOnly = TRUE)
if(length(args)==0||args[1]=="-h")
{
  stop("Usage: Rscript post_process_sv_only.R <workingdir> <sample id> <batch no>")
}
setwd(args[1])
#setwd("/scratch/pancan/clustout_sanger/a330a96e-9897-4605-b5f1-5b5ef45cd365_sv_only/")
id<-args[2]
#id<-"a330a96e-9897-4605-b5f1-5b5ef45cd365"
batch<-args[3]
suppressMessages(library(ggplot2))
suppressMessages(library(gridExtra))
convert_vafs_sv<-function(dat,p)
{
#(1/p$purity)*(dat$adjusted_vaf)*(dat$prop_chrs_bearing_mutation*dat$most_likely_variant_copynumber)
prob<-(dat$most_likely_variant_copynumber*dat$prop_chrs_bearing_mutation*p$purity)/(2*(1-p$purity)+dat$most_likely_variant_copynumber*p$purity)
(1/prob)*dat$adjusted_vaf
}
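# Worked example of the conversion above (hypothetical numbers): with purity
# p = 0.8, a most likely variant copy number of 2 and half the chromosomes
# bearing the mutation, prob = (2*0.5*0.8)/(2*(1-0.8) + 2*0.8) = 0.8/2 = 0.4,
# so an adjusted VAF of 0.2 converts to a CCF of (1/0.4)*0.2 = 0.5.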
#svs<-read.table(paste(id,"_filtered_adjusted_svs.tsv",sep=""),header=T,sep="\t",stringsAsFactors=F)
svs<-read.table(paste(id,"_filtered_svs.tsv",sep=""),header=T,sep="\t",stringsAsFactors=F)
#svs<-svs[order(svs$bp1_chr,svs$bp1_pos,svs$bp2_chr,svs$bp2_pos),]
mlcn<-read.table(paste(batch,"/",id,"_most_likely_copynumbers.txt",sep=""),header=T,sep="\t",stringsAsFactors=F)
pp<-read.table("purity_ploidy.txt",header=T,sep="\t",stringsAsFactors=F)
dat<-merge(svs,mlcn,by.x=c(2,3,5,6),by.y=c(1,2,3,4))
certain<-read.table(paste(batch,"/",id,"_cluster_certainty.txt",sep=""),sep="\t",header=T)
dat<-merge(dat,certain,by.x=c(1,2,3,4),by.y=c(1,2,3,4))
dat<-cbind(dat,CCF=convert_vafs_sv(dat,pp))
dat<-dat[order(dat$CCF),]
dat$CCF[dat$CCF>2]<-2
write.table(dat[,c(1,2,3,4,50)],paste0(id,"_CCF.txt"),row.names = F,sep="\t",quote = F)
pdat<-data.frame(Class=dat$classification,CCF=dat$CCF,VAF=dat$adjusted_vaf,cluster=as.character(round(dat$average_proportion*(1/pp$purity),2)))
sv_clust<-read.table(paste(batch,"/",id,"_subclonal_structure.txt",sep=""),header=T,sep="\t",stringsAsFactors=F)
sv_clust<-sv_clust[sv_clust$n_ssms>1,]#&sv_clust$n_ssms/sum(sv_clust$n_ssms)>0.01,]
plot1<-ggplot(pdat,aes(x=as.numeric(VAF),fill=Class,colour=Class))+
geom_bar(alpha=0.3,binwidth=0.05)+xlab("Raw VAF")+xlim(0,2)
plot2<-ggplot(pdat,aes(x=as.numeric(CCF),fill=Class,colour=Class))+xlim(0,2)+
geom_bar(alpha=0.3,binwidth=0.05)+xlab("CCF")+
#geom_vline(xintercept=as.numeric(vals),colour="red")
geom_vline(xintercept=1/pp$purity*as.numeric(sv_clust$proportion[sv_clust$n_ssms/(sum(sv_clust$n_ssms))>0.1&sv_clust$n_ssms>2]),colour="blue",size=1)+
geom_vline(xintercept=1/pp$purity*as.numeric(sv_clust$proportion[sv_clust$n_ssms/(sum(sv_clust$n_ssms))<0.1|sv_clust$n_ssms<=2]),colour="red",lty=2)
plot3<-ggplot(pdat,aes(x=CCF,y=VAF,fill=cluster,colour=cluster))+geom_point(size=3)+xlim(0,2)+ylim(0,1)
bb<-read.table(paste("../",id,"_subclones.txt",sep=""),header=T,sep="\t",stringsAsFactors=F)
clon<-bb[is.na(bb$nMaj2_A),]
subclon<-bb[!is.na(bb$nMaj2_A),]
#plot3<-ggplot(bb,aes(x=as.numeric(frac1_A)))+geom_density(alpha=0.3)+xlab("Battenberg CN")
pdf(paste(id,batch,"_VAF_CCF.pdf",sep=""),height=11)
grid.arrange(plot1,plot2,plot3)
dev.off()
suppressMessages(library(circlize))
pdat<-clon[,c("chr","startpos","endpos")]
pdat<-cbind(pdat,value=clon$nMaj1_A+clon$nMin1_A)
pdat$chr<-paste("chr",pdat$chr,sep="")
colnames(pdat)<-c("chr","start","end","value")
pdat$value[pdat$value>6]<-6
pdat2<-subclon[,c("chr","startpos","endpos")]
pdat2<-cbind(pdat2,value=apply(cbind(subclon$nMaj2_A+subclon$nMin2_A,subclon$nMaj1_A+subclon$nMin1_A),1,mean))
pdat2$chr<-paste("chr",pdat2$chr,sep="")
# pdat3<-subclon[,c(2,3,4)]
# pdat3<-cbind(pdat3,value=subclon$nMaj1_A+subclon$nMin1_A)
# pdat3$chr<-paste("chr",pdat3$chr,sep="")
#pdat2<-rbind(pdat2,pdat3)
pdat2$value[pdat2$value>6]<-6
colnames(pdat2)<-c("chr","start","end","value")
pdat<-list(pdat,pdat2)
colours<-c("#0000FF80","#FF000080","darkgreen","#0000FF40","#FF000040","#00FF0040")
res_svs<-read.table(paste(batch,"/",id,"_cluster_certainty.txt",sep=""),sep="\t",header=T,stringsAsFactors=F)
pdf(paste(id,batch,"_circos.pdf",sep=""),height=12,width=12)
par(mar = c(1, 1, 1, 1))
circos.initializeWithIdeogram(plotType = c("axis","labels"))
circos.genomicTrackPlotRegion(pdat,ylim=c(0,6),
panel.fun=function(region,value,...){
i=getI(...)
circos.genomicLines(region,value,type="segment",lwd=3,col=colours[i],...)
})
for(j in 1:nrow(dat))
{
x<-dat[j,]
ccf<-x$average_proportion*(1/pp$purity)
if(ccf>1.2)
{
lcol=colours[1]
}
else if(ccf<1.2&ccf>0.9)
{
lcol=colours[1]
}
else
{
lcol=colours[2]
}
circos.link(paste("chr",as.character(x[1]),sep=""),as.numeric(x[2]),paste("chr",as.character(x[3]),sep=""),as.numeric(x[4]),col=lcol,lwd=2)
}
dev.off()
tabout<-c()
for(i in 1:nrow(sv_clust))
{
curr<-as.numeric(sv_clust[i,c(1,3)])
curr<-c(curr,curr[2]*1/pp$purity)
curr<-c(curr,sum(res_svs$most_likely_assignment==curr[1]))
tabout<-rbind(tabout,curr)
}
tabout<-tabout[order(as.numeric(tabout[,3]),decreasing=TRUE),]
tabout<-rbind(c("cluster","proportion","CCF","SVs"),tabout)
pdf(paste(id,batch,"_table.pdf",sep=""),height=2,width=3.5)
grid.table(tabout,rows=c())
dev.off()
| /svclone_step1/svclone/post_process_sv_only.R | no_license | smc-het-challenge/6613462 | R | false | false | 5,168 | r |
|
#' Correlation Table
#'
#' Correlations printed in a nicely formatted table.
#'
#' @param .data the data frame containing the variables
#' @param ... the unquoted variable names to be included in the correlations
#' @param cor_type the correlation type; default is "pearson", other option is "spearman"
#' @param na.rm logical (default is \code{FALSE}); if set to \code{TRUE}, the correlations use the "complete.obs" methods option from \code{stats::cor()}
#' @param rounding the value passed to \code{round} for the output of both the correlation and p-value; default is 3
#' @param output how the table is output; can be "text" for regular console output, "latex2" for specialized latex output, or any of \code{kable()}'s options from \code{knitr} (e.g., "latex", "markdown", "pandoc").
#' @param booktabs when \code{output != "text"}; option is passed to \code{knitr::kable}
#' @param caption when \code{output != "text"}; option is passed to \code{knitr::kable}
#' @param align when \code{output != "text"}; option is passed to \code{knitr::kable}
#' @param float when \code{output == "latex2"} it controls the floating parameter (h, t, b, H)
#'
#' @seealso stats::cor
#'
#' @importFrom stats cor pt complete.cases
#' @importFrom knitr kable
#'
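#' @examples
#' ## A minimal usage sketch; assumes `df` is a data.frame whose columns
#' ## `x`, `y`, and `z` are numeric (hypothetical names).
#' \dontrun{
#' tableC(df, x, y, z, cor_type = "spearman", na.rm = TRUE)
#' }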
#' @export
tableC <- function(.data,
...,
cor_type = "pearson",
na.rm = FALSE,
rounding = 3,
output = "text",
booktabs = TRUE,
caption = NULL,
align = NULL,
float = "htb"){
## Preprocessing ##
.call <- match.call()
data <- selecting(d_=.data, ...)
d <- as.data.frame(data)
## NA ##
if (na.rm){
use1 <- "complete.obs"
n <- sum(complete.cases(d))
} else {
use1 <- "everything"
n <- length(d[[1]])
}
## Correlations ##
cors <- stats::cor(d,
method = cor_type,
use = use1)
## Significance ##
if (cor_type == "pearson"){
tvalues <- cors/sqrt((1 - cors^2)/(n-2))
} else if (cor_type == "spearman"){
tvalues <- cors * sqrt((n-2)/(1 - cors^2))
} else {
stop(paste(cor_type, "is not a possible correlation type with this function."))
}
pvalues <- 2*pt(abs(tvalues), n-2, lower.tail = FALSE)
## Formatting Names and Rownames
cors <- as.data.frame(cors)
pvalues <- as.data.frame(pvalues)
dims <- dim(cors)
if (output == "latex2"){
row.names(cors) <- paste0("{[", 1:dims[1], "]}", row.names(cors))
} else {
row.names(cors) <- paste0("[", 1:dims[1], "]", row.names(cors))
}
names(cors) <- paste0("[", 1:dims[1], "]")
## Combine
final <- matrix(nrow = dims[1], ncol = dims[2])
for (i in 1:dims[1]){
for (j in 1:dims[2]){
final[i,j] <- paste0(ifelse(cors[i,j] == 1, "1.00", round(cors[i,j], rounding)),
" ",
ifelse(cors[i,j] == 1, "",
ifelse(pvalues[i,j] < .001, "(<.001)",
paste0("(",
round(pvalues[i,j], rounding), ")"))))
}
}
final[upper.tri(final)] <- " "
final <- as.data.frame(final)
row.names(final) <- row.names(cors)
final <- data.frame(" " = row.names(final), final)
names(final) <- c(" ", names(cors))
## Output ##
message("N = ", n, "\n",
"Note: ", cor_type, " correlation (p-value).")
if (output != "text"){
if (output == "latex2"){
if (is.null(align)){
l1 <- dim(final)[2]
align <- c("l", rep("c", (l1-1)))
}
tab <- to_latex(final, caption, align, len = dim(final)[2] - 1, splitby = NA, float, booktabs, cor_type)
return(tab)
} else {
kab <- knitr::kable(final,
format=output,
booktabs = booktabs,
caption = caption,
align = align,
row.names = FALSE)
return(kab)
}
} else {
final <- list("Table1" = final)
class(final) <- c("table1", "list")
attr(final, "splitby") <- NULL
attr(final, "output") <- NULL
return(final)
}
}
| /R/table_cor.R | no_license | jarathomas/furniture | R | false | false | 4,224 | r | #' Correlation Table
#'
#' Correlations printed in a nicely formatted table.
#'
#' @param .data the data frame containing the variables
#' @param ... the unquoted variable names to be included in the correlations
#' @param cor_type the correlation type; default is "pearson", other option is "spearman"
#' @param na.rm logical (default is \code{FALSE}); if set to \code{TRUE}, the correlations use the "complete.obs" methods option from \code{stats::cor()}
#' @param rounding the value passed to \code{round} for the output of both the correlation and p-value; default is 3
#' @param output how the table is output; can be "text" for regular console output, "latex2" for specialized latex output, or any of \code{kable()}'s options from \code{knitr} (e.g., "latex", "markdown", "pandoc").
#' @param booktabs when \code{output != "text"}; option is passed to \code{knitr::kable}
#' @param caption when \code{output != "text"}; option is passed to \code{knitr::kable}
#' @param align when \code{output != "text"}; option is passed to \code{knitr::kable}
#' @param float when \code{output == "latex2"} it controls the floating parameter (h, t, b, H)
#'
#' @seealso stats::cor
#'
#' @importFrom stats cor pt complete.cases
#' @importFrom knitr kable
#'
#' @export
tableC <- function(.data,
...,
cor_type = "pearson",
na.rm = FALSE,
rounding = 3,
output = "text",
booktabs = TRUE,
caption = NULL,
align = NULL,
float = "htb"){
## Preprocessing ##
.call <- match.call()
data <- selecting(d_=.data, ...)
d <- as.data.frame(data)
## NA ##
if (na.rm){
use1 <- "complete.obs"
n <- sum(complete.cases(d))
} else {
use1 <- "everything"
n <- length(d[[1]])
}
## Correlations ##
cors <- stats::cor(d,
method = cor_type,
use = use1)
## Significance ##
if (cor_type == "pearson"){
tvalues <- cors/sqrt((1 - cors^2)/(n-2))
} else if (cor_type == "spearman"){
tvalues <- cors * sqrt((n-2)/(1 - cors^2))
} else {
stop(paste(cor_type, "is not a possible correlation type with this function."))
}
pvalues <- 2*pt(abs(tvalues), n-2, lower.tail = FALSE)
## Formatting Names and Rownames
cors <- as.data.frame(cors)
pvalues <- as.data.frame(pvalues)
dims <- dim(cors)
if (output == "latex2"){
row.names(cors) <- paste0("{[", 1:dims[1], "]}", row.names(cors))
} else {
row.names(cors) <- paste0("[", 1:dims[1], "]", row.names(cors))
}
names(cors) <- paste0("[", 1:dims[1], "]")
## Combine
final <- matrix(nrow = dims[1], ncol = dims[2])
for (i in 1:dims[1]){
for (j in 1:dims[2]){
final[i,j] <- paste0(ifelse(cors[i,j] == 1, "1.00", round(cors[i,j], rounding)),
" ",
ifelse(cors[i,j] == 1, "",
ifelse(pvalues[i,j] < .001, "(<.001)",
paste0("(",
round(pvalues[i,j], rounding), ")"))))
}
}
final[upper.tri(final)] <- " "
final <- as.data.frame(final)
row.names(final) <- row.names(cors)
final <- data.frame(" " = row.names(final), final)
names(final) <- c(" ", names(cors))
## Output ##
message("N = ", n, "\n",
"Note: ", cor_type, " correlation (p-value).")
if (output != "text"){
if (output == "latex2"){
if (is.null(align)){
l1 <- dim(final)[2]
align <- c("l", rep("c", (l1-1)))
}
tab <- to_latex(final, caption, align, len = dim(final)[2] - 1, splitby = NA, float, booktabs, cor_type)
return(tab)
} else {
kab <- knitr::kable(final,
format=output,
booktabs = booktabs,
caption = caption,
align = align,
row.names = FALSE)
return(kab)
}
} else {
final <- list("Table1" = final)
class(final) <- c("table1", "list")
attr(final, "splitby") <- NULL
attr(final, "output") <- NULL
return(final)
}
}
|
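A quick standalone check of the significance formula used in `tableC()` above, on made-up data: the same algebra as `tvalues`/`pvalues`, compared against base R's `cor.test()` (a sketch, not part of the package).

```r
set.seed(42)
n <- 50
x <- rnorm(n)
y <- 0.5 * x + rnorm(n)
r <- cor(x, y)
# same algebra as tvalues/pvalues above: t = r / sqrt((1 - r^2) / (n - 2))
tval <- r / sqrt((1 - r^2) / (n - 2))
pval <- 2 * pt(abs(tval), df = n - 2, lower.tail = FALSE)
ct <- cor.test(x, y)  # base R reference implementation
stopifnot(abs(tval - unname(ct$statistic)) < 1e-8,
          abs(pval - ct$p.value) < 1e-8)
```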
| /tests/testthat.R | no_license | jepusto/clubSandwich | R | false | false | 68 | r |
library(testthat)
library(clubSandwich)
test_check("clubSandwich")
|
| /IDZ_2/IDZ_2.R | no_license | NikitaIT/org.stepik.math.statistics | R | false | false | 3,883 | r |
# @project IDZ_2
# @author Nikita Fiodorov
# @date 12.02.2017
# @link https://www.dropbox.com/sh/lnrmixhxjlvq72g/AABfBdX0y8r2f_BOAn3ffcI-a?dl=0
# @data annual-diameter-of-skirt-at-hem-.csv
install.packages("fitdistrplus");
# For samples of size 10, 100, 1000, 10000 from the standard normal law
# f(x)=ln((1/pi)*( scale/ ( (x-location)^2+scale^2) );scale=5;location=3
# compute the following variance estimates: the sample variance,
# the unbiased sample variance, and the efficient sample variance.
vars<- function(x){sum((x-mean(x))^2)/(length(x))}
varef<-function(x){(length(x)+1)*var(x)/(length(x)) }
allProp <- function(x){ c(vars = vars(x),var = var(x),varef=varef(x))}
paramN = list(mean = 0, sd = 1)
sizeN = list(x10 = 10,x100=100,x1000=1000,x10000=10000)
allbind <- function(size,params,FUN=rnorm){
r = sapply(size,function(x){do.call(FUN, c(params,x))});
return(t(as.matrix(sapply(r,allProp))))
}
varabs<- function(...){abs(allbind(sizeN,paramN)-1)}
all<- function(){(varabs()+varabs()+varabs()+varabs()+varabs()+
varabs()+varabs()+varabs()+varabs()+varabs())/10}
all()
# absolute value of the deviation of the estimate from the true value (=1).
all <- abs(allbind(sizeN,paramN)-1)
# for large n, the sample variance is also
# an asymptotically efficient estimator of the variance
# for large n, the corrected (unbiased) sample variance is also
# a consistent estimator of the variance
# data samples
dir()
setwd("../IDZ_2/")
# uniform distribution
unif<<-as.data.frame(read.csv("unif_2.csv"));
# Cauchy distribution
cauchy<<-as.data.frame(read.csv("cauchy_1.csv"));
# unknown distribution
type<<-as.data.frame(read.csv("type1_1.csv"));
# Estimate the parameters using the maximum likelihood method.
#maxLik sample START
install.packages("maxLik");
library(maxLik);
LL<-function(t){sum(dcauchy(cauchy$x,t[1],t[2],log=TRUE))}
LL(c(1,1))
ml<-maxNR(LL,start=c(0,1))
ml$estimate
#maxLik sample END
library(fitdistrplus);
checkHistSample = function(x,FUN,params,name){
hist(x,breaks=30,col="blue",freq=FALSE,main = name)
expected_sample = do.call(FUN, c(list(x =sort(x)),params))
lines(sort(x),expected_sample,col="red",lwd=2)
}
#fitdist(data, distr, method="mle", start,...)
unifVunif = mledist(data=unif$x, distr="unif", optim.method="default",
lower=-Inf, upper=Inf,start = formals(unif$x))
cauchyVcauchy = mledist(data=cauchy$x, distr="cauchy", optim.method="default",
lower=-Inf, upper=Inf,start = formals(cauchy$x))
var_OMP = (-mean(cauchyVcauchy$hessian))^(-1)
par(mfrow=c(2,2))
hist(type$x,breaks = 2*length(type$x)^(1/3), freq = F, col = "lightblue");
hist(rnorm(n = 10^6, mean = 0, sd = 1),freq = F, col = "lightblue");
hist(rcauchy(n = 10^2,0,0.1),breaks = 50,freq = F, col = "lightblue");
hist(rt(n = 10^2,df=1),breaks = 50,freq = F, col = "lightblue");
# Cauchy or normal, since the tails are heavy
typeVcauchy = mledist(data=type$x, distr="cauchy", optim.method="default",
lower=-Inf, upper=Inf,start = formals(type$x))
checkHistSample(type$x,dcauchy,as.list(typeVcauchy$estimate),"typeVcauchy")
typeVnorm = mledist(data=type$x, distr="norm", optim.method="default",
lower=-Inf, upper=Inf,start = formals(type$x))
checkHistSample(type$x,dnorm,as.list(typeVnorm$estimate),"typeVnorm")
|
| /war-room support/text-tagging.R | no_license | mindis/CDT | R | false | false | 4,910 | r |
library(data.table)
library(plyr)
library(dplyr)
library(ape)
library(dendextend)
library(RColorBrewer)
library(bit64)
library(wordcloud)
library(SnowballC)
library(tm)
assort_desc <- data.frame(fread("/Users/omahala/Desktop/GM Insights/EDA data/Data from Ryan/Assortment DescriptionGolf_Project1765.csv"))
sub_data <- data.frame(fread("/Users/omahala/Desktop/GM Insights/EDA data/Data from Ryan/cleanSubData.csv"))
item_metric <- data.frame(fread("/Users/omahala/Desktop/GM Insights/EDA data/Data from Ryan/item_metrics_file_GOLF_051016.csv"))
attribute <- data.frame(fread("/Users/omahala/Desktop/GM Insights/graph partition/item cluster/GOLF_723/attribute_723.csv"))
selected_upc <- item_metric[which(item_metric$stores.top.90 == 1),]$a.upc
selected_upc <- inner_join(data.frame("Rollup_id"=selected_upc), assort_desc, by = c("Rollup_id"="Rollup_id"))
selected_upc$Category <- paste("Subcat", selected_upc$SubCategory, sep = "--")
docs <- Corpus(VectorSource(selected_upc$Rollup_Description))
toSpace <- content_transformer(function (x , pattern ) gsub(pattern, " ", x))
docs <- tm_map(docs, toSpace, "/")
docs <- tm_map(docs, toSpace, "@")
docs <- tm_map(docs, toSpace, "-")
docs <- tm_map(docs, content_transformer(toupper))
docs <- tm_map(docs, removeNumbers)
docs <- tm_map(docs, removePunctuation)
docs <- tm_map(docs, stripWhitespace)
dtm <- TermDocumentMatrix(docs)
m <- as.matrix(dtm)
v <- sort(rowSums(m),decreasing=TRUE)
d <- data.frame(word = names(v),freq=v)
wordcloud(d$word, d$freq, min.freq = 5, random.order = FALSE, colors = brewer.pal(8, "Dark2"))
slct_wds <- c("golf","glove","cap","umbrella","bag","towel","hat", "ball")
slct_wds2 <- paste0(slct_wds, "s")
all_wds <- toupper(c(slct_wds, slct_wds2))
rollup_wds <- strsplit(toupper(selected_upc$Rollup_Description), split = " ")
rollup_tags <- lapply(rollup_wds, function(x) x[x %in% all_wds])
selected_upc$tag <- NULL
for(i in 1:nrow(selected_upc)){
selected_upc$tag[i] <- paste(rollup_tags[[i]], collapse = " ")
selected_upc$tag[i] <- gsub("GOLF ","",selected_upc$tag[i])
}
selected_upc$tag[which(selected_upc$tag == "")] <- "MISCELLANEOUS"
tag_dist <- selected_upc %>% group_by(tag) %>% summarise(n())
# balls <- selected_upc[which(selected_upc$tag=="BALLS"),]
# text_dist_balls <- adist(toupper(balls$Rollup_Description))
# rownames(text_dist_balls) <- balls$Leaf_labels
# hclust_balls <- hclust(as.dist(text_dist_balls))
# cluster_balls <- cutree(hclust_balls, 3)
# plot(compute.brlen(as.phylo(hclust_balls), power = 0.9), tip.color = cols[cluster_balls], label.offset = 0.005, cex = 0.5)
# balls_hclust <- data.frame("label"=hclust_balls$labels, "order"=hclust_balls$order, "cluster"=cluster_balls)
# balls <- inner_join(balls, balls_hclust, by = c("Leaf_labels"="label"))
# balls$cluster <- balls$cluster + 20
#
# misc <- selected_upc[which(selected_upc$tag=="MISCELLANEOUS"),]
# text_dist_misc <- adist(toupper(misc$Rollup_Description))
# rownames(text_dist_misc) <- misc$Leaf_labels
# hclust_misc <- hclust(as.dist(text_dist_misc))
# cluster_misc <- cutree(hclust_misc, 6)
# plot(compute.brlen(as.phylo(hclust_misc), power = 0.9), tip.color = cols[cluster_misc], label.offset = 0.005, cex = 0.5)
# misc_hclust <- data.frame("label"=hclust_misc$labels, "order"=hclust_misc$order, "cluster"=cluster_misc)
# misc <- inner_join(misc, misc_hclust, by = c("Leaf_labels"="label"))
# misc$cluster <- misc$cluster + 10
# text_dist <- adist(selected_upc$tag[!(selected_upc$tag %in% c("MISCELLANEOUS","BALLS"))])
# rownames(text_dist) <- selected_upc$Leaf_labels[!(selected_upc$tag %in% c("MISCELLANEOUS","BALLS"))]
# hclust <- hclust(as.dist(text_dist))
# clusters <- cutree(hclust, 8)
# item_cluster <- data.frame("label"=hclust$labels, "order"=hclust$order, "cluster"=clusters)
# selected_upc <- inner_join(selected_upc, item_cluster, by = c("Leaf_labels" = "label"))
# #selected_upc <- left_join(selected_upc, others[,c("Leaf_labels","cluster")], by=c("Leaf_labels"="Leaf_labels"))
#
# misc$order <- misc$order + max(selected_upc$order)
# balls$order <- balls$order + max(misc$order)
#
# dend <- merge(as.dendrogram(hclust), as.dendrogram(hclust_misc), as.dendrogram(hclust_balls))
text_dist <- adist(selected_upc$tag)
rownames(text_dist) <- selected_upc$Leaf_labels
hclust <- hclust(as.dist(text_dist))
clusters <- cutree(hclust, 11)
cols <- c(brewer.pal(10,"Paired"),brewer.pal(9,"Set1")[-(1:6)],brewer.pal(8,"Dark2")[-c(2,3,5)],brewer.pal(8,"Accent")[5:7],brewer.pal(8,"Set2"),brewer.pal(10,"Spectral"))
#dend <- branches_attr_by_clusters(dend, clusters[order.dendrogram(dend)], values=cols) %>% color_labels(col=cols[clusters])
par(mar = c(0,0,0,1))
pdf("Text_CDT_selected.pdf", height = 45, width = 35)
plot(compute.brlen(as.phylo(hclust), power = 0.9), tip.color = cols[clusters], label.offset = 0.005, cex = 0.5)
title(main = "Text based CDT for project_id generic", font.main = 2, cex.main = 3)
dev.off()
|
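The edit-distance clustering used in the script above, in miniature: `adist()` builds a Levenshtein distance matrix over tag strings, `hclust()` grows the tree, and `cutree()` assigns cluster labels (the tag strings here are hypothetical).

```r
tags <- c("BALLS", "BALL", "GLOVE", "GLOVES", "TOWEL")  # hypothetical tags
d <- adist(tags)                 # Levenshtein distance matrix
rownames(d) <- tags
hc <- hclust(as.dist(d))         # complete linkage by default
cl <- cutree(hc, k = 3)
# near-identical strings end up in the same cluster
stopifnot(cl[["BALLS"]] == cl[["BALL"]],
          cl[["GLOVE"]] == cl[["GLOVES"]])
```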
| /codeml_files/newick_trees_processed/8853_0/rinput.R | no_license | DaniBoo/cyanobacteria_project | R | false | false | 135 | r |
library(ape)
testtree <- read.tree("8853_0.txt")
unrooted_tr <- unroot(testtree)
write.tree(unrooted_tr, file="8853_0_unrooted.txt")
|
| /Wyatt-Grades/wyatt-grades.R | no_license | Larry-Hignight/R | R | false | false | 2,516 | r |
library(googleVis)
library(lubridate)
library(stringr)
library(XML)
# Grades & Attendance
tmp <- readHTMLTable('~/R/Wyatt-Grades/data/grades.html')[[1]]
grades <- tmp[c(4:7, 9:10) , c(12, 17, 20, 21)]
colnames(grades) <- c('Course', 'Grade', 'Absences', 'Tardies')
grades$Course <- c('Science', 'Math', 'Band', 'Enrichment', 'Language Arts', 'Social Studies')
grades$Percent <- as.integer(str_sub(grades$Grade, start = 2))
grades$Grade <- as.character(str_sub(grades$Grade, end = 1))
grades <- grades[ , c(1, 3, 4, 2, 5)]
# Classes
parse_class <- function(filename, dir = '~/R/Wyatt-Grades/data/') {
x <- readHTMLTable(str_c(dir, filename))[[2]]
x <- x[2:nrow(x), c(1:3, 9:11)]
names(x) <- c('Date', 'Category', 'Assignment', 'Score', 'Percent', 'Grade')
x$Date <- mdy(x$Date)
x$Score <- as.character(x$Score)
x$Percent <- as.integer(as.character(x$Percent))
x$Grade <- factor(x = x$Grade, levels = c('A', 'B', 'C', 'D', 'F'))
x
}
classes <- lapply(c('science.html', 'math.html', 'band.html', 'ela.html', 'social-studies.html'), parse_class)
names(classes) <- c('Science', 'Math', 'Band', 'ELA', 'Social-Studies')
parse_score <- function(x) {
tmp <- str_split(x, pattern = "/")
scored <- sapply(tmp, function(x) x[1])
scored <- str_replace(scored, "--", "0")
scored <- sum(as.numeric(scored))
possible <- sum(as.numeric(sapply(tmp, function(x) x[2])))
c(scored, possible)
}
# Scores for the past n assignments by class
last_assignments <- function(x, n = 5) tail(x[date(now()) > x$Date, ], n)
last3 <- lapply(classes, last_assignments, 3)
scores <- lapply(lapply(classes, last_assignments, 3), function(x) parse_score(x$Score))
percents <- sapply(scores, function(x) 100 * x[1] / x[2])
# Scores for the past n assignments by class
last_assignments_by_date <- function(x, n.days = 10) {
x <- x[date(now()) > x$Date, ]
days <- difftime(date(now()), x$Date, 8)
x[days <= n.days, ]
}
lapply(classes, last_assignments_by_date)
lapply(lapply(classes, last_assignments_by_date), function(x) parse_score(x$Score))
# Upcoming assignments
upcoming_assignments <- function(x) x[date(now()) < x$Date, ]
lapply(classes, upcoming_assignments)
lapply(lapply(classes, upcoming_assignments), function(x) parse_score(x$Score))
tmp <- data.frame(Course = grades$Course, Percent = grades$Percent)
gauges <- lapply(1:nrow(tmp), function(n) gvisGauge(tmp[n, ], options=list(min=0, max=100, redFrom=0, redTo=69, yellowFrom=70, yellowTo=84, greenFrom=85, greenTo=100)))
plot(gauges[[1]])
|
| /man/get_words.Rd | permissive | news-r/word2vec.r | R | false | true | 744 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/wordclusters.R
\name{get_words}
\alias{get_words}
\alias{get_words.wordclusters}
\title{Get words}
\usage{
get_words(model, cluster = 0L)
\method{get_words}{wordclusters}(model, cluster = 0L)
}
\arguments{
\item{model}{A model as returned by \code{\link{word_clusters}}.}
\item{cluster}{Cluster number as integral.}
}
\description{
Return all the words given a cluster.
}
\examples{
\dontrun{
# setup word2vec Julia dependency
setup_word2vec()
# sample corpus
data("macbeth", package = "word2vec.r")
# train model
model_path <- word2clusters(macbeth, classes = 25L)
# get word vectors
model <- word_clusters(model_path)
# get cluster
get_words(model, 2L)
}
}
|
| /plot3.R | no_license | vlad2002v/ExData_Plotting1 | R | false | false | 4,151 | r |
## File: plot3.R
## PA 1 - Course Project - DS4
## Exploratory Data Analysis
## Course ID: exdata-011
## https://class.coursera.org/exdata-011/human_grading
## Go to assignment -> https://class.coursera.org/exdata-011/human_grading/view/courses/973505/assessments/3/submissions
setwd("C:/Users/wbflaptop/Google Drive/edu-COURSERA - Data Science 4 Exploratory Data Analysis/Coursera-WD - DS4/PA 1 - Course Project - DS4")
## Download data file
fileUrl <- "https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2Fhousehold_power_consumption.zip"
download.file(fileUrl, destfile = "./data/exdata-data-household_power_consumption.zip", method = "auto")
dateDownloaded <- date()
## Extract the data from the .zip file
?unzip ## get help on zipfiles
## http://stat.ethz.ch/R-manual/R-devel/library/utils/html/unzip.html
## http://www.r-bloggers.com/read-compressed-zip-files-in-r/
unzip("./data/exdata-data-household_power_consumption.zip",
exdir = "./data/Dataset")
## extract contents from single zip file into Data folder in WD
## Check contents of the unziped folder
list.files("./data")
list.files("./data/Dataset")
## Loading the data
## 1
##
## The dataset has 2,075,259 rows and 9 columns.
## First calculate a rough estimate of how much memory the dataset will
## require in memory before reading into R.
## Make sure your computer has enough memory (most modern computers should be fine).
## # rows * # columns * 8 bytes / 2^20
## 2,075,259 rows * 9 columns * 8 bytes / 2^20
memoryNeed <- 2075259 * 9 * 8 / 2^20
print(memoryNeed)
## This gives us the number of megabytes of the data frame
## (roughly speaking, it could be less).
## If this number is more than half the amount of memory on your computer,
## then you might run into trouble.
# > print(memoryNeed)
# [1] 142.4967
# >
## 2
##
## We will only be using data from the dates 2007-02-01 and 2007-02-02.
## One alternative is to read the data from just those dates rather than
## reading in the entire dataset and subsetting to those dates.
?read.csv
mydata <- read.csv("./data/Dataset/household_power_consumption.txt",
header = T, sep = ";",
na.strings = "?",
nrows = 2075259,
check.names = F,
stringsAsFactors = F,
comment.char = "",
quote = '\"')
## Date: Date in format dd/mm/yyyy
## Time: time in format hh:mm:ss
str(mydata)
## > str(mydata)
## 'data.frame': 2075259 obs. of 9 variables:
## $ Date : chr "16/12/2006" "16/12/2006" "16/12/2006" "16/12/2006" ...
## $ Time : chr "17:24:00" "17:25:00" "17:26:00" "17:27:00" ...
## $ Global_active_power : num 4.22 5.36 5.37 5.39 3.67 ...
## $ Global_reactive_power: num 0.418 0.436 0.498 0.502 0.528 0.522 0.52 0.52 0.51 0.51 ...
## $ Voltage : num 235 234 233 234 236 ...
## $ Global_intensity : num 18.4 23 23 23 15.8 15 15.8 15.8 15.8 15.8 ...
## $ Sub_metering_1 : num 0 0 0 0 0 0 0 0 0 0 ...
## $ Sub_metering_2 : num 1 1 2 1 1 2 1 1 1 2 ...
## $ Sub_metering_3 : num 17 16 17 17 17 17 17 17 17 16 ...
## >
mydata$Date <- as.Date(mydata$Date, format = "%d/%m/%Y")
## Subset the data to the two dates of interest
dataSelection <- subset(mydata, subset = (Date >= "2007-02-01" & Date <= "2007-02-02"))
rm(mydata)  ## free the memory held by the full dataset
head(dataSelection)
## Converting dates: combine Date and Time into a single POSIXct column
dateTime <- paste(as.Date(dataSelection$Date), dataSelection$Time)
head(dateTime)
dataSelection$DateTime <- as.POSIXct(dateTime)
head(dataSelection)
## Making Plots
## using the base plotting system
## how household energy usage varies over a 2-day period in February, 2007
## Plot 3
png(file = "./plot3.png")
with(dataSelection, {
plot(Sub_metering_1 ~ DateTime, type = "l",
     ylab = "Energy sub metering", xlab = "")
lines(Sub_metering_2 ~ DateTime, col = 'Red')
lines(Sub_metering_3 ~ DateTime, col = 'Blue')
})
legend("topright", col = c("black", "red", "blue"), lty = 1, lwd = 2,
legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"))
dev.off()
|
#!/usr/bin/Rscript
# Choose the analysis date/time
DT_CASE = "01-01-2020 00:00"
Kanal = "13"
print(paste0("Analisis Untuk Tanggal : ", DT_CASE))
DMYHH = strsplit(DT_CASE, split = " ")[[1]]
DMY = strsplit(DMYHH[1], split = "-")[[1]]
Y = DMY[3] ; M = DMY[2] ; D = DMY[1]
HH = strsplit(DMYHH[2], split = ":")[[1]]
H = HH[1] ; Min = HH[2]
if(DMYHH[2] == "00:00"){
DC = as.POSIXct(paste0(paste(rev(DMY), collapse = "-"), " ", H,":", Min))+1
}else{
DC = as.POSIXct(paste0(paste(rev(DMY), collapse = "-"), " ", H,":", Min))
}
DC0 = DC - (24*60*60)
DC1 = DC + (24*60*60)
DCALL = as.character(seq(DC0, DC1, by = "hour"))
DCALL_STR = substr(DCALL, 1, nchar(DCALL)-3)
DCALL_STR = gsub(DCALL_STR, pattern = ":", replacement = "")
DY = substr(DCALL_STR, 1,4) ; DM = substr(DCALL_STR, 6,7) ; DD = substr(DCALL_STR, 9,10)
DH = substr(DCALL_STR, 12,13) ; DMin = substr(DCALL_STR, 14,15)
sat_url="ftp://96733:96733$%^@202.90.199.64/himawari2/HIMAWARI-8/DATA/NC/Indonesia/" # CHANGE as needed
U2 = paste0(sat_url, DY, "/", DM, "/", DD ,"/", "H08_B",Kanal,"_Indonesia_", DY, DM, DD, DH, DMin, ".nc" )
# "H08_B16_Indonesia_201912312240
# setwd("../hindcast_warning/")
dir.create("data/nc", recursive = TRUE, showWarnings = FALSE)
setwd("data/nc")
for(i in 1:length(U2)){
system(paste0("curl -O ", U2[i]))
}
setwd("../..") | /hindcast_warning.R | permissive | yosiknorman/MEWS_2020_BOT | R | false | false | 1,251 | r |
testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.31584307392677e+77, 9.53818252170339e+295, 1.23078811374256e+146, 4.12396251261199e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(5L, 7L)))
result <- do.call(CNull:::communities_individual_based_sampling_alpha,testlist)
str(result) | /CNull/inst/testfiles/communities_individual_based_sampling_alpha/AFL_communities_individual_based_sampling_alpha/communities_individual_based_sampling_alpha_valgrind_files/1615775152-test.R | no_license | akhikolla/updatedatatype-list2 | R | false | false | 362 | r |
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/scale-linetype.r
\name{scale_linetype}
\alias{scale_linetype}
\alias{scale_linetype_continuous}
\alias{scale_linetype_discrete}
\title{Scale for line patterns.}
\usage{
scale_linetype(..., na.value = "blank")
scale_linetype_continuous(...)
scale_linetype_discrete(..., na.value = "blank")
}
\arguments{
\item{...}{common discrete scale parameters: \code{name}, \code{breaks},
\code{labels}, \code{na.value}, \code{limits} and \code{guide}. See
\code{\link{discrete_scale}} for more details}
\item{na.value}{The linetype to use for \code{NA} values.}
}
\description{
Default line types based on a set supplied by Richard Pearson,
University of Manchester. Line types cannot be mapped to continuous
values.
}
\examples{
library(reshape2) # for melt
library(plyr) # for ddply
ecm <- melt(economics, id.vars = "date")
rescale01 <- function(x) (x - min(x)) / diff(range(x))
ecm <- ddply(ecm, "variable", transform, value = rescale01(value))
ggplot(ecm, aes(date, value)) + geom_line(aes(group = variable))
ggplot(ecm, aes(date, value)) + geom_line(aes(linetype = variable))
ggplot(ecm, aes(date, value)) + geom_line(aes(colour = variable))
# See scale_manual for more flexibility
}
| /man/scale_linetype.Rd | no_license | ctaimal/ggplot2 | R | false | false | 1,272 | rd |
#' Basic Metropolis algorithm
#'
#' This is a basic Metropolis algorithm
#'
#' @export
#' @author Thibaut Jombart \email{t.jombart@@imperial.ac.uk}
#'
#' @param f a function computing log-densities
#' @param param a list of named parameters passed on to 'f'; parameters must be given their initial values
#' @param to.move a logical or an integer vector indicating which parts of 'param' should be moved in the MCMC
#' @param n a number of iterations for the MCMC
#' @param sd the standard deviation of the proposal normal distribution
#'
#' @examples
#'
#' ## try with a basic normal density
#' f1 <- function(x){dnorm(x, log=TRUE)}
#' mc1 <- metro(f1)
#' head(mc1)
#' tail(mc1)
#' plot(mc1)
#'
#' ## change initial point
#' mc2 <- metro(f1, list(x=-10))
#' plot(mc2)
#'
#' ## try with a mixture of densities
#' fmix <- function(x){log(dnorm(x, mean=-3)+dnorm(x, mean=1, sd=.5))}
#' mc3 <- metro(fmix, list(x=0))
#' plot(mc3)
#'
#' ## try harder example
#' fmix2 <- function(x){log(dnorm(x, mean=-5)+dnorm(x, mean=5, sd=.5))}
#' mc4 <- metro(fmix2, list(x=0), sd=3, n=1e4)
#' plot(mc4)
#'
#' @importFrom coda mcmc
#' @importFrom stats rnorm runif
#'
metro <- function(f, param=list(x=0), to.move=TRUE, n=1e4,
sd=1){
## FIND WHICH ARGUMENTS NEED TO MOVE ##
nParam <- length(param)
if(nParam<1) stop("At least one parameter is needed to define the domain of f")
if(!is.null(param)) to.move <- rep(to.move, length=nParam)
if(is.logical(to.move)) to.move <- which(to.move)
param.names <- names(param)
  ## DEFINE OVERALL DENSITY COMPUTATION ##
logdens <- function(param){
return(sum(do.call(f, param)))
}
## pre-generate unif random variates
RAND <- log(runif(length(to.move)*n))
COUNTER <- 1
  ## build output
  out.log <- double(n)
  out.param <- vector("list", n)
  out.param[[1]] <- param
  out.log[1] <- logdens(param)
  ## for each iteration
  for(i in 2:n){
    ## move each param
    param <- out.param[[i-1]]
    for(j in to.move){
      new <- param
      new[[j]] <- rnorm(1, mean=new[[j]], sd=sd)
      logratio <- logdens(new) - logdens(param)
      ## accept/reject
      if(logratio >= RAND[COUNTER]){param <- new}
      COUNTER <- COUNTER+1
    }
    ## store param
    out.param[[i]] <- param
    ## store log dens
    out.log[i] <- logdens(param)
  }
}
## SHAPE OUTPUT ##
out.param <- matrix(unlist(out.param), byrow=TRUE, ncol=nParam)
colnames(out.param) <- param.names
out <- cbind(out.log, out.param)
colnames(out)[1] <- "logdens"
out <- mcmc(out, start=1, end=n, thin=1)
return(out)
} # end metro
| /R/metro.R | no_license | thibautjombart/fancysamplers | R | false | false | 2,655 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/GetStandings.R
\name{GetStandings}
\alias{GetStandings}
\title{Standings.}
\usage{
GetStandings(year)
}
\arguments{
\item{year}{NBA season (e.g. 2008 for the 2007-08 season)}
}
\value{
data frame with wins and losses for that season
}
\description{
Standings.
}
\examples{
GetStandings(2014)
}
\keyword{schedule}
| /man/GetStandings.Rd | no_license | angelldogg/sportsTools | R | false | true | 392 | rd |
# normal tumor clinical data
# Set-up environment
rm(list = ls())
# Set working directory
load("~/R.Config.Rdata")
master.location = master.location.db
setwd(paste0(master.location, "/TBI-LAB - Project - COAD SIDRA-LUMC/NGS_Data"))
source(paste0(toolbox.path, "/R scripts/ipak.function.R"))
# load data
load("./Processed_Data/Survival Data/JSREP_clinical_data.Rdata")
library(dplyr)
tumor = clinical_data
tumor$ID = paste0(tumor$Patient_ID, "T")
tumor <- tumor %>% relocate(ID, .before = gender)
normal = clinical_data
normal$ID = paste0(normal$Patient_ID, "N")
normal <- normal %>% relocate(ID, .before = gender)
clinical_data = rbind(normal, tumor)
clinical_data = clinical_data[order(clinical_data$ID),]
clinical_data$Type = NA
clinical_data$Type[grep("N$", clinical_data$ID)] = "Normal"
clinical_data$Type[grep("T$", clinical_data$ID)] = "Tumor"
clinical_data <- clinical_data %>% relocate(Type, .before = gender)
save(clinical_data, file = "./Processed_Data/Survival Data/JSREP_NT_clinical_data.Rdata")
| /Microbiome/16S rRNA gene sequencing/000.0_NT_clinical_data.R | no_license | Sidra-TBI-FCO/AC-ICAM-NM | R | false | false | 1,051 | r |
rm(list=ls())
options(stringsAsFactors = FALSE)
setwd("/net/wong05/home/liz86/Steffi/primary_vs_mets/")
load("Data_v2/all_clinical_info.RData")
tab2 <- function(x){
x[is.na(x)] <- "missing"
print(table(x))
}
tab2(all_clin$PreEndocrine)
tab2(all_clin$PreHER2)
tab2(all_clin$PreChemo)
tab2(all_clin$PostEndocrine)
tab2(all_clin$PostHER2)
tab2(all_clin$PostChemo)
colname <- c("Met.Location","PreEndocrine", "PreChemo", "PreHER2",
"PostEndocrine", "PostChemo", "PostHER2")
all_clin[,colname]
tab2_two <- function(x,y){
x[is.na(x)] <- "missing"
y[is.na(y)] <- "missing"
print(table(x,y))
}
tab2_two(all_clin$PreEndocrine, all_clin$ER.Prim)
tab2_two(all_clin$PreHER2, all_clin$HER2.Prim)
all_clin[which(all_clin$PreEndocrine == "Yes" & all_clin$ER.Prim=="Neg"), ]
all_clin[which(all_clin$PreHER2 == "Yes" & all_clin$HER2.Prim=="Neg"), ] | /Code/check_treatment.R | no_license | lizhu06/TILsComparison_PBTvsMET | R | false | false | 845 | r |
# 2018-10-25_PPI-Analysis.R
#
# Purpose: R code for the submission of the PPI Analysis task
#
# Version: 1.0
#
# Date: 2017 10 25
# Author: Gabriela Morgenshtern (g.morgenshtern@mail.utoronto.ca)
#
# Versions:
#
# 1.0 First draft version, mostly notes from learning unit
if (!require(igraph, quietly=TRUE)) {
install.packages("igraph")
library(igraph)
}
if (!require(biomaRt, quietly=TRUE)) {
if (! exists("biocLite")) {
source("https://bioconductor.org/biocLite.R")
}
biocLite("biomaRt")
library(biomaRt)
}
## Script setup
load("./data/STRINGedges.RData")
head(STRINGedges)
# = 4 Task for submission =================================================
# Write a loop that will go through your personalized list of Ensemble IDs and
# for each ID:
# -- print the ID,
# -- print the first row's HGNC symbol,
# -- print the first row's wikigene description.
# -- print the first row's phenotype.
#
# (Hint, you can structure your loop in the same way as the loop that
# created CPdefs. )
# Place the R code for this loop and its output into your report if you are
# submitting a report for this unit. Please read the requirements carefully.
## gSTR and BCsel come from the learning unit's preceding code and are
## assumed to already be in the workspace.
(ENSPsel <- names(V(gSTR)[BCsel]))
# creating the personalized ENSPsel list:
set.seed(1000703679)
(ENSPsel <- sample(ENSPsel))
# specifying the database:
myMart <- useMart("ensembl", dataset="hsapiens_gene_ensembl")
CPdefs <- list()
for (ID in ENSPsel) {
CPdefs[[ID]] <- getBM(filters = "ensembl_peptide_id",
attributes = c("hgnc_symbol",
"wikigene_description",
"interpro_description",
"phenotype_description"),
values = ID,
mart = myMart)
}
charSpacer <- rep('*', 40)
for (ID in ENSPsel) {
cat("ENSP ID:", ID,"\n")
cat("hgnc_symbol:", CPdefs[[ID]][1,][1,"hgnc_symbol"], "\n")
cat("wikigene_description:", CPdefs[[ID]][1,][1,"wikigene_description"], "\n")
cat("interpro_description:", CPdefs[[ID]][1,][1,"interpro_description"], "\n")
cat("phenotype_description:", CPdefs[[ID]][1,][1,"phenotype_description"], "\n")
cat(charSpacer, "\n")
}
# [END]
| /2018-10-25_PPI-Analysis.R | no_license | gabmorg/bch411-2018 | R | false | false | 2,266 | r |
library(rmarkdown)
render("C:/Users/BAIOMED07/Desktop/work/R_exercise/women.Rmd", "html_document")
library(knitr)
knit("C:/Users/BAIOMED07/Desktop/work/R_exercise/drugs.Rnw")
library(knitr)
knit2pdf("C:/Users/BAIOMED07/Desktop/work/R_exercise/drugs.Rnw")
| /R_011.r | permissive | wenhao966/R | R | false | false | 259 | r |
#' Function to create a data frame (with three columns) from a (sparse) matrix
#'
#' \code{xSM2DF} is supposed to create a data frame (with three columns) from a (sparse) matrix. Only nonzero/non-NA entries from the matrix will be kept in the resulting data frame.
#'
#' @param data a matrix or an object of the dgCMatrix class (a sparse matrix)
#' @param verbose logical to indicate whether the messages will be displayed in the screen. By default, it sets to TRUE for display
#' @return a data frame containing three columns: 1st column for row names, 2nd for column names, and 3rd for numeric values
#' @note none
#' @export
#' @seealso \code{\link{xSM2DF}}
#' @include xSM2DF.r
#' @examples
#' # create a sparse matrix of 4 X 2
#' input.file <- rbind(c('R1','C1',1), c('R2','C1',1), c('R2','C2',1), c('R3','C2',2), c('R4','C1',1))
#' data <- xSparseMatrix(input.file)
#' # convert into a data frame
#' res_df <- xSM2DF(data)
#' res_df
xSM2DF <- function(data, verbose=TRUE)
{
if(class(data) == 'dgCMatrix' | is.data.frame(data)){
#data <- as.matrix(data)
}
## row names
if(is.null(rownames(data))){
names_row <- 1:nrow(data)
}else{
names_row <- rownames(data)
}
## column names
if(is.null(colnames(data))){
names_col <- 1:ncol(data)
}else{
names_col <- colnames(data)
}
if(is.data.frame(data) | class(data) == 'matrix' | class(data) == 'dgeMatrix'){
data <- as.matrix(data)
## values
xy <- which(data!=0, arr.ind=TRUE)
ind <- which(data!=0, arr.ind=FALSE)
if(nrow(xy) > 0){
res_df <- data.frame(rownames=names_row[xy[,1]], colnames=names_col[xy[,2]], values=data[ind], stringsAsFactors=FALSE)
res_df <- res_df[order(res_df$rownames,res_df$values,decreasing=FALSE),]
if(verbose){
message(sprintf("A matrix of %d X %d has been converted into a data frame of %d X 3.", dim(data)[1], dim(data)[2], nrow(res_df)), appendLF=T)
}
}else{
res_df <- NULL
}
}else if(class(data) == 'dgCMatrix' | class(data) == 'dsCMatrix'){
ijx <- summary(data)
if(nrow(ijx)>0){
res_df <- data.frame(rownames=names_row[ijx[,1]], colnames=names_col[ijx[,2]], values=ijx[,3], stringsAsFactors=FALSE)
res_df <- res_df[order(res_df$rownames,res_df$values,decreasing=FALSE),]
if(verbose){
message(sprintf("A matrix of %d X %d has been converted into a data frame of %d X 3.", dim(data)[1], dim(data)[2], nrow(res_df)), appendLF=T)
}
}else{
res_df <- NULL
}
}
invisible(res_df)
}
| /R/xSM2DF.r | no_license | cran/XGR | R | false | false | 2,533 | r |
# PROBLEM:
# ------------------------------------------------------------------------------
# input: -----------------------------------------------------------------------
# setup task acc
# 1 1 0.90
# 2 1 0.98
# 1 2 0.80
# 2 2 0.78
# 1 3 0.67
# 2 3 0.99
# desired: ---------------------------------------------------------------------
# 1 setup 1.00 2.00 3.00
# 2 1 0.90 0.80 0.67
# 3 2 0.98 0.78 0.99
# ------------------------------------------------------------------------------
# input ------------------------------------------------------------------------
df <- matrix( data = c( 1, 1, 0.9
, 2, 1, 0.98
, 1, 2, 0.8
, 2, 2, 0.78
, 1, 3, 0.67
, 2, 3, 0.99
)
, byrow = TRUE
, nrow = 6
, ncol = 3
)
colnames(df) <- c("setup", "task", "acc")
df <- as.data.frame(df)
library(reshape2)
new_df <- melt(df, id.vars = c("setup", "task"))
new_df$variable <- NULL
new_df <- aggregate(. ~ setup, new_df, list)
final_df <- cbind(
new_df$setup,
as.data.frame(do.call(rbind, new_df$value))
)
colnames(final_df) <- c("setup",new_df$task[[1]])
final_df
library(reshape2)
# input ------------------------------------------------------------------------
df <- matrix( data = c( 1, 1, 0.9
, 2, 1, 0.98
, 1, 2, 0.8
, 2, 2, 0.78
, 1, 3, 0.67
, 2, 3, 0.99
)
, byrow = TRUE
, nrow = 6
, ncol = 3
)
colnames(df) <- c("setup", "task", "acc")
df <- as.data.frame(df)
new_df <- melt(df, id.vars = c("setup", "task"))
new_df$variable <- NULL
new_df <- aggregate(. ~ setup, new_df, list)
final_df <- cbind(
new_df$setup,
as.data.frame(do.call(rbind, new_df$value))
)
colnames(final_df) <- c("setup",new_df$task[[1]])
final_df
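# For reference, the same wide layout can also be produced in one step with
# dcast() from reshape2 (already loaded above); a sketch on the same data:

```r
library(reshape2)

# Same input as above, built directly as a data frame.
df <- data.frame(setup = c(1, 2, 1, 2, 1, 2),
                 task  = c(1, 1, 2, 2, 3, 3),
                 acc   = c(0.9, 0.98, 0.8, 0.78, 0.67, 0.99))

# One row per setup, one column per task, cells filled with acc.
wide <- dcast(df, setup ~ task, value.var = "acc")
print(wide)
#   setup    1    2    3
# 1     1 0.90 0.80 0.67
# 2     2 0.98 0.78 0.99
```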
| /scripts/11a_reshaping.R | no_license | shlokkie/twine | R | false | false | 2,005 | r |
\name{alaska.blkgrp}
\Rdversion{1.1}
\alias{alaska.blkgrp}
\docType{data}
\title{
alaska.blkgrp
}
\description{
alaska.blkgrp is a \code{\link[sp:SpatialPolygonsDataFrame]{SpatialPolygonsDataFrame}} with polygons made from the 2000 US Census tiger/line boundary files (\url{http://www.census.gov/geo/www/tiger/}) for Census Block Groups. It also contains 86 variables from the Summary File 1 (SF 1) which contains the 100-percent data (\url{http://www.census.gov/prod/cen2000/doc/sf1.pdf}).
All polygons are projected in CRS("+proj=longlat +datum=NAD83")
}
\usage{data(alaska.blkgrp)}
%\format{
%}
\details{
\bold{ID Variables} \cr
\tabular{ll}{
data field name \tab Full Description \cr
state \tab State FIPS code \cr
county \tab County FIPS code \cr
tract \tab Tract FIPS code \cr
blkgrp \tab Blockgroup FIPS code \cr
}
\bold{Census Variables} \cr
\tabular{lll}{
Census SF1 Field Name \tab data field name \tab Full Description \cr
(P007001) \tab pop2000 \tab population 2000 \cr
(P007002) \tab white \tab white alone \cr
(P007003) \tab black \tab black or african american alone \cr
(P007004) \tab ameri.es \tab american indian and alaska native alone \cr
(P007005) \tab asian \tab asian alone \cr
(P007006) \tab hawn.pi \tab native hawaiian and other pacific islander alone \cr
(P007007) \tab other \tab some other race alone \cr
(P007008) \tab mult.race \tab 2 or more races \cr
(P011001) \tab hispanic \tab people who are hispanic or latino \cr
(P008002) \tab not.hispanic.t \tab Not Hispanic or Latino \cr
(P008003) \tab nh.white \tab White alone \cr
(P008004) \tab nh.black \tab Black or African American alone \cr
(P008005) \tab nh.ameri.es \tab American Indian and Alaska Native alone \cr
(P008006) \tab nh.asian \tab Asian alone \cr
(P008007) \tab nh.hawn.pi \tab Native Hawaiian and Other Pacific Islander alone \cr
(P008008) \tab nh.other \tab Some other race alone \cr
(P008010) \tab hispanic.t \tab Hispanic or Latino \cr
(P008011) \tab h.white \tab White alone \cr
(P008012) \tab h.black \tab Black or African American alone \cr
(P008013) \tab h.american.es \tab American Indian and Alaska Native alone \cr
(P008014) \tab h.asian \tab Asian alone \cr
(P008015) \tab h.hawn.pi \tab Native Hawaiian and Other Pacific Islander alone \cr
(P008016) \tab h.other \tab Some other race alone \cr
(P012002) \tab males \tab males \cr
(P012026) \tab females \tab females \cr
(P012003 + P012027) \tab age.under5 \tab male and female under 5 yrs \cr
(P012004-006 + P012028-030) \tab age.5.17 \tab male and female 5 to 17 yrs \cr
(P012007-009 + P012031-033) \tab age.18.21 \tab male and female 18 to 21 yrs \cr
(P012010-011 + P012034-035) \tab age.22.29 \tab male and female 22 to 29 yrs \cr
(P012012-013 + P012036-037) \tab age.30.39 \tab male and female 30 to 39 yrs \cr
(P012014-015 + P012038-039) \tab age.40.49 \tab male and female 40 to 49 yrs \cr
(P012016-019 + P012040-043) \tab age.50.64 \tab male and female 50 to 64 yrs \cr
(P012020-025 + P012044-049) \tab age.65.up \tab male and female 65 yrs and over \cr
(P013001) \tab med.age \tab median age, both sexes \cr
(P013002) \tab med.age.m \tab median age, males \cr
(P013003) \tab med.age.f \tab median age, females \cr
(P015001) \tab households \tab households \cr
(P017001) \tab ave.hh.sz \tab average household size \cr
(P018003) \tab hsehld.1.m \tab 1-person household, male householder \cr
(P018004) \tab hsehld.1.f \tab 1-person household, female householder \cr
(P018008) \tab marhh.chd \tab family households, married-couple family, w/ own children under 18 yrs \cr
(P018009) \tab marhh.no.c \tab family households, married-couple family, no own children under 18 yrs \cr
(P018012) \tab mhh.child \tab family households, other family, male householder, no wife present, w/ own children under 18 yrs \cr
(P018015) \tab fhh.child \tab family households, other family, female householder, no husband present, w/ own children under 18 yrs \cr
(H001001) \tab hh.units \tab housing units total \cr
(H002002) \tab hh.urban \tab urban housing units \cr
(H002005) \tab hh.rural \tab rural housing units \cr
(H003002) \tab hh.occupied \tab occupied housing units \cr
(H003003) \tab hh.vacant \tab vacant housing units \cr
(H004002) \tab hh.owner \tab owner occupied housing units \cr
(H004003) \tab hh.renter \tab renter occupied housing units \cr
(H013002) \tab hh.1person \tab 1-person household \cr
(H013003) \tab hh.2person \tab 2-person household \cr
(H013004) \tab hh.3person \tab 3-person household \cr
(H013005) \tab hh.4person \tab 4-person household \cr
(H013006) \tab hh.5person \tab 5-person household \cr
(H013007) \tab hh.6person \tab 6-person household \cr
(H013008) \tab hh.7person \tab 7-person household \cr
(H015I003)+(H015I011) \tab hh.nh.white.1p \tab (white only, not hispanic ) 1-person household \cr
(H015I004)+(H015I012) \tab hh.nh.white.2p \tab (white only, not hispanic ) 2-person household \cr
(H015I005)+(H015I013) \tab hh.nh.white.3p \tab (white only, not hispanic ) 3-person household \cr
(H015I006)+(H015I014) \tab hh.nh.white.4p \tab (white only, not hispanic ) 4-person household \cr
(H015I007)+(H015I015) \tab hh.nh.white.5p \tab (white only, not hispanic ) 5-person household \cr
(H015I008)+(H015I016) \tab hh.nh.white.6p \tab (white only, not hispanic ) 6-person household \cr
(H015I009)+(H015I017) \tab hh.nh.white.7p \tab (white only, not hispanic ) 7-person household \cr
(H015H003)+(H015H011) \tab hh.hisp.1p \tab (hispanic) 1-person household \cr
(H015H004)+(H015H012) \tab hh.hisp.2p \tab (hispanic) 2-person household \cr
(H015H005)+(H015H013) \tab hh.hisp.3p \tab (hispanic) 3-person household \cr
(H015H006)+(H015H014) \tab hh.hisp.4p \tab (hispanic) 4-person household \cr
(H015H007)+(H015H015) \tab hh.hisp.5p \tab (hispanic) 5-person household \cr
(H015H008)+(H015H016) \tab hh.hisp.6p \tab (hispanic) 6-person household \cr
(H015H009)+(H015H017) \tab hh.hisp.7p \tab (hispanic) 7-person household \cr
(H015B003)+(H015B011) \tab hh.black.1p \tab (black) 1-person household \cr
(H015B004)+(H015B012) \tab hh.black.2p \tab (black) 2-person household \cr
(H015B005)+(H015B013) \tab hh.black.3p \tab (black) 3-person household \cr
(H015B006)+(H015B014) \tab hh.black.4p \tab (black) 4-person household \cr
(H015B007)+(H015B015) \tab hh.black.5p \tab (black) 5-person household \cr
(H015B008)+(H015B016) \tab hh.black.6p \tab (black) 6-person household \cr
(H015B009)+(H015B017) \tab hh.black.7p \tab (black) 7-person household \cr
(H015D003)+(H015D011) \tab hh.asian.1p \tab (asian) 1-person household \cr
(H015D004)+(H015D012) \tab hh.asian.2p \tab (asian) 2-person household \cr
(H015D005)+(H015D013) \tab hh.asian.3p \tab (asian) 3-person household \cr
(H015D006)+(H015D014) \tab hh.asian.4p \tab (asian) 4-person household \cr
(H015D007)+(H015D015) \tab hh.asian.5p \tab (asian) 5-person household \cr
(H015D008)+(H015D016) \tab hh.asian.6p \tab (asian) 6-person household \cr
(H015D009)+(H015D017) \tab hh.asian.7p \tab (asian) 7-person household \cr
}
}
\source{
Census 2000 Summary File 1 [name of state or United States]/prepared by the U.S. Census
Bureau, 2001.
}
\references{
\url{http://www.census.gov/}\cr
\url{http://www2.census.gov/cgi-bin/shapefiles/national-files} \cr
\url{http://www.census.gov/prod/cen2000/doc/sf1.pdf} \cr
}
\examples{
data(alaska.blkgrp)
############################################
## Helper function for handling coloring of the map
############################################
color.map <- function(x, dem, y=NULL){
  l.poly <- length(x@polygons)
  dem.num <- cut(dem, breaks=ceiling(quantile(dem)), dig.lab = 6)
  dem.num[which(is.na(dem.num))] <- levels(dem.num)[1]
  l.uc <- length(table(dem.num))
  if(is.null(y)){
    col.heat <- heat.colors(16)[c(14,8,4,1)] ##fixed set of four colors
  }else{
    col.heat <- y
  }
  dem.col <- cbind(col.heat, names(table(dem.num)))
  colors.dem <- vector(length=l.poly)
  for(i in 1:l.uc){
    colors.dem[which(dem.num==dem.col[i,2])] <- dem.col[i,1]
  }
  out <- list(colors=colors.dem, dem.cut=dem.col[,2], table.colors=dem.col[,1])
  return(out)
}
############################################
## Using the helper function to color and plot the map
############################################
colors.use<-color.map(alaska.blkgrp,alaska.blkgrp$pop2000)
plot(alaska.blkgrp,col=colors.use$colors,ylim=c(51.78495, 71.33953),xlim=c(-176.81043, -130.0427))
#text(coordinates(alaska.blkgrp),alaska.blkgrp@data$name,cex=.3)
title(main="Census Block Groups \n of Alaska, 2000", sub="Quantiles (equal frequency)")
legend("bottomright",legend=colors.use$dem.cut,fill=colors.use$table.colors,bty="o",title="Population Count",bg="white")
###############################
### Alternative way to do the above
###############################
\dontrun{
####This example requires the following additional libraries
library(RColorBrewer)
library(classInt)
library(maps)
####This example requires the following additional libraries
data(alaska.blkgrp)
map('state',region='alaska')
plotvar <- alaska.blkgrp$pop2000
nclr <- 4
#BuPu
plotclr <- brewer.pal(nclr,"BuPu")
class <- classIntervals(plotvar, nclr, style="quantile")
colcode <- findColours(class, plotclr)
plot(alaska.blkgrp, col=colcode, border="transparent",add=TRUE)
#transparent
title(main="Census Block Groups \n of Alaska, 2000", sub="Quantiles (equal frequency)")
map.text("county", "alaska",cex=.7,add=TRUE)
map('county','alaska',add=TRUE)
legend("bottomright","(x,y)", legend=names(attr(colcode, "table")),fill=attr(colcode, "palette"),
cex=0.9, bty="o", title="Population Count",bg="white")
}
}
\keyword{datasets}
| /man/alaska.blkgrp.Rd | no_license | cran/UScensus2000blkgrp | R | false | false | 9,591 | rd |
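The `color.map` helper in the example above implements a standard choropleth technique: bin the values into equal-frequency (quantile) classes and assign one color per class. A minimal, language-agnostic sketch of that idea in Python (the palette hex values and helper names are illustrative, not part of the original package):

```python
def quantile_classes(values, n_classes=4):
    """Assign each value a class index 0..n_classes-1 based on
    which quantile interval it falls into (equal-frequency bins)."""
    srt = sorted(values)
    # interior quantile cut points (e.g. quartiles for n_classes = 4)
    cuts = [srt[int(q * (len(srt) - 1) / n_classes)] for q in range(1, n_classes)]
    classes = []
    for v in values:
        c = 0
        for cut in cuts:
            if v > cut:
                c += 1
        classes.append(c)
    return classes

def color_map(values, palette=("#fee5d9", "#fcae91", "#fb6a4a", "#cb181d")):
    """Map each value to a hex color according to its quantile class."""
    return [palette[c] for c in quantile_classes(values, len(palette))]
```

Each polygon's fill color is then looked up from its value's class, exactly as `color.map` does with `cut(..., breaks=quantile(dem))` in the R example.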
\name{getParams}
\alias{getParams}
\title{Calculate parameters to use as input for HMM}
\usage{
getParams(tstats, plots = FALSE, plotfile = NULL,
verbose = F)
}
\arguments{
\item{tstats}{Vector containing all moderated t
statistics obtained using \code{getTstats}.}
\item{plots}{if TRUE, create diagnostic plots as
parameters are estimated}
\item{plotfile}{Optional string giving a location and PDF
file name to which plots should be written, if
\code{plots = TRUE}. If NULL, plots are created in the
available graphics device.}
\item{verbose}{If TRUE, periodic messages are printed
onscreen during estimation.}
}
\value{
A list with elements \item{params }{list with elements
\code{mean} and \code{sd}, both 4-item vectors.
\code{mean} gives the respective means of the "not
expressed," "equally expressed," "overexpressed," and
"underexpressed" distributions; \code{sd} gives their
respective standard deviations.} \item{stateprobs
}{vector of percentages of the mixture distribution that
come from the "not expressed," "equally expressed,"
"overexpressed," and "underexpressed" distributions,
respectively. It is assumed that "overexpressed" and
"underexpressed" t statistics comprise equal percentages
of the mixture.}
}
\description{
Assumes that the moderated t statistics obtained by
fitting a linear model to each nucleotide come from a
Gaussian mixture distribution, where the four
distributions in the mixture represent distributions of t
statistics from "underexpressed," "overexpressed,"
"equally expressed," and "not expressed" nucleotides.
\code{getParams} estimates the parameters of each of the
sub-distributions, as well as the percentage of the
mixture distribution each contributes, in order to use
these parameters to fit a Hidden Markov Model that
classifies the nucleotides.
}
\details{
The standard pipeline here is to feed the output from
\code{getParams} directly into \code{getRegions} using
the "HMM" option.
}
\author{
Alyssa Frazee
}
\seealso{
\code{\link{getRegions}}
}
} | /man/getParams.Rd | no_license | BioinformaticsArchive/derfinder | R | false | false | 2,086 | rd |
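The `getParams` documentation above describes modeling pooled moderated t-statistics as a Gaussian mixture and estimating each component's mean, standard deviation, and mixing proportion. derfinder uses its own estimation procedure with four fixed states; purely as an illustration of the underlying idea (fitting a one-dimensional Gaussian mixture by expectation-maximization), here is a minimal Python sketch. All names and initialization choices are hypothetical, not derfinder's code:

```python
import math

def _normpdf(x, m, s):
    """Density of a normal distribution with mean m and sd s at x."""
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def em_gaussian_mixture(xs, k=2, iters=200):
    """Estimate the means, standard deviations, and mixing proportions
    of a one-dimensional Gaussian mixture by expectation-maximization."""
    n = len(xs)
    xs_sorted = sorted(xs)
    # deterministic initialization: means at evenly spaced quantiles,
    # pooled standard deviation, uniform mixing weights
    mu = [xs_sorted[int((j + 0.5) * n / k)] for j in range(k)]
    grand = sum(xs) / n
    pooled = max(math.sqrt(sum((x - grand) ** 2 for x in xs) / n), 1e-6)
    sd = [pooled] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for x in xs:
            dens = [w[j] * _normpdf(x, mu[j], sd[j]) for j in range(k)]
            tot = sum(dens) or 1e-300
            resp.append([d / tot for d in dens])
        # M-step: weighted moment updates
        for j in range(k):
            nj = max(sum(r[j] for r in resp), 1e-12)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sd[j] = max(math.sqrt(var), 1e-6)
    return mu, sd, w
```

With `k = 4` this mirrors the four states in `getParams` ("not expressed," "equally expressed," "overexpressed," "underexpressed"); the returned `w` corresponds to the `stateprobs` element of its output.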
testlist <- list(end = NULL, start = NULL, x = structure(c(1.72723371101889e-77, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = 9:10), segment_end = structure(0, .Dim = c(1L, 1L)), segment_start = structure(0, .Dim = c(1L, 1L)))
result <- do.call(dynutils::project_to_segments,testlist)
str(result) | /dynutils/inst/testfiles/project_to_segments/AFL_project_to_segments/project_to_segments_valgrind_files/1609871839-test.R | no_license | akhikolla/updated-only-Issues | R | false | false | 532 | r |
statistics <- function(D, limit){
  # Column layout of D: time, RMSD, energy
  time   = D[,1]
  RMSD   = D[,2]
  Energy = D[,3]
  # RMSD aggregates over the full trajectory
  RMSD_avg = mean(RMSD, na.rm = TRUE)
  RMSD_sdv = sd(RMSD, na.rm = TRUE)
  RMSD_min = min(RMSD, na.rm = TRUE)
  RMSD_max = max(RMSD, na.rm = TRUE)
  # Energy aggregates restricted to frames after the equilibration limit
  Ene_avg = mean(Energy[time > limit], na.rm = TRUE)
  Ene_sdv = sd(Energy[time > limit], na.rm = TRUE)
  Ene_min = min(Energy[time > limit], na.rm = TRUE)
  Ene_max = max(Energy[time > limit], na.rm = TRUE)
  aggregates = data.frame("RMSD_avg"=RMSD_avg, "RMSD_sdv"=RMSD_sdv, "RMSD_min"=RMSD_min, "RMSD_max"=RMSD_max,
                          "Ene_avg"=Ene_avg, "Ene_sdv"=Ene_sdv, "Ene_min"=Ene_min, "Ene_max"=Ene_max)
  return(aggregates)
} | /R/statistics.R | no_license | beautifulMountain/modygliani | R | false | false | 651 | r |
#' Computes the dissimilarity measure \emph{psi} on two or more sequences.
#'
#' @description If the sequences are not aligned (\code{paired.samples = FALSE}), the function executes these steps.
#' \itemize{
#' \item Computes the autosum of the sequences with \code{\link{autoSum}}.
#' \item Computes the distance matrix with \code{\link{distanceMatrix}}.
#' \item Uses the distance matrix to compute the least cost matrix with \code{\link{leastCostMatrix}}.
#' \item Extracts the cost of the least cost path with \code{\link{leastCost}}.
#' \item Computes the dissimilarity measure \emph{psi} with the function \code{\link{psi}}.
#' \item Delivers an output of type "list" (default), "data.frame" or "matrix", depending on the user input, through \code{\link{formatPsi}}.
#' }
#'
#' If the sequences are aligned (\code{paired.samples = TRUE}), these steps are executed:
#' \itemize{
#' \item Computes the autosum of the sequences with \code{\link{autoSum}}.
#' \item Sums the distances between paired samples with \code{\link{distancePairedSamples}}.
#' \item Computes the dissimilarity measure \emph{psi} with the function \code{\link{psi}}.
#' \item Delivers an output of type "list" (default), "data.frame" or "matrix", depending on the user input, through \code{\link{formatPsi}}.
#' }
#'
#' @usage workflowPsi(
#' sequences = NULL,
#' grouping.column = NULL,
#' time.column = NULL,
#' exclude.columns = NULL,
#' method = "manhattan",
#' diagonal = FALSE,
#' format = "dataframe",
#' paired.samples = FALSE,
#' same.time = FALSE,
#' ignore.blocks = FALSE,
#' parallel.execution = TRUE
#' )
#'
#' @param sequences dataframe with multiple sequences identified by a grouping column generated by \code{\link{prepareSequences}}.
#' @param grouping.column character string, name of the column in \code{sequences} to be used to identify separate sequences within the file.
#' @param time.column character string, name of the column with time/depth/rank data.
#' @param exclude.columns character string or character vector with column names in \code{sequences} to be excluded from the analysis.
#' @param method character string naming a distance metric. Valid entries are: "manhattan", "euclidean", "chi", and "hellinger". Invalid entries will throw an error.
#' @param diagonal boolean, if \code{TRUE}, diagonals are included in the computation of the least cost path. Defaults to \code{FALSE}, as the original algorithm did not include diagonals in the computation of the least cost path. If \code{paired.samples} is \code{TRUE}, then \code{diagonal} is irrelevant.
#' @param format string, type of output. One of: "data.frame", "matrix". If \code{NULL} or empty, a list is returned.
#' @param paired.samples boolean, if \code{TRUE}, the sequences are assumed to be aligned, and distances are computed for paired-samples only (no distance matrix required). Default value is \code{FALSE}.
#' @param same.time boolean. If \code{TRUE}, samples in the sequences to compare will be tested to check if they have the same time/age/depth according to \code{time.column}. This argument is only useful when the user needs to compare two sequences taken at different sites but over the same time frames.
#' @param ignore.blocks boolean. If \code{TRUE}, the function \code{\link{leastCostPathNoBlocks}} analyzes the least-cost path of the best solution, and removes blocks (straight-orthogonal sections of the least-cost path), which happen in highly dissimilar sections of the sequences, and inflate output psi values.
#' @param parallel.execution boolean, if \code{TRUE} (default), execution is parallelized, and serialized if \code{FALSE}.
#'
#' @return A list, matrix, or dataframe, with sequence names and psi values.
#'
#' @author Blas Benito <blasbenito@gmail.com>
#'
#' @examples
#'
#' \donttest{
#' data("sequencesMIS")
#' #prepare sequences
#' MIS.sequences <- prepareSequences(
#' sequences = sequencesMIS,
#' grouping.column = "MIS",
#' if.empty.cases = "zero",
#' transformation = "hellinger"
#' )
#'
#'#execute workflow to compute psi
#'MIS.psi <- workflowPsi(
#' sequences = MIS.sequences[MIS.sequences$MIS %in% c("MIS-1", "MIS-2"), ],
#' grouping.column = "MIS",
#' time.column = NULL,
#' exclude.columns = NULL,
#' method = "manhattan",
#' diagonal = FALSE,
#' parallel.execution = FALSE
#' )
#'
#'MIS.psi
#'
#'}
#'
#' @export
workflowPsi <- function(sequences = NULL,
                        grouping.column = NULL,
                        time.column = NULL,
                        exclude.columns = NULL,
                        method = "manhattan",
                        diagonal = FALSE,
                        format = "dataframe",
                        paired.samples = FALSE,
                        same.time = FALSE,
                        ignore.blocks = FALSE,
                        parallel.execution = TRUE){

  #SAMPLES ARE NOT PAIRED: ELASTIC METHOD
  if(paired.samples == FALSE){

    #computing distance matrix
    distance.matrix <- distanceMatrix(
      sequences = sequences,
      grouping.column = grouping.column,
      time.column = time.column,
      exclude.columns = exclude.columns,
      method = method,
      parallel.execution = parallel.execution
    )

    #computing least cost matrix
    least.cost.matrix <- leastCostMatrix(
      distance.matrix = distance.matrix,
      diagonal = diagonal,
      parallel.execution = parallel.execution
    )

    #computing least cost path
    least.cost.path <- leastCostPath(
      distance.matrix = distance.matrix,
      least.cost.matrix = least.cost.matrix,
      diagonal = diagonal,
      parallel.execution = parallel.execution
    )

    #BLOCKS ARE IGNORED: removing straight-orthogonal blocks from the path
    if(ignore.blocks == TRUE){
      least.cost.path <- leastCostPathNoBlocks(
        least.cost.path = least.cost.path,
        parallel.execution = parallel.execution
      )
    }

    #getting the cost of the least cost path
    least.cost <- leastCost(
      least.cost.path = least.cost.path,
      parallel.execution = parallel.execution
    )

    #autosum
    autosum.sequences <- autoSum(
      sequences = sequences,
      least.cost.path = least.cost.path,
      grouping.column = grouping.column,
      time.column = time.column,
      exclude.columns = exclude.columns,
      method = method,
      parallel.execution = parallel.execution
    )

    #computing psi
    psi.value <- psi(
      least.cost = least.cost,
      autosum = autosum.sequences,
      parallel.execution = parallel.execution
    )

    #shifting psi by 1
    if(diagonal == TRUE){
      psi.value <- lapply(X = psi.value, FUN = function(x){x + 1})
    }

  } #end of paired.samples == FALSE

  #SAMPLES ARE PAIRED: STEP-LOCK METHOD
  if(paired.samples == TRUE){

    #computing least cost from paired samples
    least.cost <- distancePairedSamples(
      sequences = sequences,
      grouping.column = grouping.column,
      time.column = time.column,
      exclude.columns = exclude.columns,
      same.time = same.time,
      method = method,
      sum.distances = TRUE,
      parallel.execution = parallel.execution
    )

    #autosum
    autosum.sequences <- autoSum(
      sequences = sequences,
      least.cost.path = least.cost,
      grouping.column = grouping.column,
      time.column = time.column,
      exclude.columns = exclude.columns,
      method = method,
      parallel.execution = parallel.execution
    )

    #computing psi
    psi.value <- psi(
      least.cost = least.cost,
      autosum = autosum.sequences,
      parallel.execution = parallel.execution
    )

    #shifting psi by 1
    psi.value <- lapply(X = psi.value, FUN = function(x){x + 1})

  } #end of paired.samples == TRUE

  #formatting psi
  if(format != "list"){
    psi.value <- formatPsi(
      psi.values = psi.value,
      to = format
    )
  }

  return(psi.value)
}
| /R/workflowPsi.R | no_license | BlasBenito/distantia | R | false | false | 7,887 | r |
#'
#' @description If the sequences are not aligned (\code{paired.samples = FALSE}), the function executes these steps.
#' \itemize{
#' \item Computes the autosum of the sequences with \code{\link{autoSum}}.
#' \item Computes the distance matrix with \code{\link{distanceMatrix}}.
#' \item Uses the distance matrix to compute the least cost matrix with \code{\link{leastCostMatrix}}.
#' \item Extracts the cost of the least cost path with \code{\link{leastCost}}.
#' \item Computes the dissimilarity measure \emph{psi} with the function \code{\link{psi}}.
#' \item Delivers an output of type "list" (default), "data.frame" or "matrix", depending on the user input, through \code{\link{formatPsi}}.
#' }
#'
#' If the sequences are aligned (\code{paired.samples = TRUE}), these steps are executed:
#' \itemize{
#' \item Computes the autosum of the sequences with \code{\link{autoSum}}.
#' \item Sums the distances between paired samples with \code{\link{distancePairedSamples}}.
#' \item Computes the dissimilarity measure \emph{psi} with the function \code{\link{psi}}.
#' \item Delivers an output of type "list" (default), "data.frame" or "matrix", depending on the user input, through \code{\link{formatPsi}}.
#' }
#'
#' @usage workflowPsi(
#' sequences = NULL,
#' grouping.column = NULL,
#' time.column = NULL,
#' exclude.columns = NULL,
#' method = "manhattan",
#' diagonal = FALSE,
#' format = "dataframe",
#' paired.samples = FALSE,
#' same.time = FALSE,
#' ignore.blocks = FALSE,
#' parallel.execution = TRUE
#' )
#'
#' @param sequences dataframe with multiple sequences identified by a grouping column generated by \code{\link{prepareSequences}}.
#' @param grouping.column character string, name of the column in \code{sequences} to be used to identify separate sequences within the file.
#' @param time.column character string, name of the column with time/depth/rank data.
#' @param exclude.columns character string or character vector with column names in \code{sequences} to be excluded from the analysis.
#' @param method character string naming a distance metric. Valid entries are: "manhattan", "euclidean", "chi", and "hellinger". Invalid entries will throw an error.
#' @param diagonal boolean, if \code{TRUE}, diagonals are included in the computation of the least cost path. Defaults to \code{FALSE}, as the original algorithm did not include diagonals in the computation of the least cost path. If \code{paired.samples} is \code{TRUE}, then \code{diagonal} is irrelevant.
#' @param format string, type of output. One of: "dataframe" (default), "matrix", or "list".
#' @param paired.samples boolean, if \code{TRUE}, the sequences are assumed to be aligned, and distances are computed for paired-samples only (no distance matrix required). Default value is \code{FALSE}.
#' @param same.time boolean. If \code{TRUE}, samples in the sequences to compare will be tested to check if they have the same time/age/depth according to \code{time.column}. This argument is only useful when the user needs to compare two sequences taken at different sites but same time frames.
#' @param ignore.blocks boolean. If \code{TRUE}, the function \code{\link{leastCostPathNoBlocks}} analyzes the least-cost path of the best solution, and removes blocks (straight-orthogonal sections of the least-cost path), which happen in highly dissimilar sections of the sequences, and inflate output psi values.
#' @param parallel.execution boolean, if \code{TRUE} (default), execution is parallelized, and serialized if \code{FALSE}.
#'
#' @return A list, matrix, or dataframe, with sequence names and psi values.
#'
#' @author Blas Benito <blasbenito@gmail.com>
#'
#' @examples
#'
#' \donttest{
#' data("sequencesMIS")
#' #prepare sequences
#' MIS.sequences <- prepareSequences(
#' sequences = sequencesMIS,
#' grouping.column = "MIS",
#' if.empty.cases = "zero",
#' transformation = "hellinger"
#' )
#'
#'#execute workflow to compute psi
#'MIS.psi <- workflowPsi(
#' sequences = MIS.sequences[MIS.sequences$MIS %in% c("MIS-1", "MIS-2"), ],
#' grouping.column = "MIS",
#' time.column = NULL,
#' exclude.columns = NULL,
#' method = "manhattan",
#' diagonal = FALSE,
#' parallel.execution = FALSE
#' )
#'
#'MIS.psi
#'
#'}
#'
#' @export
workflowPsi <- function(sequences = NULL,
grouping.column = NULL,
time.column = NULL,
exclude.columns = NULL,
method = "manhattan",
diagonal = FALSE,
format = "dataframe",
paired.samples = FALSE,
same.time = FALSE,
ignore.blocks = FALSE,
parallel.execution = TRUE){
#SAMPLES ARE NOT PAIRED: ELASTIC METHOD
if(paired.samples == FALSE){
#computing distance matrix
distance.matrix <- distanceMatrix(
sequences = sequences,
grouping.column = grouping.column,
time.column = time.column,
exclude.columns = exclude.columns,
method = method,
parallel.execution = parallel.execution
)
#computing least cost matrix
least.cost.matrix <- leastCostMatrix(
distance.matrix = distance.matrix,
diagonal = diagonal,
parallel.execution = parallel.execution
)
#computing least cost path
least.cost.path <- leastCostPath(
distance.matrix = distance.matrix,
least.cost.matrix = least.cost.matrix,
diagonal = diagonal,
parallel.execution = parallel.execution
)
#BLOCKS ARE IGNORED
if(ignore.blocks == TRUE){
#recomputing least cost path without blocks
least.cost.path <- leastCostPathNoBlocks(
least.cost.path = least.cost.path,
parallel.execution = parallel.execution
)
}
#getting least cost
least.cost <- leastCost(
least.cost.path = least.cost.path,
parallel.execution = parallel.execution
)
#autosum
autosum.sequences <- autoSum(
sequences = sequences,
least.cost.path = least.cost.path,
grouping.column = grouping.column,
time.column = time.column,
exclude.columns = exclude.columns,
method = method,
parallel.execution = parallel.execution
)
#computing psi
psi.value <- psi(
least.cost = least.cost,
autosum = autosum.sequences,
parallel.execution = parallel.execution
)
#shifting psi by 1
if(diagonal == TRUE){
psi.value <- lapply(X = psi.value, FUN = function(x){x + 1})
}
} #end of paired.samples == FALSE
#SAMPLES ARE PAIRED: STEP-LOCK METHOD
if(paired.samples == TRUE){
#computing least cost
least.cost <- distancePairedSamples(
sequences = sequences,
grouping.column = grouping.column,
time.column = time.column,
exclude.columns = exclude.columns,
same.time = same.time,
method = method,
sum.distances = TRUE,
parallel.execution = parallel.execution
)
#autosum
autosum.sequences <- autoSum(
sequences = sequences,
least.cost.path = least.cost,
grouping.column = grouping.column,
time.column = time.column,
exclude.columns = exclude.columns,
method = method,
parallel.execution = parallel.execution
)
#computing psi
psi.value <- psi(
least.cost = least.cost,
autosum = autosum.sequences,
parallel.execution = parallel.execution
)
#shifting psi by 1
psi.value <- lapply(X = psi.value, FUN = function(x){x + 1})
} #end of paired.samples == TRUE
#formatting psi
if(format != "list"){
psi.value <- formatPsi(
psi.values = psi.value,
to = format
)
}
return(psi.value)
}
|
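For readers who want to see the elastic branch of workflowPsi() unrolled, the sketch below chains the individual helper functions exactly in the order the roxygen description lists them. It assumes the distantia package (which defines all of these functions) is installed; the data set and column names are taken from the @examples block above.

```r
# Elastic (unpaired) psi, computed step by step instead of via workflowPsi().
library(distantia)

data("sequencesMIS")
MIS.sequences <- prepareSequences(
  sequences = sequencesMIS,
  grouping.column = "MIS",
  if.empty.cases = "zero",
  transformation = "hellinger"
)
two.groups <- MIS.sequences[MIS.sequences$MIS %in% c("MIS-1", "MIS-2"), ]

# 1. distance matrix between every pair of samples
d <- distanceMatrix(sequences = two.groups, grouping.column = "MIS",
  time.column = NULL, exclude.columns = NULL, method = "manhattan",
  parallel.execution = FALSE)
# 2-4. least cost matrix, least cost path, and the cost of that path
lc.m <- leastCostMatrix(distance.matrix = d, diagonal = FALSE,
  parallel.execution = FALSE)
lc.p <- leastCostPath(distance.matrix = d, least.cost.matrix = lc.m,
  diagonal = FALSE, parallel.execution = FALSE)
lc <- leastCost(least.cost.path = lc.p, parallel.execution = FALSE)
# 5. autosum of both sequences, then psi
ab <- autoSum(sequences = two.groups, least.cost.path = lc.p,
  grouping.column = "MIS", time.column = NULL, exclude.columns = NULL,
  method = "manhattan", parallel.execution = FALSE)
psi(least.cost = lc, autosum = ab, parallel.execution = FALSE)
```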
library(tidyr)
library(ggplot2)
options(stringsAsFactors = FALSE)
#----------------------------------------------------------------------
# define working_dirs
if (Sys.info()['sysname']=='Windows') {dir="P://workspaces/"} else {dir="/data/workspaces/lag/workspaces/"}
#----------------------------------------------------------------------
subset_name="imagingT1_N18057"
primary_dir=paste(dir,"../shared_spaces/Resource_DB/OBrien2018_supplementary/",sep="")
ld_dir=paste(dir,"lg-ukbiobank/working_data/amaia/genetic_data/PT/",subset_name,"/lead_snps/LD/",sep="")
fuma_dir=paste(dir,"lg-ukbiobank/working_data/amaia/genetic_data/PT/",subset_name,"/FUMA/FUMA_job36828_1KGref/",sep="")
working_dir=paste(dir,"lg-ukbiobank/working_data/amaia/genetic_data/PT/OBrien2018/",sep="")
setwd(working_dir)
options(stringsAsFactors = FALSE)
#----------------------------------------------------------------------
# read ld files
ld_files<-paste(ld_dir,list.files(ld_dir,pattern="r2.*_rs*.*.ld"),sep="")
for (f in ld_files){
t<-read.table(f,header=T,strip.white = T)
if (f==ld_files[1]){ld<-t} else {ld<-rbind(ld,t)}
rm(t)
}
rm(f,ld_files)
w<-which(duplicated(ld))
if (length(w)>0){
ld<-ld[-w,]
}
rm(w)
ld<-subset(ld,R2>=0.6)
# gene symbol gencode match
#----------------------------------------------------------------------
# read info
## sample
sample_info<-read.table(paste(primary_dir,"covariates.txt",sep=""),header=TRUE)
snp_info<-read.table(paste(primary_dir,"snp_positions.bed.gz",sep=""),header=TRUE)
#----------------------------------------------------------------------
## genes
gene_names<- read.csv(paste(fuma_dir,"blood_FDR_eqtl_symbol_gencode.csv",sep=""),header=TRUE)
eqtls_genes<- read.table(paste(working_dir,"all_eqtls_gene_interest.txt",sep=""),header=TRUE,comment.char = "")
colnames(eqtls_genes)<-gsub("X\\.","",colnames(eqtls_genes))
expr_genes<-read.table(paste(working_dir,"expression_genes_interest.bed",sep=""),header=TRUE,comment.char = "")
colnames(expr_genes)<-gsub("X\\.","",colnames(expr_genes))
#----------------------------------------------------------------------
## snps
eqtls_snps<-read.table(paste(working_dir,"all_eqtls_snp_interest.txt",sep=""),header=TRUE,comment.char = "")
#----------------------------------------------------------------------
#----------------------------------------------------------------------
# check eQTL effects: snp x gene
# subset only to genes of interest
w<-which(eqtls_snps$gene_id %in% unique(eqtls_genes$gene_id))
eqtls_snps<-eqtls_snps[w,]
# inspect nominally significant eQTL effects
subset(eqtls_snps,pval_nominal<0.1)
data_ld<-merge(eqtls_snps,ld,by.x=c("variant_id"),by.y=c("SNP_B"))
data_ld<-merge(data_ld,gene_names,by.x="gene_id",by.y="gene")
# convert to wide
data_ld_wide<-data_ld[,c("variant_id","CHR_B","BP_B","R2","symbol","pval_nominal")] %>% spread(symbol,pval_nominal)
write.csv(data_ld_wide,"OBrien2018_fetalBrain_eQTLs_wide.csv",row.names = FALSE)
#----------------------------------------------------------------------
#----------------------------------------------------------------------
# transform
# from wide to long format:
expr_genes_long <- gather(expr_genes,key="Sample",value="norm_count", X12545:X13144, factor_key=TRUE)
expr_genes_long$Sample<-gsub("^X","",expr_genes_long$Sample)
table(expr_genes_long$Sample)
# combine with sample info:
expr_genes_long<-merge(sample_info,expr_genes_long,by="Sample",all.y=TRUE)
# add gene names
expr_genes_long<-merge(gene_names,expr_genes_long,by="ID",all=TRUE)
#----------------------------------------------------------------------
#----------------------------------------------------------------------
# plot expression of genes of interest across time:
# data-points per timepoint/gene
gene_time<-ggplot(data=expr_genes_long,aes(x=PCW,y=norm_count)) + geom_point() +
facet_grid(Sex~gene) + theme_bw() + ggtitle("Expression in fetal brain\nO'Brien et al. (2018)")
# get highest and lowest expression values and timepoints per gene:
summary<-do.call("rbind",lapply(unique(expr_genes_long$gene),function(g) {
tmp<-subset(expr_genes_long,gene==g)
min_v<-min(tmp$norm_count)
min_t<-tmp$PCW[which(tmp$norm_count==min_v)]
max_v<-max(tmp$norm_count)
max_t<-tmp$PCW[which(tmp$norm_count==max_v)]
d<-as.data.frame(rbind(cbind("min",min_v,min_t),cbind("max",max_v,max_t)))
colnames(d)<-c("val","norm_count","PCW")
d$gene<-g
return(d)
} ))
summary<-as.data.frame(summary)
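The script above leans on two tidyr reshapes: spread() pivots the long eQTL table into one p-value column per gene symbol, and gather() melts the wide expression matrix into one row per sample. A toy round trip (values invented for illustration):

```r
library(tidyr)

# long format: one row per (variant, gene) pair, as in data_ld above
long <- data.frame(
  variant_id   = c("rs1", "rs1", "rs2", "rs2"),
  symbol       = c("GENE_A", "GENE_B", "GENE_A", "GENE_B"),
  pval_nominal = c(0.01, 0.20, 0.05, 0.80)
)

# wide format: one row per variant, one p-value column per gene
wide <- spread(long, symbol, pval_nominal)

# and back again: gather() melts every column except variant_id
long2 <- gather(wide, key = "symbol", value = "pval_nominal", -variant_id)
```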
| /expression/OBrien2018_check.R | no_license | amaiacc/GeneticsPlanumTemporaleAsymmetry | R | false | false | 4,501 | r |
graph.extract <-
function(MT, refX, refY, save="no", image=read.jpeg(file.choose())){
imagematrix <- function(mat, type=NULL, ncol=dim(mat)[1], nrow=dim(mat)[2],
noclipping=FALSE) {
if (is.null(dim(mat)) && is.null(type)) stop("Type should be specified.")
if (length(dim(mat)) == 2 && is.null(type)) type <- "grey"
if (length(dim(mat)) == 3 && is.null(type)) type <- "rgb"
if (type != "rgb" && type != "grey") stop("Type is incorrect.")
if (is.null(ncol) || is.null(nrow)) stop("Dimension is uncertain.")
imgdim <- c(ncol, nrow, if (type == "rgb") 3 else NULL)
if (length(imgdim) == 3 && type == "grey") {
# force to convert grey image
mat <- rgb2grey(mat)
}
if (noclipping == FALSE && ((min(mat) < 0) || (1 < max(mat)))) {
warning("Pixel values were automatically clipped because they were out of range.")
mat <- clipping(mat)
}
mat <- array(mat, dim=imgdim)
attr(mat, "type") <- type
class(mat) <- c("imagematrix", class(mat))
mat
}
rgb2grey <- function(img, coefs=c(0.30, 0.59, 0.11)) {
if (is.null(dim(img))) stop("image matrix isn't correct.")
if (length(dim(img))<3) stop("image matrix isn't rgb image.")
imagematrix(coefs[1] * img[,,1] + coefs[2] * img[,,2] + coefs[3] * img[,,3],
type="grey")
}
clipping <- function(img, low=0, high=1) {
img[img < low] <- low
img[img > high] <- high
img
}
read.jpeg <- function(filename) {
res <- .C("get_imagesize_of_JPEG_file", as.character(filename),
width=integer(1), height=integer(1), depth=integer(1),
ret=integer(1), PACKAGE="SCVA")
if (res$ret < 0)
stop(if (res$ret==-1) "Can't open file." else "Internal error")
imgtype <- if (res$depth == 1) "grey" else "rgb"
imgdim <- c(res$height, res$width, if (res$depth == 3) res$depth else NULL)
res <- .C("read_JPEG_file", as.character(filename),
image=double(res$width * res$height * res$depth),
ret=integer(1), PACKAGE="SCVA")
img <- array(res$image, dim=imgdim)
imagematrix(img/255, type=imgtype)
}
plot(image)
refpoints <- locator(n = 4, type = 'p', pch = 4, col = 'blue', lwd = 2)
refpoints <- as.data.frame(refpoints)
datapoints <- locator(n=MT,type='p',pch=1,col='red',lwd=2,cex=2)
datapoints <- as.data.frame(datapoints)
x <- refpoints$x[c(1,2)]
y <- refpoints$y[c(3,4)]
cx <- lm(formula=c(refX[1],refX[2])~c(x))$coeff
cy <- lm(formula=c(refY[1],refY[2])~c(y))$coeff
datapoints$x <- datapoints$x*cx[2]+cx[1]
datapoints$y <- datapoints$y*cy[2]+cy[1]
true.data <- as.data.frame(datapoints)
plot(true.data,type='b',pch=1,col='blue',lwd=1.1,bty='l')
rounded <- round(true.data,digits=2)
if(save=="yes"){
write.table(rounded,file=file.choose(new=FALSE),col.names=FALSE,row.names=FALSE,append=FALSE,sep="\t")
}
return(rounded)
}
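The pixel-to-data calibration at the heart of graph.extract() is just a two-point linear fit per axis: lm() maps the clicked reference pixels onto their known data values, and the resulting coefficients rescale every digitized point. In isolation, with invented pixel coordinates:

```r
pixel.x <- c(100, 500)  # x pixels of the two clicked reference points
refX    <- c(0, 10)     # the data values those points represent
cx <- lm(refX ~ pixel.x)$coefficients  # intercept and slope of the map

# rescale a digitized point clicked at pixel x = 300
unname(cx[1] + cx[2] * 300)  # 5: halfway between the two references
```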
| /R/graph.extract.R | no_license | cran/SCVA | R | false | false | 2,897 | r |
library("mosaic")
sir<-integrateODE(dS~-b*S*I,dI~b*S*I-k*I,dR~k*I,k=1/7,b=1/3,S=1,I=.005,R=0,tdur=list(from=0, to=80))
jpeg("SIRModel.jpg")
plotFun(sir$S(t)~t,t.lim=range(0,80),xlab="Time",ylab="Percent of Population", main="THE SIR MODEL")
plotFun(sir$I(t)~t,t.lim=range(0,80),add=TRUE,col="red")
plotFun(sir$R(t)~t,t.lim=range(0,80),add=TRUE,col="green")
dev.off()
#video for how to implement this in R
#https://www.youtube.com/watch?v=lW2IQ0_I3mQ
#where the formulas came from
#http://www.maa.org/press/periodicals/loci/joma/the-sir-model-for-spread-of-disease-the-differential-equation-model
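As a dependency-free cross-check on the mosaic call above, the same SIR system can be integrated with a plain Euler loop in base R; parameters mirror the integrateODE() call (b = 1/3, k = 1/7, S0 = 1, I0 = 0.005):

```r
b <- 1/3; k <- 1/7
S <- 1; I <- 0.005; R <- 0
dt <- 0.01
for (step in seq_len(80 / dt)) {
  dS <- -b * S * I        # susceptibles infected
  dI <-  b * S * I - k * I  # new infections minus recoveries
  dR <-  k * I            # recoveries
  S <- S + dt * dS
  I <- I + dt * dI
  R <- R + dt * dR
}
round(c(S = S, I = I, R = R), 3)  # S + I + R stays at the initial 1.005
```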
| /app/assets/Rscripts/SIR.R | no_license | jameskwapien/flu-sim | R | false | false | 603 | r |
#' @export
#' @title Parallel, reproducible lapply function with comfort
#' @description Parallel, reproducible version of R's \code{lapply()} function.
#' \strong{Proof of concept - for testing purposes only}
#'
#' @param X A vector or list. Any object that can be coerced by \code{as.list}.
#'
#' @param FUN The function to be applied to each element of X.
#'
#' @param ... Additional arguments to \code{FUN}.
#'
#' @param progress A logical value or a character string. If TRUE or "bar", a modern-looking
#' progress bar shows the status of the computations. Further options are
#' "batch" and "simple" which print simpler progress bars.
#'
#' @param title The printed title of the progress bar.
#'
#' @param memo A logical value. If TRUE, \code{FUN} is transformed to memorize its
#' output per distinct input arguments in an internal cache. If
#' \code{FUN} is applied to the same input again, it returns the corresponding
#' cached pre-computed output (see Details).
#'
#' @param resume A logical value. If TRUE, the function records the output per iteration
#' in the folder "tmp". If you re-run the function, it will resume the computations.
#'
#' @param eta A logical value. If TRUE, the estimated time (and date) of availability (ETA) is printed
#'
#' @param time A logical value. If TRUE, the estimated and the elapsed time of the progress are printed.
#'
#' @param threads An integer. The number of threads, i.e. parallel processes, to employ.
#' Caution: At the moment, only Unix based systems are supported for threads > 1
#' (see Details).
#'
#' @param sameSeed A logical value. If TRUE, the same seed is set before each iteration.
#'
#' @param stopOnError A logical value. If TRUE, the execution stops immediately in case of an error. Otherwise,
#' an error object in place for the failed iteration is returned.
#'
#' @param seed An integer. Sets a random seed in the beginning.
#'
#' @param simplify A logical value. If TRUE, the resulting list will be transformed (simplified)
#' by \code{simplify2array()}.
#'
#'
#' @details A uniform, clear and clean way of making computations reproducible. It does not
#'          matter whether the computations are performed serially or in parallel.
#'
#' \strong{Caution:}
#' \enumerate{
#' \item Currently, non-unix systems are restricted to threads = 1 because they do not support
#' R's parallel mechanism (forking).
#' \item Use Option memo = TRUE ONLY for functions that do NOT DRAW RANDOM NUMBERS. Otherwise,
#' the result will be like the functions always proceeds with the same random draw.
#' }
#'
#' @return A list with the results of applying \code{FUN} to each element of
#'         \code{X} (simplified by \code{simplify2array()} if \code{simplify = TRUE}).
#'
#' @seealso \code{\link{lapply}}, \code{\link{sapply}}, \code{\link{mclapply}}
#' and \code{\link{simplify2array}}.
#'
plapply <- function(X, FUN, ...,
progress = FALSE,
title = "Progress",
memo = FALSE,
resume = FALSE,
eta = FALSE,
time = FALSE,
threads = 1,
sameSeed = FALSE,
stopOnError = TRUE,
seed = NULL,
simplify = FALSE) {
# obtaining global values for missing arguments
opts <- getOption("collateralopts")
if (missing(progress)) {
progress <- opts$progress
}
if (missing(title)) {
title <- opts$title
}
if (missing(memo)) {
memo <- opts$memo
}
if (missing(resume)) {
resume <- opts$resume
}
if (missing(eta)) {
eta <- opts$eta
}
if (missing(time)) {
time <- opts$time
}
if (missing(threads)) {
threads <- opts$threads
}
if (missing(sameSeed)) {
sameSeed <- opts$sameSeed
}
if (missing(stopOnError)) {
stopOnError <- opts$stopOnError
}
# determine random seed
if (!is.null(seed)) {
if (!is.numeric(seed)) {
stop("Parameter 'seed' (used for set.seed()) must be an integer")
}
set.seed(seed)
}
preRandom.seed <- .GlobalEnv$.Random.seed
if (is.null(preRandom.seed)) {
stop("No random seed is set. Please provide parameter 'seed' or ",
"use 'set.seed()'")
}
seed <- preRandom.seed[length(preRandom.seed)]
# check length
n <- length(X)
if (n == 0) {
# robust seed
set.seed(seed - 1)
return(list())
}
if (!is.list(X)) {
X <- as.list(X)
}
FUN <- FUNO <- match.fun(FUN)
# option: record and resume
if (isTRUE(resume)) {
rec <- Recorder$new(FUNO, X, seed)
} else {
rec <- EmptyRecorder$new(FUNO, X, seed)
}
stored <- rec$resume()
Y <- stored$Y
open <- which(!stored$done)
done <- n - length(open)
# option: initialize progress
if (is.character(progress)) {
pb <- Progress$new(value = done / n,
title = title,
type = progress,
eta = eta,
time = time)
} else {
if (is.logical(progress) && progress) {
pb <- Progress$new(value = done / n,
title = title,
type = "bar",
eta = eta,
time = time)
} else {
pb <- EmptyProgress$new()
}
}
# check for work
if (n == done) {
# robust seed
set.seed(seed - 1)
# option: simplify
if (isTRUE(simplify)) {
return(simplify2array(Y))
}
return(Y)
}
# option: same seed
if (isTRUE(sameSeed)) {
newseed <- 0
} else {
newseed <- 1
}
# forced restrictions on parameters
# parallelization for unix only
if (.Platform$OS.type == "unix") {
# restrict child process to one worker
if (parallel:::isChild()) {
progress <- FALSE
threads <- 1
}
} else {
threads <- 1
}
# option: memorize function
if (isTRUE(memo)) {
if (!is.memoised(FUN)) {
FUN <- memoise(FUN)
}
}
# check for parallelization
if (threads > 1 && length(open) > 1) {
free <- threads
jobs <- list()
cleanup <- function() {
if (length(jobs) > 0) {
job_pids <- lapply(jobs, "[[", "pid")
mccollect(parallel:::children(job_pids), FALSE)
parallel:::mckill(parallel:::children(job_pids), tools::SIGTERM)
mccollect(parallel:::children(job_pids))
}
}
on.exit(cleanup())
repeat {
while (free > 0 && length(open) > 0) {
i <- open[1]
jobs <- c(jobs, list(runParallel(i, FUN, X[[i]], seed, newseed, ...)))
free <- free - 1
open <- open[-1]
}
res <- pollResults(jobs, stopOnError = stopOnError)
if (length(res$id) > 0) {
Y[res$id] <- res$val
m <- length(res$id)
for (j in 1:m) {
rec$record(res$id[j], res$val[[j]])
}
done <- done + m
pb$set(done / n)
if (done == n) {
break
}
jobs[res$jkill] <- NULL
free <- free + m
}
# relieve processor
Sys.sleep(0.1)
}
} else {
# option: stop on error
if (isTRUE(stopOnError)) {
FUNI <- FUN
} else {
FUNI <- function(x, ...) {return(try(FUN(x, ...), silent = TRUE))}
}
for (i in open) {
set.seed(seed + newseed * i)
y <- FUNI(X[[i]], ...)
if (!is.null(y)) {
Y[[i]] <- y
}
rec$record(i, y)
done <- done + 1
pb$set(done / n)
}
}
# robust seed
set.seed(seed - 1)
# option: simplify
if (isTRUE(simplify)) {
return(simplify2array(Y))
}
return(Y)
}
# run function with a list of inputs parallely
runParallel <- function(id, FUN, x, seed, newseed, ...) {
parFun <- function() {
# set seed
set.seed(seed + newseed * id)
# iteratively apply function to list
return(FUN(x, ...))
}
return(list(id = id,
pid = mcparallel(expression(parFun()), mc.set.seed = FALSE,
silent = TRUE, mc.affinity = NULL,
mc.interactive = FALSE, detached = FALSE)))
}
# poll processes for collecting results
pollResults <- function(jobs, stopOnError) {
ids <- numeric()
jkills <- numeric()
vals <- list()
for (i in seq_along(jobs)) {
res <- mccollect(jobs[[i]]$pid, wait = FALSE)
# check result availability
if (!is.null(res)) {
# signal the job as terminating
mccollect(jobs[[i]]$pid, wait = TRUE)
# get single result
res <- res[[1]]
# check for error
if (isTRUE(stopOnError) && inherits(res, "try-error")) {
stop(sprintf("Encountered error while computing iteration %i: %s",
jobs[[i]]$id, trimws(as.character(res))))
}
# save to results list
ids <- c(ids, jobs[[i]]$id)
jkills <- c(jkills, i)
vals <- c(vals, list(res))
}
}
return(list(id = ids, jkill = jkills, val = vals))
}
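The reproducibility guarantee in plapply() comes from seeding each iteration with seed + i (the newseed = 1 case above), so every task's random stream depends only on its index, not on scheduling. A minimal base-R illustration of that scheme:

```r
seed <- 42
run_iteration <- function(i) {
  set.seed(seed + i)  # per-iteration seed, as plapply() does
  rnorm(1)
}

serial   <- sapply(1:5, run_iteration)
shuffled <- sapply(5:1, run_iteration)[5:1]  # any execution order
identical(serial, shuffled)  # TRUE: order of execution does not matter
```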
| /R/plapply.R | permissive | ratmaster/collateral | R | false | false | 9,214 | r |
# restrict child process to one worker
if (parallel:::isChild()) {
progress <- FALSE
threads <- 1
}
} else {
threads <- 1
}
# option: memorize function
if (isTRUE(memo)) {
if (!is.memoised(FUN)) {
FUN <- memoise(FUN)
}
}
# check for parallelization
if (threads > 1 && length(open) > 1) {
free <- threads
jobs <- list()
cleanup <- function() {
if (length(jobs) > 0) {
job_pids <- lapply(jobs, "[[", "pid")
mccollect(parallel:::children(job_pids), FALSE)
parallel:::mckill(parallel:::children(job_pids), SIGTERM)
mccollect(parallel:::children(job_pids))
}
}
on.exit(cleanup())
repeat {
while (free > 0 && length(open) > 0) {
i <- open[1]
jobs <- c(jobs, list(runParallel(i, FUN, X[[i]], seed, newseed, ...)))
free <- free - 1
open <- open[-1]
}
res <- pollResults(jobs, stopOnError = stopOnError)
if (length(res$id) > 0) {
Y[res$id] <- res$val
m <- length(res$id)
for (j in 1:m) {
rec$record(res$id[j], res$val[[j]])
}
done <- done + m
pb$set(done / n)
if (done == n) {
break
}
jobs[res$jkill] <- NULL
free <- free + m
}
# relieve processor
Sys.sleep(0.1)
}
} else {
# option: stop on error
if (isTRUE(stopOnError)) {
FUNI <- FUN
} else {
FUNI <- function(x, ...) {return(try(FUN(x, ...), silent = TRUE))}
}
for (i in open) {
set.seed(seed + newseed * i)
y <- FUNI(X[[i]], ...)
if (!is.null(y)) {
Y[[i]] <- y
}
rec$record(i, y)
done <- done + 1
pb$set(done / n)
}
}
# robust seed
set.seed(seed - 1)
# option: simplify
if (isTRUE(simplify)) {
return(simplify2array(Y))
}
return(Y)
}
# run function with a list of inputs parallely
runParallel <- function(id, FUN, x, seed, newseed, ...) {
parFun <- function() {
# set seed
set.seed(seed + newseed * id)
# iteratively apply function to list
return(FUN(x, ...))
}
return(list(id = id,
pid = mcparallel(expression(parFun()), mc.set.seed = FALSE,
silent = TRUE, mc.affinity = NULL,
mc.interactive = FALSE, detached = FALSE)))
}
# poll processes for collecting results
pollResults <- function(jobs, stopOnError) {
ids <- numeric()
jkills <- numeric()
vals <- list()
for (i in seq_along(jobs)) {
res <- mccollect(jobs[[i]]$pid, wait = FALSE)
# check result availability
if (!is.null(res)) {
# signal the job as terminating
mccollect(jobs[[i]]$pid, wait = TRUE)
# get single result
res <- res[[1]]
# check for error
if (isTRUE(stopOnError) && inherits(res, "try-error")) {
stop(sprintf("Encountered error while computing iteration %i: %s",
jobs[[i]]$id, trimws(as.character(res))))
}
# save to results list
ids <- c(ids, jobs[[i]]$id)
jkills <- c(jkills, i)
vals <- c(vals, list(res))
}
}
return(list(id = ids, jkill = jkills, val = vals))
}
|
# Question 3:
# Of the four types of sources indicated by the type (point, nonpoint, onroad,
# nonroad) variable, which of these four sources have seen decreases in
# emissions from 1999-2008 for Baltimore City? Which have seen increases in
# emissions from 1999-2008? Use the ggplot2 plotting system to make a plot
# to answer this question.
library(plyr)     # provides ddply()
library(ggplot2)  # provides qplot()
# assumes BaltimoreCity has already been subset from the NEI emissions data
typePM25ByYear <- ddply(BaltimoreCity, .(year, type), function(x) sum(x$Emissions))
colnames(typePM25ByYear)[3] <- "Emissions"
qplot(year, Emissions, data = typePM25ByYear, color = type, geom = "line") +
  ggtitle(expression("Baltimore City" ~ PM[2.5] ~ "Emissions by Source Type and Year")) +
  xlab("Year") + ylab(expression("Total" ~ PM[2.5] ~ "Emissions (tons)"))
dev.copy(png, "plot3PM25.png")
dev.off()
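# The ddply() aggregation above could equivalently be done in base R, avoiding
# the plyr dependency (illustrative alternative):
# typePM25ByYear <- aggregate(Emissions ~ year + type, data = BaltimoreCity, FUN = sum)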
| /plot3PM25.R | no_license | AisuluOmar/Exploratory-Data-Analysis | R | false | false | 755 | r |
|
#' @title Aggregate abundances based on a factor variable
#' @param x A data set containing columns '\code{agg.var}', '\code{sample}'
#' and '\code{realized}'. The last two are obtained from a calls to
#' \code{agTrendTMB::sample_N} and \code{agTrendTMB::get_real}.
#' @param agg.var Variable used for aggregation
#' @import purrr dplyr
#' @export
#'
agg_abund_mgcv <- function(x, agg.var){
x$surv_times <- map(x$sample, ~{attr(.x, "survey_times")})
results <- x %>% group_by(.data[[agg.var]]) %>% nest() %>%
mutate(
surv_times = map(data, ~{sort(reduce(.x$surv_times, union))}),
sample = map(data, ~{reduce(.x$sample, `+`)})
) %>% mutate(
sample = map2(sample, surv_times, ~{
attr(.x, "survey_times") <- .y
.x
})
)
if("realized"%in%colnames(x)){
results <- results %>%
mutate(realized = map(data, ~ reduce(.x$realized, `+`))) %>%
mutate(
realized = map2(realized, surv_times, ~{
attr(.x, "survey_times") <- .y
.x
})
)
}
results %>% ungroup() %>% select(-data, -surv_times)
}
#' @title Summarize aggregation results
#' @param x An aggregated abundance object from \code{agg_abund}
#' @param ci.prob Confidence (credible) interval value
#' @importFrom coda HPDinterval mcmc
#' @export
#'
summary_agg_mgcv <- function(x, ci.prob=0.9){
tn <- attr(x$sample[[1]], "time.name")
x[[tn]] <- map(x$sample, ~{attr(.x, "survey_times")})
surv_times <- x %>% select(1, .data[[tn]]) %>% unnest(cols=.data[[tn]]) %>%
mutate(survey=1)
x <- select(x, -.data[[tn]])
df <- data.frame(attr(x$sample[[1]], tn)) %>% `colnames<-`(tn)
x$Estimate <- map(x$sample, ~{
out <- data.frame(Estimate=apply(.x, 2, median))
bind_cols(df, out)
})
x$CI <- map(x$sample, ~{
coda::HPDinterval(coda::mcmc(.x), prob=ci.prob) %>% as.data.frame() %>%
`colnames<-`(c("CI_predict_lower", "CI_predict_upper"))
})
x <- x[,-c(which(colnames(x)=="sample"))]
if("realized" %in% colnames(x)){
x$Estimate_real <- map(x$realized, ~{data.frame(Estimate_real=apply(.x, 2, median))})
x$CI_real <- map(x$realized, ~{
coda::HPDinterval(coda::mcmc(.x), prob=ci.prob) %>% as.data.frame() %>%
`colnames<-`(c("CI_real_lower", "CI_real_upper"))
})
x <- x[,-c(which(colnames(x)=="realized"))]
x <- x %>% unnest(cols = c(Estimate, CI, Estimate_real, CI_real))
} else{
x <- x %>% unnest(cols = c(Estimate, CI))
}
x <- x %>% left_join(surv_times, by=colnames(surv_times)[1:2]) %>%
mutate(
survey = ifelse(is.na(survey), 0, survey)
)
return(x)
}
| /R/agg_funcs_mgcv.R | no_license | dsjohnson/agTrendTMB | R | false | false | 2,624 | r |
|
#Install packages if needed, then load them
pckname<-"segmented"
if(require(pckname, character.only = TRUE)){
  print(paste(pckname," is loaded correctly"))
} else {
  print(paste("trying to install ",pckname,"..."))
  install.packages(pckname)
  if(require(pckname, character.only = TRUE)){
    print(paste(pckname,"installed and loaded"))
  } else {
    stop("could not install")
  }
}
pckname<-"ggplot2"
if(require(pckname, character.only = TRUE)){
  print(paste(pckname," is loaded correctly"))
} else {
  print(paste("trying to install ",pckname,"..."))
  install.packages(pckname)
  if(require(pckname, character.only = TRUE)){
    print(paste(pckname,"installed and loaded"))
  } else {
    stop("could not install")
  }
}
#Split the data into 2016 and 2017
data16<-data2[data2$year==2016,]
data16<-data16[order(data16[,4]),]
data17<-data2[data2$year==2017,]
x16<-data16$cover
y16<-data16$count
x17<-data17$cover
y17<-data17$count
xx <-data2$cover
yy <-data2$count
dat <- data.frame(xx, yy)
dat16 <- data.frame(x16, y16)
dat17 <- data.frame(x17, y17)
par(mar = c(5.1, 4.5, 4.1, 2.1))
plot(x16,y16,
     main="Relationship between
     Drifting Sands and Vegetation Coverage",
     xlab="Vegetation Coverage(%)",ylab="Drifting Sands(n)",
     cex.lab = 1.5,   # size of the axis titles
     cex.axis = 1.5,  # size of the axis tick labels
     cex.main = 1.8,  # size of the main title
     col=4,pch=1,
     xlim=c(0,35),ylim=c(0,10000)
)
par(new=T)
plot(x17,y17,
col=2,pch=4,
xlim=c(0,35),ylim=c(0,10000),
ann = F,axes = F
)
legend(locator(1), legend=c("2016","2017"),col=c(4,2),pch=c(1,4))
#piecewise regression
##2016
dati16 <- data.frame(x = x16, y = y16)
out.lm16 <- lm(y ~ x, data = dati16)
o16 <- segmented(out.lm16, seg.Z = ~x
)
dat216 = data.frame(x = x16, y = broken.line(o16)$fit)
##2017
dati17 <- data.frame(x = x17, y = y17)
out.lm17 <- lm(y ~ x, data = dati17)
o17 <- segmented(out.lm17, seg.Z = ~x,
control = seg.control(display = FALSE)
)
dat217 = data.frame(x = x17, y = broken.line(o17)$fit)
##all
dati <- data.frame(x = xx, y = yy)
out.lm <- lm(y ~ x, data = dati)
o <- segmented(out.lm, seg.Z = ~x,
control = seg.control(display = FALSE)
)
dat2 = data.frame(x = xx, y = broken.line(o)$fit)
#Plot
windows()
plot( main="Relationship between Drifting Sands and Vegetation Coverage",
      xlab="Vegetation Coverage(%)",ylab="Drifting Sands(n)",
      cex.lab = 1.5,   # size of the axis titles
      cex.axis = 1.5,  # size of the axis tick labels
      cex.main = 1.8   # size of the main title
)
ggplot(dati16, aes(x = x, y = y)) +
geom_point() +
geom_line(data = dat216, color = 'blue')
par(new=T)
ggplot(dati17, aes(x = x, y = y)) +
geom_point() +
geom_line(data = dat217, color = 'red')
par(new=T)
ggplot(dati, aes(x = x, y = y)) +
geom_point() +
geom_line(data = dat2, color = 'black')
dev.copy(pdf, file = "piecewise.pdf", width = 10, height = 10)
dev.off()
#exponential
resid <- function(par) # the function name is arbitrary; the argument is the parameter vector; x16 and y16 are referenced as global variables
{
  yhat <- par[1]+par[2]*exp(-par[3]*x16)
  sum((y16-yhat)^2) # return the residual sum of squares
}
st<-c(1,1,1)
optim(st, resid)
ea16<-list(a=optim(st, resid)$par[1],
b = optim(st, resid)$par[2],
c = optim(st, resid)$par[3])
e16.model <- nls(y16 ~ a+b*exp(-c*x16), dat16, start=ea16)
resid <- function(par) # the function name is arbitrary; the argument is the parameter vector; x17 and y17 are referenced as global variables
{
  yhat <- par[1]+par[2]*exp(-par[3]*x17)
  sum((y17-yhat)^2) # return the residual sum of squares
}
st<-c(20,20,20)
optim(st, resid)
ea17<-list(a=optim(st, resid)$par[1],
b = optim(st, resid)$par[2],
c = optim(st, resid)$par[3])
#ea17<-list(a=100,b = 100,c = 1)
e17.model <- nls(y17 ~ a+b*exp(-c*x17), dat17, start=ea17)
resid <- function(par) # the function name is arbitrary; the argument is the parameter vector; xx and yy are referenced as global variables
{
  yhat <- par[1]+par[2]*exp(-par[3]*xx)
  sum((yy-yhat)^2) # return the residual sum of squares
}
st<-c(1,1,1)
eax<-list(a=optim(st, resid)$par[1],
b = optim(st, resid)$par[2],
c = optim(st, resid)$par[3])
e.model <- nls(yy ~ a+b*exp(-c*xx), dat, start=eax)
predict.e16 <- predict(e16.model)
predict.e17 <- predict(e17.model)
predict.e <- predict(e.model)
plot(x16, y16, ann=F, xlim=c(0,40), ylim=c(0,10000)); par(new=T)
plot(x16,predict.e16, type="l", xlim=c(0,40), ylim=c(0,10000))
#inverse
i16.model <- nls(y16 ~ a+b/x16, dat16, start=list(a=1, b=1))
i17.model <- nls(y17 ~ a+b/x17, dat17, start=list(a=1, b=1))
i.model <- nls(yy ~ a+b/xx, dat, start=list(a=1, b=1))
#logit
l16.model <- nls(y ~ a-b*exp(-exp(c+d*log(x))), dati16, start=list(a=1, b=1, c=1, d=1))
l17.model <- nls(y ~ a-b*exp(-exp(c+d*log(x))), dati17, start=list(a=1, b=1, c=1, d=1))
l.model <- nls(y ~ a-b*exp(-exp(c+d*log(x))), dati, start=list(a=1, b=1, c=1, d=1))
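#The alternative fits above could be compared by AIC (illustrative; assumes the
#nls fits converged):
#AIC(e16.model, e17.model, e.model)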
| /elements/regression (2).r | no_license | KazuhaM/Resources | R | false | false | 5,018 | r |
## Together, the functions of this file implement
## a special "matrix" which can cache its inverse
## makeCacheMatrix: This function creates a "matrix" object
## which can cache its inverse.
makeCacheMatrix <- function(x = matrix()) {
inv <- NULL
set <- function(y) {
x <<- y
inv <<- NULL
}
get <- function() x
setinverse <- function(inverse) inv <<- inverse
getinverse <- function() inv
list(set = set, get = get,
setinverse = setinverse,
getinverse = getinverse)
}
## cacheSolve: This function computes the inverse of
## the special "matrix" returned by makeCacheMatrix.
## If the "matrix" has its inverse cached,
## cacheSolve will use that instead.
cacheSolve <- function(x, ...) {
inv <- x$getinverse()
if(!is.null(inv)) {
message("getting cached data")
return(inv)
}
data <- x$get()
        inv <- solve(data, ...)
x$setinverse(inv)
inv
}
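## Example usage (illustrative; any invertible matrix works):
## m <- makeCacheMatrix(matrix(c(2, 0, 0, 2), 2, 2))
## cacheSolve(m)  # computes the inverse and caches it
## cacheSolve(m)  # prints "getting cached data" and returns the cached inverse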
| /cachematrix.R | no_license | pmofjeld/ProgrammingAssignment2 | R | false | false | 895 | r |
|
library(openxlsx)
##Population##
pop <- read.xlsx("14100DS0002_2017-03.xlsx",
sheet ="Population and People _LGA_1540",
startRow = 5)
pop[pop == "-"]<- NA
#Percentage Age distribution of Population
percent_age_dist <- subset(pop, select = c(2:12))
colnames(percent_age_dist) = percent_age_dist[2, ]
percent_age_dist <- percent_age_dist[-c(1:3),]
#Age distribution of population
age_dist <- subset(pop, select = c(2,3,51:69))
colnames(age_dist) = age_dist[2,]
age_dist <- age_dist[-c(1:3),]
#Age distribution of males
age_dist_m <- subset(pop, select = c(2,3,13:31))
colnames(age_dist_m) = age_dist_m[2,]
age_dist_m <- age_dist_m[-c(1:3),]
#Age distribution of females
age_dist_f <- subset(pop, select = c(2,3,32:50))
colnames(age_dist_f) = age_dist_f[2,]
age_dist_f <- age_dist_f[-c(1:3),]
#Percentage of working age population per area
work_age <- subset(pop, select = c(2,3,70))
colnames(work_age) = work_age[2,]
work_age <- work_age[-c(1:3),]
##Median Age##
#Median age per area
med_age <- subset(pop, select = c(2,3,71:73))
colnames(med_age) = med_age[2,]
med_age <- med_age[-c(1:3),]
##Births and Deaths##
#Births per area
birth <- subset(pop, select =c(2,3,74))
colnames(birth) = birth[1,]
birth <- birth[-c(1:3),]
#Fertility per female
fertility <- subset(pop, select =c(2,3,75))
colnames(fertility) = fertility[1,]
fertility <- fertility[-c(1:3),]
#Deaths per area
death <- subset(pop, select =c(2,3,76))
colnames(death) = death[1,]
death <- death[-c(1:3),]
#Standardised death per area
death_st <- subset(pop, select =c(2,3,77))
colnames(death_st) = death_st[1,]
death_st <- death_st[-c(1:3),]
##Population Desity##
pop_den <- subset (pop, select = c(2,3,78))
colnames(pop_den) = pop_den[3,]
pop_den <- pop_den[-c(1:3),]
#Percentage of Aboriginal people per area
perc_ind <- subset(pop, select =c(2,3,79))
colnames(perc_ind) = perc_ind[1,]
perc_ind <- perc_ind[-c(1:3),]
#Overseas Population
overseas_pop <- subset(pop, select =c(2,3,80:89), X81 != "-")
colnames(overseas_pop) = overseas_pop[1,]
overseas_pop <- overseas_pop[-c(1:3),]
##Migration##
#Arrivals
arrivals <- subset(pop, select =c(2,3,90))
colnames(arrivals) = arrivals[1,]
arrivals <- arrivals[-c(1:3),]
#Departures
departures <- subset(pop, select =c(2,3,91))
colnames(departures) = departures[1,]
departures <- departures[-c(1:3),]
#Net arrivals
net <- subset(pop, select =c(2,3,92))
colnames(net) = net[1,]
net <- net[-c(1:3),]
##Industry and Sectors##
ind <- read.xlsx("14100DS0004_2017-03.xlsx",
sheet = "Economy and Industry _LGA_15424",
startRow = 6)
ind[ind == "-"] <- NA
#Businesses with number of employees
num_emp <- subset(ind, select = c(2,3,4:8))
colnames(num_emp) = num_emp[1,]
num_emp <- num_emp[-c(1:2),]
#Business entries
bus_ent <- subset(ind, select = c(2,3,9:13))
colnames(bus_ent) = bus_ent[1,]
bus_ent <- bus_ent[-c(1:2),]
#Business exits
bus_exit <- subset(ind, select = c(2,3,14:18))
colnames(bus_exit) = bus_exit[1,]
bus_exit <- bus_exit[-c(1:2),]
#Businesses per industry
bus_ind <- subset(ind, select = c(2,3,19:39))
colnames(bus_ind) = bus_ind[1,]
bus_ind <- bus_ind[-c(1:2),]
#Building approvals
bud_app <- subset(ind, select = c(2,3,40:49))
colnames(bud_app) = bud_app[1,]
bud_app <- bud_app[-c(1:2),]
#Bankrupts
bankrupts <- subset(ind, select = c(2,3,54:56))
colnames(bankrupts) = bankrupts[1,]
bankrupts <- bankrupts[-c(1:2),]
#Percentage of workforce per industry
work_ind <- subset(ind, select = c(2,3,57:76))
colnames(work_ind) = work_ind[1,]
work_ind <- work_ind[-c(1:2),]
work_ind <- work_ind[work_ind$Manufacturing != "-",]
#Type of vehicle
vec_type <- subset(ind, select =c(2,3,77:86))
colnames(vec_type) = vec_type[1,]
vec_type <- vec_type[-c(1:2),]
#Vehicle fuel type
vec_fuel <- subset(ind, select = c(2,3,87:90))
colnames(vec_fuel) = vec_fuel[1,]
vec_fuel <- vec_fuel[-c(1:2),]
#Vehicle years since manufacture
vec_manu <- subset(ind, select = c(2,3,91:93))
colnames(vec_manu) = vec_manu[1,]
vec_manu <- vec_manu[-c(1:2),]
#Income Employment and Education
iee <- read.xlsx("14100DS0006_2017-03.xlsx",
sheet ="Income_Educ and Emp_Health_LGA_",
startRow = 6)
iee[iee == "-"] <- NA
##Estimates of personal income
epi <- subset(iee, select = c(2:21), X5 != "-")
colnames(epi) = epi[1,]
epi <- epi[-c(1:2),]
##Income##
#Number, median and total employee income
employee_income <- subset(epi, select = c(1,2,3:5))
#Number, median and total self-employed income
own_income <- subset(epi, select = c(1,2,6:8))
#Number, median and total investment income
invest_income <- subset(epi, select = c(1,2,9:11))
#Number, median and total super income
super_income <- subset(epi, select = c(1,2,12:14))
#Number, median and total other income
other_income <- subset(epi, select = c(1,2,15:17))
#Number, median and total total income
total_income <- subset(epi, select = c(1,2,18:20))
##Government Pensions and Allowances
gpa <- subset(iee, select = c(2,3,22:37))
colnames(gpa) = gpa[1,]
gpa <- gpa[-c(1:2),]
##Education
edu <- subset(iee, select = c(2,3,37:42), X37 != "-")
colnames(edu) = edu[1,]
edu <- edu[-c(1:2),]
##Occupation
ocu <- subset(iee, select = c(2,3,43:51), X44 != "-")
colnames(ocu) = ocu[1,]
ocu <- ocu[-c(1:2),]
##Youth Engagement with workforce
youth <- subset(iee, select = c(2,3,52:59), X53 != "-")
colnames(youth) = youth[1,]
youth <- youth[-c(1:2),]
##Labour force
lbf <- subset(iee, select = c(2,3,60:63), X61 != "-")
colnames(lbf) = lbf[1,]
lbf <- lbf[-c(1:2),]
#Family and Community, Land and Environment
fcle <- read.xlsx("14100DS0008_2017-03.xlsx",
sheet = "Family_Land_LGA_1546198",
startRow = 6)
fcle[fcle == "-"] <- NA
#Family and Community
fc <- subset(fcle, select = c(2,3,4:53), X7 != "-")
colnames(fc) = fc[1,]
fc <- fc[-c(1:2),]
#English proficiency
eng_prof <- subset(fc, select = c(1,2,3:7))
#Citizenship
citizen <- subset(fc, select = c(1,2,8:10))
#Transport to and from work
transport <- subset(fc, select = c(1,2,11:22))
#Total employed
t_employ <- subset(fc, select = c(1,2,23))
#Household size
#Lone, Group, Family, Total, Average
house_size <- subset(fc, select = c(1,2,24:28))
#Family size
fam_size <- subset(fc, select = c(1,2,29:36))
#Voluntary work / care
vol <- subset(fc, select = c(1,2,37:41))
#Internet connection
int <- subset(fc, select = c(1,2,42:45))
#Average rental and mortgage payments
avg_rent_mort <- subset(fc, select = c(1,2,46:47))
#SEIFA Decile Ranking
seifa <- subset(fc, select = c(1,2,48:51))
#Land and environment
le <- subset(fcle, select = c(2,3,53:70), LAND.AREA != "-")
colnames(le) = fcle[1,c(2,3,53:70)]
le <- le[-c(1),]
#Land
land <- subset(le, select = c(1,2,3:18))
#Solar Panel
solar <- subset(le, select = c(1,2,19:20)) | /Code/Import.R | no_license | vincentruan1/ACTL1101 | R | false | false | 6,786 | r | library(openxlsx)
##Population##
pop <- read.xlsx("14100DS0002_2017-03.xlsx",
sheet ="Population and People _LGA_1540",
startRow = 5)
pop[pop == "-"]<- NA
#Percentage Age distribution of Population
percent_age_dist <- subset(pop, select = c(2:12))
colnames(percent_age_dist) = percent_age_dist[2, ]
percent_age_dist <- percent_age_dist[-c(1:3),]
#Age distribution of population
age_dist <- subset(pop, select = c(2,3,51:69))
colnames(age_dist) = age_dist[2,]
age_dist <- age_dist[-c(1:3),]
#Age distribution of males
age_dist_m <- subset(pop, select = c(2,3,13:31))
colnames(age_dist_m) = age_dist_m[2,]
age_dist_m <- age_dist_m[-c(1:3),]
#Age distribution of females
age_dist_f <- subset(pop, select = c(2,3,32:50))
colnames(age_dist_f) = age_dist_f[2,]
age_dist_f <- age_dist_f[-c(1:3),]
#Percentage of working age population per area
work_age <- subset(pop, select = c(2,3,70))
colnames(work_age) = work_age[2,]
work_age <- work_age[-c(1:3),]
##Median Age##
#Median age per area
med_age <- subset(pop, select = c(2,3,71:73))
colnames(med_age) = med_age[2,]
med_age <- med_age[-c(1:3),]
##Births and Deaths##
#Births per area
birth <- subset(pop, select =c(2,3,74))
colnames(birth) = birth[1,]
birth <- birth[-c(1:3),]
#Fertility per female
fertility <- subset(pop, select =c(2,3,75))
colnames(fertility) = fertility[1,]
fertility <- fertility[-c(1:3),]
#Deaths per area
death <- subset(pop, select =c(2,3,76))
colnames(death) = death[1,]
death <- death[-c(1:3),]
#Standardised death per area
death_st <- subset(pop, select =c(2,3,77))
colnames(death_st) = death_st[1,]
death_st <- death_st[-c(1:3),]
##Population Desity##
pop_den <- subset (pop, select = c(2,3,78))
colnames(pop_den) = pop_den[3,]
pop_den <- pop_den[-c(1:3),]
#Percentage of Aboriginal people per area
perc_ind <- subset(pop, select =c(2,3,79))
colnames(perc_ind) = perc_ind[1,]
perc_ind <- perc_ind[-c(1:3),]
#Overseas Population
overseas_pop <- subset(pop, select =c(2,3,80:89), X81 != "-")
colnames(overseas_pop) = overseas_pop[1,]
overseas_pop <- overseas_pop[-c(1:3),]
##Migration##
#Arrivals
arrivals <- subset(pop, select =c(2,3,90))
colnames(arrivals) = arrivals[1,]
arrivals <- arrivals[-c(1:3),]
#Departures
departures <- subset(pop, select =c(2,3,91))
colnames(departures) = departures[1,]
departures <- departures[-c(1:3),]
#Net arrivals
net <- subset(pop, select =c(2,3,92))
colnames(net) = net[1,]
net <- net[-c(1:3),]
##Industry and Sectors##
ind <- read.xlsx("14100DS0004_2017-03.xlsx",
sheet = "Economy and Industry _LGA_15424",
startRow = 6)
ind[ind == "-"] <- NA
#Businesses with number of employees
num_emp <- subset(ind, select = c(2,3,4:8))
colnames(num_emp) = num_emp[1,]
num_emp <- num_emp[-c(1:2),]
#Business entries
bus_ent <- subset(ind, select = c(2,3,9:13))
colnames(bus_ent) = bus_ent[1,]
bus_ent <- bus_ent[-c(1:2),]
#Business exits
bus_exit <- subset(ind, select = c(2,3,14:18))
colnames(bus_exit) = bus_exit[1,]
bus_exit <- bus_exit[-c(1:2),]
#Businesses per industry
bus_ind <- subset(ind, select = c(2,3,19:39))
colnames(bus_ind) = bus_ind[1,]
bus_ind <- bus_ind[-c(1:2),]
#Building approvals
bud_app <- subset(ind, select = c(2,3,40:49))
colnames(bud_app) = bud_app[1,]
bud_app <- bud_app[-c(1:2),]
#Bankrupts
bankrupts <- subset(ind, select = c(2,3,54:56))
colnames(bankrupts) = bankrupts[1,]
bankrupts <- bankrupts[-c(1:2),]
#Percentage of workforce per industry
work_ind <- subset(ind, select = c(2,3,57:76))
colnames(work_ind) = work_ind[1,]
work_ind <- work_ind[-c(1:2),]
work_ind <- work_ind[work_ind$Manufacturing != "-",]
#Type of vehicle
vec_type <- subset(ind, select =c(2,3,77:86))
colnames(vec_type) = vec_type[1,]
vec_type <- vec_type[-c(1:2),]
#Vehicle fuel type
vec_fuel <- subset(ind, select = c(2,3,87:90))
colnames(vec_fuel) = vec_fuel[1,]
vec_fuel <- vec_fuel[-c(1:2),]
#Vehicle years since manufacture
vec_manu <- subset(ind, select = c(2,3,91:93))
colnames(vec_manu) = vec_manu[1,]
vec_manu <- vec_manu[-c(1:2),]
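#-- The subset/colnames/drop-rows pattern above repeats for every block.
#-- A small helper (hypothetical; not part of the original script) could
#-- express the same three steps once. Sketch only, assuming the same inputs:
extract_block <- function(df, cols, drop_rows = 1:2) {
  out <- subset(df, select = cols)   # keep the id columns plus the block
  colnames(out) <- out[1, ]          # promote the first row to column headers
  out[-drop_rows, ]                  # drop the header/junk rows
}
# e.g. vec_manu <- extract_block(ind, c(2, 3, 91:93))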
#Income Employment and Education
iee <- read.xlsx("14100DS0006_2017-03.xlsx",
sheet ="Income_Educ and Emp_Health_LGA_",
startRow = 6)
iee[iee == "-"] <- NA
##Estimates of personal income
epi <- subset(iee, select = c(2:21), X5 != "-")
colnames(epi) = epi[1,]
epi <- epi[-c(1:2),]
##Income##
#Number, median and total employee income
employee_income <- subset(epi, select = c(1,2,3:5))
#Number, median and total self-employed income
own_income <- subset(epi, select = c(1,2,6:8))
#Number, median and total investment income
invest_income <- subset(epi, select = c(1,2,9:11))
#Number, median and total super income
super_income <- subset(epi, select = c(1,2,12:14))
#Number, median and total other income
other_income <- subset(epi, select = c(1,2,15:17))
#Number, median and total of total income
total_income <- subset(epi, select = c(1,2,18:20))
##Government Pensions and Allowances
gpa <- subset(iee, select = c(2,3,22:37))
colnames(gpa) = gpa[1,]
gpa <- gpa[-c(1:2),]
##Education
edu <- subset(iee, select = c(2,3,37:42), X37 != "-")
colnames(edu) = edu[1,]
edu <- edu[-c(1:2),]
##Occupation
ocu <- subset(iee, select = c(2,3,43:51), X44 != "-")
colnames(ocu) = ocu[1,]
ocu <- ocu[-c(1:2),]
##Youth Engagement with workforce
youth <- subset(iee, select = c(2,3,52:59), X53 != "-")
colnames(youth) = youth[1,]
youth <- youth[-c(1:2),]
##Labour force
lbf <- subset(iee, select = c(2,3,60:63), X61 != "-")
colnames(lbf) = lbf[1,]
lbf <- lbf[-c(1:2),]
#Family and Community, Land and Environment
fcle <- read.xlsx("14100DS0008_2017-03.xlsx",
sheet = "Family_Land_LGA_1546198",
startRow = 6)
fcle[fcle == "-"] <- NA
#Family and Community
fc <- subset(fcle, select = c(2,3,4:53), X7 != "-")
colnames(fc) = fc[1,]
fc <- fc[-c(1:2),]
#English proficiency
eng_prof <- subset(fc, select = c(1,2,3:7))
#Citizenship
citizen <- subset(fc, select = c(1,2,8:10))
#Transport to and from work
transport <- subset(fc, select = c(1,2,11:22))
#Total employed
t_employ <- subset(fc, select = c(1,2,23))
#Household size
#Lone, Group, Family, Total, Average
house_size <- subset(fc, select = c(1,2,24:28))
#Family size
fam_size <- subset(fc, select = c(1,2,29:36))
#Voluntary work / care
vol <- subset(fc, select = c(1,2,37:41))
#Internet connection
int <- subset(fc, select = c(1,2,42:45))
#Average rental and mortgage payments
avg_rent_mort <- subset(fc, select = c(1,2,46:47))
#SEIFA Decile Ranking
seifa <- subset(fc, select = c(1,2,48:51))
#Land and environment
le <- subset(fcle, select = c(2,3,53:70), LAND.AREA != "-")
colnames(le) = fcle[1,c(2,3,53:70)]
le <- le[-c(1),]
#Land
land <- subset(le, select = c(1,2,3:18))
#Solar Panel
solar <- subset(le, select = c(1,2,19:20)) |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Clustergram.R
\name{clustergram.cmeans}
\alias{clustergram.cmeans}
\title{cmeans function for clustergram}
\usage{
clustergram.cmeans(Data, k, method = "cmeans", ...)
}
\arguments{
\item{Data}{A scaled matrix, where each column belongs to a
different dimension of the observations.}
\item{k}{Number of desired groups for the c-means clustering.}
\item{method}{Clustering method for the cmeans function.}
\item{...}{Additional parameters to be passed to the cmeans function.}
}
\value{
A list containing the cluster vector and the centers matrix (see
cmeans function).
}
\description{
cmeans function for clustergram
}
\details{
This is an implementation of Fuzzy c-means clustering (with the
cmeans function of the e1071 package) for the clustergram function. The
return list is internally used by the clustergram to build the clustergram
plot.
}
\examples{
\donttest{
####### Example data:
SyntheticTrial <- SyntheticData(SpeciesNum = 100,
CommunityNum = 3, SpCo = NULL,
Length = 500,
Parameters = list(a=c(40, 80, 50),
b=c(100,250,400),
c=rep(0.03,3)),
dev.c = .015, pal = c("#008585", "#FBF2C4", "#C7522B"))
######## 6 clustergram plots
for (i in 1:6) clustergram(as.matrix(SyntheticTrial[,2:ncol(SyntheticTrial)]),
clustering.function = clustergram.cmeans,
k.range = 2:10, line.width = .2)
}
}
| /man/clustergram.cmeans.Rd | no_license | cran/EcotoneFinder | R | false | true | 1,715 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Clustergram.R
\name{clustergram.cmeans}
\alias{clustergram.cmeans}
\title{cmeans function for clustergram}
\usage{
clustergram.cmeans(Data, k, method = "cmeans", ...)
}
\arguments{
\item{Data}{Should be a scales matrix. Where each column belongs to a
different dimension of the observations.}
\item{k}{Number of desired groups for the c-means clustering.}
\item{method}{Clustering method for the cmeans function.}
\item{...}{Additional parameters to be passed to the cmeans function.}
}
\value{
A list containing the cluster vector and the centers matrix (see
cmeans function).
}
\description{
cmeans function for clustergram
}
\details{
This is an implementation of Fuzzy c-means clustering (with the
cmeans function of the e1071 package) for the clustergram function. The
return list is internally used by the clustergram to build the clustergram
plot.
}
\examples{
\donttest{
####### Example data:
SyntheticTrial <- SyntheticData(SpeciesNum = 100,
CommunityNum = 3, SpCo = NULL,
Length = 500,
Parameters = list(a=c(40, 80, 50),
b=c(100,250,400),
c=rep(0.03,3)),
dev.c = .015, pal = c("#008585", "#FBF2C4", "#C7522B"))
######## 6 clustergram plots
for (i in 1:6) clustergram(as.matrix(SyntheticTrial[,2:ncol(SyntheticTrial)]),
clustering.function = clustergram.cmeans,
k.range = 2:10, line.width = .2)
}
}
|
a <- c(1:10)
a | /teste2.R | permissive | sypryani/javascript | R | false | false | 14 | r |
## trim the trajectory
.packageName <- 'mousetrack'
trim0 <- function(x, y, thresh){
o1 = x[1]
o2 = y[1]
dists = sqrt((x-o1)^2 + (y-o2)^2)
latindx = which( dists > thresh)[1] # latency index
trimmed1 = x[latindx:length(x)];
trimmed2 = y[latindx:length(y)];
return (list(latindx = latindx, xtrim = trimmed1, ytrim = trimmed2) )
}
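## Hedged usage sketch (not in the original file): trim a toy trajectory
## whose first points sit within 'thresh' of the starting point.
# x <- c(0, 0.1, 0.2, 2, 4, 6)
# y <- c(0, 0.0, 0.1, 1, 2, 3)
# res <- trim0(x, y, thresh = 0.5)
# res$latindx   # index of the first point farther than 0.5 from (x[1], y[1])
# res$xtrim; res$ytrim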
| /mousetrack/R/trim0.R | no_license | ingted/R-Examples | R | false | false | 352 | r |
test_that("tcwarn works as expected", {
expect_warning( tcwarn({ NULL = 1 },'Cannot assign to NULL','variable') , 'Cannot assign to NULL variable' )
expect_warning( tcwarn({ as.numeric('abc') },'Issue in as.numeric()') , 'Issue in as.numeric()' )
}) | /tests/testthat/test_tcwarn.R | no_license | cran/easyr | R | false | false | 263 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/generated_client.R
\name{clusters_list_kubernetes_partitions}
\alias{clusters_list_kubernetes_partitions}
\title{List Cluster Partitions for given cluster}
\usage{
clusters_list_kubernetes_partitions(id)
}
\arguments{
\item{id}{integer required. The ID of this cluster.}
}
\value{
An array containing the following fields:
\item{clusterPartitionId}{integer, The ID of this cluster partition.}
\item{name}{string, The name of the cluster partition.}
\item{labels}{array, Labels associated with this partition.}
\item{instanceConfigs}{array, An array containing the following fields:
\itemize{
\item instanceConfigId integer, The ID of this InstanceConfig.
\item instanceType string, An EC2 instance type. Possible values include t2.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m5.12xlarge, and p2.xlarge.
\item minInstances integer, The minimum number of instances of that type in this cluster.
\item maxInstances integer, The maximum number of instances of that type in this cluster.
\item instanceMaxMemory integer, The amount of memory (RAM) available to a single instance of that type in megabytes.
\item instanceMaxCpu integer, The number of processor shares available to a single instance of that type in millicores.
\item instanceMaxDisk integer, The amount of disk available to a single instance of that type in gigabytes.
}}
\item{defaultInstanceConfigId}{integer, The id of the InstanceConfig that is the default for this partition.}
}
\description{
List Cluster Partitions for given cluster
}
| /man/clusters_list_kubernetes_partitions.Rd | no_license | elsander/civis-r | R | false | true | 1,579 | rd |
# apply all classifier and prediction models from genefu to dataset
library(tidyverse)
# quantile normed datasets
repo <- "~/R/metaGx/data/mgxSet/"
files <- list.files(paste0(repo, "rlNorm/"))
rl <- map(paste0(repo, "rlNorm/", files), read_csv)
names(rl) <- str_extract(files, "[^_]*")
files <- list.files(paste0(repo, "rlNormNoBatch/"))
rlNoBatch <- map(paste0(repo, "rlNormNoBatch/", files), read_csv)
names(rlNoBatch) <- str_extract(files, "[^_]*")
files <- list.files(paste0(repo, "rlNormBatched/"))
rlBatched <- map(paste0(repo, "rlNormBatched/", files), read_csv)
names(rlBatched) <- c("DFHCC2_CISPLATIN", "DFHCC2_REFERENCE", str_extract(files[3:25], "[^_]*"))
# convert to matrices
rl <- map(rl, column_to_rownames, "sample_name") %>%
map(as.matrix)
rlNoBatch <- map(rlNoBatch, column_to_rownames, "sample_name") %>%
map(as.matrix)
rlBatched <- map(rlBatched, column_to_rownames, "sample_name") %>%
map(as.matrix)
# apply molecular.subtyping x 9 to batched & nonbatched data sets
# make study _. list of cluster assignments function
clusterModels <- c("scmgene", "scmod1", "scmod2", "pam50", "ssp2006", "ssp2003", "intClust", "AIMS","claudinLow")
# get gene map
fmap <- read_tsv("data/metaGxBreast/hgncMap17jun20.tsv") %>%
select(Gene.Symbol = `Approved symbol`, EntrezGene.ID = `NCBI Gene ID`)
fmap <- as.matrix(fmap)
rownames(fmap) <- fmap[, 1]
# construct function from expression matrix -> tibble of subtypes
library(genefu)
molSub <- function(x, y) partial(molecular.subtyping, sbt.model = x, data = y)
expMat2subtype <- function(mod, exp_m, annot_m, ...) enframe(molSub(mod, exp_m)(annot_m, ...)[[1]],
name = "sample", value = mod)
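# Minimal sketch of how the two helpers compose (hypothetical study name;
# not part of the original script):
# sub_fun <- molSub("pam50", rl[["SOME_STUDY"]])  # partial(): fixes model + data
# labels  <- sub_fun(fmap)[[1]]                   # named factor of subtype calls
# enframe(labels, name = "sample", value = "pam50")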
# apply pam50 clustering
pam50 <- vector("list", length = length(c(rl, rlNoBatch))) %>%
set_names(names(c(rl, rlNoBatch)))
for(m in names(c(rl, rlNoBatch))) {
pam50[[m]] <- try(expMat2subtype("pam50", c(rl, rlNoBatch)[[m]], fmap))
}
pam50$KOO
#... "no probe in common -> annot or mapping parameters are necessary for the mapping process!"
# drop the failed KOO element (position 16)
pam50 <- bind_rows(pam50[- 16])
pam50batch <- vector("list", length = length(c(rl, rlBatched))) %>%
set_names(names(c(rl, rlBatched)))
for(m in names(c(rl, rlBatched))) {
pam50batch[[m]] <- try(expMat2subtype("pam50", c(rl, rlBatched)[[m]], fmap))
}
pam50batch <- bind_rows(pam50batch[- 16])
pam50labels <- left_join(pam50, rename(pam50batch, pam50_batch = pam50)) %>%
filter(!duplicated(sample)) %>%
rename(sample_name = sample)
write_csv(pam50labels, "data/metaGxBreast/pam50labels.csv")
# update covariance table
pheno <- read_csv("data/metaGxBreast/metaGXcovarTable.csv.xz", guess_max = 10000)
pheno <- left_join(pheno, pam50labels)
write_csv(pheno, "data/metaGxBreast/metaGXcovarTable.csv.xz")
# apply scmgene model
scmgene <- vector("list", length = length(c(rl, rlNoBatch))) %>%
set_names(names(c(rl, rlNoBatch)))
for(m in names(c(rl, rlNoBatch))) {
scmgene[[m]] <- try(expMat2subtype("scmgene", c(rl, rlNoBatch)[[m]], fmap))
}
scmgene <- bind_rows(scmgene)
scmgenebatch <- vector("list", length = length(c(rl, rlBatched))) %>%
set_names(names(c(rl, rlBatched)))
for(m in names(c(rl, rlBatched))) {
scmgenebatch[[m]] <- try(expMat2subtype("scmgene", c(rl, rlBatched)[[m]], fmap))
}
scmgenebatch <- bind_rows(scmgenebatch[- 16])
scmgenelabels <- left_join(scmgene, rename(scmgenebatch, scmgene_batch = scmgene)) %>%
filter(!duplicated(sample))
# remove translation from robust norm
# doesn't make a difference for pam50 or success of any other cluster models (all others still fail)
rl2 <- lapply(rl, `+`, 1)
rl2NoBatch <- lapply(rlNoBatch, `+`, 1)
rl2Batched <- lapply(rlBatched, `+`, 1)
# apply pam50 clustering without 0-centering
pam50Pos <- vector("list", length = length(c(rl2, rl2NoBatch))) %>%
set_names(names(c(rl2, rl2NoBatch)))
for(m in names(c(rl2, rl2NoBatch))) {
pam50Pos[[m]] <- try(expMat2subtype("pam50", c(rl2, rl2NoBatch)[[m]], fmap))
}
pam50Pos$KOO
#... "no probe in common -> annot or mapping parameters are necessary for the mapping process!"
# drop the failed KOO element (position 16)
pam50Pos <- bind_rows(pam50Pos[- 16])
pam50Posbatch <- vector("list", length = length(c(rl2, rl2Batched))) %>%
set_names(names(c(rl2, rl2Batched)))
for(m in names(c(rl2, rl2Batched))) {
pam50Posbatch[[m]] <- try(expMat2subtype("pam50", c(rl2, rl2Batched)[[m]], fmap))
}
pam50Posbatch <- bind_rows(pam50Posbatch[- 16])
pam50Poslabels <- left_join(pam50Pos, rename(pam50Posbatch, pam50_batch = pam50)) %>%
filter(!duplicated(sample))
pam50labels2 <- left_join(pam50labels, pam50Poslabels, by = "sample")
filter(pam50labels2, pam50.x:pam50_batch.x != pam50.y:pam50_batch.y)
# A tibble: 0 x 5
# … with 5 variables: sample <chr>, pam50.x <fct>, pam50_batch.x <fct>, pam50.y <fct>,
# pam50_batch.y <fct>
# try all models
clusterLabels <- vector("list", length = length(clusterModels)) %>%
set_names(clusterModels)
for(n in names(clusterLabels)) {
clusterLabels[[n]] <- vector("list", length(rl2)) %>%
set_names(names(rl2))
for(m in names(clusterLabels[[n]])) {
clusterLabels[[n]][[m]] <- try(expMat2subtype(n, rl2[[m]], fmap))
}
}
# intrinsic gene lists ssp2003, ssp2006, pam50 ("Basal", "Her2", "LumA", "LumB" or "Normal")
#
# genius, ggi,
# prognostic scores: ihc4, gene70, gene76, endoPredict, oncotype dx, npi, pi3ca, tamr13
# gene modules: mod1, mod2 for scmod1, scmod2, scmgene
| /R/label clusters.R | no_license | stjordanis/cancer | R | false | false | 5,418 | r |
source("scripts/load_packages.R")
source("scripts/custom_functions.R")
source("scripts/recipes.R")
source("scripts/model_DecisionTree.R")
source("scripts/model_RandomForest.R")
source("scripts/model_Xgboost.R")
| /main.R | no_license | Sponghop/dalex_tidymodels_example | R | false | false | 211 | r |
\name{cv4abc}
\alias{cv4abc}
\title{
Cross validation for Approximate Bayesian Computation (ABC)
}
\description{
This function performs a leave-one-out cross validation for ABC via
subsequent calls to the function \code{\link{abc}}. A potential use of
this function is to evaluate the effect of the choice of the tolerance
rate on the quality of the estimation with ABC.
}
\usage{
cv4abc(param, sumstat, abc.out = NULL, nval, tols, statistic = "median",
prior.range = NULL, method, hcorr = TRUE, transf = "none", logit.bounds
= c(0,0), subset = NULL, kernel = "epanechnikov", numnet = 10, sizenet =
5, lambda = c(0.0001,0.001,0.01), trace = FALSE, maxit = 500, \dots)
}
\arguments{
\item{param}{
a vector, matrix or data frame of the simulated parameter values.}
\item{sumstat}{
a vector, matrix or data frame of the simulated summary statistics.}
\item{abc.out}{
an object of class \code{"abc"}, optional. If supplied, all arguments
passed to \code{\link{abc}} are extracted from this object,
except for \code{sumstat}, \code{param}, and \code{tol}, which
always have to be supplied as arguments.}
\item{nval}{
size of the cross-validation sample.}
\item{tols}{
a single tolerance rate or a vector of tolerance rates.}
\item{statistic}{
a character string specifying the statistic to calculate a point
estimate from the posterior distribution of the
parameter(s). Possible values are \code{"median"} (default),
\code{"mean"}, or \code{"mode"}.}
\item{prior.range}{
a range to truncate the prior range.}
\item{method}{
a character string indicating the type of ABC algorithm to be
applied. Possible values are \code{"rejection"},
\code{"loclinear"}, and \code{"neuralnet"}. See also
\code{\link{abc}}.}
\item{hcorr}{
logical, if \code{TRUE} (default) the conditional heteroscedastic
model is applied.}
\item{transf}{
a vector of character strings indicating the kind of transformation
to be applied to the parameter values. The possible values are
\code{"log"}, \code{"logit"}, and \code{"none"} (default), when no
transformation is applied. See also \code{\link{abc}}.}
\item{logit.bounds}{
a vector of bounds if \code{transf} is \code{"logit"}. These bounds
are applied to all parameters that are to be logit transformed.}
\item{subset}{
a logical expression indicating elements or rows to keep. Missing
values in \code{param} and/or \code{sumstat} are taken as
\code{FALSE}.}
\item{kernel}{
a character string specifying the kernel to be used when
\code{method} is \code{"loclinear"} or \code{"neuralnet"}. Defaults
to \code{"epanechnikov"}. See \code{\link{density}} for details.}
\item{numnet}{
the number of neural networks when \code{method} is
\code{"neuralnet"}. Defaults to 10. It indicates the number of times
the function \code{nnet} is called.}
\item{sizenet}{
the number of units in the hidden layer. Defaults to 5. Can be zero
if there are no skip-layer units. See \code{\link{nnet}} for more
details.}
\item{lambda}{
a numeric vector or a single value indicating the weight decay when
\code{method} is \code{"neuralnet"}. See \code{\link{nnet}} for more
details. By default, 0.0001, 0.001, or 0.01 is randomly chosen for
each of the networks.}
\item{trace}{
logical, \code{TRUE} switches on tracing the optimization of
\code{\link{nnet}}. Applies only when \code{method} is
\code{"neuralnet"}.}
\item{maxit}{
numeric, the maximum number of iterations. Defaults to 500. Applies
only when \code{method} is \code{"neuralnet"}. See also
\code{\link{nnet}}.}
\item{\dots}{
other arguments passed to \code{\link{nnet}}.}
}
\details{
A simulation is selected repeatedly to be a validation simulation,
while the other simulations are used as training simulations. Each
time the function \code{\link{abc}} is called to estimate the
parameter(s). A total of \code{nval} validation simulations are
selected.
The arguments of the function \code{\link{abc}} can be supplied in two
ways. First, simply give them as arguments when calling this function,
in which case \code{abc.out} can be \code{NULL}. Second, via an
existing object of class \code{"abc"}, here \code{abc.out}. WARNING:
when \code{abc.out} is supplied, the same \code{sumstat} and
\code{param} objects have to be used as in the original call to
\code{\link{abc}}. Column names of \code{sumstat} and \code{param} are
checked for match.
See \code{\link{summary.cv4abc}} for calculating the prediction error
from an object of class \code{"cv4abc"}.
}
\value{
An object of class \code{"cv4abc"}, which is a list with the following
elements
\item{call}{The original calls to \code{\link{abc}} for each tolerance
rates.}
\item{cvsamples}{Numeric vector of length \code{nval}, indicating
which rows of the \code{param} and \code{sumstat} matrices were used
as validation values.}
\item{tols}{The tolerance rates.}
\item{true}{The parameter values that served as validation values.}
\item{estim}{The estimated parameter values.}
\item{names}{A list with two elements: \code{parameter.names} and
\code{statistics.names}. Both contain a vector of character strings
with the parameter and statistics names, respectively.}
\item{seed}{The value of \code{.Random.seed} when \code{cv4abc} is
called.}
}
\seealso{
\code{\link{abc}}, \code{\link{plot.cv4abc}}, \code{\link{summary.cv4abc}}
}
\examples{
data(musigma2)
## this data set contains five R objects, see ?musigma2 for
## details
## cv4abc() calls abc(). Here we show two ways for the supplying
## arguments of abc(). 1st way: passing arguments directly. In this
## example only 'param', 'sumstat', 'tol', and 'method', while default
## values are used for the other arguments.
##
cv.rej <- cv4abc(param=par.sim, sumstat=stat.sim, nval=50,
tols=c(.1,.2,.3), method="rejection")
## 2nd way: first creating an object of class 'abc', and then using it
## to pass its arguments to abc().
##
lin <- abc(target=stat.obs, param=par.sim, sumstat=stat.sim, tol=.2,
method="loclinear", transf=c("none","log"))
cv.lin <- cv4abc(param=par.sim, sumstat=stat.sim, abc.out=lin, nval=50,
tols=c(.1,.2,.3))
## using the plot method. Different tolerance levels are plotted with
## different heat.colors. Smaller tolerance levels correspond to
## "redder" points.
## !!! consider using the argument 'exclude' (plot.cv4abc) to suppress
## the plotting of any outliers that mask readability !!!
plot(cv.lin, log=c("xy", "xy"), caption=c(expression(mu),
expression(sigma^2)))
## comparing with the rejection sampling
plot(cv.rej, log=c("", "xy"), caption=c(expression(mu), expression(sigma^2)))
## or printing results directly to a postscript file...
plot(cv.lin, log=c("xy", "xy"), caption=c(expression(mu),
expression(sigma^2)), file="CVrej", postscript=TRUE)
## using the summary method to calculate the prediction error
summary(cv.lin)
## compare with rejection sampling
summary(cv.rej)
}
\keyword{htest}
\keyword{models}
% Converted by Sd2Rd version 1.15.
| /r-module/src/test/resources/unit/test_packages/extracted/abc/man/cv4abc.Rd | permissive | openanalytics/RDepot | R | false | false | 7,195 | rd | \name{cv4abc}
\alias{cv4abc}
\title{
Cross validation for Approximate Bayesian Computation (ABC)
}
\description{
This function performs a leave-one-out cross validation for ABC via
subsequent calls to the function \code{\link{abc}}. A potential use of
this function is to evaluate the effect of the choice of the tolerance
rate on the quality of the estimation with ABC.
}
\usage{
cv4abc(param, sumstat, abc.out = NULL, nval, tols, statistic = "median",
prior.range = NULL, method, hcorr = TRUE, transf = "none", logit.bounds
= c(0,0), subset = NULL, kernel = "epanechnikov", numnet = 10, sizenet =
5, lambda = c(0.0001,0.001,0.01), trace = FALSE, maxit = 500, \dots)
}
\arguments{
\item{param}{
a vector, matrix or data frame of the simulated parameter values.}
\item{sumstat}{
a vector, matrix or data frame of the simulated summary statistics.}
\item{abc.out}{
an object of class \code{"abc"}, optional. If supplied, all arguments
passed to \code{\link{abc}} are extracted from this object,
except for \code{sumstat}, \code{param}, and \code{tol}, which
always have to be supplied as arguments.}
\item{nval}{
size of the cross-validation sample.}
\item{tols}{
a single tolerance rate or a vector of tolerance rates.}
\item{statistic}{
a character string specifying the statistic to calculate a point
estimate from the posterior distribution of the
parameter(s). Possible values are \code{"median"} (default),
\code{"mean"}, or \code{"mode"}.}
\item{prior.range}{
a range to truncate the prior range.}
\item{method}{
a character string indicating the type of ABC algorithm to be
applied. Possible values are \code{"rejection"},
\code{"loclinear"}, and \code{"neuralnet"}. See also
\code{\link{abc}}.}
\item{hcorr}{
logical, if \code{TRUE} (default) the conditional heteroscedastic
model is applied.}
\item{transf}{
a vector of character strings indicating the kind of transformation
to be applied to the parameter values. The possible values are
\code{"log"}, \code{"logit"}, and \code{"none"} (default), when no
is transformation applied. See also \code{\link{abc}}.}
\item{logit.bounds}{
a vector of bounds if \code{transf} is \code{"logit"}. These bounds
are applied to all parameters that are to be logit transformed.}
\item{subset}{
a logical expression indicating elements or rows to keep. Missing
values in \code{param} and/or \code{sumstat} are taken as
\code{FALSE}.}
\item{kernel}{
a character string specifying the kernel to be used when
\code{method} is \code{"loclinear"} or \code{"neuralnet"}. Defaults
to \code{"epanechnikov"}. See \code{\link{density}} for details.}
\item{numnet}{
the number of neural networks when \code{method} is
\code{"neuralnet"}. Defaults to 10. It indicates the number of times
the function \code{nnet} is called.}
\item{sizenet}{
the number of units in the hidden layer. Defaults to 5. Can be zero
if there are no skip-layer units. See \code{\link{nnet}} for more
details.}
\item{lambda}{
a numeric vector or a single value indicating the weight decay when
\code{method} is \code{"neuralnet"}. See \code{\link{nnet}} for more
details. By default, 0.0001, 0.001, or 0.01 is randomly chosen for
each of the networks.}
\item{trace}{
logical, \code{TRUE} switches on tracing the optimization of
\code{\link{nnet}}. Applies only when \code{method} is
\code{"neuralnet"}.}
\item{maxit}{
numeric, the maximum number of iterations. Defaults to 500. Applies
only when \code{method} is \code{"neuralnet"}. See also
\code{\link{nnet}}.}
\item{\dots}{
other arguments passed to \code{\link{nnet}}.}
}
\details{
A simulation is selected repeatedly to be a validation simulation,
while the other simulations are used as training simulations. Each
time the function \code{\link{abc}} is called to estimate the
parameter(s). A total of \code{nval} validation simulations are
selected.
The arguments of the function \code{\link{abc}} can be supplied in two
ways. First, simply give them as arguments when calling this function,
in which case \code{abc.out} can be \code{NULL}. Second, via an
existing object of class \code{"abc"}, here \code{abc.out}. WARNING:
when \code{abc.out} is supplied, the same \code{sumstat} and
\code{param} objects have to be used as in the original call to
\code{\link{abc}}. Column names of \code{sumstat} and \code{param} are
checked for match.
See \code{\link{summary.cv4abc}} for calculating the prediction error
from an object of class \code{"cv4abc"}.
}
\value{
An object of class \code{"cv4abc"}, which is a list with the following
elements
\item{call}{The original calls to \code{\link{abc}} for each tolerance
rate.}
\item{cvsamples}{Numeric vector of length \code{nval}, indicating
which rows of the \code{param} and \code{sumstat} matrices were used
as validation values.}
\item{tols}{The tolerance rates.}
\item{true}{The parameter values that served as validation values.}
\item{estim}{The estimated parameter values.}
\item{names}{A list with two elements: \code{parameter.names} and
\code{statistics.names}. Both contain a vector of character strings
with the parameter and statistics names, respectively.}
\item{seed}{The value of \code{.Random.seed} when \code{cv4abc} is
called.}
}
\seealso{
\code{\link{abc}}, \code{\link{plot.cv4abc}}, \code{\link{summary.cv4abc}}
}
\examples{
data(musigma2)
## this data set contains five R objects, see ?musigma2 for
## details
## cv4abc() calls abc(). Here we show two ways of supplying the
## arguments of abc(). 1st way: passing arguments directly. In this
## example only 'param', 'sumstat', 'tol', and 'method' are given, while
## default values are used for the other arguments.
##
cv.rej <- cv4abc(param=par.sim, sumstat=stat.sim, nval=50,
tols=c(.1,.2,.3), method="rejection")
## 2nd way: first creating an object of class 'abc', and then using it
## to pass its arguments to abc().
##
lin <- abc(target=stat.obs, param=par.sim, sumstat=stat.sim, tol=.2,
method="loclinear", transf=c("none","log"))
cv.lin <- cv4abc(param=par.sim, sumstat=stat.sim, abc.out=lin, nval=50,
tols=c(.1,.2,.3))
## using the plot method. Different tolerance levels are plotted with
## different heat.colors. Smaller tolerance levels correspond to
## "more red" points.
## !!! consider using the argument 'exclude' (plot.cv4abc) to suppress
## the plotting of any outliers that mask readability !!!
plot(cv.lin, log=c("xy", "xy"), caption=c(expression(mu),
expression(sigma^2)))
## comparing with the rejection sampling
plot(cv.rej, log=c("", "xy"), caption=c(expression(mu), expression(sigma^2)))
## or printing results directly to a postscript file...
plot(cv.lin, log=c("xy", "xy"), caption=c(expression(mu),
expression(sigma^2)), file="CVrej", postscript=TRUE)
## using the summary method to calculate the prediction error
summary(cv.lin)
## compare with rejection sampling
summary(cv.rej)
}
\keyword{htest}
\keyword{models}
% Converted by Sd2Rd version 1.15.
|
#Basics:
#' redcap_uri=ptcs$protect$redcap_uri
#' token=ptcs$protect$token
#'
#' action = NULL
#' content = NULL
#' arms = NULL
#' records = NULL
#' fields = NULL
#'
#' message = TRUE
redcap_api_call<-function (redcap_uri=NULL, token=NULL,
action = NULL, content = NULL,
records = NULL,arms = NULL,events=NULL, forms=NULL, fields = NULL,
export_file_path = NULL,batch_size = 500L, carryon = FALSE,
message = TRUE,httr_config=NULL,post_body=NULL,upload_file=NULL,...) {
#Use this space to document
#List of Contents:
#  formEventMapping, report, metadata, event, participantList, exportFieldNames,
#  project, instrument, user, generateNextRecordName, record, pdf, file
#List of Actions:
#  delete, export, import
if(is.null(redcap_uri) ) {stop("requires redcap_uri")}
if(is.null(token) && is.null(post_body) ) {stop("requires token or constructed post body")}
if(is.null(content) ) {
message("no content type supplied, using default: 'record'.")
content <- "record"
}
ls_add <- list(...)
if(is.null(post_body)){
post_body <- list(token = token, content = content, format = "csv")
}
post_body$content <- content
if(!is.null(action) && action!= "record_single_run"){post_body$action = action}
if(!is.null(arms)){post_body$arms<-paste(arms,sep = "",collapse = ",")}
if(!is.null(events)){post_body$events<-paste(events,sep = "",collapse = ",")}
if(!is.null(fields)){post_body$fields<-paste(fields,sep = "",collapse = ",")}
if(!is.null(forms)){post_body$forms<-paste(forms,sep = "",collapse = ",")}
if(!is.null(records)){post_body$records<-paste(records,sep = "",collapse = ",")}
if(!is.null(upload_file)) {
post_body$file <- httr::upload_file(upload_file)
names(post_body)[which(names(post_body)=="records")]<-"record"
names(post_body)[which(names(post_body)=="fields")]<-"field"
}
if(is.null(action)) {action <- ""}
if (content == "record" && action == "") {
vari_list <- redcap_api_call(redcap_uri= redcap_uri,post_body = post_body[which(names(post_body)!="fields")],content = "exportFieldNames")
record_list <- redcap_api_call(redcap_uri= redcap_uri,post_body = post_body,content = "record",fields=vari_list$output$original_field_name[1],action = "record_single_run")
if (nrow( record_list$output) > batch_size) {
return(redcap_get_large_records(redcap_uri= redcap_uri,post_body = post_body,record_list = record_list,batch_size = batch_size,carryon = carryon))
}
}
start_time <- Sys.time()
result <- httr::POST(url = redcap_uri, body = post_body,config = httr_config)
raw_text <- httr::content(result, "text")
if(result$status != 200L || any(is.na(raw_text))) {
message("redcap api call failed\n",raw_text)
return(list(output=raw_text,success=FALSE))
}
elapsed_seconds <- as.numeric(difftime(Sys.time(), start_time, units = "secs"))
simple_df_contents <- c("formEventMapping","metadata","event","exportFieldNames","participantList","project","instrument","user","record","generateNextRecordName")
if(content %in% simple_df_contents && action %in% c("record_single_run","")) {
try(ds <- utils::read.csv(text = raw_text, stringsAsFactors = FALSE), silent = TRUE)
if (!exists("ds")){
ds <- raw_text
} else if (inherits(ds, "data.frame")) {
if(nrow(ds)<1){
return(list(output=raw_text,success=TRUE))
} else {
return(list(output=ds,success=TRUE))
}
} else if (result$status != 200L) {
message("redcap api call failed (HTTP code is not 200), returning raw text")
return(list(output=ds,success=FALSE))
} else {
return(list(output=ds,success=TRUE))
}
} else if (action == "export" || content %in% c("pdf")) {
stop("Function not yet available.")
} else if (content == "records" && action == "delete"){
stop("Function not yet available.")
} else if (content == "file") {
return(list(output=raw_text,success=TRUE)) # a 200 response for a file export carries the file content
}
return(list(output=raw_text,success=FALSE)) # fall-through: 'ds' is never defined on this path, so return the raw response text
}
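A minimal usage sketch for `redcap_api_call()`. The URI, token, record IDs, and field names below are placeholders invented for illustration, and a live REDCap server is required, so the calls are shown commented out:

```r
## Placeholder credentials -- substitute your own project's values.
# uri <- "https://redcap.example.org/api/"
# tok <- "YOUR_32_CHARACTER_API_TOKEN_HERE"

## Export the project metadata (data dictionary):
# meta <- redcap_api_call(redcap_uri = uri, token = tok, content = "metadata")

## Export selected fields for selected records; pulls larger than
## 'batch_size' are routed through redcap_get_large_records() automatically:
# recs <- redcap_api_call(redcap_uri = uri, token = tok, content = "record",
#                         records = c("1001", "1002"),
#                         fields  = c("registration_redcapid", "age"))
# if (recs$success) head(recs$output)
```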
redcap_get_large_records <- function(redcap_uri = NULL,post_body=NULL,record_list=NULL,batch_size=1000L,carryon = FALSE ) {
record_list$output$count<-ceiling(1:nrow(record_list$output)/batch_size)
records_evt_fixed<-do.call(rbind,lapply(split(record_list$output,record_list$output$registration_redcapid),function(dfx){
if(length(unique(dfx$count))!=1) {
dfx$count<-round(median(dfx$count),digits = 0)
}
return(dfx)
}))
records_sp<-split(records_evt_fixed,records_evt_fixed$count)
message("pulling large records in batches")
ifTerminate <- FALSE
output_sum<-cleanuplist(lapply(records_sp,function(tgt){
if(ifTerminate && !carryon) {return(NULL)}
message("pulling batch ",unique(tgt$count)," out of ",max(records_evt_fixed$count))
output<-redcap_api_call(redcap_uri = redcap_uri, post_body = post_body,
content = "record",
records = unique(tgt$registration_redcapid),
action = "record_single_run")
if(!output$success || !is.data.frame(output$output)) {
message("failed, error message is: ",output$output)
ifTerminate <<- TRUE
return(list(IDarray = unique(tgt$registration_redcapid),batch_number = unique(tgt$count),success = FALSE,output = output$output))
}
return(output$output)
}))
if(!ifTerminate) {
return(list(output = do.call(rbind,output_sum),success = TRUE))
} else if (carryon) {
message("the function carried on after encountering an error; returning the completed batches together with the error information.")
output_isDone <- split(output_sum,sapply(output_sum,is.data.frame))
return(list(output = do.call(rbind,output_isDone$`TRUE`),success = FALSE, error_outcome = output_isDone$`FALSE`))
} else {
return(list(output = NULL, success = FALSE, error_outcome = output_sum[[length(output_sum)]])) # base-R equivalent of dplyr::last()
}
}
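The batching above assigns each row a batch number and then snaps all rows of a record that straddles two batches onto a single batch. The index arithmetic can be illustrated standalone with toy data (no REDCap connection needed; the record IDs below are invented):

```r
# Batch numbering as in redcap_get_large_records(): rows 1..n get
# ceiling(i/batch_size); a record whose rows straddle two batches is
# snapped to the (rounded) median batch so it is pulled in one request.
batch_index <- function(n, batch_size) ceiling(seq_len(n) / batch_size)

ids <- c("A", "A", "B", "B", "B", "C")          # toy record IDs
cnt <- batch_index(length(ids), batch_size = 2) # 1 1 2 2 3 3: "B" straddles batches 2 and 3
fixed <- unlist(lapply(split(cnt, ids), function(k) {
  if (length(unique(k)) != 1) k[] <- round(median(k))  # snap to one batch
  k
}), use.names = FALSE)
fixed  # 1 1 2 2 2 3: all rows of "B" now land in batch 2
```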
redcap_upload<-function (ds_to_write, batch_size = 100L, interbatch_delay = 0.5, retry_whenfailed=T,
continue_on_error = FALSE, redcap_uri, token, verbose = TRUE, NAoverwrite = F,
config_options = NULL)
{
start_time <- base::Sys.time()
if (base::missing(redcap_uri))
base::stop("The required parameter `redcap_uri` was missing from the call to `redcap_write()`.")
if (base::missing(token))
base::stop("The required parameter `token` was missing from the call to `redcap_write()`.")
#token <- REDCapR::sanitize_token(token)
ds_glossary <- REDCapR::create_batch_glossary(row_count = base::nrow(ds_to_write),
batch_size = batch_size)
affected_ids <- character(0)
excluded_ids <- character(0)
lst_status_code <- NULL
lst_outcome_message <- NULL
success_combined <- TRUE
message("Starting to update ", format(nrow(ds_to_write),
big.mark = ",", scientific = F, trim = T), " records to be written at ",
Sys.time())
for (i in seq_along(ds_glossary$id)) {
selected_indices <- seq(from = ds_glossary[i, "start_index"],
to = ds_glossary[i, "stop_index"])
if (i > 0)
Sys.sleep(time = interbatch_delay)
message("Writing batch ", i, " of ", nrow(ds_glossary),
", with indices ", min(selected_indices), " through ",
max(selected_indices), ".")
write_result <- redcap_oneshot_upload(ds = ds_to_write[selected_indices,], previousIDs = NULL,retry_whenfailed = T,
redcap_uri = redcap_uri, token = token, verbose = verbose,
NAoverwrite = NAoverwrite,
config_options = config_options)
lst_status_code[[i]] <- write_result$status_code
lst_outcome_message[[i]] <- write_result$outcome_message
if (!write_result$success) {
error_message <- paste0("The `redcap_write()` call failed on iteration ",
i, ".")
error_message <- paste(error_message, ifelse(!verbose,
"Set the `verbose` parameter to TRUE and rerun for additional information.",
""))
if (continue_on_error)
warning(error_message)
else stop(error_message)
}
affected_ids <- c(affected_ids, write_result$affected_ids)
success_combined <- success_combined & write_result$success # overall success requires every batch to succeed
excluded_ids <- c(excluded_ids, write_result$excludedIDs)
rm(write_result)
}
elapsed_seconds <- as.numeric(difftime(Sys.time(), start_time,
units = "secs"))
status_code_combined <- paste(lst_status_code, collapse = "; ")
outcome_message_combined <- paste(lst_outcome_message, collapse = "; ")
excluded_ids <- excluded_ids[excluded_ids!=""]
return(list(success = success_combined, status_code = status_code_combined,excluded_ids=excluded_ids,
outcome_message = outcome_message_combined, records_affected_count = length(affected_ids),
affected_ids = affected_ids, elapsed_seconds = elapsed_seconds))
}
redcap_oneshot_upload<-function (ds, redcap_uri, token, verbose = TRUE, NAoverwrite = F,config_options = NULL,retry_whenfailed=F,previousIDs=NULL) {
overwriteBehavior = ifelse(NAoverwrite,"overwrite","normal")
start_time <- Sys.time()
csvElements <- NULL
if (missing(redcap_uri))
stop("The required parameter `redcap_uri` was missing from the call to `redcap_write_oneshot()`.")
if (missing(token))
stop("The required parameter `token` was missing from the call to `redcap_write_oneshot()`.")
#token <- REDCapR::sanitize_token(token)
con <- base::textConnection(object = "csvElements", open = "w",
local = TRUE)
utils::write.csv(ds, con, row.names = FALSE, na = "")
close(con)
csv <- paste(csvElements, collapse = "\n")
rm(csvElements, con)
post_body <- list(token = token, content = "record", format = "csv",
type = "flat", data = csv, overwriteBehavior = overwriteBehavior,
returnContent = "ids", returnFormat = "csv")
result <- httr::POST(url = redcap_uri, body = post_body,
config = config_options)
status_code <- result$status_code
raw_text <- httr::content(result, type = "text")
elapsed_seconds <- as.numeric(difftime(Sys.time(), start_time,
units = "secs"))
success <- (status_code == 200L)
if (success) {
elements <- unlist(strsplit(raw_text, split = "\\n"))
affectedIDs <- elements[-1]
recordsAffectedCount <- length(affectedIDs)
outcome_message <- paste0(format(recordsAffectedCount,
big.mark = ",", scientific = FALSE, trim = TRUE),
" records were written to REDCap in ", round(elapsed_seconds,
1), " seconds.")
raw_text <- ""
}
else {
if(retry_whenfailed){
outcome_message <- paste0("The upload was not successful:\n",
raw_text,"\n","But we will try again...\n")
sp_rawtext<-strsplit(raw_text,split = "\\n")[[1]]
allgx<-lapply(sp_rawtext,function(x) strsplit(gsub("\"","",x),",")[[1]])
mxID<-sapply(allgx,function(fields){gsub("ERROR: ","",fields[1])})
allIDs<-c(previousIDs,mxID)
negPos<-as.numeric(na.omit(sapply(allIDs,function(IDX){
#print(IDX)
a<-unique(which(ds==IDX,arr.ind = T)[,1]);
if(length(a)>0){a}else{NA}
})))
ds_new<-if(length(negPos)>0) ds[-negPos,] else ds # guard: ds[-integer(0),] would drop every row
gx<-redcap_oneshot_upload(ds = ds_new, redcap_uri = redcap_uri, token = token, verbose = verbose,
retry_whenfailed = T,previousIDs = allIDs,
config_options = config_options)
raw_text<-paste(raw_text,gx$raw_text,sep = "re-try: ")
success<-gx$success
status_code<-gx$status_code
outcome_message<-paste(outcome_message,gx$outcome_message,sep = "re-try: ")
recordsAffectedCount<-gx$records_affected_count
affectedIDs<-gx$affected_ids
elapsed_seconds<-gx$elapsed_seconds
previousIDs<-gx$excludedIDs
} else {
affectedIDs <- numeric(0)
recordsAffectedCount <- NA_integer_
outcome_message <- paste0("The REDCapR write/import operation was not successful. The error message was:\n",
raw_text)
}
}
if (verbose)
message(outcome_message)
if (!is.null(previousIDs)){excludedIDs<-previousIDs}else {excludedIDs<-""}
return(list(success = success, status_code = status_code,
outcome_message = outcome_message, records_affected_count = recordsAffectedCount,
affected_ids = affectedIDs, elapsed_seconds = elapsed_seconds, excludedIDs = excludedIDs,
raw_text = raw_text))
}
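When a batch fails, the retry branch above recovers the offending record IDs by splitting the error payload on newlines and commas and stripping the `ERROR: ` prefix. A standalone sketch of that parsing, using an invented error payload (real REDCap error text may differ):

```r
# Invented example payload: one line per failing record, first field "ERROR: <id>".
raw_text <- "\"ERROR: 1001\",age,abc,must be numeric\nERROR: 1002,sex,9,invalid choice"

sp_lines <- strsplit(raw_text, split = "\n")[[1]]
fields   <- lapply(sp_lines, function(x) strsplit(gsub("\"", "", x), ",")[[1]])
bad_ids  <- vapply(fields, function(f) gsub("ERROR: ", "", f[1]), character(1))
bad_ids  # "1001" "1002" -- rows with these IDs are dropped before retrying
```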
redcap_seq_upload<-function(ds=NULL,id.var=NULL,redcap_uri,token,batch_size=1000L) {
ds_sp<-split(ds,do.call(paste,c(ds[id.var],sep="_")))
gt <- lapply(ds_sp,function(gx){gx[, !sapply(gx, function(col) all(is.na(col))), drop = FALSE]}) # drop columns that are entirely NA within this record
gt_u_names<-unique(lapply(gt,names))
gt_u_index <- sapply(gt,function(gtx){match(list(names(gtx)),gt_u_names)})
gyat<-lapply(1:length(gt_u_names),function(g){
message("uploading ",g," out of ",length(gt_u_names))
redcap_write(ds_to_write = do.call(rbind,gt[which(gt_u_index == g)]),batch_size = batch_size,redcap_uri = redcap_uri,token = token,continue_on_error = T)
})
return(
list(affected_ids=unlist(sapply(gyat,`[[`,"affected_ids"),use.names = F),outcome_message = unlist(sapply(gyat,`[[`,"outcome_message"),use.names = F))
)
}
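The sequential-upload helper above groups rows that share an identical set of non-missing columns, so each call uploads a rectangular payload. The column-set matching can be sketched standalone (toy data, no server needed):

```r
# Rows 'a' and 'c' carry the same columns and can be rbind-ed into one
# upload; row 'b' has a different column set and goes in its own call.
gt <- list(a = data.frame(id = 1, x = 10),
           b = data.frame(id = 2, y = 20),
           c = data.frame(id = 3, x = 30))
u_names <- unique(lapply(gt, names))                       # distinct column sets
idx <- sapply(gt, function(g) match(list(names(g)), u_names))
idx                                                        # a=1 b=2 c=1
batch1 <- do.call(rbind, gt[idx == 1])                     # the c("id","x") batch
```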
redcap.getreport<-function(redcap_uri, token, reportid = NULL, message = TRUE, config_options = NULL) {
start_time <- Sys.time()
if (missing(redcap_uri)) {
stop("The required parameter `redcap_uri` was missing from the call to `redcap.getreport`.") }
if (missing(token)) {
stop("The required parameter `token` was missing from the call to `redcap.getreport`.") }
#token <- REDCapR::sanitize_token(token)
post_body <- list(token = token, content = "report", format = "csv", report_id=as.character(reportid))
result <- httr::POST(url = redcap_uri, body = post_body,
config = config_options)
status_code <- result$status
success <- (status_code == 200L)
raw_text <- httr::content(result, "text")
elapsed_seconds <- as.numeric(difftime(Sys.time(), start_time, units = "secs"))
if (success) {
try(ds <- utils::read.csv(text = raw_text, stringsAsFactors = FALSE),
silent = TRUE)
    if (exists("ds") && inherits(ds, "data.frame")) {
      outcome_message <- paste0("The report containing ",
                                format(nrow(ds), big.mark = ",", scientific = FALSE, trim = TRUE),
                                " rows was read from REDCap in ",
                                round(elapsed_seconds, 1), " seconds. The http status code was ",
                                status_code, ".")
      raw_text <- ""
    } else {
      success <- FALSE
      ds <- data.frame()
      outcome_message <- paste0("The REDCap report export failed. The http status code was ",
                                status_code, ". The 'raw_text' returned was '",
                                raw_text, "'.")
    }
  } else {
    ds <- data.frame()
    outcome_message <- paste0("The REDCap report export was not successful. The error message was:\n",
                              raw_text)
  }
if (message){
message(outcome_message)}
return(list(data = ds, success = success, status_code = status_code,
outcome_message = outcome_message, elapsed_seconds = elapsed_seconds,
raw_text = raw_text))
}
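A hypothetical call to `redcap.getreport()`; the report ID and credentials are placeholders and a live REDCap server is required, so the calls are shown commented out:

```r
## Placeholder values -- substitute your own project's URI, token, and report ID.
# rep_out <- redcap.getreport(redcap_uri = "https://redcap.example.org/api/",
#                             token      = "YOUR_32_CHARACTER_API_TOKEN_HERE",
#                             reportid   = 42)
# if (rep_out$success) str(rep_out$data)
```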
| /R/UPLOAD.R | no_license | AkiraWang2/redcap_in_r | R | false | false | 15,965 | r |
|
x <- 0:500/50
f <- pnorm(x, 5, 2)
g <- pnorm(x, 5, 1)
h <- sqrt(f*g)
plot(x,f,type='l', ylim=c(0,1))
lines(x,g,col=2)
lines(x,h,lty=2)
abline(h=0.64)
u <- qnorm(0.68, 5, 2)
l <- qnorm(0.64, 5, 1)
fg <- approx(c(l,u), c(pnorm(l,5,1), pnorm(u,5,2)), xout=seq(u,l,length.out=20))
lines(fg$x, fg$y, col=3)
library(lmomco)
x <- 0:2000
gpa_par <- vec2par(c(90,12,-0.1), type='gpa')
glo_par <- vec2par(c(110,12,-0.3), type='glo')
obs_events <- sort(rlmomco(500, vec2par(c(92,11.5,-0.15), type='gpa')))
gpa_par <- pargpa(lmoms(obs_events))
glo_par <- parglo(lmoms(obs_events))
ygpa <- 1-cdfgpa(x, gpa_par)
yglo <- 1-cdfglo(x, glo_par)
hm <- sqrt(ygpa*yglo)
rplim <- -log(1-(0.02))/360
u0 <- quaglo(1-rplim, glo_par)
l0 <- quagpa(1-rplim, gpa_par)
if(u0 <= l0){
l <- 1.5*quagpa(1-rplim, gpa_par)
u <- u0
}else{
u <- 1.5*quaglo(1-rplim, glo_par)
l <- l0
}
signif(c(u0,u,l0,l,1-cdfgpa(l, gpa_par), 1-cdfglo(u, glo_par)),4)
fg <- approx(c(l,u),
c(1-cdfgpa(l, gpa_par), 1-cdfglo(u, glo_par)),
xout=x)
fg1 <- fg
fg1$y[fg1$x < l] <- -1
fg1$y[fg1$x > u] <- 2
## NOTE: glo_poe0, gpa_poe0, gpa_geom, gpa_li, lint, glim and mid_changept used
## below are not defined in this scratch file; they are assumed to exist in the
## interactive session.
plot(obs_events, glo_poe0, type='p', log='y', ylim=c(1e-6, 1))
points(obs_events, gpa_poe0, col=2)
points(obs_events, gpa_geom, col=3)
points(lint$x, lint$y, col=4)
points(obs_events, pmin(lint$y, gpa_geom), pch=3, lty=2)
points(obs_events, pmin(glo_poe0, gpa_poe0), pch=2, col=2)
points(obs_events, gpa_li, col=5, pch=4)
abline(v=glim[1])
abline(h=mid_changept, col=2)
abline(v=glim[2])
max(1/glo_poe0)
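# A minimal, self-contained sketch (not part of the original scratch file) of
# the geometric-mean blending of two probability-of-exceedance curves used
# above; 'blend_poe' and its argument names are illustrative, assuming lmomco
# parameter objects and the matching cdf functions:
blend_poe <- function(x, parsA, parsB, cdfA, cdfB) {
  # geometric mean of the two probability-of-exceedance curves
  sqrt((1 - cdfA(x, parsA)) * (1 - cdfB(x, parsB)))
}
# e.g. blend_poe(x, parsGP, parsGL, cdfgpa, cdfglo) reproduces 'hm' computed above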
| /OLD/SCRATCH_geom_lininterp.R | no_license | griffada/AQUACAT_UKCEH | R | false | false | 1,562 | r |
|
### Reading in and modifying the data ###
#Reading in only the rows corresponding to the dates from 1/2/2007 to 2/2/2007
library(data.table)
data <- fread("sed '1p;/^[12]\\/2\\/2007/!d' household_power_consumption.txt",
na.strings = c("?", ""))
#Adding a new column with date and time pasted together
library(dplyr)
data <- mutate(data, Full.Date=paste(Date, Time))
#Converting 'Full.Date' column to a date class
library(lubridate)
data$Full.Date <- dmy_hms(data$Full.Date)
### Creating a plot ###
#Opening a PNG device
png(filename = "plot1.png",
width = 480, height = 480)
#Creating a histogram with appropriate parameters
with(data, hist(Global_active_power,
main="Global Active Power",
xlab="Global Active Power (kilowatts)",
col="red"))
#Closing the device
dev.off() | /plot1.R | no_license | pavelkirjanas/ExData_Plotting1 | R | false | false | 845 | r |
### Author: WeilianMuchen
### 1 Continuous Probability
## Continuous Probability
# Code: Cumulative distribution function
# Define x as male heights from the dslabs heights dataset:
library(tidyverse)
library(dslabs)
data(heights)
x <- heights %>% filter(sex=="Male") %>% pull(height)
# Given a vector x, we can define a function for computing the CDF of x using:
F <- function(a) mean(x <= a)
1 - F(70) # probability of male taller than 70 inches
## Theoretical Distribution
# Code: Using pnorm() to calculate probabilities
# Given male heights x:
library(tidyverse)
library(dslabs)
data(heights)
x <- heights %>% filter(sex=="Male") %>% pull(height)
# We can estimate the probability that a male is taller than 70.5 inches using:
1 - pnorm(70.5, mean(x), sd(x))
# Code: Discretization and the normal approximation
# plot distribution of exact heights in data
plot(prop.table(table(x)), xlab = "a = Height in inches", ylab = "Pr(x = a)")
# probabilities in actual data over length 1 ranges containing an integer
mean(x <= 68.5) - mean(x <= 67.5)
mean(x <= 69.5) - mean(x <= 68.5)
mean(x <= 70.5) - mean(x <= 69.5)
# probabilities in normal approximation match well
pnorm(68.5, mean(x), sd(x)) - pnorm(67.5, mean(x), sd(x))
pnorm(69.5, mean(x), sd(x)) - pnorm(68.5, mean(x), sd(x))
pnorm(70.5, mean(x), sd(x)) - pnorm(69.5, mean(x), sd(x))
# probabilities in actual data over other ranges don't match normal approx as well
mean(x <= 70.9) - mean(x <= 70.1)
pnorm(70.9, mean(x), sd(x)) - pnorm(70.1, mean(x), sd(x))
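# A small helper (not part of the original notes) wrapping the comparison
# above; the function name is illustrative. It returns the empirical and the
# normal-approximation probabilities for the interval (a, b]:
approx_vs_empirical <- function(x, a, b) {
  c(empirical = mean(x <= b) - mean(x <= a),
    normal    = pnorm(b, mean(x), sd(x)) - pnorm(a, mean(x), sd(x)))
}
# approx_vs_empirical(x, 67.5, 68.5)   # integer-centered interval: close match
# approx_vs_empirical(x, 70.1, 70.9)   # other intervals match less well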
## Plotting the Probability Density
library(tidyverse)
x <- seq(-4, 4, length = 100)
data.frame(x, f = dnorm(x)) %>%
ggplot(aes(x, f)) +
geom_line()
# dnorm(z, mu, sigma) evaluates the normal density with mean mu and sd sigma at z
## Monte Carlo Simulations
# Code: Generating normally distributed random numbers
# define x as male heights from dslabs data
library(tidyverse)
library(dslabs)
data(heights)
x <- heights %>% filter(sex=="Male") %>% pull(height)
# generate simulated height data using normal distribution - both datasets should have n observations
n <- length(x)
avg <- mean(x)
s <- sd(x)
simulated_heights <- rnorm(n, avg, s)
# plot distribution of simulated_heights
data.frame(simulated_heights = simulated_heights) %>%
ggplot(aes(simulated_heights)) +
geom_histogram(color="black", binwidth = 2)
# Code: Monte Carlo simulation of tallest person over 7 feet
B <- 10000
tallest <- replicate(B, {
simulated_data <- rnorm(800, avg, s) # generate 800 normally distributed random heights
max(simulated_data) # determine the tallest height
})
mean(tallest >= 7*12) # proportion of times that tallest person exceeded 7 feet (84 inches)
## Other Continuous Distributions
# Code: Plotting the normal distribution with dnorm
# Use the d prefix to plot the density function of a continuous distribution. Here is the density function for the normal distribution (abbreviation norm(), so dnorm()):
x <- seq(-4, 4, length.out = 100)
data.frame(x, f = dnorm(x)) %>%
ggplot(aes(x,f)) +
geom_line()
## DataCamp Assessment: Continuous Probability
# Exercise 1. Distribution of female heights - 1
# Use pnorm to define the probability that a height will take a value less than 5 feet given the stated distribution.
# Assign a variable 'female_avg' as the average female height.
female_avg <- 64
# Assign a variable 'female_sd' as the standard deviation for female heights.
female_sd <- 3
# Using variables 'female_avg' and 'female_sd', calculate the probability that a randomly selected female is shorter than 5 feet. Print this value to the console - do not save it to a variable.
pnorm(5*12, female_avg, female_sd)
# Exercise 2. Distribution of female heights - 2
# Use pnorm to define the probability that a height will take a value of 6 feet or taller.
# Assign a variable 'female_avg' as the average female height.
female_avg <- 64
# Assign a variable 'female_sd' as the standard deviation for female heights.
female_sd <- 3
# Using variables 'female_avg' and 'female_sd', calculate the probability that a randomly selected female is 6 feet or taller. Print this value to the console.
1 - pnorm(6*12, female_avg, female_sd)
# Exercise 3. Distribution of female heights - 3
# Use pnorm to define the probability that a randomly chosen woman will be shorter than 67 inches.
# Subtract the probability that a randomly chosen woman will be shorter than 61 inches.
# Assign a variable 'female_avg' as the average female height.
female_avg <- 64
# Assign a variable 'female_sd' as the standard deviation for female heights.
female_sd <- 3
# Using variables 'female_avg' and 'female_sd', calculate the probability that a randomly selected female is between the desired height range. Print this value to the console.
pnorm(67, female_avg, female_sd) - pnorm(61, female_avg, female_sd)
# Exercise 4. Distribution of female heights - 4
# Convert the average height and standard deviation to centimeters by multiplying each value by 2.54.
# Repeat the previous calculation using pnorm to define the probability that a randomly chosen woman will have a height between 61 and 67 inches, converted to centimeters by multiplying each value by 2.54.
# Assign a variable 'female_avg' as the average female height. Convert this value to centimeters.
female_avg <- 64*2.54
# Assign a variable 'female_sd' as the standard deviation for female heights. Convert this value to centimeters.
female_sd <- 3*2.54
# Using variables 'female_avg' and 'female_sd', calculate the probability that a randomly selected female is between the desired height range. Print this value to the console.
pnorm(67*2.54, female_avg, female_sd) - pnorm(61*2.54, female_avg, female_sd)
# Exercise 5. Probability of 1 SD from average
# Calculate the values for heights one standard deviation taller and shorter than the average.
# Calculate the probability that a randomly chosen woman will be within 1 SD from the average height.
# Assign a variable 'female_avg' as the average female height.
female_avg <- 64
# Assign a variable 'female_sd' as the standard deviation for female heights.
female_sd <- 3
# To a variable named 'taller', assign the value of a height that is one SD taller than average.
taller <- female_avg+female_sd
# To a variable named 'shorter', assign the value of a height that is one SD shorter than average.
shorter <- female_avg-female_sd
# Calculate the probability that a randomly selected female is between the desired height range. Print this value to the console.
pnorm(taller, female_avg, female_sd) - pnorm(shorter, female_avg, female_sd)
# Exercise 6. Distribution of male heights
# Determine the height of a man in the 99th percentile, given an average height of 69 inches and a standard deviation of 3 inches.
# Assign a variable 'male_avg' as the average male height.
male_avg <- 69
# Assign a variable 'male_sd' as the standard deviation for male heights.
male_sd <- 3
# Determine the height of a man in the 99th percentile of the distribution.
qnorm(0.99, male_avg, male_sd)
# Exercise 7. Distribution of IQ scores
# Use the function rnorm to generate a random distribution of 10,000 values with a given average and standard deviation.
# Use the function max to return the largest value from a supplied vector.
# Repeat the previous steps a total of 1,000 times. Store the vector of the top 1,000 IQ scores as highestIQ.
# Plot the histogram of values using the function hist.
# The variable `B` specifies the number of times we want the simulation to run.
B <- 1000
# Use the `set.seed` function to make sure your answer matches the expected result after random number generation.
set.seed(1)
# Create an object called `highestIQ` that contains the highest IQ score from each random distribution of 10,000 people.
highestIQ <- replicate(B, {
simulated_data <- rnorm(10000, 100, 15)
max(simulated_data)
})
# Make a histogram of the highest IQ scores.
hist(highestIQ) | /HarvardX Professional Certificate in Data Science/3 HarvardX PH125.3x - Probability/2 Continuous Probability/1 Continuous Probability/1 Continuous Probability.R | no_license | WeilianMuchen/edX_courses | R | false | false | 7,860 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/BeerFermentable.R
\name{createBeerFermentable}
\alias{createBeerFermentable}
\title{Add a Fermentable}
\usage{
createBeerFermentable(beerId, fermentableId)
}
\arguments{
\item{beerId}{The beerId}
\item{fermentableId}{Required ID of an existing fermentable}
}
\value{
none
}
\description{
Adds a fermentable to an existing beer.
}
\concept{BeerFermentable}
| /man/createBeerFermentable.Rd | no_license | bpb824/brewerydb | R | false | true | 435 | rd |
\name{summary.enve.GSStest}
\alias{summary.enve.GSStest}
\title{summary enve GSStest}
\description{Summary of an `enve.GSStest` object.}
\usage{\method{summary}{enve.GSStest}(object, ...)}
\arguments{
\item{object}{`enve.GSStest` object.}
\item{\dots}{No additional parameters are currently supported.}
}
\author{Luis M. Rodriguez-R <lmrodriguezr@gmail.com> [aut, cre]} | /enveomics.R/man/summary.enve.GSStest.Rd | permissive | ofanoyi/enveomics | R | false | false | 400 | rd |
#' @title A two-sample test for the equality of distributions for high-dimensional data
#' @aliases TwoSampleTest.HD
#' @description Performs the four tests of equality of the p marginal distributions
#' for two groups proposed in Cousido- Rocha et al.(2018). The methods have been
#' designed for the low sample size and high dimensional setting. Furthermore,
#' the possibility that the p variables in each data set can be weakly dependent is considered.
#' The function also reports a set of p permutation p-values, each of which is derived
#' from testing the equality of distributions in the two groups for each of the variables
#' separately. These p-values are useful when a proposed test rejects the global null
#' hypothesis since it makes it possible to identify which variables have contributed to this significance.
#' @details The function implements the two-sample tests proposed by Cousido-Rocha, et al. (2018).
#' The methods “spect”, “boot” and “us” are based on a global statistic which is the average of p
#' individual statistics corresponding to each of the p variables. Each of these individual statistics measures the difference between
#' the empirical characteristic functions computed from the two samples.
#' An alternative expression shows that each statistic is essentially the integrated
#' squared difference between kernel density estimates. The global statistic (average) is
#' standardized using one of three different variance estimators. The method “spect”
#' uses a variance estimator based on spectral analysis, the method “boot” implements
#' the block bootstrap to estimate the variance and the method “us” employs a variance estimator
#' derived from U-statistic theory (more details in Cousido-Rocha et al., 2018).
#' The methods “spect” and “boot” are suitable under some assumptions including that the
#' sequence of individual statistics that define the global statistic is strictly stationary,
#' whereas the method “us” avoids this assumption. However the methods “spect” and “boot”
#' have been checked in simulations and they perform well even when the stationarity assumption
#' is violated. The methods “spect” and “us” have their corresponding versions for independent
#' data (“spect_ind” and “us_ind”), for which the variance estimators are simplified taking into
#' account the independence of the variables. The asymptotic normality (when p tends to infinity)
#' of the standardized version of the statistic is used to compute the corresponding p-value.
#' On the other hand, Cousido-Rocha et al. (2018) also proposed the method “perm” whose global
#' statistic is the average of the permutation p-values corresponding to the individual statistics
#' mentioned above. This method assumes that the sequence of p-values is strictly stationary,
#' however in simulations it seems that it performs well when this assumption does not hold.
#' In addition to providing an alternative global test, these p-values can be used when the
#' global null hypothesis is rejected and one wishes to identify which of the p variables have
#' contributed to that rejection. The global statistic depends on a parameter which plays
#' a role similar to that of a smoothing parameter or bandwidth in kernel density estimation.
#' For the four global tests this parameter is estimated using the information from all the variables or features.
#' For the individual statistics from which the permutation
#' p-values are computed, there are two possibilities: (i) use the value employed in the global test
#' (b_I.permutation.p.values=“global”), (ii) estimate this parameter for each variable
#' separately using only its sample information (b_I.permutation.p.values=“individual”).
#' @param X A matrix where each row is one of the p-samples in the first group.
#' @param Y A matrix where each row is one of the p-samples in the second group.
#' @param method the two-sample test. By default the “us” method is computed. See details.
#' @param I.permutation.p.values Logical. Default is FALSE. A variable indicating whether to compute the permutation p-values or not when the selected method is not “perm”. See details.
#' @param b_I.permutation.p.values The method used to compute the individual statistics on which are based the permutation p-values. Default is “global”. See details.
#'
#' @return A list containing the following components:
#' \item{standardized statistic: }{the value of the standardized statistic.}
#' \item{p.value: }{the p-value for the test.}
#' \item{statistic: }{the value of the statistic.}
#' \item{variance: }{the value of the variance estimator.}
#' \item{p: }{number of samples or populations.}
#' \item{n: }{sample size in the first group.}
#' \item{m: }{sample size in the second group.}
#' \item{method: }{a character string indicating which two sample test is performed.}
#' \item{I.statistics: }{the p individual statistics.}
#' \item{I.permutation.p.values: }{the p individual permutation p-values.}
#' \item{data.name: }{a character string giving the name of the data.}
#'
#' @author
#' \itemize{
#' \item{Marta Cousido-Rocha}
#' \item{José Carlos Soage González}
#' \item{Jacobo de Uña-Álvarez}
#' \item{Jeffrey D. Hart}
#' }
#' @references Cousido-Rocha, M., de Uña-Álvarez J., and Hart, J. (2018).
#' A two-sample test for the equality of marginal distributions for high-dimensional data.
#' Preprint.
#'
#' @examples
#' \dontshow{
#' # Example
#' ### Data set to check the performance of the code
#'
#' p <- 100
#' n <- 5
#' m <- 5
#'
#' X <- matrix(rnorm(p * n), ncol = n)
#' Y <- matrix(rnorm(p * n), ncol = n)
#' system.time(res <- TwoSampleTest.HD(X, Y, method = "perm"))
#' }
#'
#'
#' \donttest{
#' # We consider a simulated data example. We have simulated the following situation.
#' # We have two groups, for example, 7 patients with tumor 1 and 7 patients with tumor 2.
#' # For each patient 1000 variables are measured, for example, gene expression levels.
#' # Besides, the distributions of 100 of the variables are different in the two groups,
#' # and the differences are in terms of location. The variables are independent to
#' # simplify the generation of the data sets.
#' p <- 1000
#' n = m = 7
#' inds <- sample(1:4, p, replace = TRUE)
#' X <- matrix(rep(0, n * p), ncol = n)
#' for (j in 1:p){
#' if (inds[j] == 1){
#' X[j, ] <- rnorm(n)
#' }
#' if (inds[j] == 2){
#' X[j, ] <- rnorm(n, sd = 2)
#' }
#' if (inds[j] == 3){
#' X[j, ] <- rnorm(n, mean = 1)
#' }
#' if (inds[j] == 4){
#' X[j, ] <- rnorm(n, mean = 1, sd = 2)
#' }
#' }
#' rho <- 0.1
#' ind <- sample(1:p, rho * p)
#' li <- length(ind)
#' indsy <- inds
#' for (l in 1:li){
#' if (indsy[ind[l]]==1){
#' indsy[ind[l]]=3
#' } else {
#' if (indsy[ind[l]]==2){
#' indsy[ind[l]]=4
#' } else {
#' if (indsy[ind[l]]==3){
#' indsy[ind[l]]=1
#' } else {
#' indsy[ind[l]] = 2
#' }
#' }
#' }
#' }
#' Y <- matrix(rep(0, m * p), ncol = m)
#' for (j in 1:p){
#' if (indsy[j] == 1){
#' Y[j,] <- rnorm(m)}
#' if (indsy[j] == 2){
#' Y[j, ] <- rnorm(m, sd = 2)
#' }
#' if (indsy[j]==3){
#' Y[j, ] <- rnorm(m, mean = 1)
#' }
#' if (indsy[j] == 4){
#' Y[j,] <- rnorm(m, mean = 1, sd = 2)
#' }
#' }
#'
#' # Our interest is to test the null hypothesis that the distribution of each of the 1000 variables
#' # is the same in the two groups.
#'
#' # We use for this purpose the four methods proposed in Cousido-Rocha et al. (2018).
#'
#' res1 <- TwoSampleTest.HD(X, Y, method = "spect")
#' res1
#' res2 <- TwoSampleTest.HD(X, Y, method = "boot")
#' res2
#' res3 <- TwoSampleTest.HD(X, Y, method = "us")
#' res3
#' res4 <- TwoSampleTest.HD(X, Y, method = "perm")
#' res4
#' # The four methods reject the global null hypothesis.
#' # Hence, we use the individual permutation p-values
#' # to identify which variables are not equally distributed in the two groups.
#' pv<-res4$I.permutation.p.values
#'
#' # Applying a multiple testing procedure to these p-values
#' # we can detect the variables with different distributions for the two groups.
#' # The following plot of the individual permutation p-values is also informative.
#' # We remark in red the 100 smallest p-values.
#'
#' pv_sort <- sort(pv)
#' cri <- pv_sort[100]
#' ind <- which(pv <= cri)
#' plot(1:p, pv, main = "Individual permutation p-values",
#' xlab = "Variables", ylab = "p-values")
#' points(ind, pv[ind], col = "red")
#' }
#' @importFrom utils combn
#' @export
################################################################################
# Two-sample problem
################################################################################
TwoSampleTest.HD <- function(X, Y, method = c("spect", "spect_ind", "boot", "us", "us_ind", "perm"),
I.permutation.p.values = FALSE, b_I.permutation.p.values = c("global", "individual")) {
cat("Call:", "\n")
print(match.call())
if (missing(method)) {
method <- "us"
cat("'us' method used by default\n")
}
if (missing(b_I.permutation.p.values)) {
b_I.permutation.p.values <- "global"
cat("'global' bandwidth used by default\n")
}
method <- match.arg(method)
DNAME <- deparse(substitute(c(X, Y)))
METHOD <- "A two-sample test for the equality of distributions for high-dimensional data"
match.arg(method)
match.arg(b_I.permutation.p.values)
p <- nrow(X)
| /R/TwoSampleTest.HD.r | no_license | cran/TwoSampleTest.HD | R | false | false | 25,726 | r |
#' @title A two-sample test for the equality of distributions for high-dimensional data
#' @aliases TwoSampleTest.HD
#' @description Performs the four tests of equality of the p marginal distributions
#' for two groups proposed in Cousido- Rocha et al.(2018). The methods have been
#' designed for the low sample size and high dimensional setting. Furthermore,
#' the possibility that the p variables in each data set can be weakly dependent is considered.
#' The function also reports a set of p permutation p-values, each of which is derived
#' from testing the equality of distributions in the two groups for each of the variables
#' separately. These p-values are useful when a proposed test rejects the global null
#' hypothesis since it makes it possible to identify which variables have contributed to this significance.
#' @details The function implements the two-sample tests proposed by Cousido-Rocha, et al. (2018).
#' The methods “spect”, “boot” and “us” are based on a global statistic which is the average of p
#' individual statistics corresponding to each of the p variables. Each of these individual statistics measures the difference between
#' the empirical characteristic functions computed from the two samples.
#' An alternative expression shows that each statistic is essentially the integrated
#' squared difference between kernel density estimates. The global statistic (average) is
#' standardized using one of three different variance estimators. The method “spect”
#' uses a variance estimator based on spectral analysis, the method “boot” implements
#' the block bootstrap to estimate the variance and the method “us” employs a variance estimator
#' derived from U-statistic theory (more details in Cousido-Rocha et al., 2018).
#' The methods “spect” and “boot” are suitable under some assumptions including that the
#' sequence of individual statistics that define the global statistic is strictly stationary,
#' whereas the method “us” avoids this assumption. However the methods “spect” and “boot”
#' have been checked in simulations and they perform well even when the stationarity assumption
#' is violated. The methods “spect” and “us” have their corresponding versions for independent
#' data (“spect ind” and “us ind”), for which the variance estimators are simplified taking into
#' acount the independence of the variables. The asymptotic normality (when p tends to infinity)
#' of the standardized version of the statistic is used to compute the corresponding p-value.
#' On the other hand, Cousido-Rocha et al. (2018) also proposed the method “perm” whose global
#' statistic is the average of the permutation p-values corresponding to the individual statistics
#' mentioned above. This method assumes that the sequence of p-values is strictly stationary,
#' however in simulations it seems that it performs well when this assumption does not hold.
#' In addition to providing an alternative global test, these p-values can be used when the
#' global null hypothesis is rejected and one wishes to identify which of the p variables have
#' contributed to that rejection. The global statistic depends on a parameter which plays
#' a role similar to that of a smoothing parameter or bandwidth in kernel density estimation.
#' For the four global tests this parameter is estimated using the information from all the variables or features.
#' For the individual statistics from which the permutation
#' p-values are computed, there are two possibilities: (i) use the value employed in the global test
#' (b_I.permutation.p.values=“global”), (ii) estimate this parameter for each variable
#' separately using only its sample information (b_I.permutation.p.values=“individual”).
#' @param X A matrix where each row is one of the p-samples in the first group.
#' @param Y A matrix where each row is one of the p-samples in the second group.
#' @param method the two-sample test. By default the “us” method is computed. See details.
#' @param I.permutation.p.values Logical. Default is FALSE. A variable indicating whether to compute the permutation p-values or not when the selected method is not “perm”. See details.
#' @param b_I.permutation.p.values The method used to compute the individual statistics on which the permutation p-values are based. Default is “global”. See details.
#'
#' @return A list containing the following components:
#' \item{standardized statistic: }{the value of the standardized statistic.}
#' \item{p.value: }{the p-value for the test.}
#' \item{statistic: }{the value of the statistic.}
#' \item{variance: }{the value of the variance estimator.}
#' \item{p: }{number of samples or populations.}
#' \item{n: }{sample size in the first group.}
#' \item{m: }{sample size in the second group.}
#' \item{method: }{a character string indicating which two sample test is performed.}
#' \item{I.statistics: }{the p individual statistics.}
#' \item{I.permutation.p.values: }{the p individual permutation p-values.}
#' \item{data.name: }{a character string giving the name of the data.}
#'
#' @author
#' \itemize{
#' \item{Marta Cousido-Rocha}
#' \item{José Carlos Soage González}
#' \item{Jacobo de Uña-Álvarez}
#' \item{Jeffrey D. Hart}
#' }
#' @references Cousido-Rocha, M., de Uña-Álvarez J., and Hart, J. (2018).
#' A two-sample test for the equality of marginal distributions for high-dimensional data.
#' Preprint.
#'
#' @examples
#' \dontshow{
#' # Example
#' ### Data set to check the performance of the code
#'
#' p <- 100
#' n <- 5
#' m <- 5
#'
#' X <- matrix(rnorm(p * n), ncol = n)
#' Y <- matrix(rnorm(p * n), ncol = n)
#' system.time(res <- TwoSampleTest.HD(X, Y, method = "perm"))
#' }
#'
#'
#' \donttest{
#' # We consider a simulated data example. We have simulated the following situation.
#' # We have two groups, for example, 7 patients with tumor 1 and 7 patients with tumor 2.
#' # For each patient 1000 variables are measured, for example, gene expression levels.
#' # Besides, the distributions of 100 of the variables are different in the two groups,
#' # and the differences are in terms of location. The variables are independent to
#' # simplify the generation of the data sets.
#' p <- 1000
#' n = m = 7
#' inds <- sample(1:4, p, replace = TRUE)
#' X <- matrix(rep(0, n * p), ncol = n)
#' for (j in 1:p){
#' if (inds[j] == 1){
#' X[j, ] <- rnorm(n)
#' }
#' if (inds[j] == 2){
#' X[j, ] <- rnorm(n, sd = 2)
#' }
#' if (inds[j] == 3){
#' X[j, ] <- rnorm(n, mean = 1)
#' }
#' if (inds[j] == 4){
#' X[j, ] <- rnorm(n, mean = 1, sd = 2)
#' }
#' }
#' rho <- 0.1
#' ind <- sample(1:p, rho * p)
#' li <- length(ind)
#' indsy <- inds
#' for (l in 1:li){
#' if (indsy[ind[l]]==1){
#' indsy[ind[l]]=3
#' } else {
#' if (indsy[ind[l]]==2){
#' indsy[ind[l]]=4
#' } else {
#' if (indsy[ind[l]]==3){
#' indsy[ind[l]]=1
#' } else {
#' indsy[ind[l]] = 2
#' }
#' }
#' }
#' }
#' Y <- matrix(rep(0, m * p), ncol = m)
#' for (j in 1:p){
#' if (indsy[j] == 1){
#' Y[j,] <- rnorm(m)}
#' if (indsy[j] == 2){
#' Y[j, ] <- rnorm(m, sd = 2)
#' }
#' if (indsy[j]==3){
#' Y[j, ] <- rnorm(m, mean = 1)
#' }
#' if (indsy[j] == 4){
#' Y[j,] <- rnorm(m, mean = 1, sd = 2)
#' }
#' }
#'
#' # Our interest is to test the null hypothesis that the distribution of each of the 1000 variables
#' # is the same in the two groups.
#'
#' # We use for this purpose the four methods proposed in Cousido-Rocha et al. (2018).
#'
#' res1 <- TwoSampleTest.HD(X, Y, method = "spect")
#' res1
#' res2 <- TwoSampleTest.HD(X, Y, method = "boot")
#' res2
#' res3 <- TwoSampleTest.HD(X, Y, method = "us")
#' res3
#' res4 <- TwoSampleTest.HD(X, Y, method = "perm")
#' res4
#' # The four methods reject the global null hypothesis.
#' # Hence, we use the individual permutation p-values
#' # to identify which variables are not equally distributed in the two groups.
#' pv<-res4$I.permutation.p.values
#'
#' # Applying a multiple testing procedure to these p-values
#' # we can detect the variables with different distributions for the two groups.
#' # The following plot of the individual permutation p-values is also informative.
#' # We remark in red the 100 smallest p-values.
#'
#' pv_sort <- sort(pv)
#' cri <- pv_sort[100]
#' ind <- which(pv <= cri)
#' plot(1:p, pv, main = "Individual permutation p-values",
#' xlab = "Variables", ylab = "p-values")
#' points(ind, pv[ind], col = "red")
#' }
#' @importFrom utils combn
#' @export
################################################################################
# Two-sample problem
################################################################################
TwoSampleTest.HD <- function(X, Y, method = c("spect", "spect_ind", "boot", "us", "us_ind", "perm"),
I.permutation.p.values = FALSE, b_I.permutation.p.values = c("global", "individual")) {
cat("Call:", "\n")
print(match.call())
if (missing(method)) {
method <- "us"
cat("'us' method used by default\n")
}
if (missing(b_I.permutation.p.values)) {
b_I.permutation.p.values <- "global"
cat("'global' bandwidth used by default\n")
}
method <- match.arg(method)
DNAME <- deparse(substitute(c(X, Y)))
METHOD <- "A two-sample test for the equality of distributions for high-dimensional data"
b_I.permutation.p.values <- match.arg(b_I.permutation.p.values)
p <- nrow(X)
n <- ncol(X)
m <- ncol(Y)
c1 <- 1 / p
c2 <- 1.114 * mean(c(n, m)) ^ (-1 / 5)
sa <- apply(X, 1, var)
sb <- apply(Y, 1, var)
si <- ((n - 1) * unlist(sa) + (m - 1) * unlist(sb)) / (n + m - 2)
spool <- sqrt(c1 * sum(si))
h <- spool * c2
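# The bandwidth above is a Silverman-type rule applied to the standard
# deviation pooled across all p variables. A minimal commented sketch
# (illustrative only, never executed; toy data assumed):
#   x <- rnorm(5); y <- rnorm(5)
#   sp <- sqrt(((5 - 1) * var(x) + (5 - 1) * var(y)) / (5 + 5 - 2))
#   1.114 * 5^(-1/5) * sp  ### bandwidth for a single variable (p = 1)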
y <- c(rep(1, 5), rep(2, 5))
#=============================================================================
## Functions to compute the statistic
#=============================================================================
### Method to choose m
mval <- function(rho, lagmax, kn, rho.crit) {
## Compute the number of insignificant runs following each rho(k),
## k = 1, ..., lagmax.
num.ins <- sapply(1:(lagmax - kn + 1),
function(j) sum((abs(rho) < rho.crit)[j:(j + kn - 1)]))
## If there are any values of rho(k) for which the kn succeeding
## values of rho(k + j), j = 1, ..., kn are all insignificant, take the
## smallest rho(k) such that this holds (see footnote c of
## Politis and White for further details).
if(any(num.ins == kn)){
return(which(num.ins == kn)[1])
} else {
## If no runs of length kn are insignificant, take the smallest
## value of rho(k) that is significant.
if(any(abs(rho) > rho.crit)) {
lag.sig <- which(abs(rho) > rho.crit)
k.sig <- length(lag.sig)
if(k.sig == 1) {
## When only one lag is significant, mhat is the sole
## significant rho(k).
return(lag.sig)
} else {
## If more than one lag is significant but there are no runs
## of length kn, take the largest value of rho(k) that is
## significant.
return(max(lag.sig))
}
} else {
## When there are no significant lags, mhat must be the
## smallest positive integer (footnote c), hence mhat is set
## to one.
return(1)
}
}
}
Lval <- function(x, method = mean) {
x <- matrix(x)
n <- nrow(x)
d <- ncol(x)
## Parameters for adapting the approach of Politis and White (2004)
kn <- max(5, ceiling(log10(n)))
lagmax <- ceiling(sqrt(n)) + kn
rho.crit <- 1.96 * sqrt(log10(n) / n)
m <- numeric(d)
for (i in 1:d) {
rho <- stats::acf(x[,i], lag.max = lagmax, type = "correlation", plot = FALSE)$acf[-1]
m[i] <- mval(rho, lagmax, kn, rho.crit)
}
return(2 * method(m))
}
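# Illustrative usage of the automatic block-length rule (commented so it is
# never run inside the test; input data assumed):
#   Lval(as.matrix(rnorm(200)), method = min)
# The result is 2 * method(m), with m chosen per lag as in Politis and
# White (2004), so for white noise the rule typically returns a small even value.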
##########################################
### Variance estimators (stationary) #####
##########################################
### Spectral variance estimator
variance_spectral <- function(J) {
part2 <- 0
k <- Lval(matrix(J), method = min)
c <- stats::acf(J, type = "covariance", lag.max = k, plot = FALSE)$acf
c0 <- c[1]
c <- c[-1]
for (i in 1:k) {
part2 <- part2 + (1 - (i / (k + 1))) * c[i]
}
statistic <- c0 + 2 * part2
return(statistic)
}
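# The loop above applies Bartlett (triangular) weights 1 - i / (k + 1) to the
# first k autocovariances, so c0 + 2 * part2 is a lag-window estimate of the
# long-run variance of the J sequence.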
# Particular case of independence
variance_spectral_ind <- function(J) {
k <- 1
c <- stats::acf(J, type = "covariance", lag.max = k, plot = FALSE)$acf
statistic <- c[1]
return(statistic)
}
### Variance block bootstrap
variance <- function(pv) {
p <- length(pv)
k <- Lval(pv, method = min)
## Sums over floor(p / k) non-overlapping blocks of length k; their
## rescaled sample variance estimates the variance of the statistic.
nblocks <- floor(p / k)
bootstats <- numeric(nblocks)
for (j in 1:nblocks) {
bootstats[j] <- sum(pv[(k * (j - 1) + 1):(j * k)])
}
varest <- var(bootstats / sqrt(k))
return(varest)
}
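# Worked sketch of the block estimator (hypothetical numbers): with p = 100
# individual statistics and an automatic block length k = 4, the variance is
# computed from floor(100 / 4) = 25 non-overlapping block sums, each divided
# by sqrt(4) so that their sample variance estimates the long-run variance
# used to standardize the global statistic.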
#############################################
### Variance estimator (Non-stationary) ####
#############################################
### Variance estimator based on serfling and time series
VarJhat <- function(x, y, h) {
E1 <- E1hat(c(x, y), h)
E2 <- E2hat(c(x, y), h)
E3 <- E3hat(c(x, y), h)
n <- length(x)
m <- length(y)
c1 <- (n - 2) * (n - 3) / (n * (n - 1))
c1 <- c1 + (m - 2) * (m - 3) / (m * (m - 1))
c1 <- c1 + 4 * (n - 1) * (m - 1) / (n * m)
c1 <- c1 + 2
c1 <- c1 - 4 * (n - 2) / n
c1 <- c1 - 4 * (m - 2) / m
c2 <- -4 / (n * (n - 1)) - 4 / (m * (m - 1)) - 8 / (m * n)
c3 <- 2 / (n * (n - 1)) + 2 / (m * (m - 1)) + 4 / (m * n)
vhat <- c1 * E1 + c2 * E2 + c3 * E3
return(vhat * (4 * pi * h ^ 2))
}
E1hat <- function(Z, h) {
N <- length(Z)
Zmat <- t(matrix(Z, N, N))
Zmat <- Z - Zmat
Zmat <- stats::dnorm(Zmat, sd = sqrt(2) * h)
diag(Zmat) <- 0
E1 <- 0
for (i in 1:(N - 1)) {
for (j in (i + 1):N) {
t1 <- stats::dnorm(Z[i] - Z[j], sd = sqrt(2) * h)
t1 <- sum(t1 * Zmat) - sum(t1 * Zmat[i,] + t1 * Zmat[j,] + t1 * Zmat[, i] +
t1 * Zmat[, j]) + 2 * t1 * Zmat[i, j]
E1 <- E1 + t1
}
}
Number <- N * (N - 1) * (N - 2) * (N - 3) / 2
E1 / Number
}
E2hat <- function(Z, h) {
N <- length(Z)
E2 <- 0
for (i in 1:(N - 1)) {
for (j in (i + 1):N) {
t1 <- stats::dnorm(Z[i] - Z[j], sd = sqrt(2) * h)
vec <- setdiff(1:N, c(i, j))
t2 <- stats::dnorm(Z[i] - Z[vec], sd = sqrt(2) * h)
t3 <- stats::dnorm(Z[j] - Z[vec], sd = sqrt(2) * h)
t1 <- sum(t1 * t2) + sum(t1 * t3)
E2 <- E2 + t1
}
}
Number <- N * (N - 1) * (N - 2)
E2 / Number
}
E3hat <- function(Z, h) {
N <- length(Z)
E3 <- 0
for (i in 1:(N - 1)) {
for (j in (i + 1):N) {
t1 <- stats::dnorm(Z[i] - Z[j], sd = sqrt(2) * h) ^ 2
E3 <- E3 + t1
}
}
Number <- N * (N - 1) / 2
E3 / Number
}
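# E1hat, E2hat and E3hat estimate the three moment-type quantities that enter
# the U-statistic variance of the individual statistic; VarJhat combines them
# with the combinatorial coefficients c1, c2 and c3 defined above.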
### Dirichlet kernel
variance_est_Dirichlet <- function(X, Y, h, J) {
VJ <- 1:p
for (i in 1:p) {
VJ[i] <- VarJhat(X[i,], Y[i,], h)
}
varest1 <- sum(VJ)
part2 <- 0
k <- Lval(matrix(J), method = min)
c <- 1:k
for (i in 1:k) {
sum <- 0
for (j in 1:(p - i)) {
sum <- sum + J[j] * J[j + i]
}
c[i] <- sum
}
for (i in 1:k) {
part2 <- part2 + c[i]
}
statistic <- (1 / p) * (varest1) + (2 / p) * part2
return(statistic)
}
### Particular case of independence
variance_est_ind <- function(X, Y, h) {
VJ <- 1:p
for (i in 1:p) {
VJ[i] <- VarJhat(X[i,], Y[i,], h)
}
varest1 <- sum(VJ)
statistic <- (1 / p) * varest1
return(statistic)
}
### Function Ji
Ji <- function(x, y, h) {
n <- length(x)
m <- length(y)
X <- t(matrix(x, n, n))
X <- x - X
X <- stats::dnorm(X, sd = sqrt(2) * h)
diag(X) <- 0
t1 <- sum(X) / (n * (n - 1))
Y <- t(matrix(y, m, m))
Y <- y - Y
Y <- stats::dnorm(Y, sd = sqrt(2) * h)
diag(Y) <- 0
t2 <- sum(Y) / (m * (m - 1))
XY <- x[1] - y
for (j in 2:n) {
XY <- rbind(XY, x[j] - y)
}
XY <- stats::dnorm(XY, sd = sqrt(2) * h)
t3 <- mean(XY)
(t1 + t2 - 2 * t3) * (sqrt(2 * pi * 2 * h ^ 2))
}
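# Illustrative usage of the individual statistic (commented; Ji is internal
# and the data below are assumed):
#   set.seed(1)
#   Ji(rnorm(5), rnorm(5), h = 0.5)
# Under equal distributions Ji has expectation close to zero; large positive
# values indicate a difference between the two samples.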
A <- function(X, Y, h) {
p <- nrow(X)
Jivalue <- 1:p
for (i in 1:p) {
Jivalue[i] <- Ji(X[i, ], Y[i, ], h)
}
return(Jivalue)
}
# Here we compute the individual statistics Ji; they are reported in the package output
J <- A(X, Y, h)
suma <- function(J) {
p <- length(J)
sum(J) / sqrt(p)
}
# The test statistic before the standardization (also necessary to report in the package)
e <- sum(J) / sqrt(p)
if (method == "boot") {
### Bootstrap
### Variance of the statistic (save in memory and access to it)
var <- variance(J)
### standardized test statistic using the bootstrap method (show)
s <- e / sqrt((var))
### Corresponding p-value (show)
pvalor <- 1 - stats::pnorm(s)
s
}
if(method == "spect") {
### Spectral
### Variance of the statistic (save in memory and access to it)
var_spectral <- variance_spectral(J)
### standardized test statistic using the spectral method (show)
s_spectral <- (e) / sqrt((var_spectral))
### Corresponding p-value (show)
pvalor_spectral <- 1 - stats::pnorm(s_spectral)
s_spectral
}
if(method == "us") {
### Serfling Dirichlet
### Variance of the statistic (save in memory and access to it)
var_est_Dirichlet <- variance_est_Dirichlet(X, Y, h, J)
### standardized test statistic using the Serfling Dirichlet method (show)
s_est_Dirichlet <- e / sqrt((var_est_Dirichlet))
### Corresponding p-value (show)
pvalor_Dirichlet <- 1 - stats::pnorm(s_est_Dirichlet)
s_est_Dirichlet
}
#-----------------------------------------------------------------------------
# Versions for independent data
#-----------------------------------------------------------------------------
if (method == "us_ind") {
### Serfling for independent data (us_ind)
var_est_ind <- variance_est_ind(X, Y, h) ### save in memory
s_est_ind <- e / sqrt(unlist(var_est_ind)) ### show
pvalor_est_ind <- 1 - stats::pnorm(s_est_ind) ### show
}
if (method == "spect_ind") {
### Spectral for independent data (spect_ind)
var_spectral_ind <- variance_spectral_ind(J)### save in memory
s_spectral_ind <- e / sqrt((var_spectral_ind)) ### show
pvalor_spectral_ind <- 1 - stats::pnorm(s_spectral_ind) ### show
}
### In addition to these three methods, we also report one based on p-values
### instead of Ji statistics
###############################
### Permutation test ###
###############################
if(method == "perm" | I.permutation.p.values == TRUE) {
kern.permute <- function(x, y, b) {
m <- length(x)
n <- length(y)
S <- 1:(m + n)
stat <- Ji(x, y, b)
X <- c(x, y)
if (m == n) {
M <- utils::combn(1:(2 * m), m)
Ml <- ncol(M) / 2
stats <- 1:Ml
for (j in 1:Ml) {
xstar <- X[M[, j]]
ystar <- X[M[, 2 * Ml + 1 - j]]
stats[j] <- Ji(xstar, ystar, b)
}
}
if (m != n) {
M <- utils::combn(S, m)
Ml <- ncol(M)
stats <- 1:Ml
for (j in 1:Ml) {
xstar <- X[M[, j]]
S1 <- setdiff(S, M[, j])
ystar <- X[S1]
stats[j] <- Ji(xstar, ystar, b)
}
}
vec <- (1:Ml)[stats >= stat]
list(stat, stats, length(vec) / Ml)
}
kern.permute_1 <- function(x, y, b) {
kern.permute(x, y, b)[3]
}
if(b_I.permutation.p.values == "global"){
permutation_diff_ind <- function(X, Y, h) {
p <- nrow(X)
o <- rep(1, p)
for (i in 1:p) {
o[i] <- as.numeric(kern.permute_1(X[i,], Y[i,], h))
}
return(o)
}
### P-values corresponding to each null hypothesis (in memory and access to them)
pv <- permutation_diff_ind(X, Y, h)
}
if(b_I.permutation.p.values == "individual") {
### The p-values computed with individual bandwidths
sa <- apply(X, 1, var)
sb <- apply(Y, 1, var)
si <- ((n - 1) * unlist(sa) + (m - 1) * unlist(sb)) / (n + m - 2)
h <- sqrt(si) * c2
permutation_diff_ind <- function(X, Y, h) {
p <- nrow(X)
o <- rep(1, p)
for (i in 1:p) {
o[i] <- as.numeric(kern.permute_1(X[i,], Y[i,], h[i]))
}
return(o)
}
### P-values corresponding to each null hypothesis (in memory and access to them)
pv <- permutation_diff_ind(X, Y, h)
}
### Graph kept in memory and accessible if selected
# graphics::plot(1:p, pv, main = "Individual p-values", xlab = "Number of null hypothesis",
# ylab = "p-values")
############################
### Variance estimators ####
############################
variance_spectralR <- function(J) {
part2 <- 0
part3 <- 0
k <- Lval(matrix(J), method = min)
c <- stats::acf(J, type = "covariance", lag.max = k, plot = FALSE)$acf
c0 <- c[1]
c <- c[-1]
for (i in 1:k) {
part2 <- part2 + cos(pi * i / (k + 1)) * c[i]
part3 <- part3 + c[i]
}
statistic <- c0 + 2 * part2
stat <- c0 + 2 * part3
return(max(statistic, stat))
}
### Variance of the statistic
var_pv_sR <- variance_spectralR(pv)
### Non standardized statistic
pv_ <- mean(pv)
N <- n + m
if (m == n) {
nm <- choose(N, n) / 2
} else {
nm <- choose(N, n)
}
### nm is the number of distinct sample splits (possible permutations);
### choose() avoids the overflow of the factorial ratio for large N
mean_p <- ((nm + 1) / nm) * 0.5
### standardized statistic (show)
s_sR <- ((pv_ - mean_p) * sqrt(p)) / sqrt((var_pv_sR))
### Corresponding p-value
pvalor_s_sR <- stats::pnorm(s_sR)
}
statistic <- switch(method, spect = s_spectral, spect_ind = s_spectral_ind,
boot = s, us = s_est_Dirichlet, us_ind = s_est_ind, perm = s_sR)
names(statistic) <- "standardized statistic"
statistic2 <- switch(method, spect = e, spect_ind = e, boot = e, us = e, us_ind = e,
perm = (pv_)*sqrt(p))
p.value <- switch(method, spect = pvalor_spectral, spect_ind = pvalor_spectral_ind,
boot = pvalor, us = pvalor_Dirichlet, us_ind = pvalor_est_ind,
perm = pvalor_s_sR)
met <- method
variance <- switch(method, spect = var_spectral, spect_ind = var_spectral_ind,
boot = var, us = var_est_Dirichlet, us_ind = var_est_ind,
perm = var_pv_sR)
RVAL <- list(statistic = statistic, p.value = p.value, method = METHOD,
data.name = DNAME, sample.size = n, method1 = met)
if(method == "perm"){
RVAL2 <- list(standarized.statistic = statistic, p.value = p.value,
statistic = (pv_)*sqrt(p), variance = variance, p = p, n = n, m = m,
method = met, I.statistics = J, I.permutation.p.values = pv,
data.name = DNAME)
}
if (method != "perm" & I.permutation.p.values == FALSE) {
RVAL2 <- list(standarized.statistic = statistic, p.value = p.value,
statistic = e, variance = variance, p = p, n = n, m = m,
method = met, I.statistics = J, data.name = DNAME)
}
if (method != "perm" & I.permutation.p.values == TRUE) {
RVAL2 <- list(standarized.statistic = statistic, p.value = p.value,
statistic = e, variance = variance, p = p, n = n, m = m,
method = met, I.statistics = J, I.permutation.p.values = pv,
data.name = DNAME)
}
class(RVAL) <- "htest"
print(RVAL)
return(invisible(RVAL2))
}
# Example
#
# ### Data set to check the performance of the code
#
# p <- 100
# n <- 5
# m <- 5
#
# X <- matrix(rnorm(p * n), ncol = n)
# Y <- matrix(rnorm(p * n), ncol = n)
# system.time(res <- TwoSampleTest.HD(X, Y, method = "perm"))
|
\docType{package}
\name{Rgitbook-package}
\alias{Rgitbook-package}
\title{Gitbook Projects with R Markdown}
\description{
Gitbook Projects with R Markdown
}
\details{
This package provides functions for working with
\url{Gitbook.io} projects from within R. Moreover, it
provides support for working with R markdown files using
the \code{knitr} package. Support for MathJax is also
included.
}
\author{
\email{jason@bryer.org}
}
\keyword{Gitbook}
\keyword{Github}
\keyword{Markdown}
\keyword{R}
| /man/Rgitbook-package.Rd | no_license | fabricebrito/Rgitbook | R | false | false | 495 | rd |
|
library(tidyverse)
library(httr)
library(stringr)
library(jsonlite)
library(RCurl)
# Returns a Spotify API access token (client-credentials flow)
getSpotifyToken <- function() {
res <- POST("https://accounts.spotify.com/api/token",
body = list(grant_type = "client_credentials"),
config = add_headers("Authorization" = paste0("Basic ", base64(paste(client_id,client_secret,sep=':'))[[1]])),
encode = "form")
if(res$status_code == 200) {
return(content(res)$access_token)
} else {
return(NA)
}
}
# Fetches the list of tracks from the user's library
getMySporifyLibrary <- function() {
library_tracks <- tibble()
is_next <- TRUE
query_url <- "https://api.spotify.com/v1/me/tracks?limit=50"
while(is_next) {
response <- GET(query_url,
add_headers("Authorization" = paste("Bearer", spotify_token),
"Content-Type" = "application/json",
"Accept" = "application/json"))
if(response$status_code == 200) {
response_json <- rawToChar(response$content) %>% iconv(from = "UTF-8", to = Sys.getenv("CHARSET"))
query_results <- fromJSON(response_json, flatten = TRUE)
library_tracks_tmp <- query_results$items %>%
select(added_at,
track_id = track.id,
track = track.name,
popularity = track.popularity,
duration_s = track.duration_ms,
explicit = track.explicit,
album = track.album.name) %>%
mutate(duration_s = duration_s/1000)
library_tracks_tmp$artist <- map_chr(.f = function(x) x$name %>% unlist %>% str_c(collapse = ", "), query_results$items$track.artists)
library_tracks_tmp$artist_id <- map_chr(.f = function(x) x$id %>% unlist %>% str_c(collapse = ", "), query_results$items$track.artists)
library_tracks <- bind_rows(library_tracks, library_tracks_tmp)
query_url <- query_results$`next`
is_next <- !is.null(query_url)
} else {
return(response$status_code)
}
}
return(library_tracks)
}
# Fetches the audio features of track track_id
getSporifyTrackFeatures <- function(track_id) {
response <- GET(paste0("https://api.spotify.com/v1/audio-features/", track_id),
add_headers("Authorization" = paste("Bearer", spotify_token),
"Content-Type" = "application/json",
"Accept" = "application/json"))
if(response$status_code == 200) {
response_json <- rawToChar(response$content) %>% iconv(from = "UTF-8", to = Sys.getenv("CHARSET"))
query_results <- fromJSON(response_json, flatten = TRUE)
track_features <- tibble(
id = query_results$id,
danceability = query_results$danceability,
energy = query_results$energy,
loudness = query_results$loudness,
mode = query_results$mode,
speechiness = query_results$speechiness,
acousticness = query_results$acousticness,
instrumentalness = query_results$instrumentalness,
liveness = query_results$liveness,
valence = query_results$valence,
tempo = query_results$tempo,
time_signature = query_results$time_signature)
} else {
return(tibble())
}
return(track_features)
}
# Fetches playlist playlist_id of user user_id
getSpotifyUserPlaylist <- function(user_id, playlist_id) {
playlist_tracks <- tibble()
is_next <- TRUE
query_url <- paste0("https://api.spotify.com/v1/users/", user_id, "/playlists/", playlist_id, "?limit=100")
while(is_next) {
response <- GET(query_url,
add_headers("Authorization" = paste("Bearer", spotify_token),
"Content-Type" = "application/json",
"Accept" = "application/json"))
if(response$status_code == 200) {
response_json <- rawToChar(response$content)
query_results <- fromJSON(response_json, flatten = TRUE)
if(!is.null(query_results$tracks)) {
playlist_tracks_tmp <- query_results$tracks$items %>% select(added_at,
track_id = track.id,
track = track.name,
popularity = track.popularity,
duration_s = track.duration_ms,
explicit = track.explicit,
album_id = track.album.id,
album = track.album.name) %>%
mutate(duration_s = duration_s/1000)
playlist_tracks_tmp$artist <- map_chr(.f = function(x) x$name %>% unlist %>% str_c(collapse = ", "), query_results$tracks$items$track.artists)
playlist_tracks_tmp$artist_id <- map_chr(.f = function(x) x$id %>% unlist %>% str_c(collapse = ", "), query_results$tracks$items$track.artists)
query_url <- query_results$tracks$`next`
} else {
playlist_tracks_tmp <- query_results$items %>% select(added_at,
track_id = track.id,
track = track.name,
popularity = track.popularity,
duration_s = track.duration_ms,
explicit = track.explicit,
album_id = track.album.id,
album = track.album.name) %>%
mutate(duration_s = duration_s/1000)
playlist_tracks_tmp$artist <- map_chr(.f = function(x) x$name %>% unlist %>% str_c(collapse = ", "), query_results$items$track.artists)
playlist_tracks_tmp$artist_id <- map_chr(.f = function(x) x$id %>% unlist %>% str_c(collapse = ", "), query_results$items$track.artists)
query_url <- query_results$`next`
}
playlist_tracks <- bind_rows(playlist_tracks, playlist_tracks_tmp)
is_next <- !is.null(query_url)
} else {
return(response$status_code)
}
}
return(playlist_tracks)
}
# Get an Artist
# https://api.spotify.com/v1/artists/{id}
getSpotifyArtist <- function(artist_id) {
response <- GET(paste0("https://api.spotify.com/v1/artists/", artist_id),
add_headers("Authorization" = paste("Bearer", spotify_token),
"Content-Type" = "application/json",
"Accept" = "application/json"))
if(response$status_code == 200) {
response_json <- rawToChar(response$content)
query_results <- fromJSON(response_json, flatten = TRUE)
if(length(query_results$genres) != 0) {
df <- tibble(genres = query_results$genres)
} else {
df <- tibble(genres = NA)
}
df$artist_name <- query_results$name
df$popularity <- query_results$popularity
df$followers <- query_results$followers$total
return(df)
} else {
return(tibble())
}
}
# Get an Artist's Albums
# https://api.spotify.com/v1/artists/{id}/albums
# https://api.spotify.com/v1/artists/0k17h0D3J5VfsdmQ1iZtE9/albums
# Get an Artist's Top Tracks
# https://api.spotify.com/v1/artists/{id}/top-tracks
# https://api.spotify.com/v1/artists/0k17h0D3J5VfsdmQ1iZtE9/top-tracks?country=PL
# Get an Artist's Related Artists
# https://api.spotify.com/v1/artists/{id}/related-artists
# https://api.spotify.com/v1/artists/0k17h0D3J5VfsdmQ1iZtE9/related-artists
# Get an Album
# https://api.spotify.com/v1/albums/{id}
# https://api.spotify.com/v1/albums/0bCAjiUamIFqKJsekOYuRw
# Get an Album's Tracks
# https://api.spotify.com/v1/albums/{id}/tracks
# https://api.spotify.com/v1/albums/0bCAjiUamIFqKJsekOYuRw/tracks
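# The endpoints noted above can be wrapped in the same pattern as the earlier
# functions. Below is a minimal sketch for the related-artists endpoint; the
# function name getSpotifyRelatedArtists is my own, and it assumes the same
# global spotify_token used throughout this file.
getSpotifyRelatedArtists <- function(artist_id) {
  response <- GET(paste0("https://api.spotify.com/v1/artists/", artist_id, "/related-artists"),
                  add_headers("Authorization" = paste("Bearer", spotify_token),
                              "Accept" = "application/json"))
  if (response$status_code == 200) {
    query_results <- fromJSON(rawToChar(response$content), flatten = TRUE)
    # Keep a few useful columns from the returned artists data frame
    tibble(artist_id = query_results$artists$id,
           artist_name = query_results$artists$name,
           popularity = query_results$artists$popularity)
  } else {
    tibble()
  }
}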
| /spotify_functions.R | no_license | Cogitarian/Spotify | R | false | false | 8,431 | r |
|
#R script
library(ensembldb)
library("EnsDb.Hsapiens.v86")
edb <- EnsDb.Hsapiens.v86
tx2gene <- transcripts(edb, columns = c(listColumns(edb , "tx"), "gene_name"), return.type = "DataFrame")
tx2gene <- tx2gene[,c(1,7)]
nm <- c("DP_1","DP_2", "EHB_1","EHB_2","L_1","L_2")
file <- file.path(".",paste0(nm,"quant.sf"))
all(file.exists(file))
names(file) <- nm
library(tximport)
#txi <- tximport(file, type = "salmon", tx2gene = tx2gene)
txi <- tximport(file, type = "salmon", abundanceCol = "TPM" , importer = function(x) read.csv2(x, sep = '\t', header = TRUE), tx2gene = tx2gene, countsCol = "NumReads", ignoreTxVersion = TRUE)
head(txi$counts)
df <- read.csv2("expDesign.csv", sep = '\t', header = T)
rownames(df) <- df$run
#txi$length <- txi$length[!is.na(txi$length)]
txi$length[is.na(txi$length)] <- 1
#txi$length[txi$length == 0] <- 1
#txi$counts[is.na(txi$counts)] <- 0
###### OPTIONAL TRIAL : normalize by quantile ######
library(preprocessCore)
# txi$counts <- normalize.quantiles(txi$counts)
library(DESeq2)
df$batch <- as.factor(df$batch)
ddxi <- DESeqDataSetFromTximport(txi, colData = df, design = ~ condition + batch)
##DESeq analysis
####Prefiltering#####
row_sum_cutoff = 30
pvalue_cutoff = 0.05
lfc_cutoff = 1
keep <- rowSums(counts(ddxi)) >= row_sum_cutoff
ddxi <- ddxi[keep,]
dds <- DESeq(ddxi)
png("dispest.png", width=1200, height = 900) #Plot dispersion (needs the fitted dds)
plotDispEsts(dds)
dev.off()
library(limma)
library(sva)
#dds <- removeBatchEffect(dds)
#PCA plot
vst.vals <-vst(dds, blind = FALSE)
vst.mat <- assay(vst.vals)
pca <- plotPCA(vst.vals, intgroup = c("condition", "batch")) #intgroup must name colData columns
pdf("deseq_vst_pca_batched2.pdf")
pca
dev.off()
vst.norm <- removeBatchEffect(vst.mat, batch = df$batch) #removeBatchEffect needs a matrix, not the DESeqTransform object
pca <- plotPCA(vst.vals, intgroup = c("condition", "batch"))
png("deseq_vst_pca_.png", width=1200, height = 900)
pca
dev.off()
######## individual analysis #########
# res <- results(dds, name="condition_celltype")
# res <- results(dds, contrast=c("condition","treated","untreated"))
# resnames <- resultsNames(dds)
res_DP_EHB <- results(dds, contrast = c("condition", "DP", "EHB"))
res_DP_EHB <- na.omit(res_DP_EHB)
res_DP_EHB.sig <- res_DP_EHB[res_DP_EHB$padj <= pvalue_cutoff,]
res_DP_EHB.sig <- res_DP_EHB.sig[abs(res_DP_EHB.sig$log2FoldChange) >= lfc_cutoff,]
res_EHB_L <- results(dds, contrast = c("condition", "L", "EHB"))
res_EHB_L <- na.omit(res_EHB_L)
res_EHB_L.sig <- res_EHB_L[res_EHB_L$padj <= pvalue_cutoff,]
res_EHB_L.sig <- res_EHB_L.sig[abs(res_EHB_L.sig$log2FoldChange) >= lfc_cutoff,]
res_DP_L <- results(dds, contrast = c("condition", "L", "DP"))
res_DP_L <- na.omit(res_DP_L)
res_DP_L.sig <- res_DP_L[res_DP_L$padj <= pvalue_cutoff,]
res_DP_L.sig <- res_DP_L.sig[abs(res_DP_L.sig$log2FoldChange) >= lfc_cutoff,]
# Extract norm counts and plot PCA
norm.counts <- counts(dds, normalized=TRUE)
write.table(norm.counts, "deseq_normalized_counts.txt", sep="\t", quote=F, col.names = NA)
# Calculate regularized logarithms
#vst.vals <-vst(dds, blind = FALSE)
vst.vals <-vst(dds, blind = FALSE)
vst.mat <- assay(vst.vals)
vst.norm <- removeBatchEffect(vst.mat,df$batch)
#pca plot from normalized
library(factoextra)
res.pca <- prcomp(vst.norm, scale = TRUE)
fviz_pca_ind(res.pca,
col.ind = "cos2", # Color by the quality of representation
gradient.cols = c("#00AFBB", "#E7B800", "#FC4E07"),
repel = TRUE # Avoid text overlapping
)
library("vsn")
ddsBlind <- estimateDispersions( dds )
vsd <- varianceStabilizingTransformation( ddsBlind )
#par(mfrow=c(1,2))
notAllZero = (rowSums(counts(dds))>0)
pdf("log2+1meansd.pdf")
meanSdPlot(log2(counts(dds)[notAllZero, ] + 1))
dev.off()
pdf("vsdmeansd.pdf")
meanSdPlot(assay(vst.vals)[notAllZero, ]) #meanSdPlot expects a matrix
dev.off()
pdf("rawQT.pdf") #pdf() width/height are in inches
meanSdPlot(txi$counts[notAllZero, ])
dev.off()
#rlog
rld <- rlog(dds, blind = FALSE, fitType='mean')
# In sparseTest(counts(object, normalized = TRUE), 0.9, 100, 0.1) :
# the rlog assumes that data is close to a negative binomial distribution, an assumption
# which is sometimes not compatible with datasets where many genes have many zero counts
# despite a few very large counts.
# In this data, for 16.2% of genes with a sum of normalized counts above 100, it was the case
# that a single sample's normalized count made up more than 90% of the sum over all samples.
# the threshold for this warning is 10% of genes. See plotSparsity(dds) for a visualization of this.
# We recommend instead using the varianceStabilizingTransformation or shifted log (see vignette).
######## Blind ###########
# blind controls whether the variables in the design are taken into account
# by the transformation; blind = TRUE is a completely unsupervised transformation
##########################
#analysis plot
library("dplyr")
library("ggplot2")
library("magrittr")
dds <- estimateSizeFactors(dds)
df1 <- bind_rows(
as_data_frame(log2(counts(dds, normalized=TRUE)[, 1:2]+1)) %>%
mutate(transformation = "log2(x + 1)"),
as_data_frame(assay(vst.vals)[, 1:2]) %>% mutate(transformation = "vst"),
as_data_frame(assay(rld)[, 1:2]) %>% mutate(transformation = "rlog"))
colnames(df1)[1:2] <- c("x", "y")
df1 <- data.frame(df1)
png("variance_plotQT.png", width=1200, height = 900)
ggplot(df1, aes(x = x, y = y)) + geom_hex(bins = 80) +
coord_fixed() + facet_grid( . ~ transformation)
dev.off()
##### SAMPLE DISTANCE #####
sampleDists <- dist(t(assay(vst.vals)))
library("pheatmap")
library("RColorBrewer")
#####
pca <- plotPCA(vst.vals, intgroup = "condition")
png("deseq_vst_pcaQT.png", width=1200, height = 900)
pca
dev.off()
### PLOT for top expressed gene #########
#compare DP and EHB
topGene <- rownames(res_DP_EHB)[which.min(res_DP_EHB$padj)]
plotCounts(dds, gene = topGene, intgroup= "condition")
####### Gene cluster ############
library("genefilter")
topVarGenes <- head(order(rowVars(assay(vst.vals)), decreasing = TRUE), 50)
mat <- assay(vst.vals)[ topVarGenes, ]
mat <- mat - rowMeans(mat)
anno <- as.data.frame(colData(vst.vals)[, "condition"])
rownames(anno) <- colnames(vst.vals)
png("geneFilterHeatmapQT50.png", width=1200, height = 900)
pheatmap(mat, annotation_col = anno)
dev.off()
##### MORE HEATMAP #######
cdsFull <- estimateSizeFactors( dds )
cdsFull <- estimateDispersions( cdsFull )
vsdFull = varianceStabilizingTransformation( cdsFull, blind = TRUE ) #blind dispersion estimation is requested via blind = TRUE
library("RColorBrewer")
library("gplots")
select = order(rowMeans(counts(cdsFull)), decreasing=TRUE)[1:50] #selecting more genes yields clearer clades
hmcol = colorRampPalette(brewer.pal(9, "GnBu"))(100)
png("vsdHM.png",width=1200, height = 900)
heatmap.2(assay(vsdFull)[select,], col = hmcol, trace="none", margin=c(10, 6)) #assay(), not exprs(), for a DESeqTransform
dev.off()
#untransformed data
png("ddsHM.png",width=1200, height = 900)
heatmap.2(counts(cdsFull)[select,], col = hmcol, trace="none", margin=c(10,6))
dev.off()
#Account for batch effect
#library(sva)
| /Jennings_rnaseq.R | no_license | nZhangx/2018_IM_Kleger_Project | R | false | false | 6,945 | r |
|
appealsCourt <- function(a,b,c,d,e){
judges <- c(getJud(a),getJud(b),getJud(c),getJud(d),getJud(e))
if(sum(judges)>=3){
1
}else{
0
}
}
# Judge E has no opinion of his own: he always casts the same vote as judge A
appealsCourtE <- function(a,b,c,d){
judgeAE <- getJud(a)
judges <- c(judgeAE,getJud(b),getJud(c),getJud(d),judgeAE)
if(sum(judges)>=3){
1
}else{
0
}
}
# Returns 1 (a correct vote) with probability p, else 0
getJud <- function(p){
if(runif(1)<p){
1
}
else{
0
}
}
# Estimate the probability of an incorrect verdict with five independent judges
normalCase <- c()
for(i in 1:100000){
normalCase<- c(normalCase,appealsCourt(.95,.95,.9,.9,.8))
}
print(1-sum(normalCase)/length(normalCase))
# Estimate the same probability when judge E always votes with judge A
judgeE <- c()
for(j in 1:100000){
judgeE <- c(judgeE, appealsCourtE(.95,.95,.9,.9))
}
print(1-sum(judgeE)/length(judgeE))
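# The Monte Carlo estimate for the independent-judges case can be cross-checked
# by enumerating all 2^5 vote patterns exactly. This is a sketch; the function
# name exactWrongVerdict is my own. The judge-E variant can be enumerated
# similarly by counting judge A's vote twice.
exactWrongVerdict <- function(p) {
  wrong <- 0
  for (votes in 0:31) {                       # enumerate all 2^5 vote patterns
    bits <- as.integer(intToBits(votes))[1:5] # 1 = judge voted correctly
    prob <- prod(ifelse(bits == 1, p, 1 - p))
    if (sum(bits) < 3) wrong <- wrong + prob  # fewer than 3 correct votes
  }
  wrong
}
exactWrongVerdict(c(.95, .95, .9, .9, .8))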
| /16AppealsCourtParadox.R | no_license | alokswaincontact/DigitalDice | R | false | false | 663 | r |
|
#====================================================
# -----List of packages explored-----
#1.dplyr
#2.data.table
#3.ggplot2
#4.reshape2
#5.readr
#6.tidyr
#7.lubridate
#8.purrr
#9.stringr
#10.broom
#11.tibble
#==================1.dplyr===================================
#filter#select#arrange#mutate#summarise(with group_by).
library(dplyr)
data("mtcars")
data("iris")
mydata<-mtcars
head(mydata)
#creating a local dataframe
mynewdata<-tbl_df(mydata)
myirisdata<-tbl_df(iris) #a tbl_df prints in a compact tabular form
filter(mynewdata,cyl>4 & gear>4)
filter(mynewdata,cyl>4)
filter(myirisdata,Species %in% c('setosa','virginica'))
select(mynewdata,cyl,mpg,hp)
select(mynewdata,-cyl,-mpg)
select(mynewdata,-c(cyl,mpg))
select(mynewdata,cyl:gear)
#chaining or pipelining
mynewdata%>%select(cyl,wt,gear)%>%filter(wt>2)
mynewdata%>%select(cyl,wt,gear)%>%arrange(wt)
mynewdata%>%select(cyl,wt,gear)%>%arrange(desc(wt))
#mutate-creating new columns
mynewdata%>%select(mpg,cyl)%>%mutate(newvariable=mpg*cyl) #or
newvariable<-mynewdata%>%mutate(newvariable=mpg*cyl)
#summarize-to find insights from data
myirisdata%>%group_by(Species)%>%summarise(Average=mean(Sepal.Length,na.rm=TRUE))
myirisdata%>%group_by(Species)%>%summarise_each(funs(mean,n()),Sepal.Length,Sepal.Width)
#rename
mynewdata%>%rename(miles=mpg)
#================2.data.table=========================
#fast manipulation compared to data.frame
#DT[i,j,by]: subset rows using i, then calculate j, grouped by the columns in by
data("airquality")
mydata<-airquality
head(airquality,6)
data(iris)
myiris<-iris
library(data.table)
mydata<-data.table(mydata)
myiris<-data.table(myiris)
mydata[2:4,]
myiris[Species=="setosa"]
myiris[Species%in%c('setosa','virginica')]
mydata[,Temp]
mydata[,.(Temp,Month)]
#returns sum of selected column
mydata[,sum(Ozone,na.rm=TRUE)]
mydata[,.(sum(Ozone,na.rm=TRUE),sd(Ozone,na.rm=TRUE))]
#print aand plot
myiris[,{print(Sepal.Length)
plot(Sepal.Width)
NULL}]
#grouping by a variable
myiris[,.(sepalsum=sum(Sepal.Length)),by=Species]
#to select rows by a key value, first set the key on that column
setkey(myiris,Species)
#select all the rows associated with this key value
myiris['setosa']
myiris[c('setosa','virginica')]
#==================3.ggplot2==========================
#can be more powerful when combined with cowplot and gridExtra
library(ggplot2)
library(gridExtra)
library(cowplot)
df<-ToothGrowth
df$dose<-as.factor(df$dose)
head(df)
#boxplot
bp<-ggplot(df,aes(x=dose,y=len,col=dose))+
geom_boxplot()+
theme(legend.position='none')
bp
#add grid lines
bp+background_grid(major="xy",minor="none") #can use theme() instead
#scatterplot
sp<-ggplot(mpg,aes(x=cty,y=hwy,color=factor(cyl)))+
geom_point(size=2.5)
sp
#barplot
bp<-ggplot(diamonds,aes(clarity,fill=cut))+
geom_bar()+
theme(axis.text.x=element_text(angle=70,vjust=0.5))
#compare two plots
plot_grid(sp,bp,labels=c("A","B"),ncol=2,nrow=1)
#histogram
ggplot(diamonds,aes(x=carat))+
geom_histogram(binwidth=0.25,fill='steelblue')+
scale_x_continuous(breaks=seq(0,3,by=0.5))
#=====================4.reshape=========================
#melt and cast
ID<-c(1,2,3,4,5)
Names<-c('Ram','Shyam','Ghanshyam','Radheshyam','jaishyam')
DateofBirth<-c(1993,1992,1993,1994,1992)
Subject<-c('Maths','Biology','Science','Psychology','Physics')
thisdata<-data.frame(ID,Names,DateofBirth,Subject)
data.table(thisdata)
library(reshape2)
mt<-melt(thisdata,id=(c('ID','Names')))
mt
mcast<-dcast(mt,DateofBirth+Subject~variable)
mcast
#==================5.readr============================
#helps read various forms of data in R
#Delimited files with: read_delim(),read_csv(),read_tsv(),read_csv2()
#Fixed width files:read_fwf(),read_table()
#Web log files with: read_log()
library(readr)
read_csv('test.csv',col_names=TRUE)
read_csv("iris.csv",col_types=list(
Sepal.Length=col_double(),
Sepal.Width=col_double(),
Petal.Length=col_double(),
Petal.Width=col_double(),
Species=col_factor(c("setosa","versicolor","virginica"))))
#if you omit a column specification, readr will infer the type automatically
read_csv("iris.csv",col_types=list(Species=col_factor(c("setosa","versicolor","virginica"))))
#Note: write_csv is a lot faster than write.csv()
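#illustrative example of write_csv (writes to the working directory):
write_csv(mtcars,'mtcars.csv')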
#===================6.tidyr=============================
#tidyr and dplyr go hand in hand
#gather,spread(),separate(),unite()
library(tidyr)
names<-c('A','B','C','D','E','A','B')
weight<-c(55,49,76,71,65,44,34)
age<-c(21,20,25,29,33,32,38)
Class<-c('Maths','Science','Social','Physics','Biology',
'Economics','Accounts')
tdata<-data.frame(names,age,weight,Class)
tdata
long_t<-tdata%>%gather(Key,Value,weight:Class)
#---------------------------------------------
Humidity<-c(37.79,42.34,52.16,44.57,43.83,44.59)
Rain<-c(0.97,1.10,1.064475,0.95318,0.988,0.9396)
Time<-c("27/01/2017 15:44","23/02/2017 23:24","31/03/2017 19:17",
"20/01/2017 08:45","23/02/2017 07:46","31/01/2017 01:55")
d_set<-data.frame(Humidity,Rain,Time)
d_set
#separate function to separate date,month,year
separate_d<-d_set%>%separate(Time,c('Date','Month','Year'))
separate_d
#unite-reverse of separate
unite_d<-separate_d%>%unite(Time,c(Date,Month,Year),
sep="/")
unite_d
#spread function - reverse of gather
wide_t<-long_t%>% spread(Key,Value)
wide_t
#==================7.lubridate======================
#makes parsing of dates and times easy.
#update,duration,date extraction functions.
library(lubridate)
n_time<-now()
n_update<-update(n_time,year=2013,month=10)
n_update
#add days,months,year,seconds
d_time<-now()
d_time+ddays(1)
d_time+dweeks(2)
d_time+dyears(3)
d_time+dhours(2)
d_time+dminutes(50)
d_time+dseconds(30)
#extract date,time
#store each component in a plain variable (a POSIXct value cannot hold $ components)
n_hour<-hour(now())
n_year<-year(now())
n_month<-month(now())
n_second<-second(now())
#check the extracted components in separate columns
new_data<-data.frame(n_hour,n_second,n_month,n_year)
new_data
#for best results use all the packages in conjunction
#==================8.purrr============================
library(purrr)
my_list<-list(
c(1,2,6),
c(4,7,1),
c(9,1,5)
)
#find mean of each element
my_list[[1]] %>% mean() # (1+2+6)/3=3
my_list[[2]] %>% mean() #same as mean(my_list[[2]])
my_list[[3]] %>% mean()
#Alternate option
my_list%>%map(mean) #same as lapply(my_list,mean)
#specific map
my_list%>%map_int(mean) #we expect the result as an integer
my_list%>%map_dbl(mean) #use map_dbl since the result is not always an integer
my_list%>%map(~.*2) #own function. . stand for elements in the list.
#other maps
my_list%>%map(is.numeric) #T T T
my_list%>%map_lgl(is.numeric) #T T T
my_list%>%map_chr(is.numeric)#"TRUE","TRUE","TRUE"
my_list%>%map_int(is.numeric)#1,1,1
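#illustrative example: map2() iterates over two lists in parallel
my_list2<-list(c(1,1,1),c(2,2,2),c(3,3,3))
map2(my_list,my_list2,~.x+.y) #element-wise sums of the paired vectors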
#==================9.stringr==========================
library(stringr)
str_trim(" Hello World ")
#[1] "Hello World"
str_pad("old",width=4,side="left",pad="g") #width must exceed the string length for padding
#[1] "gold"
toupper("Hello Word")
#[1] "HELLO WORD"
tolower("Hello Word")
#[1] "hello word"
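#a few more commonly used stringr helpers (illustrative examples)
str_detect("Hello World","World")
#[1] TRUE
str_replace("Hello World","World","R")
#[1] "Hello R"
str_sub("Hello World",1,5)
#[1] "Hello"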
x<-c(1:10,20,30)
x[!x %in% boxplot.stats(x)$out] #to remove outliers
#[1] 1 2 3 4 5 6 7 8 9 10
#-------------------------------------------------------
today<-as.Date(Sys.Date()) #today's date
xmas<-as.Date("2017-12-25")
daysleft<- today-xmas
daysleft #9 days on 2018-01-03
format(xmas,"%d/%m/%y") #"25/12/17"
#%d day of month(0-31)
#%a abbreviated day of week
#%m month as a number(1-12)
#%b abbreviated name of month
#%B full name of month
#%y two digit year
#%Y four digit year
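#for example, combining the codes above (weekday/month names are locale-dependent):
format(xmas,"%A, %d %B %Y") #e.g. "Monday, 25 December 2017"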
#===============10.broom===================================
#Three ways we can get information out of model objects
#tidy() : component-level statistics
#augment(): observation-level statistics (example: residuals, fitted values)
#glance(): model-level statistics (a single row summarising the whole model)
#Model objects are messy
library(broom)
data("mtcars")
names(mtcars)
lmfit<-lm(mpg~wt+qsec,mtcars) #linear fit
summary(lmfit) #messy
tidy(lmfit)
augment(lmfit) #each row is an observation from the original data
glance(lmfit) #always one row data frame
#broom takes model objects and turns them into
#tidy data frames that can be used with tidy tools.
#can be used with many models#
#Means you can take model output and manipulate and plot it with dplyr and ggplot2
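#illustrative example: broom works the same way with other model classes, e.g. glm
glmfit<-glm(am~wt,data=mtcars,family=binomial)
tidy(glmfit)
glance(glmfit)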
#==========================11.tibble===============================================
library(tibble)
tdf<-tibble(x=1:1e2,y=rnorm(1e2)) #creates a data frame; same as data_frame(x=1:1e2,y=rnorm(1e2))
#they print nicely
tdf
print(tdf,n=30) #prints only 30 rows
#does not do partial matching (base R does)
#type consistency i.e. it always returns the same type
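#illustrative comparison with a base data.frame:
df2<-data.frame(height=c(150,160))
df2$h #partial matching returns the height column
tb2<-tibble(height=c(150,160))
tb2$h #NULL (with a warning): tibbles do not partial-match
df2[,"height"] #drops to a plain vector
tb2[,"height"] #stays a tibble - type consistency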
#==================================================================================
testlist <- list(testX = c(191493125665849920, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), trainX = structure(c(1.78844646178735e+212, 1.93075223605916e+156, 121373.193669204, 1.26689771433298e+26, 2.46020195254853e+129, 8.54794497535107e-83, 2.61907806894971e-213, 1.5105425626729e+200, 6.51877713351675e+25, 4.40467528702727e-93, 7.6427933587945, 34208333744.1307, 1.6400690920442e-111, 3.9769673154778e-304, 4.76127371594362e-307, 8.63819952335095e+122, 1.18662128550178e-59, 1128.83285802937, 3.80478583615452e-72, 1.21321365773924e-195, 9.69744674150153e-268, 8.98899319496613e+272, 7.63669788330223e+285, 3.85830749537493e+266, 2.65348875902107e+136, 8.14965241967603e+92, 2.59677146539475e-173, 1.55228780425777e-91, 8.25550184376779e+105, 1.18572662524891e+134, 1.04113208597565e+183, 1.01971211553913e-259, 1.23680594512923e-165, 5.24757023065221e+62, 3.41816623042713e-96 ), .Dim = c(5L, 7L)))
result <- do.call(dann:::calc_distance_C,testlist)
str(result)
#' Bayesian inference for capture-recapture analysis with emphasis on behavioural effect modelling
#'
#' Bayesian inference for a large class of discrete-time capture-recapture models under closed population with special emphasis on behavioural effect modelling including also the \emph{meaningful behavioral covariate} approach proposed in Alunni Fegatelli (2013) [PhD thesis]. Many of the standard classical models such as \eqn{M_0}, \eqn{M_b}, \eqn{M_{c_1}}, \eqn{M_t} or \eqn{M_{bt}} can be regarded as particular instances of the aforementioned approach. Other flexible alternatives can be fitted through a careful choice of a meaningful behavioural covariate and a possible partition of its admissible range.
#'
#' @usage
#' BBRecap (data,last.column.count=FALSE, neval = 1000, by.incr = 1,
#' mbc.function = c("standard","markov","counts","integer","counts.integer"),
#' mod = c("linear.logistic", "M0", "Mb", "Mc", "Mcb", "Mt", "Msubjective.cut",
#' "Msubjective"), nsim = 5000, burnin = round(nsim/10),
#' nsim.ML = 1000, burnin.ML = round(nsim.ML/10), num.t = 50,
#' markov.ord=NULL, prior.N = c("Rissanen","Uniform","one.over.N","one.over.N2"),
#' meaningful.mat.subjective = NULL, meaningful.mat.new.value.subjective = NULL,
#' z.cut=NULL, output = c("base", "complete", "complete.ML"))
#'
#' @param data can be one of the following:
#' \enumerate{
#' \item an \eqn{M} by \eqn{t} binary matrix/data.frame. In this case the input is interpreted as a matrix whose rows contain individual capture histories for all \eqn{M} observed units
#' \item a matrix/data.frame with \eqn{(t+1)} columns. The first \eqn{t} columns contain binary entries corresponding to capture occurrences, while the last column contains non negative integers corresponding to frequencies. This format is allowed only when \code{last.column.count} is set to \code{TRUE}
#' \item a \eqn{t}-dimensional array or table representing the counts of the \eqn{2^t} contingency table of binary outcomes
#' }
#' \eqn{M} is the number of units captured at least once and \eqn{t} is the number of capture occasions.
#' @param last.column.count a logical. In the default case \code{last.column.count=FALSE} each row of the input argument \code{data} represents the complete capture history for each observed unit. When \code{last.column.count} is set to \code{TRUE} in each row the first \eqn{t} entries represent one of the observed complete capture histories and the last entry in the last column is the number of observed units with that capture history
#' @param neval a positive integer. \code{neval} is the number of values of the population size \eqn{N} where the posterior is evaluated starting from \eqn{M}. The default value is \code{neval}=1000.
#' @param by.incr a positive integer. \code{by.incr} is the increment of the sequence of values of the population size \eqn{N} at which the posterior is evaluated, starting from \eqn{M}. The default value is \code{by.incr=1}.
#' @param mbc.function a character string with possible entries (see Alunni Fegatelli (2013) for further details)
#' \enumerate{
#' \item \code{"standard"} meaningful behavioural covariate in [0,1] obtained through the normalized binary representation of integers relying upon partial capture history
#' \item \code{"markov"} slight modification of \code{"standard"} providing consistency with arbitrary Markov order models when used in conjunction with the options \code{"Msubjective"} and \code{z.cut}.
#' \item \code{"counts"} covariate in [0,1] obtained by normalizing the integer corresponding to the sum of binary entries i.e. the number of previous captures
#' \item \code{"integer"} un-normalized integer corresponding to the binary entries of the partial capture history
#' \item \code{"counts.integer"} un-normalized covariate obtained as the sum of binary entries i.e. the number of previous captures
#' }
#' @param mod a character. \code{mod} represents the behavioural model considered for the analysis. \code{mod="linear.logistic"} is the model proposed in Alunni Fegatelli (2013) based on the meaningful behavioural covariate. \code{mod="M0"} is the most basic model where no effect is considered and all capture probabilities are the same. \code{mod="Mb"} is the classical behavioural model where the capture probability varies only once when first capture occurs. Hence it represents an \emph{enduring} effect to capture. \code{mod="Mc"} is the \emph{ephemeral} behavioural Markovian model originally introduced in Yang and Chao (2005) and subsequently extended in Farcomeni (2011) and reviewed in Alunni Fegatelli and Tardella (2012) where capture probability depends only on the capture status (captured or uncaptured) in the previous \code{k=markov.ord} occasions. \code{mod="Mcb"} is an extension of Yang and Chao's model (2005); it considers both \emph{ephemeral} and \emph{enduring} effect to capture. \code{mod="Mt"} is the standard temporal effect with no behavioural effect. \code{mod="Msubjective.cut"} is an alternative behavioural model obtained through a specific cut on the meaningful behavioural covariate interpreted as memory effect. \code{mod="Msubjective"} is a customizable (subjective) behavioural model within the linear logistic model framework requiring the specification of the two additional arguments: the first one is \code{meaningful.mat.subjective} and contains an \eqn{M} by \eqn{t} matrix of ad-hoc meaningful covariates depending on previous capture history; the second one is \code{meaningful.mat.new.value.subjective} and contains a vector of length \eqn{t} corresponding to meaningful covariates for a generic uncaptured unit. The default value for \code{mod} is \code{"linear.logistic".}
#' @param nsim a positive integer. \code{nsim} is the number of iterations for the Metropolis-within-Gibbs algorithm which allows the approximation of the posterior. It is considered only if \code{mod} is \code{"linear.logistic"} or \code{"Msubjective"}. In the other cases closed form evaluation of the posterior is available up to a proportionality constant. The default value is \code{nsim=10000.}
#' @param burnin a positive integer. \code{burnin} is the initial number of MCMC samples discarded. It is considered only if \code{mod} is \code{"linear.logistic"} or \code{"Msubjective"}. The default value for \code{burnin} is \code{round(nsim/10).}
#' @param nsim.ML a positive integer. Whenever MCMC is needed \code{nsim.ML} is the number of iterations used in the marginal likelihood estimation procedure via the power posterior method of Friel and Pettitt (2008).
#' @param burnin.ML a positive integer. Whenever MCMC is needed \code{burnin.ML} is the initial number of samples discarded for marginal likelihood estimation via power-posterior approach. The default value is \code{burnin.ML} is \code{round(nsim/10)}.
#' @param num.t a positive integer. Whenever MCMC is needed \code{num.t} is the number of powers used in the power posterior approximation method for the marginal likelihood evaluation.
#' @param markov.ord a positive integer. \code{markov.ord} is the order of Markovian model \eqn{M_c} or \eqn{M_{cb}}. It is considered only if \code{mod="Mc"} or \code{mod="Mcb"}.
#' @param prior.N a character. \code{prior.N} is the label for the prior distribution for \eqn{N}. When \code{prior.N} is set to \code{"Rissanen"} (default) the Rissanen prior is used as a prior on \eqn{N}. This distribution was first proposed in Rissanen 1983 as a universal prior on integers. \code{prior.N="Uniform"} stands for a prior on \eqn{N} proportional to a constant value. \code{prior.N="one.over.N"} stands for a prior on \eqn{N} proportional to \eqn{1/N}. \code{prior.N="one.over.N2"} stands for a prior on \eqn{N} proportional to \eqn{1/N^2}.
#' @param meaningful.mat.subjective \code{M x t} matrix containing numerical covariates to be used for a customized logistic model approach
#' @param meaningful.mat.new.value.subjective \code{1 x t} numerical vector corresponding to auxiliary covariate to be considered for unobserved unit
#' @param z.cut numeric vector. \code{z.cut} is a vector containing the cut point for the memory effect covariate. It is considered only if \code{mod="Msubjective.cut"}
#' @param output a character. \code{output} selects the kind of output from a very basic summary info on the posterior output (point and interval estimates for the unknown \eqn{N}) to more complete details including MCMC simulations for all parameters in the model when appropriate.
#'
#' @details
#' Independent uniform distributions are considered as default priors for the nuisance parameters. If \code{model="linear.logistic"} or \code{model="Msubjective"} and \code{output="complete.ML"} the marginal likelihood estimation is performed through the \emph{power posterior method} suggested in Friel and Pettitt (2008). In that case the \code{BBRecap} procedure is computationally intensive for high values of \code{neval} and \code{nsim}.
#'
#' @return
#' \item{Model}{model considered}
#' \item{Prior}{prior distribution for \eqn{N}}
#' \item{N.hat.mean}{posterior mean for \eqn{N}}
#' \item{N.hat.median}{posterior median for \eqn{N}}
#' \item{N.hat.mode}{posterior mode for \eqn{N}}
#' \item{N.hat.RMSE}{minimizer of a specific loss function connected with the Relative Mean Square Error}
#' \item{HPD.N}{\eqn{95 \%} highest posterior density interval estimate for \eqn{N}}
#' \item{log.marginal.likelihood}{log marginal likelihood}
#' \item{N.range}{values of N considered}
#' \item{posterior.N}{values of the posterior distribution for each N considered}
#' \item{z.matrix}{meaningful behavioural covariate matrix for the observed data}
#' \item{vec.cut}{cut points used to set up meaningful partitions of the set of partial capture histories according to the value of the meaningful behavioural covariate}
#' \item{N.vec}{simulated values from the posterior marginal distribution of N}
#' \item{mean.a0}{posterior mean of the parameter a0}
#' \item{hpd.a0}{highest posterior density interval estimate of the parameter a0}
#' \item{a0.vec}{simulated values from the posterior marginal distribution of a0}
#' \item{mean.a1}{posterior mean of the parameter a1}
#' \item{hpd.a1}{highest posterior density interval estimate of the parameter a1}
#' \item{a1.vec}{simulated values from the posterior marginal distribution of a1}
#'
#' @references
#' Otis D. L., Burnham K. P., White G. C, Anderson D. R. (1978) Statistical Inference From Capture Data on Closed Animal Populations, Wildlife Monographs.
#'
#' Yang H.C., Chao A. (2005) Modeling animals' behavioral response by Markov chain models for capture-recapture experiments, Biometrics 61(4), 1010-1017
#'
#' Friel N., Pettitt A. N. (2008) Marginal likelihood estimation via power posteriors. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(3), 589-607
#'
#' Farcomeni A. (2011) Recapture models under equality constraints for the conditional capture probabilities. Biometrika 98(1):237--242
#'
#' Alunni Fegatelli, D. and Tardella, L. (2012) Improved inference on capture recapture models with behavioural effects. Statistical Methods & Applications, Volume 22, Issue 1, pp 45-66. DOI 10.1007/s10260-012-0221-4
#'
#' Alunni Fegatelli D. (2013) New methods for capture-recapture modelling with behavioural response and individual heterogeneity. http://hdl.handle.net/11573/918752
#'
#' @author Danilo Alunni Fegatelli and Luca Tardella
#'
#' @seealso \code{\link{BBRecap.custom.part}}, \code{\link{LBRecap}}
#'
#' @examples
#'
#' \dontrun{
#' data(greatcopper)
#'
#' mod.Mb=BBRecap(greatcopper,mod="Mb")
#' str(mod.Mb)
#' }
#'
#' @keywords Behavioural_models Bayesian_inference
#' @export
BBRecap=function(
data,
last.column.count=FALSE,
neval=1000,
by.incr=1,
mbc.function=c("standard","markov","counts","integer","counts.integer"),
mod=c("linear.logistic","M0","Mb","Mc","Mcb","Mt","Msubjective.cut","Msubjective"),
nsim=5000,
burnin=round(nsim/10),
nsim.ML=1000,
burnin.ML=round(nsim.ML/10),
num.t=50,
markov.ord=NULL,
prior.N=c("Rissanen","Uniform","one.over.N","one.over.N2"),
meaningful.mat.subjective=NULL,
meaningful.mat.new.value.subjective=NULL,
z.cut=NULL,
output=c("base","complete","complete.ML")){
log.marg.likelihood=NULL
mod=match.arg(mod)
prior.N=match.arg(prior.N)
output=match.arg(output)
mbc.function=match.arg(mbc.function)
model=switch(mod,
linear.logistic="mod.linear.logistic",
M0="mod.M0",
Mb="mod.Mb",
Mc="mod.Mc",
Mcb="mod.Mcb",
Mt="mod.Mt",
Msubjective.cut="mod.Msubjective.cut",
Msubjective="mod.Msubjective")
prior=switch(prior.N,
Rissanen="Rissanen.prior",
Uniform="Uniform.prior",
one.over.N="1overN.prior",
one.over.N2="1overN^2.prior")
qb.function=switch(mbc.function,
standard="qb.standard",
markov="qb.markov",
integer="qb.integer",
counts="qb.count",
counts.integer="qb.count.integer")
if((mod!="Msubjective.cut") & length(z.cut)>0){
warning("z.cut without mod='Msubjective.cut' will be IGNORED!")
}
if(!(any(c("data.frame","matrix","array","table") %in% class(data)))){
stop("input data must be a data.frame or a matrix object or an array")
}
data.matrix=data
if( !("matrix" %in% class(data)) & any(c("array","table") %in% class(data))){
n.occ=length(dim(data))
mm=matrix(ncol=n.occ,nrow=2^n.occ)
for(i in 1:2^n.occ){
mm[i,]=as.numeric(intToBits(i-1)>0)[1:n.occ]
}
data.vec=c(data)
data[1]=0
dd=c()
for(i in 1:length(data)){
dd=c(dd,rep(mm[i,],data.vec[i]))
}
data.matrix=matrix(dd,ncol=length(dim(data)),byrow=T)
}
if(!("matrix" %in% class(data))){
data.matrix=as.matrix(data)
}
if(last.column.count){
if (any(data[,ncol(data)]<0)){
stop("Last column must contain non negative frequencies/counts")
}
data=as.matrix(data)
data.matrix=matrix(ncol=(ncol(data)-1))
for(i in 1:nrow(data)){
d=rep(data[i,1:(ncol(data)-1)],(data[i,ncol(data)]))
dd=matrix(d,ncol=(ncol(data)-1),byrow=T)
data.matrix=rbind(dd,data.matrix)
}
data.matrix=data.matrix[1:(nrow(data.matrix)-1),]
}
if(any(data.matrix!=0 & data.matrix!=1)) stop("data must be a binary matrix")
if(sum(apply(data.matrix,1,sum)==0)){
warning("input data argument contains rows with all zeros: these rows will be removed and ignored")
data.matrix=data.matrix[apply(data.matrix,1,sum)!=0,]
}
t=ncol(data.matrix)
M=nrow(data.matrix)
logistic=0
if(model=="mod.Mc" | model=="mod.Mcb"){
if (!is.numeric(markov.ord) || markov.ord != round(markov.ord) || markov.ord <= 0 || markov.ord > t)
stop("The Markov order must be a positive integer not greater than t!")
mbc.function="markov"
z.cut=seq(0,1,by=1/(2^markov.ord))
z.cut[1]=-0.1
if(model=="mod.Mcb"){
z.cut=c(z.cut[1],0,z.cut[-1])
}
}
if(qb.function=="qb.standard") recodebinary=quant.binary
if(qb.function=="qb.integer") recodebinary=quant.binary.integer
if(qb.function=="qb.markov") recodebinary=quant.binary.markov
if(qb.function=="qb.count") recodebinary=quant.binary.counts
if(qb.function=="qb.count.integer") recodebinary=quant.binary.counts.integer
#####
v=c()
meaningful.mat=matrix(NA,ncol=ncol(data.matrix),nrow=M)
for(i in 1:ncol(data.matrix)){
for(j in 1:M){
v=data.matrix[j,1:i]
v=paste(v,collapse="")
if(qb.function=="qb.markov"){
meaningful.mat[j,i]=recodebinary(v,markov.ord)}
else{meaningful.mat[j,i]=recodebinary(v)}
}
}
meaningful.mat=cbind(rep(0,M),meaningful.mat[,1:(t-1)])
meaningful.mat.new.value=rep(0,t*by.incr)
if(model=="mod.M0"){
z.cut=c(-0.1,1)
}
if(model=="mod.Mb"){
z.cut=c(-0.1,0,1)
}
if(model=="mod.Mt"){
z.cut=seq(0,t)
meaningful.mat=matrix(1:t,nrow=M,ncol=t,byrow=T)
meaningful.mat.new.value=1:t
}
if(model=="mod.Msubjective"){
if(is.matrix(meaningful.mat.subjective)){
if(ncol(meaningful.mat.subjective)!=ncol(data.matrix)||
nrow(meaningful.mat.subjective)!=nrow(data.matrix))
stop("meaningful.mat.subjective has different dimensions from data matrix")
meaningful.mat=meaningful.mat.subjective
}
else{ stop("meaningful.mat.subjective must be a numerical matrix") }
if(is.numeric(meaningful.mat.new.value.subjective)){
meaningful.mat.vec.new=meaningful.mat.new.value.subjective
}
}
meaningful.mat.vec=c(meaningful.mat)
y=c(data.matrix)
nn.1=c()
nn.0=c()
coeff.vec1=c()
coeff.vec2=c()
log.post.N=c()
post.N=c()
vv=c()
prior.inv.const=0
prior.distr.N=function(x){
if(prior=="Rissanen.prior") {out=(rissanen(x))}
if(prior=="Uniform.prior") {out=(1/(x^0))}
if(prior=="1overN.prior") {out=(1/(x^1))}
if(prior=="1overN^2.prior") {out=(1/(x^2))}
return(out)
}
if(model=="mod.linear.logistic" | model=="mod.Msubjective"){logistic=1}
if(logistic==0){
if(output=="complete.ML"){
output="complete"
}
for(k in 1:neval){
meaningful.mat.new=rbind(meaningful.mat,matrix(rep(meaningful.mat.new.value,(k>1)),nrow=((k-1) *by.incr),ncol=t,byrow=T))
dati.new=rbind(data.matrix,matrix(0,nrow=((k-1)*by.incr),ncol=t))
meaningful.mat.vec.new=c(meaningful.mat.new)
y.new=c(dati.new)
for(j in 1:(length(z.cut)-1)){
if(j==1){nn.0[j]=sum(meaningful.mat.vec.new>=z.cut[j] &
meaningful.mat.vec.new<=z.cut[j+1] & y.new==0)}
else{nn.0[j]=sum(meaningful.mat.vec.new>z.cut[j] &
meaningful.mat.vec.new<=z.cut[j+1] & y.new==0)}
if(j==1){nn.1[j]=sum(meaningful.mat.vec.new>=z.cut[j] &
meaningful.mat.vec.new<=z.cut[j+1] & y.new==1)}
else{nn.1[j]=sum(meaningful.mat.vec.new>z.cut[j] &
meaningful.mat.vec.new<=z.cut[j+1] & y.new==1)}
}
prior.inv.const=prior.inv.const+ prior.distr.N((M+k-1))
log.post.N[k]=lchoose((sum(nn.1)+sum(nn.0))/t,M)+sum(lbeta((nn.1+1),(nn.0+1)))+log(prior.distr.N((M+k-1)))
}
# closed k cycle
l.max=max(log.post.N)
for(k in 1:neval){
vv[k]<-exp(log.post.N[k]-l.max)
}
ss=sum(vv)
post.N=vv/ss
if(output=="complete"){
if(model=="mod.Mt"){
Exp.beta=matrix(ncol=(length(z.cut)-1),nrow=neval)
for(i in 1:neval){
Exp.beta[i,]=(nn.1+1)/(nn.1+nn.0-(neval-i)+2)
}
pH.post.mean=apply((post.N*Exp.beta),2,sum)
}
else{
Exp.beta=matrix(ncol=(length(z.cut)-1),nrow=neval)
for(i in 1:neval){
Exp.beta[i,]=(nn.1+1)/(nn.1+nn.0+2)
}
rr=sort(0:(neval-1)*t,decreasing=T)
Exp.beta[,1]=Exp.beta[,1]*(nn.1[1]+nn.0[1]+2)/(nn.1[1]+nn.0[1]-rr+2)
pH.post.mean=apply((post.N*Exp.beta),2,sum)
}
names(pH.post.mean)=paste("pH",1:(length(z.cut)-1),sep="")
}
val=seq(M,((M+(neval-1)*by.incr)),by=by.incr)
ord=order(post.N,decreasing=T)
p.max=ord[1]
mode.N=val[p.max]
mean.N<-round(sum(post.N*val))
median.N=val[which(cumsum(post.N)>0.5)[1]]
funzioneRMSE=function(x){
sum((((x/val)-1)^2)*post.N)
}
estimate.N=round(optimize(funzioneRMSE,c(min(val),max(val)))$minimum)
### Credible Set ###
alpha<-0.05
g=0
d=0
aa=c()
ordine<-order(post.N,decreasing=T)
w<-val
w<-w[ordine]
p<-post.N
p<-p[ordine]
for(k in 1:neval) {
if (g<(1-alpha)) {g=g+p[k]
d=d+1}
}
aa<-w[1:d]
inf.lim<-min(aa)
sup.lim<-max(aa)
log.marg.likelihood=log(sum(exp(log.post.N-max(log.post.N)-log(prior.inv.const))))+max(log.post.N)
}
## ARMS
if(model=="mod.linear.logistic" | model=="mod.Msubjective"){logistic=1}
if(logistic==1){
print("implementation via ARMS")
### LOG-PRIOR N
log.prior.N=function(x){
if(prior=="Rissanen.prior") {out=log(rissanen(x))}
if(prior=="Uniform.prior") {out=0}
if(prior=="1overN.prior") {out=log(1/(x^1))}
if(prior=="1overN^2.prior") {out=log(1/(x^2))}
return(out)
}
### LOG-LIKELIHOOD
if(logistic==1){
log.likelihood=function(NN,aa0,aa1){
z=matrix(0,ncol=t,nrow=NN)
x=matrix(0,ncol=t,nrow=NN)
z[(1:M),]=meaningful.mat
x[(1:M),]=data.matrix
l = x*log(exp(aa0+aa1*z)/(1+exp(aa0+aa1*z)))-(1-x)*log((1+exp(aa0+aa1*z)))
out=sum(l) +lchoose(NN,M)
return(out)
}
###
N.vec=c(M,rep(0,(nsim-1)))
a0.vec=c(0,rep(0,(nsim-1)))
a1.vec=c(0,rep(0,(nsim-1)))
### MODIFIED-LOG-LIKELIHOOD
log.likelihood.t=function(NN,aa0,aa1,tt){
z=matrix(0,ncol=t,nrow=NN)
x=matrix(0,ncol=t,nrow=NN)
z[(1:M),]=meaningful.mat
x[(1:M),]=data.matrix
l = x*log(exp(aa0+aa1*z)/(1+exp(aa0+aa1*z)))-(1-x)*log((1+exp(aa0+aa1*z)))
out=tt*(sum(l) +lchoose(NN,M))
return(out)
}
###
for(g in 2:(nsim+1)){
N.vec[g]=arms(
N.vec[g-1],
function(N) log.prior.N(N)+log.likelihood(N,a0.vec[g-1],a1.vec[g-1]),
function(N) (N>=M)*(N<=M+neval),
1
)
N.vec[g]=round(N.vec[g])
a0.vec[g]=arms(
a0.vec[g-1],
function(a0) log.likelihood(N.vec[g],a0,a1.vec[g-1]),
function(a0) (a0>-5)*(a0<=5),
1
)
a1.vec[g]=arms(
a1.vec[g-1],
function(a1) log.likelihood(N.vec[g],a0.vec[g],a1),
function(a1) (a1>-5)*(a1<=5),
1
)
}
mean.N=round(mean(N.vec[-(1:burnin)]))
mean.a0=mean(a0.vec[-(1:burnin)])
mean.a1=mean(a1.vec[-(1:burnin)])
median.N=median(N.vec[-(1:burnin)])
mode.N=as.numeric(names(sort(table(N.vec[-(1:burnin)]))[length(table(N.vec[-(1:burnin)]))]))
val=seq(M,(M+neval))
# normalize simulated counts into a proper posterior over val=M:(M+neval)
post.N=(table(c(N.vec,M:(M+neval)))-1)/length(N.vec)
funzioneRMSE=function(x){
sum((((x/val)-1)^2)*post.N)
}
estimate.N=round(optimize(funzioneRMSE,c(min(val),max(val)))$minimum)
HPD.interval=function(x.vec){
obj=density(x=x.vec,from=min(x.vec),to=max(x.vec),n=diff(range(x.vec))+1,kernel="rectangular")
density.estimation.N=obj$y
names(density.estimation.N)=obj$x
ordered.x.vec=x.vec[order(density.estimation.N[as.character(x.vec)],decreasing=T)]
HPD.interval=range(ordered.x.vec[1:round(quantile(1:length(x.vec),prob=c(0.95)))])
return(HPD.interval)
}
HPD.N=HPD.interval(N.vec[-(1:burnin)])
HPD.a0=HPD.interval(a0.vec[-(1:burnin)])
HPD.a1=HPD.interval(a1.vec[-(1:burnin)])
if(output=="complete.ML"){
output="complete"
a=seq(0.0016,1,length=num.t)
a=a^4
l=matrix(ncol=length(a),nrow=nsim.ML)
for(j in 1:length(a)){
print(paste("ML evaluation via power-posterior method of Friel & Pettitt (2008) ; t =",j,"of",length(a)))
N.vec.ML=c(M,rep(0,(nsim-1)))
a0.vec.ML=c(0,rep(0,(nsim-1)))
a1.vec.ML=c(0,rep(0,(nsim-1)))
for(g in 2:(nsim.ML+1)){
N.vec.ML[g]=arms(
N.vec.ML[g-1],
function(N) log.prior.N(N)+log.likelihood.t(N,a0.vec.ML[g-1],a1.vec.ML[g-1],a[j]),
function(N) (N>=M)*(N<=M+neval),
1
)
N.vec.ML[g]=round(N.vec.ML[g])
a0.vec.ML[g]=arms(
a0.vec.ML[g-1],
function(a0) log.likelihood.t(N.vec.ML[g],a0,a1.vec.ML[g-1],a[j]),
function(a0) (a0>-5)*(a0<=5),
1
)
a1.vec.ML[g]=arms(
a1.vec.ML[g-1],
function(a1) log.likelihood.t(N.vec.ML[g],a0.vec.ML[g],a1,a[j]),
function(a1) (a1>-5)*(a1<=5),
1
)
l[g-1,j]=log.likelihood(N.vec.ML[g],a0.vec.ML[g],a1.vec.ML[g])
}
}
ll=apply(l,2,mean)
# trapezoidal rule over the temperature grid: integrates the expected
# log-likelihood in t, per the power-posterior identity for the log marginal likelihood
v=c()
for(k in 1:(length(a)-1)){
v[k]=(a[k+1]-a[k])*(ll[k+1]+ll[k])/2
}
log.marg.likelihood=sum(v)
}
}
}
if(logistic==0){
if(mod=="M0" | mod=="Mb" | mod=="Mt" | mod=="Mc" | mod=="Mcb"){
out=switch(output,
base=
list(Model=model,prior.N=prior.N,N.hat.RMSE=estimate.N,HPD.N=c(inf.lim,sup.lim)),
complete=
list(Model=model,prior.N=prior.N,N.hat.mean=mean.N,N.hat.median=median.N,N.hat.mode=mode.N,N.hat.RMSE=estimate.N,HPD.N=c(inf.lim,sup.lim),pH.post.mean=pH.post.mean, N.range=val,posterior.N=post.N,z.matrix=meaningful.mat,vec.cut=z.cut,log.marginal.likelihood=log.marg.likelihood)
)
}
else{
out=switch(output,
base=
list(Model=model,mbc.function=mbc.function,prior.N=prior.N,N.hat.RMSE=estimate.N,HPD.N=c(inf.lim,sup.lim)),
complete=
list(Model=model,mbc.function=mbc.function,prior.N=prior.N,N.hat.mean=mean.N,N.hat.median=median.N,N.hat.mode=mode.N,N.hat.RMSE=estimate.N,HPD.N=c(inf.lim,sup.lim),pH.post.mean=pH.post.mean, N.range=val,posterior.N=post.N,z.matrix=meaningful.mat,vec.cut=z.cut,log.marginal.likelihood=log.marg.likelihood
) )}
}
if(logistic==1) {
if(mod=="Msubjective"){
out=switch(output,
base=
list(Model=model,prior.N=prior.N,N.hat.RMSE=estimate.N,HPD.N=HPD.N),
complete=
list(Model=model,prior.N=prior.N,N.hat.mean=mean.N,N.hat.median=median.N,N.hat.mode=mode.N,N.hat.RMSE=estimate.N,HPD.N=HPD.N,N.vec=N.vec,mean.a0=mean.a0,HPD.a0=HPD.a0,a0.vec=a0.vec,mean.a1=mean.a1,HPD.a1=HPD.a1,a1.vec=a1.vec,z.matrix=meaningful.mat,log.marginal.likelihood=log.marg.likelihood))
}
else{
out=switch(output,
base=
list(Model=model,mbc.function=mbc.function,prior.N=prior.N,N.hat.RMSE=estimate.N,HPD.N=HPD.N),
model=
list(Model=model,mbc.function=mbc.function,prior.N=prior.N,N.hat.mean=mean.N,N.hat.median=median.N,N.hat.mode=mode.N,N.hat.RMSE=estimate.N,HPD.N=HPD.N,N.vec=N.vec,mean.a0=mean.a0,HPD.a0=HPD.a0,a0.vec=a0.vec,mean.a1=mean.a1,HPD.a1=HPD.a1,a1.vec=a1.vec,z.matrix=meaningful.mat),
complete=
list(Model=model,mbc.function=mbc.function,prior.N=prior.N,N.hat.mean=mean.N,N.hat.median=median.N,N.hat.mode=mode.N,N.hat.RMSE=estimate.N,HPD.N=HPD.N,N.vec=N.vec,mean.a0=mean.a0,HPD.a0=HPD.a0,a0.vec=a0.vec,mean.a1=mean.a1,HPD.a1=HPD.a1,a1.vec=a1.vec,z.matrix=meaningful.mat,log.marginal.likelihood=log.marg.likelihood))}
}
return(out)
}
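
## ----------------------------------------------------------------------
## Usage sketch (editor's addition, hedged): the calls below mirror the
## roxygen @examples for BBRecap and assume the `greatcopper` dataset
## shipped with the BBRecapture package. They are kept commented out so
## that sourcing this file does not execute them.
##
## data(greatcopper)
## mod.Mb <- BBRecap(greatcopper, mod = "Mb")             # enduring behavioural effect
## str(mod.Mb)
## mod.M0 <- BBRecap(greatcopper, mod = "M0",
##                   prior.N = "Uniform")                 # no behavioural effect
## str(mod.M0)
## ----------------------------------------------------------------------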
| /R/BBRecap.R | no_license | lucatardella/BBRecapture | R | false | false | 27,124 | r | #' Bayesian inference for capture-recapture analysis with emphasis on behavioural effect modelling
#'
#' Bayesian inference for a large class of discrete-time capture-recapture models under closed population with special emphasis on behavioural effect modelling including also the \emph{meaningful behavioral covariate} approach proposed in Alunni Fegatelli (2013) [PhD thesis]. Many of the standard classical models such as \eqn{M_0}, \eqn{M_b}, \eqn{M_{c_1}}, \eqn{M_t} or \eqn{M_{bt}} can be regarded as particular instances of the aforementioned approach. Other flexible alternatives can be fitted through a careful choice of a meaningful behavioural covariate and a possible partition of its admissible range.
#'
#' @usage
#' BBRecap (data,last.column.count=FALSE, neval = 1000, by.incr = 1,
#' mbc.function = c("standard","markov","counts","integer","counts.integer"),
#' mod = c("linear.logistic", "M0", "Mb", "Mc", "Mcb", "Mt", "Msubjective.cut",
#' "Msubjective"), nsim = 5000, burnin = round(nsim/10),
#' nsim.ML = 1000, burnin.ML = round(nsim.ML/10), num.t = 50,
#' markov.ord=NULL, prior.N = c("Rissanen","Uniform","one.over.N","one.over.N2"),
#' meaningful.mat.subjective = NULL, meaningful.mat.new.value.subjective = NULL,
#' z.cut=NULL, output = c("base", "complete", "complete.ML"))
#'
#' @param data can be one of the following:
#' \enumerate{
#' \item an \eqn{M} by \eqn{t} binary matrix/data.frame. In this case the input is interpreted as a matrix whose rows contain individual capture histories for all \eqn{M} observed units
#' \item a matrix/data.frame with \eqn{(t+1)} columns. The first \eqn{t} columns contain binary entries corresponding to capture occurrences, while the last column contains non negative integers corresponding to frequencies. This format is allowed only when \code{last.column.count} is set to \code{TRUE}
#' \item a \eqn{t}-dimensional array or table representing the counts of the \eqn{2^t} contingency table of binary outcomes
#' }
#' \eqn{M} is the number of units captured at least once and \eqn{t} is the number of capture occasions.
#' @param last.column.count a logical. In the default case \code{last.column.count=FALSE} each row of the input argument \code{data} represents the complete capture history for each observed unit. When \code{last.column.count} is set to \code{TRUE} in each row the first \eqn{t} entries represent one of the observed complete capture histories and the last entry in the last column is the number of observed units with that capture history
#' @param neval a positive integer. \code{neval} is the number of values of the population size \eqn{N} where the posterior is evaluated starting from \eqn{M}. The default value is \code{neval}=1000.
#' @param by.incr a positive integer. \code{nsim} is the number of iterations for the Metropolis-within-Gibbs algorithm which allows the approximation of the posterior. It is considered only if \code{mod} is \code{"linear.logistic"} or \code{"Msubjective"}. In the other cases closed form evaluation of the posterior is available up to a proportionality constant. The default value is \code{nsim=10000.}
#' @param mbc.function a character string with possible entries (see Alunni Fegatelli (2013) for further details)
#' \enumerate{
#' \item \code{"standard"} meaningful behavioural covariate in [0,1] obtained through the normalized binary representation of integers relying upon partial capture history
#' \item \code{"markov"} slight modification of \code{"standard"} providing consistency with arbitrary Markov order models when used in conjunction with the options \code{"Msubjective"} and \code{z.cut}.
#' \item \code{"counts"} covariate in [0,1] obtained by normalizing the integer corresponding to the sum of binary entries i.e. the number of previous captures
#' \item \code{"integer"} un-normalized integer corresponding to the binary entries of the partial capture history
#' \item \code{"counts.integer"} un-normalized covariate obtained as the sum of binary entries i.e. the number of previous captures
#' }
#' @param mod a character. \code{mod} represents the behavioural model considered for the analysis. \code{mod="linear.logistic"} is the model proposed in Alunni Fegatelli (2013) based on the meaningful behavioural covariate. \code{mod="M0"} is the most basic model where no effect is considered and all capture probabilities are the same. \code{mod="Mb"} is the classical behavioural model where the capture probability varies only once when first capture occurs. Hence it represents an \emph{enduring} effect to capture. \code{mod="Mc"} is the \emph{ephemeral} behavioural Markovian model originally introduced in Yang and Chao (2005) and subsequently extended in Farcomeni (2011) and reviewed in Alunni Fegatelli and Tardella (2012) where capture probability depends only on the capture status (captured or uncaptured) in the previous \code{k=markov.ord} occasions. \code{mod="Mcb"} is an extension of Yang and Chao's model (2005); it considers both \emph{ephemeral} and \emph{enduring} effect to capture. \code{mod="Mt"} is the standard temporal effect with no behavioural effect. \code{mod="Msubjective.cut"} is an alternative behavioural model obtained through a specific cut on the meaningful behavioural covariate interpreted as memory effect. \code{mod="Msubjective"} is a customizable (subjective) behavioural model within the linear logistic model framework requiring the specification of the two additional arguments: the first one is \code{meaningful.mat.subjective} and contains an \eqn{M} by \eqn{t} matrix of ad-hoc meaningful covariates depending on previous capture history; the second one is \code{meaningful.mat.new.value.subjective} and contains a vector of length \eqn{t} corresponding to meaningful covariates for a generic uncaptured unit. The default value for \code{mod} is \code{"linear.logistic".}
#' @param nsim a positive integer. \code{nsim} is the number of iterations for the Metropolis-within-Gibbs algorithm which allows the approximation of the posterior. It is considered only if \code{mod} is \code{"linear.logistic"} or \code{"Msubjective"}. In the other cases closed form evaluation of the posterior is available up to a proportionality constant. The default value is \code{nsim=10000.}
#' @param burnin a positive integer. \code{burnin} is the initial number of MCMC samples discarded. It is considered only if \code{mod} is \code{"linear.logistic"} or \code{"Msubjective"}. The default value for \code{burnin} is \code{round(nsim/10).}
#' @param nsim.ML a positive integer. Whenever MCMC is needed \code{nsim.ML} is the number of iterations used in the marginal likelihood estimation procedure via power posterior method of Friel and Pettit (2008)
#' @param burnin.ML a positive integer. Whenever MCMC is needed \code{burnin.ML} is the initial number of samples discarded for marginal likelihood estimation via power-posterior approach. The default value is \code{burnin.ML} is \code{round(nsim/10)}.
#' @param num.t a positive integer. Whenever MCMC is needed \code{num.t} is the number of powers used in the power posterior approximation method for the marginal likelihood evaluation.
#' @param markov.ord a positive integer. \code{markov.ord} is the order of Markovian model \eqn{M_c} or \eqn{M_{cb}}. It is considered only if \code{mod="Mc"} or \code{mod="Mcb"}.
#' @param prior.N a character. \code{prior.N} is the label for the prior distribution for \eqn{N}. When \code{prior.N} is set to \code{"Rissanen"} (default) the Rissanen prior is used as a prior on \eqn{N}. This distribution was first proposed in Rissanen 1983 as a universal prior on integers. \code{prior.N="Uniform"} stands for a prior on \eqn{N} proportional to a constant value. \code{prior.N="one.over.N"} stands for a prior on \eqn{N} proportional to \eqn{1/N}. \code{prior.N="one.over.N2"} stands for a prior on \eqn{N} proportional to \eqn{1/N^2}.
#' @param meaningful.mat.subjective \code{M x t} matrix containing numerical covariates to be used for a customized logistic model approach
#' @param meaningful.mat.new.value.subjective \code{1 x t} numerical vector corresponding to auxiliary covariate to be considered for unobserved unit
#' @param z.cut numeric vector. \code{z.cut} is a vector containing the cut point for the memory effect covariate. It is considered only if \code{mod="Msubjective.cut"}
#' @param output a character. \code{output} selects the kind of output from a very basic summary info on the posterior output (point and interval estimates for the unknown \eqn{N}) to more complete details including MCMC simulations for all parameters in the model when appropriate.
#'
#' @details
#' Independent uniform distributions are considered as default prior for the nuisance parameters. If \code{model="linear.logistic"} or \code{model="Msubjective"} and \code{output="complete.ML"} the marginal likelihood estimation is performed through the \emph{power posteriors method} suggested in Friel and Pettit (2008). In that case the \code{BBRecap} procedure is computing intensive for high values of \code{neval} and \code{nsim}.
#'
#' @return
#' \item{Model}{model considered}
#' \item{Prior}{prior distribution for \eqn{N}}
#' \item{N.hat.mean}{posterior mean for \eqn{N}}
#' \item{N.hat.median}{posterior median for \eqn{N}}
#' \item{N.hat.mode}{posterior mode for \eqn{N}}
#' \item{N.hat.RMSE}{minimizer of a specific loss function connected with the Relative Mean Square Error}
#' \item{HPD.N}{\eqn{95 \%} highest posterior density interval estimate for \eqn{N}}
#' \item{log.marginal.likelihood}{log marginal likelihood}
#' \item{N.range}{values of N considered}
#' \item{posterior.N}{values of the posterior distribution for each N considered}
#' \item{z.matrix}{meaningful behavioural covariate matrix for the observed data}
#' \item{vec.cut}{cut point used to set up meaningful partitions the set of the partial capture histories according to the value of the value of the meaningful behavioural covariate}
#' \item{N.vec}{simulated values from the posterior marginal distribution of N}
#' \item{mean.a0}{posterior mean of the parameter a0}
#' \item{hpd.a0}{highest posterior density interval estimate of the parameter a0}
#' \item{a0.vec}{simulated values from the posterior marginal distribution of a0}
#' \item{mean.a1}{posterior mean of the parameter a1}
#' \item{hpd.a1}{highest posterior density interval estimate of the parameter a1}
#' \item{a1.vec}{simulated values from the posterior marginal distribution of a1}
#'
#' @references
#' Otis D. L., Burnham K. P., White G. C, Anderson D. R. (1978) Statistical Inference From Capture Data on Closed Animal Populations, Wildlife Monographs.
#'
#' Yang H.C., Chao A. (2005) Modeling animals behavioral response by Markov chain models for capture-recapture experiments, Biometrics 61(4), 1010-1017
#'
#' N. Friel and A. N. Pettitt. Marginal likelihood estimation via power posteriors. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(3):589, 607--2008
#'
#' Farcomeni A. (2011) Recapture models under equality constraints for the conditional capture probabilities. Biometrika 98(1):237--242
#'
#' Alunni Fegatelli, D. and Tardella, L. (2012) Improved inference on capture recapture models with behavioural effects. Statistical Methods & Applications Applications Volume 22, Issue 1, pp 45-66 10.1007/s10260-012-0221-4
#'
#' Alunni Fegatelli D. (2013) New methods for capture-recapture modelling with behavioural response and individual heterogeneity. http://hdl.handle.net/11573/918752
#'
#' @author Danilo Alunni Fegatelli and Luca Tardella
#'
#' @seealso \code{\link{BBRecap.custom.part}}, \code{\link{LBRecap}}
#'
#' @examples
#'
#' \dontrun{
#' data(greatcopper)
#'
#' mod.Mb=BBRecap(greatcopper,mod="Mb")
#' str(mod.Mb)
#' }
#'
#' @keywords Behavioural_models Bayesian_inference
#' @export
BBRecap=function(
data,
last.column.count=FALSE,
neval=1000,
by.incr=1,
mbc.function=c("standard","markov","counts","integer","counts.integer"),
mod=c("linear.logistic","M0","Mb","Mc","Mcb","Mt","Msubjective.cut","Msubjective"),
nsim=5000,
burnin=round(nsim/10),
nsim.ML=1000,
burnin.ML=round(nsim.ML/10),
num.t=50,
markov.ord=NULL,
prior.N=c("Rissanen","Uniform","one.over.N","one.over.N2"),
meaningful.mat.subjective=NULL,
meaningful.mat.new.value.subjective=NULL,
z.cut=NULL,
output=c("base","complete","complete.ML")){
log.marg.likelihood=NULL
mod=match.arg(mod)
prior.N=match.arg(prior.N)
output=match.arg(output)
mbc.function=match.arg(mbc.function)
model=switch(mod,
linear.logistic="mod.linear.logistic",
M0="mod.M0",
Mb="mod.Mb",
Mc="mod.Mc",
Mcb="mod.Mcb",
Mt="mod.Mt",
Msubjective.cut="mod.Msubjective.cut",
Msubjective="mod.Msubjective")
prior=switch(prior.N,
Rissanen="Rissanen.prior",
Uniform="Uniform.prior",
one.over.N="1overN.prior",
one.over.N2="1overN^2.prior")
qb.function=switch(mbc.function,
standard="qb.standard",
markov="qb.markov",
integer="qb.integer",
counts="qb.count",
counts.integer="qb.count.integer")
if((mod!="Msubjective.cut") & length(z.cut)>0){
warning("z.cut without mod='Msubjective.cut' will be IGNORED!")
}
if(!(any(c("data.frame","matrix","array","table") %in% class(data)))){
stop("input data must be a data.frame or a matrix object or an array")
}
data.matrix=data
if( !("matrix" %in% class(data)) & any(c("array","table") %in% class(data))){
n.occ=length(dim(data))
mm=matrix(ncol=n.occ,nrow=2^n.occ)
for(i in 1:2^n.occ){
mm[i,]=as.numeric(intToBits(i-1)>0)[1:n.occ]
}
data.vec=c(data)
data[1]=0
dd=c()
for(i in 1:length(data)){
dd=c(dd,rep(mm[i,],data.vec[i]))
}
data.matrix=matrix(dd,ncol=length(dim(data)),byrow=T)
}
if(!("matrix" %in% class(data))){
data.matrix=as.matrix(data)
}
if(last.column.count){
if (any(data[,ncol(data)]<0)){
stop("Last column must contain non negative frequencies/counts")
}
data=as.matrix(data)
data.matrix=matrix(ncol=(ncol(data)-1))
for(i in 1:nrow(data)){
d=rep(data[i,1:(ncol(data)-1)],(data[i,ncol(data)]))
dd=matrix(d,ncol=(ncol(data)-1),byrow=T)
data.matrix=rbind(dd,data.matrix)
}
data.matrix=data.matrix[1:(nrow(data.matrix)-1),]
}
if(any(data.matrix!=0 & data.matrix!=1)) stop("data must be a binary matrix")
if(sum(apply(data.matrix,1,sum)==0)){
warning("input data argument contains rows with all zeros: these rows will be removed and ignored")
data.matrix=data.matrix[apply(data.matrix,1,sum)!=0,]
}
t=ncol(data.matrix)
M=nrow(data.matrix)
logistic=0
if(model=="mod.Mc" | model=="mod.Mcb"){
if (!is.numeric(markov.ord) || markov.ord != round(markov.ord) || markov.ord <= 0 || markov.ord > t)
stop("The Markov order must be a non-negative integer smaller than t!")
mbc.function="markov"
z.cut=seq(0,1,by=1/(2^markov.ord))
z.cut[1]=-0.1
if(model=="mod.Mcb"){
z.cut=c(z.cut[1],0,z.cut[-1])
}
}
if(qb.function=="qb.standard") recodebinary=quant.binary
if(qb.function=="qb.integer") recodebinary=quant.binary.integer
if(qb.function=="qb.markov") recodebinary=quant.binary.markov
if(qb.function=="qb.count") recodebinary=quant.binary.counts
if(qb.function=="qb.count.integer") recodebinary=quant.binary.counts.integer
#####
v=c()
meaningful.mat=matrix(NA,ncol=ncol(data.matrix),nrow=M)
for(i in 1:ncol(data.matrix)){
for(j in 1:M){
v=data.matrix[j,1:i]
v=paste(v,collapse="")
if(qb.function=="qb.markov"){
meaningful.mat[j,i]=recodebinary(v,markov.ord)}
else{meaningful.mat[j,i]=recodebinary(v)}
}
}
meaningful.mat=cbind(rep(0,M),meaningful.mat[,1:t-1])
meaningful.mat.new.value=rep(0,t*by.incr)
if(model=="mod.M0"){
z.cut=c(-0.1,1)
}
if(model=="mod.Mb"){
z.cut=c(-0.1,0,1)
}
if(model=="mod.Mt"){
z.cut=seq(0,t)
meaningful.mat=matrix(rep(1:t),nrow=M,ncol=t,byrow=T)
meaningful.mat.new.value=1:t
}
if(model=="mod.Msubjective"){
if(is.matrix(meaningful.mat.subjective)){
if(ncol(meaningful.mat.subjective)!=ncol(data.matrix)||
nrow(meaningful.mat.subjective)!=nrow(data.matrix))
stop("meaningful.mat.subjective has different dimensions from data matrix")
meaningful.mat=meaningful.mat.subjective
}
else{ stop("meaningful.mat.subjective must be a numerical matrix") }
if(is.numeric(meaningful.mat.new.value.subjective)){
meaningful.mat.vec.new=meaningful.mat.new.value.subjective
}
}
meaningful.mat.vec=c(meaningful.mat)
y=c(data.matrix)
nn.1=c()
nn.0=c()
coeff.vec1=c()
coeff.vec2=c()
log.post.N=c()
post.N=c()
vv=c()
prior.inv.const=0
prior.distr.N=function(x){
if(prior=="Rissanen.prior") {out=(rissanen(x))}
if(prior=="Uniform.prior") {out=(1/(x^0))}
if(prior=="1overN.prior") {out=(1/(x^1))}
if(prior=="1overN^2.prior") {out=(1/(x^2))}
return(out)
}
if(model=="mod.linear.logistic" | model=="mod.Msubjective"){logistic=1}
if(logistic==0){
if(output=="complete.ML"){
output="complete"
}
for(k in 1:neval){
meaningful.mat.new=rbind(meaningful.mat,matrix(rep(meaningful.mat.new.value,(k>1)),nrow=((k-1) *by.incr),ncol=t,byrow=T))
dati.new=rbind(data.matrix,matrix(0,nrow=((k-1)*by.incr),ncol=t))
meaningful.mat.vec.new=c(meaningful.mat.new)
y.new=c(dati.new)
for(j in 1:(length(z.cut)-1)){
if(j==1){nn.0[j]=sum(meaningful.mat.vec.new>=z.cut[j] &
meaningful.mat.vec.new<=z.cut[j+1] & y.new==0)}
else{nn.0[j]=sum(meaningful.mat.vec.new>z.cut[j] &
meaningful.mat.vec.new<=z.cut[j+1] & y.new==0)}
if(j==1){nn.1[j]=sum(meaningful.mat.vec.new>=z.cut[j] &
meaningful.mat.vec.new<=z.cut[j+1] & y.new==1)}
else{nn.1[j]=sum(meaningful.mat.vec.new>z.cut[j] &
meaningful.mat.vec.new<=z.cut[j+1] & y.new==1)}
}
prior.inv.const=prior.inv.const+ prior.distr.N((M+k-1))
log.post.N[k]=lchoose((sum(nn.1)+sum(nn.0))/t,M)+sum(lbeta((nn.1+1),(nn.0+1)))+log(prior.distr.N((M+k-1)))
}
# closed k cycle
l.max=max(log.post.N)
for(k in 1:neval){
vv[k]<-exp(log.post.N[k]-l.max)
}
ss=sum(vv)
post.N=vv/ss
if(output=="complete"){
if(model=="mod.Mt"){
Exp.beta=matrix(ncol=(length(z.cut)-1),nrow=neval)
for(i in 1:neval){
Exp.beta[i,]=(nn.1+1)/(nn.1+nn.0-(neval-i)+2)
}
pH.post.mean=apply((post.N*Exp.beta),2,sum)
}
else{
Exp.beta=matrix(ncol=(length(z.cut)-1),nrow=neval)
for(i in 1:neval){
Exp.beta[i,]=(nn.1+1)/(nn.1+nn.0+2)
}
rr=sort(0:(neval-1)*t,decreasing=T)
Exp.beta[,1]=Exp.beta[,1]*(nn.1[1]+nn.0[1]+2)/(nn.1[1]+nn.0[1]-rr+2)
pH.post.mean=apply((post.N*Exp.beta),2,sum)
}
names(pH.post.mean)=paste("pH",1:(length(z.cut)-1),sep="")
}
val=seq(M,((M+(neval-1)*by.incr)),by=by.incr)
ord=order(post.N,decreasing=T)
p.max=ord[1]
mode.N=val[p.max]
mean.N<-round(sum(post.N*val))
median.N=M
g<-c()
for(k in 1:neval) {
g=c(g,post.N[k])
if (sum(g)<=0.5) median.N=median.N+1
}
funzioneRMSE=function(x){
sum((((x/val)-1)^2)*post.N)
}
estimate.N=round(optimize(funzioneRMSE,c(min(val),max(val)))$minimum)
### Credible Set ###
alpha<-0.05
g=0
d=0
aa=c()
ordine<-order(post.N,decreasing=T)
w<-val
w<-w[ordine]
p<-post.N
p<-p[ordine]
for(k in 1:neval) {
if (g<(1-alpha)) {g=g+p[k]
d=d+1}
}
aa<-w[1:d]
inf.lim<-min(aa)
sup.lim<-max(aa)
log.marg.likelihood=log(sum(exp(log.post.N-max(log.post.N)-log(prior.inv.const))))+max(log.post.N)
}
## ARMS
if(model=="mod.linear.logistic" | model=="mod.Msubjective"){logistic=1}
if(logistic==1){
print("implementation via ARMS")
### LOG-PRIOR N
log.prior.N=function(x){
if(prior=="Rissanen.prior") {out=log(rissanen(x))}
if(prior=="Uniform.prior") {out=0}
if(prior=="1overN.prior") {out=log(1/(x^1))}
if(prior=="1overN^2.prior") {out=log(1/(x^2))}
return(out)
}
### LOG-LIKELIHOOD
if(logistic==1){
log.likelihood=function(NN,aa0,aa1){
z=matrix(0,ncol=t,nrow=NN)
x=matrix(0,ncol=t,nrow=NN)
z[(1:M),]=meaningful.mat
x[(1:M),]=data.matrix
l = x*log(exp(aa0+aa1*z)/(1+exp(aa0+aa1*z)))-(1-x)*log((1+exp(aa0+aa1*z)))
out=sum(l) +lchoose(NN,M)
return(out)
}
###
N.vec=c(M,rep(0,(nsim-1)))
a0.vec=c(0,rep(0,(nsim-1)))
a1.vec=c(0,rep(0,(nsim-1)))
### MODIFIED-LOG-LIKELIHOOD
log.likelihood.t=function(NN,aa0,aa1,tt){
z=matrix(0,ncol=t,nrow=NN)
x=matrix(0,ncol=t,nrow=NN)
z[(1:M),]=meaningful.mat
x[(1:M),]=data.matrix
l = x*log(exp(aa0+aa1*z)/(1+exp(aa0+aa1*z)))-(1-x)*log((1+exp(aa0+aa1*z)))
out=tt*(sum(l) +lchoose(NN,M))
return(out)
}
###
for(g in 2:(nsim+1)){
N.vec[g]=arms(
N.vec[g-1],
function(N) log.prior.N(N)+log.likelihood(N,a0.vec[g-1],a1.vec[g-1]),
function(N) (N>=M)*(N<=M+neval),
1
)
N.vec[g]=round(N.vec[g])
a0.vec[g]=arms(
a0.vec[g-1],
function(a0) log.likelihood(N.vec[g],a0,a1.vec[g-1]),
function(a0) (a0>-5)*(a0<=5),
1
)
a1.vec[g]=arms(
a1.vec[g-1],
function(a1) log.likelihood(N.vec[g],a0.vec[g],a1),
function(a1) (a1>-5)*(a1<=5),
1
)
}
mean.N=round(mean(N.vec[-(1:burnin)]))
mean.a0=mean(a0.vec[-(1:burnin)])
mean.a1=mean(a1.vec[-(1:burnin)])
median.N=median(N.vec[-(1:burnin)])
mode.N=as.numeric(names(sort(table(N.vec[-(1:burnin)]))[length(table(N.vec[-(1:burnin)]))]))
val=seq(M,(M+neval))
post.N=(table(c(N.vec,M:(M+neval)))-1)/sum(N.vec)
funzioneRMSE=function(x){
sum((((x/val)-1)^2)*post.N)
}
estimate.N=round(optimize(funzioneRMSE,c(min(val),max(val)))$minimum)
HPD.interval=function(x.vec){
obj=density(x=x.vec,from=min(x.vec),to=max(x.vec),n=diff(range(x.vec))+1,kernel="rectangular")
density.estimation.N=obj$y
names(density.estimation.N)=obj$x
ordered.x.vec=x.vec[order(density.estimation.N[as.character(x.vec)],decreasing=T)]
HPD.interval=range(ordered.x.vec[1:round(quantile(1:length(x.vec),prob=c(0.95)))])
return(HPD.interval)
}
HPD.N=HPD.interval(N.vec[-(1:burnin)])
HPD.a0=HPD.interval(a0.vec[-(1:burnin)])
HPD.a1=HPD.interval(a1.vec[-(1:burnin)])
if(output=="complete.ML"){
output="complete"
a=seq(0.0016,1,length=num.t)
a=a^4
l=matrix(ncol=length(a),nrow=nsim.ML)
for(j in 1:length(a)){
print(paste("ML evaluation via power-posterior method of Friel & Pettit (2008) ; t =",j,"of",length(a)))
N.vec.ML=c(M,rep(0,(nsim-1)))
a0.vec.ML=c(0,rep(0,(nsim-1)))
a1.vec.ML=c(0,rep(0,(nsim-1)))
for(g in 2:(nsim.ML+1)){
N.vec.ML[g]=arms(
N.vec.ML[g-1],
function(N) log.prior.N(N)+log.likelihood.t(N,a0.vec.ML[g-1],a1.vec.ML[g-1],a[j]),
function(N) (N>=M)*(N<=M+neval),
1
)
N.vec.ML[g]=round(N.vec.ML[g])
a0.vec.ML[g]=arms(
a0.vec.ML[g-1],
function(a0) log.likelihood.t(N.vec.ML[g],a0,a1.vec.ML[g-1],a[j]),
function(a0) (a0>-5)*(a0<=5),
1
)
a1.vec.ML[g]=arms(
a1.vec.ML[g-1],
function(a1) log.likelihood.t(N.vec.ML[g],a0.vec.ML[g],a1,a[j]),
function(a1) (a1>-5)*(a1<=5),
1
)
l[g-1,j]=log.likelihood(N.vec.ML[g],a0.vec.ML[g],a1.vec.ML[g])
}
}
ll=apply(l,2,mean)
v=c()
for(k in 1:(length(a)-1)){
v[k]=(a[k+1]-a[k])*(ll[k+1]+ll[k])/2
}
log.marg.likelihood=sum(v)
}
}
}
# Assemble the requested output list (models without logistic covariates)
if (logistic == 0) {
  if (mod == "M0" | mod == "Mb" | mod == "Mt" | mod == "Mc" | mod == "Mcb") {
    out = switch(output,
      base = list(Model = model, prior.N = prior.N,
                  N.hat.RMSE = estimate.N, HPD.N = c(inf.lim, sup.lim)),
      complete = list(Model = model, prior.N = prior.N,
                      N.hat.mean = mean.N, N.hat.median = median.N,
                      N.hat.mode = mode.N, N.hat.RMSE = estimate.N,
                      HPD.N = c(inf.lim, sup.lim),
                      pH.post.mean = pH.post.mean, N.range = val,
                      posterior.N = post.N, z.matrix = meaningful.mat,
                      vec.cut = z.cut,
                      log.marginal.likelihood = log.marg.likelihood))
  } else {
    out = switch(output,
      base = list(Model = model, mbc.function = mbc.function,
                  prior.N = prior.N, N.hat.RMSE = estimate.N,
                  HPD.N = c(inf.lim, sup.lim)),
      complete = list(Model = model, mbc.function = mbc.function,
                      prior.N = prior.N, N.hat.mean = mean.N,
                      N.hat.median = median.N, N.hat.mode = mode.N,
                      N.hat.RMSE = estimate.N, HPD.N = c(inf.lim, sup.lim),
                      pH.post.mean = pH.post.mean, N.range = val,
                      posterior.N = post.N, z.matrix = meaningful.mat,
                      vec.cut = z.cut,
                      log.marginal.likelihood = log.marg.likelihood))
  }
}
if (logistic == 1) {
  if (mod == "Msubjective") {
    out = switch(output,
      base = list(Model = model, prior.N = prior.N,
                  N.hat.RMSE = estimate.N, HPD.N = HPD.N),
      complete = list(Model = model, prior.N = prior.N,
                      N.hat.mean = mean.N, N.hat.median = median.N,
                      N.hat.mode = mode.N, N.hat.RMSE = estimate.N,
                      HPD.N = HPD.N, N.vec = N.vec,
                      mean.a0 = mean.a0, HPD.a0 = HPD.a0, a0.vec = a0.vec,
                      mean.a1 = mean.a1, HPD.a1 = HPD.a1, a1.vec = a1.vec,
                      z.matrix = meaningful.mat,
                      log.marginal.likelihood = log.marg.likelihood))
  } else {
    out = switch(output,
      base = list(Model = model, mbc.function = mbc.function,
                  prior.N = prior.N, N.hat.RMSE = estimate.N, HPD.N = HPD.N),
      model = list(Model = model, mbc.function = mbc.function,
                   prior.N = prior.N, N.hat.mean = mean.N,
                   N.hat.median = median.N, N.hat.mode = mode.N,
                   N.hat.RMSE = estimate.N, HPD.N = HPD.N, N.vec = N.vec,
                   mean.a0 = mean.a0, HPD.a0 = HPD.a0, a0.vec = a0.vec,
                   mean.a1 = mean.a1, HPD.a1 = HPD.a1, a1.vec = a1.vec,
                   z.matrix = meaningful.mat),
      complete = list(Model = model, mbc.function = mbc.function,
                      prior.N = prior.N, N.hat.mean = mean.N,
                      N.hat.median = median.N, N.hat.mode = mode.N,
                      N.hat.RMSE = estimate.N, HPD.N = HPD.N, N.vec = N.vec,
                      mean.a0 = mean.a0, HPD.a0 = HPD.a0, a0.vec = a0.vec,
                      mean.a1 = mean.a1, HPD.a1 = HPD.a1, a1.vec = a1.vec,
                      z.matrix = meaningful.mat,
                      log.marginal.likelihood = log.marg.likelihood))
  }
}
return(out)
}
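# The HPD.interval helper above builds a rectangular-kernel density
# estimate on a unit-spaced grid, sorts the draws by estimated density,
# and keeps the top 95%. A standalone sketch of the same idea on
# simulated data follows; the names hpd_interval and draws are ours,
# not part of this file.

```r
# Illustrative sketch (assumed names): HPD interval for an
# integer-valued MCMC chain via density-ordered draws
hpd_interval <- function(x.vec, prob = 0.95) {
  # Rectangular-kernel density estimate on a unit-spaced grid
  obj <- density(x = x.vec, from = min(x.vec), to = max(x.vec),
                 n = diff(range(x.vec)) + 1, kernel = "rectangular")
  dens <- obj$y
  names(dens) <- obj$x
  # Sort draws by estimated density (highest first) and keep the top
  # `prob` fraction; the interval is the range of the retained draws
  ordered <- x.vec[order(dens[as.character(x.vec)], decreasing = TRUE)]
  range(ordered[1:round(quantile(seq_along(x.vec), prob = prob))])
}

set.seed(1)
draws <- round(rnorm(5000, mean = 100, sd = 5))
h <- hpd_interval(draws)  # roughly 100 +/- 1.96 * 5
```

# Because the draws are ranked by density rather than by value, this
# construction also behaves sensibly for skewed chains, where an
# equal-tailed interval would be wider than the HPD interval.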
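# The complete.ML branch estimates the log marginal likelihood by the
# power-posterior method (Friel & Pettitt 2008): run a chain at each
# temperature a[j], average the untempered log-likelihood, and integrate
# over the temperature ladder with the trapezoidal rule. Below is a toy
# check of that quadrature on a Beta-Bernoulli model, where the power
# posterior has closed form and the marginal likelihood is known
# exactly; all names here are ours, and exact beta sampling stands in
# for the ARMS chains used above.

```r
set.seed(2)
n <- 20; s <- 13                       # s successes in n Bernoulli trials
loglik <- function(th) s * log(th) + (n - s) * log(1 - th)

# Temperature ladder denser near zero, as in the code above (power of 4)
temps <- seq(0, 1, length.out = 50)^4

# With a Beta(1, 1) prior, the power posterior at temperature t is
# Beta(1 + t*s, 1 + t*(n - s)), so it can be sampled exactly
ll <- sapply(temps, function(t) {
  th <- rbeta(5000, 1 + t * s, 1 + t * (n - s))
  mean(loglik(th))
})

# Trapezoidal rule over the ladder gives the log marginal likelihood
log.marg <- sum(diff(temps) * (head(ll, -1) + tail(ll, -1)) / 2)
exact <- lbeta(s + 1, n - s + 1)       # analytic answer for this model
```

# The estimate typically lands within a few hundredths of exact here;
# the ARMS-based chains above use the same identity, only with MCMC
# rather than exact sampling for the per-temperature expectations.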