% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/add_imgur_image.R
\name{add_imgur_image}
\alias{add_imgur_image}
\title{Deploy a local image to Imgur and create an image tag}
\usage{
add_imgur_image(image, client_id = NULL, alt = NULL)
}
\arguments{
\item{image}{The path to the local image we would like to deploy to Imgur and
for which we'd like an image tag.}
\item{client_id}{The Imgur Client ID value.}
\item{alt}{Text description of image passed to the \code{alt} attribute inside of
the image (\verb{<img>}) tag for use when image loading is disabled and on
screen readers. \code{NULL} default produces blank (\code{""}) alt text.}
}
\value{
A character object with an HTML fragment that can be placed inside
the message body wherever the image should appear.
}
\description{
Getting images into email message bodies (and expecting them to appear for
the recipient) can be a harrowing experience. External images (i.e.,
available at public URLs) work exceedingly well and most email clients will
faithfully display these images. With the \code{add_imgur_image()} function, we can
take a local image file or a \code{ggplot2} plot object and send it to the Imgur
service, and finally receive an image (\verb{<img>}) tag that can be directly
inserted into an email message using \code{compose_email()}.
}
\details{
To take advantage of this, we need to first have an account with Imgur and
then obtain a \code{Client-ID} key for the Imgur API. This can be easily done by
going to \verb{https://api.imgur.com/oauth2/addclient} and registering an
application. Be sure to select the OAuth 2 authorization type without a
callback URL.
}
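\examples{
\dontrun{
# Illustrative sketch only: the file name, the Client-ID placeholder, and the
# alt text below are made-up values, not part of the documented interface.
img_tag <-
  add_imgur_image(
    image = "output_plot.png",
    client_id = "<YOUR_IMGUR_CLIENT_ID>",
    alt = "A locally generated plot"
  )
# The returned HTML fragment can then be placed in the body of a message
# built with compose_email().
}
}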
| /man/add_imgur_image.Rd | permissive | ataustin/blastula | R | false | true | 1,665 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/install.R
\name{install.ImageMagick}
\alias{install.ImageMagick}
\alias{install.imagemagick}
\title{Downloads and installs ImageMagick for Windows}
\usage{
install.ImageMagick(
URL = "https://www.imagemagick.org/script/download.php",
...
)
}
\arguments{
\item{URL}{the URL of the ImageMagick download page.}
\item{...}{extra parameters to pass to \link{install.URL}}
}
\value{
TRUE/FALSE - was the installation successful or not.
}
\description{
Allows the user to download and install the latest version of ImageMagick for Windows.
}
\details{
ImageMagick is a software suite to create, edit, compose, or convert bitmap images. It can read and write images in a variety of formats (over 100) including DPX, EXR, GIF, JPEG, JPEG-2000, PDF, PhotoCD, PNG, Postscript, SVG, and TIFF. Use ImageMagick to resize, flip, mirror, rotate, distort, shear and transform images, adjust image colors, apply various special effects, or draw text, lines, polygons, ellipses and Bezier curves.
This function downloads the Win32 dynamic build at 16 bits-per-pixel.
}
\examples{
\dontrun{
install.ImageMagick() # installs the latest version of ImageMagick
}
}
\references{
\itemize{
\item ImageMagick homepage: \url{http://www.imagemagick.org/script/index.php}
}
}
| /man/install.ImageMagick.Rd | no_license | cran/installr | R | false | true | 1,321 | rd |
#' Filters a data set based on a single variable
#' @encoding UTF-8
#'
#' @param datasaet The data set to be filtered (data frame)
#' @param variablen The name of the variable that the filtering is based on (character)
#' @param vaerdi The value or values of the chosen variable for which the observations carrying that value should be kept;
#' vaerdi must have the same object type as 'variablen' in 'datasaet'.
#' @param tilladNAIVariablen TRUE/FALSE flag indicating whether filtering on 'variablen' is allowed when it contains NA.
#' Default is FALSE - if NA occurs in 'variablen' the function raises an error.
#'
#' @return datasaet containing only the rows with the specified 'vaerdi' values in 'variablen'
#'
#' @examples
#' datasaetEksempel <- retOgFormaterDatasaet(datasaettet = datasaetFormateringsfunktioner, datasaetFormatFilen = formatfilFormateringsfunktioner)
#' filtrereDatasaetEnkeltVariabel(datasaet = datasaetEksempel, variablen = "ydelsesType", vaerdi = c("X", "XY"))
filtrereDatasaetEnkeltVariabel <- function(datasaet, # The data set to be filtered (dataframe)
                                           variablen, # The name of the variable that the filtering is based on (character)
                                           vaerdi, # The value(s) of the chosen variable for which the observations carrying that value should be kept
                                           tilladNAIVariablen = FALSE){
  # Check that the inputs are specified correctly
  if(!("data.frame" %in% class(datasaet)) | class(variablen) != "character" | length(variablen) != 1){
    stop("At least one of the inputs to the filtrereDatasaet function is not specified correctly.
         'datasaet' must be a data.frame
         'variablen' must be of type character with a single value/entry
         'vaerdier' must be of type character")
  }
  # First check whether the variable exists in the data set
  if(!(variablen %in% colnames(datasaet))){
    stop("The chosen variable does not exist in the data set.")
  }
  # Rename the variable in the data set to 'variablen'
  colnames(datasaet)[colnames(datasaet) == variablen] <- "variablen"
  if(!tilladNAIVariablen){
    # Check whether the variable contains NA values
    if(sum(is.na(datasaet$variablen)) != 0){
      stop("The chosen variable contains NAs. Filtering cannot be based on this variable.")
    }
  }
  # Check whether the variable contains at least one of the specified values
  if(!any(vaerdi %in% unique(datasaet$variablen))){
    stop("None of the chosen values to keep exist in the variable in the data set.
         Filtering cannot be based on these values.")
  }
  datasaet <- datasaet %>%
    dplyr::filter(variablen %in% vaerdi)
  # Rename the variable back to its original name
  colnames(datasaet)[colnames(datasaet) == "variablen"] <- variablen
  return(datasaet)
}
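# Minimal self-contained illustration (added; not part of the original package file).
# It assumes dplyr is attached so that %>% is available; the example data are made up.
# library(dplyr)
# df <- data.frame(ydelsesType = c("A", "B", "A"), vaerdi = 1:3)
# filtrereDatasaetEnkeltVariabel(datasaet = df, variablen = "ydelsesType", vaerdi = "A")
# # keeps only the two rows where ydelsesType == "A"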
#' Filters a data set by period
#' @encoding UTF-8
#'
#' @param datasaet The data set to be filtered (data frame)
#' @param variablen The name of the variable that the filtering is based on (character) - the variable in datasaet must be of type POSIXct
#' @param fra Date and time (POSIXct) giving the start of the period (inclusive) that the data set should cover with respect to the chosen variable
#' @param til Date and time (POSIXct) giving the end of the period (inclusive)
#' @param tilladNAIVariablen TRUE/FALSE flag indicating whether filtering on 'variablen' is allowed when it contains NA.
#' Default is FALSE - if NA occurs in 'variablen' the function raises an error.
#'
#' @return datasaet containing only the rows where 'variablen' falls within the period
#'
#' @examples
#' datasaetEksempel <- retOgFormaterDatasaet(datasaettet = datasaetFormateringsfunktioner, datasaetFormatFilen = formatfilFormateringsfunktioner)
#' filtrereDatasaetPeriode(datasaet = datasaetEksempel,
#'                         variablen = "registreringstidspunkt",
#'                         fra = as.POSIXct(strptime("01-01-2018", format = "%d-%m-%Y"), tz = "UTC"),
#'                         til = as.POSIXct(strptime("30-04-2018", format = "%d-%m-%Y"), tz = "UTC"),
#'                         tilladNAIVariablen = TRUE)
filtrereDatasaetPeriode <- function(datasaet, # The data set to be filtered (dataframe)
                                    variablen, # The name of the variable that the filtering is based on (character) - the variable must be of type POSIXct
                                    fra, # date and time (POSIXct) giving the start of the period (inclusive) that the data set should cover with respect to the chosen variable
                                    til, # date and time (POSIXct) giving the end of the period (inclusive)
                                    tilladNAIVariablen = FALSE){
  # Check that the inputs are specified correctly
  if(!("data.frame" %in% class(datasaet)) | class(variablen) != "character" | length(variablen) != 1 | !(class(fra)[1] == "POSIXct" | is.na(fra)) | !(class(til)[1] == "POSIXct" | is.na(til))){
    stop("At least one of the inputs to the filtrereDatasaet function is not specified correctly.
         'datasaet' must be a data.frame
         'variablen' must be of type character with a single value/entry
         'fra' and 'til' must be of type POSIXct")
  }
  # First check whether the variable exists in the data set
  if(!(variablen %in% colnames(datasaet))){
    stop("The chosen variable does not exist in the data set.")
  }
  # Since til must not be earlier than fra, check that this holds
  if(!is.na(fra) & !is.na(til) & til < fra){
    stop("The chosen 'til' (end of the period) is earlier than the chosen 'fra' (start of the period). Filtering cannot be based on the given til and fra values.")
  }
  # Rename the variable in the data set to 'variablen'
  colnames(datasaet)[colnames(datasaet) == variablen] <- "variablen"
  # Check whether the variable is of type 'POSIXct'
  if(class(datasaet$variablen)[1] != "POSIXct"){
    stop("The chosen variable is not of type POSIXct. Filtering cannot be based on this variable.")
  }
  if(!tilladNAIVariablen){
    # Check whether the variable contains NA values
    if(sum(is.na(datasaet$variablen)) != 0){
      stop("The chosen variable contains NAs. Filtering cannot be based on this variable.")
    }
  }
  if(is.na(fra)){fra <- as.POSIXct(strptime("01-01-1900", format = "%d-%m-%Y"), tz = "UTC")}
  if(is.na(til)){til <- as.POSIXct(strptime("01-01-2999", format = "%d-%m-%Y"), tz = "UTC")}
  datasaet <- datasaet %>%
    dplyr::filter(variablen >= fra & variablen <= til)
  # Rename the variable back to its original name
  colnames(datasaet)[colnames(datasaet) == "variablen"] <- variablen
  # Raise an error if there is no data in the period
  if(nrow(datasaet) == 0){
    stop("There is no data in the chosen period. Filtering cannot be performed.")
  }
  return(datasaet)
}
| /R/Funktioner til at filtrere datasaet.R | no_license | KubranurSahan/Formateringsfunktioner | R | false | false | 7,022 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RcppExports.R
\name{hla_sort}
\alias{hla_sort}
\title{Sort HLA alleles by field}
\usage{
hla_sort(alleles)
}
\arguments{
\item{alleles}{A vector of HLA alleles either prefixed (e.g., \emph{"HLA-A*01:01:01:02"})
or without prefix (e.g.: \emph{"01:01:01:02"}).}
}
\value{
A sorted vector of HLA alleles.
}
\description{
Sort HLA alleles based on the numeric values of each successive field.
NMDP codes are sorted to the front, alphabetically based on their first letter.
}
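\examples{
# Illustrative sketch only; the allele strings below are arbitrary examples.
hla_sort(c("HLA-A*02:01", "HLA-A*01:01:01:02", "HLA-A*01:01"))
hla_sort(c("01:02", "01:01:01:02", "01:01"))
}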
| /man/hla_sort.Rd | no_license | jn7163/hlatools | R | false | true | 550 | rd |
# ==========================================================================
# Analysis of Study 2 including the influence of attitudes
# ==========================================================================
library(data.table)
library(lme4)
library(psych)
library(mediation)
library(xtable)
library(kableExtra)
library(car)
library(emmeans)
library(papaja)
library(mlVAR)
source('utils.R')
if ('lmerTest' %in% .packages()) {
detach("package:lmerTest", unload=TRUE) # otherwise mediation() won't work
}
#
# Read and preprocess the data
# -----------------------------------------------------------------------------
d <- fread('../../data/processed/study2.csv')
# Format the 'condition' variables as factors
d[, c("id", "index") := .(factor(id), factor(index))]
d[, quest := factor(quest, levels=c('risk (variance)', 'fluctuation', 'risk'), labels = c('Risk-as-variance', 'Fluctuation', 'Risk'))]
d[, graph := factor(graph, levels=c(FALSE, TRUE), labels = c('Without Graph', 'Graph'))]
# Compute attitude score from the five-item attitude scale
d[, attitude_aktiv_subj := 6-attitude_aktiv_subj] # TODO: check
d[, attitude_mw := rowMeans(cbind(attitude_gut_subj, attitude_interessant_subj, attitude_stark_subj, attitude_aktiv_subj))]
# Center the subjective ratings (1-7 likert) and attitude (1-5 likert)
d[, perception_c := perception - 4]
d[, ret_subj_c := ret_subj - 4]
d[, attitude_mw_c := rowMeans(cbind(attitude_gut_subj, attitude_interessant_subj, attitude_stark_subj, attitude_aktiv_subj) - 3)]
# Do or don't z-standardise values
z <- TRUE
vars <- c('var_obj', 'ret_subj', 'mroi_obj', 'perception')
if (z) {
d[, paste(c(vars), 'z', sep = '_') := lapply(.SD, as.numeric), .SDcols = vars]
d[, paste(c(vars), 'z', sep = '_') := lapply(.SD, function(z) c(scale(z))), .SDcols = vars, by = id]
}
#
# Describe the attitude score
# -----------------------------------------------------------------------------
# Cronbach's alpha
alpha_by_index <- d[, as.list(summary(psych::alpha(cbind(.SD)))), .SDcols = c('attitude_gut_subj', 'attitude_interessant_subj','attitude_stark_subj','attitude_aktiv_subj'), by = index]
summary(alpha_by_index$raw_alpha, digits = 2)
d[, as.list(round(describe(attitude_mw),2))][, c('median', 'mean', 'sd', 'min', 'max')]
print(d[, mean(attitude_mw), by = country][V1 %in% range(V1)], digits = 3)
#
# Partial Correlation Network Analysis
# -----------------------------------------------------------------------------
contrasts(d$quest) <- contr.sum(3)
contrasts(d$graph) <- contr.sum(2)
# Generate a list with separate data sets by condition
vars <- c('id', 'var_obj', 'ret_subj_z', 'mroi_obj', 'attitude_mw_c', 'perception_z')
datasubsets <- list(
Risk_Graph_TRUE = setnames(d[quest=='Risk' & graph == 'Graph', vars, with = FALSE], length(vars), 'risk_subj'),
Riskvar_Graph_TRUE = setnames(d[quest=='Risk-as-variance' & graph == 'Graph', vars, with = FALSE], length(vars), 'rvar_subj'),
Fluctuation_Graph_TRUE = setnames(d[quest=='Fluctuation' & graph == 'Graph', vars, with = FALSE], length(vars), 'fluct_subj'),
Risk_Graph_FALSE= setnames(d[quest=='Risk' & graph != 'Graph', vars, with = FALSE], length(vars), 'risk_subj'),
Riskvar_Graph_FALSE = setnames(d[quest=='Risk-as-variance' & graph != 'Graph', vars, with = FALSE], length(vars), 'rvar_subj'),
Fluctuation_Graph_FALSE = setnames(d[quest=='Fluctuation' & graph != 'Graph', vars, with = FALSE], length(vars), 'fluct_subj'))
# Fit the partial correlation network
corlist_att <- lapply(datasubsets, function(z) {
# We need to round because mlVAR does not want the data otherwise
z[, names(z)[2:6] := lapply(.SD, round, 6), .SDcols = 2:6]
mlVAR(z, vars = names(z)[2:6], idvar = 'id', lags = 1, temporal = "fixed")
})
#
# Correlation Coefficients
# --------------------------------------------------------------------------
# Table of coefficients
summary(corlist_att$Riskvar_Graph_FALSE, "contemporaneous")
# Tabulate the Goodness-of-Fit
tab <- list()
for (i in 1:6) {
rnms <- make_labels(corlist_att[[i]]$fit$var, ' ')
tmp <- cbind(rbind(corlist_att[[i]]$fit, attitude_mw_c=NA)[, -1], corlist_att[[i]]$fit[c(1,2,3,5,4), -1])
row.names(tmp) <- rnms
tab[[i]] <- tmp
}
names(tab) <- c('Graph + Risk', 'Risk-as-variance', 'Fluctuation', 'Without Graph + Risk', 'Risk-as-variance', 'Fluctuation')
source('tab_setup.R')
papaja::apa_table(tab
, caption = 'Study 2: Goodness of fit of partial correlation networks.'
, col.names = c('', toupper(colnames(tab[[1]])))
, row.names = TRUE
, col_spanners = list('Base\nNetwork' = c(2,3),
'Base + Attitudes\nNetwork' = c(4,5))
, merge_method = 'table_spanner')
alpha <- 0.0125
# get the partial correlations and p-values from the table
tablist <- lapply(corlist_att, get_input_for_table)
tablist <- lapply(tablist, function(i) {
i[diag(1,4,4)==1] <- '---'
i <- i[, -4]
})
names(tablist) <- paste('\\cmidrule{2-4} \\textbf{', make_labels(vars, " "), '}')
# Plot network
source('fig7.R')
#
# Mediation Analysis
# -----------------------------------------------------------------------------
# Checks
# the first line is the model with the main effects
# fit_id_old <- lmer(ret_subj~ quest*graph + perception_z + (quest*graph):perception_z + (1 + perception_z | id), d)
# Test:
# Compute the marginal effects by hand without graph
# sum(fixef(fit_id)[c(
# 'questrisk (variance):perception_z'
# ,'graph1:perception_z'
# ,'quest1:graph1:perception_z')])
# sum(fixef(fit_id)[c(
# 'questfluctuation:perception_z'
# ,'graph1:perception_z'
# ,'quest2:graph1:perception_z')])
# att
# / \
# a b
# / \
# Per -- c -- Ret
#
# Per -- c' -- Ret
#
# Paths (M=mediator, X=predictor, Y=outcome)
# a: X - M, riskPerception -> attitudes
# b: M - Y, attitudes -> returnPerception
# c: X - Y, riskPerception -> returnPerception (direct effect)
# c': X - M - Y, riskPercept. -> attitudes -> returnPercept. (indirect effect)
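# Added note: mediate() below reports this decomposition as ACME (the mediated
# effect a*b), ADE (the direct effect), and the total effect; in this linear
# setting the total effect equals ADE + ACME.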
# X - Y (c path)
c_model <- lmer(ret_subj ~ quest*graph + quest:graph:perception_z + (1 + perception_z | id), d)
# X - M - Y (c' path)
cpr_model <- lmer(ret_subj ~ quest*graph*attitude_mw_c + (quest:graph:perception_z) * attitude_mw_c + (1 + perception_z | id), d)
# Path a (X - M)
a_model <- lmer(attitude_mw_c ~ quest*graph + quest:graph:perception_z + (1 + perception_z | id), data = d)
# Check: does adding attitude as predictor improve the fit at all?
anova(cpr_model, c_model, refit = F) # Yes
# Describe the slopes / trends
emtrends(cpr_model, ~ quest | graph, var = 'perception_z')
test(emtrends(c_model, ~quest | graph, var = 'perception_z'))
test(emtrends(a_model, ~quest | graph, var = 'perception_z'))
cpr_slopes <- test(emtrends(cpr_model, ~quest | graph, var='attitude_mw_c'))
apa_table(cpr_slopes[,-2],
digits = 3,
al = c('l', 'c', 'c', 'r','r','r'),
col.n = c('Question', 'Slope', 'SE', 'df', '\\textit{z}-ratio', '\\textit{p}'),
p = 'H',
caption = 'Estimated slopes from the linear mixed-effects model with dependent variable perceived return')
# Apply simulation-based mediation analysis for all covariate combinations (conditions)
n_sims <- 1000
covarlevels <- d[, unique(.SD), .SDcols = c('quest', 'graph')][order(-graph,quest)]
control.treat.values <- d[, as.list(round(quantile(perception_z, c(.25,.75)), 2)), by = .(quest, graph)][order(-graph,quest)]
mediatelist <- lapply(1:nrow(covarlevels), function(i) mediation::mediate(
model.m = a_model,
model.y = cpr_model,
treat = 'perception_z',
mediator = 'attitude_mw_c',
sims = n_sims,
covariates = list(quest=covarlevels$quest[i], graph=covarlevels$graph[i]),
control.value = 0,
treat.value = 1
)
)
names(mediatelist) <- control.treat.values[, paste0(quest, graph)]
## Make the results table
tablist <- lapply(mediatelist, function(x) data.frame(matrix(unlist(x[c('d0', 'd0.ci', 'd0.p','z0', 'z0.ci', 'z0.p','tau.coef', 'tau.ci', 'tau.p')]),
ncol = 4,
byrow = TRUE,
dimnames = list(c('Mediation Effect', 'Direct Effect', 'Total Effect'), c('Coefficient','5%CI','95%CI','p')))))
names(tablist) <- covarlevels[, paste0('quest', quest, ':graph', graph), by = .(quest, graph)]$V1
# ACME = beta_2 * gamma = a * b
# ADE = beta_3 = c'
tab <- rbindlist(tablist, id='ID')
tab[, Graph := rep(c(F,T), each = 9)]
tab[, Effect := rep(c('Mediation', 'Direct', 'Total'), 6)]
tab[, Effect := factor(Effect, levels = c('Mediation', 'Direct', 'Total'))]
tab[, Question := rep(c('risk-as-variance', 'fluctuation', 'risk'), each=3, 2)]
tab[, `95%CI` := paste0('[', sprintf('%.2f',X5.CI), ', ', sprintf('%.2f',X95.CI), ']'), by = .(Graph,Effect)]
tab2 <- dcast(tab, Graph + Question ~ Effect, value.var=c('Coefficient', '95%CI', 'p'))
setcolorder(tab2, c(1,2,3,6,9,4,7,10,5,8,11))
tab2
library(magrittr)
library(kableExtra)
gsub('( |\\[)(-)', '\\U\\1$-$', kable(tab2[c(4:6,1:3),-1],
format = 'latex',
digits = 3,
caption = 'Mediation analysis, dependent variable: perceived return; mediator: attitude score',
label = 'tab:study2_mediation',
booktabs = TRUE,
align=c('l', rep(c('r','c','c'), 3)),
escape = F,
col.names = c(' ', 'b', '95\\% CI', '$p$', 'b', '95\\% CI', '$p$', 'b', '95\\% CI', '$p$')
) %>%
kable_styling(
font_size = 9.5) %>%
add_header_above(c(' ' = 1, 'Mediation Effect' = 3, 'Direct Effect' = 3, 'Total Effect' = 3)) %>%
group_rows('With Graph', 1, 3) %>%
group_rows('Without', 4, 6) %>%
footnote(general = 'Effects shown are those at the 50% quantile of the other variables.'), perl = TRUE)
source('fig_setup.R')
library(viridis)
ggplot(tab, aes(x = factor(Question, labels = c('Fluctuation', 'Risk', 'Risk\n(measured as variance)')), y = Coefficient, shape = Effect)) +
geom_hline(aes(yintercept = 0), linetype = 2, size = .3) +
geom_pointrange(aes(ymin = X5.CI, ymax = X95.CI, fill = ..y..), position = position_dodge(width = .45), size = .5, stroke = .3) +
facet_wrap(~ifelse(grepl(TRUE, ID), 'Graph', 'No Graph')) +
scale_shape_manual(values = c(Mediation = 24, Direct = 23, Total = 22), labels = c(expression(Mediation[IV%->%A%->%R]), expression(Direct[IV%->%R]), expression(Total[Mediation+Direct]))) +
ylim(-.6,.55) +
xlab('Condition') +
ggtitle('Mediation of the perceived risk-return relationship through attitudes') +theme(panel.spacing.x = unit(5, 'mm'), legend.background = element_rect(color = 'black'), legend.margin = margin(1, 1, 1, 1, unit = 'mm'), legend.key = element_rect(color=NA)) +
scale_fill_viridis(begin = 0, end = .92) +
guides(fill = 'none', shape = guide_legend(override.aes = list(color = '#238B8E', stroke = .5)))
ggsave('../figures/fig6.png', w = 14, h = 6, unit = 'cm', dpi = 600)
d[, attitude_mw_by_id := median(attitude_mw_c), by = id]
d[, attitude_bin := cut(attitude_mw_by_id, 3)]
d[, attitude_binf := factor(attitude_bin, labels = paste(c('Bad', 'Neutral', 'Good'), levels(attitude_bin)))]
d[, questf := factor(quest, levels = c('risk', 'risk (variance)', 'fluctuation'), labels = c('Risk', 'Risk\n(measured as variance)', 'Fluctuation'))]
ggplot(d, aes(y = ret_subj, x = perception_z, color = attitude_binf)) +
  stat_smooth(method = 'lm', se = F, aes(linetype = attitude_binf), size = .8) +
  facet_grid(factor(graph, levels = c('TRUE', 'FALSE'), labels = c('Graph', 'No Graph')) ~ questf) +
  scale_color_viridis_d(end = .9, option = 'A') +
  scale_linetype_manual(values = c(1,3,6)) +
  guides(linetype = guide_legend('Attitude Score (binned)'), color = guide_legend('Attitude Score (binned)')) +
  labs(caption = "Attitude score centered\nPerceived risk/fluctuation z-standardized") +
  xlab('Perceived risk or fluctuation') +
  ylab('Perceived return') +
  themejj(facet = TRUE) +
  theme(aspect.ratio = 1, legend.position = 'right', legend.direction = 'vertical')
M <- d[, .(ret_subj = mean(ret_subj), perception_z = median(perception_z)), by = .(quest, graph, attitude_binf)]
M
ggplot(d, aes(x = perception_z, y = ret_subj)) +
  geom_jitter(aes(color = attitude_binf), alpha = .05) +
  facet_grid(factor(graph, levels = c('TRUE', 'FALSE'), labels = c('Graph', 'No Graph')) ~ questf + attitude_binf) +
  geom_violin(alpha = 0) +
  scale_color_viridis_d(end = .9, option = 'A') +
  geom_point(data = M, aes(color = attitude_binf))
| /analyses/code/2.3-attitudes.R | no_license | JanaJarecki/RRC | R | false | false | 12,191 | r |
library(tidyverse)
murders <- read_csv("data/murders.csv")
murders <- murders %>% mutate(region = factor(region), rate = total / population * 10^5)
save(murders, file = "rda/murders.rda")
| /wrangle-data.R | no_license | dexxxtro/Harvard-murders | R | false | false | 186 | r |
\name{branchmap}
\alias{branchmap}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{Computes a branching map from a sequence of level set trees}
\description{
Branching map visualizes the levels of branching of level set trees
of estimates belonging to a scale of estimates.
It also visualizes the excess masses of the roots of the branches.
}
\usage{
branchmap(estiseq, hseq = NULL, levnum = 80, paletti = NULL,
rootpaletti = NULL, type = "jump")
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{estiseq}{A sequence of estimates and level set trees of the estimates.
Output of function lstseq.kern or function lstseq.carthisto.}
\item{hseq}{The sequence of smoothing parameters of the scale of estimates.}
\item{levnum}{The number of level sets used to approximate the
level set trees.}
\item{paletti}{A sequence of color names; colors for each branch,
other than the root branches.}
\item{rootpaletti}{A sequence of color names; colors for the root branches.}
\item{type}{internal}
}
%\details{}
\value{
A representation as a list of a 2D function
\item{level}{x-coordinate is the level of the level sets}
\item{h}{y-coordinate is the smoothing parameter}
\item{z}{z-coordinate is the excess mass}
\item{col}{colors for the graph of the 2D function}
}
%\references{http://denstruct.net}
\author{Jussi Klemela}
%\note{ ~~further notes~~ }
\seealso{
\code{\link{lstseq.kern}},
\code{\link{plotbranchmap}}
}
\examples{
dendat<-sim.data(n=200,type="mulmod")
h1<-0.9
h2<-2.2
lkm<-5
hseq<-hgrid(h1,h2,lkm)
N<-c(16,16)
estiseq<-lstseq.kern(dendat,hseq,N,lstree=TRUE)
bm<-branchmap(estiseq)
plotbranchmap(bm)
}
\keyword{smooth}
\keyword{multivariate} % at least one, from doc/KEYWORDS
| /man/branchmap.Rd | no_license | cran/denpro | R | false | false | 1,776 | rd |
## Copyright 2015 <Jeremy Yee> <jeremyyee@outlook.com.au>
## Computing the martingale increments using nearest neighbours
################################################################################
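## Expected argument shapes, inferred from the dimension checks below and the
## reshaping at the end of the function (comment added for readability):
##   value:   4-d array with dim[2] = n_dim, dim[3] = n_pos, dim[4] = n_dec
##   path:    3-d array, dim = c(n_dec, n_path, n_dim)
##   disturb: 5-d array, dim = c(n_dim, n_dim, n_disturb, n_path, n_dec - 1)
##   weight:  vector of length n_disturb; grid: a matrix with n_dim columns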
FastMartingale <- function(value, path, path_nn, disturb, weight, grid,
Neighbour, control) {
## Making sure the inputs are in an acceptable format
v_dims = dim(value)
if (length(v_dims) != 4) stop("length(dim(value)) != 4")
path_dims = dim(path)
if (length(path_dims) != 3) stop("length(dim(path)) != 3")
if (path_dims[1] != v_dims[4]) stop("dim(path)[1] != dim(value)[4]")
if (path_dims[3] != v_dims[2]) stop("dim(path)[3] != dim(value)[2]")
if (missing(disturb) != missing(weight)) {
stop("missing(disturb) != missing(weight)")
}
d_dims = dim(disturb)
if (length(d_dims) != 5) stop("length(dim(disturb)) != 5")
if (v_dims[2] != d_dims[1]) stop("dim(disturb)[1] != dim(value)[2]")
if (d_dims[1] != d_dims[2]) stop("dim(disturb)[1] != dim(disturb)[2]")
if (length(weight) != d_dims[3]) stop("length(weight) != dim(disturb)[3]")
if (d_dims[4] != path_dims[2]) stop("dim(disturb)[4] != dim(path)[2]")
if (d_dims[5] != (path_dims[1] - 1)) stop("dim(disturb)[5] != dim(path)[1]")
if (ncol(grid) != v_dims[2]) stop("ncol(grid) != dim(value)[2]")
if (path_dims[3] != d_dims[1]) stop("dim(path)[3] != dim(disturb)[1]")
if (ncol(grid) != v_dims[2]) stop("ncol(grid) != dim(value)[2]")
## Call the C++ functions
if (missing(path_nn)) {
cat("\nComputing path_neighbour...")
query <- matrix(data = path, ncol = v_dims[2])
path_nn <- rflann::Neighbour(query, grid, 1, "kdtree", 0, 1)$indices
}
if (!missing(Neighbour)) {
## If user provides a function
output <- .Call('rcss_FastMartingale', PACKAGE = 'rcss', value, disturb,
weight, path, path_nn, Neighbour, grid, control)
} else {
## Otherwise use rflann
Func <- function(query, ref) {
rflann::Neighbour(query, ref, 1, "kdtree", 0, 1)$indices
}
output <- .Call('rcss_FastMartingale', PACKAGE = 'rcss', value, disturb,
weight, path, path_nn, Func, grid, control)
}
if (length(dim(control)) == 2) { ## 3 dim if full control
return(output)
} else if (length(dim(control)) == 3) { ## 4 dim if partial control
n_dec <- v_dims[4]
n_pos <- v_dims[3]
n_path <- path_dims[2]
return(array(output, dim = c(n_dec - 1, dim(control)[2],
n_path, n_pos)))
}
}
| /R/FastMartingale.R | no_license | IanMadlenya/rcss | R | false | false | 2,658 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/BSDA-package.R
\docType{data}
\name{Register}
\alias{Register}
\title{Maintenance cost versus age of cash registers in a department store}
\format{A data frame with 9 observations on the following 4 variables.
\describe{
\item{age}{a numeric vector}
\item{cost}{a numeric vector}
\item{SRES1}{a numeric vector}
\item{FITS1}{a numeric vector}
}}
\description{
Data for Exercise 2.3, 2.39, and 2.54
}
\examples{
str(Register)
attach(Register)
plot(age,cost,main="Exercise 2.3")
model <- lm(cost~age)
abline(model)
plot(age,resid(model))
detach(Register)
}
\references{
Kitchens, L. J. (2003) \emph{Basic Statistics and Data Analysis}.
Duxbury
}
\keyword{datasets}
| /man/Register.Rd | no_license | johnsonjc6/BSDA | R | false | true | 747 | rd |
packages <- c("tidyverse", "quanteda")
# install any packages not currently installed
if (any(!packages %in% installed.packages())) {
install.packages(packages[!packages %in% installed.packages()[,"Package"]])
}
# load packages
lapply(packages, library, character.only = TRUE)
# create character string
x <- c("Tea is healthy and calming, don't you think?")
# tokenize character string (x)
tokens <- tokens(x, what = "word1")
# part 1 ------------------------------------------------------------------
# text preprocessing ------------------------------------------------------
# next need to stem/lemmatize words - this reduces each word to its base
# e.g. calming becomes calm
train_tokens <- tokens_wordstem(tokens, language = "english")
# to remove punctuation, symbols, and numbers
train_tokens <- tokens(train_tokens, what = "word1",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, split_hyphens = TRUE)
# when analysing text need often you will remove stop words ("the", "is" etc..)
# NOTE - You should always inspect stopword lists for applicability to
# your problem/domain.
# check stopwords() to make sure important domain specific words are not removed
train_tokens <- tokens_select(train_tokens, stopwords(),
selection = "remove")
# stemming/lemmatizing and dropping stopwords might result in your models performing worse
# so treat this preprocessing as part of your hyperparameter optimization process
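# One hedged way to do that (illustrative sketch only; the object names are made up):
# preprocess_grid <- expand.grid(stem = c(TRUE, FALSE), drop_stopwords = c(TRUE, FALSE))
# then fit and evaluate the downstream model once per row of preprocess_grid and
# keep the best-performing combination.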
# pattern matching --------------------------------------------------------
# select target patterns
target <- phrase(c('Galaxy Note', 'iPhone 11', 'iPhone XS', 'Google Pixel'))
# create text doc of phone reviews
text_doc <- c("Glowing review overall, and some really interesting side-by-side ",
"photography tests pitting the iPhone 11 Pro against the ",
"Galaxy Note 10 Plus and last year’s iPhone XS and Google Pixel 3.")
# identify the occurrence of each target pattern in the phone reviews
match <- data.frame(kwic(text_doc, target, valuetype = "glob"))
# exercise ----------------------------------------------------------------
# could load data from kaggle URL
# library(httr)
# library(jsonlite)
# url <- "https://storage.googleapis.com/kagglesdsdata/datasets/362178/763778/restaurant.json?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=gcp-kaggle-com%40kaggle-161607.iam.gserviceaccount.com%2F20210428%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20210428T000547Z&X-Goog-Expires=259199&X-Goog-SignedHeaders=host&X-Goog-Signature=2196ef69eba0b202dd8ff55753f17ab1aecb5557e8d99610435341ad3c4ba959ba7911fb24e860d6c45b7d0b123bedfd7bc28a4bb102b038cf123e3d3d2e05b9274d5aa2de6c71b2e345271069883fcd0e17c15d7c31bcb5132e84ff2e213b7fdbca1e3881d6ecaef33362bdaf24e144355ccc07d2170dff790e4725b2d83853826f68649634805af60b4d6828459c86f06a8caa68213f121cab77d8d3db6d2ab06f3fab4e8191685f79efcc17310e980ec763170d5bbed8ce7a27292974d427771e68ccd063fbe723d49d64571b6475f50bf4c443cbe6d10777d106d44b7db50f51d3ee7945db311120042f45dfbb55c9738f96a505e3733243281749ea09de"
# resp <- GET(url)
# http_error(resp)
# parsed_resp <- content(resp, as="parsed")
# load yelp review data
d <- read_csv("data/yelp_ratings.csv")
# tokenize character string (text)
tokens <- tokens(d$text, what = "word1")
# text preprocessing ------------------------------------------------------
# stem/lemmatize words - this reduces each word to its base
# e.g. calming becomes calm
train_tokens <- tokens_wordstem(tokens, language = "english")
# remove punctuation, symbols, and numbers
train_tokens <- tokens(train_tokens, what = "word1",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, split_hyphens = TRUE)
# remove stop words ("the", "is" etc..)
# stopwords()
train_tokens <- tokens_select(train_tokens, stopwords(),
selection = "remove")
# menu items
target <- c("Cheese Steak", "Cheesesteak", "Steak and Cheese", "Italian Combo", "Tiramisu", "Cannoli",
"Chicken Salad", "Chicken Spinach Salad", "Meatball", "Pizza", "Pizzas", "Spaghetti",
"Bruchetta", "Eggplant", "Italian Beef", "Purista", "Pasta", "Calzones", "Calzone",
"Italian Sausage", "Chicken Cutlet", "Chicken Parm", "Chicken Parmesan", "Gnocchi",
"Chicken Pesto", "Turkey Sandwich", "Turkey Breast", "Ziti", "Portobello", "Reuben",
"Mozzarella Caprese", "Corned Beef", "Garlic Bread", "Pastrami", "Roast Beef",
"Tuna Salad", "Lasagna", "Artichoke Salad", "Fettuccini Alfredo", "Chicken Parmigiana",
"Grilled Veggie", "Grilled Veggies", "Grilled Vegetable", "Mac and Cheese", "Macaroni",
"Prosciutto", "Salami")
# identify the occurrence of each target pattern in the restaurant reviews
# (note: multi-word menu items may need to be wrapped in phrase() to match as token sequences)
match <- data.frame(kwic(train_tokens, target, valuetype = "glob"))
# create index of rows that contain a review with a target word
index <- match %>%
mutate(text_num = str_remove(docname, "text")) %>%
select(keyword, text_num) %>%
unique()
# select reviews with target words
select_reviews <- d[as.numeric(index$text_num), ]
# create df of keywords to add to select_reviews
key <- match %>%
select(docname, keyword) %>%
unique() %>%
mutate(keyword = tolower(keyword))
# add keywords
select_reviews <- select_reviews %>%
bind_cols(keyword = key$keyword)
# compute the mean and sd number of stars and the number of reviews for each keyword
# important to consider the number of occurrences
scores <- select_reviews %>%
group_by(keyword) %>%
summarise(mean = mean(stars),
sd = sd(stars),
n = n()) %>%
arrange(mean)
# top 10 mean ratings
slice_max(scores, mean, n = 10)
# worst 10 mean ratings
slice_min(scores, mean, n = 10)
# part 2 ------------------------------------------------------------------
# This code was recycled from another NLP course I completed because it uses the same spam dataset
# This code goes far beyond the content of Kaggle's course
# The other course is:
# Data Science Dojo's Intro to Text Analytics with R
# # https://code.datasciencedojo.com/datasciencedojo/tutorials/tree/master/Introduction%20to%20Text%20Analytics%20with%20R
# # Videos on youtube: https://youtu.be/4vuw0AsHeGw
# create list of packages
packages <- c("tidyverse", "quanteda", "caret")
# install any packages not currently installed
if (any(!packages %in% installed.packages())) {
install.packages(packages[!packages %in% installed.packages()[,"Package"]])
}
# load packages
lapply(packages, library, character.only = TRUE)
d <- read_csv("data/spam.csv")
# creating training and test data
# Use caret to create a 70%/30% stratified split.
# Set the random seed for reproducibility.
set.seed(32984)
indexes <- createDataPartition(d$label, times = 1,
p = 0.7, list = FALSE)
train <- d[indexes,]
test <- d[-indexes,]
# Verify proportions are equivalent in the train and test datasets
prop.table(table(train$label))
prop.table(table(test$label))
# need to convert unstructured text data into a structured dataframe
# first need to decompose text document into distinct pieces (tokens)
# each word or punctuation becomes a single token
# then we construct a document-frequency matrix (aka data frame) where:
# - each row represents a document (e.g., a single complete text message)
# - each column represents a distinct token (word or punctuation)
# - each cell is a count of the token for a document
# IMPORTANT: this method does not preserve the word ordering
# this setup is known as the bag of words (BOW) model
# there is a way to add word ordering back into the model (n-grams)
# we will look at this later
# the BOW model is a typical starting point for text analysis
# some considerations:
# - do you want all tokens to be terms in the model?
# * casing (e.g., If v if)? better to remove capitalisation by making everything lower case
# * punctuation (e.g., ", !, ?)? word ordering not preserved so better to remove punctuation
# * numbers? sometimes - depends on what you are examining
# * every word (e.g., the, an, a)? aka stop words - typically don't add anything so remove
# * symbols (e.g., <, @, #)? can be meaningful so depends on your data and questions
# * similar words (e.g., ran, run, runs, running)? aka stemming - is it ok to collapse similar words into a single representation? usually better to collapse. different contexts of use can often be derived from other variables
# a tiny illustration of the resulting representation follows below
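# A tiny illustration of the bag-of-words representation on two made-up sentences
# (objects prefixed with toy_ are invented for this example only).
toy_texts <- c(doc1 = "Tea is healthy and calming",
               doc2 = "Coffee is tasty but not calming")
toy_dfm <- dfm(tokens_tolower(tokens(toy_texts, remove_punct = TRUE)))
toy_dfm # rows = documents, columns = tokens, cells = counts; word order is lost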
# PRE-PROCESSING IS MAJOR PART OF TEXT ANALYTICS
# Text analytics requires a lot of data exploration, data pre-processing
# and data wrangling. Let's explore some examples.
# HTML-escaped ampersand character. This is an example of how HTML records '&' as '&amp;'
# there are 5 characters instead of 1
# could replace all instances of the escaped sequence with 'and' or '&' or something else
# or could remove the '&' and ';' to leave 'amp' behind
train$text[20]
# HTML-escaped '<' (&lt;) and '>' (&gt;) characters.
# remove the '&' and ';' so only 'lt' and 'gt' remain
# Also note that Mallika Sherawat is an actual person, but we will ignore
# the implications of this for this introductory tutorial.
test$text[16]
# A URL.
# lots of ways to treat URLs
# for this course we will split http and the www... web address
# reasoning is that web addresses probably won't reoccur but http will indicate that a web address was communicated and that may be an important feature
train$text[381]
# Tokenize SMS text messages.
# remove numbers, punctuation, symbols and split hyphenated words
train.tokens <- tokens(train$text, what = "word1",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, split_hyphens = TRUE)
# Take a look at a specific SMS message and see how it transforms.
train.tokens[[381]]
# Lower case the tokens.
train.tokens <- tokens_tolower(train.tokens)
train.tokens[[381]]
# Use quanteda's built-in stopword list for English.
# NOTE - You should always inspect stopword lists for applicability to
# your problem/domain.
train.tokens <- tokens_select(train.tokens, stopwords(),
selection = "remove")
train.tokens[[381]]
# Perform stemming on the tokens.
train.tokens <- tokens_wordstem(train.tokens, language = "english")
train.tokens[[381]]
# Create our first bag-of-words model.
# dfm() takes in tokens and creates a document-frequency matrix (dfm)
train.tokens.dfm <- dfm(train.tokens, tolower = FALSE)
# Transform to a matrix and inspect.
train.tokens.matrix <- as.matrix(train.tokens.dfm)
View(train.tokens.matrix[1:20, 1:100])
dim(train.tokens.matrix)
# Investigate the effects of stemming.
colnames(train.tokens.matrix)[1:50]
# Per best practices, we will leverage cross validation (CV) as
# the basis of our modeling process. Using CV we can create
# estimates of how well our model will do in Production on new,
# unseen data. CV is powerful, but the downside is that it
# requires more processing and therefore more time.
#
# If you are not familiar with CV, consult the following
# Wikipedia article:
#
# https://en.wikipedia.org/wiki/Cross-validation_(statistics)
#
# Setup a feature data frame with labels.
train.tokens.df <- bind_cols(label = train$label, convert(train.tokens.dfm, to = "data.frame"))
View(train.tokens.df)
# Often, tokenization requires some additional pre-processing
# col names are the words in texts. Some of the column names will
# create problems later so we need to clean them up. Here are examples
# of problematic names
names(train.tokens.df)[c(139, 141, 211, 213)]
# Cleanup column names. Will modify only names that are not syntactically valid
names(train.tokens.df) <- make.names(names(train.tokens.df))
head(train.tokens.df)
# Use caret to create stratified folds for 10-fold cross validation repeated
# 3 times (i.e., create 30 random stratified samples)
# we are using stratified cross validation because we have a class imbalance
# in the data (spam is much less prevalent than ham); each random sample taken is representative
# in terms of the proportion of spam and ham in the dataset
# why are we repeating the cross validation 3 times? If we take the time to conduct
# cross validation multiple times we should get more valid estimates. If we take more
# looks at the data the estimation process will be more robust.
set.seed(48743)
cv.folds <- createMultiFolds(train$label, k = 10, times = 3)
cv.cntrl <- trainControl(method = "repeatedcv",
number = 10,
repeats = 3,
index = cv.folds) # since we want stratified cross-validation we need to specify the folds
# Our data frame is non-trivial in size. As such, CV runs will take
# quite a long time to run. To cut down on total execution time, use
# the doSNOW package to allow for multi-core training in parallel.
#
# WARNING - The following code is configured to run on a workstation-
# or server-class machine (i.e., 12 logical cores). Alter
# code to suit your HW environment.
#
library(doSNOW) # doSNOW works on mac and windows out of box. Some other parallel processing packages do not.
{
# Time the code execution
start.time <- Sys.time()
# Create a cluster to work on 7 logical cores.
# check how many cores your machine has available for parallel processing
# keep 1 core free for the operating system
# parallel::detectCores()
cl <- makeCluster(7, type = "SOCK") # effectively creates multiple instances of R studio and uses them all at once to process the model
registerDoSNOW(cl) # need to tell R to process in parallel
# As our data is non-trivial in size at this point, use a single decision
# tree algorithm (rpart trees) as our first model. We will graduate to using more
# powerful algorithms later when we perform feature extraction to shrink
# the size of our data.
# e.g., could use random forest (rf) or XGBoost (xgbTree) instead by changing the method
# formula means predict (~) label using all other variables (.) in dataset
# trControl is the cross validation parameters
# tuneLength allows you to set the number of different configurations of the algorithm to test
# it selects the one best one and builds a model using that configuration
rpart.cv.1 <- train(label ~ ., data = train.tokens.df,
method = "rpart",
trControl = cv.cntrl,
tuneLength = 7)
# Processing is done, stop cluster.
stopCluster(cl)
# Total time of execution on workstation was approximately 4 minutes.
total.time <- Sys.time() - start.time
total.time
}
# Check out our results.
# samples = rows
# predictors = features
# best performing model had 94.78% accuracy
# that's pretty good for a simple model with no tuning or feature engineering
rpart.cv.1
# The use of Term Frequency-Inverse Document Frequency (TF-IDF) is a
# powerful technique for enhancing the information/signal contained
# within our document-frequency matrix. In most instances TF-IDF will
# enhance the performance of the model. Typically this would be done
# immediately after stemming (pre-processing).
# Specifically, the mathematics
# behind TF-IDF accomplish the following goals:
# 1 - The TF calculation accounts for the fact that longer
# documents will have higher individual term counts. Applying
# TF normalizes all documents in the corpus to be length
# independent.
# 2 - The IDF calculation accounts for the frequency of term
# appearance in all documents in the corpus. The intuition
# being that a term that appears in every document has no
# predictive power. Applies penalty to terms that appear more frequently
# 3 - The multiplication of TF by IDF for each cell in the matrix
# allows for weighting of #1 and #2 for each cell in the matrix.
# Our function for calculating relative term frequency (TF)
# TF = freq of a single term in a document (text) / sum of the freqs of all terms in that document
# in our matrix each row represents a document (text) and each column represents a term
# TF is document focused (rows)
term.frequency <- function(row) {
row / sum(row)
}
# Our function for calculating inverse document frequency (IDF)
# log10 (total number of documents / the number of documents that a specific term is present in)
# IDF is corpus focused (columns)
inverse.doc.freq <- function(col) {
corpus.size <- length(col) # calculate for each column how many documents there are (should be the same for every column)
doc.count <- length(which(col > 0)) # calculate the number of rows where the column is not 0
log10(corpus.size / doc.count)
}
# Our function for calculating TF-IDF.
# will give the same output as quanteda's dfm_tfidf() if you use proportional term frequencies (scheme_tf = "prop")
# calculating it manually has the benefit that you can use the components (TF or IDF) in
# later feature engineering/analysis
# TF-IDF = TF * IDF
tf.idf <- function(tf, idf) {
tf * idf
}
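# Minimal worked example of the three functions above on a made-up 2 x 3 count
# matrix (rows = documents, columns = terms). Purely illustrative.
toy_counts <- matrix(c(2, 1, 0,
                       0, 1, 3),
                     nrow = 2, byrow = TRUE,
                     dimnames = list(c("doc1", "doc2"), c("tea", "is", "spam")))
toy_tf <- apply(toy_counts, 1, term.frequency) # note: apply() transposes (terms x docs)
toy_idf <- apply(toy_counts, 2, inverse.doc.freq) # "is" appears in both docs so its idf is 0
toy_tfidf <- t(apply(toy_tf, 2, tf.idf, idf = toy_idf)) # back to docs x terms
toy_tfidf # "spam" in doc2 gets the largest weight: frequent in that doc, rare in the corpus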
# First step, normalize all documents via TF.
# apply term frequency (TF) function to the rows (1) of train.tokens.matrix
train.tokens.df <- apply(train.tokens.matrix, 1, term.frequency)
dim(train.tokens.df) # matrix has been transposed by apply
View(train.tokens.df[1:20, 1:100])
# Second step, calculate the IDF vector that we will use - both
# for training data and for test data!
# apply inverse document frequency (IDF) function to the columns (2) of train.tokens.matrix
train.tokens.idf <- apply(train.tokens.matrix, 2, inverse.doc.freq)
str(train.tokens.idf)
# Lastly, calculate TF-IDF for our training corpus.
# apply TF-IDF function to the columns (2) of train.tokens.df (normalised matrix)
train.tokens.tfidf <- apply(train.tokens.df, 2, tf.idf, idf = train.tokens.idf)
dim(train.tokens.tfidf) # transposed matrix has been maintained
View(train.tokens.tfidf[1:25, 1:25])
# Transpose the matrix in preparation of training the ML model
train.tokens.tfidf <- t(train.tokens.tfidf)
dim(train.tokens.tfidf)
View(train.tokens.tfidf[1:25, 1:25]) # higher values represent terms that appear less frequently
# When we compute TF-IDF we need to check for incomplete cases
# after preprocessing (e.g., remove stop words, stemming) some documents could be empty
# Check for incomplete cases.
incomplete.cases <- which(!complete.cases(train.tokens.tfidf))
train$text[incomplete.cases] # show original text before pre-processing - all contain stop words and punctuation only
# Fix incomplete cases
# for any document (row) that is now empty as a result of pre-processing (e.g., stemming) replace with zeros
# do not want to remove the records because there could be a signal in this data
# zero represents docs made up of stop words only
train.tokens.tfidf[incomplete.cases,] <- rep(0.0, ncol(train.tokens.tfidf))
dim(train.tokens.tfidf)
sum(!complete.cases(train.tokens.tfidf)) # should be 0 - no incomplete cases remain
# Make a clean data frame using the same process as before.
train.tokens.tfidf.df <- cbind(label = train$label, data.frame(train.tokens.tfidf))
names(train.tokens.tfidf.df) <- make.names(names(train.tokens.tfidf.df))
{
# Time the code execution
start.time <- Sys.time()
# Create a cluster to work on 7 logical cores.
cl <- makeCluster(7, type = "SOCK")
registerDoSNOW(cl)
# As our data is non-trivial in size at this point, use a single decision
# tree algorithm as our first model. We will graduate to using more
# powerful algorithms later when we perform feature extraction to shrink
# the size of our data.
rpart.cv.2 <- train(label ~ ., data = train.tokens.tfidf.df, method = "rpart",
trControl = cv.cntrl, tuneLength = 7)
# Processing is done, stop cluster.
stopCluster(cl)
# Total time of execution on workstation was
total.time <- Sys.time() - start.time
total.time
}
# Check out our results.
rpart.cv.2
# N-grams allow us to augment our document-term frequency matrices with
# word ordering. This often leads to increased performance (e.g., accuracy)
# for machine learning models trained with more than just unigrams (i.e.,
# single terms). Let's add bigrams (each unique two-term phrase) to our training data, apply the
# TF-IDF transform to the expanded feature matrix, and see if accuracy improves. The data will contain unigram and bigram features.
# as you add higher n-grams, the lower the likelihood of those phrases being shared across multiple documents -
# the matrix will mostly contain zeros (sparsity problem - curse of dimensionality)
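# Quick illustration on a toy token object (made up for this example): tokens_ngrams()
# joins adjacent tokens with "_", so unigrams and bigrams become separate dfm columns.
toy_toks <- tokens("tea is healthy and calming", remove_punct = TRUE)
tokens_ngrams(toy_toks, n = 1:2)[[1]]
# e.g. "tea" "is" ... "tea_is" "is_healthy" "healthy_and" "and_calming"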
# Add bigrams to our feature matrix.
train.tokens <- tokens_ngrams(train.tokens, n = 1:2) # 1:2 requests unigrams and bigrams
train.tokens[[381]]
# Transform to dfm and then a matrix.
train.tokens.dfm <- dfm(train.tokens, tolower = FALSE)
train.tokens.matrix <- as.matrix(train.tokens.dfm)
train.tokens.dfm
# Normalize all documents via TF.
train.tokens.df <- apply(train.tokens.matrix, 1, term.frequency)
# Calculate the IDF vector that we will use for training and test data!
train.tokens.idf <- apply(train.tokens.matrix, 2, inverse.doc.freq)
# Calculate TF-IDF for our training corpus
train.tokens.tfidf <- apply(train.tokens.df, 2, tf.idf,
idf = train.tokens.idf)
# Transpose the matrix
train.tokens.tfidf <- t(train.tokens.tfidf)
# Fix incomplete cases
incomplete.cases <- which(!complete.cases(train.tokens.tfidf))
train.tokens.tfidf[incomplete.cases,] <- rep(0.0, ncol(train.tokens.tfidf))
# Make a clean data frame.
train.tokens.tfidf.df <- cbind(label = train$label, data.frame(train.tokens.tfidf))
names(train.tokens.tfidf.df) <- make.names(names(train.tokens.tfidf.df))
# Clean up unused objects in memory.
gc()
# NOTE - The following code requires the use of command-line R to execute
# due to the large number of features (i.e., columns) in the matrix.
# Please consult the following link for more details if you wish
# to run the code yourself:
#
# https://stackoverflow.com/questions/28728774/how-to-set-max-ppsize-in-r
#
# Also note that running the following code required approximately
# 38GB of RAM and more than 4.5 hours to execute on a 10-core
# workstation!
#
# Could log a request with the informatics hub for USyd's HPC, or use a cloud computing platform like Azure.
# Time the code execution
# start.time <- Sys.time()
# Leverage single decision trees to evaluate if adding bigrams improves the
# the effectiveness of the model.
# rpart.cv.3 <- train(label ~ ., data = train.tokens.tfidf.df, method = "rpart",
# trControl = cv.cntrl, tuneLength = 7)
# Total time of execution on workstation was
# total.time <- Sys.time() - start.time
# total.time
# Check out our results.
# rpart.cv.3
#
# The results of the above processing show a slight decline in rpart
# effectiveness with a 10-fold CV repeated 3 times accuracy of 0.9457.
# As we will discuss later, while the addition of bigrams appears to
# negatively impact a single decision tree, it helps with the mighty
# random forest!
#
# one way around the above mentioned problems (sparsity, computing power, and
# length of time to run analysis) is to conduct latent semantic analysis (LSA)
# using a technique called singular value decomposition which is a matrix
# factorisation technique that allows for the implementation of feature reduction/extraction
# this process will reduce the number of features while enriching the representation
# of the data (it creates a smaller, more feature-rich matrix - better for random forest)
# Our Progress So Far
# ▪ We’ve made a lot of progress:
# • Representing unstructured text data in a format amenable to analytics and machine learning.
# • Building a standard text analytics data pre-processing pipeline.
# • Improving the bag-of-words model (BOW) with the use of the mighty TF-IDF.
# • Extending BOW to incorporate word ordering via n-grams.
# ▪ However, we’ve encountered some notable problems as well:
# • Document-term matrices explode to be very wide (i.e., lots of columns).
# • The features of document-term matrices don’t contain a lot of signal (i.e., they’re sparse).
# • We’re running into scalability issues like RAM and huge amounts of computation.
# • The curse of dimensionality.
# ▪ The vector space model helps address many of the problems above!
# Latent Semantic Analysis
# Intuition – Extract relationships between the documents and terms assuming that terms that are close
# in meaning will appear in similar (i.e., correlated) pieces of text.
# Implementation – LSA leverages a singular value decomposition (SVD) factorization of a term-document
# matrix to extract these relationships.
# Latent Semantic Analysis (LSA) often remediates the curse of dimensionality problem in text analytics:
# • The matrix factorization has the effect of combining columns, potentially enriching signal in the data.
# • By selecting a fraction of the most important singular values, LSA can dramatically reduce dimensionality.
# However, there’s no free lunch:
# • Performing the SVD factorization is computationally intensive.
# • The reduced factorized matrices (i.e., the “semantic spaces”) are approximations.
# • We will need to project new data into the semantic space.
# High-level steps so far:
# • Create tokens (e.g., lowercase, remove stop words).
# • Normalize the document vector (i.e., row) using the term.frequency() function.
# • Complete the TF-IDF projection using the tf.idf() function.
# • Apply the SVD projection on the document vector.
# We'll leverage the irlba package for our singular value
# decomposition (SVD). The irlba package allows us to specify
# the number of the most important singular vectors we wish to
# calculate and retain for features.
# NOTE: it looks like it is possible to use PCA instead of SVD
# need to look into this. A benefit of using PCA is that the latent components may be interpretable
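# Toy sketch of the idea behind truncated SVD using base R's svd() on a small random
# matrix (invented data, 5 "terms" x 4 "documents"). irlba does the same thing far more
# efficiently for large sparse matrices by computing only the leading singular vectors.
toy_m <- matrix(rnorm(20), nrow = 5)
toy_svd <- svd(toy_m)
toy_k <- 2 # keep only the 2 largest singular values
toy_docs_reduced <- toy_svd$v[, 1:toy_k] %*% diag(toy_svd$d[1:toy_k])
dim(toy_docs_reduced) # each of the 4 documents is now described by 2 latent features, not 5 raw terms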
library(irlba) # necessary to perform truncated SVD - means you can specify the x number of most significant extracted features
{
# Time the code execution
start.time <- Sys.time()
# Perform SVD. Specifically, reduce dimensionality down to 300 columns
# for our latent semantic analysis (LSA).
# Note: could potentially use PCA instead - this may lead to interpretable latent components
# SVD/LSA assumes that in the matrix the rows are the terms and the columns are the documents
# our matrix is the other way around so we need to transpose t() our matrix to a term-document matrix
# which is necessary to run the analysis
# nv extracts features from the document (right singular vectors) -
# give me the vector representation of the higher level concepts extracted on a per document basis
# nu extracts features from the terms (left singular vectors) we want nu = nv (same number as nv)
# research shows that vectors in the few hundred range tend to offer the best results
# you should spend some time tuning your model to work out the best number of vectors
# e.g., test 200, 300, 400, 500 etc...
# maxit is the number of iterations. typically a mxit value double the nv value works well in
# extracting the number of desired features
train.irlba <- irlba(t(train.tokens.tfidf), nv = 300, maxit = 600)
# Total time of execution on workstation was
total.time <- Sys.time() - start.time
total.time
}
# Take a look at the new feature data up close.
# SVD of X:  X = U %*% Sigma %*% t(V)
# U contains the eigenvectors of the term correlations (left singular vectors), i.e. of X %*% t(X)
# V contains the eigenvectors of the document correlations (right singular vectors), i.e. of t(X) %*% X
# Sigma (a diagonal matrix) contains the singular values of the factorization - denoted by d in the output
# lets take a look at the V matrix
# rows = documents
# columns = extracted features
# this is a black box process we do not know what the columns mean
View(train.irlba$v)
# As with TF-IDF, we will need to project new data (e.g., the test data)
# into the SVD semantic space. The following code illustrates how to do
# this using a row of the training data that has already been transformed
# by TF-IDF, per the mathematics illustrated in the slides.
# the SVD projection for a document d is: d_hat = solve(Sigma) %*% t(U) %*% d
# lets calculate d_hat for one document to check against our training data
# the below formula is essentially what has been done to our training data
sigma.inverse <- 1 / train.irlba$d # the diagonal of solve(Sigma)
u.transpose <- t(train.irlba$u) # t(U)
document <- train.tokens.tfidf[1,] # d - we will just take the first document (text 1)
document.hat <- sigma.inverse * u.transpose %*% document # d_hat
# Look at the first 10 components of projected document and the corresponding
# row in our document semantic space (i.e., the V matrix)
document.hat[1:10]
train.irlba$v[1, 1:10] # there will likely be some minor differences in the values
#
# Create new feature data frame using our document semantic space of 300
# features (i.e., the V matrix from our SVD).
# we are adding the labels (ham or spam) to the new training data (extracted document features)
train.svd <- data.frame(label = train$label, train.irlba$v)
# Create a cluster to work on 3 logical cores.
cl <- makeCluster(3, type = "SOCK")
registerDoSNOW(cl)
{
# Time the code execution
start.time <- Sys.time()
# This will be the last run using single decision trees. With a much smaller
# feature matrix we can now use more powerful methods like the mighty Random
# Forest from now on!
rpart.cv.4 <- train(label ~ ., data = train.svd, method = "rpart",
trControl = cv.cntrl, tuneLength = 7)
# Processing is done, stop cluster.
stopCluster(cl)
# Total time of execution on workstation was
total.time <- Sys.time() - start.time
total.time
}
# Check out our results.
# when using a single decision tree (rpart) we have slightly worse performance by adding bigrams and SVD
# this is because some of the signal has been lost
# good news is that when we use random forest we will gain performance by adding bigrams and SVD
rpart.cv.4
#
# NOTE - The following code takes a long time to run. Here's the math.
# We are performing 10-fold CV repeated 3 times. That means we
# need to build 30 models. We are also asking caret to try 7
# different values of the mtry parameter. Next up by default
# a mighty random forest leverages 500 trees. Lastly, caret will
# build 1 final model at the end of the process with the best
# mtry value over all the training data. Here's the number of
# tree we're building:
#
# (10 * 3 * 7 * 500) + 500 = 105,500 trees!
#
# On a workstation using 10 cores the following code took 28 minutes
# to execute.
#
# Create a cluster to work on 10 logical cores.
# cl <- makeCluster(3, type = "SOCK")
# registerDoSNOW(cl)
# Time the code execution
# start.time <- Sys.time()
# We have reduced the dimensionality of our data using SVD. Also, the
# application of SVD allows us to use LSA to simultaneously increase the
# information density of each feature. To prove this out, leverage a
# mighty Random Forest with the default of 500 trees. We'll also ask
# caret to try 7 different values of mtry to find the mtry value that
# gives the best result!
# rf.cv.1 <- train(label ~ ., data = train.svd, method = "rf",
# trControl = cv.cntrl, tuneLength = 7)
# Processing is done, stop cluster.
# stopCluster(cl)
# Total time of execution on workstation was
# total.time <- Sys.time() - start.time
# total.time
# Load processing results from disk!
load("data/rf.cv.1.RData")
# Check out our results.
# mtry is a parameter that controls how many features are randomly sampled as split candidates when building individual trees
# in other words mtry constrains the number of features each tree gets to consider
# mtry = 151 means the trees get to see 151 of the 300 features extracted using SVD
rf.cv.1
rf.cv.1$finalModel$confusion
# Let's drill-down on the results.
# code originally sourced labels from train.svd$label
# but this created errors because, for some reason, the ordering is different
# notice that accuracy is a little higher than our best model
# this is because 10-fold cv trains on 90% of the training data
# 9 folds are used for training and 1 fold is used for testing
confusionMatrix(rf.cv.1$finalModel$y, rf.cv.1$finalModel$predicted)
# accuracy is not the only performance metric to evaluate the model
# true positive (tp), true negative (tn)
# false positive (fp), false negative (fn)
# what percentage of ham and spam were correctly predicted?
# accuracy = (tp + tn) / (tp + fp + fn + tn)   # denominator = nrow
# what percentage of ham messages were correctly predicted?
# sensitivity = tp / (tp + fn)
# what percentage of spam messages were correctly predicted?
# specificity = tn / (fp + tn)
# positive predictive value: of the messages predicted as ham, how many really were ham?
# Pos Pred Value = tp / (tp + fp)   # denominator = total predicted ham
# negative predictive value: of the messages predicted as spam, how many really were spam?
# Neg Pred Value = tn / (tn + fn)   # denominator = total predicted spam
# (a small worked example of these metrics appears just below)
# when a feature increases both sensitivity and specificity it suggests that the
# feature may generalise to other datasets
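# Small worked example (numbers invented) showing how these metrics fall out of a
# 2 x 2 confusion matrix with "ham" treated as the positive class.
toy_tp <- 950; toy_fn <- 10 # ham predicted as ham / ham predicted as spam
toy_fp <- 25; toy_tn <- 115 # spam predicted as ham / spam predicted as spam
(toy_tp + toy_tn) / (toy_tp + toy_fp + toy_fn + toy_tn) # accuracy
toy_tp / (toy_tp + toy_fn) # sensitivity
toy_tn / (toy_tn + toy_fp) # specificity
toy_tp / (toy_tp + toy_fp) # positive predictive value
toy_tn / (toy_tn + toy_fn) # negative predictive value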
# OK, now let's add in the feature we engineered previously for SMS
# text length to see if it improves things.
# texts may be in different order in train.svd than train
# if I run this code will need to check this
train.svd <- bind_cols(train.svd, train %>% select(text_length))
# Create a cluster to work on 10 logical cores.
# cl <- makeCluster(3, type = "SOCK")
# registerDoSNOW(cl)
{
# Time the code execution
# start.time <- Sys.time()
# Re-run the training process with the additional feature.
# importance = TRUE asks the rf model to keep track of feature importance
# rf.cv.2 <- train(label ~ ., data = train.svd, method = "rf",
# trControl = cv.cntrl, tuneLength = 7,
# importance = TRUE)
# Processing is done, stop cluster.
# stopCluster(cl)
# Total time of execution on workstation was
# total.time <- Sys.time() - start.time
# total.time
}
# Load results from disk.
load("data/rf.cv.2.RData")
# Check the results.
rf.cv.2
# Drill-down on the results.
confusionMatrix(rf.cv.2$finalModel$y, rf.cv.2$finalModel$predicted)
# How important was the new feature?
# for mean decrease in Gini, higher values on the x-axis indicate greater importance
library(randomForest)
varImpPlot(rf.cv.1$finalModel)
varImpPlot(rf.cv.2$finalModel)
# Turns out that our text_length feature is very predictive and pushed our
# overall accuracy over the training data to 97.1%. We can also use the
# power of cosine similarity to engineer a feature for calculating, on
# average, how alike each SMS text message is to all of the spam messages.
# The hypothesis here is that our use of bigrams, tf-idf, and LSA have
# produced a representation where ham SMS messages should have low cosine
# similarities with spam SMS messages and vice versa.
# The angle between two docs is represented by theta.
# cosine(theta) is the similarity between the two docs
# using the cosine between docs is better than using the raw dot product (a correlation proxy)
# for a few reasons:
# • values range between 0 and 1 (1 = perfect similarity, 0 = orthogonal (right angle))
# • it works well in high dimensional space
# (a tiny illustration of the calculation follows below)
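# Tiny illustration of the cosine calculation on two made-up document vectors.
toy_a <- c(1, 0, 2, 3)
toy_b <- c(1, 1, 2, 0)
sum(toy_a * toy_b) / (sqrt(sum(toy_a^2)) * sqrt(sum(toy_b^2))) # manual cosine formula
# lsa::cosine(toy_a, toy_b) should return the same value once lsa is loaded below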
# Use the lsa package's cosine function for our calculations.
# need to transpose as it expects documents to be on the columns
# also need to remove first and last columns (label and text_length)
# will produce a 3901*3901 matrix
library(lsa)
train.similarities <- cosine(t(as.matrix(train.svd[, -c(1, ncol(train.svd))])))
dim(train.similarities)
# Next up - take each SMS text message and find what the mean cosine
# similarity is for each SMS text mean with each of the spam SMS messages.
# Per our hypothesis, ham SMS text messages should have relatively low
# cosine similarities with spam messages and vice versa!
# give me the indices of all spam messages
spam.indexes <- which(train$label == "spam")
train.svd <- train.svd %>% mutate(spam_similarity = 0)
# calculate the mean cosine similarity between each doc (text) and all of the spam messages
# essentially, on average, how similar is each text to all of the spam messages?
# could implement this in a different way - using nicer code
#
# Giedrius Blazys commenting on video 11 pointed out (unconfirmed by data science dojo):
# when creating cosine similarities with spam message feature on training data
# you should exclude the observation itself from the spam messages list.
# This solves the data leakage problem leading to over-fitting. The RF results
# on test data with updated feature are much better:
# for(i in 1:nrow(train.svd)) {
# spam.indexesCV <- setdiff(spam.indexes,i)
# train.svd$spam_similarity[i] <- mean(train.similarities[i, spam.indexesCV])
# }
for(i in 1:nrow(train.svd)) {
train.svd$spam_similarity[i] <- mean(train.similarities[i, spam.indexes])
}
# As always, let's visualize our results using the mighty ggplot2
ggplot(train.svd, aes(x = spam_similarity, fill = label)) +
theme_bw() +
geom_histogram(binwidth = 0.05) +
labs(y = "Message Count",
x = "Mean Spam Message Cosine Similarity",
title = "Distribution of Ham vs. Spam Using Spam Cosine Similarity")
# Per our analysis of mighty random forest results, we are interested
# in features that can raise model performance with respect to sensitivity.
# Perform another CV process using the new spam cosine similarity feature.
# # Create a cluster to work on 3 logical cores.
# cl <- makeCluster(3, type = "SOCK")
# registerDoSNOW(cl)
#
# {
# # Time the code execution
# start.time <- Sys.time()
#
# # Re-run the training process with the additional feature.
# rf.cv.3 <- train(label ~ ., data = train.svd, method = "rf",
# trControl = cv.cntrl, tuneLength = 7,
# importance = TRUE)
#
# # Processing is done, stop cluster.
# stopCluster(cl)
#
# # Total time of execution on workstation was
# total.time <- Sys.time() - start.time
# total.time
#
# saveRDS(rf.cv.3, "data/rf.cv.3.RDS")
# }
# Load results from disk.
rf.cv.3 <- readRDS("data/rf.cv.3.RDS")
# Check the results.
rf.cv.3
# Drill-down on the results.
confusionMatrix(rf.cv.3$finalModel$y, rf.cv.3$finalModel$predicted)
# How important was this feature?
# spam_similarity is massively more important than any other features
# it also reduced specificity (worse at predicting spam) and increased sensitivity (better at predicting ham)
# should create some skepticism as both metrics did not increase - feature may cause overfitting which we can check for
library(randomForest)
varImpPlot(rf.cv.3$finalModel)
# We've built what appears to be an effective predictive model. Time to verify
# using the test holdout data we set aside at the beginning of the project.
# First stage of this verification is running the test data through our pre-
# processing pipeline of:
# 1 - Tokenization
# 2 - Lower casing
# 3 - Stopword removal
# 4 - Stemming
# 5 - Adding bigrams
# 6 - Transform to dfm
# 7 - Ensure test dfm has same features as train dfm
# Tokenization.
test.tokens <- tokens(test$text, what = "word1",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, split_hyphens = TRUE)
# Lower case the tokens.
test.tokens <- tokens_tolower(test.tokens)
# Stopword removal.
test.tokens <- tokens_select(test.tokens, stopwords(),
selection = "remove")
# Stemming.
test.tokens <- tokens_wordstem(test.tokens, language = "english")
# Add bigrams.
test.tokens <- tokens_ngrams(test.tokens, n = 1:2)
# Convert n-grams to quanteda document-term frequency matrix.
test.tokens.dfm <- dfm(test.tokens, tolower = FALSE)
# Explore the train and test quanteda dfm objects.
# notice in our test set we have less than half the documents and thus far fewer features
# the model is expecting exactly the set of features contained in the training data
# it won't accept anything less than that (the test dfm can contain extra features, but those must be stripped)
# it is very common to see new words in the test set that are not present in the training set
# we will need to fix this later by "creating" new test data that looks like the training data
# and stripping out any terms that appear only in the test data
# columns need to be in the same order and need to have the same meaning
train.tokens.dfm
test.tokens.dfm
# Ensure the test dfm has the same n-grams as the training dfm and the
# columns are in the same location across training and test data
#
# NOTE - In production we should expect that new text messages will
# contain n-grams that did not exist in the original training
# data. As such, we need to strip those n-grams out.
# original code deprecated:
# test.tokens.dfm1 <- dfm_select(test.tokens.dfm, pattern = train.tokens.dfm,
# selection = "keep")
test.tokens.dfm <- dfm_match(test.tokens.dfm, train.tokens.dfm@Dimnames$features)
test.tokens.matrix <- as.matrix(test.tokens.dfm)
test.tokens.dfm # now we have the same structure as the training set
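# Quick toy illustration (made-up mini dfms) of what dfm_match() just did: the test dfm
# is forced to have exactly the training dfm's features, in the same order.
toy_train_dfm <- dfm(tokens(c("tea is healthy", "coffee is tasty")))
toy_test_dfm <- dfm(tokens("tea is brand new"))
dfm_match(toy_test_dfm, featnames(toy_train_dfm))
# "brand" and "new" are dropped; "coffee"/"tasty" columns appear with zero counts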
# With the raw test features in place, next up is projecting the term
# counts into the same TF-IDF vector space as our training
# data. The high level process is as follows:
# 1 - Normalize each document (i.e, each row)
# 2 - Perform IDF multiplication using training IDF values
# Normalize all documents via TF.
# the original idf values from the training set need to be stored so they can be reused here in the test data
test.tokens.df <- apply(test.tokens.matrix, 1, term.frequency)
str(test.tokens.df)
# Lastly, calculate TF-IDF for our test corpus.
# notice we are reusing the idf values from the training data
# this is very important because new texts need to be incorporated to the vector space
# (need to be able to maintain them over time in a production system)
test.tokens.tfidf <- apply(test.tokens.df, 2, tf.idf, idf = train.tokens.idf)
dim(test.tokens.tfidf)
View(test.tokens.tfidf[1:25, 1:25])
# Transpose the matrix
test.tokens.tfidf <- t(test.tokens.tfidf)
# Fix incomplete cases
summary(test.tokens.tfidf[1,])
test.tokens.tfidf[is.na(test.tokens.tfidf)] <- 0.0
summary(test.tokens.tfidf[1,])
# With the test data projected into the TF-IDF vector space of the training
# data we can now do the final projection into the training LSA semantic
# space (i.e. the SVD matrix factorization).
test.svd.raw <- t(sigma.inverse * u.transpose %*% t(test.tokens.tfidf))
# Lastly, we can now build the test data frame to feed into our trained
# machine learning model for predictions. First up, add Label and TextLength.
test.svd <- data.frame(label = test$label, test.svd.raw,
text_length = test$text_length)
# Next step, calculate spam_similarity for all the test documents. First up,
# create a spam similarity matrix.
# take the raw svd features and add rows (docs) that correspond to spam from the training data
# need this to calculate the similarity between test texts and the spam texts from the training data
test.similarities <- rbind(test.svd.raw, train.irlba$v[spam.indexes,])
test.similarities <- cosine(t(test.similarities))
# NOTE - The following code was updated post-video recording due to a bug.
test.svd$spam_similarity <- rep(0.0, nrow(test.svd))
spam.cols <- (nrow(test.svd) + 1):ncol(test.similarities)
for(i in 1:nrow(test.svd)) {
# The following line has the bug fix.
test.svd$spam_similarity[i] <- mean(test.similarities[i, spam.cols])
}
# Some SMS text messages become empty as a result of stopword and special
# character removal. This results in non-finite spam similarity measures. Correct them (set to 0).
# This code was added post-video as part of the bug fix.
test.svd$spam_similarity[!is.finite(test.svd$spam_similarity)] <- 0
# Now we can make predictions on the test data set using our trained mighty
# random forest.
preds <- predict(rf.cv.3, test.svd)
# Drill-in on results
# common definition of overfitting is doing much worse on test data than training data
confusionMatrix(preds, test.svd$label)
# The definition of overfitting is doing far better on the training data (as
# evidenced by CV) than on a hold-out dataset (i.e., our test dataset).
# One potential explanation of this overfitting is the use of the spam similarity
# feature. The hypothesis here is that spam features (i.e., text content) vary
# highly, especially over time. As such, our average spam cosine similarity
# is likely to overfit to the training data. To combat this, let's rebuild a
# mighty random forest without the spam similarity feature.
train.svd$spam_similarity <- NULL
test.svd$spam_similarity <- NULL
# Create a cluster to work on 3 logical cores.
cl <- makeCluster(3, type = "SOCK")
registerDoSNOW(cl)
# {
# # Time the code execution
# start.time <- Sys.time()
#
# # Re-run the training process with the additional feature.
# set.seed(254812)
# rf.cv.4 <- train(label ~ ., data = train.svd, method = "rf",
# trControl = cv.cntrl, tuneLength = 7,
# importance = TRUE)
#
# # Processing is done, stop cluster.
# stopCluster(cl)
#
# # Total time of execution on workstation was
# total.time <- Sys.time() - start.time
# total.time
#   saveRDS(rf.cv.4, "data/rf.cv.4.RDS")
# }
# Load results from disk.
load("data/rf.cv.4.RData") # load() restores the saved rf.cv.4 object by name
# Make predictions and drill-in on the results
preds <- predict(rf.cv.4, test.svd)
confusionMatrix(preds, test.svd$label)
# What next?
# ▪ Feature Engineering:
# • How about tri-grams, 4-grams, etc.?
# • We engineered TextLength – are there other features as well?
# ▪ Algorithms - we leveraged a mighty random forest, but could other algorithms do more with the data?
# • Boosted decision trees (XGBoost)?
# • Support vector machines (SVM)? They are very commonly used in text analytics and tend to work ok in high dimensional space
# could allow for the analysis of term columns (pre-SVD)
# ▪ Learn more ways to analyze, understand, and work with text!
# Start Here – The Basics
# Text analysis with R for students of literature
# ▪ Best introduction to thinking analytically about text.
# ▪ Accessible to a very broad audience.
# ▪ Illustrated many techniques not in series (e.g., topic modeling).
# exercise ----------------------------------------------------------------
# You will first build a model to distinguish positive reviews from negative reviews using Yelp reviews
# because these reviews include a rating with each review. Your data consists of the text body of each
# review along with the star rating. Ratings with 1-2 stars count as "negative", and ratings with 4-5 stars
# are "positive". Ratings with 3 stars are "neutral" and have been dropped from the data.
# create list of packages
packages <- c("tidyverse", "quanteda", "caret")
# install any packages not currently installed
if (any(!packages %in% installed.packages())) {
install.packages(packages[!packages %in% installed.packages()[,"Package"]])
}
# load packages
lapply(packages, library, character.only = TRUE)
d <- read_csv("data/yelp_ratings.csv")
# creating training and test data
# Use caret to create a 70%/30% stratified split.
# Set the random seed for reproducibility.
set.seed(32984)
indexes <- createDataPartition(d$sentiment, times = 1,
p = 0.7, list = FALSE)
train <- d[indexes,]
test <- d[-indexes,]
# Verify proportions are equivalent in the train and test datasets
prop.table(table(train$sentiment))
prop.table(table(test$sentiment))
# Tokenize the Yelp review texts.
# remove numbers, punctuation, symbols and split hyphenated words
train.tokens <- tokens(train$text, what = "word1",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, split_hyphens = TRUE)
# clean tokens further: lower case, remove stopwords, and stem
train.tokens <- train.tokens %>%
tokens_tolower() %>%
tokens_select(stopwords(), selection = "remove") %>%
tokens_wordstem(language = "english")
# Create our first bag-of-words model.
# dfm() takes in tokens and creates a document-frequency matrix (dfm)
train.tokens.dfm <- dfm(train.tokens, tolower = FALSE)
# to inspect
# train.tokens.matrix <- as.matrix(train.tokens.dfm)
# Per best practices, we will leverage cross validation (CV) as
# the basis of our modeling process.
# Setup data frame with features and labels.
train.tokens.df <- bind_cols(y_labels = train$sentiment, convert(train.tokens.dfm, to = "data.frame"))
# train <- train %>%
# mutate(sentiment = factor(sentiment, levels = 0:1, labels = c("low", "high")))
# Cleanup column names. Will modify only names that are not syntactically valid
names(train.tokens.df) <- make.names(names(train.tokens.df))
# Use caret to create stratified folds for 10-fold cross validation repeated
# 3 times (i.e., create 30 random stratified samples)
# we are using stratified cross validation because we may have a class imbalance
# in the data (one sentiment class is less prevalent than the other); each random sample taken
# is representative in terms of the proportion of negative and positive reviews in the dataset
# why are we repeating the cross validation 3 times? If we take the time to conduct
# cross validation multiple times we should get more valid estimates. If we take more
# looks at the data the estimation process will be more robust.
set.seed(48743)
cv.folds <- createMultiFolds(train$sentiment, k = 10, times = 3)
cv.cntrl <- trainControl(method = "repeatedcv",
number = 10,
repeats = 3,
index = cv.folds) # since we want stratified cross-validation we need to specify the folds
# Our data frame is non-trivial in size. As such, CV runs will take
# quite a long time to run. To cut down on total execution time, use
# the doSNOW package to allow for multi-core training in parallel.
#
# WARNING - The following code is configured to run on a workstation-
# or server-class machine (i.e., 12 logical cores). Alter
# code to suit your HW environment.
#
library(doSNOW) # doSNOW works on mac and windows out of box. Some other parallel processing packages do not.
{
# Time the code execution
start.time <- Sys.time()
# Create a cluster to work on 7 logical cores.
# check how many cores your machine has available for parallel processing
# keep 1 core free for the operating system
# parallel::detectCores()
cl <- makeCluster(7, type = "SOCK") # effectively creates multiple instances of R studio and uses them all at once to process the model
registerDoSNOW(cl) # need to tell R to process in parallel
# As our data is non-trivial in size at this point, use a single decision
# tree algorithm (rpart trees) as our first model. We will graduate to using more
# powerful algorithms later when we perform feature extraction to shrink
# the size of our data.
# e.g., could use random forest (rf) or XGBoost (xgbTree) instead by changing the method
# formula means predict (~) label using all other variables (.) in dataset
# trControl is the cross validation parameters
# tuneLength allows you to set the number of different configurations of the algorithm to test
# it selects the one best one and builds a model using that configuration
rpart.cv.1 <- train(y_labels ~ ., data = train.tokens.df,
method = "rpart",
trControl = cv.cntrl,
tuneLength = 7)
# Processing is done, stop cluster.
stopCluster(cl)
# Total time of execution on workstation was approximately 4 minutes.
total.time <- Sys.time() - start.time
total.time
}
# Check out our results.
# samples = rows
# predictors = features
# check the accuracy of the best performing model
# a simple model with no tuning or feature engineering can still perform reasonably well
rpart.cv.1
| /R/intro_to_nlp.R | no_license | mattdblanchard/kaggle_nlp_course | R | false | false | 52,196 | r | packages <- c("tidyverse", "quanteda")
# install any packages not currently installed
if (any(!packages %in% installed.packages())) {
install.packages(packages[!packages %in% installed.packages()[,"Package"]])
}
# load packages
lapply(packages, library, character.only = TRUE)
# create character string
x <- c("Tea is healthy and calming, don't you think?")
# tokenize character string (x)
tokens <- tokens(x, what = "word1")
# part 1 ------------------------------------------------------------------
# text preprocessing ------------------------------------------------------
# next need to stem/lemmatize words - this reduces each word to its base
# e.g. calming becomes calm
train_tokens <- tokens_wordstem(tokens, language = "english")
# to remove punctuation, symbols, and numbers
train_tokens <- tokens(train_tokens, what = "word1",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, split_hyphens = TRUE)
# when analysing text need often you will remove stop words ("the", "is" etc..)
# NOTE - You should always inspect stopword lists for applicability to
# your problem/domain.
# check stopwords() to make sure important domain specific words are not removed
train_tokens <- tokens_select(train_tokens, stopwords(),
selection = "remove")
# stemming/lemmatizing and dropping stopwords might result in your models performing worse
# so treat this preprocessing as part of your hyperparameter optimization process
# pattern matching --------------------------------------------------------
# select target patterns
target <- phrase(c('Galaxy Note', 'iPhone 11', 'iPhone XS', 'Google Pixel'))
# create text doc of phone reviews
text_doc <- c("Glowing review overall, and some really interesting side-by-side ",
"photography tests pitting the iPhone 11 Pro against the ",
"Galaxy Note 10 Plus and last year’s iPhone XS and Google Pixel 3.")
# identify the occurrence of each target pattern in the phone reviews
match <- data.frame(kwic(text_doc, pat1, valuetype = "glob"))
# exercise ----------------------------------------------------------------
# could load data from kaggle URL
# library(httr)
# library(jsonlite)
# url <- "https://storage.googleapis.com/kagglesdsdata/datasets/362178/763778/restaurant.json?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=gcp-kaggle-com%40kaggle-161607.iam.gserviceaccount.com%2F20210428%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20210428T000547Z&X-Goog-Expires=259199&X-Goog-SignedHeaders=host&X-Goog-Signature=2196ef69eba0b202dd8ff55753f17ab1aecb5557e8d99610435341ad3c4ba959ba7911fb24e860d6c45b7d0b123bedfd7bc28a4bb102b038cf123e3d3d2e05b9274d5aa2de6c71b2e345271069883fcd0e17c15d7c31bcb5132e84ff2e213b7fdbca1e3881d6ecaef33362bdaf24e144355ccc07d2170dff790e4725b2d83853826f68649634805af60b4d6828459c86f06a8caa68213f121cab77d8d3db6d2ab06f3fab4e8191685f79efcc17310e980ec763170d5bbed8ce7a27292974d427771e68ccd063fbe723d49d64571b6475f50bf4c443cbe6d10777d106d44b7db50f51d3ee7945db311120042f45dfbb55c9738f96a505e3733243281749ea09de"
# resp <- GET(url)
# http_error(resp)
# parsed_resp <- content(resp, as="parsed")
# load yelp review data
d <- read_csv("data/yelp_ratings.csv")
# tokenize character string (text)
tokens <- tokens(d$text, what = "word1")
# text preprocessing ------------------------------------------------------
# stem/lemmatize words - this reduces each word to its base
# e.g. calming becomes calm
train_tokens <- tokens_wordstem(tokens, language = "english")
# remove punctuation, symbols, and numbers
train_tokens <- tokens(train_tokens, what = "word1",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, split_hyphens = TRUE)
# remove stop words ("the", "is" etc..)
# stopwords()
train_tokens <- tokens_select(train_tokens, stopwords(),
selection = "remove")
# menu items
target <- c("Cheese Steak", "Cheesesteak", "Steak and Cheese", "Italian Combo", "Tiramisu", "Cannoli",
"Chicken Salad", "Chicken Spinach Salad", "Meatball", "Pizza", "Pizzas", "Spaghetti",
"Bruchetta", "Eggplant", "Italian Beef", "Purista", "Pasta", "Calzones", "Calzone",
"Italian Sausage", "Chicken Cutlet", "Chicken Parm", "Chicken Parmesan", "Gnocchi",
"Chicken Pesto", "Turkey Sandwich", "Turkey Breast", "Ziti", "Portobello", "Reuben",
"Mozzarella Caprese", "Corned Beef", "Garlic Bread", "Pastrami", "Roast Beef",
"Tuna Salad", "Lasagna", "Artichoke Salad", "Fettuccini Alfredo", "Chicken Parmigiana",
"Grilled Veggie", "Grilled Veggies", "Grilled Vegetable", "Mac and Cheese", "Macaroni",
"Prosciutto", "Salami")
# identify the occurrence of each target pattern in the phone reviews
match <- data.frame(kwic(train_tokens, target, valuetype = "glob"))
# create index of rows that contain a review with a target word
index <- match %>%
mutate(text_num = str_remove(docname, "text")) %>%
select(keyword, text_num) %>%
unique()
# select reviews with target words
select_reviews <- d[as.numeric(index$text_num), ]
# create df of keywords to add to select_reviews
key <- match %>%
select(docname, keyword) %>%
unique() %>%
mutate(keyword = tolower(keyword))
# add keywords
select_reviews <- select_reviews %>%
bind_cols(keyword = key$keyword)
# compute the mean and sd number of stars and the number of reviews for each keyword
# important to consider the number of occurrences
scores <- select_reviews %>%
group_by(keyword) %>%
summarise(mean = mean(stars),
sd = sd(stars),
n = n()) %>%
arrange(mean)
# top 10 mean ratings
slice_max(scores, mean, n = 10)
# worst 10 mean ratings
slice_min(scores, mean, n = 10)
# part 2 ------------------------------------------------------------------
# This code was recycled from another NLP course I completed because it uses the same spam dataset
# This code goes far beyond the content of Kaggle's course
# The other course is:
# Data Science Dojo's Intro to Text Analytics with R
# # https://code.datasciencedojo.com/datasciencedojo/tutorials/tree/master/Introduction%20to%20Text%20Analytics%20with%20R
# # Videos on youtube: https://youtu.be/4vuw0AsHeGw
# create list of packages
packages <- c("tidyverse", "quanteda", "caret")
# install any packages not currently installed
if (any(!packages %in% installed.packages())) {
install.packages(packages[!packages %in% installed.packages()[,"Package"]])
}
# load packages
lapply(packages, library, character.only = TRUE)
d <- read_csv("data/spam.csv")
# creating training and test data
# Use caret to create a 70%/30% stratified split.
# Set the random seed for reproducibility.
set.seed(32984)
indexes <- createDataPartition(d$label, times = 1,
p = 0.7, list = FALSE)
train <- d[indexes,]
test <- d[-indexes,]
# Verify proportions are equivalent in the train and test datasets
prop.table(table(train$label))
prop.table(table(test$label))
# need to convert unstructured text data into a structured dataframe
# first need to decompose text document into distinct pieces (tokens)
# each word or punctuation becomes a single token
# then we construct a document-frequency matrix (aka data frame) where:
# - each row represents a document (e.g., a single complete text message)
# - each column represents a distinct token (word or punctuation)
# - each cell is a count of the token for a document
# IMPORTANT: this method does not preserve the word ordering
# this setup is known as the bag of words (BOW) model
# there is a way to add word ordering back into the model (engram)
# we will look at this later
# the BOW model is typical starting point for text analysis
# some considerations:
# - do you want all tokens to be terms in the model?
# * casing (e.g., If v if)? better to remove capitalisation by making everything lower case
# * puncuation (e.g., ", !, ?)? word ordering not preserved so better to remove punctuation
# * numbers? sometimes depends what you are examining
# * every word (e.g., the, an, a)? aka stop words typically dont add anything so remove
# * symbols (e.g., <, @, #)? can be meaningful so depends on your data and questions
# * similar words (e.g., ran, run, runs, running)? aka stemming - is it ok to collapse similar words into single representation? usually better to collapse. different contexts of use can often be derived from other variables
# PRE-PROCESSING IS MAJOR PART OF TEXT ANALYTICS
# Text analytics requires a lot of data exploration, data pre-processing
# and data wrangling. Let's explore some examples.
# HTML-escaped ampersand character. This is an example of how html records &
# there are 5 characters instead of 1
# could replace all instances of escaped sequence with and or & or something else
# or could remove & and ; to leave amp behind
train$text[20]
# HTML-escaped '<' (<) and '>' (#>) characters.
# remove &, ;, and # so only lt and gt remain
# Also note that Mallika Sherawat is an actual person, but we will ignore
# the implications of this for this introductory tutorial.
test$text[16]
# A URL.
# lots of ways to treat URLs
# for this course we will split http and the www... web address
# reasoning is that web addresses probably won't reoccur but http will indicate that a web address was communicated and that may be an important feature
train$text[381]
# Tokenize SMS text messages.
# remove numbers, punctuation, symbols and split hyphenated words
train.tokens <- tokens(train$text, what = "word1",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, split_hyphens = TRUE)
# Take a look at a specific SMS message and see how it transforms.
train.tokens[[381]]
# Lower case the tokens.
train.tokens <- tokens_tolower(train.tokens)
train.tokens[[381]]
# Use quanteda's built-in stopword list for English.
# NOTE - You should always inspect stopword lists for applicability to
# your problem/domain.
train.tokens <- tokens_select(train.tokens, stopwords(),
selection = "remove")
train.tokens[[381]]
# Perform stemming on the tokens.
train.tokens <- tokens_wordstem(train.tokens, language = "english")
train.tokens[[381]]
# Create our first bag-of-words model.
# dfm() takes in tokens and creates a document-frequency matrix (dfm)
train.tokens.dfm <- dfm(train.tokens, tolower = FALSE)
# Transform to a matrix and inspect.
train.tokens.matrix <- as.matrix(train.tokens.dfm)
View(train.tokens.matrix[1:20, 1:100])
dim(train.tokens.matrix)
# Investigate the effects of stemming.
colnames(train.tokens.matrix)[1:50]
# Per best practices, we will leverage cross validation (CV) as
# the basis of our modeling process. Using CV we can create
# estimates of how well our model will do in Production on new,
# unseen data. CV is powerful, but the downside is that it
# requires more processing and therefore more time.
#
# If you are not familiar with CV, consult the following
# Wikipedia article:
#
# https://en.wikipedia.org/wiki/Cross-validation_(statistics)
#
# Setup a feature data frame with labels.
train.tokens.df <- bind_cols(label = train$label, convert(train.tokens.dfm, to = "data.frame"))
View(train.tokens.df)
# Often, tokenization requires some additional pre-processing
# col names are the words in texts. Some of the column names will
# create problems later so we need to clean them up. Here are examples
# of problematic names
names(train.tokens.df)[c(139, 141, 211, 213)]
# Cleanup column names. Will modify only names that are not syntactically valid
names(train.tokens.df) <- make.names(names(train.tokens.df))
head(train.tokens.df)
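# Tiny illustration of what make.names() does (the example inputs are hypothetical):
# reserved words get a "." appended, names starting with a digit get an "X" prefix,
# and other illegal characters are replaced with ".".
make.names(c("next", "3g", "free-entry"))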
# Use caret to create stratified folds for 10-fold cross validation repeated
# 3 times (i.e., create 30 random stratified samples)
# we are using stratified cross validation because we have a class imbalance
# in the data (spam is much less prevalent than ham); each random sample taken is representative
# in terms of the proportion of spam and ham in the dataset
# why are we repeating the cross validation 3 times? If we take the time to conduct
# cross validation multiple times we should get more valid estimates. If we take more
# looks at the data the estimation process will be more robust.
set.seed(48743)
cv.folds <- createMultiFolds(train$label, k = 10, times = 3)
cv.cntrl <- trainControl(method = "repeatedcv",
number = 10,
repeats = 3,
index = cv.folds) # since we want stratified cross-validation we need to specify the folds
# Our data frame is non-trivial in size. As such, CV runs will take
# quite a long time to run. To cut down on total execution time, use
# the doSNOW package to allow for multi-core training in parallel.
#
# WARNING - The following code is configured to run on a workstation-
# or server-class machine (i.e., 12 logical cores). Alter
# code to suit your HW environment.
#
library(doSNOW) # doSNOW works on mac and windows out of box. Some other parallel processing packages do not.
{
# Time the code execution
start.time <- Sys.time()
# Create a cluster to work on 7 logical cores.
# check how many cores your machine has available for parallel processing
# keep 1 core free for the operating system
# parallel::detectCores()
cl <- makeCluster(7, type = "SOCK") # effectively creates multiple instances of R studio and uses them all at once to process the model
registerDoSNOW(cl) # need to tell R to process in parallel
# As our data is non-trivial in size at this point, use a single decision
# tree algorithm (rpart trees) as our first model. We will graduate to using more
# powerful algorithms later when we perform feature extraction to shrink
# the size of our data.
# e.g., could use random forest (rf) or XGBoost (xgbTree) instead by changing the method
# formula means predict (~) label using all other variables (.) in the dataset
# trControl is the cross validation parameters
# tuneLength allows you to set the number of different configurations of the algorithm to test
# caret selects the best one and builds a final model using that configuration
rpart.cv.1 <- train(label ~ ., data = train.tokens.df,
method = "rpart",
trControl = cv.cntrl,
tuneLength = 7)
# Processing is done, stop cluster.
stopCluster(cl)
# Total time of execution on workstation was approximately 4 minutes.
total.time <- Sys.time() - start.time
total.time
}
# Check out our results.
# samples = rows
# predictors = features
# best performing model had 94.78% accuracy
# that's pretty good for a simple model with no tuning or feature engineering
rpart.cv.1
# The use of Term Frequency-Inverse Document Frequency (TF-IDF) is a
# powerful technique for enhancing the information/signal contained
# within our document-frequency matrix. In most instances TF-IDF will
# enhance the performance of the model. Typically this would be done
# immediately after stemming (pre-processing).
# Specifically, the mathematics
# behind TF-IDF accomplish the following goals:
# 1 - The TF calculation accounts for the fact that longer
# documents will have higher individual term counts. Applying
# TF normalizes all documents in the corpus to be length
# independent.
# 2 - The IDF calculation accounts for the frequency of term
# appearance in all documents in the corpus. The intuition
# being that a term that appears in every document has no
# predictive power. Applies penalty to terms that appear more frequently
# 3 - The multiplication of TF by IDF for each cell in the matrix
# allows for weighting of #1 and #2 for each cell in the matrix.
# Our function for calculating relative term frequency (TF)
# TF = freq of a single term in a document (text) / sum of the freqs of all terms in that document (text)
# in train.tokens.matrix each row represents a document (text) and each column represents a term
# TF is document focused (rows)
term.frequency <- function(row) {
row / sum(row)
}
# Our function for calculating inverse document frequency (IDF)
# log10 (total number of documents / the number of documents that a specific term is present in)
# IDF is corpus focused (columns)
inverse.doc.freq <- function(col) {
corpus.size <- length(col) # calculate for each column how many documents there are (should be the same for every column)
doc.count <- length(which(col > 0)) # calculate the number of rows where the column is not 0
log10(corpus.size / doc.count)
}
# Our function for calculating TF-IDF.
# will give the same output as quanteda's tf-idf function if you set normalisation = TRUE
# calculating it manually has the benefit that you can use the components (TF or IDF) in
# later feature engineering/analysis
# TF-IDF = TF * IDF
tf.idf <- function(tf, idf) {
tf * idf
}
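# Two quick checks of the functions above (illustrative only):
# (1) quanteda can do similar weighting in one call - as an assumption about the API,
#     dfm_tfidf(train.tokens.dfm, scheme_tf = "prop", scheme_df = "inverse", base = 10)
#     should roughly reproduce the manual TF-IDF computed below.
# (2) a tiny worked example of the manual functions on a 2-document, 3-term toy matrix:
toy <- matrix(c(2, 0, 1,
                1, 1, 0), nrow = 2, byrow = TRUE,
              dimnames = list(c("doc1", "doc2"), c("win", "cash", "call")))
toy.tf  <- apply(toy, 1, term.frequency)    # terms x docs (apply transposes)
toy.idf <- apply(toy, 2, inverse.doc.freq)  # idf = 0 for "win" (in both docs), log10(2) for "cash" and "call"
t(apply(toy.tf, 2, tf.idf, idf = toy.idf))  # back to documents in rows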
# First step, normalize all documents via TF.
# apply term frequency (TF) function to the rows (1) of train.tokens.matrix
train.tokens.df <- apply(train.tokens.matrix, 1, term.frequency)
dim(train.tokens.df) # matrix has been transposed by apply
View(train.tokens.df[1:20, 1:100])
# Second step, calculate the IDF vector that we will use - both
# for training data and for test data!
# apply inverse document frequency (IDF) function to the columns (2) of train.tokens.matrix
train.tokens.idf <- apply(train.tokens.matrix, 2, inverse.doc.freq)
str(train.tokens.idf)
# Lastly, calculate TF-IDF for our training corpus.
# apply TF-IDF function to the columns (2) of train.tokens.df (normalised matrix)
train.tokens.tfidf <- apply(train.tokens.df, 2, tf.idf, idf = train.tokens.idf)
dim(train.tokens.tfidf) # transposed matrix has been maintained
View(train.tokens.tfidf[1:25, 1:25])
# Transpose the matrix in preparation of training the ML model
train.tokens.tfidf <- t(train.tokens.tfidf)
dim(train.tokens.tfidf)
View(train.tokens.tfidf[1:25, 1:25]) # higher values represent terms that appear less frequently
# When we compute TF-IDF we need to check for incomplete cases
# after preprocessing (e.g., remove stop words, stemming) some documents could be empty
# Check for incomplete cases.
incomplete.cases <- which(!complete.cases(train.tokens.tfidf))
train$text[incomplete.cases] # show original text before pre-processing - all contain stop words and punctuation only
# Fix incomplete cases
# for any document (row) that is now empty as a result of pre-processing (e.g., stemming) replace with zeros
# do not want to remove the records because there could be a signal in this data
# zero represents docs made up of stop words only
train.tokens.tfidf[incomplete.cases,] <- rep(0.0, ncol(train.tokens.tfidf))
dim(train.tokens.tfidf)
sum(which(!complete.cases(train.tokens.tfidf)))
# Make a clean data frame using the same process as before.
train.tokens.tfidf.df <- cbind(label = train$label, data.frame(train.tokens.tfidf))
names(train.tokens.tfidf.df) <- make.names(names(train.tokens.tfidf.df))
{
# Time the code execution
start.time <- Sys.time()
# Create a cluster to work on 7 logical cores.
cl <- makeCluster(7, type = "SOCK")
registerDoSNOW(cl)
# As our data is non-trivial in size at this point, use a single decision
# tree alogrithm as our first model. We will graduate to using more
# powerful algorithms later when we perform feature extraction to shrink
# the size of our data.
rpart.cv.2 <- train(label ~ ., data = train.tokens.tfidf.df, method = "rpart",
trControl = cv.cntrl, tuneLength = 7)
# Processing is done, stop cluster.
stopCluster(cl)
# Total time of execution on workstation was
total.time <- Sys.time() - start.time
total.time
}
# Check out our results.
rpart.cv.2
# N-grams allow us to augment our document-term frequency matrices with
# word ordering. This often leads to increased performance (e.g., accuracy)
# for machine learning models trained with more than just unigrams (i.e.,
# single terms). Let's add bigrams (each unique two-term phrase) to our training data, apply the TF-IDF
# transform to the expanded feature matrix, and see if accuracy improves. The data will contain unigram and bigram features.
# the higher the n-grams you add, the lower the likelihood of those phrases being shared across multiple documents -
# the matrix will mostly contain zeros (sparsity problem - curse of dimensionality)
# Add bigrams to our feature matrix.
train.tokens <- tokens_ngrams(train.tokens, n = 1:2) # 1:2 requests unigrams and bigrams
train.tokens[[381]]
# Transform to dfm and then a matrix.
train.tokens.dfm <- dfm(train.tokens, tolower = FALSE)
train.tokens.matrix <- as.matrix(train.tokens.dfm)
train.tokens.dfm
# Normalize all documents via TF.
train.tokens.df <- apply(train.tokens.matrix, 1, term.frequency)
# Calculate the IDF vector that we will use for training and test data!
train.tokens.idf <- apply(train.tokens.matrix, 2, inverse.doc.freq)
# Calculate TF-IDF for our training corpus
train.tokens.tfidf <- apply(train.tokens.df, 2, tf.idf,
idf = train.tokens.idf)
# Transpose the matrix
train.tokens.tfidf <- t(train.tokens.tfidf)
# Fix incomplete cases
incomplete.cases <- which(!complete.cases(train.tokens.tfidf))
train.tokens.tfidf[incomplete.cases,] <- rep(0.0, ncol(train.tokens.tfidf))
# Make a clean data frame.
train.tokens.tfidf.df <- cbind(label = train$label, data.frame(train.tokens.tfidf))
names(train.tokens.tfidf.df) <- make.names(names(train.tokens.tfidf.df))
# Clean up unused objects in memory.
gc()
# NOTE - The following code requires the use of command-line R to execute
# due to the large number of features (i.e., columns) in the matrix.
# Please consult the following link for more details if you wish
# to run the code yourself:
#
# https://stackoverflow.com/questions/28728774/how-to-set-max-ppsize-in-r
#
# Also note that running the following code required approximately
# 38GB of RAM and more than 4.5 hours to execute on a 10-core
# workstation!
#
# Could log a request with the informatics hub for USyd's HPC, or use a cloud computing platform like Azure.
# Time the code execution
# start.time <- Sys.time()
# Leverage single decision trees to evaluate if adding bigrams improves the
# the effectiveness of the model.
# rpart.cv.3 <- train(label ~ ., data = train.tokens.tfidf.df, method = "rpart",
# trControl = cv.cntrl, tuneLength = 7)
# Total time of execution on workstation was
# total.time <- Sys.time() - start.time
# total.time
# Check out our results.
# rpart.cv.3
#
# The results of the above processing show a slight decline in rpart
# effectiveness with a 10-fold CV repeated 3 times accuracy of 0.9457.
# As we will discuss later, while the addition of bigrams appears to
# negatively impact a single decision tree, it helps with the mighty
# random forest!
#
# one way around the above-mentioned problems (sparsity, computing power, and
# length of time to run the analysis) is to conduct latent semantic analysis (LSA)
# using a technique called singular value decomposition (SVD) - a matrix
# factorisation technique that allows for feature reduction/extraction.
# this process will reduce the number of features and enrich the representation
# of the data (creates a smaller and more feature-rich matrix - better for random forest)
# Our Progress So Far
# ▪ We’ve made a lot of progress:
# • Representing unstructured text data in a format amenable to analytics and machine learning.
# • Building a standard text analytics data pre-processing pipeline.
# • Improving the bag-of-words model (BOW) with the use of the mighty TF-IDF.
# • Extending BOW to incorporate word ordering via n-grams.
# ▪ However, we’ve encountered some notable problems as well:
# • Document-term matrices explode to be very wide (i.e., lots of columns).
# • The features of document-term matrices don’t contain a lot of signal (i.e., they’re sparse).
# • We’re running into scalability issues like RAM and huge amounts of computation.
# • The curse of dimensionality.
# ▪ The vector space model helps address many of the problems above!
# Latent Semantic Analysis
# Intuition – Extract relationships between the documents and terms assuming that terms that are close
# in meaning will appear in similar (i.e., correlated) pieces of text.
# Implementation – LSA leverages a singular value decomposition (SVD) factorization of a term-document
# matrix to extract these relationships.
# Latent Semantic Analysis (LSA) often remediates the curse of dimensionality problem in text analytics:
# • The matrix factorization has the effect of combining columns, potentially enriching signal in the data.
# • By selecting a fraction of the most important singular values, LSA can dramatically reduce dimensionality.
# However, there’s no free lunch:
# • Performing the SVD factorization is computationally intensive.
# • The reduced factorized matrices (i.e., the “semantic spaces”) are approximations.
# • We will need to project new data into the semantic space.
# High-level steps so far:
# • Create tokens (e.g., lowercase, remove stop words).
# • Normalize the document vector (i.e., row) using the term.frequency() function.
# • Complete the TF-IDF projection using the tf.idf() function.
# • Apply the SVD projection on the document vector.
# We'll leverage the irlba package for our singular value
# decomposition (SVD). The irlba package allows us to specify
# the number of the most important singular vectors we wish to
# calculate and retain for features.
# NOTE: it looks like it is possible to use PCA instead of SVD
# need to look into this. A benefit of using PCA is that the latent components may be interpretable
library(irlba) # needed for truncated SVD - lets you specify how many of the most significant singular vectors to extract (a PCA-based alternative is sketched after the timed block below)
{
# Time the code execution
start.time <- Sys.time()
# Perform SVD. Specifically, reduce dimensionality down to 300 columns
# for our latent semantic analysis (LSA).
# Note: could potentially use PCA instead - this may lead to interpretable latent components
# SVD/LSA assumes that in the matrix the rows are the terms and the columns are the documents
# our matrix is the other way around so we need to transpose t() our matrix to a term-document matrix
# which is necessary to run the analysis
# nv extracts features from the document (right singular vectors) -
# give me the vector representation of the higher level concepts extracted on a per document basis
# nu extracts features from the terms (left singular vectors) we want nu = nv (same number as nv)
# research shows that vectors in the few hundred range tend to offer the best results
# you should spend some time tuning your model to work out the best number of vectors
# e.g., test 200, 300, 400, 500 etc...
# maxit is the number of iterations. typically a maxit value double the nv value works well in
# extracting the number of desired features
train.irlba <- irlba(t(train.tokens.tfidf), nv = 300, maxit = 600)
# Total time of execution on workstation was
total.time <- Sys.time() - start.time
total.time
}
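# As flagged above, a PCA-based alternative could be sketched with irlba's truncated PCA
# (an assumption about one possible approach, not the course's code; it is not run here):
# train.pca <- irlba::prcomp_irlba(train.tokens.tfidf, n = 300, center = TRUE, scale. = FALSE)
# train.pca$x would then play the role of train.irlba$v below (one row per document,
# 300 latent columns), and train.pca$rotation may help with interpreting the components.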
# Take a look at the new feature data up close.
# SVD of X:  X = U %*% Sigma %*% t(V)
# U contains the eigenvectors of the term correlations (left singular vectors), i.e., of X %*% t(X)
# V contains the eigenvectors of the document correlations (right singular vectors), i.e., of t(X) %*% X
# Sigma (a diagonal matrix of singular values) holds the singular values of the factorization - denoted by d in the irlba output
# lets take a look at the V matrix
# rows = documents
# columns = extracted features
# this is a black box process we do not know what the columns mean
View(train.irlba$v)
# As with TF-IDF, we will need to project new data (e.g., the test data)
# into the SVD semantic space. The following code illustrates how to do
# this using a row of the training data that has already been transformed
# by TF-IDF, per the mathematics illustrated in the slides.
# the SVD projection for document d is: d.hat = Sigma^-1 %*% t(U) %*% d
# lets calculate document hat (d.hat) to check against our training data
# the below formula is essentially what has been done to our training data
sigma.inverse <- 1 / train.irlba$d                        # Sigma^-1 (reciprocal singular values)
u.transpose <- t(train.irlba$u)                           # t(U)
document <- train.tokens.tfidf[1,]                        # d - we will just take the first document (text 1)
document.hat <- sigma.inverse * u.transpose %*% document  # d.hat
# Look at the first 10 components of projected document and the corresponding
# row in our document semantic space (i.e., the V matrix)
document.hat[1:10]
train.irlba$v[1, 1:10] # there will likely be some minor differences in the values
#
# Create new feature data frame using our document semantic space of 300
# features (i.e., the V matrix from our SVD).
# we are adding the labels (ham or spam) to the new training data (extracted document features)
train.svd <- data.frame(label = train$label, train.irlba$v)
# Create a cluster to work on 3 logical cores.
cl <- makeCluster(3, type = "SOCK")
registerDoSNOW(cl)
{
# Time the code execution
start.time <- Sys.time()
# This will be the last run using single decision trees. With a much smaller
# feature matrix we can now use more powerful methods like the mighty Random
# Forest from now on!
rpart.cv.4 <- train(label ~ ., data = train.svd, method = "rpart",
trControl = cv.cntrl, tuneLength = 7)
# Processing is done, stop cluster.
stopCluster(cl)
# Total time of execution on workstation was
total.time <- Sys.time() - start.time
total.time
}
# Check out our results.
# when using a single decision tree (rpart) we have slightly worse performance by adding bigrams and SVD
# this is because some of the signal has been lost
# good news is that when we use random forest we will gain performance by adding bigrams and SVD
rpart.cv.4
#
# NOTE - The following code takes a long time to run. Here's the math.
# We are performing 10-fold CV repeated 3 times. That means we
# need to build 30 models. We are also asking caret to try 7
# different values of the mtry parameter. Next up by default
# a mighty random forest leverages 500 trees. Lastly, caret will
# build 1 final model at the end of the process with the best
# mtry value over all the training data. Here's the number of
#            trees we're building:
#
# (10 * 3 * 7 * 500) + 500 = 105,500 trees!
#
# On a workstation using 10 cores the following code took 28 minutes
# to execute.
#
# Create a cluster to work on 10 logical cores.
# cl <- makeCluster(3, type = "SOCK")
# registerDoSNOW(cl)
# Time the code execution
# start.time <- Sys.time()
# We have reduced the dimensionality of our data using SVD. Also, the
# application of SVD allows us to use LSA to simultaneously increase the
# information density of each feature. To prove this out, leverage a
# mighty Random Forest with the default of 500 trees. We'll also ask
# caret to try 7 different values of mtry to find the mtry value that
# gives the best result!
# rf.cv.1 <- train(label ~ ., data = train.svd, method = "rf",
# trControl = cv.cntrl, tuneLength = 7)
# Processing is done, stop cluster.
# stopCluster(cl)
# Total time of execution on workstation was
# total.time <- Sys.time() - start.time
# total.time
# Load processing results from disk!
load("data/rf.cv.1.RData")
# Check out our results.
# mtry is a parameter that controls how many features are randomly sampled as split candidates when building individual trees
# in other words mtry constrains the number of features each tree gets to consider at each split
# mtry = 151 means each split gets to choose from 151 of the 300 features extracted using SVD
rf.cv.1
rf.cv.1$finalModel$confusion
# Let's drill-down on the results.
# code originally sourced labels from train.svd$label
# but this created errors for some reason the ordering is different
# notice that accuracy is a little higher than our best model
# this is because each 10-fold CV model is trained on 90% of the training data
# (9 folds are used for training and 1 fold is used for testing)
confusionMatrix(rf.cv.1$finalModel$y, rf.cv.1$finalModel$predicted)
# accuracy is not the only performance metric to evaluate the model
# true positive (tp), true negative (tn)
# false positive (fp), false negative (fn)
# what percentage of all messages were correctly predicted?
# accuracy = (tp + tn) / (tp + fp + fn + tn)
# what percentage of ham messages were correctly predicted?
# sensitivity = tp / (tp + fn)
# what percentage of spam messages were correctly predicted?
# specificity = tn / (tn + fp)
# of the messages predicted ham, what percentage really are ham?
# Pos Pred Value = tp / (tp + fp)
# of the messages predicted spam, what percentage really are spam?
# Neg Pred Value = tn / (tn + fn)
# when a feature increases both sensitivity and specificity it suggests that the
# feature may generalise to other datasets
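# Minimal sketch of the formulas above on a made-up confusion table (the counts are
# purely illustrative; "ham" is treated as the positive class, as the comments above assume):
tp <- 950; fn <- 15; fp <- 40; tn <- 470
c(accuracy       = (tp + tn) / (tp + fn + fp + tn),
  sensitivity    = tp / (tp + fn),
  specificity    = tn / (tn + fp),
  pos_pred_value = tp / (tp + fp),
  neg_pred_value = tn / (tn + fn))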
# OK, now let's add in the feature we engineered previously for SMS
# text length to see if it improves things.
# note: the texts may be in a different order in train.svd than in train -
# if re-running this code, check this before binding the column
train.svd <- bind_cols(train.svd, train %>% select(text_length))
# Create a cluster to work on 10 logical cores.
# cl <- makeCluster(3, type = "SOCK")
# registerDoSNOW(cl)
{
# Time the code execution
# start.time <- Sys.time()
# Re-run the training process with the additional feature.
# importance = TRUE asks the rf model to keep track of feature importance
# rf.cv.2 <- train(label ~ ., data = train.svd, method = "rf",
# trControl = cv.cntrl, tuneLength = 7,
# importance = TRUE)
# Processing is done, stop cluster.
# stopCluster(cl)
# Total time of execution on workstation was
# total.time <- Sys.time() - start.time
# total.time
}
# Load results from disk.
load("data/rf.cv.2.RData")
# Check the results.
rf.cv.2
# Drill-down on the results.
confusionMatrix(rf.cv.2$finalModel$y, rf.cv.2$finalModel$predicted)
# How important was the new feature?
# higher values on x-axis better for mean decrease gini
library(randomForest)
varImpPlot(rf.cv.1$finalModel)
varImpPlot(rf.cv.2$finalModel)
# Turns out that our text_length feature is very predictive and pushed our
# overall accuracy over the training data to 97.1%. We can also use the
# power of cosine similarity to engineer a feature for calculating, on
# average, how alike each SMS text message is to all of the spam messages.
# The hypothesis here is that our use of bigrams, tf-idf, and LSA have
# produced a representation where ham SMS messages should have low cosine
# similarities with spam SMS messages and vice versa.
# the angle between two docs is represented by theta.
# cosine(theta) is the similarity between two docs
# using the cosine between docs is better than using the raw dot product (a correlation proxy)
# for a few reasons:
# • for non-negative vectors, values range between 0 and 1 (1 = identical direction and 0 = orthogonal (right angle))
# • works well in high dimensional space
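# Tiny illustration (assumes the lsa package, which is loaded just below): vectors pointing
# the same way give 1, orthogonal vectors give 0; note that with signed features such as
# SVD scores the cosine can also be negative.
# lsa::cosine(c(1, 1, 0), c(1, 1, 0))   # 1
# lsa::cosine(c(1, 0, 0), c(0, 1, 0))   # 0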
# Use the lsa package's cosine function for our calculations.
# need to transpose as it expects documents to be on the columns
# also need to remove first and last columns (label and text_length)
# will produce a 3901*3901 matrix
library(lsa)
train.similarities <- cosine(t(as.matrix(train.svd[, -c(1, ncol(train.svd))])))
dim(train.similarities)
# Next up - take each SMS text message and find what the mean cosine
# similarity is for each SMS text mean with each of the spam SMS messages.
# Per our hypothesis, ham SMS text messages should have relatively low
# cosine similarities with spam messages and vice versa!
# give me the indices of all spam messages
spam.indexes <- which(train$label == "spam")
train.svd <- train.svd %>% mutate(spam_similarity = 0)
# calculate the mean cosine similarity between each doc (text) and all of the spam messages
# essentially, on average, how similar is each text to all of the spam messages?
# could implement this in a different way - using nicer code (a vectorised sketch follows the loop below)
#
# Giedrius Blazys commenting on video 11 pointed out (unconfirmed by data science dojo):
# when creating cosine similarities with spam message feature on training data
# you should exclude the observation itself from the spam messages list.
# This solves the data leakage problem leading to over-fitting. The RF results
# on test data with updated feature are much better:
# for(i in 1:nrow(train.svd)) {
# spam.indexesCV <- setdiff(spam.indexes,i)
# train.svd$spam_similarity[i] <- mean(train.similarities[i, spam.indexesCV])
# }
for(i in 1:nrow(train.svd)) {
train.svd$spam_similarity[i] <- mean(train.similarities[i, spam.indexes])
}
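# A vectorised equivalent of the loop above (a sketch; it should give the same result
# without the explicit for loop):
# train.svd$spam_similarity <- rowMeans(train.similarities[, spam.indexes])
# The leakage-aware variant suggested in the comment block above can be vectorised the
# same way by excluding each spam message's own column before averaging.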
# As always, let's visualize our results using the mighty ggplot2
ggplot(train.svd, aes(x = spam_similarity, fill = label)) +
theme_bw() +
geom_histogram(binwidth = 0.05) +
labs(y = "Message Count",
x = "Mean Spam Message Cosine Similarity",
title = "Distribution of Ham vs. Spam Using Spam Cosine Similarity")
# Per our analysis of mighty random forest results, we are interested in
# in features that can raise model performance with respect to sensitivity.
# Perform another CV process using the new spam cosine similarity feature.
# # Create a cluster to work on 3 logical cores.
# cl <- makeCluster(3, type = "SOCK")
# registerDoSNOW(cl)
#
# {
# # Time the code execution
# start.time <- Sys.time()
#
# # Re-run the training process with the additional feature.
# rf.cv.3 <- train(label ~ ., data = train.svd, method = "rf",
# trControl = cv.cntrl, tuneLength = 7,
# importance = TRUE)
#
# # Processing is done, stop cluster.
# stopCluster(cl)
#
# # Total time of execution on workstation was
# total.time <- Sys.time() - start.time
# total.time
#
# saveRDS(rf.cv.3, "data/rf.cv.3.RDS")
# }
# Load results from disk.
rf.cv.3 <- readRDS("data/rf.cv.3.RDS")
# Check the results.
rf.cv.3
# Drill-down on the results.
confusionMatrix(rf.cv.3$finalModel$y, rf.cv.3$finalModel$predicted)
# How important was this feature?
# spam_similarity is massively more important than any other features
# it also reduced specificity (worse at predicting spam) and increased sensitivity (better at predicting ham)
# should create some skepticism as both metrics did not increase - feature may cause overfitting which we can check for
library(randomForest)
varImpPlot(rf.cv.3$finalModel)
# We've built what appears to be an effective predictive model. Time to verify
# using the test holdout data we set aside at the beginning of the project.
# First stage of this verification is running the test data through our pre-
# processing pipeline of:
# 1 - Tokenization
# 2 - Lower casing
# 3 - Stopword removal
# 4 - Stemming
# 5 - Adding bigrams
# 6 - Transform to dfm
# 7 - Ensure test dfm has same features as train dfm
# Tokenization.
test.tokens <- tokens(test$text, what = "word1",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, split_hyphens = TRUE)
# Lower case the tokens.
test.tokens <- tokens_tolower(test.tokens)
# Stopword removal.
test.tokens <- tokens_select(test.tokens, stopwords(),
selection = "remove")
# Stemming.
test.tokens <- tokens_wordstem(test.tokens, language = "english")
# Add bigrams.
test.tokens <- tokens_ngrams(test.tokens, n = 1:2)
# Convert n-grams to quanteda document-term frequency matrix.
test.tokens.dfm <- dfm(test.tokens, tolower = FALSE)
# Explore the train and test quanteda dfm objects.
# notice in our test set we have less than half the data thus we have less than half the features
# the model is expecting the number of features contained in the training data
# it won't understand anything less than that (you can have more but not less)
# it is very common to see new words in the test set that are not present in the training set
# we will need to fix this later by "creating" new test data that looks like the training data
# and stripping out any terms that appear only in the test data
# columns need to be in the same order and need to have the same meaning
train.tokens.dfm
test.tokens.dfm
# Ensure the test dfm has the same n-grams as the training dfm and the
# columns are in the same location across training and test data
#
# NOTE - In production we should expect that new text messages will
# contain n-grams that did not exist in the original training
# data. As such, we need to strip those n-grams out.
# original code (now deprecated):
# test.tokens.dfm1 <- dfm_select(test.tokens.dfm, pattern = train.tokens.dfm,
# selection = "keep")
test.tokens.dfm <- dfm_match(test.tokens.dfm, features = featnames(train.tokens.dfm))
test.tokens.matrix <- as.matrix(test.tokens.dfm)
test.tokens.dfm # now we have the same structure as the training set
# With the raw test features in place, next up is projecting the term
# counts for the n-grams into the same TF-IDF vector space as our training
# data. The high level process is as follows:
# 1 - Normalize each document (i.e, each row)
# 2 - Perform IDF multiplication using training IDF values
# Normalize all documents via TF.
# the original idf values from the training set need to be stored so they can be reused here in the test data
test.tokens.df <- apply(test.tokens.matrix, 1, term.frequency)
str(test.tokens.df)
# Lastly, calculate TF-IDF for our test corpus.
# notice we are reusing the idf values from the training data
# this is very important because new texts need to be incorporated into the same vector space
# (need to be able to maintain them over time in a production system)
test.tokens.tfidf <- apply(test.tokens.df, 2, tf.idf, idf = train.tokens.idf)
dim(test.tokens.tfidf)
View(test.tokens.tfidf[1:25, 1:25])
# Transpose the matrix
test.tokens.tfidf <- t(test.tokens.tfidf)
# Fix incomplete cases
summary(test.tokens.tfidf[1,])
test.tokens.tfidf[is.na(test.tokens.tfidf)] <- 0.0
summary(test.tokens.tfidf[1,])
# With the test data projected into the TF-IDF vector space of the training
# data we can now do the final projection into the training LSA semantic
# space (i.e. the SVD matrix factorization).
test.svd.raw <- t(sigma.inverse * u.transpose %*% t(test.tokens.tfidf))
# Lastly, we can now build the test data frame to feed into our trained
# machine learning model for predictions. First up, add the label and text_length columns.
test.svd <- data.frame(label = test$label, test.svd.raw,
text_length = test$text_length)
# Next step, calculate spam_similarity for all the test documents. First up,
# create a spam similarity matrix.
# take the raw svd features and add rows (docs) that correspond to spam from the training data
# need this to calculate the similarity between test texts and the spam texts from the training data
test.similarities <- rbind(test.svd.raw, train.irlba$v[spam.indexes,])
test.similarities <- cosine(t(test.similarities))
# NOTE - The following code was updated post-video recording due to a bug.
test.svd$spam_similarity <- rep(0.0, nrow(test.svd))
spam.cols <- (nrow(test.svd) + 1):ncol(test.similarities)
for(i in 1:nrow(test.svd)) {
# The following line has the bug fix.
test.svd$spam_similarity[i] <- mean(test.similarities[i, spam.cols])
}
# Some SMS text messages become empty as a result of stopword and special
# character removal. This results in non-finite spam similarity measures, which we correct here.
# This code was added post-video as part of the bug fix.
test.svd$spam_similarity[!is.finite(test.svd$spam_similarity)] <- 0
# Now we can make predictions on the test data set using our trained mighty
# random forest.
preds <- predict(rf.cv.3, test.svd)
# Drill-in on results
# common definition of overfitting is doing much worse on test data than training data
confusionMatrix(preds, test.svd$label)
# The definition of overfitting is doing far better on the training data (as
# evidenced by CV) than on a hold-out dataset (i.e., our test dataset).
# One potential explanation of this overfitting is the use of the spam similarity
# feature. The hypothesis here is that spam features (i.e., text content) vary
# highly, especially over time. As such, our average spam cosine similarity
# is likely to overfit to the training data. To combat this, let's rebuild a
# mighty random forest without the spam similarity feature.
train.svd$spam_similarity <- NULL
test.svd$spam_similarity <- NULL
# Create a cluster to work on 3 logical cores.
cl <- makeCluster(3, type = "SOCK")
registerDoSNOW(cl)
# {
# # Time the code execution
# start.time <- Sys.time()
#
# # Re-run the training process with the additional feature.
# set.seed(254812)
# rf.cv.4 <- train(label ~ ., data = train.svd, method = "rf",
# trControl = cv.cntrl, tuneLength = 7,
# importance = TRUE)
#
# # Processing is done, stop cluster.
# stopCluster(cl)
#
# # Total time of execution on workstation was
# total.time <- Sys.time() - start.time
# total.time
#   saveRDS(rf.cv.4, "data/rf.cv.4.RDS")
# }
# Load results from disk.
rf.cv.4 <- readRDS("data/rf.cv.4.RDS")
# Make predictions and drill-in on the results
preds <- predict(rf.cv.4, test.svd)
confusionMatrix(preds, test.svd$label)
# What next?
# ▪ Feature Engineering:
# • How about tri-grams, 4-grams, etc.?
# • We engineered TextLength – are there other features as well?
# ▪ Algorithms - we leveraged a mighty random forest, but could other algorithms do more with the data?
# • Boosted decision trees (XGBoost)?
# • Support vector machines (SVM)? They are very commonly used in text analytics and tend to work ok in high dimensional space,
#   so they could even be applied directly to the raw term columns (pre-SVD)
# ▪ Learn more ways to analyze, understand, and work with text!
# Start Here – The Basics
# Text analysis with R for students of literature
# ▪ Best introduction to thinking analytically about text.
# ▪ Accessible to a very broad audience.
# ▪ Illustrates many techniques not covered in this series (e.g., topic modeling).
# exercise ----------------------------------------------------------------
# You will first build a model to distinguish positive reviews from negative reviews using Yelp reviews
# because these reviews include a rating with each review. Your data consists of the text body of each
# review along with the star rating. Ratings with 1-2 stars count as "negative", and ratings with 4-5 stars
# are "positive". Ratings with 3 stars are "neutral" and have been dropped from the data.
# create list of packages
packages <- c("tidyverse", "quanteda", "caret")
# install any packages not currently installed
if (any(!packages %in% installed.packages())) {
install.packages(packages[!packages %in% installed.packages()[,"Package"]])
}
# load packages
lapply(packages, library, character.only = TRUE)
d <- read_csv("data/yelp_ratings.csv")
# creating training and test data
# Use caret to create a 70%/30% stratified split.
# Set the random seed for reproducibility.
set.seed(32984)
indexes <- createDataPartition(d$sentiment, times = 1,
p = 0.7, list = FALSE)
train <- d[indexes,]
test <- d[-indexes,]
# Verify proportions are equivalent in the train and test datasets
prop.table(table(train$sentiment))
prop.table(table(test$sentiment))
# Tokenize the Yelp review texts.
# remove numbers, punctuation, symbols and split hyphenated words
train.tokens <- tokens(train$text, what = "word1",
remove_numbers = TRUE, remove_punct = TRUE,
remove_symbols = TRUE, split_hyphens = TRUE)
# clean tokens further: lower case, remove stopwords, and stem
train.tokens <- train.tokens %>%
tokens_tolower() %>%
tokens_select(stopwords(), selection = "remove") %>%
tokens_wordstem(language = "english")
# Create our first bag-of-words model.
# dfm() takes in tokens and creates a document-frequency matrix (dfm)
train.tokens.dfm <- dfm(train.tokens, tolower = FALSE)
# to inspect
# train.tokens.matrix <- as.matrix(train.tokens.dfm)
# Per best practices, we will leverage cross validation (CV) as
# the basis of our modeling process.
# Setup data frame with features and labels.
train.tokens.df <- bind_cols(y_labels = train$sentiment, convert(train.tokens.dfm, to = "data.frame"))
# train <- train %>%
# mutate(sentiment = factor(sentiment, levels = 0:1, labels = c("low", "high")))
# Cleanup column names. Will modify only names that are not syntactically valid
names(train.tokens.df) <- make.names(names(train.tokens.df))
# Use caret to create stratified folds for 10-fold cross validation repeated
# 3 times (i.e., create 30 random stratified samples)
# we are using stratified cross validation in case of a class imbalance
# in the data; each random sample taken is representative
# in terms of the proportion of positive and negative reviews in the dataset
# why are we repeating the cross validation 3 times? If we take the time to conduct
# cross validation multiple times we should get more valid estimates. If we take more
# looks at the data the estimation process will be more robust.
set.seed(48743)
cv.folds <- createMultiFolds(train$sentiment, k = 10, times = 3)
cv.cntrl <- trainControl(method = "repeatedcv",
number = 10,
repeats = 3,
index = cv.folds) # since we want stratified cross-validation we need to specify the folds
# Our data frame is non-trivial in size. As such, CV runs will take
# quite a long time to run. To cut down on total execution time, use
# the doSNOW package to allow for multi-core training in parallel.
#
# WARNING - The following code is configured to run on a workstation-
# or server-class machine (i.e., 12 logical cores). Alter
# code to suit your HW environment.
#
library(doSNOW) # doSNOW works on mac and windows out of box. Some other parallel processing packages do not.
{
# Time the code execution
start.time <- Sys.time()
# Create a cluster to work on 7 logical cores.
# check how many cores your machine has available for parallel processing
# keep 1 core free for the operating system
# parallel::detectCores()
cl <- makeCluster(7, type = "SOCK") # effectively creates multiple instances of R studio and uses them all at once to process the model
registerDoSNOW(cl) # need to tell R to process in parallel
# As our data is non-trivial in size at this point, use a single decision
# tree alogrithm (rpart trees) as our first model. We will graduate to using more
# powerful algorithms later when we perform feature extraction to shrink
# the size of our data.
# e.g., could use fandom forest (rf) or XGBoost (xgbTree) instead by changing the method
# formula means predict (~) label using all other variables (.) in dataset
# trControl is the cross validation parameters
# tuneLength allows you to set the number of different configurations of the algorithm to test
# it selects the one best one and builds a model using that configuration
  rpart.cv.1 <- train(y_labels ~ ., data = train.tokens.df,
method = "rpart",
trControl = cv.cntrl,
tuneLength = 7)
# Processing is done, stop cluster.
stopCluster(cl)
# Total time of execution on workstation was approximately 4 minutes.
total.time <- Sys.time() - start.time
total.time
}
# Check out our results.
# samples = rows
# predictors = features
# check the accuracy of the best performing model
# (note: the 94.78% figure quoted earlier refers to the SMS model, not this exercise)
rpart.cv.1
context("Utilities")
test_that('Options can be set',{
# warning on nonexistent option
expect_warning(voptions(fiets=3))
# invalid 'raise' value -- not implemented yet
#expect_error(voptions(raise='aap'))
# this should run without problems
reset(voptions)
expect_equal(voptions('raise')[[1]],'none')
})
test_that("match_cells",{
d1 <- data.frame(id=paste(1:3),x=1:3,y=4:6)
d2 <- data.frame(id=paste(4:1),y=4:7,x=1:4)
expect_equal(
names(match_cells(d1,d2,id='id')[[1]])
,names(match_cells(d1,d2,id='id')[[2]])
)
expect_equal(
as.character(match_cells(d1,d2,id='id')[[1]][,'id'])
, as.character(match_cells(d1,d2,id='id')[[2]][,'id'])
)
})
test_that('validating/indicating expressions can be named',{
expect_equal(names(validator(aap=x>3)),'aap')
expect_equal(names(indicator(fiets=mean(x))),'fiets')
})
# code for these methods in confrontation.R
test_that("other methods for 'variables'",{
expect_equal(variables(women),c("height","weight"))
expect_equal(variables(as.list(women)),c("height","weight"))
expect_equal(variables(as.environment(women)),c("height","weight"))
})
test_that('compare works',{
d1 <- data.frame(x=1:3,y=4:6)
d2 <- data.frame(x=c(NA,2,NA),y=c(4,5,NA))
v <- validator(x>0,y<5)
a <- array(
c(6,6,0,6,0,4,4,0,2,2,0
,6,3,3,3,0,2,2,0,1,1,0 ),dim=c(11,2)
)
expect_equivalent(unclass(compare(v,d1,d2)),a)
})
test_that('blocks works',{
v <- validator(x + y > z, q > 0, z + x == 3)
expect_equivalent(v$blocks()[[1]],c(1,3))
expect_equivalent(v$blocks()[[2]],2)
v <- validator(
x > 0
, y > 0
, x + y == z
, u + v == w
, u > 0)
expect_equal(length(v$blocks()),2)
v <- validator(x +y ==z, x+z>0)
expect_equal(length(v$blocks()),1)
})
# source: /validate/tests/testthat/testUtils.R (repo: ingted/R-Examples)
library(huxtable)
### Name: add_footnote
### Title: Add a row with a footnote
### Aliases: add_footnote
### ** Examples
jams <- add_footnote(jams,
"* subject to availability")
jams
# source: /data/genthat_extracted_code/huxtable/examples/add_footnote.Rd.R (repo: surayaaramli/typeRrh)
data <- read.csv("/Users/lesser/git/thesis/processing/prot_csv/test/test_0.csv")
colnames(data) <- c("TIME","DISTANCE","PRESSURE")
plot(data$TIME,data$DISTANCE,type="l",col="red",lwd=1,axes=FALSE,xlab="TIME",ylab="DISTANCE",xlim=c(0,32355),ylim=c(0,70))
axis(1)
axis(2)
par(new=TRUE)
plot(data$TIME,data$PRESSURE,type="l",col="blue",lwd=1,axes=FALSE,xlab="",ylab="",ylim=c(1024,0))
mtext("PRESSURE",side=4,line=3)
axis(4)
box()
# source: /R_plot.R (repo: naoki709mm/thesis)
testlist <- list(a = 0L, b = 0L, x = c(0L, 889192448L, 0L, 3866624L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L ))
result <- do.call(grattan:::anyOutside,testlist)
str(result)
# source: /grattan/inst/testfiles/anyOutside/libFuzzer_anyOutside/anyOutside_valgrind_files/1610054800-test.R (repo: akhikolla/updated-only-Issues)
source("/home/mr984/diversity_metrics/scripts/checkplot_initials.R")
source("/home/mr984/diversity_metrics/scripts/checkplot_inf.R")
reps<-50
outerreps<-1000
size<-rev(round(10^seq(2, 5, 0.25)))[
11
]
nc<-12
plan(strategy=multisession, workers=nc)
map(rev(1:outerreps), function(x){
start<-Sys.time()
out<-checkplot_inf(flatten(flatten(SADs_list))[[2]], l=1, inds=size, reps=reps)
write.csv(out, paste("/scratch/mr984/SAD2","l",1,"inds", size, "outernew", x, ".csv", sep="_"), row.names=F)
rm(out)
print(Sys.time()-start)
})
# source: /scripts/checkplots_for_parallel_amarel/asy_845.R (repo: dushoff/diversity_metrics)
################################################################################
#
# Bayesian linear regression
# with Normal or Normal-Gamma prior for coefficients
#
################################################################################
#
# Analysing the Canadian Lynx data
# Moran (1953) fit an AR(2) model:
# y(t) = 1.05 + 1.41*y(t-1) - 0.77*y(t-2) + e(t)
#
# Moran (1953) The Statistical Analysis of the Canadian Lynx Cycle.
# Australian Journal of Zoology, 1, pp 163-173.
#
################################################################################
rm(list=ls())
den.ng = function(phi, lambda, gamma){
abs(phi)^(gamma-0.5)*besselK(abs(phi)/lambda,gamma-1/2)/(sqrt(pi)*2^(gamma-0.5)*lambda^(gamma+0.5))
}
logden = function(psi, gamma, lambda, phi){
(gamma-1.5)*log(psi)-0.5*(psi/lambda^2+phi^2/psi)
}
data(lynx)
y = log10(lynx)
n = length(y)
year = 1821:1934
par(mfrow=c(1,1))
plot(year,y,xlab="Year",ylab="Log number of trappings",main="",type="l")
title("Annual numbers of lynx trappings for 1821-1934 in Canada")
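# For reference (illustration only, not part of the original script): a quick AR(2) fit,
# which should land close to Moran's reported coefficients of 1.05, 1.41 and -0.77.
ar2 = lm(y[3:n] ~ y[2:(n-1)] + y[1:(n-2)])
coef(ar2)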
# Fitting a Gaussian AR(p) model
# -------------------------------
p = 20
yy = y[(p+1):n]
X = cbind(1,y[p:(n-1)])
for (k in 2:p)
X = cbind(X,y[(p-k+1):(n-k)])
# ML estimation
ols = lm(yy~X-1)
summary(ols)
phi.ols = ols$coef
sig.ols = summary(ols)$sigma
se.ols = sqrt(diag(solve(t(X)%*%X)))*sig.ols
qphi.ols = cbind(phi.ols+qnorm(0.025)*se.ols, phi.ols, phi.ols+qnorm(0.975)*se.ols)
par(mfrow=c(1,1))
plot(0:p,qphi.ols[,2],ylim=range(qphi.ols),pch=16,xlab="Lag",ylab="AR coefficient")
for (i in 1:(p+1))
segments(i-1,qphi.ols[i,1],i-1,qphi.ols[i,3],lwd=2)
abline(h=0,lty=2)
# Bayesian inference: hyperparameters of both priors
# --------------------------------------------------
# sigma
c0 = 2.5
d0 = 2.5
par1 = c0+(n-p)/2
# phi (normal-gamma prior)
lambda = 0.4
gamma = 0.8
# phi (normal prior)
b0 = rep(0,p+1)
V0 = 2*gamma*lambda^2
iV0 = diag(1/V0,p+1)
b0iV0 = b0/V0
# Sufficient statistics
XtX = t(X)%*%X
Xty = t(X)%*%yy
x11()
par(mfrow=c(1,1))
phi = seq(-3,3,length=1000)
plot(phi,den.ng(phi,lambda,gamma),type="l",lwd=2,xlab="",ylab="Density",main=expression(phi[j]))
lines(phi,dnorm(phi,b0,sqrt(V0)),col=2,lwd=2)
legend("topright",legend=c("Normal prior","Normal-Gamma prior"),col=2:1,lty=1,lwd=2)
points(ols$coef,rep(0,p+1),col=3,pch=16)
# MCMC set-up
M0 = 1000
M = 10000
niter = M0+M
# Bayesian inference: normal prior
# --------------------------------
draws.n = matrix(0,niter,p+2)
phi = rep(0,p+1)
for (i in 1:(niter)){
# full conditional of sigma2
par2 = d0+sum((yy-X%*%phi)^2)/2
sig2 = 1/rgamma(1,par1,par2)
# full conditional of phi
V = solve(XtX/sig2+iV0)
m = V%*%(Xty/sig2+b0iV0)
phi = m + t(chol(V))%*%rnorm(p+1)
# storing draws
draws.n[i,] = c(phi,sig2)
}
draws.n = draws.n[(M0+1):niter,]
qphi.n = t(apply(draws.n[,1:(p+1)], 2, quantile, c(0.05,0.5,0.95)))
# Bayesian inference: normal-gamma prior
# --------------------------------------
sd.psi = 0.01
draws.ng = matrix(0,niter,p+2)
phi = phi.ols
psi = rep(1,p+1)
for (i in 1:(niter)){
# full conditional of sigma2
par2 = d0+sum((yy-X%*%phi)^2)/2
sig2 = 1/rgamma(1,par1,par2)
# full conditional of phi
V = solve(XtX/sig2+diag(1/psi))
m = V%*%(Xty/sig2)
phi = m + t(chol(V))%*%rnorm(p+1)
# full conditional of psi
for (j in 1:(p+1)){
psi1 = rlnorm(1,psi[j],sd.psi)
if (psi1>0){
nume = logden(psi1,gamma,lambda,phi[j])-dlnorm(psi1,psi[j],sd.psi,log=TRUE)
deno = logden(psi[j],gamma,lambda,phi[j])-dlnorm(psi[j],psi1,sd.psi,log=TRUE)
log.alpha = min(0,nume-deno)
if (log(runif(1))<log.alpha){
psi[j] = psi1
}
}
}
# storing draws
draws.ng[i,] = c(phi,sig2)
}
draws.ng = draws.ng[(M0+1):niter,]
qphi.ng = t(apply(draws.ng[,1:(p+1)],2,quantile,c(0.025,0.5,0.975)))
par(mfrow=c(1,1))
limx = range(draws.n[,p+2],draws.ng[,p+2])
plot(density(draws.n[,p+2]),xlim=limx,main=expression(sigma),xlab="")
lines(density(draws.ng[,p+2]),col=2)
points(sig.ols,0,pch=16,col=3)
par(mfrow = c(1, 1))
plot(
0:p,
qphi.n[, 2],
ylim = range(qphi.n, qphi.ng, qphi.ols),
pch = 16,
xlab = "Lag",
ylab = "AR coefficient"
)
for (i in 1:(p + 1)) {
segments(i - 1, qphi.n[i, 1], i - 1, qphi.n[i, 3], lwd = 2)
segments(i - 1 + 0.25,
qphi.ng[i, 1],
i - 1 + 0.25,
qphi.ng[i, 3],
lwd = 2,
col = 2)
segments(i - 1 + 0.125,
qphi.ols[i, 1],
i - 1 + 0.125,
qphi.ols[i, 3],
lwd = 2,
col = 3)
points(i - 1 + 0.25, qphi.ng[i, 2], pch = 16, col = 2)
points(i - 1 + 0.125, phi.ols[i], pch = 16, col = 3)
}
abline(h = 0, lty = 2)
legend(
"topright",
legend = c("Normal prior", "Normal-Gamma prior", "OLS"),
col = 1:3,
lty = 1,
lwd = 2
)
# source: /Aula_2020-01-30 BayesianLinearReg.R (repo: btebaldi/EconometriaBaysiana)
# Input validation shared by the model-evaluation helpers below: each element of
# list_models must be a two-column numeric data frame (observed 0/1 label, predicted probability).
error_handler <- function(list_models,arg,method) {
if (!is.numeric(arg)) {
if (method == "Performance_Measure") {
stop("Error: The argument g should be numeric")
} else {
stop("Error: The argument t should be numeric")
}
}
count_cols <- sapply(list_models,function(x) ncol(x))
if (sum(count_cols != 2) > 0) {
stop("Error: Each dataframe in the list should consist of only 2 columns.
The first column with class labels (0/1) and the second column indicating the predicted probability")
}
check_col_type <- sum(unlist(lapply(list_models,function(x)
lapply(x, function(y) !is.numeric(y)))))
if (check_col_type > 0) {
stop("Error: All columns in each dataframe should be numeric")
}
check_class_lables <- unlist(lapply(list_models, function(x) unique(x[,1])))
if (sum (!(check_class_lables == 0 | check_class_lables == 1)) > 0) {
stop("Error: Class labels should be either 0 or 1")
}
check_pred_prob <- unlist(lapply(list_models,function(x) range(x[,2])))
if (sum (!(check_pred_prob >= 0 & check_pred_prob <= 1)) > 0) {
stop("Error: Predicted probability should be between zero and one")
}
}
obs_exp_summary <- function(x) {
cut_points <- obs <- pred <- NULL
colnames(x) <- c('obs','pred','cut_points')
obs_exp <- x %>% group_by(cut_points) %>% summarise(obs_zero = length(obs[obs == 0]), obs_one = length(obs[obs ==
1]), exp_zero = (1 - mean(pred)) * n(), exp_one = mean(pred) * n())
obs_exp
}
hl_function <- function(x, g, sample_size_concord = NULL) {
cut_points <- obs <- pred <- NULL
x <- x %>% mutate(cut_points = cut(x[, 2], breaks = unique(quantile(x[, 2], seq(0, 1, 1/g))), include.lowest = T))
obs_exp <- obs_exp_summary(x)
return(list(obs_exp))
}
hl_test <- function(x, g, sample_size_concord = NULL) {
hl_df <- hl_function(x, g)
hl_df <- as.data.frame(hl_df)
colnames(hl_df) <- gsub("hl_df.", "", colnames(hl_df))
chi_square <- sum(with(hl_df, (obs_zero - exp_zero)^2/exp_zero + (obs_one - exp_one)^2/exp_one))
p_value <- 1 - pchisq(chi_square, g - 2)
hl_out <- data.frame(chi_square, p_value)
return(list(hl_out))
}
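## Illustrative usage sketch (not part of the original package code): hl_test()
## expects a two-column data frame with 0/1 class labels in the first column and
## predicted probabilities in the second, as enforced by error_handler() above.
## Assumes dplyr and the pipe are available, as elsewhere in this file; wrapped
## in `if (FALSE)` so nothing runs when the file is sourced.
if (FALSE) {
  set.seed(1)
  p <- runif(500)
  toy <- data.frame(obs = rbinom(500, 1, p), pred = p)
  hl_test(toy, g = 10)  # well-calibrated predictions give a large p-value
}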
calib_function <- function(x, g, sample_size_concord = NULL) {
cut_points <- obs <- NULL
colnames(x) <- c('obs','pred')
cut_points_mid <- data.frame(bin_mid = round(((seq(0, 1, 1/g) + lag(seq(0, 1, 1/g)))/2)[-1], 2))
obs_prob <- x %>% mutate(cut_points = cut(x[, 2], breaks = seq(0, 1, 1/g), include.lowest = T)) %>%
group_by(cut_points) %>% summarise(obs_rate = sum(obs)/n())
mid_point <- function(x) {
round(mean(as.numeric(unlist(regmatches(x, gregexpr("[[:digit:]]+\\.*[[:digit:]]*", x))))), 2)
}
calib_df <- data.frame(bin_mid = sapply(1:nrow(obs_prob), function(i) mid_point(obs_prob$cut_points[i])),
obs_prob) %>% select(-cut_points)
calib_df <- merge(cut_points_mid, calib_df[c("bin_mid", "obs_rate")], by.x = "bin_mid", by.y = "bin_mid",
all.x = T)
calib_df[is.na(calib_df)] <- 0
rownames(calib_df) <- NULL
return(list(calib_df))
}
lift_function <- function(x, g, sample_size_concord = NULL) {
obs_zero <- obs_one <- NULL
if (g < length(unique(x[, 2]))) {
x <- x %>% mutate(cut_points = cut(x[, 2], breaks = unique(quantile(x[, 2], seq(0, 1, 1/g))), include.lowest = T))
} else {
x <- x %>% mutate(cut_points = as.factor(round(x[, 2], 2)))
}
obs_exp <- as.data.frame(obs_exp_summary(x))
obs_exp <- obs_exp[nrow(obs_exp):1, ] %>% mutate(Total = obs_zero + obs_one)
cum_capture_1 <- with(obs_exp, round((cumsum(obs_one)/(sum(obs_one))) * 100, 2))
fpr <- with(obs_exp, round((cumsum(obs_zero)/(sum(obs_zero))) * 100, 2))
lift_index <- with(obs_exp, cum_capture_1/(cumsum(Total)/sum(Total)))
lift_df <- data.frame(obs_exp["cut_points"], cum_capture_1, fpr, lift_index, KS = cum_capture_1 - fpr)
rownames(lift_df) = NULL
return(list(lift_df))
}
conc_disc <- function(x, g, sample_size_concord) {
obs <- pred <- p_one <- p_zero <- compare <- Count <- NULL
colnames(x) <- c('obs','pred')
if (nrow(x) > 5000) {
x <- x[sample(nrow(x), sample_size_concord), ]
}
df_one <- unlist(x %>% filter(obs == 1) %>% select(pred))
df_zero <- unlist(x %>% filter(obs == 0) %>% select(pred))
con_dis <- expand.grid(df_one, df_zero)
colnames(con_dis)[1:2] <- c("p_one", "p_zero")
con_dis <- con_dis %>% mutate(compare = c("Lesser than", "tied", "Greater than")[sign(p_one - p_zero) +
2])
con_dis <- con_dis %>% group_by(compare) %>% summarize(Count = n())
con_dis <- con_dis %>% mutate(Perct = paste(round((Count/sum(Count)) * 100, 2), "%", sep = ""))
count <- as.numeric(con_dis$Count)
c_stat <- paste(round(((count[1] + 0.5 * count[3])/sum(count)) * 100, 2), "%", "")
c_stat_df <- data.frame(label = "C-statistic", val = c_stat)
con_dis <- con_dis[-2]
colnames(c_stat_df) <- colnames(con_dis)
con_dis <- rbind(con_dis, c_stat_df)
return(list(con_dis))
}
combine_df <- function(list_df, index) {
comb_df <- do.call(rbind, sapply(list_df, "[[", index))
rep_nos <- sapply(sapply(list_df, "[[", index), function(x) nrow(x))
comb_df <- data.frame(Model = unlist(lapply(seq_along(rep_nos), function(i) rep(paste("Model", i), rep_nos[i]))),
comb_df)
comb_df
}
"Plots"
plot_HL <- function(df) {
Model <- Expected <- Value <- obs_one <- exp_one <- bins <- NULL
df$bins <- unlist(lapply(1:length(unique(df$Model)), function(i) paste("Bin", seq(1, nrow(filter(df,
Model == paste("Model", i)))))))
df$bins <- factor(df$bins, levels = unique(df$bins))
df <- df %>% gather(Expected, Value, obs_one, exp_one)
g <- ggplot(df, aes(x = bins, y = Value, fill = Expected))
g <- g + geom_bar(stat = "identity", position = "dodge") + facet_wrap(~Model)
g
}
plot_calib <- function(df) {
bin_mid <- obs_rate <- Model <- NULL
calib_plot <- ggplot(df, aes(bin_mid, obs_rate, colour = Model)) + geom_line(size = 1) + geom_point(size = 3)
calib_plot <- calib_plot + geom_abline(intercept = 0, slope = 1)
calib_plot
}
plot_lift <- function(df) {
Model <- bins <- lift_index <- NULL
df$bins <- unlist(lapply(1:length(unique(df$Model)), function(i) paste("Bin", seq(1, nrow(filter(df,
Model == paste("Model", i)))))))
df$bins <- factor(df$bins, levels = unique(df$bins))
g <- ggplot(df, aes(x = bins, y = lift_index, group = Model, colour = Model))
g <- g + geom_line(size=1) + geom_point(size=3)
g
}
plot_condis <- function(df) {
obs <- pred <- NULL
for (i in 1:length(df)) {
colnames(df[[i]]) <- c('obs','pred')
}
comb_df <- do.call(rbind, df)
colnames(comb_df) <- c('obs','pred')
reps_no <- sapply(df, function(x) nrow(x))
comb_df <- data.frame(Model = unlist(lapply(seq_along(reps_no), function(i) rep(paste("Model", i), reps_no[i]))),
comb_df)
g <- ggplot(comb_df, aes(x = pred, fill = as.factor(obs))) + geom_density(alpha = 0.5) + facet_wrap(~Model)
g
}
############## Confusion ####################################
conf_mat <- function(x, t) {
x <- x %>% mutate(pred_prob = as.factor(ifelse(x[, 2] >= t, "Pred-1", "Pred-0")))
x$pred_prob <- factor(x = x$pred_prob, levels = c("Pred-0", "Pred-1"))
conf_mat <- table(x[, 3], x[, 1])
conf_mat
}
conf_mat_metrics <- function(x, t) {
matrix <- conf_mat(x, t)
Acc <- (matrix[1, 1] + matrix[2, 2])/sum(matrix)
Acc <- round(Acc * 100, 2)
TPR <- matrix[2, 2]/sum(matrix[, 2])
TPR <- round(TPR * 100, 2)
FPR <- matrix[2, 1]/sum(matrix[, 1])
FPR <- round(FPR * 100, 2)
Prec <- matrix[2, 2]/sum(matrix[2, ])
Prec <- round(Prec * 100, 2)
output <- data.frame(Threshold = t, Acc = Acc, TPR = TPR, FPR = FPR, Prec = Prec)
output
}
conf_range <- function(x, reps, all.unique = F) {
if (all.unique == T) {
prob_values <- sort(unique(x[,2]))
} else {
prob_values <- seq(0, 1, 1/reps)
}
out <- lapply(seq_along(prob_values), function(i) conf_mat_metrics(x, prob_values[i]))
out <- do.call(rbind, out)
out
}
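## Illustrative sketch (not part of the original package code): conf_mat_metrics()
## and conf_range() summarise accuracy, TPR, FPR and precision over probability
## thresholds for the same two-column input used above. Assumes dplyr is
## available; kept inside `if (FALSE)` so it is never executed on source.
if (FALSE) {
  set.seed(2)
  p <- runif(300)
  toy <- data.frame(obs = rbinom(300, 1, p), pred = p)
  conf_mat_metrics(toy, t = 0.5)  # metrics at a single threshold
  conf_range(toy, reps = 10)      # metrics over thresholds 0, 0.1, ..., 1
}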
| /R/InternalFunctions.R | no_license | cran/IMP | R | false | false | 8,517 | r |
error_handler <- function(list_models,arg,method) {
if (!is.numeric(arg)) {
if (method == "Performance_Measure") {
stop("Error: The argument g should be numeric")
} else {
stop("Error: The argument t should be numeric")
}
}
count_cols <- sapply(list_models,function(x) ncol(x))
if (sum(count_cols != 2) > 0) {
stop("Error: Each dataframe in the list should consist of only 2 columns.
The first column with class labels (0/1) and the second column indicating the predicted probability")
}
check_col_type <- sum(unlist(lapply(list_models,function(x)
lapply(x, function(y) !is.numeric(y)))))
if (check_col_type > 0) {
stop("Error: All columns in each dataframe should be numeric")
}
check_class_labels <- unlist(lapply(list_models, function(x) unique(x[,1])))
if (sum(!(check_class_labels == 0 | check_class_labels == 1)) > 0) {
stop("Error: Class labels should be either 0 or 1")
}
check_pred_prob <- unlist(lapply(list_models,function(x) range(x[,2])))
if (sum (!(check_pred_prob >= 0 & check_pred_prob <= 1)) > 0) {
stop("Error: Predicted probability should be between zero and one")
}
}
obs_exp_summary <- function(x) {
cut_points <- obs <- pred <- NULL
colnames(x) <- c('obs','pred','cut_points')
obs_exp <- x %>% group_by(cut_points) %>% summarise(obs_zero = length(obs[obs == 0]), obs_one = length(obs[obs ==
1]), exp_zero = (1 - mean(pred)) * n(), exp_one = mean(pred) * n())
obs_exp
}
hl_function <- function(x, g, sample_size_concord = NULL) {
cut_points <- obs <- pred <- NULL
x <- x %>% mutate(cut_points = cut(x[, 2], breaks = unique(quantile(x[, 2], seq(0, 1, 1/g))), include.lowest = T))
obs_exp <- obs_exp_summary(x)
return(list(obs_exp))
}
hl_test <- function(x, g, sample_size_concord = NULL) {
hl_df <- hl_function(x, g)
hl_df <- as.data.frame(hl_df)
colnames(hl_df) <- gsub("hl_df.", "", colnames(hl_df))
chi_square <- sum(with(hl_df, (obs_zero - exp_zero)^2/exp_zero + (obs_one - exp_one)^2/exp_one))
p_value <- 1 - pchisq(chi_square, g - 2)
hl_out <- data.frame(chi_square, p_value)
return(list(hl_out))
}
calib_function <- function(x, g, sample_size_concord = NULL) {
cut_points <- obs <- NULL
colnames(x) <- c('obs','pred')
cut_points_mid <- data.frame(bin_mid = round(((seq(0, 1, 1/g) + lag(seq(0, 1, 1/g)))/2)[-1], 2))
obs_prob <- x %>% mutate(cut_points = cut(x[, 2], breaks = seq(0, 1, 1/g), include.lowest = T)) %>%
group_by(cut_points) %>% summarise(obs_rate = sum(obs)/n())
mid_point <- function(x) {
round(mean(as.numeric(unlist(regmatches(x, gregexpr("[[:digit:]]+\\.*[[:digit:]]*", x))))), 2)
}
calib_df <- data.frame(bin_mid = sapply(1:nrow(obs_prob), function(i) mid_point(obs_prob$cut_points[i])),
obs_prob) %>% select(-cut_points)
calib_df <- merge(cut_points_mid, calib_df[c("bin_mid", "obs_rate")], by.x = "bin_mid", by.y = "bin_mid",
all.x = T)
calib_df[is.na(calib_df)] <- 0
rownames(calib_df) <- NULL
return(list(calib_df))
}
lift_function <- function(x, g, sample_size_concord = NULL) {
obs_zero <- obs_one <- NULL
if (g < length(unique(x[, 2]))) {
x <- x %>% mutate(cut_points = cut(x[, 2], breaks = unique(quantile(x[, 2], seq(0, 1, 1/g))), include.lowest = T))
} else {
x <- x %>% mutate(cut_points = as.factor(round(x[, 2], 2)))
}
obs_exp <- as.data.frame(obs_exp_summary(x))
obs_exp <- obs_exp[nrow(obs_exp):1, ] %>% mutate(Total = obs_zero + obs_one)
cum_capture_1 <- with(obs_exp, round((cumsum(obs_one)/(sum(obs_one))) * 100, 2))
fpr <- with(obs_exp, round((cumsum(obs_zero)/(sum(obs_zero))) * 100, 2))
lift_index <- with(obs_exp, cum_capture_1/(cumsum(Total)/sum(Total)))
lift_df <- data.frame(obs_exp["cut_points"], cum_capture_1, fpr, lift_index, KS = cum_capture_1 - fpr)
rownames(lift_df) = NULL
return(list(lift_df))
}
conc_disc <- function(x, g, sample_size_concord) {
obs <- pred <- p_one <- p_zero <- compare <- Count <- NULL
colnames(x) <- c('obs','pred')
if (nrow(x) > 5000) {
x <- x[sample(nrow(x), sample_size_concord), ]
}
df_one <- unlist(x %>% filter(obs == 1) %>% select(pred))
df_zero <- unlist(x %>% filter(obs == 0) %>% select(pred))
con_dis <- expand.grid(df_one, df_zero)
colnames(con_dis)[1:2] <- c("p_one", "p_zero")
con_dis <- con_dis %>% mutate(compare = c("Lesser than", "tied", "Greater than")[sign(p_one - p_zero) +
2])
con_dis <- con_dis %>% group_by(compare) %>% summarize(Count = n())
con_dis <- con_dis %>% mutate(Perct = paste(round((Count/sum(Count)) * 100, 2), "%", sep = ""))
count <- as.numeric(con_dis$Count)
c_stat <- paste(round(((count[1] + 0.5 * count[3])/sum(count)) * 100, 2), "%", "")
c_stat_df <- data.frame(label = "C-statistic", val = c_stat)
con_dis <- con_dis[-2]
colnames(c_stat_df) <- colnames(con_dis)
con_dis <- rbind(con_dis, c_stat_df)
return(list(con_dis))
}
combine_df <- function(list_df, index) {
comb_df <- do.call(rbind, sapply(list_df, "[[", index))
rep_nos <- sapply(sapply(list_df, "[[", index), function(x) nrow(x))
comb_df <- data.frame(Model = unlist(lapply(seq_along(rep_nos), function(i) rep(paste("Model", i), rep_nos[i]))),
comb_df)
comb_df
}
"Plots"
plot_HL <- function(df) {
Model <- Expected <- Value <- obs_one <- exp_one <- bins <- NULL
df$bins <- unlist(lapply(1:length(unique(df$Model)), function(i) paste("Bin", seq(1, nrow(filter(df,
Model == paste("Model", i)))))))
df$bins <- factor(df$bins, levels = unique(df$bins))
df <- df %>% gather(Expected, Value, obs_one, exp_one)
g <- ggplot(df, aes(x = bins, y = Value, fill = Expected))
g <- g + geom_bar(stat = "identity", position = "dodge") + facet_wrap(~Model)
g
}
plot_calib <- function(df) {
bin_mid <- obs_rate <- Model <- NULL
calib_plot <- ggplot(df, aes(bin_mid, obs_rate, colour = Model)) + geom_line(size = 1) + geom_point(size = 3)
calib_plot <- calib_plot + geom_abline(intercept = 0, slope = 1)
calib_plot
}
plot_lift <- function(df) {
Model <- bins <- lift_index <- NULL
df$bins <- unlist(lapply(1:length(unique(df$Model)), function(i) paste("Bin", seq(1, nrow(filter(df,
Model == paste("Model", i)))))))
df$bins <- factor(df$bins, levels = unique(df$bins))
g <- ggplot(df, aes(x = bins, y = lift_index, group = Model, colour = Model))
g <- g + geom_line(size=1) + geom_point(size=3)
g
}
plot_condis <- function(df) {
obs <- pred <- NULL
for (i in 1:length(df)) {
colnames(df[[i]]) <- c('obs','pred')
}
comb_df <- do.call(rbind, df)
colnames(comb_df) <- c('obs','pred')
reps_no <- sapply(df, function(x) nrow(x))
comb_df <- data.frame(Model = unlist(lapply(seq_along(reps_no), function(i) rep(paste("Model", i), reps_no[i]))),
comb_df)
g <- ggplot(comb_df, aes(x = pred, fill = as.factor(obs))) + geom_density(alpha = 0.5) + facet_wrap(~Model)
g
}
############## Confusion ####################################
conf_mat <- function(x, t) {
x <- x %>% mutate(pred_prob = as.factor(ifelse(x[, 2] >= t, "Pred-1", "Pred-0")))
x$pred_prob <- factor(x = x$pred_prob, levels = c("Pred-0", "Pred-1"))
conf_mat <- table(x[, 3], x[, 1])
conf_mat
}
conf_mat_metrics <- function(x, t) {
matrix <- conf_mat(x, t)
Acc <- (matrix[1, 1] + matrix[2, 2])/sum(matrix)
Acc <- round(Acc * 100, 2)
TPR <- matrix[2, 2]/sum(matrix[, 2])
TPR <- round(TPR * 100, 2)
FPR <- matrix[2, 1]/sum(matrix[, 1])
FPR <- round(FPR * 100, 2)
Prec <- matrix[2, 2]/sum(matrix[2, ])
Prec <- round(Prec * 100, 2)
output <- data.frame(Threshold = t, Acc = Acc, TPR = TPR, FPR = FPR, Prec = Prec)
output
}
conf_range <- function(x, reps, all.unique = F) {
if (all.unique == T) {
prob_values <- sort(unique(x[,2]))
} else {
prob_values <- seq(0, 1, 1/reps)
}
out <- lapply(seq_along(prob_values), function(i) conf_mat_metrics(x, prob_values[i]))
out <- do.call(rbind, out)
out
}
|
recoder <- function(table, question = "qn_47"){
## Function to recode fields.
## Takes as an argument data frame in the long format
## and returns a vector.
temp <- table %>%
filter(qn == question)
responses <- vector(mode = "character", length = nrow(temp))
for (i in c(1:nrow(temp))){
cat(paste0(i, "/", nrow(temp), "\n"))
cat(paste0(temp$responses[i], "\n"))
cat("Czy chcesz poprawić odpowiedź?\n")
logic <- readline() %>% as.character()
if (logic %in% c("yes", "Yes", "tak", "Tak", "t", "T", "Y", "y")){
cat("Wpisz poprawioną odpowiedź. Oddziel średnikiem jeśli jest to lista.\n")
res <- readline() %>% as.character()
responses[i] <- res
} else {
responses[i] <- temp$responses[i]
}
}
return(responses)
}
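## Illustrative call (not part of the original script): recoder() is interactive
## (it prompts via readline), so this sketch only shows the expected input shape,
## a long-format data frame with a `qn` column and a `responses` column, as
## assumed by the filter above. The object and question names are placeholders.
if (FALSE) {
  answers_long <- data.frame(qn = c("qn_47", "qn_47"),
                             responses = c("answer 1", "answer 2"),
                             stringsAsFactors = FALSE)
  fixed_47 <- recoder(answers_long, question = "qn_47")
}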
get_similar <- function(tokens, pattern, n = 20, ignore_case = TRUE, decreasing = TRUE) {
## Function to list similar strings.
## Takes as an argument a vector of strings and returns
## strings consisting the pattern.
tokens %>%
keep(~str_detect(.x, regex(pattern, ignore_case = ignore_case))) %>%
table %>%
sort(decreasing = decreasing) %>%
head(n = n)
}
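## Small usage sketch (not in the original file): assumes purrr and stringr are
## attached, as implied by keep() and str_detect() above; the tokens are made up.
if (FALSE) {
  tokens <- c("Uniwersytet Warszawski", "uniwersytet", "Politechnika", "UW")
  get_similar(tokens, pattern = "uniw", n = 5)
}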
get_png <- function(filename) {
## Function to read png files.
## Takes as an argument a file name.
grid::rasterGrob(png::readPNG(filename), interpolate = TRUE)
}
compute_number <- function(table = data_wide,
question = qn_3,
response = "Tak",
conditional = FALSE,
question2 = qn_3,
response2 = "Tak"){
question <- enquo(question)
question2 <- enquo(question2)
table <- table %>%
filter((!is.na(!! question)) & (!! question == response))
if (conditional){
number <- table %>%
filter((!is.na(!! question2)) & (!! question2 == response2)) %>%
nrow()
}else{
number <- table %>%
nrow()
}
return(number)
}
filter_data_wide <- function(table = data_wide, question = qn_17, response = "Nie"){
question <- enquo(question)
table %>%
filter(!is.na(!! question) & !! question == response)
}
plot_con_questions <- function(table = data_wide,
question = qn_3,
conditional = FALSE,
question2 = qn_3,
xmin = -3, xmax = 4, ymin = 85, ymax = 105,
legend_title = "",
plot_title = "",
reorder = FALSE,
legend = TRUE,
COLORS = COLORS[1:2]){
plot_title <- str_wrap(plot_title)
legend_title <- str_wrap(legend_title,60)
question <- enquo(question)
question2 <- enquo(question2)
table <- table %>%
filter(!is.na(!! question)) %>%
mutate(label = !! question)
if(reorder){
order_question <- table %>%
group_by(label) %>%
summarise(count = n()) %>%
arrange((count)) %>%
pull(label)
table <- table %>%
mutate(label = factor(label, levels = order_question))
}
if (conditional){
table <- table %>%
filter(!is.na(!! question2)) %>%
filter(!! question2 != "") %>%
mutate(label2 = !! question2)
plot_con <- table %>%
ggplot(aes(x = label, fill = label2)) +
geom_bar(show.legend = legend) +
theme_classic() +
labs(title = plot_title, x = "", y = "Liczebność")+
#annotation_custom(l, xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax) +
#coord_cartesian(clip = "off") +
coord_flip() +
scale_y_continuous(expand = expansion(mult = c(0, .1))) +
scale_fill_manual(legend_title,values=COLORS) +
theme(legend.position="bottom")
}else{
plot_con <- table %>%
ggplot(aes(x = label)) +
geom_bar(fill = "#123e65") +
theme_classic() +
labs(title = plot_title, x = "", y = "Liczebność")+
#annotation_custom(l, xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax) +
#coord_cartesian(clip = "off") +
coord_flip() +
scale_y_continuous(expand = expansion(mult = c(0, .1)))
}
print(plot_con)
}
make_table <- function(table = data_wide,
question = qn_3,
conditional = FALSE,
question2 = qn_3,
cap = ""){
question <- enquo(question)
question2 <- enquo(question2)
question_label <- question_show(question = question)
question_label2 <- question_show(question = question2)
if(conditional){
temp <- table %>%
filter(!is.na(!! question)) %>%
filter(!! question2 != "") %>%
mutate(label = !! question,
label2 = !! question2) %>%
group_by(label, label2) %>%
summarise(Frequency = n()) %>%
select(!! question_label := label,
!! question_label2 := label2,
Liczebność = Frequency) %>%
kable(caption = cap, "html") %>%
kable_styling(bootstrap_options = c("striped","hover"),
font_size = 10,
full_width = TRUE)
} else {
temp <- table %>%
filter(!is.na(!! question)) %>%
filter(!! question2 != "") %>%
mutate(label = !! question) %>%
group_by(label) %>%
summarise(Frequency = n()) %>%
select(!! question_label2 := label,
Liczebność = Frequency) %>%
kable(caption = cap, "html") %>%
kable_styling(bootstrap_options = c("striped","hover"),
font_size = 10,
full_width = TRUE)
}
return(temp)
}
question_show <- function(table = QUESTION_LOOKUP, question = 'qn_1', lang = "polski"){
table %>%
pivot_longer(cols = polski:english,
values_to = "value",
names_to = "language") %>%
pivot_wider(names_from = qn,
values_from = value) %>%
filter(language == lang) %>%
select(!! question) %>%
pull()
}
make_open_table <- function(table = data_wide,
question = qn_4,
defualt_cap = TRUE,
custom_cap = "",
stupid_answers = ""
){
question <- enquo(question)
if (defualt_cap){
cap <- question_show(question = question) %>%
str_remove(pattern = "_\\d")
} else {
cap <- custom_cap
}
table %>%
filter(!is.na(!! question)) %>%
filter(nchar(!! question) > 10) %>%
filter(!(!! question) %in% stupid_answers) %>%
select(Jednostka = faculty_label, position, Odpowiedź = !! question) %>%
ungroup(id) %>%
mutate(id = 1:n(),
Jednostka = as.factor(Jednostka),
position = as.factor(position)) %>%
rename("Rodzaj kształcenia" = position) %>%
DT::datatable(caption = cap,
rownames = FALSE,
filter = "top",
options = list(sDom = '<"top">lrt<"bottom">ip'))
}
| /utils/utils.R | no_license | MikoBie/ankieta_sd | R | false | false | 7,092 | r | recoder <- function(table, question = "qn_47"){
## Function to recode fields.
## Takes as an argument data frame in the long format
## and returns a vector.
temp <- table %>%
filter(qn == question)
responses <- vector(mode = "character", length = nrow(temp))
for (i in c(1:nrow(temp))){
cat(paste0(i, "/", nrow(temp), "\n"))
cat(paste0(temp$responses[i], "\n"))
cat("Czy chcesz poprawić odpowiedź?\n")
logic <- readline() %>% as.character()
if (logic %in% c("yes", "Yes", "tak", "Tak", "t", "T", "Y", "y")){
cat("Wpisz poprawioną odpowiedź. Oddziel średnikiem jeśli jest to lista.\n")
res <- readline() %>% as.character()
responses[i] <- res
} else {
responses[i] <- temp$responses[i]
}
}
return(responses)
}
get_similar <- function(tokens, pattern, n = 20, ignore_case = TRUE, decreasing = TRUE) {
## Function to list similar strings.
## Takes as an argument a vector of strings and returns
## strings consisting the pattern.
tokens %>%
keep(~str_detect(.x, regex(pattern, ignore_case = ignore_case))) %>%
table %>%
sort(decreasing = decreasing) %>%
head(n = n)
}
get_png <- function(filename) {
## Function to read png files.
## Takes as an argument a file name.
grid::rasterGrob(png::readPNG(filename), interpolate = TRUE)
}
compute_number <- function(table = data_wide,
question = qn_3,
response = "Tak",
conditional = FALSE,
question2 = qn_3,
response2 = "Tak"){
question <- enquo(question)
question2 <- enquo(question2)
table <- table %>%
filter((!is.na(!! question)) & (!! question == response))
if (conditional){
number <- table %>%
filter((!is.na(!! question2)) & (!! question2 == response2)) %>%
nrow()
}else{
number <- table %>%
nrow()
}
return(number)
}
filter_data_wide <- function(table = data_wide, question = qn_17, response = "Nie"){
question <- enquo(question)
table %>%
filter(!is.na(!! question) & !! question == response)
}
plot_con_questions <- function(table = data_wide,
question = qn_3,
conditional = FALSE,
question2 = qn_3,
xmin = -3, xmax = 4, ymin = 85, ymax = 105,
legend_title = "",
plot_title = "",
reorder = FALSE,
legend = TRUE,
COLORS = COLORS[1:2]){
plot_title <- str_wrap(plot_title)
legend_title <- str_wrap(legend_title,60)
question <- enquo(question)
question2 <- enquo(question2)
table <- table %>%
filter(!is.na(!! question)) %>%
mutate(label = !! question)
if(reorder){
order_question <- table %>%
group_by(label) %>%
summarise(count = n()) %>%
arrange((count)) %>%
pull(label)
table <- table %>%
mutate(label = factor(label, levels = order_question))
}
if (conditional){
table <- table %>%
filter(!is.na(!! question2)) %>%
filter(!! question2 != "") %>%
mutate(label2 = !! question2)
plot_con <- table %>%
ggplot(aes(x = label, fill = label2)) +
geom_bar(show.legend = legend) +
theme_classic() +
labs(title = plot_title, x = "", y = "Liczebność")+
#annotation_custom(l, xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax) +
#coord_cartesian(clip = "off") +
coord_flip() +
scale_y_continuous(expand = expansion(mult = c(0, .1))) +
scale_fill_manual(legend_title,values=COLORS) +
theme(legend.position="bottom")
}else{
plot_con <- table %>%
ggplot(aes(x = label)) +
geom_bar(fill = "#123e65") +
theme_classic() +
labs(title = plot_title, x = "", y = "Liczebność")+
#annotation_custom(l, xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax) +
#coord_cartesian(clip = "off") +
coord_flip() +
scale_y_continuous(expand = expansion(mult = c(0, .1)))
}
print(plot_con)
}
make_table <- function(table = data_wide,
question = qn_3,
conditional = FALSE,
question2 = qn_3,
cap = ""){
question <- enquo(question)
question2 <- enquo(question2)
question_label <- question_show(question = question)
question_label2 <- question_show(question = question2)
if(conditional){
temp <- table %>%
filter(!is.na(!! question)) %>%
filter(!! question2 != "") %>%
mutate(label = !! question,
label2 = !! question2) %>%
group_by(label, label2) %>%
summarise(Frequency = n()) %>%
select(!! question_label := label,
!! question_label2 := label2,
Liczebność = Frequency) %>%
kable(caption = cap, "html") %>%
kable_styling(bootstrap_options = c("striped","hover"),
font_size = 10,
full_width = TRUE)
} else {
temp <- table %>%
filter(!is.na(!! question)) %>%
filter(!! question2 != "") %>%
mutate(label = !! question) %>%
group_by(label) %>%
summarise(Frequency = n()) %>%
select(!! question_label2 := label,
Liczebność = Frequency) %>%
kable(caption = cap, "html") %>%
kable_styling(bootstrap_options = c("striped","hover"),
font_size = 10,
full_width = TRUE)
}
return(temp)
}
question_show <- function(table = QUESTION_LOOKUP, question = 'qn_1', lang = "polski"){
table %>%
pivot_longer(cols = polski:english,
values_to = "value",
names_to = "language") %>%
pivot_wider(names_from = qn,
values_from = value) %>%
filter(language == lang) %>%
select(!! question) %>%
pull()
}
make_open_table <- function(table = data_wide,
question = qn_4,
defualt_cap = TRUE,
custom_cap = "",
stupid_answers = ""
){
question <- enquo(question)
if (defualt_cap){
cap <- question_show(question = question) %>%
str_remove(pattern = "_\\d")
} else {
cap <- custom_cap
}
table %>%
filter(!is.na(!! question)) %>%
filter(nchar(!! question) > 10) %>%
filter(!(!! question) %in% stupid_answers) %>%
select(Jednostka = faculty_label, position, Odpowiedź = !! question) %>%
ungroup(id) %>%
mutate(id = 1:n(),
Jednostka = as.factor(Jednostka),
position = as.factor(position)) %>%
rename("Rodzaj kształcenia" = position) %>%
DT::datatable(caption = cap,
rownames = FALSE,
filter = "top",
options = list(sDom = '<"top">lrt<"bottom">ip'))
}
|
#' Plot a rgcca_stability object.
#'
#' The fitted RGCCA model returned by rgcca_stability is plotted.
#' All arguments are forwarded to the plot.rgcca function.
#' @param x Object of type "stability" produced by rgcca_stability.
#' @param ... Arguments for the plot.rgcca function.
#' @return A ggplot2 plot object.
#' @examples
#' data(Russett)
#' blocks <- list(
#' agriculture = Russett[, seq(3)],
#' industry = Russett[, 4:5],
#' politic = Russett[, 6:11]
#' )
#' fit.sgcca <- rgcca(blocks, sparsity = c(.8, .9, .6))
#' res <- rgcca_stability(
#' fit.sgcca, n_boot = 10, verbose = TRUE, keep = rep(.1, 3)
#' )
#' plot(res, type = "weights")
#' @export
plot.stability <- function(x, ...) {
stopifnot(is(x, "stability"))
plot(x$rgcca_res, ...)
}
| /R/plot.stability.R | no_license | vguillemot/RGCCA | R | false | false | 761 | r | #' Plot a rgcca_stability object.
#'
#' The fitted RGCCA model returned by rgcca_stability is plotted.
#' All arguments are forwarded to the plot.rgcca function.
#' @param x Object of type "stability" produced by rgcca_stability.
#' @param ... Arguments for the plot.rgcca function.
#' @return A ggplot2 plot object.
#' @examples
#' data(Russett)
#' blocks <- list(
#' agriculture = Russett[, seq(3)],
#' industry = Russett[, 4:5],
#' politic = Russett[, 6:11]
#' )
#' fit.sgcca <- rgcca(blocks, sparsity = c(.8, .9, .6))
#' res <- rgcca_stability(
#' fit.sgcca, n_boot = 10, verbose = TRUE, keep = rep(.1, 3)
#' )
#' plot(res, type = "weights")
#' @export
plot.stability <- function(x, ...) {
stopifnot(is(x, "stability"))
plot(x$rgcca_res, ...)
}
|
## Creates a special "matrix" object that caches its inverse: the matrix and its
## (lazily computed) inverse live in the closure environment of the returned
## accessor functions instead of the global workspace.
makeCacheMatrix <- function(x = matrix()) {
  inverse_matrix <- NULL
  set_matrix <- function(z) {
    x <<- z
    inverse_matrix <<- NULL
  }
  get_matrix <- function() {
    x
  }
  set_inverse <- function(y = matrix()) {
    inverse_matrix <<- y
  }
  get_inverse <- function() {
    inverse_matrix
  }
  list(set_matrix = set_matrix, get_matrix = get_matrix,
       set_inverse = set_inverse, get_inverse = get_inverse)
}
## Returns the inverse of the special "matrix" created by makeCacheMatrix():
## the cached value is reused when available, otherwise the inverse is computed
## with solve(), stored back in the cache and returned.
cacheSolve <- function(x, ...) {
  inverse_matrix <- x$get_inverse()
  if (!is.null(inverse_matrix)) {
    message("Getting cached Inverse")
  } else {
    message("Creating new Inverse")
    inverse_matrix <- solve(x$get_matrix(), ...)
    x$set_inverse(inverse_matrix)
  }
  ## Return a matrix that is the inverse of 'x'
  inverse_matrix
}
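## Example usage (illustrative, not part of the original assignment file):
## m  <- matrix(c(2, 0, 0, 2), nrow = 2, ncol = 2)
## cm <- makeCacheMatrix(m)
## cacheSolve(cm)   # first call computes the inverse ("Creating new Inverse")
## cacheSolve(cm)   # second call reuses the cache ("Getting cached Inverse")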
| /cachematrix.R | no_license | Priyamvadav/ProgrammingAssignment2 | R | false | false | 929 | r | ## This function creates a list, gets the matrix and then sets inverse, gets inverse etc.
##
## Inverse Matrix
makeCacheMatrix <- function(x = matrix()) {original_matrix <<- x
inverse_matrix <<- NULL
set_matrix = function(z)
{
original_matrix <<- z
inverse_matrix <<- NULL
}
get_matrix = function()
{
original_matrix
}
set_inverse = function(y = matrix())
{
inverse_matrix <<- y
}
get_inverse = function()
{
inverse_matrix
}
list( set_matrix = set_matrix, get_matrix = get_matrix, set_inverse = set_inverse, get_inverse = get_inverse)
}
## Checks if inverse is already calculated. If not it does it.
cacheSolve <- function(x, ...) {if(!is.null(inverse_matrix))
{
message("Getting cached Inverse")
}
else
{
message("Creating new Inverse")
inverse_matrix <<- solve(original_matrix)
}
inverse_matrix
## Return a matrix that is the inverse of 'x'
}
|
#' Ratio Variable Creation
#'
#' `step_ratio` creates a *specification* of a recipe
#' step that will create one or more ratios out of numeric
#' variables.
#'
#' @inheritParams step_center
#' @inherit step_center return
#' @param ... One or more selector functions to choose which
#' variables will be used in the *numerator* of the ratio.
#' When used with `denom_vars`, the dots indicates which
#' variables are used in the *denominator*. See
#' [selections()] for more details. For the `tidy`
#' method, these are not currently used.
#' @param role For terms created by this step, what analysis role
#' should they be assigned?. By default, the function assumes that
#' the newly created ratios created by the original variables will
#' be used as predictors in a model.
#' @param denom A call to `denom_vars` to specify which
#' variables are used in the denominator that can include specific
#' variable names separated by commas or different selectors (see
#' [selections()]). If a column is included in both lists
#' to be numerator and denominator, it will be removed from the
#' listing.
#' @param naming A function that defines the naming convention for
#' new ratio columns.
#' @param columns The column names used in the ratios. This
#' argument is not populated until [prep.recipe()] is
#' executed.
#' @return An updated version of `recipe` with the new step
#' added to the sequence of existing steps (if any). For the
#' `tidy` method, a tibble with columns `terms` (the
#' selectors or variables selected) and `denom`.
#' @keywords datagen
#' @concept preprocessing
#' @export
#' @examples
#' library(recipes)
#' library(modeldata)
#' data(biomass)
#'
#' biomass$total <- apply(biomass[, 3:7], 1, sum)
#' biomass_tr <- biomass[biomass$dataset == "Training",]
#' biomass_te <- biomass[biomass$dataset == "Testing",]
#'
#' rec <- recipe(HHV ~ carbon + hydrogen + oxygen + nitrogen +
#' sulfur + total,
#' data = biomass_tr)
#'
#' ratio_recipe <- rec %>%
#' # all predictors over total
#' step_ratio(all_predictors(), denom = denom_vars(total)) %>%
#' # get rid of the original predictors
#' step_rm(all_predictors(), -ends_with("total"))
#'
#' ratio_recipe <- prep(ratio_recipe, training = biomass_tr)
#'
#' ratio_data <- bake(ratio_recipe, biomass_te)
#' ratio_data
step_ratio <-
function(recipe,
...,
role = "predictor",
trained = FALSE,
denom = denom_vars(),
naming = function(numer, denom)
make.names(paste(numer, denom, sep = "_o_")),
columns = NULL,
skip = FALSE,
id = rand_id("ratio")) {
if (is_empty(denom))
stop("Please supply at least one denominator variable specification. ",
"See ?selections.", call. = FALSE)
add_step(
recipe,
step_ratio_new(
terms = ellipse_check(...),
role = role,
trained = trained,
denom = denom,
naming = naming,
columns = columns,
skip = skip,
id = id
)
)
}
step_ratio_new <-
function(terms, role, trained, denom, naming, columns, skip, id) {
step(
subclass = "ratio",
terms = terms,
role = role,
trained = trained,
denom = denom,
naming = naming,
columns = columns,
skip = skip,
id = id
)
}
#' @export
prep.step_ratio <- function(x, training, info = NULL, ...) {
col_names <- expand.grid(
top = terms_select(x$terms, info = info),
bottom = terms_select(x$denom, info = info),
stringsAsFactors = FALSE
)
col_names <- col_names[!(col_names$top == col_names$bottom), ]
if (nrow(col_names) == 0)
stop("No variables were selected for making ratios", call. = FALSE)
if (any(info$type[info$variable %in% col_names$top] != "numeric"))
stop("The ratio variables should be numeric")
if (any(info$type[info$variable %in% col_names$bottom] != "numeric"))
stop("The ratio variables should be numeric")
step_ratio_new(
terms = x$terms,
role = x$role,
trained = TRUE,
denom = x$denom,
naming = x$naming,
columns = col_names,
skip = x$skip,
id = x$id
)
}
#' @export
bake.step_ratio <- function(object, new_data, ...) {
res <- new_data[, object$columns$top] /
new_data[, object$columns$bottom]
colnames(res) <-
apply(object$columns, 1, function(x)
object$naming(x[1], x[2]))
if (!is_tibble(res))
res <- as_tibble(res)
new_data <- bind_cols(new_data, res)
if (!is_tibble(new_data))
new_data <- as_tibble(new_data)
new_data
}
print.step_ratio <-
function(x, width = max(20, options()$width - 30), ...) {
cat("Ratios from ")
if (x$trained) {
vars <- c(unique(x$columns$top), unique(x$columns$bottom))
cat(format_ch_vec(vars, width = width))
} else
cat(format_selectors(c(x$terms, x$denom), width = width))
if (x$trained)
cat(" [trained]\n")
else
cat("\n")
invisible(x)
}
#' @export
#' @rdname step_ratio
denom_vars <- function(...) quos(...)
#' @rdname step_ratio
#' @param x A `step_ratio` object
#' @export
tidy.step_ratio <- function(x, ...) {
if (is_trained(x)) {
res <- x$columns
colnames(res) <- c("terms", "denom")
res <- as_tibble(res)
} else {
res <- expand.grid(terms = sel2char(x$terms),
denom = sel2char(x$denom),
stringsAsFactors = FALSE)
res <- as_tibble(res)
}
res$id <- x$id
res
}
| /R/ratio.R | no_license | EdwinTh/recipes | R | false | false | 5,530 | r | #' Ratio Variable Creation
#'
#' `step_ratio` creates a a *specification* of a recipe
#' step that will create one or more ratios out of numeric
#' variables.
#'
#' @inheritParams step_center
#' @inherit step_center return
#' @param ... One or more selector functions to choose which
#' variables will be used in the *numerator* of the ratio.
#' When used with `denom_vars`, the dots indicates which
#' variables are used in the *denominator*. See
#' [selections()] for more details. For the `tidy`
#' method, these are not currently used.
#' @param role For terms created by this step, what analysis role
#' should they be assigned?. By default, the function assumes that
#' the newly created ratios created by the original variables will
#' be used as predictors in a model.
#' @param denom A call to `denom_vars` to specify which
#' variables are used in the denominator that can include specific
#' variable names separated by commas or different selectors (see
#' [selections()]). If a column is included in both lists
#' to be numerator and denominator, it will be removed from the
#' listing.
#' @param naming A function that defines the naming convention for
#' new ratio columns.
#' @param columns The column names used in the ratios. This
#' argument is not populated until [prep.recipe()] is
#' executed.
#' @return An updated version of `recipe` with the new step
#' added to the sequence of existing steps (if any). For the
#' `tidy` method, a tibble with columns `terms` (the
#' selectors or variables selected) and `denom`.
#' @keywords datagen
#' @concept preprocessing
#' @export
#' @examples
#' library(recipes)
#' library(modeldata)
#' data(biomass)
#'
#' biomass$total <- apply(biomass[, 3:7], 1, sum)
#' biomass_tr <- biomass[biomass$dataset == "Training",]
#' biomass_te <- biomass[biomass$dataset == "Testing",]
#'
#' rec <- recipe(HHV ~ carbon + hydrogen + oxygen + nitrogen +
#' sulfur + total,
#' data = biomass_tr)
#'
#' ratio_recipe <- rec %>%
#' # all predictors over total
#' step_ratio(all_predictors(), denom = denom_vars(total)) %>%
#' # get rid of the original predictors
#' step_rm(all_predictors(), -ends_with("total"))
#'
#' ratio_recipe <- prep(ratio_recipe, training = biomass_tr)
#'
#' ratio_data <- bake(ratio_recipe, biomass_te)
#' ratio_data
step_ratio <-
function(recipe,
...,
role = "predictor",
trained = FALSE,
denom = denom_vars(),
naming = function(numer, denom)
make.names(paste(numer, denom, sep = "_o_")),
columns = NULL,
skip = FALSE,
id = rand_id("ratio")) {
if (is_empty(denom))
stop("Please supply at least one denominator variable specification. ",
"See ?selections.", call. = FALSE)
add_step(
recipe,
step_ratio_new(
terms = ellipse_check(...),
role = role,
trained = trained,
denom = denom,
naming = naming,
columns = columns,
skip = skip,
id = id
)
)
}
step_ratio_new <-
function(terms, role, trained, denom, naming, columns, skip, id) {
step(
subclass = "ratio",
terms = terms,
role = role,
trained = trained,
denom = denom,
naming = naming,
columns = columns,
skip = skip,
id = id
)
}
#' @export
prep.step_ratio <- function(x, training, info = NULL, ...) {
col_names <- expand.grid(
top = terms_select(x$terms, info = info),
bottom = terms_select(x$denom, info = info),
stringsAsFactors = FALSE
)
col_names <- col_names[!(col_names$top == col_names$bottom), ]
if (nrow(col_names) == 0)
stop("No variables were selected for making ratios", call. = FALSE)
if (any(info$type[info$variable %in% col_names$top] != "numeric"))
stop("The ratio variables should be numeric")
if (any(info$type[info$variable %in% col_names$bottom] != "numeric"))
stop("The ratio variables should be numeric")
step_ratio_new(
terms = x$terms,
role = x$role,
trained = TRUE,
denom = x$denom,
naming = x$naming,
columns = col_names,
skip = x$skip,
id = x$id
)
}
#' @export
bake.step_ratio <- function(object, new_data, ...) {
res <- new_data[, object$columns$top] /
new_data[, object$columns$bottom]
colnames(res) <-
apply(object$columns, 1, function(x)
object$naming(x[1], x[2]))
if (!is_tibble(res))
res <- as_tibble(res)
new_data <- bind_cols(new_data, res)
if (!is_tibble(new_data))
new_data <- as_tibble(new_data)
new_data
}
print.step_ratio <-
function(x, width = max(20, options()$width - 30), ...) {
cat("Ratios from ")
if (x$trained) {
vars <- c(unique(x$columns$top), unique(x$columns$bottom))
cat(format_ch_vec(vars, width = width))
} else
cat(format_selectors(c(x$terms, x$denom), width = width))
if (x$trained)
cat(" [trained]\n")
else
cat("\n")
invisible(x)
}
#' @export
#' @rdname step_ratio
denom_vars <- function(...) quos(...)
#' @rdname step_ratio
#' @param x A `step_ratio` object
#' @export
tidy.step_ratio <- function(x, ...) {
if (is_trained(x)) {
res <- x$columns
colnames(res) <- c("terms", "denom")
res <- as_tibble(res)
} else {
res <- expand.grid(terms = sel2char(x$terms),
denom = sel2char(x$denom),
stringsAsFactors = FALSE)
res <- as_tibble(res)
}
res$id <- x$id
res
}
|
### Generate random sample from normal distribution
x <- rnorm(100, 0, 2)
mean(x)
var(x)
### Shortest confidence interval
## 1. Bisection method
lambda <- 0.00102464
a <- 0 ; b <- 99
f <- function(x) {dchisq(x, 99)-lambda}
sign(f(0)*f(99))
tol <- 10^{-8}
for (i in 1:100) {
c <- (a+b)/2
newa <- c*(f(c)*f(b)<0)+a*(1-(f(c)*f(b)<0))
newb <- c*(f(c)*f(a)<0)+b*(1-(f(c)*f(a)<0))
a <- newa ; b <- newb
err <- abs(a-b)
print(c(i,a,b))
if (err<tol) break
}
(c <- (a+b)/2)
lambda <- 0.00102464
a <- 99 ; b <- 150
f <- function(x) {dchisq(x, 99)-lambda}
sign(f(99)*f(150))
tol <- 10^{-8}
for (i in 1:100) {
d <- (a+b)/2
newa <- d*(f(d)*f(b)<0)+a*(1-(f(d)*f(b)<0))
newb <- d*(f(d)*f(a)<0)+b*(1-(f(d)*f(a)<0))
a <- newa ; b <- newb
err <- abs(a-b)
print(c(i,a,b))
if (err<tol) break
}
(d <- (a+b)/2)
pchisq(d, 99) - pchisq(c, 99)
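## Cross-check (added for illustration, not in the original script): uniroot()
## should recover the same two density cut points found by the bisection above.
c.check <- uniroot(function(x) dchisq(x, 99) - lambda, lower = 0, upper = 98)$root
d.check <- uniroot(function(x) dchisq(x, 99) - lambda, lower = 98, upper = 150)$root
c(c.check, d.check)
pchisq(d.check, 99) - pchisq(c.check, 99) # coverage should again be about 0.99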
## 2. Result
# Shortest confidence interval
S.C.I <- c((99*var(x))/d, (99*var(x))/c)
# General confidence interval
G.C.I <- c((99*var(x))/qchisq(0.995, 99), (99*var(x))/qchisq(0.005, 99))
# Compare size
d - c
S.C.I[2] - S.C.I[1]
qchisq(0.995, 99) - qchisq(0.005, 99)
G.C.I[2] - G.C.I[1]
## 3. Graph
x1 <- rchisq(100, 99)
y <- seq(0, 150, by=0.001)
hist(x1, main="Chi-square distribution with df=99", col="yellow", probability=T)
lines(y, dchisq(y, 99), col="black", lwd=2)
abline(h=lambda, col="green")
abline(v=c, col="red") ; abline(v=d, col="red")
abline(v=qchisq(0.005, 99), col="blue") ; abline(v=qchisq(0.995, 99), col="blue")
legend("topright", c("S.C.I","G.C.I"), col=c("red","blue"), lty=c(1,1))
| /JBNU_Statistics_Ph.D/Ph.D dissertation/Censored Regression, Importance Sampling/전산통계/assignment/4월 5일 제출/99% shortest confidence interval (201530175 황성윤).R | no_license | hsyliark/GSS_MBA_for_Big_Data | R | false | false | 1,596 | r | ### Generate random sample from normal distribution
x <- rnorm(100, 0, 2)
mean(x)
var(x)
### Shortest confidence interval
## 1. Bisection method
lambda <- 0.00102464
a <- 0 ; b <- 99
f <- function(x) {dchisq(x, 99)-lambda}
sign(f(0)*f(99))
tol <- 10^{-8}
for (i in 1:100) {
c <- (a+b)/2
newa <- c*(f(c)*f(b)<0)+a*(1-(f(c)*f(b)<0))
newb <- c*(f(c)*f(a)<0)+b*(1-(f(c)*f(a)<0))
a <- newa ; b <- newb
err <- abs(a-b)
print(c(i,a,b))
if (err<tol) break
}
(c <- (a+b)/2)
lambda <- 0.00102464
a <- 99 ; b <- 150
f <- function(x) {dchisq(x, 99)-lambda}
sign(f(99)*f(150))
tol <- 10^{-8}
for (i in 1:100) {
d <- (a+b)/2
newa <- d*(f(d)*f(b)<0)+a*(1-(f(d)*f(b)<0))
newb <- d*(f(d)*f(a)<0)+b*(1-(f(d)*f(a)<0))
a <- newa ; b <- newb
err <- abs(a-b)
print(c(i,a,b))
if (err<tol) break
}
(d <- (a+b)/2)
pchisq(d, 99) - pchisq(c, 99)
## 2. Result
# Shortest confidence interval
S.C.I <- c((99*var(x))/d, (99*var(x))/c)
# General confidence interval
G.C.I <- c((99*var(x))/qchisq(0.995, 99), (99*var(x))/qchisq(0.005, 99))
# Compare size
d - c
S.C.I[2] - S.C.I[1]
qchisq(0.995, 99) - qchisq(0.005, 99)
G.C.I[2] - G.C.I[1]
## 3. Graph
x1 <- rchisq(100, 99)
y <- seq(0, 150, by=0.001)
hist(x1, main="Chi-square distribution with df=99", col="yellow", probability=T)
lines(y, dchisq(y, 99), col="black", lwd=2)
abline(h=lambda, col="green")
abline(v=c, col="red") ; abline(v=d, col="red")
abline(v=qchisq(0.005, 99), col="blue") ; abline(v=qchisq(0.995, 99), col="blue")
legend("topright", c("S.C.I","G.C.I"), col=c("red","blue"), lty=c(1,1))
|
# input: information about probes and controls
# output: annotation
annotAgilent = function(probeInfo, controls){
if(probeInfo$SystematicName[1] == "Pro25G"){
annot = annot_AgilentHuman1A(probeInfo, controls)
}else{
rows_names = probeInfo$SystematicName[which(controls==0)]
annot = AnnotationDbi::mget(rows_names, revmap(org.Hs.egACCNUM), ifnotfound = NA)
annot = lapply(annot,`[[`, 1)
}
return(annot)
}
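# Illustrative call (not part of the original file); assumes AnnotationDbi and
# org.Hs.eg.db are attached so that revmap(org.Hs.egACCNUM) is available, and
# that the probe accessions map to Entrez IDs as in the else-branch above.
if (FALSE) {
  library(AnnotationDbi)
  library(org.Hs.eg.db)
  probeInfo <- data.frame(SystematicName = c("NM_000014", "NM_000546"),
                          stringsAsFactors = FALSE)
  controls <- c(0, 0)
  annotAgilent(probeInfo, controls)  # one identifier per non-control probe
}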
| /R/annotAgilent.R | no_license | EwaMarek/FindReference | R | false | false | 440 | r | # na wejscie: informacje o sondach i kontrolach
# na wyjcie: adnotacja
annotAgilent = function(probeInfo, controls){
if(probeInfo$SystematicName[1] == "Pro25G"){
annot = annot_AgilentHuman1A(probeInfo, controls)
}else{
rows_names = probeInfo$SystematicName[which(controls==0)]
annot = AnnotationDbi::mget(rows_names, revmap(org.Hs.egACCNUM), ifnotfound = NA)
annot = lapply(annot,`[[`, 1)
}
return(annot)
}
|
coordinates<-read.table("gene_coordinates_ENSEMBL_OryCun2_18April20.txt",skip = 1)
colnames(coordinates)<-c("Gene_ID","Chromosome","Start","End")
# keep only autosomes 1-21 (drop scaffolds, chromosome X and MT)
coordinates<-subset(coordinates,coordinates$Chromosome %in% 1:21)
# export a bed file to use with bedtools complement:
head(coordinates)
bedinput<-coordinates[,c(2,3,4)]
bedinput$Start<-bedinput$Start-1
bedinput[order(bedinput[,1],bedinput[,2]),]->bedinput
write.table(bedinput,file="gene_coordinates_ENSEMBL_OryCun2_18April20.bed",col.names=F,row.names=F,quote=F,sep="\t")
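# The complement step itself runs outside R (shown for illustration, not part of
# the original script); the chromosome-sizes file name below is an assumption:
# bedtools complement -i gene_coordinates_ENSEMBL_OryCun2_18April20.bed \
#   -g OryCun2_chrom_sizes.txt > rabbit_intergenic.bed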
# readtable
library(dplyr)
intergenic<-read.table("rabbit_intergenic.bed",header=F)
colnames(intergenic)<-c("chr","start","end")
#format bed file: add (+) 1000 bp to start and subtract (-) 1000 bp from the end
m <- mutate(intergenic, stert_n = start + 1000, end_n = end - 1000)
# extract regions >=1000bp
subset(m,m$end_n-m$stert_n>=1000)->m_1000
write.table(m_1000,file="rabbit_intergenic_larger1Kb_1kbfromgenes.bed",col.names=F,row.names=F,quote=F,sep="\t")
| /demographic_inference_of_divergence/input_file.R | no_license | MafaldaSFerreira/wtjr_camouflage | R | false | false | 2,101 | r |
coordinates<-read.table("gene_coordinates_ENSEMBL_OryCun2_18April20.txt",skip = 1)
colnames(coordinates)<-c("Gene_ID","Chromosome","Start","End")
# keep only autosomes 1-21 (drop scaffolds, chromosome X and MT)
coordinates<-subset(coordinates,coordinates$Chromosome %in% 1:21)
# export a bed file to use with bedtools complement:
head(coordinates)
bedinput<-coordinates[,c(2,3,4)]
bedinput$Start<-bedinput$Start-1
bedinput[order(bedinput[,1],bedinput[,2]),]->bedinput
write.table(bedinput,file="gene_coordinates_ENSEMBL_OryCun2_18April20.bed",col.names=F,row.names=F,quote=F,sep="\t")
# readtable
library(dplyr)
intergenic<-read.table("rabbit_intergenic.bed",header=F)
colnames(intergenic)<-c("chr","start","end")
#format bed file: add (+) 1000 bp to start and subtract (-) 1000 bp from the end
m <- mutate(intergenic, stert_n = start + 1000, end_n = end - 1000)
# extract regions >=1000bp
subset(m,m$end_n-m$stert_n>=1000)->m_1000
write.table(m_1000,file="rabbit_intergenic_larger1Kb_1kbfromgenes.bed",col.names=F,row.names=F,quote=F,sep="\t")
|
#' Required script arguments:
#' 1) A string representing the name of a setting file.
#' 2) An integer denoting which setting in the setting file is used.
#' In this script, a Simulator object is created given the id of a setting
#' defined in the setting file. Then a method of the object named simulate
#' is called and the simulation is run.
#' For the implementation of the method of the Simulator class, see Simulator.R
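#' Example invocation (illustrative; the setting file name and id below are
#' assumptions, not values from the repository):
#'   Rscript main.R pilot_settings.csv 3
#' which makes args[1] = "pilot_settings.csv" and args[2] = 3 in main() below.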
library(mlegp)
set.seed(2018)
main <- function(){
data_path = "../data"
output_path = "../output"
# Change the working directory to src, so the source code is visible.
setwd("./src")
source("Simulator.R")
# Parse the argument passed from the command line.
args = commandArgs(TRUE)
setting_file = args[1]
setting_id = as.integer(args[2])
# Create a directory for storing the results for running the settings
# defined the setting_file.
output_path = paste(output_path, sub("\\..*$", '', setting_file), sep = "/")
dir.create(output_path, showWarnings = FALSE)
# Read in the configuration file and slice one setting.
settings = read.csv(paste("../../out_data/",
setting_file, sep = ""),
stringsAsFactors = FALSE)
s = settings[setting_id, , drop = TRUE]
# Parse the setting into a list.
setting_ = list("random_seed" = s[["random_seed"]],
"exploration" = s[["exploration"]],
"anti_batch" = s[["anti_batch"]],
"start_size" = s[["start_size"]],
"add" = s[["add"]],
"data" = s[["data"]],
"noise" = s[["noise"]],
"iter_num" = s[["iter_num"]],
"method" = s[["method"]],
"num_of_features"=s[["num_of_features"]])
# Create the path for the input data.
file_path = file.path(data_path, paste(setting_[["data"]], ".csv",
sep = ""))
# Create an object of the Simulator class.
simulator = Simulator(file_path, setting_)
# Load the support data for expert sampling.
dists = read.csv(file.path(data_path, "pairwise_similarity_bio.csv"), row.names=1)
simulator$dists = dists
# Run the simulator.
simulate(simulator)
}
main()
| /R/main.R | permissive | xkw2020/OPEX | R | false | false | 2,351 | r | #' Required script arguments:
# 1) A string representing the name of a setting file
#' 2) An integer denoting which setting in the setting file is used.
#' In this script, a Simulator object is created given the id of a setting
#' defined in the setting file. Then a method of the object named simulate
#' is called and the simulation is run.
#' For the implemenation of the method of the Simulator class, see Simulator.R
library(mlegp)
set.seed(2018)
main <- function(){
data_path = "../data"
output_path = "../output"
# Change the working directory to src, so the source code is visible.
setwd("./src")
source("Simulator.R")
# Parse the argument passed from the command line.
args = commandArgs(TRUE)
setting_file = args[1]
setting_id = as.integer(args[2])
# Create a directory for storing the results for running the settings
# defined the setting_file.
output_path = paste(output_path, sub("\\..*$", '', setting_file), sep = "/")
dir.create(output_path, showWarnings = FALSE)
# Read in the configuration file and slice one setting.
settings = read.csv(paste("../../out_data/",
setting_file, sep = ""),
stringsAsFactors = FALSE)
s = settings[setting_id, , drop = TRUE]
# Parse the setting into a list.
setting_ = list("random_seed" = s[["random_seed"]],
"exploration" = s[["exploration"]],
"anti_batch" = s[["anti_batch"]],
"start_size" = s[["start_size"]],
"add" = s[["add"]],
"data" = s[["data"]],
"noise" = s[["noise"]],
"iter_num" = s[["iter_num"]],
"method" = s[["method"]],
"num_of_features"=s[["num_of_features"]])
# Create the path for the input data.
file_path = file.path(data_path, paste(setting_[["data"]], ".csv",
sep = ""))
# Create an object of the Simulator class.
simulator = Simulator(file_path, setting_)
# Load the support data for expert sampling.
dists = read.csv(file.path(data_path, "pairwise_similarity_bio.csv"), row.names=1)
simulator$dists = dists
# Run the simulator.
simulate(simulator)
}
main()
|
# compute the score of a network.
network.score = function(x, data, type = NULL, ..., by.node = FALSE, debug = FALSE) {
# check x's class.
check.bn(x)
# the original data set is needed.
check.data(data)
# check the network against the data.
check.bn.vs.data(x, data)
# check debug and by.node.
check.logical(by.node)
check.logical(debug)
# no score if the graph is partially directed.
if (is.pdag(x$arcs, names(x$nodes)))
stop("the graph is only partially directed.")
# check the score label.
type = check.score(type, data)
# check whether the network is valid for the method.
check.arcs.against.assumptions(x$arcs, data, type)
# expand and sanitize score-specific arguments.
extra.args = check.score.args(score = type, network = x,
data = data, extra.args = list(...), learning = FALSE)
# check that the score is decomposable when returning node contributions.
if (by.node && !is.score.decomposable(type, extra.args))
stop("the score is not decomposable, node terms are not defined.")
# compute the node contributions to the network score.
local = per.node.score(network = x, data = data, score = type,
targets = names(x$nodes), extra.args = extra.args, debug = debug)
if (by.node)
return(local)
else
return(sum(local))
}#NETWORK.SCORE
# AIC method for class 'bn', an alias of score(..., type = "aic")
AIC.bn = function(object, data, ..., k = 1) {
# check which type of data we are dealing with.
type = data.type(data)
# argument sanitization is performed in the score() function.
if (type %in% discrete.data.types)
network.score(object, data = data, type = "aic", k = k, ...)
else if (type == "continuous")
network.score(object, data = data, type = "aic-g", k = k, ...)
else if (type == "mixed-cg")
network.score(object, data = data, type = "aic-cg", k = k, ...)
}#AIC.BN
# BIC method for class 'bn', an alias of score(..., type = "bic")
BIC.bn = function(object, data, ...) {
# check which type of data we are dealing with.
type = data.type(data)
# argument sanitization is performed in the score() function.
if (type %in% discrete.data.types)
network.score(object, data = data, type = "bic", ...)
else if (type == "continuous")
network.score(object, data = data, type = "bic-g", ...)
else if (type == "mixed-cg")
network.score(object, data = data, type = "bic-cg", ...)
}#BIC.BN
# logLik method for class 'bn', an alias of score(..., type = "loglik")
logLik.bn = function(object, data, ...) {
# check which type of data we are dealing with.
type = data.type(data)
# argument sanitization is performed in the score() function.
if (type %in% discrete.data.types)
network.score(x = object, data = data, type = "loglik", ...)
else if (type == "continuous")
network.score(x = object, data = data, type = "loglik-g", ...)
else if (type == "mixed-cg")
network.score(x = object, data = data, type = "loglik-cg", ...)
}#LOGLIK.BN
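# usage sketch (not part of the package code): the generics above simply forward
# to score() with a type matched to the data; wrapped in if (FALSE) so nothing
# runs when the file is sourced. learning.test is a discrete data set shipped
# with bnlearn.
if (FALSE) {
  dag = hc(learning.test)
  all.equal(BIC(dag, learning.test), score(dag, learning.test, type = "bic"))
  all.equal(AIC(dag, learning.test), score(dag, learning.test, type = "aic"))
}#USAGE.SKETCH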
# estimate a reasonable guess at the best imaginary sample size.
alpha.star = function(x, data, debug = FALSE) {
# check x's class.
check.bn(x)
# the original data set is needed.
check.data(data, allowed.types = discrete.data.types)
# check the network against the data.
check.bn.vs.data(x, data)
# check debug.
check.logical(debug)
# no score if the graph is partially directed.
if (is.pdag(x$arcs, names(x$nodes)))
stop("the graph is only partially directed.")
alpha.star.backend(x = x, data = data, debug = debug)
}#ALPHA.STAR
# compute the Bayes factor of two networks.
BF = function(num, den, data, score, ..., log = TRUE) {
# check the two networks, individually and against each other.
check.bn(num)
check.bn(den)
match.bn(num, den)
nodes = names(num$nodes)
# check the data.
data.info = check.data(data)
# check the networks against the data.
check.bn.vs.data(num, data)
check.bn.vs.data(den, data)
# check the log argument.
check.logical(log)
# no score if at least one of the networks is partially directed.
if (is.pdag(num$arcs, names(num$nodes)))
stop("the graph in the numerator on the BF is only partially directed.")
if (is.pdag(den$arcs, names(den$nodes)))
stop("the graph in the denominator on the BF is only partially directed.")
# make sure the score function is suitable for computing a Bayes factor.
if (missing(score)) {
if (data.info$type %in% discrete.data.types)
score = "bde"
else if (data.info$type %in% continuous.data.types)
score = "bge"
else if (data.info$type %in% mixed.data.types)
score = "bic-cg"
}#THEN
else {
score = check.score(score, data,
allowed = c(available.discrete.bayesian.scores,
available.continuous.bayesian.scores,
grep("bic", available.scores, value = TRUE)))
}#ELSE
# check whether the networks are valid for the score.
check.arcs.against.assumptions(num$arcs, data, score)
check.arcs.against.assumptions(den$arcs, data, score)
# expand and sanitize score-specific arguments.
extra.args = check.score.args(score = score, network = num,
data = data, extra.args = list(...), learning = FALSE)
# if a graph prior is used, this in not a Bayes factor any longer.
if (!is.null(extra.args$prior) && extra.args$prior != "uniform")
warning("using a non-uniform graph prior means this is not a Bayes factor.")
# if the score is decomposable, compute the Bayes factor using only those
# local distributions that differ between the two networks; otherwise
# compute it on the whole network.
if (is.score.decomposable(score, extra.args)) {
different =
sapply(nodes, function(n) {
!setequal(num$nodes[[n]]$parents, den$nodes[[n]]$parents)
})
different = nodes[different]
}#THEN
else {
different = nodes
}#ELSE
logBF.num = per.node.score(num, data = data, score = score,
targets = different, extra.args = extra.args)
logBF.den = per.node.score(den, data = data, score = score,
targets = different, extra.args = extra.args)
# compute the Bayes factor on the log-scale, and taking the difference between
# local distributions before summing to minimise numeric problems.
logBF = sum(logBF.num - logBF.den)
return(ifelse(log, logBF, exp(logBF)))
}#BF
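# usage sketch (not part of the package code): comparing a learned discrete
# network against the empty graph with BF(); kept inside if (FALSE) so the file
# can still be sourced as-is.
if (FALSE) {
  dag1 = hc(learning.test)
  dag0 = empty.graph(names(learning.test))
  BF(dag1, dag0, learning.test)              # log-Bayes factor, BDeu by default
  BF(dag1, dag0, learning.test, log = FALSE)
}#USAGE.SKETCH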
| /R/frontend-score.R | no_license | cran/bnlearn | R | false | false | 6,413 | r |
# compute the score of a network.
network.score = function(x, data, type = NULL, ..., by.node = FALSE, debug = FALSE) {
# check x's class.
check.bn(x)
# the original data set is needed.
check.data(data)
# check the network against the data.
check.bn.vs.data(x, data)
# check debug and by.node.
check.logical(by.node)
check.logical(debug)
# no score if the graph is partially directed.
if (is.pdag(x$arcs, names(x$nodes)))
stop("the graph is only partially directed.")
# check the score label.
type = check.score(type, data)
# check whether the network is valid for the method.
check.arcs.against.assumptions(x$arcs, data, type)
# expand and sanitize score-specific arguments.
extra.args = check.score.args(score = type, network = x,
data = data, extra.args = list(...), learning = FALSE)
# check that the score is decomposable when returning node contributions.
if (by.node && !is.score.decomposable(type, extra.args))
stop("the score is not decomposable, node terms are not defined.")
# compute the node contributions to the network score.
local = per.node.score(network = x, data = data, score = type,
targets = names(x$nodes), extra.args = extra.args, debug = debug)
if (by.node)
return(local)
else
return(sum(local))
}#NETWORK.SCORE
# AIC method for class 'bn', an alias of score(..., type = "aic")
AIC.bn = function(object, data, ..., k = 1) {
# check which type of data we are dealing with.
type = data.type(data)
# argument sanitization is performed in the score() function.
if (type %in% discrete.data.types)
network.score(object, data = data, type = "aic", k = k, ...)
else if (type == "continuous")
network.score(object, data = data, type = "aic-g", k = k, ...)
else if (type == "mixed-cg")
network.score(object, data = data, type = "aic-cg", k = k, ...)
}#AIC.BN
# BIC method for class 'bn', an alias of score(..., type = "bic")
BIC.bn = function(object, data, ...) {
# check which type of data we are dealing with.
type = data.type(data)
# argument sanitization is performed in the score() function.
if (type %in% discrete.data.types)
network.score(object, data = data, type = "bic", ...)
else if (type == "continuous")
network.score(object, data = data, type = "bic-g", ...)
else if (type == "mixed-cg")
network.score(object, data = data, type = "bic-cg", ...)
}#BIC.BN
# logLik method for class 'bn', an alias of score(..., type = "loglik")
logLik.bn = function(object, data, ...) {
# check which type of data we are dealing with.
type = data.type(data)
# argument sanitization is performed in the score() function.
if (type %in% discrete.data.types)
network.score(x = object, data = data, type = "loglik", ...)
else if (type == "continuous")
network.score(x = object, data = data, type = "loglik-g", ...)
else if (type == "mixed-cg")
network.score(x = object, data = data, type = "loglik-cg", ...)
}#LOGLIK.BN
# estimate a reasonable guess at the best imaginary sample size.
alpha.star = function(x, data, debug = FALSE) {
# check x's class.
check.bn(x)
# the original data set is needed.
check.data(data, allowed.types = discrete.data.types)
# check the network against the data.
check.bn.vs.data(x, data)
# check debug.
check.logical(debug)
# no score if the graph is partially directed.
if (is.pdag(x$arcs, names(x$nodes)))
stop("the graph is only partially directed.")
alpha.star.backend(x = x, data = data, debug = debug)
}#ALPHA.STAR
# compute the Bayes factor of two networks.
BF = function(num, den, data, score, ..., log = TRUE) {
# check the two networks, individually and against each other.
check.bn(num)
check.bn(den)
match.bn(num, den)
nodes = names(num$nodes)
# check the data.
data.info = check.data(data)
# check the networks against the data.
check.bn.vs.data(num, data)
check.bn.vs.data(den, data)
# check the log argument.
check.logical(log)
# no score if at least one of the networks is partially directed.
if (is.pdag(num$arcs, names(num$nodes)))
stop("the graph in the numerator on the BF is only partially directed.")
if (is.pdag(den$arcs, names(den$nodes)))
stop("the graph in the denominator on the BF is only partially directed.")
# make sure the score function is suitable for computing a Bayes factor.
if (missing(score)) {
if (data.info$type %in% discrete.data.types)
score = "bde"
else if (data.info$type %in% continuous.data.types)
score = "bge"
else if (data.info$type %in% mixed.data.types)
score = "bic-cg"
}#THEN
else {
score = check.score(score, data,
allowed = c(available.discrete.bayesian.scores,
available.continuous.bayesian.scores,
grep("bic", available.scores, value = TRUE)))
}#ELSE
# check whether the networks are valid for the score.
check.arcs.against.assumptions(num$arcs, data, score)
check.arcs.against.assumptions(den$arcs, data, score)
# expand and sanitize score-specific arguments.
extra.args = check.score.args(score = score, network = num,
data = data, extra.args = list(...), learning = FALSE)
  # if a graph prior is used, this is not a Bayes factor any longer.
if (!is.null(extra.args$prior) && extra.args$prior != "uniform")
warning("using a non-uniform graph prior means this is not a Bayes factor.")
# if the score is decomposable, compute the Bayes factor using only those
# local distributions that differ between the two networks; otherwise
# compute it on the whole network.
if (is.score.decomposable(score, extra.args)) {
different =
sapply(nodes, function(n) {
!setequal(num$nodes[[n]]$parents, den$nodes[[n]]$parents)
})
different = nodes[different]
}#THEN
else {
different = nodes
}#ELSE
logBF.num = per.node.score(num, data = data, score = score,
targets = different, extra.args = extra.args)
logBF.den = per.node.score(den, data = data, score = score,
targets = different, extra.args = extra.args)
  # compute the Bayes factor on the log-scale, taking the difference between
# local distributions before summing to minimise numeric problems.
logBF = sum(logBF.num - logBF.den)
return(ifelse(log, logBF, exp(logBF)))
}#BF
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RequestApproximation.R
\name{RequestApproximation}
\alias{RequestApproximation}
\title{Request an approximation of a model using DataRobot Prime}
\usage{
RequestApproximation(project, modelId)
}
\arguments{
\item{project}{character. Either (1) a character string giving the unique alphanumeric
identifier for the project, or (2) a list containing the element projectId with this
identifier.}
\item{modelId}{character. Unique alphanumeric identifier for the model of interest.}
}
\value{
job Id
}
\description{
This function will create several rulesets that approximate the specified model.
The code used in the approximation can be downloaded to be run locally.
Currently only Python and Java downloadable code is available.
}
\details{
General workflow of creating and downloading Prime code may look like the following
(see the Examples section for a sketch of this workflow):
RequestApproximation - create several rulesets that approximate the specified model
GetRulesets - list all rulesets created for the parent model
RequestPrimeModel - create Prime model for specified ruleset (use one of the rulesets returned by
GetRulesets)
GetPrimeModelFromJobId - get PrimeModelId using JobId returned by RequestPrimeModel
CreatePrimeCode - create code for one of available Prime models
GetPrimeFileFromJobId - get PrimeFileId using JobId returned by CreatePrimeCode
DownloadPrimeCode - download specified Prime code file
}
\examples{
\dontrun{
projectId <- "59a5af20c80891534e3c2bde"
modelId <- "5996f820af07fc605e81ead4"
RequestApproximation(projectId, modelId)
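# The calls below sketch the rest of the Prime workflow described in Details.
# Argument names, returned objects, and the "Python" language choice are
# illustrative assumptions; see each function's own help page for exact usage.
rulesets <- GetRulesets(projectId, modelId)
primeJobId <- RequestPrimeModel(projectId, rulesets[[1]])
primeModel <- GetPrimeModelFromJobId(projectId, primeJobId)
codeJobId <- CreatePrimeCode(projectId, primeModel, language = "Python")
primeFile <- GetPrimeFileFromJobId(projectId, codeJobId)
DownloadPrimeCode(projectId, primeFile, "prime_code.py")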
}
}
| /man/RequestApproximation.Rd | no_license | anno526/datarobot | R | false | true | 1,584 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RequestApproximation.R
\name{RequestApproximation}
\alias{RequestApproximation}
\title{Request an approximation of a model using DataRobot Prime}
\usage{
RequestApproximation(project, modelId)
}
\arguments{
\item{project}{character. Either (1) a character string giving the unique alphanumeric
identifier for the project, or (2) a list containing the element projectId with this
identifier.}
\item{modelId}{character. Unique alphanumeric identifier for the model of interest.}
}
\value{
job Id
}
\description{
This function will create several rulesets that approximate the specified model.
The code used in the approximation can be downloaded to be run locally.
Currently only Python and Java downloadable code is available
}
\details{
General workflow of creating and downloading Prime code may look like the following:
RequestApproximation - create several rulesets that approximate the specified model
GetRulesets - list all rulesets created for the parent model
RequestPrimeModel - create Prime model for specified ruleset (use one of the rulesets returned by
GetRulesets)
GetPrimeModelFromJobId - get PrimeModelId using JobId returned by RequestPrimeModel
CreatePrimeCode - create code for one of available Prime models
GetPrimeFileFromJobId - get PrimeFileId using JobId returned by CreatePrimeCode
DownloadPrimeCode - download specified Prime code file
}
\examples{
\dontrun{
projectId <- "59a5af20c80891534e3c2bde"
modelId <- "5996f820af07fc605e81ead4"
RequestApproximation(projectId, modelId)
}
}
|
# This script plots the total phosphorus (TP) trends following Deeds East Pond WBMP
# DJW 12/21/20
# Load packages
library(readxl)
library(dplyr)
library(lubridate)
library(Kendall)
# Load total phosphorus data from LSM
filepath <- "/Users/djw56/Documents/Research/7LA-Colby/Belgrade Lakes/Lakes/Belgrades/Historical/"
filename <- paste(filepath,"MaineLakes_Phosphorus.xlsx",sep="")
dat <- read_excel(filename, sheet = 2)
# MIDAS is identifier for LSM data
lake <- 'Cross Lake'
MIDAS1 <- 1674
Pall <- dat %>% filter(MIDAS == MIDAS1)
# Find epicore data
PC <- Pall %>% filter(`Sample Type` == 'C')
# Make a month column
a <- ymd(PC$Date)
PC$Month <- month(a)
PC$Year <- year(a)
npts <- PC %>% group_by(Month) %>% summarize(npts = n())
# Use May - October data
P <- PC %>% filter(Month >= 5 & Month <= 10)
# Average data from the same month in the same year
Pavg <- aggregate(`Total P`~Year+Month+Station, mean, data=P)
# Compute year average by station
Pyr1 <- Pavg %>% filter(Station == 1) %>% group_by(Year) %>% summarise(yrmean = mean(`Total P`))
# Do Mann Kendall trend analysis on whole time series
mk1<-MannKendall(Pyr1$yrmean)
# Do Mann Kendall trend analysis on last 10 years
Pyr1_10 <- Pyr1 %>% filter(Year >= 2011)
mk1_10 <- MannKendall(Pyr1_10$yrmean)
# Plot data and trendline for each
yhigh<-max(Pavg$`Total P`)+3
splot <- Pyr1
st <- '1'
mk <- mk1
png(file="CL1_P_all.png", units="in",width=5, height=5.5, res=400)
smin <- round(min(splot$yrmean),1)
smean <- round(mean(splot$yrmean),1)
smed <-round(median(splot$yrmean),1)
smax <- round(max(splot$yrmean),1)
plot(splot$Year,splot$yrmean, las=1, ylim=c(0,yhigh),ylab="Average May-Oct TP (ppb)",xlab="Year")
lines(lowess(splot$Year,splot$yrmean),col=4)
titles<-lake
titles2<-MIDAS1
titles3<-st
mtext(text=paste(titles, "\n", " (MIDAS ",titles2," - Station ",titles3,")",sep = ""), side=3, line=1.9, font=2)
mtext(text=sprintf("tau=%1.3f",(mk$tau[1])),adj=0, side=3, line=0.6, cex=.8)
mtext(text=sprintf("p=%1.3f",(mk$sl[1])),adj=1, side=3,line=0.6, cex=.8)
mtext(text=sprintf(" min=%1.1f",smin),adj=0, side=3,line=-1, cex=.8)
mtext(text=sprintf(" mean=%1.1f",smean),adj=0, side=3,line=-2, cex=.8)
mtext(text=sprintf(" med=%1.1f",smed),adj=0, side=3,line=-3, cex=.8)
mtext(text=sprintf(" max=%1.1f",smax),adj=0, side=3,line=-4, cex=.8)
dev.off()
splot <- Pyr1_10
st <- '1'
mk <- mk1_10
png(file="CL1_P_10.png", units="in",width=5, height=5.5, res=400)
smin <- round(min(splot$yrmean),1)
smean <- round(mean(splot$yrmean),1)
smed <-round(median(splot$yrmean),1)
smax <- round(max(splot$yrmean),1)
plot(splot$Year,splot$yrmean, las=1, ylim=c(0,yhigh),ylab="Average May-Oct TP (ppb)",xlab="Year")
lines(lowess(splot$Year,splot$yrmean),col=4)
titles<-lake
titles2<-MIDAS1
titles3<-st
mtext(text=paste(titles, "\n", " (MIDAS ",titles2," - Station ",titles3,")",sep = ""), side=3, line=1.9, font=2)
mtext(text=sprintf("tau=%1.3f",(mk$tau[1])),adj=0, side=3, line=0.6, cex=.8)
mtext(text=sprintf("p=%1.3f",(mk$sl[1])),adj=1, side=3,line=0.6, cex=.8)
mtext(text=sprintf(" min=%1.1f",smin),adj=0, side=3,line=-1, cex=.8)
mtext(text=sprintf(" mean=%1.1f",smean),adj=0, side=3,line=-2, cex=.8)
mtext(text=sprintf(" med=%1.1f",smed),adj=0, side=3,line=-3, cex=.8)
mtext(text=sprintf(" max=%1.1f",smax),adj=0, side=3,line=-4, cex=.8)
dev.off()
| /Historical-R/Cross Lake/P_Trend_WBMP.R | no_license | djwain/7LA-Colby-WQI | R | false | false | 3,294 | r | # This script plots the secchi trends following Deeds East Pond WBMP
# DJW 12/21/20
# Load packages
library(readxl)
library(dplyr)
library(lubridate)
library(Kendall)
# Load total phosphorus data from LSM
filepath <- "/Users/djw56/Documents/Research/7LA-Colby/Belgrade Lakes/Lakes/Belgrades/Historical/"
filename <- paste(filepath,"MaineLakes_Phosphorus.xlsx",sep="")
dat <- read_excel(filename, sheet = 2)
# MIDAS is identifier for LSM data
lake <- 'Cross Lake'
MIDAS1 <- 1674
Pall <- dat %>% filter(MIDAS == MIDAS1)
# Find epicore data
PC <- Pall %>% filter(`Sample Type` == 'C')
# Make a month column
a <- ymd(PC$Date)
PC$Month <- month(a)
PC$Year <- year(a)
npts <- PC %>% group_by(Month) %>% summarize(npts = n())
# Use May - October data
P <- PC %>% filter(Month >= 5 & Month <= 10)
# Average data from the same month in the same year
Pavg <- aggregate(`Total P`~Year+Month+Station, mean, data=P)
# Compute year average by station
Pyr1 <- Pavg %>% filter(Station == 1) %>% group_by(Year) %>% summarise(yrmean = mean(`Total P`))
# Do Mann Kendall trend analysis on whole time series
mk1<-MannKendall(Pyr1$yrmean)
# Do Mann Kendall trend analysis on last 10 years
Pyr1_10 <- Pyr1 %>% filter(Year >= 2011)
mk1_10 <- MannKendall(Pyr1_10$yrmean)
# Plot data and trendline for each
yhigh<-max(Pavg$`Total P`)+3
splot <- Pyr1
st <- '1'
mk <- mk1
png(file="CL1_P_all.png", units="in",width=5, height=5.5, res=400)
smin <- round(min(splot$yrmean),1)
smean <- round(mean(splot$yrmean),1)
smed <-round(median(splot$yrmean),1)
smax <- round(max(splot$yrmean),1)
plot(splot$Year,splot$yrmean, las=1, ylim=c(0,yhigh),ylab="Average May-Oct TP (ppb)",xlab="Year")
lines(lowess(splot$Year,splot$yrmean),col=4)
titles<-lake
titles2<-MIDAS1
titles3<-st
mtext(text=paste(titles, "\n", " (MIDAS ",titles2," - Station ",titles3,")",sep = ""), side=3, line=1.9, font=2)
mtext(text=sprintf("tau=%1.3f",(mk$tau[1])),adj=0, side=3, line=0.6, cex=.8)
mtext(text=sprintf("p=%1.3f",(mk$sl[1])),adj=1, side=3,line=0.6, cex=.8)
mtext(text=sprintf(" min=%1.1f",smin),adj=0, side=3,line=-1, cex=.8)
mtext(text=sprintf(" mean=%1.1f",smean),adj=0, side=3,line=-2, cex=.8)
mtext(text=sprintf(" med=%1.1f",smed),adj=0, side=3,line=-3, cex=.8)
mtext(text=sprintf(" max=%1.1f",smax),adj=0, side=3,line=-4, cex=.8)
dev.off()
splot <- Pyr1_10
st <- '1'
mk <- mk1_10
png(file="CL1_P_10.png", units="in",width=5, height=5.5, res=400)
smin <- round(min(splot$yrmean),1)
smean <- round(mean(splot$yrmean),1)
smed <-round(median(splot$yrmean),1)
smax <- round(max(splot$yrmean),1)
plot(splot$Year,splot$yrmean, las=1, ylim=c(0,yhigh),ylab="Average May-Oct TP (ppb)",xlab="Year")
lines(lowess(splot$Year,splot$yrmean),col=4)
titles<-lake
titles2<-MIDAS1
titles3<-st
mtext(text=paste(titles, "\n", " (MIDAS ",titles2," - Station ",titles3,")",sep = ""), side=3, line=1.9, font=2)
mtext(text=sprintf("tau=%1.3f",(mk$tau[1])),adj=0, side=3, line=0.6, cex=.8)
mtext(text=sprintf("p=%1.3f",(mk$sl[1])),adj=1, side=3,line=0.6, cex=.8)
mtext(text=sprintf(" min=%1.1f",smin),adj=0, side=3,line=-1, cex=.8)
mtext(text=sprintf(" mean=%1.1f",smean),adj=0, side=3,line=-2, cex=.8)
mtext(text=sprintf(" med=%1.1f",smed),adj=0, side=3,line=-3, cex=.8)
mtext(text=sprintf(" max=%1.1f",smax),adj=0, side=3,line=-4, cex=.8)
dev.off()
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/auto_detects.R
\name{auto_detect_sm_parents}
\alias{auto_detect_sm_parents}
\title{Detect select multiple parent columns}
\usage{
auto_detect_sm_parents(df, sm_sep = "/")
}
\arguments{
\item{df}{a survey object or dataframe}
\item{sm_sep}{select multiple parent child separator. This is specific for XLSForm data (default = /).
If using read_csv to read in data, the separator will most likely be '/', whereas if using read.csv it will likely be '.'}
}
\value{
a list of select multiple parent columns in data set.
}
\description{
`auto_detect_sm_parents` is meant to detect select multiple parent columns in a way that does
not rely on the XLSForm as the input.
}
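\examples{
\dontrun{
# A minimal sketch: 'df' is a hypothetical data frame in which the select
# multiple question "water_source" has "/"-separated child columns.
df <- data.frame(
  "water_source/piped" = c(1, 0),
  "water_source/well" = c(0, 1),
  check.names = FALSE
)
auto_detect_sm_parents(df, sm_sep = "/")
}
}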
| /man/auto_detect_sm_parents.Rd | permissive | zackarno/butteR | R | false | true | 741 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/auto_detects.R
\name{auto_detect_sm_parents}
\alias{auto_detect_sm_parents}
\title{Detect select multiple parent columns}
\usage{
auto_detect_sm_parents(df, sm_sep = "/")
}
\arguments{
\item{df}{a survey object or dataframe}
\item{sm_sep}{select multiple parent child separator. This is specific for XLSForm data (default = /).
If using read_csv to read in data, the separator will most likely be '/', whereas if using read.csv it will likely be '.'}
}
\value{
a list of select multiple parent columns in data set.
}
\description{
`auto_detect_sm_parents` is meant to detect select multiple parent columns in a way that does
not rely on the XLSForm as the input.
}
|
# Chi-squared test of independence: build a contingency table from two
# categorical variables, then test for association between them
cuadroatr <- table(columna1, columna2)
chisq.test(cuadroatr)
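# A self-contained sketch of the same test using a built-in dataset;
# the columns chosen (cyl and am from mtcars) are illustrative only.
datos <- table(mtcars$cyl, mtcars$am)
chisq.test(datos)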
| /Pruebas Paramétricas/Prueba Independencia.R | no_license | KevinMeneses/ActividadesFisicasCotidianas | R | false | false | 64 | r | cuadroatr<-table(columna1, columna2, etc)
chisq.test(cuadroatr)
|
# -------------------------------------
# exploratory analysis file for the
# 2011 wave of Ethiopia data
# -------------------------------------
# filepath
if(Sys.info()["user"] == "Tomas"){
filePath <- "C:/Users/Tomas/Documents/LEI/"
} else {
filePath <- "N:/Internationaal Beleid (IB)/Projecten/2285000066 Africa Maize Yield Gap/SurveyData/ETH/2011/Data"
}
# packages
library(reshape2)
library(dplyr)
library(ggplot2) # needed for the boxplots below
# source in the data
source(file.path(filePath, "pro-gap/ETH/ETH_2013PP.R"))
# first make some counts of each household
# and type of household represented in the data
# this can be checked against the codebook
count1 <- group_by(ETH2013, REGNAME, ZONENAME, type) %>% summarise(n=n()) %>%
dcast(REGNAME + ZONENAME ~ type, fill=0)
names(count1)[3:4] <- paste(names(count1)[3:4], "HH", sep=" ")
count2 <- select(ETH2013, -household_id, -household_id) %>% unique %>%
group_by(REGNAME, ZONENAME, type) %>% summarise(n=n()) %>%
dcast(REGNAME + ZONENAME ~ type, fill=0)
names(count2)[3:4] <- paste(names(count2)[3:4], "EA", sep=" ")
count <- left_join(count1, count2); rm(count1, count2)
count <- count[c(1, 2, 5, 3, 6, 4)]
# focus on maize producing farmers
maize <- filter(ETH2013, status=="HEAD", crop_code==2); rm(ETH2013)
# regions and main harvest month
table(maize$REGNAME, maize$harv_month1)
# maize produced per region
ggplot( data=maize, aes( x=REGNAME, y=crop_qty_harv ) ) + geom_boxplot( ) +
theme(axis.text.x=element_text(angle=90, hjust=1)) +
ylab("maize harvested per region (kg)") + xlab(" ")
ggplot( data=maize, aes( x=REGNAME, y=log(crop_qty_harv) ) ) + geom_boxplot( ) +
theme(axis.text.x=element_text(angle=90, hjust=1)) +
ylab("maize harvested per region (kg)") + xlab(" ")
# and a histogram
par(mfrow=c(1, 2))
with(maize, {
hist(crop_qty_harv, breaks="FD", freq=FALSE, ylab="Density")
lines(density(crop_qty_harv), lwd=2)
lines(density(crop_qty_harv, adjust=0.5), lwd=1)
rug(crop_qty_harv)
box()
})
with(maize, {
hist(log(crop_qty_harv), breaks="FD", freq=FALSE, ylab="Density")
lines(density(log(crop_qty_harv)), lwd=2)
lines(density(log(crop_qty_harv), adjust=0.5), lwd=1)
rug(log(crop_qty_harv))
box()
})
# have a similar look at areas, using imputed values
par(mfrow=c(1, 2))
with(maize[!is.na(maize$area_gps_mi50),], {
hist(area_gps_mi50, breaks="FD", freq=FALSE, ylab="Density")
lines(density(area_gps_mi50), lwd=2)
lines(density(area_gps_mi50, adjust=0.5), lwd=1)
rug(area_gps_mi50)
box()
})
with(maize[!is.na(maize$area_gps_mi50),], {
hist(log(area_gps_mi50), breaks="FD", freq=FALSE, ylab="Density")
lines(density(log(area_gps_mi50)), lwd=2)
lines(density(log(area_gps_mi50), adjust=0.5), lwd=1)
rug(log(area_gps_mi50))
box()
})
# crucial variable is yield.
yld <- maize$crop_qty_harv/maize$area_gps_mi50
yld <- yld[!is.na(yld)]
# level -> likely outlying values
hist(yld, breaks="FD", freq=FALSE, ylab="Density")
lines(density(yld), lwd=2)
lines(density(yld, adjust=0.5), lwd=1)
rug(yld)
box()
# log
hist(log(yld), breaks="FD", freq=FALSE, ylab="Density")
lines(density(log(yld)), lwd=2)
lines(density(log(yld), adjust=0.5), lwd=1)
rug(log(yld))
box()
# alternatively we can use the farmers' self-reported
# harvested area to scale the yield
yld2 <- maize$crop_qty_harv/maize$area_gps_mi50*maize$harv_area/100
yld2 <- yld2[!is.na(yld2)]
# level -> likely outlying values
hist(yld2, breaks="FD", freq=FALSE, ylab="Density")
lines(density(yld2), lwd=2)
lines(density(yld2, adjust=0.5), lwd=1)
rug(yld2)
box()
# log
hist(log(yld2), breaks="FD", freq=FALSE, ylab="Density")
lines(density(log(yld2)), lwd=2)
lines(density(log(yld2), adjust=0.5), lwd=1)
rug(log(yld2))
box()
# second important variable is nitrogen
par(mfrow=c(1, 2))
with(maize[!maize$N==0,], {
hist(N, breaks="FD", freq=FALSE, ylab="Density")
lines(density(N), lwd=2)
lines(density(N, adjust=0.5), lwd=1)
rug(N)
box()
})
with(maize[!maize$N==0,], {
hist(log(N), breaks="FD", freq=FALSE, ylab="Density")
lines(density(log(N)), lwd=2)
lines(density(log(N), adjust=0.5), lwd=1)
rug(log(N))
box()
})
# -------------------------------------
# multivariate analysis
# -------------------------------------
# focus on yield and nitrogen
| /Scripts/EDA_ETH.R | no_license | vincentlinderhof/NutritionETH | R | false | false | 4,213 | r | # -------------------------------------
# exploratory analysis file for the
# 2011 wave of Ethiopia data
# -------------------------------------
# filepath
if(Sys.info()["user"] == "Tomas"){
filePath <- "C:/Users/Tomas/Documents/LEI/"
} else {
filePath <- "N:/Internationaal Beleid (IB)/Projecten/2285000066 Africa Maize Yield Gap/SurveyData/ETH/2011/Data"
}
# packages
library(reshape2)
library(dplyr)
library(ggplot2) # needed for the boxplots below
# source in the data
source(file.path(filePath, "pro-gap/ETH/ETH_2013PP.R"))
# first make some counts of each household
# and type of household represented in the data
# this can be checked against the codebook
count1 <- group_by(ETH2013, REGNAME, ZONENAME, type) %>% summarise(n=n()) %>%
dcast(REGNAME + ZONENAME ~ type, fill=0)
names(count1)[3:4] <- paste(names(count1)[3:4], "HH", sep=" ")
count2 <- select(ETH2013, -household_id, -household_id) %>% unique %>%
group_by(REGNAME, ZONENAME, type) %>% summarise(n=n()) %>%
dcast(REGNAME + ZONENAME ~ type, fill=0)
names(count2)[3:4] <- paste(names(count2)[3:4], "EA", sep=" ")
count <- left_join(count1, count2); rm(count1, count2)
count <- count[c(1, 2, 5, 3, 6, 4)]
# focus on maize producing farmers
maize <- filter(ETH2013, status=="HEAD", crop_code==2); rm(ETH2013)
# regions and main harvest month
table(maize$REGNAME, maize$harv_month1)
# maize produced per region
ggplot( data=maize, aes( x=REGNAME, y=crop_qty_harv ) ) + geom_boxplot( ) +
theme(axis.text.x=element_text(angle=90, hjust=1)) +
ylab("maize harvested per region (kg)") + xlab(" ")
ggplot( data=maize, aes( x=REGNAME, y=log(crop_qty_harv) ) ) + geom_boxplot( ) +
theme(axis.text.x=element_text(angle=90, hjust=1)) +
ylab("maize harvested per region (kg)") + xlab(" ")
# and a histogram
par(mfrow=c(1, 2))
with(maize, {
hist(crop_qty_harv, breaks="FD", freq=FALSE, ylab="Density")
lines(density(crop_qty_harv), lwd=2)
lines(density(crop_qty_harv, adjust=0.5), lwd=1)
rug(crop_qty_harv)
box()
})
with(maize, {
hist(log(crop_qty_harv), breaks="FD", freq=FALSE, ylab="Density")
lines(density(log(crop_qty_harv)), lwd=2)
lines(density(log(crop_qty_harv), adjust=0.5), lwd=1)
rug(log(crop_qty_harv))
box()
})
# have a similar look at areas, using imputed values
par(mfrow=c(1, 2))
with(maize[!is.na(maize$area_gps_mi50),], {
hist(area_gps_mi50, breaks="FD", freq=FALSE, ylab="Density")
lines(density(area_gps_mi50), lwd=2)
lines(density(area_gps_mi50, adjust=0.5), lwd=1)
rug(area_gps_mi50)
box()
})
with(maize[!is.na(maize$area_gps_mi50),], {
hist(log(area_gps_mi50), breaks="FD", freq=FALSE, ylab="Density")
lines(density(log(area_gps_mi50)), lwd=2)
lines(density(log(area_gps_mi50), adjust=0.5), lwd=1)
rug(log(area_gps_mi50))
box()
})
# crucial variable is yield.
yld <- maize$crop_qty_harv/maize$area_gps_mi50
yld <- yld[!is.na(yld)]
# level -> likely outlying values
hist(yld, breaks="FD", freq=FALSE, ylab="Density")
lines(density(yld), lwd=2)
lines(density(yld, adjust=0.5), lwd=1)
rug(yld)
box()
# log
hist(log(yld), breaks="FD", freq=FALSE, ylab="Density")
lines(density(log(yld)), lwd=2)
lines(density(log(yld), adjust=0.5), lwd=1)
rug(log(yld))
box()
# alternatively we can use the farmers' self-reported
# harvested area to scale the yield
yld2 <- maize$crop_qty_harv/maize$area_gps_mi50*maize$harv_area/100
yld2 <- yld2[!is.na(yld2)]
# level -> likely outlying values
hist(yld2, breaks="FD", freq=FALSE, ylab="Density")
lines(density(yld2), lwd=2)
lines(density(yld2, adjust=0.5), lwd=1)
rug(yld2)
box()
# log
hist(log(yld2), breaks="FD", freq=FALSE, ylab="Density")
lines(density(log(yld2)), lwd=2)
lines(density(log(yld2), adjust=0.5), lwd=1)
rug(log(yld2))
box()
# second important variable is nitrogen
par(mfrow=c(1, 2))
with(maize[!maize$N==0,], {
hist(N, breaks="FD", freq=FALSE, ylab="Density")
lines(density(N), lwd=2)
lines(density(N, adjust=0.5), lwd=1)
rug(N)
box()
})
with(maize[!maize$N==0,], {
hist(log(N), breaks="FD", freq=FALSE, ylab="Density")
lines(density(log(N)), lwd=2)
lines(density(log(N), adjust=0.5), lwd=1)
rug(log(N))
box()
})
# -------------------------------------
# multivariate analysis
# -------------------------------------
# focus on yield and nitrogen
|
#read the full household power consumption data set ("?" marks missing values)
PowerConsumption <- read.table("household_power_consumption.txt", header=TRUE, sep=";", na.strings="?")
#keep only the two days of interest, 1-2 February 2007
FebPower <- subset(PowerConsumption, Date == "1/2/2007" | Date =="2/2/2007")
library(lubridate)
#combine Date and Time into a single date-time column
FebPower$DateTime <- dmy_hms(paste(FebPower$Date, FebPower$Time))
#open the PNG device and lay out a 2x2 panel of plots
png(width=480, height=480, units="px", file='plot4.png')
par(mfrow=c(2,2))
with(FebPower,
plot(DateTime,Global_active_power,
ylab="Global Active Power (kilowatts)",
xlab="",
cex.axis=1,
cex.lab=1,
cex.main=1,
type="l")
)
with(FebPower,
plot(DateTime,Voltage,
ylab="Voltage",
xlab="datetime",
cex.axis=1,
cex.lab=1,
cex.main=1,
type="l")
)
with(FebPower,
plot(DateTime,Sub_metering_1,
ylab="Energy sub metering",
xlab="",
cex.axis=1,
cex.lab=1,
cex.main=1,
type="l"))
with(FebPower, points(DateTime,Sub_metering_2,
col="red",
type="l"))
with(FebPower, points(DateTime,Sub_metering_3,
col="blue",
type="l"))
legend("topright", cex=1, lwd=1, col=c("black","blue","red"), legend= c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))
with(FebPower,
plot(DateTime,Global_reactive_power,
xlab="datetime",
cex.axis=1,
cex.lab=1,
cex.main=1,
type="l")
)
dev.off() | /Documents/DwaveT/Data Science/Exploratory Data Analysis/Week 1/Project 1/plot4.R | no_license | scr9035/ExData_Plotting1 | R | false | false | 1,481 | r | PowerConsumption <- read.table("household_power_consumption.txt", header=TRUE, sep=";", na.strings="?")
FebPower <- subset(PowerConsumption, Date == "1/2/2007" | Date =="2/2/2007")
library(lubridate)
FebPower$DateTime <- dmy_hms(paste(FebPower$Date, FebPower$Time))
png(width=480, height=480, units="px", file='plot4.png')
par(mfrow=c(2,2))
with(FebPower,
plot(DateTime,Global_active_power,
ylab="Global Active Power (kilowatts)",
xlab="",
cex.axis=1,
cex.lab=1,
cex.main=1,
type="l")
)
with(FebPower,
plot(DateTime,Voltage,
ylab="Voltage",
xlab="datetime",
cex.axis=1,
cex.lab=1,
cex.main=1,
type="l")
)
with(FebPower,
plot(DateTime,Sub_metering_1,
ylab="Energy sub metering",
xlab="",
cex.axis=1,
cex.lab=1,
cex.main=1,
type="l"))
with(FebPower, points(DateTime,Sub_metering_2,
col="red",
type="l"))
with(FebPower, points(DateTime,Sub_metering_3,
col="blue",
type="l"))
legend("topright", cex=1, lwd=1, col=c("black","blue","red"), legend= c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))
with(FebPower,
plot(DateTime,Global_reactive_power,
xlab="datetime",
cex.axis=1,
cex.lab=1,
cex.main=1,
type="l")
)
dev.off() |
################################################################################
# Course: Exploratory Data Analysis #
# Author: Boris Lenz #
# Task: Programming Assignment 1, Part 3 #
# Date: 2015-04-12 #
################################################################################
# set language to English
Sys.setlocale(locale="English")
# read the source data
unzip("exdata_data_household_power_consumption.zip")
mydata <- read.table("household_power_consumption.txt", header=T, sep=";", na.strings="?")
# subset to only data from 1/2/2007 and 2/2/2007
mydata <- mydata[mydata$Date %in% c('1/2/2007', '2/2/2007'), ]
# create new column with date and time combined
mydata$DateTime <- paste(mydata$Date, mydata$Time)
# convert new DateTime column to POSIXlt
mydata$DateTime <- strptime(mydata$DateTime, format="%d/%m/%Y %H:%M:%S")
# create plot
png(filename="plot3.png", width=480, height=480)
plot(mydata$DateTime, mydata$Sub_metering_1, type="l", xlab="", ylab="Energy sub metering")
lines(mydata$DateTime, mydata$Sub_metering_2, type="l", col="red")
lines(mydata$DateTime, mydata$Sub_metering_3, type="l", col="blue")
legend("topright", col=c("black","red","blue"), lty=1, legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))
dev.off()
| /plot3.R | no_license | BorisLenz/ExData_Plotting1 | R | false | false | 1,459 | r | ################################################################################
# Course: Exploratory Data Analysis #
# Author: Boris Lenz #
# Task: Programming Assignment 1, Part 3 #
# Date: 2015-04-12 #
################################################################################
# set language to English
Sys.setlocale(locale="English")
# read the source data
unzip("exdata_data_household_power_consumption.zip")
mydata <- read.table("household_power_consumption.txt", header=T, sep=";", na.strings="?")
# subset to only data from 1/2/2007 and 2/2/2007
mydata <- mydata[mydata$Date %in% c('1/2/2007', '2/2/2007'), ]
# create new column with date and time combined
mydata$DateTime <- paste(mydata$Date, mydata$Time)
# convert new DateTime column to POSIXlt
mydata$DateTime <- strptime(mydata$DateTime, format="%d/%m/%Y %H:%M:%S")
# create plot
png(filename="plot3.png", width=480, height=480)
plot(mydata$DateTime, mydata$Sub_metering_1, type="l", xlab="", ylab="Energy sub metering")
lines(mydata$DateTime, mydata$Sub_metering_2, type="l", col="red")
lines(mydata$DateTime, mydata$Sub_metering_3, type="l", col="blue")
legend("topright", col=c("black","red","blue"), lty=1, legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))
dev.off()
|
\name{flexscan}
\alias{flexscan}
\title{Flexible Scan Statistics}
\usage{
flexscan(map,case,pop,nsim,k,alpha,isplot,col)
}
\description{
An easy way to conduct a flexible spatial scan. A Monte Carlo method is used to test the spatial clusters given the cases, population, and shapefile. A formatted table and a map of the detected clusters are included in the result report. The method can be referenced at: Toshiro Tango and Kunihiko Takahashi (2005) <doi:10.1186/1476-072X-4-11>.
}
\arguments{
\item{map}{spatial object, typically a shapefile read in using 'rgdal::readOGR'}
\item{case}{numeric, a vector of number of cases for each region of 'map'; it is noteworthy that the order of regions in 'case' is corresponding to that in 'map'}
\item{pop}{numeric, a vector of number of population for each region of 'map'; it is noteworthy that the order of regions in 'pop' is corresponding to that in 'map'}
\item{nsim}{numeric, the number of simulations for Monte Carlo test; the default is 999}
\item{k}{numeric, the maximum number of regions allowed for clusters; the default is 10}
\item{alpha}{numeric, the significance level of flexible scan test; the default is 0.05}
  \item{isplot}{logical, whether to plot the results; the default is \code{TRUE}}
\item{col}{color vector, two colors for most likely cluster and secondary cluster; the default is c("red","blue")}
}
\value{
\item{data.frame}{a data.frame containing 8 variables as follows:}
\item{Cluster Type}{most likely cluster or secondary cluster}
\item{Region ID}{region id for each cluster; it is noteworthy that the 'ID' is the order of regions in 'map'}
\item{Observed Cases}{observed cases for each cluster}
\item{Expected Cases}{expected cases for each cluster}
\item{SR}{standardized ratio of observed to expected cases}
\item{RR}{relative risk for each cluster}
\item{LLR}{loglikelihood ratio for each cluster}
\item{P Value}{p value of likelihood ratio test for each cluster}
}
\author{
Zhicheng Du<dgdzc@hotmail.com>, Yuantao Hao<haoyt@mail.sysu.edu.cn>
}
\note{
Please feel free to contact us if you have any advice or find any bugs!
Reference:
Tango, T. & Takahashi, K. A Flexibly Shaped Spatial Scan Statistic for Detecting Clusters. INT J HEALTH GEOGR. 4, 11 (2005).
Updates:
Version 0.2.0: Fix the bugs according to the dependent package of "smerc" version 1.1
Version 0.2.2: Fix the bugs according to the dependent package of "spdep"
}
\examples{
data(map)
data(sample)
# simple example for checks; turn the warnings back on using 'options(warn=0)'
options(warn=-1)
flexscan(map,case=sample$case,pop=sample$pop,k=3,isplot=FALSE,nsim=10)
\dontrun{
flexscan(map,case=sample$case,pop=sample$pop)
}
}
| /man/flexscan.Rd | no_license | cran/FlexScan | R | false | false | 2,758 | rd | \name{flexscan}
\alias{flexscan}
\title{Flexible Scan Statistics}
\usage{
flexscan(map,case,pop,nsim,k,alpha,isplot,col)
}
\description{
An easy way to conduct flexible scan. Monte-Carlo method is used to test the spatial clusters given the cases, population, and shapefile. A table with formal style and a map with clusters are included in the result report. The method can be referenced at: Toshiro Tango and Kunihiko Takahashi (2005) <doi:10.1186/1476-072X-4-11>.
}
\arguments{
\item{map}{spatial object, typically a shapefile read in using 'rgdal::readOGR'}
\item{case}{numeric, a vector of number of cases for each region of 'map'; it is noteworthy that the order of regions in 'case' is corresponding to that in 'map'}
\item{pop}{numeric, a vector of number of population for each region of 'map'; it is noteworthy that the order of regions in 'pop' is corresponding to that in 'map'}
\item{nsim}{numeric, the number of simulations for Monte Carlo test; the default is 999}
\item{k}{numeric, the maximum number of regions allowed for clusters; the default is 10}
\item{alpha}{numeric, the significance level of flexible scan test; the default is 0.05}
  \item{isplot}{logical, whether to plot the results; the default is \code{TRUE}}
\item{col}{color vector, two colors for most likely cluster and secondary cluster; the default is c("red","blue")}
}
\value{
\item{data.frame}{a data.frame containing 8 variables as follows:}
\item{Cluster Type}{most likely cluster or secondary cluster}
\item{Region ID}{region id for each cluster; it is noteworthy that the 'ID' is the order of regions in 'map'}
\item{Observed Cases}{observed cases for each cluster}
\item{Expected Cases}{expected cases for each cluster}
\item{SR}{standardized ratio of observed to expected cases}
\item{RR}{relative risk for each cluster}
\item{LLR}{loglikelihood ratio for each cluster}
\item{P Value}{p value of likelihood ratio test for each cluster}
}
\author{
Zhicheng Du<dgdzc@hotmail.com>, Yuantao Hao<haoyt@mail.sysu.edu.cn>
}
\note{
Please feel free to contact us, if you have any advice and find any bug!
Reference:
Tango, T. & Takahashi, K. A Flexibly Shaped Spatial Scan Statistic for Detecting Clusters. INT J HEALTH GEOGR. 4, 11 (2005).
Updates:
Version 0.2.0: Fix the bugs according to the dependent package of "smerc" version 1.1
Version 0.2.2: Fix the bugs according to the dependent package of "spdep"
}
\examples{
data(map)
data(sample)
# simple example for checks; turn the warnings back on using 'options(warn=0)'
options(warn=-1)
flexscan(map,case=sample$case,pop=sample$pop,k=3,isplot=FALSE,nsim=10)
\dontrun{
flexscan(map,case=sample$case,pop=sample$pop)
}
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mapping.r
\name{map_species_builds}
\alias{map_species_builds}
\title{Convert coordinates between genome builds}
\usage{
map_species_builds(species, region, asm_one, asm_two,
coord_system = "chromosome", format = "json")
}
\description{
Convert coordinates between genome builds
}
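\examples{
\dontrun{
# A sketch of mapping a human region between assemblies; the region string and
# assembly names follow Ensembl REST conventions and are illustrative values only.
map_species_builds("human", "X:1000000..1000100:1",
                   asm_one = "GRCh37", asm_two = "GRCh38")
}
}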
| /man/map_species_builds.Rd | no_license | dwinter/rensembl | R | false | true | 362 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mapping.r
\name{map_species_builds}
\alias{map_species_builds}
\title{Convert coordinates between genome builds}
\usage{
map_species_builds(species, region, asm_one, asm_two,
coord_system = "chromosome", format = "json")
}
\description{
Convert coordinates between genome builds
}
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/dnacheck.R
\name{dnacheck}
\alias{dnacheck}
\title{Check if the DNA sequence is in the 4 default types}
\usage{
dnacheck(x)
}
\arguments{
\item{x}{A character vector, as the input DNA sequence.}
}
\value{
Logical. \code{TRUE} if all of the DNA types of the sequence
are within the 4 default types.
}
\description{
Check if the DNA sequence is in the 4 default types
}
\details{
This function checks if the DNA sequence types are within the 4 default types (A, G, C, T).
}
\examples{
x = 'GACTGAACTGCACTTTGGTTTCATATTATTTGCTC'
dnacheck(x) # TRUE
dnacheck(paste(x, 'Z', sep = '')) # FALSE
}
\author{
Min-feng Zhu <\email{wind2zhu@163.com}>
}
\keyword{check}
| /man/dnacheck.Rd | no_license | wind22zhu/rDNAse | R | false | false | 750 | rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/dnacheck.R
\name{dnacheck}
\alias{dnacheck}
\title{Check if the DNA sequence is in the 4 default types}
\usage{
dnacheck(x)
}
\arguments{
\item{x}{A character vector, as the input DNA sequence.}
}
\value{
Logical. \code{TRUE} if all of the DNA types of the sequence
are within the 4 default types.
}
\description{
Check if the DNA sequence is in the 4 default types
}
\details{
This function checks if the DNA sequence types are within the 4 default types (A, G, C, T).
}
\examples{
x = 'GACTGAACTGCACTTTGGTTTCATATTATTTGCTC'
dnacheck(x) # TRUE
dnacheck(paste(x, 'Z', sep = '')) # FALSE
}
\author{
Min-feng Zhu <\email{wind2zhu@163.com}>
}
\keyword{check}
|
\name{summary.spRMM}
\alias{summary.spRMM}
\title{Summarizing fits from Stochastic EM algorithm for semiparametric scaled mixture of censored data}
\usage{
\method{summary}{spRMM}(object, digits = 6, ...)
}
\arguments{
\item{object}{an object of class \code{spRMM} such as a result of a call
to \code{\link{spRMM_SEM}}}
\item{digits}{Significant digits for printing values}
\item{...}{Additional parameters passed to \code{print}.}
}
\description{
\code{\link[base]{summary}} method for class \code{spRMM}.
}
\details{
\code{\link{summary.spRMM}} prints scalar parameter estimates for
a fitted mixture model: each component weight and the scaling factor, see reference below.
The functional (nonparametric) estimates of survival and hazard rate functions can be obtained
using \code{\link{plotspRMM}}.
}
\value{
The function \code{\link{summary.spRMM}} prints the final loglikelihood
value at the solution as well as the estimated mixing weights and the scaling parameter.
}
\seealso{
Function for plotting functional (nonparametric) estimates:
\code{\link{plotspRMM}}.
Other models and algorithms for censored lifetime data
(name convention is model_algorithm):
\code{\link{expRMM_EM}},
\code{\link{weibullRMM_SEM}}.
}
\references{
\itemize{
\item Bordes, L., and Chauveau, D. (2016),
Stochastic EM algorithms for parametric and semiparametric mixture models
for right-censored lifetime data,
Computational Statistics, Volume 31, Issue 4, pages 1513-1538.
\url{https://link.springer.com/article/10.1007/s00180-016-0661-7}
}
}
\author{Didier Chauveau}
\examples{
# See example(spRMM_SEM)
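\dontrun{
# a minimal sketch, assuming 'fit' holds the output of a previous spRMM_SEM() run;
# the inputs t (observed times) and d (censoring indicators) are illustrative names
fit <- spRMM_SEM(t, d)
summary(fit)
}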
}
\keyword{file}
| /man/summary.spRMM.Rd | no_license | cran/mixtools | R | false | false | 1,661 | rd | \name{summary.spRMM}
\alias{summary.spRMM}
\title{Summarizing fits from Stochastic EM algorithm for semiparametric scaled mixture of censored data}
\usage{
\method{summary}{spRMM}(object, digits = 6, ...)
}
\arguments{
\item{object}{an object of class \code{spRMM} such as a result of a call
to \code{\link{spRMM_SEM}}}
\item{digits}{Significant digits for printing values}
\item{...}{Additional parameters passed to \code{print}.}
}
\description{
\code{\link[base]{summary}} method for class \code{spRMM}.
}
\details{
\code{\link{summary.spRMM}} prints scalar parameter estimates for
a fitted mixture model: each component weight and the scaling factor, see reference below.
The functional (nonparametric) estimates of survival and hazard rate functions can be obtained
using \code{\link{plotspRMM}}.
}
\value{
The function \code{\link{summary.spRMM}} prints the final loglikelihood
value at the solution as well as the estimated mixing weights and the scaling parameter.
}
\seealso{
Function for plotting functional (nonparametric) estimates:
\code{\link{plotspRMM}}.
Other models and algorithms for censored lifetime data
(name convention is model_algorithm):
\code{\link{expRMM_EM}},
\code{\link{weibullRMM_SEM}}.
}
\references{
\itemize{
\item Bordes, L., and Chauveau, D. (2016),
Stochastic EM algorithms for parametric and semiparametric mixture models
for right-censored lifetime data,
Computational Statistics, Volume 31, Issue 4, pages 1513-1538.
\url{https://link.springer.com/article/10.1007/s00180-016-0661-7}
}
}
\author{Didier Chauveau}
\examples{
# See example(spRMM_SEM)
}
\keyword{file}
|
library(gemmaAPI)
library(shinyjs)
library(stringr)
library(ogbox)
library(memoise)
library(markerGeneProfile)
library(homologene)
library(reshape2)
library(ggplot2)
library(shiny)
library(DT)
library(magrittr)
library(viridis)
library(cowplot)
library(purrr)
data("mouseMarkerGenes", package = 'markerGeneProfile')
data('mouseMarkerGenesNCBI',package = 'markerGeneProfile')
data("mouseMarkerGenesCombined", package = 'markerGeneProfile')
data('mouseMarkerGenesCombinedNCBI',package = 'markerGeneProfile')
data("mouseRegionHierarchy", package='markerGeneProfile')
mouseMarkerGenes = mouseMarkerGenesCombined
mouseMarkerGenesNCBI = mouseMarkerGenesCombinedNCBI
mem_median = memoise(median)
gemmaPrep = function(study){
if(exists('session')){
setProgress(0, detail ="Compiling metadata")
}
meta = compileMetadata(study,memoised = TRUE)
species = meta$taxon %>% unique
neededGenes = mouseMarkerGenesNCBI %>% unlist %>% unique
neededGenes = neededGenes[!grepl('\\|',neededGenes)]
taxonData = taxonInfo(species,memoised = TRUE)
speciesID = taxonData[[1]]$ncbiId
if(speciesID != 10090){
neededGenes = homologene(neededGenes,inTax = 10090, outTax = speciesID)[[paste0(speciesID,'_ID')]]
}
# expression = datasetInfo(study,request= 'data',
# IdColnames=TRUE,
# memoised = TRUE)
if(exists('session')){
setProgress(4, detail ="Getting expression data")
}
# expression = datasetInfo(study,request= 'data',
# IdColnames=TRUE,
# memoised = TRUE)
expression = gemmaAPI::expressionSubset(study,neededGenes,consolidate = 'pickvar')
if(exists('session')){
setProgress(7, detail ="Filtering expression data")
}
list[gene,exp] = expression %>% sepExpr()
# some samples have outliers. they are NaNs in the entire row. remove them
outlier = exp %>% apply(2,function(y){all(is.nan(y))})
# some rows have NaNs remove them too
keep = exp[!outlier] %>% apply(1,function(y){!any(is.nan(y))})
gene = gene[keep,]
exp = exp[!outlier][keep,]
# note that there can be a difference between samples in metadata and
# samples in filtered exp now
exp = exp[,ogbox::trimNAs(match(meta$sampleName,colnames(exp)))]
meta = meta[match(colnames(exp),meta$sampleName),]
expression = cbind(gene,exp)
expression = expression[!expression$GeneSymbol=='',]
medExp = mem_median(exp %>% unlist)
expression = mostVariable(expression,
genes = "GeneSymbol",
threshFun = max,
threshold = medExp)
return(list(meta,expression))
}
mem_gemmaPrep = memoise(gemmaPrep)
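# getCategoryAnnotations: for each row of `data`, split `categoryColumn` on the
# `split` regex, flag which of the resulting terms match `category`, and return
# the terms at the matching positions from every column in `annotationColumns`;
# with merge = TRUE the per-column matches are collapsed back into single
# "|"-separated strings.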
getCategoryAnnotations = function(data,
category,
categoryColumn,
annotationColumns,
split = '(?<=[^\\s])\\|(?=[^\\s])',
merge=FALSE){
categories = data[[categoryColumn]] %>% stringr::str_split(split)
categoryMatch = categories %>% lapply(function(x){
x %in% category
})
sepAnnots = data[annotationColumns] %>% apply(2,function(x){stringr::str_split(x,split)})
relevantAnnots = lapply(seq_len(length(sepAnnots[[1]])),function(i){
out = lapply(seq_along(sepAnnots), function(j){
sepAnnots[[j]][[i]][categoryMatch[[i]]]
})
names(out) = annotationColumns
return(out)
})
if(merge){
relevantAnnots %<>% lapply(function(x){
x %>% lapply(function(y){paste(y,collapse= '|')})
})
}
return(relevantAnnots)
}
mem_mgpEstimate = memoise(mgpEstimate) | /global.R | permissive | PavlidisLab/gemmaMGP | R | false | false | 3,928 | r | library(gemmaAPI)
library(shinyjs)
library(stringr)
library(ogbox)
library(memoise)
library(markerGeneProfile)
library(homologene)
library(reshape2)
library(ggplot2)
library(shiny)
library(DT)
library(magrittr)
library(viridis)
library(cowplot)
library(purrr)
data("mouseMarkerGenes", package = 'markerGeneProfile')
data('mouseMarkerGenesNCBI',package = 'markerGeneProfile')
data("mouseMarkerGenesCombined", package = 'markerGeneProfile')
data('mouseMarkerGenesCombinedNCBI',package = 'markerGeneProfile')
data("mouseRegionHierarchy", package='markerGeneProfile')
mouseMarkerGenes = mouseMarkerGenesCombined
mouseMarkerGenesNCBI = mouseMarkerGenesCombinedNCBI
mem_median = memoise(median)
gemmaPrep = function(study){
if(exists('session')){
setProgress(0, detail ="Compiling metadata")
}
meta = compileMetadata(study,memoised = TRUE)
species = meta$taxon %>% unique
neededGenes = mouseMarkerGenesNCBI %>% unlist %>% unique
neededGenes = neededGenes[!grepl('\\|',neededGenes)]
taxonData = taxonInfo(species,memoised = TRUE)
speciesID = taxonData[[1]]$ncbiId
if(speciesID != 10090){
neededGenes = homologene(neededGenes,inTax = 10090, outTax = speciesID)[[paste0(speciesID,'_ID')]]
}
# expression = datasetInfo(study,request= 'data',
# IdColnames=TRUE,
# memoised = TRUE)
if(exists('session')){
setProgress(4, detail ="Getting expression data")
}
# expression = datasetInfo(study,request= 'data',
# IdColnames=TRUE,
# memoised = TRUE)
expression = gemmaAPI::expressionSubset(study,neededGenes,consolidate = 'pickvar')
if(exists('session')){
setProgress(7, detail ="Filtering expression data")
}
list[gene,exp] = expression %>% sepExpr()
# some samples have outliers. they are NaNs in the entire row. remove them
outlier = exp %>% apply(2,function(y){all(is.nan(y))})
# some rows have NaNs remove them too
keep = exp[!outlier] %>% apply(1,function(y){!any(is.nan(y))})
gene = gene[keep,]
exp = exp[!outlier][keep,]
# note that there can be a difference between samples in metadata and
# samples in filtered exp now
exp = exp[,ogbox::trimNAs(match(meta$sampleName,colnames(exp)))]
meta = meta[match(colnames(exp),meta$sampleName),]
expression = cbind(gene,exp)
expression = expression[!expression$GeneSymbol=='',]
medExp = mem_median(exp %>% unlist)
expression = mostVariable(expression,
genes = "GeneSymbol",
threshFun = max,
threshold = medExp)
return(list(meta,expression))
}
mem_gemmaPrep = memoise(gemmaPrep)
getCategoryAnnotations = function(data,
category,
categoryColumn,
annotationColumns,
split = '(?<=[^\\s])\\|(?=[^\\s])',
merge=FALSE){
categories = data[[categoryColumn]] %>% stringr::str_split(split)
categoryMatch = categories %>% lapply(function(x){
x %in% category
})
sepAnnots = data[annotationColumns] %>% apply(2,function(x){stringr::str_split(x,split)})
relevantAnnots = lapply(seq_len(length(sepAnnots[[1]])),function(i){
out = lapply(seq_along(sepAnnots), function(j){
sepAnnots[[j]][[i]][categoryMatch[[i]]]
})
names(out) = annotationColumns
return(out)
})
if(merge){
relevantAnnots %<>% lapply(function(x){
x %>% lapply(function(y){paste(y,collapse= '|')})
})
}
return(relevantAnnots)
}
mem_mgpEstimate = memoise(mgpEstimate) |
`m4plMoreSummary` <-
function(x, out="result", thetaInitial=NULL){
## Future add of RMSE if thetaInitial != NULL
if (!is.null(thetaInitial) && length(thetaInitial) == 1) thetaInitial <- rep(thetaInitial,length(x))
extract <- function(x,i) x[[i]]
if (length(x[[1]]$par) > 1) {
rep <- length(x)
model <- names(x[[1]]$par)
personParameters <- data.frame(t(matrix(unlist(lapply(x,extract,1)),ncol=rep)))
colnames(personParameters) <- model
if (!is.null(thetaInitial)) {
personParameters <- data.frame(personParameters, Error=personParameters[,1] - thetaInitial)
}
mPersonParameters <- sapply(personParameters, mean, na.rm=TRUE) ##
PersonParameters <- rbind(MEAN = mPersonParameters,
MEDIAN = sapply(personParameters, median, na.rm=TRUE), ##
SD = sapply(personParameters, sd, na.rm=TRUE), ##
N = colSums(!is.na(personParameters)))
personSe <- data.frame(t(matrix(unlist(lapply(x,extract,2)),ncol=rep)))
colnames(personSe) <- paste("Se", model, sep="")
mPersonSe <- sqrt(sapply(data.frame(personSe^2), mean, na.rm=TRUE)) ##
personSeTemp <- personSe
if (length(which(is.na(personSe))) > 0) personSeTemp <- personSeTemp[-which(is.na(personSe)),]
PersonSe <- rbind(MEAN = mPersonSe,
MEDIAN = sapply(personSeTemp, median, na.rm=TRUE), ##
SD = sapply(personSe, sd, na.rm=TRUE), ##
N = colSums(!is.na(personSe)))
colnames(PersonSe) <- model
    ## Action so that no parameter has a completely zero empirical variance
wNEZERO <- which(sapply(personParameters,sd) <= 0); if (length(wNEZERO) > 0) ##
personParameters[1,wNEZERO] <- personParameters[1,wNEZERO] + 0.00000001
eCorrelation <- cor(personParameters, use = "pairwise.complete.obs")
personCor <- t(matrix(unlist(lapply(x,extract,3)),ncol=rep))
nParameters <- length(model)
personCor <- unlist(lapply(x,extract,3))
personCor <- matrix(sapply(data.frame(t(matrix(personCor,ncol=rep))), mean, na.rm=TRUE), ##
ncol=nParameters)
## Correction to be sure that NA values are returned
personCor[is.nan(c(personCor))] <- NA
personLL <- t(matrix(unlist(lapply(x,extract,4)),ncol=rep))
mPersonLL <- sapply(data.frame(personLL), mean, na.rm=TRUE) ##
personLLTemp <- data.frame(personLL)
if (length(which(is.na(personLL))) > 0) personLLTemp <- personLLTemp[-which(is.na(personLL)),]
PersonLL <- rbind(MEAN = mPersonLL,
MEDIAN = apply(personLLTemp, 2, median, na.rm=TRUE),
SD = sapply(data.frame(personLL), sd, na.rm=TRUE), ##
N = colSums(!is.na(personLL)))
#apply(personLLTemp,2,median, na.rm=TRUE)
colnames(PersonLL) <- colnames(personLL) <- c("LL","AIC","BIC")
colnames(personCor) <- rownames(personCor) <- model
if (out == "result") result <- data.frame(data.frame(personParameters),
data.frame(personSe),
data.frame(personLL))
if (out == "report") report <- list(parameters = PersonParameters,
se = PersonSe,
logLikelihood = PersonLL,
eCorrelation = eCorrelation,
tCorrelation = personCor)
}
if (length(x[[1]]$par) == 1) {
model <- "T"
nSubjects <- length(x)
personParameters <- matrix(unlist(lapply(x,extract,1)), nrow=nSubjects)
colnames(personParameters) <- model
if (!is.null(thetaInitial)) {
personParameters <- data.frame(cbind(personParameters, personParameters[,1] - thetaInitial))
colnames(personParameters) <- c("T","Error")
}
mPersonParameters <- sapply(personParameters, mean, na.rm=TRUE) ##
PersonParameters <- rbind(MEAN = mPersonParameters,
MEDIAN = sapply(personParameters, median, na.rm=TRUE), ##
SD = sapply(personParameters, sd, na.rm=TRUE), ##
N = colSums(!is.na(personParameters)))
personSe <- matrix(unlist(lapply(x,extract,2)), nrow=nSubjects)
mPersonSe <- sqrt(sapply(data.frame(personSe^2), mean, na.rm=TRUE)) ##
PersonSe <- rbind(MEAN = mPersonSe,
MEDIAN = sapply(data.frame(personSe), median, na.rm=TRUE), ##
SD = sapply(data.frame(personSe), sd, na.rm=TRUE), ##
N = colSums(!is.na(personSe)))
personLL <- t(matrix(unlist(lapply(x,extract,4)), ncol=nSubjects))
colnames(personLL)<- c("LL","AIC","BIC")
mPersonLL <- sapply(data.frame(personLL), mean, na.rm=TRUE) ##
PersonLL <- rbind(MEAN = mPersonLL,
MEDIAN = sapply(data.frame(personLL), median, na.rm=TRUE), ##
SD = sapply(data.frame(personLL), sd, na.rm=TRUE), ##
N = colSums(!is.na(personLL)))
colnames(PersonSe) <- model
#colnames(personParameters) <- model
colnames(personSe) <- paste("Se", model, sep="")
if (out == "result") result <- data.frame(data.frame(personParameters),
data.frame(personSe),
data.frame(personLL))
if (out == "report") report <- list(parameters = PersonParameters,
se = PersonSe,
logLikelihood = PersonLL,
tCorrelation = 1,
tCorrelation = 1)
}
if (out == "result") return(result)
if (out == "report") return(report)
}
| /irtProb/R/m4plMoreSummary.r | no_license | ingted/R-Examples | R | false | false | 6,425 | r | `m4plMoreSummary` <-
function(x, out="result", thetaInitial=NULL){
## Future add of RMSE if thetaInitial != NULL
if (!is.null(thetaInitial) && length(thetaInitial) == 1) thetaInitial <- rep(thetaInitial,length(x))
extract <- function(x,i) x[[i]]
if (length(x[[1]]$par) > 1) {
rep <- length(x)
model <- names(x[[1]]$par)
personParameters <- data.frame(t(matrix(unlist(lapply(x,extract,1)),ncol=rep)))
colnames(personParameters) <- model
if (!is.null(thetaInitial)) {
personParameters <- data.frame(personParameters, Error=personParameters[,1] - thetaInitial)
}
mPersonParameters <- sapply(personParameters, mean, na.rm=TRUE) ##
PersonParameters <- rbind(MEAN = mPersonParameters,
MEDIAN = sapply(personParameters, median, na.rm=TRUE), ##
SD = sapply(personParameters, sd, na.rm=TRUE), ##
N = colSums(!is.na(personParameters)))
personSe <- data.frame(t(matrix(unlist(lapply(x,extract,2)),ncol=rep)))
colnames(personSe) <- paste("Se", model, sep="")
mPersonSe <- sqrt(sapply(data.frame(personSe^2), mean, na.rm=TRUE)) ##
personSeTemp <- personSe
if (length(which(is.na(personSe))) > 0) personSeTemp <- personSeTemp[-which(is.na(personSe)),]
PersonSe <- rbind(MEAN = mPersonSe,
MEDIAN = sapply(personSeTemp, median, na.rm=TRUE), ##
SD = sapply(personSe, sd, na.rm=TRUE), ##
N = colSums(!is.na(personSe)))
colnames(PersonSe) <- model
## Action so that no parameters has a complete zero empirical variance
wNEZERO <- which(sapply(personParameters,sd) <= 0); if (length(wNEZERO) > 0) ##
personParameters[1,wNEZERO] <- personParameters[1,wNEZERO] + 0.00000001
eCorrelation <- cor(personParameters, use = "pairwise.complete.obs")
personCor <- t(matrix(unlist(lapply(x,extract,3)),ncol=rep))
nParameters <- length(model)
personCor <- unlist(lapply(x,extract,3))
personCor <- matrix(sapply(data.frame(t(matrix(personCor,ncol=rep))), mean, na.rm=TRUE), ##
ncol=nParameters)
## Correction to be sure that NA values are returned
personCor[is.nan(c(personCor))] <- NA
personLL <- t(matrix(unlist(lapply(x,extract,4)),ncol=rep))
mPersonLL <- sapply(data.frame(personLL), mean, na.rm=TRUE) ##
personLLTemp <- data.frame(personLL)
if (length(which(is.na(personLL))) > 0) personLLTemp <- personLLTemp[-which(is.na(personLL)),]
PersonLL <- rbind(MEAN = mPersonLL,
MEDIAN = apply(personLLTemp, 2, median, na.rm=TRUE),
SD = sapply(data.frame(personLL), sd, na.rm=TRUE), ##
N = colSums(!is.na(personLL)))
#apply(personLLTemp,2,median, na.rm=TRUE)
colnames(PersonLL) <- colnames(personLL) <- c("LL","AIC","BIC")
colnames(personCor) <- rownames(personCor) <- model
if (out == "result") result <- data.frame(data.frame(personParameters),
data.frame(personSe),
data.frame(personLL))
if (out == "report") report <- list(parameters = PersonParameters,
se = PersonSe,
logLikelihood = PersonLL,
eCorrelation = eCorrelation,
tCorrelation = personCor)
}
if (length(x[[1]]$par) == 1) {
model <- "T"
nSubjects <- length(x)
personParameters <- matrix(unlist(lapply(x,extract,1)), nrow=nSubjects)
colnames(personParameters) <- model
if (!is.null(thetaInitial)) {
personParameters <- data.frame(cbind(personParameters, personParameters[,1] - thetaInitial))
colnames(personParameters) <- c("T","Error")
}
mPersonParameters <- sapply(personParameters, mean, na.rm=TRUE) ##
PersonParameters <- rbind(MEAN = mPersonParameters,
MEDIAN = sapply(personParameters, median, na.rm=TRUE), ##
SD = sapply(personParameters, sd, na.rm=TRUE), ##
N = colSums(!is.na(personParameters)))
personSe <- matrix(unlist(lapply(x,extract,2)), nrow=nSubjects)
mPersonSe <- sqrt(sapply(data.frame(personSe^2), mean, na.rm=TRUE)) ##
PersonSe <- rbind(MEAN = mPersonSe,
MEDIAN = sapply(data.frame(personSe), median, na.rm=TRUE), ##
SD = sapply(data.frame(personSe), sd, na.rm=TRUE), ##
N = colSums(!is.na(personSe)))
personLL <- t(matrix(unlist(lapply(x,extract,4)), ncol=nSubjects))
colnames(personLL)<- c("LL","AIC","BIC")
mPersonLL <- sapply(data.frame(personLL), mean, na.rm=TRUE) ##
PersonLL <- rbind(MEAN = mPersonLL,
MEDIAN = sapply(data.frame(personLL), median, na.rm=TRUE), ##
SD = sapply(data.frame(personLL), sd, na.rm=TRUE), ##
N = colSums(!is.na(personLL)))
colnames(PersonSe) <- model
#colnames(personParameters) <- model
colnames(personSe) <- paste("Se", model, sep="")
if (out == "result") result <- data.frame(data.frame(personParameters),
data.frame(personSe),
data.frame(personLL))
if (out == "report") report <- list(parameters = PersonParameters,
se = PersonSe,
logLikelihood = PersonLL,
tCorrelation = 1,
tCorrelation = 1)
}
if (out == "result") return(result)
if (out == "report") return(report)
}
|
#### GSEA using GSEABase ####
library(GSEABase)
library(hgu133plus2cdf)
library(GO.db)
library(Biobase)
##C9orf72##
#set working directory to location of data
setwd ("/Users/clairegreen/Documents/PhD/TDP-43/TDP-43 Data Sets/C9orf72_LCM")
#read the .csv twice: as a data frame (for row-name handling) and as a numeric matrix; column headers are present
exp_C9.LCM <- read.csv ("eset_NineP_150612_exprs.csv", header=TRUE)
C9_Mat <- as.matrix(read.csv("eset_NineP_150612_exprs.csv", header=TRUE))
#specify that first column contains gene names
row.names(exp_C9.LCM) <- exp_C9.LCM[,1]
#specify that all other columns are gene expression data
exp_C9.LCM<- exp_C9.LCM[,2:12]
# note: phenoData() expects an ExpressionSet / AnnotatedDataFrame, not a plain
# data frame, so no sample annotation is attached here
#create a minimal ExpressionSet object using the ExpressionSet constructor
minimalSet <- ExpressionSet(assayData = as.matrix(exp_C9.LCM))
# build a GeneSet from the probe identifiers of the ExpressionSet;
# "C9orf72_LCM_probes" is a placeholder identifier, not a value from the original script
egs <- GeneSet(minimalSet[1:54675, ], setIdentifier = "C9orf72_LCM_probes")
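# Quick sanity checks (illustrative additions, assuming the objects above were
# built successfully): exprs() is from Biobase, geneIds()/details() from GSEABase.
dim(exprs(minimalSet))   # probes x samples
head(geneIds(egs))       # probe identifiers stored in the gene set
details(egs)             # printed summary of the GeneSet metadata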
context("Sampling tests")
set.seed(1)
df <- data.frame(matrix(rnorm(1000), ncol = 10))
df$Label1 <- c(sample(c(0,1), 100, replace = TRUE))
df$Label2 <- c(sample(c(0,1), 100, replace = TRUE))
df$Label3 <- c(sample(c(0,1), 100, replace = TRUE))
df$Label4 <- as.numeric(df$Label1 == 0 | df$Label2 == 0 | df$Label3 == 0)
mdata <- mldr::mldr_from_dataframe(df, labelIndices = c(11, 12, 13, 14),
name = "testMLDR")
empty.mdata <- mldr::mldr_from_dataframe(df[, 1:13],
labelIndices = c(11, 12, 13),
name = "testMLDR")
set.seed(NULL)
testFolds <- function (kfold, original, msg) {
real <- unlist(lapply(kfold$fold, names))
expected <- unique(real)
expect_true(all(expected == real), label=msg)
comp <- sort(unlist(kfold$fold)) == seq(original$measures$num.instances)
expect_true(all(comp), label=msg)
expect_true(all(sort(real) == sort(rownames(original$dataset))), label=msg)
}
testEmptyIntersectRows <- function (a, b) {
expect_equal(length(intersect(rownames(a$dataset), rownames(b$dataset))), 0)
}
testCompletude <- function (list, original) {
names <- sort(unlist(lapply(list, function (fold) rownames(fold$dataset))))
expect_true(all(names == sort(rownames(original$dataset))))
}
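# The three helpers above are reused throughout the tests below:
# - testFolds(): every instance index must appear in exactly one fold, and the
#   accumulated names must match the row names of the original mldr dataset
# - testEmptyIntersectRows(): two partitions must not share any instance
# - testCompletude(): the union of the partitions must recover all original rows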
test_that("random holdout", {
folds <- create_holdout_partition(mdata, 0.7, "random")
expect_equal(length(folds), 2)
expect_is(folds[[1]], "mldr")
expect_is(folds[[2]], "mldr")
expect_equal(folds[[1]]$measures$num.instances, 70)
expect_equal(folds[[2]]$measures$num.instances, 30)
expect_equal(rownames(folds[[1]]$labels), rownames(folds[[2]]$labels))
testEmptyIntersectRows(folds[[1]], folds[[2]])
testCompletude(folds, mdata)
subfolds <- create_holdout_partition(folds[[1]], 0.5, "random")
testEmptyIntersectRows(subfolds[[1]], subfolds[[2]])
testCompletude(subfolds, folds[[1]])
folds <- create_holdout_partition(mdata, c("train"=0.5, "test"=0.5))
expect_named(folds, c("train", "test"))
expect_equal(folds$train$measures$num.instances,
folds$test$measures$num.instances)
testCompletude(folds, mdata)
folds <- create_holdout_partition(empty.mdata, c("train"=0.5, "test"=0.5))
expect_equal(folds$train$measures$num.instances,
folds$test$measures$num.instances)
testCompletude(folds, mdata)
set.seed(1)
f1 <- create_holdout_partition(mdata, c(0.5, 0.5))
set.seed(1)
f2 <- create_holdout_partition(mdata, c(0.5, 0.5))
expect_equal(f1, f2)
set.seed(NULL)
expect_error(create_holdout_partition(mdata, NULL))
expect_error(create_holdout_partition(mdata, c(0.5,0.8,0.1)))
})
test_that("stratified holdout", {
f <- create_holdout_partition(mdata, c("a"=0.4, "b"=0.4, "c"=0.2),
"stratified")
expect_equal(length(f), 3)
expect_named(f, c("a", "b", "c"))
expect_equal(f[[1]]$measures$num.instances, 40)
expect_equal(f[[2]]$measures$num.instances, 40)
expect_equal(f[[3]]$measures$num.instances, 20)
expect_equal(rownames(f[[1]]$labels), rownames(f[[2]]$labels))
expect_equal(rownames(f[[1]]$labels), rownames(f[[3]]$labels))
testEmptyIntersectRows(f$a, f$b)
testEmptyIntersectRows(f$a, f$c)
testEmptyIntersectRows(f$b, f$c)
testCompletude(f, mdata)
folds <- create_holdout_partition(empty.mdata, c("train"=0.5, "test"=0.5),
"stratified")
expect_equal(folds$train$measures$num.instances,
folds$test$measures$num.instances)
testCompletude(folds, mdata)
sf <- create_holdout_partition(f$a, c("a"=0.5, "b"=0.5), "stratified")
expect_equal(length(sf), 2)
testEmptyIntersectRows(sf$a, sf$b)
testCompletude(sf, f$a)
})
test_that("iterative holdout", {
f <- create_holdout_partition(mdata, c("a"=0.4, "b"=0.4, "c"=0.1, "d"=0.1),
"iterative")
expect_equal(length(f), 4)
expect_named(f, c("a", "b", "c", "d"))
expect_equal(rownames(f[[1]]$labels), rownames(f[[2]]$labels))
expect_equal(rownames(f[[1]]$labels), rownames(f[[3]]$labels))
testEmptyIntersectRows(f$a, f$b)
testEmptyIntersectRows(f$a, f$c)
testEmptyIntersectRows(f$a, f$d)
testEmptyIntersectRows(f$b, f$c)
testEmptyIntersectRows(f$b, f$d)
testEmptyIntersectRows(f$c, f$d)
testCompletude(f, mdata)
folds <- create_holdout_partition(empty.mdata, c("train"=0.5, "test"=0.5),
"iterative")
testEmptyIntersectRows(folds$train, folds$test)
testCompletude(folds, mdata)
sf <- create_holdout_partition(f$a, c("a"=0.5, "b"=0.5), "iterative")
expect_equal(length(sf), 2)
testEmptyIntersectRows(sf$a, sf$b)
testCompletude(sf, f$a)
folds <- create_holdout_partition(mdata, 0.6)
sf <- create_holdout_partition(folds[[2]], c("a"=0.6, "b"=0.4), "iterative")
testEmptyIntersectRows(sf$a, sf$b)
testCompletude(sf, folds[[2]])
})
test_that("random kfold", {
f <- create_kfold_partition(mdata, 10, "random")
expect_is(f, "kFoldPartition")
expect_equal(f$k, 10)
expect_equal(length(f$fold), 10)
for (i in 1:10) {
expect_equal(length(f$fold[[i]]), 10)
}
testFolds(f, mdata, "f Random kfolds")
fdata1 <- partition_fold(f, 1)
fdata2 <- partition_fold(f, 10)
expect_equal(rownames(fdata1$labels), rownames(fdata2$labels))
expect_equal(fdata1$measures$num.instances, fdata2$measures$num.instances)
set.seed(1)
f1 <- create_kfold_partition(mdata, 4)
testFolds(f1, mdata, "f1 Random kfolds")
set.seed(1)
f2 <- create_kfold_partition(mdata, 4)
expect_equal(length(f1$fold), 4)
expect_equal(length(f1$fold[[2]]), 25)
expect_equal(f1, f2)
expect_false(all(f1$fold[[1]] %in% f$fold[[1]]))
set.seed(NULL)
f3 <- create_kfold_partition(mdata, 3)
testFolds(f3, mdata, "f3 Random kfolds")
expect_equal(f3$k, 3)
expect_equal(length(unlist(f3$fold)), 100)
expect_gt(length(f3$fold[[1]]), 32)
expect_gt(length(f3$fold[[2]]), 32)
expect_gt(length(f3$fold[[3]]), 32)
ds <- create_holdout_partition(mdata, c("train" = 0.9, "test" = 0.1))
f4 <- create_kfold_partition(ds$train, 9)
testFolds(f4, ds$train, "f4 Random kfolds")
folds <- create_kfold_partition(empty.mdata, 5)
testFolds(folds, empty.mdata, "empty Random kfolds")
})
test_that("stratified kfold", {
f <- create_kfold_partition(mdata, 10, "stratified")
expect_is(f, "kFoldPartition")
expect_equal(f$k, 10)
expect_equal(length(f$fold), 10)
for (i in 1:10) {
expect_equal(length(f$fold[[i]]), 10)
}
testFolds(f, mdata, "f Stratified kfold")
fdata1 <- partition_fold(f, 1)
fdata2 <- partition_fold(f, 10)
expect_equal(rownames(fdata1$labels), rownames(fdata2$labels))
expect_equal(fdata1$measures$num.instances, fdata2$measures$num.instances)
set.seed(1)
f1 <- create_kfold_partition(mdata, 4, "stratified")
testFolds(f1, mdata, "f1 Stratified kfold")
set.seed(1)
f2 <- create_kfold_partition(mdata, 4, "stratified")
expect_equal(length(f1$fold), 4)
expect_equal(length(f1$fold[[2]]), 25)
expect_equal(f1, f2)
expect_false(all(f1$fold[[1]] %in% f$fold[[1]]))
set.seed(NULL)
f3 <- create_kfold_partition(mdata, 3, "stratified")
testFolds(f3, mdata, "f3 Stratified kfold")
expect_equal(f3$k, 3)
expect_equal(length(unlist(f3$fold)), 100)
expect_gt(length(f3$fold[[1]]), 32)
expect_gt(length(f3$fold[[2]]), 32)
expect_gt(length(f3$fold[[3]]), 32)
ds <- create_holdout_partition(mdata, c("train" = 0.9, "test" = 0.1))
f4 <- create_kfold_partition(ds$train, 9, "stratified")
testFolds(f4, ds$train, "f4 Stratified kfold")
folds <- create_kfold_partition(empty.mdata, 5, "stratified")
testFolds(folds, empty.mdata, "empty stratified kfolds")
})
test_that("iterative kfold", {
f <- create_kfold_partition(mdata, 10, "iterative")
expect_is(f, "kFoldPartition")
expect_equal(f$k, 10)
expect_equal(length(f$fold), 10)
for (i in 1:10) {
expect_gt(length(f$fold[[i]]), 7)
}
testFolds(f, mdata, "f Iterative kfold")
fdata1 <- partition_fold(f, 1)
fdata2 <- partition_fold(f, 10)
expect_equal(rownames(fdata1$labels), rownames(fdata2$labels))
expect_equal(fdata1$measures$num.instances, fdata2$measures$num.instances)
set.seed(1)
f1 <- create_kfold_partition(mdata, 4, "iterative")
testFolds(f1, mdata, "f1 Iterative kfold")
set.seed(1)
f2 <- create_kfold_partition(mdata, 4, "iterative")
expect_equal(length(f1$fold), 4)
expect_true(all(sapply(f1$fold, length) > 15))
expect_equal(f1, f2)
expect_false(all(f1$fold[[1]] %in% f$fold[[1]]))
set.seed(NULL)
f3 <- create_kfold_partition(mdata, 3, "iterative")
testFolds(f3, mdata, "f3 Iterative kfold")
expect_equal(f3$k, 3)
expect_equal(length(unlist(f3$fold)), 100)
expect_gt(length(f3$fold[[1]]), 30)
expect_gt(length(f3$fold[[2]]), 30)
expect_gt(length(f3$fold[[3]]), 30)
ds <- create_holdout_partition(mdata, c("train" = 0.9, "test" = 0.1))
f4 <- create_kfold_partition(ds$train, 9, "iterative")
testFolds(f4, ds$train, "f4 Iterative kfold")
folds <- create_kfold_partition(empty.mdata, 5, "iterative")
testFolds(folds, empty.mdata, "empty iterative kfolds")
})
test_that("subset and random subset", {
rows <- 10:20
cols <- 3:7
data <- create_subset(mdata, rows, 1:10)
expect_is(data, "mldr")
expect_equal(data$measures$num.attributes, mdata$measures$num.attributes)
expect_equal(create_subset(mdata, rows), data)
data <- create_subset(mdata, 1:100, cols)
expect_equal(data$measures$num.instances, mdata$measures$num.instances)
expect_equal(data$dataset[data$labels$index],
mdata$dataset[mdata$labels$index])
data1 <- create_subset(mdata, rows, cols)
data2 <- create_subset(mdata, rows, cols)
expect_equal(data1, data2)
expect_error(create_subset(mdata, c(1,2,3,-5,-4,-10), c(1,2,3,-5,-4,-10)))
#TODO test values
data <- create_random_subset(mdata, 20, 5)
expect_equal(data$measures$num.instances, 20)
expect_equal(data$measures$num.attributes, 5 + data$measures$num.labels)
expect_equal(data$dataset[,data$labels$index],
mdata$dataset[rownames(data$dataset), mdata$labels$index])
})
test_that("Alternatives dataset for sampling", {
dataset <- cbind(mdata$dataset[mdata$labels$index],
mdata$dataset[mdata$attributesIndexes])
ndata <- mldr::mldr_from_dataframe(dataset, labelIndices = 1:4,
name = "testMLDR")
test <- create_holdout_partition(ndata)
expect_equal(colnames(test[[1]]$dataset), colnames(ndata$dataset))
expect_equal(colnames(test[[2]]$dataset), colnames(ndata$dataset))
test <- create_holdout_partition(ndata, method="iterative")
expect_equal(colnames(test[[1]]$dataset), colnames(ndata$dataset))
expect_equal(colnames(test[[2]]$dataset), colnames(ndata$dataset))
test <- create_holdout_partition(ndata, method="stratified")
expect_equal(colnames(test[[1]]$dataset), colnames(ndata$dataset))
expect_equal(colnames(test[[2]]$dataset), colnames(ndata$dataset))
kf <- create_kfold_partition(ndata, 3)
test <- partition_fold(kf, 1)
expect_equal(colnames(test[[1]]$dataset), colnames(ndata$dataset))
expect_equal(colnames(test[[2]]$dataset), colnames(ndata$dataset))
test <- partition_fold(kf, 2, has.validation = T)
expect_equal(colnames(test[[1]]$dataset), colnames(ndata$dataset))
expect_equal(colnames(test[[2]]$dataset), colnames(ndata$dataset))
expect_equal(colnames(test[[3]]$dataset), colnames(ndata$dataset))
})
#TODO test not complete partitions (using 90% per example)
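# ---------------------------------------------------------------------------
# Illustrative workflow (not part of the original test suite): the same utiml
# calls exercised above, written out as a plain sequence for reference.
example_holdout <- create_holdout_partition(mdata, c(train = 0.7, test = 0.3), "stratified")
example_kfold <- create_kfold_partition(example_holdout$train, 5, "iterative")
example_fold <- partition_fold(example_kfold, 1)                         # train/test mldr pair
example_fold_v <- partition_fold(example_kfold, 2, has.validation = TRUE) # train/test/validation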
# Crosscut - BEGIN
library(rstudioapi)
setwd(dirname(rstudioapi::getActiveDocumentContext()$path))
getwd()
source("load_movie_ratings_dataset.R")
# Crosscut - END
head(movieData)
str(movieData)
levels(movieData$Genre)
summary(movieData)
# Convert the Year column from numeric to a factor variable
movieData$Year <- factor(movieData$Year)
summary(movieData)
str(movieData)
levels(movieData$Year)
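# Illustrative companion step (not in the original script): converting a factor
# back to numbers must go through character first, otherwise the internal level
# codes are returned instead of the original years. YearNum is a new column
# added only for this illustration.
movieData$YearNum <- as.numeric(as.character(movieData$Year))
str(movieData$YearNum)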
require(data.table)
require(plyr)
require(tidyr)
require(caTools)
rm(list=setdiff(ls(),c('trs','trgnd','gnd','x0')))
### build main aggregate table
agg1.cust =
ddply(trs, ###
.(customer_id),summarise,
nn = length(amount),
sumAmount = sum(amount),
avrAmount = mean(amount),
medAmount = median(amount),
minAmount = min(amount),
maxAmount = max(amount),
lenDays = length(unique(day)),
minDay = min(day),
maxDay = max(day),
durDays = max(day)-min(day)+1,
minTime = min(time),
maxTime = max(time),
durTimes = max(time)-min(time)+1,
lenMcc = length(unique(mcc_code)),
lenType = length(unique(tr_type)),
lenTerm = length(unique(term_id)))
agg1.cust = merge(agg1.cust,gnd,all.x = TRUE)
richm = trs[amount<0,sum(amount),by='customer_id']
richp = trs[amount>0,sum(amount),by='customer_id']
names(richm) <- c('customer_id','richm')
names(richp) <- c('customer_id','richp')
agg1.cust = merge(agg1.cust,richm,all.x = TRUE)
agg1.cust = merge(agg1.cust,richp,all.x = TRUE)
agg1.cust$richm[is.na(agg1.cust$richm)] = 0
agg1.cust$richp[is.na(agg1.cust$richp)] = 0
agg1.cust$richm = agg1.cust$richm/agg1.cust$lenDays
agg1.cust$richp = agg1.cust$richp/agg1.cust$lenDays
agg1.cust$rich = agg1.cust$sumAmount/agg1.cust$lenDays
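# richm / richp / rich are, per customer, the average outflow (negative amounts),
# inflow (positive amounts) and net flow per active day; lenDays counts the
# distinct days on which the customer had at least one transaction.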
### build mcc_code+tr_type by customer_id
require(plyr)
require(tidyr)
agg1.code =
ddply(trs,.(customer_id,mcc_code,tr_type),summarise,
nn = length(mcc_code),
ss = sum(amount),
mm = mean(amount))
agg1.code$col = paste("mt","m",as.character(agg1.code$mcc_code),as.character(agg1.code$tr_type),sep="_");
agg1.tmp = spread(agg1.code[,c(1,6,7)],col,mm,fill=0)
agg1.tmp.id = agg1.tmp$customer_id
agg1.tmp$customer_id = NULL
#tmp = rowSums(agg1.tmp)
#agg1.tmp = agg1.tmp/tmp
agg1.tmp$customer_id = agg1.tmp.id
agg1.code.col = agg1.tmp
agg1.code$col = paste("mt","s",as.character(agg1.code$mcc_code),as.character(agg1.code$tr_type),sep="_");
agg1.tmp = spread(agg1.code[,c(1,5,7)],col,ss,fill=0)
agg1.tmp = merge(agg1.tmp,subset(agg1.cust,select=c("customer_id","nn","durDays",'lenDays')))
agg1.tmp.id = agg1.tmp$customer_id
agg1.tmp$customer_id = NULL
agg1.tmp = agg1.tmp/agg1.tmp$lenDays
agg1.tmp$nn = NULL
agg1.tmp$durDays = NULL
agg1.tmp$lenDays = NULL
#tmp = rowSums(agg1.tmp)
#agg1.tmp = agg1.tmp/tmp
agg1.tmp$customer_id = agg1.tmp.id
agg1.code.col = merge(agg1.code.col,agg1.tmp,by='customer_id')
x=colSums(agg1.code.col[,-c(1)])
rm(agg1.tmp)
### build wday by customer_id
agg1.wday =
ddply(trs,.(customer_id,wday),summarise,
nn = length(wday),
ss = sum(amount),
mm = mean(amount))
agg1.wday$col = paste("wd","m",as.character(agg1.wday$wday),sep="_");
agg1.wday.col = spread(agg1.wday[,c(1,5,6)],col,mm,fill=0)
agg1.wday$col = paste("wd","n",as.character(agg1.wday$wday),sep="_");
agg1.tmp = spread(agg1.wday[,c(1,3,6)],col,nn,fill=0)
agg1.tmp = merge(agg1.tmp,subset(agg1.cust,select=c("customer_id","nn","durDays",'lenDays')))
agg1.tmp.id = agg1.tmp$customer_id
agg1.tmp = agg1.tmp/agg1.tmp$durDays
agg1.tmp$customer_id = agg1.tmp.id
agg1.tmp$nn = NULL
agg1.tmp$durDays = NULL
agg1.tmp$lenDays = NULL
agg1.wday.col = merge(agg1.wday.col,agg1.tmp,by='customer_id')
x=colSums(agg1.wday.col[,-c(1)])
rm(agg1.tmp)
### build H3 by customer_id
agg1.H3 =
ddply(trs,.(customer_id,H3),summarise,
nn = length(H3),
ss = sum(amount),
mm = mean(amount))
agg1.H3$col = paste("h3","m",as.character(agg1.H3$H3),sep="_");
agg1.H3.col = spread(agg1.H3[,c(1,5,6)],col,mm,fill=0)
agg1.H3.col = data.frame(customer_id=agg1.H3.col$customer_id)
agg1.H3$col = paste("h3","n",as.character(agg1.H3$H3),sep="_");
agg1.tmp = spread(agg1.H3[,c(1,3,6)],col,nn,fill=0)
agg1.tmp = merge(agg1.tmp,subset(agg1.cust,select=c("customer_id","nn","durDays",'lenDays')))
agg1.tmp.id = agg1.tmp$customer_id
agg1.tmp = agg1.tmp/agg1.tmp$nn #lenDays #durDays
agg1.tmp$customer_id = agg1.tmp.id
agg1.tmp$nn = NULL
agg1.tmp$durDays = NULL
agg1.tmp$lenDays = NULL
agg1.H3.col = merge(agg1.H3.col,agg1.tmp,by='customer_id')
agg1.H3$col = paste("h3","s",as.character(agg1.H3$H3),sep="_");
agg1.tmp = spread(agg1.H3[,c(1,4,6)],col,ss,fill=0)
agg1.tmp = merge(agg1.tmp,subset(agg1.cust,select=c("customer_id","nn","durDays",'lenDays')))
agg1.tmp.id = agg1.tmp$customer_id
agg1.tmp = agg1.tmp/agg1.tmp$durDays #lenDays #durDays #nn #lenDays #durDays
agg1.tmp$customer_id = agg1.tmp.id
agg1.tmp$nn = NULL
agg1.tmp$durDays = NULL
agg1.tmp$lenDays = NULL
agg1.H3.col = merge(agg1.H3.col,agg1.tmp,by='customer_id')
x=colSums(agg1.H3.col[,-c(1)])
rm(agg1.tmp)
### build month by customer_id
require(plyr)
require(tidyr)
agg1.month =
ddply(trs,.(customer_id,month),summarise,
nn = length(month),
ss = sum(amount),
mm = mean(amount))
agg1.month$col = paste("mo","m",as.character(agg1.month$month),sep="_");
agg1.month.col = spread(agg1.month[,c(1,5,6)],col,mm,fill=0)
agg1.month$col = paste("mo","n",as.character(agg1.month$month),sep="_");
agg1.tmp = spread(agg1.month[,c(1,3,6)],col,nn,fill=0)
agg1.tmp = merge(agg1.tmp,subset(agg1.cust,select=c("customer_id","nn","durDays",'lenDays')))
agg1.tmp.id = agg1.tmp$customer_id
agg1.tmp = agg1.tmp/agg1.tmp$durDays
agg1.tmp$customer_id = agg1.tmp.id
agg1.tmp$nn = NULL
agg1.tmp$durDays = NULL
agg1.tmp$lenDays = NULL
agg1.month.col = merge(agg1.month.col,agg1.tmp,by='customer_id')
agg1.month$col = paste("mo","s",as.character(agg1.month$month),sep="_");
agg1.tmp = spread(agg1.month[,c(1,4,6)],col,ss,fill=0)
agg1.tmp = merge(agg1.tmp,subset(agg1.cust,select=c("customer_id","nn","durDays",'lenDays')))
agg1.tmp.id = agg1.tmp$customer_id
agg1.tmp = agg1.tmp/agg1.tmp$durDays
agg1.tmp$customer_id = agg1.tmp.id
agg1.tmp$nn = NULL
agg1.tmp$durDays = NULL
agg1.tmp$lenDays = NULL
agg1.month.col = merge(agg1.month.col,agg1.tmp,by='customer_id')
x=colSums(agg1.month.col[,-c(1)])
rm(agg1.tmp)
### build mday by customer_id
require(plyr)
require(tidyr)
agg1.mday =
ddply(trs,.(customer_id,mday),summarise,
nn = length(mday),
ss = sum(amount),
mm = mean(amount))
agg1.mday$col = paste("md","m",as.character(agg1.mday$mday),sep="_");
agg1.mday.col = spread(agg1.mday[,c(1,5,6)],col,mm,fill=0)
if (FALSE) {
agg1.mday$col = paste("md","n",as.character(agg1.mday$mday),sep="_");
agg1.tmp = spread(agg1.mday[,c(1,3,6)],col,nn,fill=0)
agg1.tmp = merge(agg1.tmp,subset(agg1.cust,select=c("customer_id","nn","durDays",'lenDays')))
agg1.tmp.id = agg1.tmp$customer_id
agg1.tmp = agg1.tmp/agg1.tmp$durDays
agg1.tmp$customer_id = agg1.tmp.id
agg1.tmp$nn = NULL
agg1.tmp$durDays = NULL
agg1.tmp$lenDays = NULL
agg1.mday.col = merge(agg1.mday.col,agg1.tmp,by='customer_id')
}
agg1.mday$col = paste("md","s",as.character(agg1.mday$mday),sep="_");
agg1.tmp = spread(agg1.mday[,c(1,4,6)],col,ss,fill=0)
agg1.tmp = merge(agg1.tmp,subset(agg1.cust,select=c("customer_id","nn","durDays",'lenDays')))
agg1.tmp.id = agg1.tmp$customer_id
agg1.tmp = agg1.tmp/agg1.tmp$durDays
agg1.tmp$customer_id = agg1.tmp.id
agg1.tmp$nn = NULL
agg1.tmp$durDays = NULL
agg1.tmp$lenDays = NULL
agg1.mday.col = merge(agg1.mday.col,agg1.tmp,by='customer_id')
x=colSums(agg1.mday.col[,-c(1)])
rm(agg1.tmp)
### build term_id by customer_id (440339 unique term_id ?)
if (FALSE) {
require(plyr)
require(tidyr)
trs.sex = merge(trs,gnd,by='customer_id',all.x=TRUE)
agg1.tmp =
ddply(trs.sex,.(customer_id,term_id),summarise,
pr = mean(gender,na.rm = TRUE),
pr.na = mean(gender))
agg1.tmp.term = table(agg1.tmp$term_id[is.na(agg1.tmp$pr.na)])
agg1.tmp.term = names(agg1.tmp.term[agg1.tmp.term>1])
agg1.tmp.term = data.frame(term_id=agg1.tmp.term,stringsAsFactors = FALSE)
agg1.term =
ddply(trs.sex,.(customer_id,term_id),summarise,
nn = length(amount),
ss = sum(amount),
mm = mean(amount))
agg1.term.1 = merge(agg1.tmp.term,agg1.term,by='term_id')
agg1.term.1$term_id[agg1.term.1$term_id==''] = 'boba1234'
agg1.term = agg1.term.1
rm(trs.sex,agg1.term.1,agg1.tmp,agg1.tmp.term)
agg1.term$col = paste("trt","n",as.character(agg1.term$term_id),sep="_");
agg1.term.col = spread(agg1.term[,c(2,3,6)],col,nn,fill=0)
agg1.term$col = paste("trt","s",as.character(agg1.term$term_id),sep="_");
agg1.tmp = spread(agg1.term[,c(1,4,6)],col,ss,fill=0)
agg1.term.col = merge(agg1.term.col,agg1.tmp,by='customer_id')
agg1.term$col = paste("trt","m",as.character(agg1.term$term_id),sep="_");
agg1.tmp = spread(agg1.term[,c(1,5,6)],col,mm,fill=0)
agg1.term.col = merge(agg1.term.col,agg1.tmp,by='customer_id')
x=colSums(agg1.term.col[,-c(1)])
rm(agg1.tmp)
}
###----------------------------------------------------------
agg1.tmp1 = merge(agg1.code.col,agg1.wday.col,by='customer_id')
agg1.tmp1 = merge(agg1.tmp1,agg1.month.col,by='customer_id')
agg1.tmp1 = merge(agg1.tmp1,agg1.mday.col, by='customer_id')
agg1.tmp1 = merge(agg1.tmp1,agg1.H3.col, by='customer_id')
agg1.tmp = merge(agg1.cust,agg1.tmp1,by='customer_id')
#agg1.tmp = merge(agg1.tmp, agg1.type.col,by='customer_id')
#x=grep("(_n_)|(_m_)",names(agg1.tmp))
#agg1.cor = (cor(subset(agg1.tmp,!is.na(gender))))['gender',]
#agg1.cor = agg1.cor[!is.na(agg1.cor)]
#agg1.cor.mt = agg1.cor[grep("mt_",names(agg1.cor))]
#agg1.cor.not = agg1.cor[grep("mt_",names(agg1.cor),invert = TRUE)]
#agg1.cor = colSums(agg1.tmp)
agg1.cor = agg1.tmp[1,]
#xy = c(agg1.cor.not,agg1.cor.mt[abs(agg1.cor.mt)>0.0005])
xy = agg1.cor
agg1.tmp1 = agg1.tmp[,unique(c('customer_id','gender',names(xy)))]
#agg1.tmp1$durTimes = NULL
### ---------------------------------------------------------
### using xgb boosting
###
require(xgboost)
param <- list( objective = "binary:logistic",
booster = "gbtree",
colsample_bytree = 0.1, #0.5, # 0.5, #0.5, # 0.4,
# subsample = 0.9,
# eval_metric = evalAuc,
eval_metric = "auc",
#tree_method = "exact",
gamma = 7, #1.5, #0.0022, #2.25, #2.25, # 1, # 2.25, #0.05,
#min_child_weight = 8, #6, #7, #10, #8, #5,
#subsample = 0.6, #0.8,
silent = 0)
xy <- grep("customer_id|gender",names(agg1.tmp1))
agg1.tmp2 <- agg1.tmp1[!is.na(agg1.tmp1$gender),-xy]
dYtrain <- as.matrix(agg1.tmp2)
dYlabel <- agg1.tmp1$gender[!is.na(agg1.tmp1$gender)]
agg1.tmp2 <- agg1.tmp1[is.na(agg1.tmp1$gender),-xy]
agg1.tmp3 <- agg1.tmp1$customer_id[is.na(agg1.tmp1$gender)]
dYtest <- as.matrix(agg1.tmp2)
tmp.matrix <- xgb.DMatrix(dYtrain,label = dYlabel);
eta <- 0.03 # 0.02 # 0.2 #0.05 # 0.02 #0.1 #0.05
nnfold <- 5
set.seed(12341234)
history = xgb.cv(tmp.matrix,
nfold = nnfold, #10, #5, #8, #25, # 10,
#folds = folds,
eta=eta,
#max_depth=max_depth,
params =param,
#nrounds = 1000,
nrounds = ifelse(eta<0.035,4000,1000),
# metrics = "auc",
maximize = TRUE, #maxima,
stratified = TRUE,
prediction=TRUE,
early.stop.round = ifelse(eta<0.035,250, 100),
print.every.n = 25);
# ------------ inspect the CV history and select the best-scoring iteration
if (!is.null(history$pred)) {
history.pred = history$pred
history = history$dt
}
max(history$test.auc.mean);
h_max=which.max(history$test.auc.mean);
print(c(h_max,"-->",history$test.auc.mean[h_max],history$test.auc.std[h_max],history$train.auc.mean[h_max],history$train.auc.std[h_max]));
plot(history$test.auc.mean)
plot(history$test.auc.mean[history$test.auc.mean>0.875])
#plot(history$test.auc.std[history$test.auc.std<0.015])
#---------------------------------------------------------
##?set.seed(1234)
bst = xgb.train (
params =param,
tmp.matrix,
eta=eta,
nfold = nnfold,
#max_depth=max_depth,
nrounds = h_max+700, # 1500, # ifelse(eta<0.035,3000,800),
verbose=1,
print.every.n = 25,
#watchlist=list(eval = dtest, train = tmp.matrix),
#watchlist=list(eval = tmp.matrix),
watchlist=list(train = tmp.matrix),
metrics = "auc",
stratified = TRUE,
early.stop.round = ifelse(eta<0.035,250, 100),
maximize = TRUE)
pre.train = predict(bst,tmp.matrix);
pre.test = predict(bst,dYtest);
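# Illustrative check (not in the original script): feature importance of the
# fitted booster; xgb.importance() ships with the xgboost package.
imp <- xgb.importance(feature_names = colnames(dYtrain), model = bst)
head(imp, 20)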
#pre.test = agg.customer.2$pr.term
#---------------------------------------------------------
#agg.customer.glm=glm(gender~.-customer_id,data=agg.customer.1,family='binomial')
#summary(agg.customer.glm)
#pre.train = predict(agg.customer.glm,type='response')
hist(pre.train,breaks = 100)
colAUC(pre.train,agg1.tmp1$gender[!is.na(agg1.tmp1$gender)])
#pre.test = predict(agg.customer.glm,type='response',newdata = agg.customer.2)
hist(pre.test,breaks = 100)
#--------------------------------------------------------
xygg = agg1.tmp3
xypp = pre.test
plot(sort(xypp))
xy1res = list('customer_id'=xygg,
"gender"=sprintf("%12.10f",xypp))
nStep = strftime(Sys.time(),"%Y%m%d-%H%M%S")
outfile = paste("./Result/task1-",nStep,'.csv',sep='')
write.csv(xy1res,file=outfile,quote=FALSE,row.names=FALSE)
#--------------------------------------------------------
#--------------------------------------------------------
#--------------------------------------------------------
## ---- message=FALSE, warning=FALSE---------------------------------------
library(dplyr)
library(ggplot2)
library(infer)
# Clean data
mtcars <- mtcars %>%
as_tibble() %>%
mutate(am = factor(am))
# Observed test statistic
obs_stat <- mtcars %>%
group_by(am) %>%
summarize(mean = mean(mpg)) %>%
summarize(obs_stat = diff(mean)) %>%
pull(obs_stat)
# Simulate null distribution of two-sample difference in means:
null_distribution <- mtcars %>%
specify(mpg ~ am) %>%
hypothesize(null = "independence") %>%
generate(reps = 1000, type = "permute") %>%
calculate(stat = "diff in means", order = c("1", "0"))
# Visualize:
plot <- null_distribution %>%
visualize()
plot +
geom_vline(xintercept = obs_stat, col = "red", size = 1)
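# The simulated two-sided p-value can be read off the null distribution directly
# (illustrative check; `stat` is the column produced by calculate() above).
mean(abs(null_distribution$stat) >= abs(obs_stat))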
## ----message=FALSE, warning=FALSE----------------------------------------
library(dplyr)
library(ggplot2)
library(mosaic)
library(knitr)
library(nycflights13)
library(ggplot2movies)
library(broom)
## ----message=FALSE, warning=FALSE, echo=FALSE----------------------------
# Packages needed internally, but not in text.
## ------------------------------------------------------------------------
bos_sfo <- flights %>%
na.omit() %>%
filter(dest %in% c("BOS", "SFO")) %>%
group_by(dest) %>%
sample_n(100)
## ------------------------------------------------------------------------
bos_sfo_summary <- bos_sfo %>% group_by(dest) %>%
summarize(mean_time = mean(air_time),
sd_time = sd(air_time))
kable(bos_sfo_summary)
## ------------------------------------------------------------------------
ggplot(data = bos_sfo, mapping = aes(x = dest, y = air_time)) +
geom_boxplot()
## ----echo=FALSE----------------------------------------------------------
choice <- c(rep("Correct", 3), "Incorrect", rep("Correct", 6))
kable(choice)
## ----sample-table, echo=FALSE--------------------------------------------
set.seed(2017)
sim1 <- resample(x = c("Correct", "Incorrect"), size = 10, prob = c(0.5, 0.5))
sim2 <- resample(x = c("Correct", "Incorrect"), size = 10, prob = c(0.5, 0.5))
sim3 <- resample(x = c("Correct", "Incorrect"), size = 10, prob = c(0.5, 0.5))
sims <- data.frame(sample1 = sim1, sample2 = sim2, sample3 = sim3)
kable(sims, row.names = TRUE, caption = 'A table of three sets of 10 coin flips')
## ----echo=FALSE----------------------------------------------------------
t1 <- sum(sim1 == "Correct")
t2 <- sum(sim2 == "Correct")
t3 <- sum(sim3 == "Correct")
## ------------------------------------------------------------------------
simGuesses <- do(5000) * rflip(10)
ggplot(data = simGuesses, aes(x = factor(heads))) +
geom_bar()
## ------------------------------------------------------------------------
pvalue_tea <- simGuesses %>%
filter(heads >= 9) %>%
nrow() / nrow(simGuesses)
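# Cross-check (illustrative): the exact binomial probability of 9 or more heads
# in 10 fair flips, which pvalue_tea approximates by simulation.
pbinom(8, size = 10, prob = 0.5, lower.tail = FALSE)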
## ----fig.cap="Barplot of heads with p-value highlighted"-----------------
ggplot(data = simGuesses, aes(x = factor(heads), fill = (heads >= 9))) +
geom_bar() +
labs(x = "heads")
## ----message=FALSE, warning=FALSE----------------------------------------
(movies_trimmed <- movies %>% select(title, year, rating, Action, Romance))
## ------------------------------------------------------------------------
movies_trimmed <- movies_trimmed %>%
filter(!(Action == 1 & Romance == 1))
## ------------------------------------------------------------------------
movies_trimmed <- movies_trimmed %>%
mutate(genre = ifelse(Action == 1, "Action",
ifelse(Romance == 1, "Romance",
"Neither"))) %>%
filter(genre != "Neither") %>%
select(-Action, -Romance)
## ----fig.cap="Rating vs genre in the population"-------------------------
ggplot(data = movies_trimmed, aes(x = genre, y = rating)) +
geom_boxplot()
## ----movie-hist, warning=FALSE, fig.cap="Faceted histogram of genre vs rating"----
ggplot(data = movies_trimmed, mapping = aes(x = rating)) +
geom_histogram(binwidth = 1, color = "white", fill = "dodgerblue") +
facet_grid(genre ~ .)
## ------------------------------------------------------------------------
set.seed(2017)
movies_genre_sample <- movies_trimmed %>%
group_by(genre) %>%
sample_n(34) %>%
ungroup()
## ----fig.cap="Genre vs rating for our sample"----------------------------
ggplot(data = movies_genre_sample, aes(x = genre, y = rating)) +
geom_boxplot()
## ----warning=FALSE, fig.cap="Genre vs rating for our sample as faceted histogram"----
ggplot(data = movies_genre_sample, mapping = aes(x = rating)) +
geom_histogram(binwidth = 1, color = "white", fill = "dodgerblue") +
facet_grid(genre ~ .)
## ------------------------------------------------------------------------
summary_ratings <- movies_genre_sample %>%
group_by(genre) %>%
summarize(mean = mean(rating),
std_dev = sd(rating),
n = n())
summary_ratings %>% kable()
## ------------------------------------------------------------------------
mean_ratings <- movies_genre_sample %>%
group_by(genre) %>%
summarize(mean = mean(rating))
obs_diff <- diff(mean_ratings$mean)
## ----message=FALSE, warning=FALSE----------------------------------------
# (an earlier draft shuffled movies_trimmed instead of the sample)
shuffled_ratings <- movies_genre_sample %>%
mutate(genre = shuffle(genre)) %>%
group_by(genre) %>%
summarize(mean = mean(rating))
diff(shuffled_ratings$mean)
## ----include=FALSE-------------------------------------------------------
set.seed(2017)
if(!file.exists("rds/many_shuffles.rds")){
many_shuffles <- do(5000) *
(movies_genre_sample %>%
mutate(genre = shuffle(genre)) %>%
group_by(genre) %>%
summarize(mean = mean(rating))
)
saveRDS(object = many_shuffles, "rds/many_shuffles.rds")
} else {
many_shuffles <- readRDS("rds/many_shuffles.rds")
}
## ----eval=FALSE----------------------------------------------------------
## set.seed(2017)
## many_shuffles <- do(5000) *
## (movies_genre_sample %>%
## mutate(genre = shuffle(genre)) %>%
## group_by(genre) %>%
## summarize(mean = mean(rating))
## )
## ------------------------------------------------------------------------
rand_distn <- many_shuffles %>%
group_by(.index) %>%
summarize(diffmean = diff(mean))
head(rand_distn, 10)
## ----fig.cap="Simulated differences in means histogram"------------------
ggplot(data = rand_distn, aes(x = diffmean)) +
geom_histogram(color = "white", bins = 20)
## ----fig.cap="Shaded histogram to show p-value"--------------------------
ggplot(data = rand_distn, aes(x = diffmean, fill = (abs(diffmean) >= obs_diff))) +
geom_histogram(color = "white", bins = 20)
## ----fig.cap="Histogram with vertical lines corresponding to observed statistic"----
ggplot(data = rand_distn, aes(x = diffmean)) +
geom_histogram(color = "white", bins = 100) +
geom_vline(xintercept = obs_diff, color = "red") +
geom_vline(xintercept = -obs_diff, color = "red")
## ------------------------------------------------------------------------
(pvalue_movies <- rand_distn %>%
filter(abs(diffmean) >= obs_diff) %>%
nrow() / nrow(rand_distn))
## ----echo=FALSE----------------------------------------------------------
ggplot(data.frame(x = c(-4, 4)), aes(x)) + stat_function(fun = dnorm)
## ----fig.cap="Simulated differences in means histogram"------------------
ggplot(data = rand_distn, aes(x = diffmean)) +
geom_histogram(color = "white", bins = 20)
## ------------------------------------------------------------------------
kable(summary_ratings)
## ------------------------------------------------------------------------
s1 <- summary_ratings$std_dev[2]
s2 <- summary_ratings$std_dev[1]
n1 <- summary_ratings$n[2]
n2 <- summary_ratings$n[1]
## ------------------------------------------------------------------------
(denom_T <- sqrt( (s1^2 / n1) + (s2^2 / n2) ))
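# denom_T is the unpooled (Welch-style) two-sample standard error, sqrt(s1^2/n1 + s2^2/n2).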
## ---- fig.cap="Simulated T statistics histogram"-------------------------
rand_distn <- rand_distn %>%
mutate(t_stat = diffmean / denom_T)
ggplot(data = rand_distn, aes(x = t_stat)) +
geom_histogram(color = "white", bins = 20)
## ------------------------------------------------------------------------
ggplot(data = rand_distn, mapping = aes(x = t_stat)) +
geom_histogram(aes(y = ..density..), color = "white", binwidth = 0.3) +
stat_function(fun = dt,
args = list(df = min(n1 - 1, n2 - 1)),
color = "royalblue", size = 2)
## ------------------------------------------------------------------------
(t_obs <- obs_diff / denom_T)
## ------------------------------------------------------------------------
ggplot(data = rand_distn, mapping = aes(x = t_stat)) +
stat_function(fun = dt,
args = list(df = min(n1 - 1, n2 - 1)),
color = "royalblue", size = 2) +
geom_vline(xintercept = t_obs, color = "red") +
geom_vline(xintercept = -t_obs, color = "red")
## ------------------------------------------------------------------------
pt(t_obs, df = min(n1 - 1, n2 - 1), lower.tail = FALSE) +
pt(-t_obs, df = min(n1 - 1, n2 - 1), lower.tail = TRUE)
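## Editor's sketch (not in the original chunk): as a cross-check under the usual
## theory-based approach, the built-in Welch two-sample t test gives a comparable
## p-value directly from the sampled data used above.
t.test(rating ~ genre, data = movies_genre_sample)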
## ----warning=FALSE-------------------------------------------------------
# To ensure the random sample of 50 flights is the same for
# anyone using this code
set.seed(2017)
# Load Alaska data, deleting rows that have missing departure delay
# or arrival delay data
alaska_flights <- flights %>%
filter(carrier == "AS") %>%
filter(!is.na(dep_delay) & !is.na(arr_delay)) %>%
# Select 50 flights that don't have missing delay data
sample_n(50)
## ---- echo=FALSE---------------------------------------------------------
# USED INTERNALLY: Least squares line values, used for in-text output
delay_fit <- lm(formula = arr_delay ~ dep_delay, data = alaska_flights)
intercept <- tidy(delay_fit, conf.int=TRUE)$estimate[1] %>% round(3)
slope <- tidy(delay_fit, conf.int=TRUE)$estimate[2] %>% round(3)
CI_intercept <- c(tidy(delay_fit, conf.int=TRUE)$conf.low[1], tidy(delay_fit, conf.int=TRUE)$conf.high[1]) %>% round(3)
CI_slope <- c(tidy(delay_fit, conf.int=TRUE)$conf.low[2], tidy(delay_fit, conf.int=TRUE)$conf.high[2]) %>% round(3)
## ------------------------------------------------------------------------
delay_fit <- lm(formula = arr_delay ~ dep_delay, data = alaska_flights)
(b1_obs <- tidy(delay_fit)$estimate[2])
## ---- include=FALSE------------------------------------------------------
if(!file.exists("rds/rand_slope_distn.rds")){
rand_slope_distn <- do(5000) *
(lm(formula = arr_delay ~ shuffle(dep_delay), data = alaska_flights) %>%
coef())
saveRDS(object = rand_slope_distn, "rds/rand_slope_distn.rds")
} else {
rand_slope_distn <- readRDS("rds/rand_slope_distn.rds")
}
## ----many_shuffles_reg, eval=FALSE---------------------------------------
## rand_slope_distn <- do(5000) *
## (lm(formula = arr_delay ~ shuffle(dep_delay), data = alaska_flights) %>%
## coef())
## ------------------------------------------------------------------------
names(rand_slope_distn)
## ------------------------------------------------------------------------
ggplot(data = rand_slope_distn, mapping = aes(x = dep_delay)) +
geom_histogram(color = "white", bins = 20)
## ----fig.cap="Shaded histogram to show p-value"--------------------------
ggplot(data = rand_slope_distn, aes(x = dep_delay, fill = (dep_delay >= b1_obs))) +
geom_histogram(color = "white", bins = 20)
## ---- echo=FALSE---------------------------------------------------------
delay_fit <- lm(formula = arr_delay ~ dep_delay, data = alaska_flights)
tidy(delay_fit) %>%
kable()
## ----echo=FALSE----------------------------------------------------------
ggplot(data = alaska_flights,
mapping = aes(x = dep_delay, y = arr_delay)) +
geom_point() +
geom_smooth(method = "lm", se = FALSE, color = "red") +
annotate("point", x = 44, y = 7, color = "blue", size = 3) +
annotate("segment", x = 44, xend = 44, y = 7, yend = -14.155 + 1.218 * 44,
color = "blue", arrow = arrow(length = unit(0.03, "npc")))
## ------------------------------------------------------------------------
regression_points <- augment(delay_fit) %>%
select(arr_delay, dep_delay, .fitted, .resid)
regression_points %>%
head() %>%
kable()
## ----resid-histogram-----------------------------------------------------
ggplot(data = regression_points, mapping = aes(x = .resid)) +
geom_histogram(binwidth = 10, color = "white") +
geom_vline(xintercept = 0, color = "blue")
## ----resid-plot, fig.cap="Fitted versus Residuals plot"------------------
ggplot(data = regression_points, mapping = aes(x = .fitted, y = .resid)) +
geom_point() +
geom_abline(intercept = 0, slope = 0, color = "blue")
## ----qqplot1, fig.cap="QQ Plot of residuals"-----------------------------
ggplot(data = regression_points, mapping = aes(sample = .resid)) +
stat_qq()
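## Editor's sketch (not in the original chunk): a numeric companion to the QQ plot,
## run on the residuals extracted above; small p-values suggest non-normal residuals.
shapiro.test(regression_points$.resid)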
| /l09test/hypothesis-testing.R | no_license | wLoup/2018R | R | false | false | 12,760 | r |
context("site")
test_that("render_site", {
skip_on_cran()
# copy our demo site to a tempdir
site_dir <- tempfile()
dir.create(site_dir)
files <- c("_site.yml", "index.Rmd", "PageA.Rmd",
"PageB.rmd", "PageC.md", "styles.css",
"script.R", "docs.txt")
file.copy(file.path("site", files), site_dir, recursive = TRUE)
# render it
capture.output(render_site(site_dir))
# did the html files get rendered and the css get copied?
html_files <- c("index.html", "PageA.html", "PageB.html", "PageC.html")
html_files <- file.path(site_dir, "_site", html_files)
expect_true(all(file.exists(html_files)))
# moved directories
moved <- c("site_libs", "PageA_files")
expect_true(all(!file.exists(file.path(site_dir, moved))))
expect_true(all(file.exists(file.path(site_dir, "_site", moved))))
# respected includes
included <- "script.R"
expect_true(all(file.exists(file.path(site_dir, "_site", included))))
# respected excluded
excluded <- "docs.txt"
expect_true(all(!file.exists(file.path(site_dir, "_site", excluded))))
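  # Editor's addition (a sketch, not part of the original test): remove the temp
  # site directory so repeated local runs start from a clean slate.
  unlink(site_dir, recursive = TRUE)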
})
| /packrat/lib/x86_64-w64-mingw32/3.4.3/rmarkdown/tests/testthat/test-site.R | permissive | UBC-MDS/Karl | R | false | false | 1,085 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/homomorpheR.R
\docType{package}
\name{homomorpheR}
\alias{homomorpheR}
\alias{homomorpheR-package}
\title{homomorpheR: Homomorphic computations in R}
\description{
\code{homomorpheR} is a start at a rudimentary package for
homomorphic computations in R. The goal is to collect homomorphic
encryption schemes in this package for privacy-preserving
distributed computations; for example, applications of the sort
implemented in package \code{distcomp}.
}
\details{
At the moment, only one scheme is implemented, the Paillier
scheme. The current implementation makes no pretense at efficiency
and also uses direct translations of other implementations,
particularly the one in Javascript.
For a quick overview of the features, read the
\code{\link{homomorpheR}} vignette by running
\code{vignette("homomorpheR")}.
}
\examples{
keys <- PaillierKeyPair$new(1024) # Generate new key pair
encryptAndDecrypt <- function(x) keys$getPrivateKey()$decrypt(keys$pubkey$encrypt(x))
a <- gmp::as.bigz(1273849)
identical(a + 10L, encryptAndDecrypt(a+10L))
x <- lapply(1:100, function(x) random.bigz(nBits = 512))
edx <- lapply(x, encryptAndDecrypt)
identical(x, edx)
}
\references{
\url{https://en.wikipedia.org/wiki/Homomorphic_encryption}
\url{https://mhe.github.io/jspaillier/}
}
| /man/homomorpheR.Rd | no_license | cran/homomorpheR | R | false | true | 1,349 | rd |
#!/usr/bin/env Rscript
#library('getopt')
#library(devtools)
library(data.table)
args = commandArgs(trailingOnly=TRUE)
source(args[1]) #TODO replace by library(regionalGAM) if available as official package from bioconda
input = fread(args[2], header = TRUE)
# convert to a data.frame (the flight_curve function doesn't accept the data.table format returned by fread)
input <- data.frame(input)
dataset1 <- input[ , c("SPECIES", "SITE", "YEAR", "MONTH", "DAY", "COUNT")]
pheno <- flight_curve(dataset1, MinVisit = args[3], MinOccur = args[4])
write.table(pheno, file="pheno", row.names=FALSE, sep=" ")
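# Editor's sketch (not part of the original script): expected command-line usage,
# with hypothetical file names, following the args[] indices read above --
#   Rscript flight_curve.R regionalGAM_functions.R counts.csv 3 2
# i.e. args[1] = regionalGAM source file, args[2] = count data, args[3] = MinVisit, args[4] = MinOccur.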
| /tools/regionalGAM/scriptsR_Galaxy/flight_curve.R | permissive | 65MO/Galaxy-E | R | false | false | 585 | r |
## Name: Elizabeth Lee
## Date: 6/6/16
## Function: functions to export INLA results as data files and diagnostic figures -- specific to county scale
## Filenames: reference_data/USstate_shapefiles/gz_2010_us_040_00_500k
## Data Source: shapefile from US Census 2010 - https://www.census.gov/geo/maps-data/data/cbf/cbf_state.html
## Notes:
##
## useful commands:
## install.packages("pkg", dependencies=TRUE, lib="/usr/local/lib/R/site-library") # in sudo R
## update.packages(lib.loc = "/usr/local/lib/R/site-library")
require(RColorBrewer); require(ggplot2); require(maps); require(scales); require(classInt); require(data.table)
#### functions for diagnostic plots ################################
plot_countyChoro <- function(exportPath, pltDat, pltVarTxt, code, zeroes){
# draw state choropleth with tiers or gradient colors and export to file
print(match.call())
countyMap <- map_data("county")
data(county.fips)
# plot formatting
h <- 5; w <- 8; dp <- 300
# merge county data
polynameSplit <- tstrsplit(county.fips$polyname, ",")
ctyMap <- tbl_df(county.fips) %>%
mutate(fips = substr.Right(paste0("0", fips), 5)) %>%
mutate(region = polynameSplit[[1]]) %>%
mutate(subregion = polynameSplit[[2]]) %>%
full_join(countyMap, by = c("region", "subregion")) %>%
filter(!is.na(polyname) & !is.na(long)) %>%
rename(state = region, county = subregion) %>%
rename(region = fips) %>%
select(-polyname)
# tier choropleth
if (code == 'tier'){
# process data for tiers
# 7/21/16: natural breaks w/ classIntervals
pltDat <- pltDat %>% rename_(pltVar = pltVarTxt)
# create natural break intervals with jenks algorithm
intervals <- classIntervals(pltDat$pltVar[!is.na(pltDat$pltVar)], n = 5, style = "jenks")
if (zeroes){
# 0s have their own color
if (0 %in% intervals$brks){
breakList <- intervals$brks
} else {
breakList <- c(0, intervals$brks)
}
breaks <- sort(breakList) # 4/7/17 rm extra 0 from breakList
} else{
breaks <- c(intervals$brks)
}
breaksRound <- round(breaks, 1)
breakLabels <- matrix(1:(length(breaksRound)-1))
for (i in 1:length(breakLabels)){
# create legend labels
breakLabels[i] <- paste0("(",as.character(breaksRound[i]), "-", as.character(breaksRound[i+1]), "]")}
# reverse order of break labels so zeros are green and larger values are red
breakLabels <- rev(breakLabels)
pltDat2 <- pltDat %>%
mutate(pltVarBin = factor(.bincode(pltVar, breaks, right = TRUE, include.lowest = TRUE))) %>%
mutate(pltVarBin = factor(pltVarBin, levels = rev(levels(pltVarBin))))
choro <- ggplot() +
geom_map(data = ctyMap, map = ctyMap, aes(x = long, y = lat, map_id = region)) +
geom_map(data = pltDat2, map = ctyMap, aes(fill = pltVarBin, map_id = fips), color = "grey25", size = 0.15) +
scale_fill_brewer(name = pltVarTxt, palette = "RdYlGn", label = breakLabels, na.value = "grey60") +
expand_limits(x = ctyMap$long, y = ctyMap$lat) +
theme_minimal() +
theme(text = element_text(size = 18), axis.ticks = element_blank(), axis.text = element_blank(), axis.title = element_blank(), panel.grid = element_blank(), legend.position = "bottom")
}
# gradient choropleth
else if (code == 'gradient'){
# data for gradient has minimal processing
pltDat <- pltDat %>% rename_(pltVar = pltVarTxt)
choro <- ggplot() +
geom_map(data = ctyMap, map = ctyMap, aes(x = long, y = lat, map_id=region)) +
geom_map(data = pltDat, map = ctyMap, aes(fill = pltVar, map_id = fips), color = "grey25", size = 0.15) +
scale_fill_continuous(name = pltVarTxt, low = "#f0fff0", high = "#006400") +
expand_limits(x = ctyMap$long, y = ctyMap$lat) +
theme_minimal() +
theme(text = element_text(size = 18), axis.ticks = element_blank(), axis.text = element_blank(), axis.title = element_blank(), panel.grid = element_blank(), legend.position = "bottom")
}
ggsave(exportPath, choro, height = h, width = w, dpi = dp)
}
################################
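# Editor's usage sketch (hypothetical data and path, not part of the original file):
# plot_countyChoro(exportPath = "graphs/burden_map.png",
#                  pltDat = data.frame(fips = c("01001", "01003"), burden = c(0, 2.5)),
#                  pltVarTxt = "burden", code = "tier", zeroes = TRUE)
# Note: the function also relies on a substr.Right() helper sourced elsewhere in the project.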
| /programs/source_export_inlaData_cty.R | no_license | Qasim-1develop/flu-SDI-dzBurden-drivers | R | false | false | 4,132 | r |
###############################################################################
#
# CoEvalModel.R: Routines to run essential network analysis and visualizations
# in R.
# Author: Kemal Akkoyun
# Date: Jun 5, 2013
#
#
# Created by Kemal Akkoyun on 6/5/13.
# Copyright (c) 2013 Kemal Akkoyun. All rights reserved.
#
# This file is part of CoEvalModel.
# CoEvalModel is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Simple DHT is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with CoEvalModel. If not, see <http://www.gnu.org/licenses/>.
#
###############################################################################
# install.packages("RSiena")
library(RSiena)
# Data Creation
###############################################################################
# Obtain relational leisure data from given data files.
friend.data.w1 <- as.matrix(read.csv("Input/Leisure2010.csv", sep = "\t", header = FALSE))
friend.data.w2 <- as.matrix(read.csv("Input/Leisure2011.csv", sep = "\t", header = FALSE))
friend.data.w3 <- as.matrix(read.csv("Input/Leisure2012.csv", sep = "\t", header = FALSE))
# Obtain attribute data from given data files.
gpa <- as.matrix(read.csv("Input/GPA.csv", header = FALSE))
home.town <- as.matrix(read.csv("Input/HomeTown.csv", header = FALSE))
gender <- as.matrix(read.csv("Input/Gender.csv", header = FALSE))
# Network Initializzaton
###############################################################################
# Initialize friendship network.
friendship <- sienaNet(
array(c(friend.data.w1, friend.data.w2, friend.data.w3),
dim = c(16, 16, 3)))
# Initialize gpa behaviour as a network.
gpa.performance <- sienaNet(gpa, type = "behavior")
# From the help request
# ?sienaDataCreate
# We see that these can be of five kinds:
# coCovar : Constant actor covariates
# varCovar : Time-varying actor covariates
# coDyadCovar : Constant dyadic covariates
# varDyadCovar : Time-varying dyadic covariates
# compositionChange : Composition change indicators
# So Constant actor variates.
gender1 <- coCovar(gender[,1])
hometown1 <- coCovar(home.town[,1])
# and Time-varying actor covariates
gpa.change <- varCovar(gpa)
# Creating Model : Example | Gender/GPA/Friendship
###############################################################################
exampleCoEvolutionData <- sienaDataCreate(friendship, gender1, gpa.performance)
exampleCoEvolutionEff <- getEffects( exampleCoEvolutionData )
exampleCoEvolutionEff <- includeEffects(exampleCoEvolutionEff,
name = "gpa.performance", indeg, outdeg,
interaction1 = "friendship", include = FALSE)
exampleCoEvAlgorithm <- sienaModelCreate(projname = 'Output/GenderGPAFriendship')
ans0 <- siena07(exampleCoEvAlgorithm,
data = exampleCoEvolutionData,
effects = exampleCoEvolutionEff,
batch = TRUE)
# Creating Model : Hometown/Friendship
###############################################################################
hometownCoEvolutionData <- sienaDataCreate(friendship, hometown1)
hometownCoEvolutionEff <- getEffects(hometownCoEvolutionData)
hometownEvolutionEff <- includeEffects(hometownCoEvolutionEff,
name = "hometown.friendship", indeg, outdeg,
interaction1 = "friendship", include = FALSE)
hometownCoEvAlgorithm <- sienaModelCreate(projname = 'Output/HometownFriendship')
ans1 <- siena07(hometownCoEvAlgorithm,
data = hometownCoEvolutionData,
effects = hometownCoEvolutionEff,
batch = TRUE)
# Creating Model : GPA/Friendship
###############################################################################
gpaCoEvolutionData <- sienaDataCreate(friendship, gpa.performance)
gpaCoEvolutionEff <- getEffects(gpaCoEvolutionData)
gpaCoEvolutionEff <- includeEffects(gpaCoEvolutionEff,
name = "gpa.friendship", indeg, outdeg,
interaction1 = "friendship", include = FALSE)
gpaCoEvAlgorithm <- sienaModelCreate(projname = 'Output/GPAFriendship')
ans2 <- siena07(gpaCoEvAlgorithm,
data = gpaCoEvolutionData,
effects = gpaCoEvolutionEff,
batch = TRUE)
# Creating Model : Friendship/Hometown/GPA
###############################################################################
hometown2CoEvolutionData <- sienaDataCreate(friendship, hometown1, gpa.performance)
hometown2CoEvolutionEff <- getEffects(hometown2CoEvolutionData)
hometown2CoEvolutionEff <- includeEffects(hometown2CoEvolutionEff,
name = "hometown.gpa", indeg, outdeg,
interaction1 = "friendship", include = FALSE)
hometown2CoEvAlgorithm <- sienaModelCreate(projname = 'Output/FriendshipHometownGPA')
ans3 <- siena07(hometown2CoEvAlgorithm,
data = hometown2CoEvolutionData,
effects = hometown2CoEvolutionEff,
batch = TRUE)
summary(ans0)
summary(ans1)
summary(ans2)
summary(ans3)
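# Editor's sketch (not in the original script): besides summary(), the sienaFit
# objects returned by siena07() expose the raw estimates and standard errors, e.g.
# ans0$theta  # parameter estimates
# ans0$se     # estimated standard errors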
| /Final Project/CoEvalModel.R | no_license | kakkoyun/cmpe454 | R | false | false | 5,432 | r |
testlist <- list(Rext = numeric(0), Rs = numeric(0), Z = numeric(0), alpha = numeric(0), atmp = numeric(0), relh = numeric(0), temp = c(5.43472210425371e-323, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), u = numeric(0))
result <- do.call(meteor:::E_Penman,testlist)
str(result) | /meteor/inst/testfiles/E_Penman/libFuzzer_E_Penman/E_Penman_valgrind_files/1612738165-test.R | no_license | akhikolla/updatedatatype-list3 | R | false | false | 556 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/function_organize.R
\name{organize}
\alias{organize}
\title{Organize project activities}
\usage{
organize(prec1and2 = matrix(0), prec3and4 = matrix(0))
}
\arguments{
\item{prec1and2}{A matrix indicating the order of precedence type 1 and 2 between the activities (Default=matrix(0)). If value \eqn{(i,j)=1} then activity \eqn{i} precedes type \eqn{1} to \eqn{j}, and if \eqn{(i,j)=2} then activity \eqn{i} precedes type \eqn{2} to \eqn{j}. Cycles cannot exist in a project, i.e. if an activity \eqn{i} precedes \eqn{j} then \eqn{j} cannot precede \eqn{i}.}
\item{prec3and4}{A matrix indicating the order of precedence type 3 and 4 between the activities (Default=matrix(0)). If value \eqn{(i,j)=3} then activity \eqn{i} precedes type \eqn{3} to \eqn{j}, and if \eqn{(i,j)=4} then activity \eqn{i} precedes type \eqn{4} to \eqn{j}. Cycles cannot exist in a project, i.e. if an activity \eqn{i} precedes \eqn{j} then \eqn{j} cannot precede \eqn{i}.}
}
\value{
A list containing:
\itemize{
\item{Precedence: }{ ordered precedence matrix.}
\item{Order: }{ new activities values.}
}
}
\description{
This function organizes the activities of a project, in such a way that if i precedes j then i is less strict than j.
}
\examples{
prec1and2<-matrix(c(0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0),nrow=5,ncol=5,byrow=TRUE)
organize(prec1and2)
}
| /man/organize.Rd | no_license | cran/ProjectManagement | R | false | true | 1,425 | rd |
#clear environment
rm(list = ls())
#load packages
library(psych)
#load dataset
x <- read.csv("~/RStats/Magic HME data - clean.csv", header = TRUE)
#Race Scale construction
#binary - white/nonwhite
x$race_ethnicity
race <- as.numeric(x$race_ethnicity == "White/Caucasian")
race
#Conspiracy scale construction
# max = strongly agree
l2num.agree5 <- function(y) { #create function to convert 5-point agree/disagree scale
y <- factor(y,
levels = c("Strongly disagree",
"Somewhat disagree",
"Neither agree nor disagree",
"Somewhat agree",
"Strongly agree"))
as.numeric(y)
}
l2num.agree5.adjust <- function(y) { #create function to convert 5-point agree/disagree scale
y <- factor(y,
levels = c("Strongly disagree",
"Somewhat disagree",
"Neither agree nor disagree",
"Somewhat agree",
"Strongly\nagree"))
as.numeric(y)
}
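# Illustrative check (editor's addition, not in the original script): both helpers
# map the five response labels onto 1..5, e.g.
# l2num.agree5(c("Strongly disagree", "Neither agree nor disagree", "Strongly agree"))
# returns c(1, 3, 5); labels not in the level list become NA.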
consp1 <- l2num.agree5.adjust(x$consp1)
consp2 <- l2num.agree5.adjust(x$consp2)
consp3 <- l2num.agree5.adjust(x$consp3)
consp4 <- l2num.agree5.adjust(x$consp4)
consp.scale <- data.frame(consp1,
consp2,
consp3,
consp4)
alpha(consp.scale)
consp.scale <- ((consp1 + consp2 + consp3 + consp4)/4)
describe(consp.scale)
#Paranormal beliefs scale construction
# max = yes
# I don't think I should have scaled this variable,
# since the responses don't really form a realistic continuum.
# Is there a way to specify the levels as c(-1, 0, 1)? (see the editor's sketch after this function)
l2num.yesno3 <- function(y) {
y <- factor(y,
levels = c("No",
"Not sure",
"Yes"))
as.numeric(y)
}
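# Editor's sketch answering the question above (not used elsewhere in the script):
# a -1/0/1 coding can be had by shifting the 1..3 codes down by two, e.g.
# l2num.yesno3.centered <- function(y) l2num.yesno3(y) - 2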
para1 <- l2num.yesno3(x$para1)
para2 <- l2num.yesno3(x$para2)
para3 <- l2num.yesno3(x$para3)
para4 <- l2num.yesno3(x$para4)
para.scale <- data.frame(para1,
para2,
para3,
para4)
alpha(para.scale)
para.scale <- ((para1 + para2 + para3 + para4)/4)
para.scale
#Apprehension Scale construction
l2num.freq5 <- function(y) {
y <- factor(y,
levels = c("Never",
"Seldom",
"Sometimes",
"Usually",
"Always"))
as.numeric(y)
}
#These first two apprehension Q's were presented correctly in the survey.
#applied the bigger function here, that includes both responses in 1-2 and 3-8.
l2num.freq5.fail <- function(y) {
y <- factor(y,
levels = c("Never",
"Seldom",
"Sometimes",
"Usually",
"Most of the time",
"Always"))
as.numeric(y)
}
appr1 <- l2num.freq5.fail(x$appr1_seatbelt)
appr2 <- l2num.freq5.fail(x$appr2_wash)
appr3 <- l2num.freq5.fail(x$appr3_luggage)
appr4 <- l2num.freq5.fail(x$appr4_shred)
appr5 <- l2num.freq5.fail(x$appr5_lockcar)
appr6 <- l2num.freq5.fail(x$appr6_lockhouse)
appr7 <- l2num.freq5.fail(x$appr7_cancer)
appr8 <- l2num.freq5.fail(x$appr7_cancer) # NOTE: reuses the appr7_cancer column; likely a copy-paste slip -- verify which item appr8 should draw from
appr.scale <- data.frame(appr1, appr2)
alpha(appr.scale)
summary(appr.scale)
appr.scale.fail <- data.frame(appr1, appr2, appr3, appr4, appr5, appr6, appr7, appr8)
alpha(appr.scale.fail)
summary(appr.scale.fail)
appr.scale.fail <- ((appr1 + appr2 + appr3 + appr4 + appr5 + appr6 + appr7 + appr8)/8)
plot(appr.scale.fail)
# Election news attention scale construction
# max = a great deal
l2num.amount5 <- function(y) {
y <- factor(y,
levels = c("None at all",
"A little",
"A moderate amount",
"A lot",
"A great deal"))
as.numeric(y)
}
election.news.attn <- l2num.amount5(x$election_news_attention)
describe(election.news.attn)
#Election interest scale construction
# max = very interested
l2num.interest4 <- function(y) {
y <- factor(y,
levels = c("Not at all interested",
"Somewhat interested",
"Moderately interested",
"Very interested"))
as.numeric(y)
}
election.interest <- l2num.interest4(x$election_interest)
describe(election.interest)
#Religious services scale construction
# max = attends more often
l2num.freq6 <- function(y){
y <- factor(y,
levels = c("Never",
"A few times a year",
"Once or twice a month",
"Almost every week",
"Every week",
"More than once a week"))
as.numeric(y)
}
rel.services <- l2num.freq6(x$religion_services)
describe(rel.services)
#Big Five scale construction
# max = strongly agree
#Normal Items
bf.agree2 <- l2num.agree5.adjust(x$BF_agree2)
bf.agree4 <- l2num.agree5.adjust(x$BF_agree4)
bf.agree5 <- l2num.agree5.adjust(x$BF_agree5)
bf.agree7 <- l2num.agree5.adjust(x$BF_agree7)
bf.agree9 <- l2num.agree5(x$BF_agree9)
bf.consc1 <- l2num.agree5.adjust(x$BF_consc1)
bf.consc3 <- l2num.agree5.adjust(x$BF_consc3)
bf.consc6 <- l2num.agree5.adjust(x$BF_consc6)
bf.consc7 <- l2num.agree5(x$BF_consc7)
bf.consc8 <- l2num.agree5(x$BF_consc8)
bf.extro1 <- l2num.agree5.adjust(x$BF_extro1)
bf.extro3 <- l2num.agree5.adjust(x$BF_extro3)
bf.extro4 <- l2num.agree5.adjust(x$BF_extro4)
bf.extro6 <- l2num.agree5.adjust(x$BF_extro6)
bf.extro8 <- l2num.agree5(x$BF_extro8)
bf.neuro1 <- l2num.agree5.adjust(x$BF_neuro1)
bf.neuro3 <- l2num.agree5.adjust(x$BF_neuro3)
bf.neuro4 <- l2num.agree5.adjust(x$BF_neuro4)
bf.neuro6 <- l2num.agree5.adjust(x$BF_neuro6)
bf.neuro8 <- l2num.agree5(x$BF_neuro8)
bf.open1 <- l2num.agree5.adjust(x$BF_open1)
bf.open2 <- l2num.agree5.adjust(x$BF_open2)
bf.open3 <- l2num.agree5.adjust(x$BF_open3)
bf.open4 <- l2num.agree5.adjust(x$BF_open4)
bf.open5 <- l2num.agree5.adjust(x$BF_open5)
bf.open6 <- l2num.agree5.adjust(x$BF_open6R)
bf.open8 <- l2num.agree5(x$BF_open8)
bf.open10 <- l2num.agree5(x$BF_open10)
#Reverse coding BF items
l2num.agree5.adjust.reverse <- function(y) { #create function to convert 5-point agree/disagree scale
y <- factor(y,
levels = c("Strongly\nagree",
"Somewhat agree",
"Neither agree nor disagree",
"Somewhat disagree",
"Strongly disagree"))
as.numeric(y)
}
l2num.agree5.reverse <- function(y) { #create function to convert 5-point agree/disagree scale
y <- factor(y,
levels = c("Strongly agree",
"Somewhat agree",
"Neither agree nor disagree",
"Somewhat disagree",
"Strongly disagree"))
as.numeric(y)
}
bf.agree1 <- l2num.agree5.adjust.reverse(x$BF_agree1R)
bf.agree3 <- l2num.agree5.adjust.reverse(x$BF_agree3R)
bf.agree6 <- l2num.agree5.adjust.reverse(x$BF_agree6R)
bf.agree8 <- l2num.agree5.reverse(x$BF_agree8R)
bf.consc2 <- l2num.agree5.adjust.reverse(x$BF_consc2R)
bf.consc4 <- l2num.agree5.adjust.reverse(x$BF_consc4R)
bf.consc5 <- l2num.agree5.adjust.reverse(x$BF_consc5R)
bf.consc9 <- l2num.agree5.reverse(x$BF_consc9R)
bf.extro2 <- l2num.agree5.adjust.reverse(x$BF_extro2R)
bf.extro5 <- l2num.agree5.adjust.reverse(x$BF_extro5R)
bf.extro7 <- l2num.agree5.adjust.reverse(x$BF_extro7R)
bf.neuro2 <- l2num.agree5.adjust.reverse(x$BF_neuro2R)
bf.neuro5 <- l2num.agree5.adjust.reverse(x$BF_neuro5R)
bf.neuro7 <- l2num.agree5.reverse(x$BF_neuro7R)
bf.open7 <- l2num.agree5.reverse(x$BF_open7R)
bf.open9 <- l2num.agree5.reverse(x$BF_open9R)
#Test scale reliability
bf.scale.all <- data.frame(bf.agree1, bf.agree2, bf.agree3,
bf.agree4, bf.agree5, bf.agree6,
bf.agree7, bf.agree8, bf.consc1,
bf.consc2, bf.consc3, bf.consc4,
bf.consc5, bf.consc6, bf.consc7,
bf.consc8, bf.consc9, bf.extro1,
bf.extro2, bf.extro3, bf.extro4,
bf.extro5, bf.extro6, bf.extro7,
bf.extro8, bf.neuro1, bf.neuro2,
bf.neuro3, bf.neuro4, bf.neuro5,
bf.neuro6, bf.neuro7, bf.neuro8,
bf.open1, bf.open2, bf.open3,
bf.open4, bf.open5, bf.open6,
bf.open7, bf.open8, bf.open9,
bf.open10)
bf.scale.agree <- data.frame(bf.agree1, bf.agree2, bf.agree3,
bf.agree4, bf.agree5, bf.agree6,
bf.agree7, bf.agree8)
bf.scale.consc <- data.frame(bf.consc1,
bf.consc2, bf.consc3, bf.consc4,
bf.consc5, bf.consc6, bf.consc7,
bf.consc8, bf.consc9)
bf.scale.extro <- data.frame(bf.extro1,
bf.extro2, bf.extro3, bf.extro4,
bf.extro5, bf.extro6, bf.extro7,
bf.extro8)
bf.scale.neuro <- data.frame(bf.neuro1, bf.neuro2,
bf.neuro3, bf.neuro4, bf.neuro5,
bf.neuro6, bf.neuro7, bf.neuro8)
bf.scale.open <- data.frame(bf.open1, bf.open2, bf.open3,
bf.open4, bf.open5, bf.open6,
bf.open7, bf.open8, bf.open9,
bf.open10)
alpha(bf.scale.agree)
alpha(bf.scale.consc)
alpha(bf.scale.extro)
alpha(bf.scale.neuro)
alpha(bf.scale.open)
#Drop poor performing questions from scale:
# bf.extro7, open6, open7, open9 (the code below keeps open8 and open10)
bf.scale.agree <- data.frame(bf.agree1, bf.agree2, bf.agree3,
bf.agree4, bf.agree5, bf.agree6,
bf.agree7, bf.agree8)
bf.scale.consc <- data.frame(bf.consc1,
bf.consc2, bf.consc3, bf.consc4,
bf.consc5, bf.consc6, bf.consc7,
bf.consc8, bf.consc9)
bf.scale.extro <- data.frame(bf.extro1,bf.extro2, bf.extro3,
bf.extro4,bf.extro5, bf.extro6,
bf.extro8)
bf.scale.neuro <- data.frame(bf.neuro1, bf.neuro2,
bf.neuro3, bf.neuro4, bf.neuro5,
bf.neuro6, bf.neuro7, bf.neuro8)
bf.scale.open <- data.frame(bf.open1, bf.open2, bf.open3,
bf.open4, bf.open5, bf.open8,
bf.open10)
alpha(bf.scale.agree)
alpha(bf.scale.consc)
alpha(bf.scale.extro)
alpha(bf.scale.neuro)
alpha(bf.scale.open)
bf.scale <- data.frame(bf.open1, bf.open2, bf.open3,
bf.open4, bf.open5, bf.open8,
bf.open10)
#Collapse scales for regression
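# Editor's sketch (not in the original script): these composites can also be built
# with rowMeans(), which tolerates missing items, e.g.
# bf.scale.agree <- rowMeans(cbind(bf.agree1, bf.agree2, bf.agree3, bf.agree4,
#                                  bf.agree5, bf.agree6, bf.agree7, bf.agree8), na.rm = TRUE)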
bf.scale.agree <- ((bf.agree1 + bf.agree2 + bf.agree3 +
bf.agree4 + bf.agree5 + bf.agree6 +
bf.agree7 + bf.agree8)/8)
describe(bf.scale.agree)
bf.scale.consc <- ((bf.consc1 + bf.consc2 + bf.consc3 +
bf.consc4 + bf.consc5 + bf.consc6 +
bf.consc7 + bf.consc8 + bf.consc9)/9)
describe(bf.scale.consc)
bf.scale.extro <- ((bf.extro1 + bf.extro2 + bf.extro3 +
bf.extro4 + bf.extro5 + bf.extro6 +
bf.extro8)/7)
describe(bf.scale.extro)
bf.scale.neuro <- ((bf.neuro1 + bf.neuro2 + bf.neuro3 +
bf.neuro4 + bf.neuro5 + bf.neuro6 +
bf.neuro7 + bf.neuro8)/8)
describe(bf.scale.neuro)
bf.scale.open <- ((bf.open1 + bf.open2 + bf.open3 +
                     bf.open4 + bf.open5 + bf.open8 +
                     bf.open10)/7)  # average of the 7 retained openness items
describe(bf.scale.open)
#partyID
# max = Republican
partyID <- factor(x$partyID,
levels = c("Democrat",
"Independent",
"Republican"))
partyID <- as.numeric(partyID)
partyID
#Sex variable conversion
# Male = 1, Female = 2
cat2num.sex <- function(y) {
y <- factor(y,
levels = c("Male",
"Female"))
as.numeric(y)
}
sex <- cat2num.sex(x$sex)
# NOTE: leftover helper -- 'l2num.income' below still carries the Male/Female levels
# copied from the sex recode and is superseded by l2num.income8 defined next;
# the stray cat2num.sex(x$sex) call appears to be a leftover as well.
l2num.income <- function(y) {
  y <- factor(y,
              levels = c("Male",
                         "Female"))
  as.numeric(y)
}
cat2num.sex(x$sex)
#Income variable scale construction
# 1 = Less than $15,000, 8 = $150,001 or more
l2num.income8 <- function(y) {
y <- factor(y,
levels = c("Less than $15,000",
"$15,001-$30,000",
"$30,001-$45,000",
"$45,001-$60,000",
"$60,001-$75,000",
"$75,001-$100,000",
"$100,001-$150,000",
"$150,001 or more"))
as.numeric(y)
}
income <- l2num.income8(x$income)
describe(income)
#Education variable scale construction
#1 = less than HS, 6 = Postgraduate degree
l2num.edu6 <- function(y) {
y <- factor(y,
levels = c("Less than high school",
"High school graduate",
"Some college",
"2 year degree",
"4 year degree",
"Postgraduate degree"))
as.numeric(y)
}
edu <- l2num.edu6(x$education)
describe(edu)
#Economic partisanship scale construction
# 1 = very liberal, 7 = very conservative
l2num.partisanship7 <- function(y) {
y <- factor(y,
levels = c("Very liberal",
"Liberal",
"Somewhat liberal",
"Moderate",
"Somewhat conservative",
"Conservative",
"Very conservative"))
as.numeric(y)
}
partisanship.econ <- l2num.partisanship7(x$partisanship_econ)
describe(partisanship.econ)
#Social partisanship scale construction
# 1 = very liberal, 7 = very conservative
partisanship.social <- l2num.partisanship7(x$partisanship_social)
describe(partisanship.social)
partisanship.scale <- data.frame(partisanship.econ, partisanship.social)
alpha(partisanship.scale)
partisanship.scale <- ((partisanship.econ + partisanship.social)/2)
#Pessimism scale construction
#max = more likely to experience a pessimistic event
l2num.likely5 <- function(y) {
y <- factor(y,
levels = c("Extremely unlikely",
"Somewhat unlikely",
"Neither likely nor unlikely",
"Somewhat likely",
"Extremely\nlikely"))
as.numeric(y)
}
pess.recession <- l2num.likely5(x$pess1_recession)
pess.ebola <- l2num.likely5(x$pess2_ebola)
pess.terror <- l2num.likely5(x$pess3_terror)
pess.war <- l2num.likely5(x$pess4_war)
pess.scale <- data.frame(pess.recession,
pess.ebola,
pess.terror,
pess.war)
alpha(pess.scale)
pess.scale <- ((pess.recession + pess.ebola + pess.terror + pess.war)/4)
describe(pess.scale)
#Symbolic Thinking scale construction
# 1 = symbolic, 2 = tangible
symb1 <- factor(x$symb1,
levels = c("stab a photo of your family six times",
"stick your hands in a bowl of cockroaches"))
symb1 <- as.numeric(symb1)
symb1
symb2 <- factor(x$symb2,
levels = c("a luxurious house where a family had recently been murdered",
"a grimy bus station"))
symb2 <- as.numeric(symb2)
symb2
symb3 <- factor(x$symb3,
levels = c("leave trash on someone's grave",
"stand in line for 3 hours at the DMV?"))
symb3 <- as.numeric(symb3)
symb3
symb4 <- factor(x$symb4,
levels = c("yell \"I hope I die tomorrow\" out loud six times",
"ride in a speeding car without a seat belt"))
symb4 <- as.numeric(symb4)
symb4
symb5 <- factor(x$symb5,
levels = c("sleep in laundered pajamas once worn by Charles Manson",
"put a nickel in your mouth that you found on the ground"))
symb5 <- as.numeric(symb5)
symb5
symb6 <- factor(x$symb6,
levels = c("sold two winning tickets in the past three years but had a long line",
"never sold a winning ticket but had no lines"))
symb6 <- as.numeric(symb6)
symb6
symb.scale <- data.frame(symb1,
symb2,
symb3,
symb5) # NOTE: symb4 and symb6 are left out of this reliability check but included in the composite below
alpha(symb.scale)
symb.scale <- ((symb1 + symb2 + symb3 + symb4 + symb5 + symb6)/6)
#Need for Cognition scale construction
l2num.describes.me5 <- function(y) {
y <- factor(y,
levels = c("Does not describe me",
"Describes me slightly well",
"Describes me moderately well",
"Describes me very well",
"Describes me extremely well"))
as.numeric(y)
}
l2num.describes.me5.reverse <- function(y) {
y <- factor(y,
levels = c("Describes me extremely well",
"Describes me very well",
"Describes me moderately well",
"Describes me slightly well",
"Does not describe me"))
as.numeric(y)
}
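# Hedged note (sketch): instead of writing a separate reversed helper for each response set,
# reverse-keyed items can also be flipped numerically after conversion, e.g. for a 5-point
# item: reverse5 <- function(score) 6 - score   # assumes scores are coded 1..5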
nfc1 <- l2num.describes.me5(x$NFC1)
nfc2 <- l2num.describes.me5(x$NFC2)
nfc3 <- l2num.describes.me5.reverse(x$NFC3R)
nfc4 <- l2num.describes.me5.reverse(x$NFC4R)
nfc5 <- l2num.describes.me5.reverse(x$NFC5R)
nfc6 <- l2num.describes.me5(x$NFC6)
nfc7 <- l2num.describes.me5.reverse(x$NFC7R)
nfc8 <- l2num.describes.me5.reverse(x$NFC8R)
nfc9 <- l2num.describes.me5(x$NFC9)
nfc10 <- l2num.describes.me5.reverse(x$NFC10R)
nfc11 <- l2num.describes.me5(x$NFC11)
nfc12 <- l2num.describes.me5(x$NFC12R) # NOTE: NFC12R looks reverse-keyed; check whether l2num.describes.me5.reverse() was intended
nfc13 <- l2num.describes.me5.reverse(x$NFC13R)
nfc14 <- l2num.describes.me5(x$NFC14)
nfc15 <- l2num.describes.me5(x$NFC15)
nfc16 <- l2num.describes.me5(x$NFC16)
nfc17 <- l2num.describes.me5.reverse(x$NFC17R)
nfc18 <- l2num.describes.me5.reverse(x$NFC18R)
nfc.scale <- data.frame(nfc1, nfc2, nfc3,
nfc4, nfc5, nfc6,
nfc7, nfc8, nfc9,
nfc10, nfc11, nfc12,
nfc13, nfc14, nfc15,
nfc16, nfc17, nfc18)
alpha(nfc.scale)
nfc.scale <- ((nfc1 + nfc2 + nfc3 +
nfc4 + nfc5 + nfc6 +
nfc7 + nfc8 + nfc9 +
nfc10 + nfc11 + nfc12 +
nfc13 + nfc14 + nfc15 +
nfc16 + nfc17 + nfc18)/18)
nfc.scale
#Condition scale construction
# Ohio = Clinton lead, Trump trail, NC = Trump lead, Clinton trail, Iowa = Tied
condition <- factor(x$Condition,
levels = c("Ohio",
"Iowa",
"North Carolina"))
condition <- as.numeric(condition)
condition
#HMP favorability scale
hmp.favor <- factor(x$hmp_condition_favor,
levels = c("Strongly favored Clinton",
"Mostly favored Clinton",
"Slightly favored Clinton",
"Article was strictly neutral",
"Slightly favored Trump",
"Mostly favored Trump",
"Strongly favored Trump"))
hmp.favor <- as.numeric(hmp.favor)
describe(hmp.favor)
# HMP bias scale construction - Clinton
# 1 = Not at all biased, 5 = very biased against Clinton
l2num.bias.clinton5 <- function(y) {
y <- factor(y,
levels = c("Not at all biased against Clinton",
"Slightly biased against Clinton",
"Somewhat biased against Clinton",
"Mostly biased against Clinton",
"Very biased against Clinton"))
as.numeric(y)
}
clinton.bias <- l2num.bias.clinton5(x$hmp_condition_clinton)
describe(clinton.bias)
# HMP bias scale construction - Trump
# 1 = Not at all biased, 5 = very biased against Trump
l2num.bias.trump5 <- function(y) {
y <- factor(y,
levels = c("Not at all biased against Trump",
"Slightly biased against Trump",
"Somewhat biased against Trump",
"Mostly biased against Trump",
"Very biased against Trump"))
as.numeric(y)
}
trump.bias <- l2num.bias.trump5(x$hmp_condition_trump)
describe(trump.bias)
# HMP bias scale construction - Journalist
# 1 = Strongly favored Clinton, 3 = midpoint, neutral, 5 = Strongly favored Trump
l2num.bias.journalist <- function(y) {
y <- factor(y,
levels = c("Strongly favored Clinton",
"Mostly favored Clinton",
"Slightly favored Clinton",
"Journalist was strictly neutral",
"Slightly favored Trump",
"Mostly favored Trump",
"Strongly favored Trump"))
as.numeric(y)
}
hmp.journalist <- l2num.bias.journalist(x$hmp_condition_journalist)
describe(hmp.journalist)
# Journalist mistrust scale construction
# 1 = strongly disagree, 5 = strongly agree
l2num.agree7 <- function(y) {
y <- factor(y,
levels = c("Strongly disagree",
"Disagree",
"Somewhat disagree",
"Neither agree nor disagree",
"Somewhat agree",
"Agree",
"Strongly agree"))
as.numeric(y)
}
l2num.agree7.reverse <- function(y) {
y <- factor(y,
levels = c("Strongly agree",
"Agree",
"Somewhat agree",
"Neither agree nor disagree",
"Somewhat disagree",
"Disagree",
"Strongly disagree"))
as.numeric(y)
}
journ.clicks <- l2num.agree7(x$journ_clicks)
journ.personal <- l2num.agree7(x$journ_personal)
journ.pretend <- l2num.agree7(x$journ_pretend)
journ.rig <- l2num.agree7(x$journ_rig)
journ.twist <- l2num.agree7(x$journ_twist)
journ.info <- l2num.agree7.reverse(x$journ_infoR)
journ.objective <- l2num.agree7.reverse(x$journ_objectivityR)
journ.bias.scale <- data.frame(journ.clicks,
journ.personal,
journ.pretend,
journ.rig,
journ.twist,
journ.info,
journ.objective)
alpha(journ.bias.scale)
journ.bias.scale <- ((journ.clicks +
journ.personal +
journ.pretend +
journ.rig +
journ.twist +
journ.info +
journ.objective)/7)
describe(journ.bias.scale)
#Candidate Support scale
cand.support <- factor(x$cand_support,
levels = c("Hillary Clinton",
"Dontald Trump"))
cand.support <- as.numeric(cand.support)
cand.support
#Media Distrust scale
media.trust <- factor(x$media_trust,
levels = c("None of the time",
"Only some of the time",
"About half the time",
"Most of the time",
"All of the time"))
media.trust <- as.numeric(media.trust)
media.trust
media.distrust.scale <- data.frame(x$media_complete,
x$media_untrustworthy,
x$media_inaccurate,
x$media_unfair)
alpha(media.distrust.scale)
media.distrust.scale <- ((x$media_complete + x$media_untrustworthy +
x$media_inaccurate + x$media_unfair)/4)
describe(media.distrust.scale)
#Prayer scale construction
# 1 = Never, 5 = Several times a day
prayer <- factor(x$pray,
levels = c("Never",
"Once a week or less",
"A few times a week",
"Once a day",
"Several times a day"))
prayer <- as.numeric(prayer)
describe(prayer)
#Religious fundamentalism scale construction
# 1 = strongly disagree, 5 = strongly agree
rel.fund1 <- l2num.agree5(x$rel_fund1)
rel.fund2 <- l2num.agree5(x$rel_fund2)
rel.fund3 <- l2num.agree5(x$rel_fund3)
rel.fund4 <- l2num.agree5(x$rel_fund4)
rel.fund.scale <- data.frame(rel.fund1,
rel.fund2,
rel.fund3,
rel.fund4)
alpha(rel.fund.scale)
describe(rel.fund.scale)
rel.fund.scale <- ((rel.fund1 + rel.fund2 + rel.fund3 + rel.fund4)/4)
#Effect on voter likelihood scales - Trump
# 1 = less likely to vote, 3 = more likely to vote
cat2num.trump.effect <- function(y) {
y <- factor(y,
levels = c("Articles like this one make Trump supporters less likely to vote",
"Articles like this one do not affect Trump supporters' decision to vote",
"Articles like this one make Trump supporters more likely to vote"))
as.numeric(y)
}
trump.effect.iowa <- cat2num.trump.effect(x$effect_trump_iowa)
trump.effect.nc <- cat2num.trump.effect(x$effect_trump_NC)
trump.effect.ohio <- cat2num.trump.effect(x$effect_trump_ohio)
#Effect on voter likelihood scales - Clinton
# 1 = less likely to vote, 3 = more likely to vote
cat2num.clinton.effect <- function(y) {
y <- factor(y,
levels = c("Articles like this one make Clinton supporters less likely to vote",
"Articles like this one do not affect Clinton supporters' decision to vote",
"Articles like this one make Clinton supporters more likely to vote"))
as.numeric(y)
}
clinton.effect.iowa <- cat2num.clinton.effect(x$effect_clinton_iowa)
clinton.effect.nc <- cat2num.clinton.effect(x$effect_clinton_NC)
clinton.effect.ohio <- cat2num.clinton.effect(x$effect_clinton_ohio)
# Notes on the modeling choice:
# The dependent variables here take a small number of discrete outcomes rather than a
# continuous value. The intuition behind the MLE approach is that the linear predictor does
# not map directly onto the outcome; a latent variable determines how the IVs affect the DV,
# and a link function maps that latent scale onto the observed outcome.
# Which model to use depends on the distribution of the outcome: a count model for counts,
# logit/probit for a 0/1 outcome, and an ordered logit/probit when the DV is ordinal.
# In a logit model each coefficient is the change in the log odds for a one-unit change in
# the IV; an ordered logit gives one set of coefficients plus cutpoints on the same
# log-odds scale.
# Predicted-probability plots are usually easier to interpret than raw log-odds ratios.
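# Hedged sketch (illustration only, not part of the original analysis): an ordered logistic
# model of the three-level Trump turnout-effect item on a few of the composites built above,
# using MASS::polr(). The choice of predictors here is an assumption for demonstration.
# library(MASS)
# ord_fit <- polr(factor(trump.effect.ohio, ordered = TRUE) ~ partisanship.scale +
#                   media.distrust.scale + nfc.scale, Hess = TRUE)
# summary(ord_fit)                         # coefficients are on the log-odds scale
# head(predict(ord_fit, type = "probs"))   # predicted probabilities per category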
| /R/.old/magicHME-transform-2.0.R | no_license | mkearney/foley | R | false | false | 28,104 | r | #clear environment
rm(list = ls())
#load packages
library(psych)
#load dataset
x <- read.csv("~/RStats/Magic HME data - clean.csv", header = TRUE)
#Race Scale construction
#binary - white/nonwhite
x$race_ethnicity
race <- as.numeric(x$race_ethnicity == "White/Caucasian")
race
#Conspiracy scale construction
# max = strongly agree
l2num.agree5 <- function(y) { #create function to convert 5-point agree/disagree scale
y <- factor(y,
levels = c("Strongly disagree",
"Somewhat disagree",
"Neither agree nor disagree",
"Somewhat agree",
"Strongly agree"))
as.numeric(y)
}
l2num.agree5.adjust <- function(y) { #same 5-point scale, but the top label contains a line break ("Strongly\nagree")
y <- factor(y,
levels = c("Strongly disagree",
"Somewhat disagree",
"Neither agree nor disagree",
"Somewhat agree",
"Strongly\nagree"))
as.numeric(y)
}
consp1 <- l2num.agree5.adjust(x$consp1)
consp2 <- l2num.agree5.adjust(x$consp2)
consp3 <- l2num.agree5.adjust(x$consp3)
consp4 <- l2num.agree5.adjust(x$consp4)
consp.scale <- data.frame(consp1,
consp2,
consp3,
consp4)
alpha(consp.scale)
consp.scale <- ((consp1 + consp2 + consp3 + consp4)/4)
describe(consp.scale)
#Paranormal beliefs scale construction
# max = yes
# I don't think I should have scaled this variable
# since the responses don't really form a realistic continuum;
# is there a way to specify the levels as c(-1, 0, 1)? (see the sketch below)
l2num.yesno3 <- function(y) {
y <- factor(y,
levels = c("No",
"Not sure",
"Yes"))
as.numeric(y)
}
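# Hedged sketch answering the question above: to score "No"/"Not sure"/"Yes" as -1/0/1
# instead of 1/2/3, the responses can be recoded with a named lookup rather than
# as.numeric() on the factor. This helper is illustrative and is not used below.
# l2num.yesno3.signed <- function(y) {
#   unname(c("No" = -1, "Not sure" = 0, "Yes" = 1)[as.character(y)])
# }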
para1 <- l2num.yesno3(x$para1)
para2 <- l2num.yesno3(x$para2)
para3 <- l2num.yesno3(x$para3)
para4 <- l2num.yesno3(x$para4)
para.scale <- data.frame(para1,
para2,
para3,
para4)
alpha(para.scale)
para.scale <- ((para1 + para2 + para3 + para4)/4)
para.scale
#Apprehension Scale construction
l2num.freq5 <- function(y) {
y <- factor(y,
levels = c("Never",
"Seldom",
"Sometimes",
"Usually",
"Always"))
as.numeric(y)
}
#These first two apprehension Q's were presented correctly in the survey.
#The larger 6-level helper is applied to all items so that it covers the response options
#used in both questions 1-2 and questions 3-8.
l2num.freq5.fail <- function(y) {
y <- factor(y,
levels = c("Never",
"Seldom",
"Sometimes",
"Usually",
"Most of the time",
"Always"))
as.numeric(y)
}
appr1 <- l2num.freq5.fail(x$appr1_seatbelt)
appr2 <- l2num.freq5.fail(x$appr2_wash)
appr3 <- l2num.freq5.fail(x$appr3_luggage)
appr4 <- l2num.freq5.fail(x$appr4_shred)
appr5 <- l2num.freq5.fail(x$appr5_lockcar)
appr6 <- l2num.freq5.fail(x$appr6_lockhouse)
appr7 <- l2num.freq5.fail(x$appr7_cancer)
appr8 <- l2num.freq5.fail(x$appr7_cancer) # NOTE: appr8 re-uses the appr7_cancer column; check whether a separate appr8 item was intended
appr.scale <- data.frame(appr1, appr2)
alpha(appr.scale)
summary(appr.scale)
appr.scale.fail <- data.frame(appr1, appr2, appr3, appr4, appr5, appr6, appr7, appr8)
alpha(appr.scale.fail)
summary(appr.scale.fail)
appr.scale.fail <- ((appr1 + appr2 + appr3 + appr4 + appr5 + appr6 + appr7 + appr8)/8)
plot(appr.scale.fail)
# Election news attention scale construction
# max = a great deal
l2num.amount5 <- function(y) {
y <- factor(y,
levels = c("None at all",
"A little",
"A moderate amount",
"A lot",
"A great deal"))
as.numeric(y)
}
election.news.attn <- l2num.amount5(x$election_news_attention)
describe(election.news.attn)
#Election interest scale construction
# max = very interested
l2num.interest4 <- function(y) {
y <- factor(y,
levels = c("Not at all interested",
"Somewhat interested",
"Moderately interested",
"Very interested"))
as.numeric(y)
}
election.interest <- l2num.interest4(x$election_interest)
describe(election.interest)
#Religious services scale construction
# max = attends more often
l2num.freq6 <- function(y){
y <- factor(y,
levels = c("Never",
"A few times a year",
"Once or twice a month",
"Almost every week",
"Every week",
"More than once a week"))
as.numeric(y)
}
rel.services <- l2num.freq6(x$religion_services)
describe(rel.services)
#Big Five scale construction
# max = strongly agree
#Normal Items
bf.agree2 <- l2num.agree5.adjust(x$BF_agree2)
bf.agree4 <- l2num.agree5.adjust(x$BF_agree4)
bf.agree5 <- l2num.agree5.adjust(x$BF_agree5)
bf.agree7 <- l2num.agree5.adjust(x$BF_agree7)
bf.agree9 <- l2num.agree5(x$BF_agree9)
bf.consc1 <- l2num.agree5.adjust(x$BF_consc1)
bf.consc3 <- l2num.agree5.adjust(x$BF_consc3)
bf.consc6 <- l2num.agree5.adjust(x$BF_consc6)
bf.consc7 <- l2num.agree5(x$BF_consc7)
bf.consc8 <- l2num.agree5(x$BF_consc8)
bf.extro1 <- l2num.agree5.adjust(x$BF_extro1)
bf.extro3 <- l2num.agree5.adjust(x$BF_extro3)
bf.extro4 <- l2num.agree5.adjust(x$BF_extro4)
bf.extro6 <- l2num.agree5.adjust(x$BF_extro6)
bf.extro8 <- l2num.agree5(x$BF_extro8)
bf.neuro1 <- l2num.agree5.adjust(x$BF_neuro1)
bf.neuro3 <- l2num.agree5.adjust(x$BF_neuro3)
bf.neuro4 <- l2num.agree5.adjust(x$BF_neuro4)
bf.neuro6 <- l2num.agree5.adjust(x$BF_neuro6)
bf.neuro8 <- l2num.agree5(x$BF_neuro8)
bf.open1 <- l2num.agree5.adjust(x$BF_open1)
bf.open2 <- l2num.agree5.adjust(x$BF_open2)
bf.open3 <- l2num.agree5.adjust(x$BF_open3)
bf.open4 <- l2num.agree5.adjust(x$BF_open4)
bf.open5 <- l2num.agree5.adjust(x$BF_open5)
bf.open6 <- l2num.agree5.adjust(x$BF_open6R)
bf.open8 <- l2num.agree5(x$BF_open8)
bf.open10 <- l2num.agree5(x$BF_open10)
#Reverse coding BF items
l2num.agree5.adjust.reverse <- function(y) { #create function to convert 5-point agree/disagree scale
y <- factor(y,
levels = c("Strongly\nagree",
"Somewhat agree",
"Neither agree nor disagree",
"Somewhat disagree",
"Strongly disagree"))
as.numeric(y)
}
l2num.agree5.reverse <- function(y) { #create function to convert 5-point agree/disagree scale
y <- factor(y,
levels = c("Strongly agree",
"Somewhat agree",
"Neither agree nor disagree",
"Somewhat disagree",
"Strongly disagree"))
as.numeric(y)
}
bf.agree1 <- l2num.agree5.adjust.reverse(x$BF_agree1R)
bf.agree3 <- l2num.agree5.adjust.reverse(x$BF_agree3R)
bf.agree6 <- l2num.agree5.adjust.reverse(x$BF_agree6R)
bf.agree8 <- l2num.agree5.reverse(x$BF_agree8R)
bf.consc2 <- l2num.agree5.adjust.reverse(x$BF_consc2R)
bf.consc4 <- l2num.agree5.adjust.reverse(x$BF_consc4R)
bf.consc5 <- l2num.agree5.adjust.reverse(x$BF_consc5R)
bf.consc9 <- l2num.agree5.reverse(x$BF_consc9R)
bf.extro2 <- l2num.agree5.adjust.reverse(x$BF_extro2R)
bf.extro5 <- l2num.agree5.adjust.reverse(x$BF_extro5R)
bf.extro7 <- l2num.agree5.adjust.reverse(x$BF_extro7R)
bf.neuro2 <- l2num.agree5.adjust.reverse(x$BF_neuro2R)
bf.neuro5 <- l2num.agree5.adjust.reverse(x$BF_neuro5R)
bf.neuro7 <- l2num.agree5.reverse(x$BF_neuro7R)
bf.open7 <- l2num.agree5.reverse(x$BF_open7R)
bf.open9 <- l2num.agree5.reverse(x$BF_open9R)
#Test scale reliability
bf.scale.all <- data.frame(bf.agree1, bf.agree2, bf.agree3,
bf.agree4, bf.agree5, bf.agree6,
bf.agree7, bf.agree8, bf.consc1,
bf.consc2, bf.consc3, bf.consc4,
bf.consc5, bf.consc6, bf.consc7,
bf.consc8, bf.consc9, bf.extro1,
bf.extro2, bf.extro3, bf.extro4,
bf.extro5, bf.extro6, bf.extro7,
bf.extro8, bf.neuro1, bf.neuro2,
bf.neuro3, bf.neuro4, bf.neuro5,
bf.neuro6, bf.neuro7, bf.neuro8,
bf.open1, bf.open2, bf.open3,
bf.open4, bf.open5, bf.open6,
bf.open7, bf.open8, bf.open9,
bf.open10)
bf.scale.agree <- data.frame(bf.agree1, bf.agree2, bf.agree3,
bf.agree4, bf.agree5, bf.agree6,
bf.agree7, bf.agree8)
bf.scale.consc <- data.frame(bf.consc1,
bf.consc2, bf.consc3, bf.consc4,
bf.consc5, bf.consc6, bf.consc7,
bf.consc8, bf.consc9)
bf.scale.extro <- data.frame(bf.extro1,
bf.extro2, bf.extro3, bf.extro4,
bf.extro5, bf.extro6, bf.extro7,
bf.extro8)
bf.scale.neuro <- data.frame(bf.neuro1, bf.neuro2,
bf.neuro3, bf.neuro4, bf.neuro5,
bf.neuro6, bf.neuro7, bf.neuro8)
bf.scale.open <- data.frame(bf.open1, bf.open2, bf.open3,
bf.open4, bf.open5, bf.open6,
bf.open7, bf.open8, bf.open9,
bf.open10)
alpha(bf.scale.agree)
alpha(bf.scale.consc)
alpha(bf.scale.extro)
alpha(bf.scale.neuro)
alpha(bf.scale.open)
#Drop poor performing questions from scale:
# bf.extro7, open6, open7, open9 (open8 and open10 are retained below)
bf.scale.agree <- data.frame(bf.agree1, bf.agree2, bf.agree3,
bf.agree4, bf.agree5, bf.agree6,
bf.agree7, bf.agree8)
bf.scale.consc <- data.frame(bf.consc1,
bf.consc2, bf.consc3, bf.consc4,
bf.consc5, bf.consc6, bf.consc7,
bf.consc8, bf.consc9)
bf.scale.extro <- data.frame(bf.extro1,bf.extro2, bf.extro3,
bf.extro4,bf.extro5, bf.extro6,
bf.extro8)
bf.scale.neuro <- data.frame(bf.neuro1, bf.neuro2,
bf.neuro3, bf.neuro4, bf.neuro5,
bf.neuro6, bf.neuro7, bf.neuro8)
bf.scale.open <- data.frame(bf.open1, bf.open2, bf.open3,
bf.open4, bf.open5, bf.open8,
bf.open10)
alpha(bf.scale.agree)
alpha(bf.scale.consc)
alpha(bf.scale.extro)
alpha(bf.scale.neuro)
alpha(bf.scale.open)
bf.scale <- data.frame(bf.open1, bf.open2, bf.open3,
bf.open4, bf.open5, bf.open8,
bf.open10)
#Collapse scales for regression
bf.scale.agree <- ((bf.agree1 + bf.agree2 + bf.agree3 +
bf.agree4 + bf.agree5 + bf.agree6 +
bf.agree7 + bf.agree8)/8)
describe(bf.scale.agree)
bf.scale.consc <- ((bf.consc1 + bf.consc2 + bf.consc3 +
bf.consc4 + bf.consc5 + bf.consc6 +
bf.consc7 + bf.consc8 + bf.consc9)/9)
describe(bf.scale.consc)
bf.scale.extro <- ((bf.extro1 + bf.extro2 + bf.extro3 +
bf.extro4 + bf.extro5 + bf.extro6 +
bf.extro8)/7)
describe(bf.scale.extro)
bf.scale.neuro <- ((bf.neuro1 + bf.neuro2 + bf.neuro3 +
bf.neuro4 + bf.neuro5 + bf.neuro6 +
bf.neuro7 + bf.neuro8)/8)
describe(bf.scale.neuro)
bf.scale.open <- ((bf.open1 + bf.open2 + bf.open3 +
bf.open4 + bf.open5 + bf.open8 +
bf.open10)/7) # average of the 7 retained openness items
describe(bf.scale.open)
#partyID
# max = Republican
partyID <- factor(x$partyID,
levels = c("Democrat",
"Independent",
"Republican"))
partyID <- as.numeric(partyID)
partyID
#Sex variable conversion
# Male = 1, Female = 2
cat2num.sex <- function(y) {
y <- factor(y,
levels = c("Male",
"Female"))
as.numeric(y)
}
sex <- cat2num.sex(x$sex)
# NOTE: the helper below is a stray copy of cat2num.sex() and is never used (income is
# converted with l2num.income8() further down); the repeated cat2num.sex(x$sex) call just
# re-prints the conversion.
l2num.income <- function(y) {
y <- factor(y,
levels = c("Male",
"Female"))
as.numeric(y)
}
cat2num.sex(x$sex)
#Income variable scale construction
# 1 = Less than $15,000, 5 = 150,001+
l2num.income8 <- function(y) {
y <- factor(y,
levels = c("Less than $15,000",
"$15,001-$30,000",
"$30,001-$45,000",
"$45,001-$60,000",
"$60,001-$75,000",
"$75,001-$100,000",
"$100,001-$150,000",
"$150,001 or more"))
as.numeric(y)
}
income <- l2num.income8(x$income)
describe(income)
#Education variable scale construction
#1 = less than HS, 6 = Postgraduate degree
l2num.edu6 <- function(y) {
y <- factor(y,
levels = c("Less than high school",
"High school graduate",
"Some college",
"2 year degree",
"4 year degree",
"Postgraduate degree"))
as.numeric(y)
}
edu <- l2num.edu6(x$education)
describe(edu)
#Economic partisanship scale construction
# 1 = very liberal, 7 = very conservative
l2num.partisanship7 <- function(y) {
y <- factor(y,
levels = c("Very liberal",
"Liberal",
"Somewhat liberal",
"Moderate",
"Somewhat conservative",
"Conservative",
"Very conservative"))
as.numeric(y)
}
partisanship.econ <- l2num.partisanship7(x$partisanship_econ)
describe(partisanship.econ)
#Social partisanship scale construction
# 1 = very liberal, 7 = very conservative
partisanship.social <- l2num.partisanship7(x$partisanship_social)
describe(partisanship.social)
partisanship.scale <- data.frame(partisanship.econ, partisanship.social)
alpha(partisanship.scale)
partisanship.scale <- ((partisanship.econ + partisanship.social)/2)
#Pessimism scale construction
#max = more likely to experience a pessimistic event
l2num.likely5 <- function(y) {
y <- factor(y,
levels = c("Extremely unlikely",
"Somewhat unlikely",
"Neither likely nor unlikely",
"Somewhat likely",
"Extremely\nlikely"))
as.numeric(y)
}
pess.recession <- l2num.likely5(x$pess1_recession)
pess.ebola <- l2num.likely5(x$pess2_ebola)
pess.terror <- l2num.likely5(x$pess3_terror)
pess.war <- l2num.likely5(x$pess4_war)
pess.scale <- data.frame(pess.recession,
pess.ebola,
pess.terror,
pess.war)
alpha(pess.scale)
pess.scale <- ((pess.recession + pess.ebola + pess.terror + pess.war)/4)
describe(pess.scale)
#Symbolic Thinking scale construction
# 1 = symbolic, 2 = tangible
symb1 <- factor(x$symb1,
levels = c("stab a photo of your family six times",
"stick your hands in a bowl of cockroaches"))
symb1 <- as.numeric(symb1)
symb1
symb2 <- factor(x$symb2,
levels = c("a luxurious house where a family had recently been murdered",
"a grimy bus station"))
symb2 <- as.numeric(symb2)
symb2
symb3 <- factor(x$symb3,
levels = c("leave trash on someone's grave",
"stand in line for 3 hours at the DMV?"))
symb3 <- as.numeric(symb3)
symb3
symb4 <- factor(x$symb4,
levels = c("yell \"I hope I die tomorrow\" out loud six times",
"ride in a speeding car without a seat belt"))
symb4 <- as.numeric(symb4)
symb4
symb5 <- factor(x$symb5,
levels = c("sleep in laundered pajamas once worn by Charles Manson",
"put a nickel in your mouth that you found on the ground"))
symb5 <- as.numeric(symb5)
symb5
symb6 <- factor(x$symb6,
levels = c("sold two winning tickets in the past three years but had a long line",
"never sold a winning ticket but had no lines"))
symb6 <- as.numeric(symb6)
symb6
symb.scale <- data.frame(symb1,
symb2,
symb3,
symb5) # NOTE: symb4 and symb6 are left out of this reliability check but included in the composite below
alpha(symb.scale)
symb.scale <- ((symb1 + symb2 + symb3 + symb4 + symb5 + symb6)/6)
#Need for Cognition scale construction
l2num.describes.me5 <- function(y) {
y <- factor(y,
levels = c("Does not describe me",
"Describes me slightly well",
"Describes me moderately well",
"Describes me very well",
"Describes me extremely well"))
as.numeric(y)
}
l2num.describes.me5.reverse <- function(y) {
y <- factor(y,
levels = c("Describes me extremely well",
"Describes me very well",
"Describes me moderately well",
"Describes me slightly well",
"Does not describe me"))
as.numeric(y)
}
nfc1 <- l2num.describes.me5(x$NFC1)
nfc2 <- l2num.describes.me5(x$NFC2)
nfc3 <- l2num.describes.me5.reverse(x$NFC3R)
nfc4 <- l2num.describes.me5.reverse(x$NFC4R)
nfc5 <- l2num.describes.me5.reverse(x$NFC5R)
nfc6 <- l2num.describes.me5(x$NFC6)
nfc7 <- l2num.describes.me5.reverse(x$NFC7R)
nfc8 <- l2num.describes.me5.reverse(x$NFC8R)
nfc9 <- l2num.describes.me5(x$NFC9)
nfc10 <- l2num.describes.me5.reverse(x$NFC10R)
nfc11 <- l2num.describes.me5(x$NFC11)
nfc12 <- l2num.describes.me5(x$NFC12R) # NOTE: NFC12R looks reverse-keyed; check whether l2num.describes.me5.reverse() was intended
nfc13 <- l2num.describes.me5.reverse(x$NFC13R)
nfc14 <- l2num.describes.me5(x$NFC14)
nfc15 <- l2num.describes.me5(x$NFC15)
nfc16 <- l2num.describes.me5(x$NFC16)
nfc17 <- l2num.describes.me5.reverse(x$NFC17R)
nfc18 <- l2num.describes.me5.reverse(x$NFC18R)
nfc.scale <- data.frame(nfc1, nfc2, nfc3,
nfc4, nfc5, nfc6,
nfc7, nfc8, nfc9,
nfc10, nfc11, nfc12,
nfc13, nfc14, nfc15,
nfc16, nfc17, nfc18)
alpha(nfc.scale)
nfc.scale <- ((nfc1 + nfc2 + nfc3 +
nfc4 + nfc5 + nfc6 +
nfc7 + nfc8 + nfc9 +
nfc10 + nfc11 + nfc12 +
nfc13 + nfc14 + nfc15 +
nfc16 + nfc17 + nfc18)/18)
nfc.scale
#Condition scale construction
# Ohio = Clinton lead, Trump trail, NC = Trump lead, Clinton trail, Iowa = Tied
condition <- factor(x$Condition,
levels = c("Ohio",
"Iowa",
"North Carolina"))
condition <- as.numeric(condition)
condition
#HMP favorability scale
hmp.favor <- factor(x$hmp_condition_favor,
levels = c("Strongly favored Clinton",
"Mostly favored Clinton",
"Slightly favored Clinton",
"Article was strictly neutral",
"Slightly favored Trump",
"Mostly favored Trump",
"Strongly favored Trump"))
hmp.favor <- as.numeric(hmp.favor)
describe(hmp.favor)
# HMP bias scale construction - Clinton
# 1 = Not at all biased, 5 = very biased against Clinton
l2num.bias.clinton5 <- function(y) {
y <- factor(y,
levels = c("Not at all biased against Clinton",
"Slightly biased against Clinton",
"Somewhat biased against Clinton",
"Mostly biased against Clinton",
"Very biased against Clinton"))
as.numeric(y)
}
clinton.bias <- l2num.bias.clinton5(x$hmp_condition_clinton)
describe(clinton.bias)
# HMP bias scale construction - Trump
# 1 = Not at all biased, 5 = very biased against Trump
l2num.bias.trump5 <- function(y) {
y <- factor(y,
levels = c("Not at all biased against Trump",
"Slightly biased against Trump",
"Somewhat biased against Trump",
"Mostly biased against Trump",
"Very biased against Trump"))
as.numeric(y)
}
trump.bias <- l2num.bias.trump5(x$hmp_condition_trump)
describe(trump.bias)
# HMP bias scale construction - Journalist
# 1 = Strongly favored Clinton, 3 = midpoint, neutral, 5 = Strongly favored Trump
l2num.bias.journalist <- function(y) {
y <- factor(y,
levels = c("Strongly favored Clinton",
"Mostly favored Clinton",
"Slightly favored Clinton",
"Journalist was strictly neutral",
"Slightly favored Trump",
"Mostly favored Trump",
"Strongly favored Trump"))
as.numeric(y)
}
hmp.journalist <- l2num.bias.journalist(x$hmp_condition_journalist)
describe(hmp.journalist)
# Journalist mistrust scale construction
# 1 = strongly disagree, 5 = strongly agree
l2num.agree7 <- function(y) {
y <- factor(y,
levels = c("Strongly disagree",
"Disagree",
"Somewhat disagree",
"Neither agree nor disagree",
"Somewhat agree",
"Agree",
"Strongly agree"))
as.numeric(y)
}
l2num.agree7.reverse <- function(y) {
y <- factor(y,
levels = c("Strongly agree",
"Agree",
"Somewhat agree",
"Neither agree nor disagree",
"Somewhat disagree",
"Disagree",
"Strongly disagree"))
as.numeric(y)
}
journ.clicks <- l2num.agree7(x$journ_clicks)
journ.personal <- l2num.agree7(x$journ_personal)
journ.pretend <- l2num.agree7(x$journ_pretend)
journ.rig <- l2num.agree7(x$journ_rig)
journ.twist <- l2num.agree7(x$journ_twist)
journ.info <- l2num.agree7.reverse(x$journ_infoR)
journ.objective <- l2num.agree7.reverse(x$journ_objectivityR)
journ.bias.scale <- data.frame(journ.clicks,
journ.personal,
journ.pretend,
journ.rig,
journ.twist,
journ.info,
journ.objective)
alpha(journ.bias.scale)
journ.bias.scale <- ((journ.clicks +
journ.personal +
journ.pretend +
journ.rig +
journ.twist +
journ.info +
journ.objective)/7)
describe(journ.bias.scale)
#Candidate Support scale
cand.support <- factor(x$cand_support,
levels = c("Hillary Clinton",
"Dontald Trump"))
cand.support <- as.numeric(cand.support)
cand.support
#Media Distrust scale
media.trust <- factor(x$media_trust,
levels = c("None of the time",
"Only some of the time",
"About half the time",
"Most of the time",
"All of the time"))
media.trust <- as.numeric(media.trust)
media.trust
media.distrust.scale <- data.frame(x$media_complete,
x$media_untrustworthy,
x$media_inaccurate,
x$media_unfair)
alpha(media.distrust.scale)
media.distrust.scale <- ((x$media_complete + x$media_untrustworthy +
x$media_inaccurate + x$media_unfair)/4)
describe(media.distrust.scale)
#Prayer scale construction
# 1 = Never, 5 = Several times a day
prayer <- factor(x$pray,
levels = c("Never",
"Once a week or less",
"A few times a week",
"Once a day",
"Several times a day"))
prayer <- as.numeric(prayer)
describe(prayer)
#Religious fundamentalism scale construction
# 1 = strongly disagree, 5 = strongly agree
rel.fund1 <- l2num.agree5(x$rel_fund1)
rel.fund2 <- l2num.agree5(x$rel_fund2)
rel.fund3 <- l2num.agree5(x$rel_fund3)
rel.fund4 <- l2num.agree5(x$rel_fund4)
rel.fund.scale <- data.frame(rel.fund1,
rel.fund2,
rel.fund3,
rel.fund4)
alpha(rel.fund.scale)
describe(rel.fund.scale)
rel.fund.scale <- ((rel.fund1 + rel.fund2 + rel.fund3 + rel.fund4)/4)
#Effect on voter likelihood scales - Trump
# 1 = less likely to vote, 3 = more likely to vote
cat2num.trump.effect <- function(y) {
y <- factor(y,
levels = c("Articles like this one make Trump supporters less likely to vote",
"Articles like this one do not affect Trump supporters' decision to vote",
"Articles like this one make Trump supporters more likely to vote"))
as.numeric(y)
}
trump.effect.iowa <- cat2num.trump.effect(x$effect_trump_iowa)
trump.effect.nc <- cat2num.trump.effect(x$effect_trump_NC)
trump.effect.ohio <- cat2num.trump.effect(x$effect_trump_ohio)
#Effect on voter likelihood scales - Clinton
# 1 = less likely to vote, 3 = more likely to vote
cat2num.clinton.effect <- function(y) {
y <- factor(y,
levels = c("Articles like this one make Clinton supporters less likely to vote",
"Articles like this one do not affect Clinton supporters' decision to vote",
"Articles like this one make Clinton supporters more likely to vote"))
as.numeric(y)
}
clinton.effect.iowa <- cat2num.clinton.effect(x$effect_clinton_iowa)
clinton.effect.nc <- cat2num.clinton.effect(x$effect_clinton_NC)
clinton.effect.ohio <- cat2num.clinton.effect(x$effect_clinton_ohio)
# Notes on the modeling choice:
# The dependent variables here take a small number of discrete outcomes rather than a
# continuous value. The intuition behind the MLE approach is that the linear predictor does
# not map directly onto the outcome; a latent variable determines how the IVs affect the DV,
# and a link function maps that latent scale onto the observed outcome.
# Which model to use depends on the distribution of the outcome: a count model for counts,
# logit/probit for a 0/1 outcome, and an ordered logit/probit when the DV is ordinal.
# In a logit model each coefficient is the change in the log odds for a one-unit change in
# the IV; an ordered logit gives one set of coefficients plus cutpoints on the same
# log-odds scale.
# Predicted-probability plots are usually easier to interpret than raw log-odds ratios.
|
# 1. Extracting MIEs from CTD for all chemicals to use as seeds in RWR --------------
#
# Getting the compounds and their related genes to use as seeds in the random-walk procedure
# input: http://ctdbase.org/reports/CTD_chem_gene_ixns.csv.gz in inputData/CTD_chem_gene_ixns.csv.gz
# output: list of chemical and their related MIEs (genes) outputData/chem2gene_no_out.RData
#
# compound-gene interactions, release of June 2020
ixns<-read.csv('inputData/CTD_june_2020/CTD_chem_gene_ixns.csv.gz',comment.char = c("#"),
stringsAsFactors = F)
colnames(ixns)<-c("ChemicalName","ChemicalID", "CasRN","GeneSymbol", "GeneID","GeneForms",
"Organism","OrganismID","Interaction" ,"InteractionActions", "PubMedIDs")
library(data.table)
ixns<-as.data.table(ixns)
ixns<-ixns[,paste(InteractionActions,collapse = '|'),by=list(chemID=ixns$ChemicalID,cas=ixns$CasRN,geneID=ixns$GeneID,geneSymbol=ixns$GeneSymbol)]
InterAction<-lapply(ixns$V1,function(x)strsplit(x,'|',fixed = T))
InterAction<-lapply(InterAction,function(x)sapply(x,strsplit,split='^',fixed=T))
type<-sort(unique(unlist(sapply(InterAction,function(x)sapply(x,function(y)y[2])))))
#making a matrix (m*n) with m=1044988 the number of gene-compound and n=51 interaction types
InterAction_mat<-matrix(FALSE,nrow=nrow(ixns),ncol=length(type),dimnames=list(NULL,type))
#InterAction_mat: a logical matrix with rows as chemical-gene interactions and columns as interaction types
for(i in 1:length(InterAction)){
for(j in 1:length(InterAction[[i]])){
InterAction_mat[i,InterAction[[i]][[j]][2]]<-TRUE
}
}
# METABOLIC INTERACTIONS:
# "acetylation" "acylation" "ADP-ribosylation" "alkylation"
# "amination" "carbamoylation" "carboxylation" "chemical synthesis"
# "cleavage" "degradation" "ethylation" "farnesylation"
# "geranoylation" "glucuronidation" "glutathionylation" "glycation"
# "glycosylation" "hydrolysis" "hydroxylation" "lipidation"
# "methylation" "N-linked glycosylation" "nitrosation" "O-linked glycosylation"
# "oxidation" "palmitoylation" "phosphorylation" "prenylation"
# "reduction" "ribosylation" "sulfation" "sumoylation"
# "ubiquitination"
idx_met_proc<-c(2,4:7,9:12,14,15,18,20:26,28,31,33:39,41,43,47,48,50)
#TRANSPORT INTERACTIONS:
# "export" "import" "secretion" "uptake"
idx_transport<-c(16,27,44,51)
InterAction_mat[,"metabolic processing"]<-InterAction_mat[,"metabolic processing"]|
apply(InterAction_mat[,idx_met_proc],1,any) #IF THE CHEMICAL-GENE PERTURBATION WAS TRUE FOR ANY OF THE METABOLIC INTERACTIONS IT GETS TRUE
InterAction_mat[,"transport"]<-InterAction_mat[,"transport"]|apply(InterAction_mat[,idx_transport],1,any) ##IF THE CHEMICAL-GENE PERTURBATION WAS TRUE FOR ANY OF THE TRANSPORT TYPES IT GETS TRUE
InterAction_mat<-InterAction_mat[,-c(idx_met_proc,idx_transport)] #SIMPLIFYING THE columns of the LOGICAL MATRIX into 14 interaction types
#Performing multiple correspondence analysis to get the variables which are more informative
### The plot is saved in plots folder
library(FactoMineR)
library(factoextra)
res.mca<-MCA(InterAction_mat,ncp = 14,graph = F)
fviz_eig(res.mca,ncp = 14,addlabels = T)
fviz_mca_var(res.mca,choice = 'var',axes = 1:2,repel = T,shape.var = 19,labelsize=8)+theme(axis.title = element_text(size=30))+labs(title='')
#'reaction','binding','activity','expression','metabolic processing' were the most distant types
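# Hedged addition (sketch): the contribution of each interaction type to the leading MCA
# dimensions can also be inspected directly, which is one way to back up the selection made
# below.
# factoextra::fviz_contrib(res.mca, choice = "var", axes = 1, top = 10)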
#
type.count<-function(x){
if (class(x)=='logical') x*1
else colSums(x)
}
sel_type<-c('reaction','binding','activity','expression','metabolic processing')
idx<-apply(InterAction_mat[,sel_type,drop=F],1,any)
ixns<-ixns[idx,]
InterAction_mat <- InterAction_mat[idx,sel_type]
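# Hedged check (sketch): quick sanity counts after restricting to the selected types.
# nrow(ixns)               # remaining chemical-gene pairs
# colSums(InterAction_mat) # support per retained interaction type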
##################################################### NEW CODE
# compile for each chemical a data.frame indicating for each gene the data sources supporting the interaction
provolone <- list()
selt <- c('R','B', 'A', 'E', 'M')
for(c in unique(ixns$chemID)){
test = ixns[ixns$chemID==c,]
mat = InterAction_mat[ixns$chemID==c,]
if(nrow(test)==1) {
provolone[[length(provolone)+1]] = test$geneID
}
else {
gid = unique(test$geneID)
sum_gid = c()
cat_gid = c()
for(g in gid) {
#print(which(g == test$geneID))
sum_gid = c(sum_gid, sum(mat[which(g == test$geneID),]))
cat_gid = c(cat_gid, paste(selt[which(mat[which(g == test$geneID),] == TRUE)], collapse="."))
}
provolone[[length(provolone)+1]] = data.frame(gid, sum_gid, cat_gid, row.names = 1)
}
}
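# Hedged alternative (sketch, not used downstream): the same per-chemical, per-gene summary
# can be built without the explicit loop by aggregating the logical matrix with data.table;
# column names follow the objects defined above.
# ixns_dt <- data.table::data.table(chemID = ixns$chemID, geneID = ixns$geneID,
#                                   InterAction_mat)
# per_gene <- ixns_dt[, lapply(.SD, any), by = .(chemID, geneID), .SDcols = sel_type]
# per_gene[, sum_gid := rowSums(.SD), .SDcols = sel_type]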
# compile for each chemical the chemical-gene interactions that need to be removed
mies_to_remove = lapply(provolone, function(x) {
if(!is.numeric(x)) {
if(nrow(x) > 50) {
id.EM = which(x$sum_gid == 1 & (x$cat_gid == 'E' | x$cat_gid == 'M'))
id.EM = c(id.EM, which(x$sum_gid == 2 & x$cat_gid == 'E.M'))
if((nrow(x) - length(id.EM)) > 200) {
id.REM = which(x$sum_gid == 1 & x$cat_gid == 'R')
id.REM = c(id.REM, which(x$sum_gid == 2 & x$cat_gid == 'R.E'))
id.REM = c(id.REM, which(x$sum_gid == 2 & x$cat_gid == 'R.M'))
id.REM = c(id.REM, which(x$sum_gid == 3 & x$cat_gid == 'R.E.M'))
if((nrow(x) - (length(id.EM) + length(id.REM))) > 200) {
id.AB = which(x$sum_gid == 1 & (x$cat_gid == 'A' | x$cat_gid == 'B'))
if((nrow(x) - (length(id.EM) + length(id.REM) + length(id.AB))) > 300) {
tab = table(x$cat_gid[-c(id.EM, id.REM, id.AB)])
print(tab[which(tab > 0)])
print(length(x$cat_gid[-c(id.EM, id.REM, id.AB)]))
print("-------------------------------------------------------------")
#return(-1)
return(c(rownames(x)[id.EM], rownames(x)[id.REM], rownames(x)[id.AB]))
}
return(c(rownames(x)[id.EM], rownames(x)[id.REM], rownames(x)[id.AB]))
}
return(c(rownames(x)[id.EM], rownames(x)[id.REM]))
}
return(rownames(x)[id.EM])
}
else NULL
}
else NULL
})
# determine the final set of entrez gene ids associated to each chemical
final_set_mies = list()
for(i in 1:length(provolone)) {
if(is.data.frame(provolone[[i]])) {
if(nrow(provolone[[i]]) > 50 & length(mies_to_remove[[i]]) > 0) {
final_set_mies[[i]] = setdiff(rownames(provolone[[i]]), mies_to_remove[[i]])
}
else final_set_mies[[i]] = rownames(provolone[[i]])
}
else
final_set_mies[[i]] = provolone[[i]]
}
names(final_set_mies) = names(mies_to_remove) = names(provolone) = unique(ixns$chemID)
# NOTE: if no element matches, which() returns integer(0) and x[-integer(0)] drops everything;
# guarding with e.g. keep <- sapply(final_set_mies, length) <= 100 would be safer.
final_set_mies<-final_set_mies[-which(sapply(final_set_mies,length)>100)]
final_set_mies<-final_set_mies[-which(sapply(final_set_mies, length)==0)]
chem2gene<-final_set_mies
save(list = c('final_set_mies', 'mies_to_remove', 'provolone'), file = "outputData/new_MIES_set.RData")
save(chem2gene,file = 'outputData/chem2gene_no_out.RData')
##makes a dictionary with three columns cas name mesh
ixns<-read.csv('inputData/CTD_chemicals.csv.gz',comment.char = c("#"),stringsAsFactors = F,header = F) #all compounds from CTD
ixns$V2<-gsub(pattern = 'MESH:',replacement = '',ixns$V2)
library(data.table)
ixns<-as.data.table(ixns)
ixns<-ixns[,c(1,2,3)]
colnames(ixns)<-c('name','mesh','cas')
chem_gene<-read.csv('inputData/CTD_chem_gene_ixns.csv.gz',comment.char = c("#"),stringsAsFactors = F) #around 2,000,000 gene-chemical interactions with 11 variables
chem_gene<-unique(chem_gene[,c("ChemicalName","ChemicalID","CasRN")])
colnames(chem_gene)<-c('name','mesh','cas')
ixns<-merge(ixns,chem_gene,by = c('name','mesh','cas'),all=T)
ixns<-unique(ixns)
write.csv(ixns,file = 'inputData/annotaion/chem_mesh_cas_dictionary.csv')
| /scripts/1_1_MIEs_from_CTD.R | no_license | parnazM/EDTOX | R | false | false | 8,174 | r | # 1. Extracting MIEs from CTD for all chemicals to use as seeds in RWR --------------
#
# Getting the compounds and their related genes to use as seeds in the random-walk procedure
# input: http://ctdbase.org/reports/CTD_chem_gene_ixns.csv.gz in inputData/CTD_chem_gene_ixns.csv.gz
# output: list of chemical and their related MIEs (genes) outputData/chem2gene_no_out.RData
#
# compound gene interarctions release of june 2020
ixns<-read.csv('inputData/CTD_june_2020/CTD_chem_gene_ixns.csv.gz',comment.char = c("#"),
stringsAsFactors = F)
colnames(ixns)<-c("ChemicalName","ChemicalID", "CasRN","GeneSymbol", "GeneID","GeneForms",
"Organism","OrganismID","Interaction" ,"InteractionActions", "PubMedIDs")
library(data.table)
ixns<-as.data.table(ixns)
ixns<-ixns[,paste(InteractionActions,collapse = '|'),by=list(chemID=ixns$ChemicalID,cas=ixns$CasRN,geneID=ixns$GeneID,geneSymbol=ixns$GeneSymbol)]
InterAction<-lapply(ixns$V1,function(x)strsplit(x,'|',fixed = T))
InterAction<-lapply(InterAction,function(x)sapply(x,strsplit,split='^',fixed=T))
type<-sort(unique(unlist(sapply(InterAction,function(x)sapply(x,function(y)y[2])))))
#making a matrix (m*n) with m=1044988 the number of gene-compound and n=51 interaction types
InterAction_mat<-matrix(FALSE,nrow=nrow(ixns),ncol=length(type),dimnames=list(NULL,type))
#Interaction_mat: a logical matrix with rows: chemical-gene interatction and columns as Interaction types
for(i in 1:length(InterAction)){
for(j in 1:length(InterAction[[i]])){
InterAction_mat[i,InterAction[[i]][[j]][2]]<-TRUE
}
}
# METABOLIC INTERACTIONS:
# "acetylation" "acylation" "ADP-ribosylation" "alkylation"
# "amination" "carbamoylation" "carboxylation" "chemical synthesis"
# "cleavage" "degradation" "ethylation" "farnesylation"
# "geranoylation" "glucuronidation" "glutathionylation" "glycation"
# "glycosylation" "hydrolysis" "hydroxylation" "lipidation"
# "methylation" "N-linked glycosylation" "nitrosation" "O-linked glycosylation"
# "oxidation" "palmitoylation" "phosphorylation" "prenylation"
# "reduction" "ribosylation" "sulfation" "sumoylation"
# "ubiquitination"
idx_met_proc<-c(2,4:7,9:12,14,15,18,20:26,28,31,33:39,41,43,47,48,50)
#TRANSPORT INTERACTIONS:
# "export" "import" "secretion" "uptake"
idx_transport<-c(16,27,44,51)
InterAction_mat[,"metabolic processing"]<-InterAction_mat[,"metabolic processing"]|
apply(InterAction_mat[,idx_met_proc],1,any) #IF THE CHEMICAL-GENE PERTURBATION WAS 1 IN ANY OF THE METABOLIC INTERAACTIONS GETS 1
InterAction_mat[,"transport"]<-InterAction_mat[,"transport"]|apply(InterAction_mat[,idx_transport],1,any) ##IF THE CHEMICAL-GENE PERTURBATION WAS 1 IN ANY OF THE TRANSPORT GETS 1
InterAction_mat<-InterAction_mat[,-c(idx_met_proc,idx_transport)] #SIMPLIFYING THE columns of the LOGICAL MATRIX into 14 interaction types
#Performing multiple correspondence analysis to get the variables which are more informative
### The plot is saved in plots folder
#Performing multiple correspondence analysis to get the variables which are more informative
### The plot is saved in plots folder
library(FactoMineR)
library(factoextra)
res.mca<-MCA(InterAction_mat,ncp = 14,graph = F)
fviz_eig(res.mca,ncp = 14,addlabels = T)
fviz_mca_var(res.mca,choice = 'var',axes = 1:2,repel = T,shape.var = 19,labelsize=8)+theme(axis.title = element_text(size=30))+labs(title='')
#reaction','binding','activity','expression','metabolic processing' were the more distant types
#
type.count<-function(x){
if (class(x)=='logical') x*1
else colSums(x)
}
sel_type<-c('reaction','binding','activity','expression','metabolic processing')
idx<-apply(InterAction_mat[,sel_type,drop=F],1,any)
ixns<-ixns[idx,]
InterAction_mat <- InterAction_mat[idx,sel_type]
##################################################### NEW CODE
# compile for each chemical a data.frame indicating for each gene the data sources supporting the interaction
provolone <- list()
selt <- c('R','B', 'A', 'E', 'M')
for(c in unique(ixns$chemID)){
test = ixns[ixns$chemID==c,]
mat = InterAction_mat[ixns$chemID==c,]
if(nrow(test)==1) {
provolone[[length(provolone)+1]] = test$geneID
}
else {
gid = unique(test$geneID)
sum_gid = c()
cat_gid = c()
for(g in gid) {
#print(which(g == test$geneID))
sum_gid = c(sum_gid, sum(mat[which(g == test$geneID),]))
cat_gid = c(cat_gid, paste(selt[which(mat[which(g == test$geneID),] == TRUE)], collapse="."))
}
provolone[[length(provolone)+1]] = data.frame(gid, sum_gid, cat_gid, row.names = 1)
}
}
# compile for each chemical the chemical-gene interactions that need to be removed
mies_to_remove = lapply(provolone, function(x) {
if(!is.numeric(x)) {
if(nrow(x) > 50) {
id.EM = which(x$sum_gid == 1 & (x$cat_gid == 'E' | x$cat_gid == 'M'))
id.EM = c(id.EM, which(x$sum_gid == 2 & x$cat_gid == 'E.M'))
if((nrow(x) - length(id.EM)) > 200) {
id.REM = which(x$sum_gid == 1 & x$cat_gid == 'R')
id.REM = c(id.REM, which(x$sum_gid == 2 & x$cat_gid == 'R.E'))
id.REM = c(id.REM, which(x$sum_gid == 2 & x$cat_gid == 'R.M'))
id.REM = c(id.REM, which(x$sum_gid == 3 & x$cat_gid == 'R.E.M'))
if((nrow(x) - (length(id.EM) + length(id.REM))) > 200) {
id.AB = which(x$sum_gid == 1 & (x$cat_gid == 'A' | x$cat_gid == 'B'))
if((nrow(x) - (length(id.EM) + length(id.REM) + length(id.AB))) > 300) {
tab = table(x$cat_gid[-c(id.EM, id.REM, id.AB)])
print(tab[which(tab > 0)])
print(length(x$cat_gid[-c(id.EM, id.REM, id.AB)]))
print("-------------------------------------------------------------")
#return(-1)
return(c(rownames(x)[id.EM], rownames(x)[id.REM], rownames(x)[id.AB]))
}
return(c(rownames(x)[id.EM], rownames(x)[id.REM], rownames(x)[id.AB]))
}
return(c(rownames(x)[id.EM], rownames(x)[id.REM]))
}
return(rownames(x)[id.EM])
}
else NULL
}
else NULL
})
# determine the final set of entrez gene ids associated to each chemical
final_set_mies = list()
for(i in 1:length(provolone)) {
if(is.data.frame(provolone[[i]])) {
if(nrow(provolone[[i]]) > 50 & length(mies_to_remove[[i]]) > 0) {
final_set_mies[[i]] = setdiff(rownames(provolone[[i]]), mies_to_remove[[i]])
}
else final_set_mies[[i]] = rownames(provolone[[i]])
}
else
final_set_mies[[i]] = provolone[[i]]
}
names(final_set_mies) = names(mies_to_remove) = names(provolone) = unique(ixns$chemID)
final_set_mies<-final_set_mies[-which(sapply(final_set_mies,length)>100)]
final_set_mies<-final_set_mies[-which(sapply(final_set_mies, length)==0)]
chem2gene<-final_set_mies
save(list = c('final_set_mies', 'mies_to_remove', 'provolone'), file = "outputData/new_MIES_set.RData")
save(chem2gene,file = 'outputData/chem2gene_no_out.RData')
##makes a dictionary with three columns cas name mesh
ixns<-read.csv('inputData/CTD_chemicals.csv.gz',comment.char = c("#"),stringsAsFactors = F,header = F) #all compounds from CTD
ixns$V2<-gsub(pattern = 'MESH:',replacement = '',ixns$V2)
library(data.table)
ixns<-as.data.table(ixns)
ixns<-ixns[,c(1,2,3)]
colnames(ixns)<-c('name','mesh','cas')
chem_gene<-read.csv('inputData/CTD_chem_gene_ixns.csv.gz',comment.char = c("#"),stringsAsFactors = F) #around 2000000 gene chmical interaction with 11 variables
chem_gene<-unique(chem_gene[,c("ChemicalName","ChemicalID","CasRN")])
colnames(chem_gene)<-c('name','mesh','cas')
ixns<-merge(ixns,chem_gene,by = c('name','mesh','cas'),all=T)
ixns<-unique(ixns)
write.csv(ixns,file = 'inputData/annotaion/chem_mesh_cas_dictionary.csv')
|
#Read the whole dataset and format columns
dataset <- read.table("household_power_consumption.txt",header = TRUE, sep = ";", na="?")
dataset$Time <- strptime(paste(dataset$Date, dataset$Time),"%d/%m/%Y %H:%M:%S")
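# Hedged note (illustrative alternative): strptime() returns POSIXlt, which can be awkward
# inside a data.frame; as.POSIXct() gives the more common column type for the same parse:
# dataset$Time <- as.POSIXct(paste(dataset$Date, dataset$Time), format = "%d/%m/%Y %H:%M:%S")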
dataset$Date <- as.Date(dataset$Date,"%d/%m/%Y")
#Subset the target rows
selectedRows <- as.Date(c("2007-02-01","2007-02-02"),"%Y-%m-%d")
dataset <- subset(dataset, Date %in% selectedRows)
#Reconstruct Plot 1
png("plot1.png", width = 480, height = 480)
hist(dataset$Global_active_power, main = "Global Active Power", xlab="Global Active Power (Kilowatts)", ylab = "Frequency", col="red")
dev.off()
| /plot1.R | no_license | figoeric/ExData_Plotting1 | R | false | false | 626 | r | #Read the whole dataset and format columns
dataset <- read.table("household_power_consumption.txt",header = TRUE, sep = ";", na="?")
dataset$Time <- strptime(paste(dataset$Date, dataset$Time),"%d/%m/%Y %H:%M:%S")
dataset$Date <- as.Date(dataset$Date,"%d/%m/%Y")
#Subset the target rows
selectedRows <- as.Date(c("2007-02-01","2007-02-02"),"%Y-%m-%d")
dataset <- subset(dataset, Date %in% selectedRows)
#Reconstruct Plot 1
png("plot1.png", width = 480, height = 480)
hist(dataset$Global_active_power, main = "Global Active Power", xlab="Global Active Power (Kilowatts)", ylab = "Frequency", col="red")
dev.off()
|
#' @title Gulf of Alaska Time Series Heat Plots
#' @description Creates either temperature or salinity plots of core EcoFOCI
#' stations in the Gulf of Alaska. Each plot displays the average temperature
#' or salinity for each meter of depth of the core stations for each year for
#' the months when peak sampling of these regions occurred. Line 8 and Semidi
#' area are most commonly sampled in May and June, which is considered
#' Spring. Summer sampling in the Gulf of Alaska has been less frequent and
#' starts in the early 2000's. This summer sampling is in the Semidi core
#' area; summer is considered August and September. Post 2010, these core
#' stations were only sampled in odd numbered years. In the future more core
#' areas will be added.
#' @param hist_data Supply the path and .csv file name of the historical data
#' in quotations. The historical data must be in the format created by
#' the FastrCAT::make_dataframe_fc, which is the same format EcoDAAT exports.
#' Historic data must be queried from EcoDAAT and saved as a .csv file. In the
#' future there will be a direct link to the Oracle database where EcoDAAT data
#' is housed.
#' @param core_stations There are three core areas for the Gulf of Alaska which
#' have been regularly sampled and are representative of the Gulf of Alaska.
#' Core areas are all in bottom depth at or greater than 100 and less than 150
#' meters. Salinity or temperature data from the surface to 100m will be shown.
#' core_stations are as follows: "Line 8 FOX", "Line 8", "Semidi Spring" and
#' "Semidi Summer".
#' Line 8 FOX is a set of six stations with a bounding box of `57.52, -155.2,
#' 57.72, -154.85`. Line 8 are the 4 core stations of Line 8 FOX. Semidi are
#' 6- 8 stations centrally located in the Semidi area and have been consistently
#' sampled, with a bounding box of '55.1, -158.5, 55.9, -158.0'.
#' @param plot_type will accept one of two quoted characters "temperature" or
#' "salinity".
#' @param fastcat_data An optional argument if you want to add the current year's
#' fastcat data. Supply the path and .csv file name to the current year's
#' fastcat data. This must be in the format created by the FastrCAT function
#' make_dataframe_fc.
#' @param min_depth Minimum depth which is shown on the plot. The default is set
#' as 0 meters.
#' @param max_depth Maximum depth which is displayed on the plot. The default
#' is set at 100 meters.
#' @param anomaly An optional argument if you want the anomaly of the plot_type
#' selected. This argument is set to FALSE, when set to TRUE then the anomaly
#' will be plotted. The anomaly is calculated using the ...something equation.
#' @return A depth by year tile plot of temperature or salinity. The plot will
#' be written to the folder designated by the historical data file path. The
#' plot will be in .png format. It should be noted that the plot throws out
#' the 0 depth value. 0 depth can and has been problematic for fastcat data.
#' @export plot_time_series
plot_time_series <- function(hist_data, core_stations, plot_type,
min_depth = 0, max_depth = 100,
fastcat_data = FALSE, anomaly = FALSE){
fast_col_types <- readr::cols_only(
DATE = readr::col_date(format = "%Y-%m-%d"),
DEPTH_BOTTOM = readr::col_integer(),
DEPTH = readr::col_integer(),
TEMPERATURE1 = readr::col_double(),
SALINITY1 = readr::col_double(),
LAT = readr::col_double(),
LON = readr::col_double())
old_data <- readr::read_csv(hist_data, col_types = fast_col_types)
# NOTE: as written, a character path passed to fastcat_data matches neither branch below
# ("path" == FALSE and "path" == TRUE are both FALSE), so time_data would end up NULL;
# testing isFALSE(fastcat_data) and using a plain else would make the documented usage work.
time_data <- if(fastcat_data == FALSE){
time_data <- old_data
} else if (fastcat_data == TRUE){
time_data <- fastcat_data <- readr::read_csv(fastcat_data,
col_types = fast_col_types)%>%
dplyr::bind_rows(old_data)
}
range_filter <- if(core_stations == "Line 8"){ # 4 FOX stations + 2
time_data %>%
dplyr::filter(LAT >= 57.52417 & LAT <= 57.71517)%>%
dplyr::filter(LON >= -155.2673 & LON <= -154.7725)%>%
dplyr::filter(lubridate::month(DATE) %in% c(5,6)) #May and June
} else if (core_stations == "Line 8 FOX"){ # 4 Stations
time_data %>%
dplyr::filter(LAT >= 57.52417 & LAT <= 57.71517)%>%
dplyr::filter(LON >= -155.2 & LON <= -154.85)%>%
dplyr::filter(lubridate::month(DATE) %in% c(5,6)) #May and June
} else if(core_stations == "Semidi Spring"){ # .5 x 1 degree area
time_data %>%
dplyr::filter(LAT >= 55.2 & LAT <= 57.2)%>%
dplyr::filter(LON >= -159 & LON <= -158)%>%
dplyr::filter(lubridate::month(DATE) %in% c(5,6))%>% #May and June
dplyr::filter(DEPTH_BOTTOM >= 100 & DEPTH_BOTTOM <= 150)
} else if(core_stations == "Semidi Summer"){
time_data %>%
dplyr::filter(LAT >= 55.2 & LAT <= 57.2)%>%
dplyr::filter(LON >= -159 & LON <= -158)%>%
dplyr::filter(lubridate::month(DATE) %in% c(8,9))%>% #August and September
dplyr::filter(DEPTH_BOTTOM >= 100 & DEPTH_BOTTOM <= 150)
}
plot_data <- if(plot_type == "temperature"){
if(anomaly == FALSE){
range_filter %>%
dplyr::filter(DEPTH <= max_depth & DEPTH > min_depth)%>%
dplyr::group_by(lubridate::year(DATE), DEPTH)%>%
dplyr::summarise(mean_yr = mean(TEMPERATURE1, na.rm = TRUE))
}else if(anomaly == TRUE){
# place anomaly equation here
}
} else if(plot_type == "salinity"){
if(anomaly == FALSE){
range_filter %>%
dplyr::filter(DEPTH <= max_depth & DEPTH > min_depth)%>%
dplyr::group_by(lubridate::year(DATE), DEPTH)%>%
dplyr::summarise(mean_yr = mean(SALINITY1, na.rm = TRUE))
}else if(anomaly == TRUE){
# place anomaly equation here
}
}
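# Hedged sketch (an assumption, not the package's published method): one common way to fill
# the "place anomaly equation here" branches above is the departure of each year/depth mean
# from the long-term mean at that depth, e.g. for temperature:
# plot_data <- range_filter %>%
#   dplyr::filter(DEPTH <= max_depth & DEPTH > min_depth) %>%
#   dplyr::group_by(lubridate::year(DATE), DEPTH) %>%
#   dplyr::summarise(mean_yr = mean(TEMPERATURE1, na.rm = TRUE)) %>%
#   dplyr::group_by(DEPTH) %>%
#   dplyr::mutate(mean_yr = mean_yr - mean(mean_yr, na.rm = TRUE))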
# temperature color palette red to blue, did not use oce for temperature
# it was decided red to blue was more intuitive. Salinity color ramp
# is the salinity oce color ramp. ---------------------------------------------
plot_color <- if(plot_type == "temperature"){
c("#0E0C71", "#1D159E", "#1D26C9", "#174CBB", "#2F63B3",
"#4977B2", "#658AB4", "#7F9DB9", "#9AAFC1", "#B6C3CD",
#"#D2D7DB" "#EEEDED" "#E0D2CF"
"#D6B8B1", "#CB9F93", "#C28676", "#B86E5A", "#AD543E",
"#A23925", "#931A11", "#7B0413", "#5D0311", "#41000B")
}else if(plot_type == "salinity"){
c("#2B1470", "#2C1D8A", "#212F96", "#114293", "#08518F", "#0E5E8B",
"#1A6989", "#267488", "#318088", "#3A8B88", "#439787", "#4BA385",
"#56AF81", "#64BA7B", "#77C574", "#91CF6C", "#B0D66C", "#CCDE78",
"#E6E58A", "#FEEEA0")
}
# expressions for either salinity or temperature legend -----------------------
legend_name <- if(plot_type == "temperature"){
expression(bold( ~degree~C ))
} else if (plot_type == "salinity"){
expression(bold("PSU"))
}
# time range of plot data, names different for each year ----------------------
time_range <- paste(min(plot_data$`lubridate::year(DATE)`), "_",
max(plot_data$`lubridate::year(DATE)`), sep = "")
# the directory to send the plot to -------------------------------------------
current_path <- unlist(stringr::str_split(hist_data, "/"))
current_path <- paste(current_path[ 1:length(current_path)-1], collapse = "/" )
# Check for plot folder--------------------------------------------------------
if(dir.exists(paste(current_path,"/plots",sep = "")) == FALSE){
dir.create(paste(current_path,"/plots",sep = ""))
}
# name time series plot to write to file --------------------------------------
name_time_series_plot <- paste(current_path, "/plots/", core_stations, "_",
plot_type, "_", time_range, "_", min_depth,
"to", max_depth,"m", ".png",sep = "")
# Time series plot ------------------------------------------------------------
time_series_plot <- ggplot2::ggplot()+
ggplot2::geom_tile(ggplot2::aes(x = `lubridate::year(DATE)`, y = -(DEPTH), fill = mean_yr),
data = plot_data)+
ggplot2::scale_fill_gradientn(colors = plot_color, name = legend_name)+
ggplot2::scale_y_continuous(breaks = -(seq(0,100 ,by = 10)),
labels = abs(seq(0,100 ,by = 10)))+
ggplot2::theme_bw()+
ggplot2::ylab(label = "Depth m")+
ggplot2::xlab(label = "Year")+
ggplot2::ggtitle(label = paste(core_stations,": from ", min_depth, " to ",
max_depth, " meters", sep = ""))+
ggplot2::theme(
axis.text.y = ggplot2::element_text(face = "bold", size = 18),
axis.text.x = ggplot2::element_text(face = "bold", size = 18),
axis.title.x = ggplot2::element_text(face = "bold", size = 20),
axis.title.y = ggplot2::element_text(face = "bold", size = 20),
title = ggplot2::element_text(face = "bold", size = 18),
legend.key.height = ggplot2::unit(38, "mm"),
legend.key.width = ggplot2::unit(8, "mm"),
legend.text = ggplot2::element_text(face = "bold", size = 16))
grDevices::png(filename = name_time_series_plot, width = 325, height = 250,
units = "mm", res = 350, bg = "transparent")
print(time_series_plot)
grDevices::dev.off()
}
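# Hedged usage example (sketch; the file path is a placeholder, not a real file):
# plot_time_series(hist_data = "outputData/goa_historic_fastcat.csv",
#                  core_stations = "Line 8", plot_type = "temperature")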
| /FastrCAT/R/plot_time_series.R | permissive | Copepoda/FastrCAT | R | false | false | 9,247 | r | #' @title Gulf of Alaska Time Series Heat Plots
#' @description Creates either temperature or salinity plots of core EcoFOCI
#' stations in the Gulf of Alaska. Each plot displays the average temperature
#' or salinity for each meter of depth of the core stations for each year for
#' the months when peak sampling of these regions occured. Line 8 and Semidi
#' area are most commonly sampled in May and June, which is considered
#' Spring. Summer sampling in the Gulf of Alaska has been less frequent and
#' starts in the early 2000's. This summer sampling is in the Semidi core
#' area, summer is considered August and September.Post 2010, these core
#' stations were only sampled in odd numbered years. In the future more core
#' areas will be added.
#' @param hist_data Supply the path and .csv file name of the historical data
#' in quotations. The historical data must be in the format created by
#' the FastrCAT::make_dataframe_fc, which is the same format EcoDAAT exports.
#' Historic data must be queried from EcoDAAT and saved as a .csv file. In the
#' future there will be a direct link to the Oracle database where EcoDAAT data
#' is housed.
#' @param core_stations There are three core areas for the Gulf of Alaska which
#' have been regularly sampled and are representative of the Gulf of Alaska.
#' Core areas are all in bottom depth at or greater than 100 and less than 150
#' meters. Salintiy or temperature data from the surface to 100m will be shown.
#' core_stations are as follows: "Line 8 FOX", "Line 8", "Semidi Spring" and
#' "Semidi Summer".
#' Line 8 FOX is a set of six stations with a bounding box of `57.52, -155.2,
#' 57.72, -154.85`. Line 8 are the 4 core stations of Line 8 FOX. Semidi are
#' 6- 8 stations centrally located in the Semidi area and have been consistently
#' sampled, with a bounding box of '55.1, -158.5, 55.9, -158.0'.
#' @param plot_type will accept on of two quoted characters "temperature" or
#' "salinity".
#' @param fastcat_data An optional argument if you want to add the current years
#' fastcat data. Supply the path and .csv file name to the current years
#' fastcat data. This must be in the format created by the FastrCAT function
#' make_dataframe_fc.
#' @param min_depth Minimum depth which is shown on the plot. The default is set
#' as 0 meters.
#' @param max_depth Maximum depth which is displayed on the plot. The default
#' is set at 100 meters.
#' @param anomaly An optional argument if you want the anomaly of the plot_type
#' selected. This argument is set to FALSE, when set to TRUE then the anomaly
#' will be plotted. The anomoly is calculated using the ...something equation.
#' @return A depth by year tile plot of temperature or salinity. The plot will
#' be written to the folder designated by the historical data file path. The
#' plot will be in .png format. It should be noted that the plot throws out
#' the 0 depth value. 0 depth can and has been problematic for fastcat data.
#' @export plot_time_series
plot_time_series <- function(hist_data, core_stations, plot_type,
min_depth = 0, max_depth = 100,
fastcat_data = FALSE, anomaly = FALSE){
fast_col_types <- readr::cols_only(
DATE = readr::col_date(format = "%Y-%m-%d"),
DEPTH_BOTTOM = readr::col_integer(),
DEPTH = readr::col_integer(),
TEMPERATURE1 = readr::col_double(),
SALINITY1 = readr::col_double(),
LAT = readr::col_double(),
LON = readr::col_double())
old_data <- readr::read_csv(hist_data, col_types = fast_col_types)
time_data <- if(fastcat_data == FALSE){
time_data <- old_data
} else if (fastcat_data == TRUE){
time_data <- fastcat_data <- readr::read_csv(fastcat_data,
col_types = fast_col_types)%>%
dplyr::bind_rows(old_data)
}
range_filter <- if(core_stations == "Line 8"){ # 4 FOX stations + 2
time_data %>%
dplyr::filter(LAT >= 57.52417 & LAT <= 57.71517)%>%
dplyr::filter(LON >= -155.2673 & LON <= -154.7725)%>%
dplyr::filter(lubridate::month(DATE) %in% c(5,6)) #May and June
} else if (core_stations == "Line 8 FOX"){ # 4 Stations
time_data %>%
dplyr::filter(LAT >= 57.52417 & LAT <= 57.71517)%>%
dplyr::filter(LON >= -155.2 & LON <= -154.85)%>%
dplyr::filter(lubridate::month(DATE) %in% c(5,6)) #May and June
} else if(core_stations == "Semidi Spring"){ # .5 x 1 degree area
time_data %>%
dplyr::filter(LAT >= 55.2 & LAT <= 57.2)%>%
dplyr::filter(LON >= -159 & LON <= -158)%>%
dplyr::filter(lubridate::month(DATE) %in% c(5,6))%>% #May and June
dplyr::filter(DEPTH_BOTTOM >= 100 & DEPTH_BOTTOM <= 150)
} else if(core_stations == "Semidi Summer"){
time_data %>%
dplyr::filter(LAT >= 55.2 & LAT <= 57.2)%>%
dplyr::filter(LON >= -159 & LON <= -158)%>%
dplyr::filter(lubridate::month(DATE) %in% c(8,9))%>% #August and September
dplyr::filter(DEPTH_BOTTOM >= 100 & DEPTH_BOTTOM <= 150)
}
plot_data <- if(plot_type == "temperature"){
if(anomaly == FALSE){
range_filter %>%
dplyr::filter(DEPTH <= max_depth & DEPTH > min_depth)%>%
dplyr::group_by(lubridate::year(DATE), DEPTH)%>%
dplyr::summarise(mean_yr = mean(TEMPERATURE1, na.rm = TRUE))
}else if(anomaly == TRUE){
# place anomaly equation here
}
} else if(plot_type == "salinity"){
if(anomaly == FALSE){
range_filter %>%
dplyr::filter(DEPTH <= max_depth & DEPTH > min_depth)%>%
dplyr::group_by(lubridate::year(DATE), DEPTH)%>%
dplyr::summarise(mean_yr = mean(SALINITY1, na.rm = TRUE))
}else if(anomaly == TRUE){
# place anomaly equation here
}
}
# temperature color pallete red to blue, did not use oce for temperature
# it was decided red to blue was more intuitive. Salinity color ramp
# is the salinity oce color ramp. ---------------------------------------------
plot_color <- if(plot_type == "temperature"){
c("#0E0C71", "#1D159E", "#1D26C9", "#174CBB", "#2F63B3",
"#4977B2", "#658AB4", "#7F9DB9", "#9AAFC1", "#B6C3CD",
#"#D2D7DB" "#EEEDED" "#E0D2CF"
"#D6B8B1", "#CB9F93", "#C28676", "#B86E5A", "#AD543E",
"#A23925", "#931A11", "#7B0413", "#5D0311", "#41000B")
}else if(plot_type == "salinity"){
c("#2B1470", "#2C1D8A", "#212F96", "#114293", "#08518F", "#0E5E8B",
"#1A6989", "#267488", "#318088", "#3A8B88", "#439787", "#4BA385",
"#56AF81", "#64BA7B", "#77C574", "#91CF6C", "#B0D66C", "#CCDE78",
"#E6E58A", "#FEEEA0")
}
# expressions for either salinity or temperature legend -----------------------
legend_name <- if(plot_type == "temperature"){
expression(bold( ~degree~C ))
} else if (plot_type == "salinity"){
expression(bold("PSU"))
}
# time range of plot data, names different for each year ----------------------
time_range <- paste(min(plot_data$`lubridate::year(DATE)`), "_",
max(plot_data$`lubridate::year(DATE)`), sep = "")
# the directory to send the plot to -------------------------------------------
current_path <- unlist(stringr::str_split(hist_data, "/"))
current_path <- paste(current_path[ 1:length(current_path)-1], collapse = "/" )
# Check for plot folder--------------------------------------------------------
if(dir.exists(paste(current_path,"/plots",sep = "")) == FALSE){
dir.create(paste(current_path,"/plots",sep = ""))
}
# name time series plot to write to file --------------------------------------
name_time_series_plot <- paste(current_path, "/plots/", core_stations, "_",
plot_type, "_", time_range, "_", min_depth,
"to", max_depth,"m", ".png",sep = "")
# Time series plot ------------------------------------------------------------
time_series_plot <- ggplot2::ggplot()+
ggplot2::geom_tile(ggplot2::aes(x = `lubridate::year(DATE)`, y = -(DEPTH), fill = mean_yr),
data = plot_data)+
ggplot2::scale_fill_gradientn(colors = plot_color, name = legend_name)+
ggplot2::scale_y_continuous(breaks = -(seq(0,100 ,by = 10)),
labels = abs(seq(0,100 ,by = 10)))+
ggplot2::theme_bw()+
ggplot2::ylab(label = "Depth m")+
ggplot2::xlab(label = "Year")+
ggplot2::ggtitle(label = paste(core_stations,": from ", min_depth, " to ",
max_depth, " meters", sep = ""))+
ggplot2::theme(
axis.text.y = ggplot2::element_text(face = "bold", size = 18),
axis.text.x = ggplot2::element_text(face = "bold", size = 18),
axis.title.x = ggplot2::element_text(face = "bold", size = 20),
axis.title.y = ggplot2::element_text(face = "bold", size = 20),
title = ggplot2::element_text(face = "bold", size = 18),
legend.key.height = ggplot2::unit(38, "mm"),
legend.key.width = ggplot2::unit(8, "mm"),
legend.text = ggplot2::element_text(face = "bold", size = 16))
grDevices::png(filename = name_time_series_plot, width = 325, height = 250,
units = "mm", res = 350, bg = "transparent")
print(time_series_plot)
grDevices::dev.off()
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/prompt-git.R
\name{prompt_git}
\alias{prompt_git}
\title{An example 'git' prompt}
\usage{
prompt_git(...)
}
\arguments{
\item{...}{Unused.}
}
\description{
It shows the current branch, whether there are
commits to push or pull to the default remote,
and whether the working directory is dirty.
}
\examples{
\dontrun{
set_prompt(prompt_git)
}
}
\seealso{
Other example.prompts: \code{\link{prompt_devtools}},
\code{\link{prompt_error}}, \code{\link{prompt_fancy}},
\code{\link{prompt_memuse}}, \code{\link{prompt_runtime}}
}
| /man/prompt_git.Rd | no_license | Robinlovelace/prompt | R | false | true | 609 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ec2_operations.R
\name{ec2_enable_volume_io}
\alias{ec2_enable_volume_io}
\title{Enables I/O operations for a volume that had I/O operations disabled
because the data on the volume was potentially inconsistent}
\usage{
ec2_enable_volume_io(DryRun, VolumeId)
}
\arguments{
\item{DryRun}{Checks whether you have the required permissions for the action, without
actually making the request, and provides an error response. If you have
the required permissions, the error response is \code{DryRunOperation}.
Otherwise, it is \code{UnauthorizedOperation}.}
\item{VolumeId}{[required] The ID of the volume.}
}
\description{
Enables I/O operations for a volume that had I/O operations disabled
because the data on the volume was potentially inconsistent.
}
\section{Request syntax}{
\preformatted{svc$enable_volume_io(
DryRun = TRUE|FALSE,
VolumeId = "string"
)
}
}
\examples{
# This example enables I/O on volume `vol-1234567890abcdef0`.
\donttest{svc$enable_volume_io(
VolumeId = "vol-1234567890abcdef0"
)}
}
\keyword{internal}
| /paws/man/ec2_enable_volume_io.Rd | permissive | ryanb8/paws | R | false | true | 1,110 | rd |
################################
# Create map of the study area #
################################
# Jolien Goosens - Flanders Marine Institute (VLIZ) / Marine Biology Research Group, Ghent University (Marbiol)
# R version 3.6.2
#### Load packages ####
library(ggplot2)
library(rgdal)
library(raster)
library(dplyr)
library(ggsn)
library(RColorBrewer)
library(lubridate)
library(ggpubr)
#### Read data ####
# Read windfarms polygons
wind <- readOGR("data/external/mapping/Emodnet_HA_WindFarms_20181120",
layer = "Emodnet_HA_WindFarms_pg_20181120")
# Read EEZ
eezbel <- readOGR("data/external/mapping/eezbel", layer = "eez") # Belgium
eezned <- readOGR("data/external/mapping/eezned", layer = "eez") # Netherlands
eezuk <- readOGR("data/external/mapping/eezuk", layer = "eez") # UK
# Read shapefiles area
netherlands_coast <- readOGR("data/external/mapping/netherlands_coast", layer = "world_countries_coasts")
ns <- readOGR("data/external/mapping/nsea", layer = "iho")
#### Organise data ####
# Windfarm polygons
windfort <- fortify(wind)
winddata <- data.frame(wind)
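# fortify() identifies each polygon by a character id ("0", "1", ...); the attribute table
# is given matching ids below so the two selected wind farms (Gemini and Belwind I) can be
# joined back to their polygon coordinates by id.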
windpol <- winddata %>%
mutate(id = as.character(c(0:205))) %>%
filter(NAME == "Gemini" | NAME == "Belwind I") %>%
left_join(windfort)
# Fortify shapefiles
eezbelfort <- fortify(eezbel)
netherlands_coastfort <- fortify(netherlands_coast)
nsfort <- fortify(ns)
#### Map Belwind + Gemini ####
# Set theme
theme_map <- theme_bw()+
theme(axis.title = element_blank(),
axis.text = element_blank(),
axis.ticks = element_blank(),
panel.border = element_rect(colour = "black", fill = NA, size = 0.5))
p1 <- ggplot() +
coord_map(xlim = c(2.1,6.5), ylim = c(51,54.5)) +
theme_map +
theme(plot.margin=unit(c(0.1,0,0.1,0),"cm")) +
geom_polygon(data = nsfort, aes(long,lat, group = group), fill ="#a5e0dd") +
geom_polygon(data = netherlands_coastfort, aes(long,lat, group = group), fill = "white") +
geom_path(data = nsfort, aes(long,lat, group = group), size = 0.3) +
geom_path(data = eezbelfort, aes(long,lat, group = group), size = 0.3) +
geom_path(data = eezned, aes(long,lat, group = group), size = 0.3) +
geom_path(data = eezuk, aes(long,lat, group = group), size = 0.3) +
geom_path(data= windpol, aes(long,lat, group=group))+
geom_vline(xintercept = seq(2, 6.5, 1), size = 0.5, colour = "gray20", alpha = 0.15) +
geom_hline(yintercept = seq(51, 54.5, 1), size = 0.5, colour = "gray20", alpha = 0.15) +
geom_text(aes(x = c(seq(3, 6.5, 1) + 0.1), y = 54.38),
label = c("3°E", "4°", "5°", "6°"),
size=2.5, hjust = 0.1, vjust = 0, colour = "gray20") +
geom_text(aes(x = 2.16, y = c(seq(52, 54.5, 1)+0.05)),
label = c("52°", "53°", "54°N"),
size=2.5, hjust = 0, vjust = 0, colour = "gray20") +
geom_polygon(data= windpol, aes(long,lat, group=group), colour = "red", fill = NA, size = 0.8) +
annotate("text", x = 2.67, y = 51.365, label = "BPNS", size = 3) +
annotate("text", x = 5.5, y = 53.6, label = "DPNS", size = 3) +
# annotate("text", x = 2.1+(6.5-2.1)*0.09, y = 51 +(54.5-51)*0.93, label = "A", size = 9, fontface = 2) +
scalebar(data = eezbelfort, model = "WGS84", dist = 50, st.size = 2.7, anchor = c(x = 6.19, y = 51.2), st.dist = 0.12, height = 0.08,
transform = T, dist_unit = "km", border.size = 0.3)
# Save as width 315 height 350
ggsave(filename = "reports/figures/Fig2_Map.png", plot = p1, scale = 1, dpi = 600, width = 8, height = 10, units = "cm")
| /src/visualization/Fig2_Map.R | no_license | lifewatch/Tripod-frame_Performance-test | R | false | false | 3,524 | r |
testlist <- list(phi = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), x = c(1.36656528938164e-311, -1.65791256519293e+82, 1.29418168595419e-228, -1.85353502606261e+293, 8.08855267383463e-84, -4.03929894096111e-178, 6.04817943207006e-103, -1.66738461804717e-220, -8.8217241872956e-21, -7.84828807007467e-146, -7.48864565556865e+21, -1.00905374512e-187, 5.22970923741951e-218, 2.77992264324548e-197, -5.29147138128251e+140, -1.71332436886848e-93, -1.52261021137076e-52, 2.0627472502345e-21, 1.07149136185465e+184, 4.41748962512848e+47, -4.05885894997926e-142))
result <- do.call(dcurver:::ddc,testlist)
str(result) | /dcurver/inst/testfiles/ddc/AFL_ddc/ddc_valgrind_files/1609867571-test.R | no_license | akhikolla/updated-only-Issues | R | false | false | 831 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/clinical_datasets.R
\name{clinical.20160128}
\alias{ACC.clinical.20160128}
\alias{ACC.clinical.20160128,BLCA.clinical.20160128,BRCA.clinical.20160128,CESC.clinical.20160128,CHOL.clinical.20160128,COADREAD.clinical.20160128,DLBC.clinical.20160128,ESCA.clinical.20160128,FPPP.clinical.20160128,GBMLGG.clinical.20160128,HNSC.clinical.20160128,KICH.clinical.20160128,KIPAN.clinical.20160128,KIRC.clinical.20160128,KIRP.clinical.20160128,LAML.clinical.20160128,LIHC.clinical.20160128,LUAD.clinical.20160128,LUSC.clinical.20160128,MESO.clinical.20160128,OV.clinical.20160128,PAAD.clinical.20160128,PCPG.clinical.20160128,PRAD.clinical.20160128,SARC.clinical.20160128,SKCM.clinical.20160128,STAD.clinical.20160128,STES.clinical.20160128,TGCT.clinical.20160128,THCA.clinical.20160128,THYM.clinical.20160128,UCEC.clinical.20160128,UCS.clinical.20160128,UVM.clinical.20160128}
\alias{BLCA.clinical.20160128}
\alias{BRCA.clinical.20160128}
\alias{CESC.clinical.20160128}
\alias{CHOL.clinical.20160128}
\alias{COADREAD.clinical.20160128}
\alias{DLBC.clinical.20160128}
\alias{ESCA.clinical.20160128}
\alias{FPPP.clinical.20160128}
\alias{GBMLGG.clinical.20160128}
\alias{HNSC.clinical.20160128}
\alias{KICH.clinical.20160128}
\alias{KIPAN.clinical.20160128}
\alias{KIRC.clinical.20160128}
\alias{KIRP.clinical.20160128}
\alias{LAML.clinical.20160128}
\alias{LIHC.clinical.20160128}
\alias{LUAD.clinical.20160128}
\alias{LUSC.clinical.20160128}
\alias{MESO.clinical.20160128}
\alias{OV.clinical.20160128}
\alias{PAAD.clinical.20160128}
\alias{PCPG.clinical.20160128}
\alias{PRAD.clinical.20160128}
\alias{SARC.clinical.20160128}
\alias{SKCM.clinical.20160128}
\alias{STAD.clinical.20160128}
\alias{STES.clinical.20160128}
\alias{TGCT.clinical.20160128}
\alias{THCA.clinical.20160128}
\alias{THYM.clinical.20160128}
\alias{UCEC.clinical.20160128}
\alias{UCS.clinical.20160128}
\alias{UVM.clinical.20160128}
\alias{clinical.20160128}
\title{clinical datasets from TCGA project from 2016-01-28 release date}
\source{
\url{http://gdac.broadinstitute.org/}
}
\usage{
ACC.clinical.20160128(metadata = FALSE)
BLCA.clinical.20160128(metadata = FALSE)
BRCA.clinical.20160128(metadata = FALSE)
CESC.clinical.20160128(metadata = FALSE)
CHOL.clinical.20160128(metadata = FALSE)
COADREAD.clinical.20160128(metadata = FALSE)
DLBC.clinical.20160128(metadata = FALSE)
ESCA.clinical.20160128(metadata = FALSE)
FPPP.clinical.20160128(metadata = FALSE)
GBMLGG.clinical.20160128(metadata = FALSE)
HNSC.clinical.20160128(metadata = FALSE)
KICH.clinical.20160128(metadata = FALSE)
KIPAN.clinical.20160128(metadata = FALSE)
KIRC.clinical.20160128(metadata = FALSE)
KIRP.clinical.20160128(metadata = FALSE)
LAML.clinical.20160128(metadata = FALSE)
LIHC.clinical.20160128(metadata = FALSE)
LUAD.clinical.20160128(metadata = FALSE)
LUSC.clinical.20160128(metadata = FALSE)
MESO.clinical.20160128(metadata = FALSE)
OV.clinical.20160128(metadata = FALSE)
PAAD.clinical.20160128(metadata = FALSE)
PCPG.clinical.20160128(metadata = FALSE)
PRAD.clinical.20160128(metadata = FALSE)
SARC.clinical.20160128(metadata = FALSE)
SKCM.clinical.20160128(metadata = FALSE)
STAD.clinical.20160128(metadata = FALSE)
STES.clinical.20160128(metadata = FALSE)
TGCT.clinical.20160128(metadata = FALSE)
THCA.clinical.20160128(metadata = FALSE)
THYM.clinical.20160128(metadata = FALSE)
UCEC.clinical.20160128(metadata = FALSE)
UCS.clinical.20160128(metadata = FALSE)
UVM.clinical.20160128(metadata = FALSE)
}
\arguments{
\item{metadata}{A logical indicating whether to load the data into the workspace (default, \code{FALSE}) or to only display the object's metadata (\code{TRUE}). See examples.}
}
\description{
Package provides clinical datasets from The Cancer Genome Atlas Project for all cohorts types from \url{http://gdac.broadinstitute.org/}.
Data were downloaded using \link{RTCGA-package} and contain snapshots for the release date: \code{2016-01-28} . Visit \pkg{RTCGA} site: \url{http://rtcga.github.io/RTCGA/}.
Use cases, examples and information about datasets in \pkg{RTCGA.clinical.20160128} can be found here: \code{browseVignettes("RTCGA")}. Link to the data format explanation is in the package DESCRIPTION file.
}
\details{
\code{browseVignettes("RTCGA")}
}
\examples{
\dontrun{
ACC.clinical.20160128(metadata = TRUE)
ACC.clinical.20160128(metadata = FALSE)
ACC.clinical.20160128
}
}
| /man/clinical.20160128.Rd | no_license | RTCGA/RTCGA.clinical.20160128 | R | false | true | 4,456 | rd |
#
# write.GOhyper.r
#
# Created by David Ruau on 2012-01-23.
# Dept. of Pediatrics/Div. Systems Medicine, Stanford University.
#
#
##################### USAGE #########################
#
# write.GOhyper(mfhyper, filename="results.xlsx")
#
#####################################################
write.GOhyper <- function(mfhyper, filename='GO_results.xlsx') {
require(GOstats)
require(multtest)
require(xlsx)
gogo <- summary(mfhyper)
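  # mt.rawp2adjp() returns adjusted p-values sorted by increasing raw p-value; taking
  # $adjp[,2] directly assumes the summary() table is already ordered by Pvalue (the
  # GOstats default). If that ordering cannot be guaranteed, map the adjusted values
  # back to the original rows with the $index component instead.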
gogo$adjPvalue <- mt.rawp2adjp(gogo$Pvalue, proc="BH")$adjp[,2]
gogo <- gogo[,c(1:2,8,3:7)]
gogo <- gogo[order(gogo$OddsRatio),]
write.xlsx(gogo, file=filename)
print(paste('Results written in', filename))
return(gogo)
}
| /write.GOhyper.r | no_license | TimTim74/codebox | R | false | false | 681 | r |
testlist <- list(nmod = NULL, id = NULL, score = NULL, rsp = NULL, id = NULL, score = NULL, nbr = NULL, id = NULL, bk_nmod = integer(0), booklet_id = c(1202941484L, 15623892L, -1661386009L, 45373390L, 426686228L, 1254131289L, 749806690L, -1501899956L, -1876835267L, 574719753L, 12138419L, -194575513L, 1795807422L, -1377650222L, 453135142L, -1780159831L, 992888811L, -1345548849L, -449112064L, 903217161L, 1678998078L, 759393453L, 786045775L, 1752098800L, 455895826L, -1331816706L, 391475866L, 1748544614L, 19691586L, 1176953756L, 349411874L, 2121585973L, -301177052L, 1082896916L, -450872028L, -636931467L, -53289638L, 1570865300L, -1237972204L, -2096910335L, 1674636960L, 729445123L, 1131745602L, 1415150763L, -611049483L, 1203403294L, -2063197885L, -1065072122L, 1319796054L, -1642943874L, -766927552L, 1272735908L, 1467107778L, -1500929355L, 1642529522L, -2023660442L, 215217695L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), booklet_score = integer(0), include_rsp = integer(0), item_id = integer(0), item_score = integer(0), module_nbr = integer(0), person_id = c(16777216L, 0L, 1409351680L, 682962941L, 1615462481L, -1318203443L, -2131865434L, 1632280887L, 637082149L, 260799231L, 1754027460L, -1055514020L, -1311932986L, -203530874L, -428367857L, -1995603215L, 1192022832L, -996667132L, 432518541L, 815996035L, 1157250652L))
result <- do.call(dexterMST:::make_booklets_unsafe,testlist)
str(result) | /dexterMST/inst/testfiles/make_booklets_unsafe/AFL_make_booklets_unsafe/make_booklets_unsafe_valgrind_files/1615943117-test.R | no_license | akhikolla/updatedatatype-list1 | R | false | false | 1,550 | r |
### converts data in the IDB dataset from FIPS codes to ISO codes.
### The conversion chart is from http://opengeocode.org/download.php#fips2iso
### makePopISO takes two arguments, an old filename and a new filename. The contents
### of the first file are reshaped so each column name is a country code, the codes are
### converted from FIPS to ISO, and the result is saved as a new csv file at the new path.
makePopISO <- function(oldFile, newFile) {
### require reshape2 to melt and cast dataframe
library("reshape2")
dfInit <- read.table(oldFile, sep="|")
### assign meaningful names to the dataframe
names(dfInit) <- c("countryCode", "year", "population")
### "melt" the data frame into long-format data
molten <- melt(data=dfInit, id.vars = c("countryCode", "year"), measured.vars="population")
### "cast" the data frame into wide-format data with country code column names
df <- dcast(molten, year ~ countryCode)
### get list of FIPS country codes
fips <- colnames(df)[-1]
### read conversion table of FIPS to ISO
fips2iso <- read.csv("fips2iso.csv", sep=',', stringsAsFactors = FALSE)
### function which reads a csv table of FIPS to ISO conversions and converts the given country code accordingly
convertCountryCode <- function(countryCode){
### find index of give fips code
fipsMask <- fips2iso$fips == countryCode
### return iso form of country code
if(!(any(fipsMask))){ ### if the country code doesn't exist
return ('NOCODE')
} else if(toString(fips2iso$iso[fipsMask]) != ''){ ### if there is a direct conversion
return(toString(fips2iso$iso[fipsMask]))
} else if (toString(fips2iso$inclusive[fipsMask]) != '') { ### if that country is now part of another country
return(toString(fips2iso$inclusive[fipsMask]))
} else {
return('NOCODE')
}
}
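  ### Illustrative behaviour, assuming standard FIPS 10-4 / ISO 3166 entries in fips2iso.csv:
  ### convertCountryCode("GM") should give "DE" (Germany) and convertCountryCode("UK") should
  ### give "GB" (United Kingdom); unmatched codes fall through to "NOCODE".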
newColNames <- c()
### replace each FIPS code with the ISO code
for(i in 1:length(fips)){
iso <- convertCountryCode(fips[i]) ### return iso code and convert to characters
newColNames <- c(newColNames, iso)
}
### add 'year' back into column names
newColNames <- c('year', newColNames)
### attach the new column names to the original dataframe
colnames(df) <- newColNames
### combine countries that have merged
indices <- which(names(df) == "NA")
ccSum <- df[,indices[1]] + df[,indices[2]]
df[,indices[1]] <- ccSum
df <- df[,-indices[2]]
df[,"PS"] <- df[,"PS"] + df[,"PS.1"]
df <- df[,-(which(names(df) == "PS.1"))]
### drop columns with no country code
drops <- c("NOCODE")
df <- df[,!(names(df) %in% drops)]
write.csv(file=newFile, df, row.names=FALSE)
}
#test run
oldFile <- "../../StaticData/IDB_Dataset/IDBext001.txt"
newFile <- "ISOtest.csv"
makePopISO(oldFile, newFile)
a <- read.csv("ISOtest.csv")
| /Development/WillLeahy/convertMidYearPop.R | no_license | MazamaScience/PopulationDatabrowser_v0 | R | false | false | 2,844 | r |
#' @include MariaDBConnection.R
#' @include MariaDBResult.R
NULL
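# Convert integer64 columns to the representation requested via the connection's
# `bigint` setting ("integer", "numeric" or "character"); with bigint = "integer64"
# the data frame is returned unchanged.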
convert_bigint <- function(df, bigint) {
if (bigint == "integer64") return(df)
is_int64 <- which(vlapply(df, inherits, "integer64"))
if (length(is_int64) == 0) return(df)
as_bigint <- switch(bigint,
integer = as.integer,
numeric = as.numeric,
character = as.character
)
df[is_int64] <- suppressWarnings(lapply(df[is_int64], as_bigint))
df
}
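# Datetime columns come back labelled as UTC; reinterpret that clock time as being in the
# connection's time zone, then convert it for display in `timezone_out`.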
fix_timezone <- function(ret, conn) {
is_datetime <- which(vapply(ret, inherits, "POSIXt", FUN.VALUE = logical(1)))
if (length(is_datetime) > 0) {
ret[is_datetime] <- lapply(ret[is_datetime], function(x) {
x <- lubridate::with_tz(x, "UTC")
x <- lubridate::force_tz(x, conn@timezone)
lubridate::with_tz(x, conn@timezone_out)
})
}
ret
}
fix_blob <- function(ret) {
is_blob <- which(vapply(ret, is.list, FUN.VALUE = logical(1)))
if (length(is_blob) > 0) {
ret[is_blob] <- lapply(ret[is_blob], blob::as_blob)
}
ret
}
dbSend <- function(conn, statement, params = NULL, is_statement) {
statement <- enc2utf8(statement)
rs <- new("MariaDBResult",
sql = statement,
ptr = result_create(conn@ptr, statement, is_statement),
bigint = conn@bigint,
conn = conn
)
on.exit(dbClearResult(rs))
if (!is.null(params)) {
dbBind(rs, params)
}
on.exit(NULL)
rs
}
#' Database interface meta-data.
#'
#' See documentation of generics for more details.
#'
#' @param res An object of class [MariaDBResult-class]
#' @param ... Ignored. Needed for compatibility with generic
#' @examples
#' if (mariadbHasDefault()) {
#' con <- dbConnect(RMariaDB::MariaDB(), dbname = "test")
#' dbWriteTable(con, "t1", datasets::USArrests, temporary = TRUE)
#'
#' rs <- dbSendQuery(con, "SELECT * FROM t1 WHERE UrbanPop >= 80")
#' rs
#'
#' dbGetStatement(rs)
#' dbHasCompleted(rs)
#' dbColumnInfo(rs)
#'
#' dbFetch(rs)
#' rs
#'
#' dbClearResult(rs)
#' dbDisconnect(con)
#' }
#' @name result-meta
NULL
| /R/query.R | permissive | r-dbi/RMariaDB | R | false | false | 2,005 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/7-wadist.R
\name{objective.keepWADIST}
\alias{objective.keepWADIST}
\title{Calculate Fitness of Binary Vector}
\usage{
objective.keepWADIST(codon, ARGS)
}
\arguments{
\item{codon}{A binary vector.}
\item{ARGS}{Handled by \code{\link{prepareArgs}}.}
}
\description{
This objective function seeks to maximize the correlation
between the weighted Aitchison distance of the complete composition
and the weighted Aitchison distance of the amalgamation.
}
| /man/objective.keepWADIST.Rd | no_license | ayazagan/amalgam | R | false | true | 531 | rd |
\name{seqintegr}
\alias{seqintegr}
\alias{seqintegration}
%
\author{Gilbert Ritschard}
%
\title{Integrative potential}
%
\description{
Returns the index of integrative potential (capability) for each sequence, either a table with the index for each state or a vector with the index for the selected state.
}
\usage{
seqintegr(seqdata, state=NULL, pow=1, with.missing=FALSE)
}
\arguments{
\item{seqdata}{a state sequence object (\code{stslist}) as returned by \code{\link[TraMineR]{seqdef}}.}
\item{state}{character string. The state for which to compute the integrative index (see Details). When \code{NULL} the index is computed for each state.}
\item{pow}{real. Exponent applied to the position in the sequence. Higher value increase the importance of recency (see Details). Default is 1.}
\item{with.missing}{logical: should non-void missing values be treated as a regular state? If \code{FALSE} (default) missing values are ignored.}
}
\details{
The index of integrative potential or capability \cite{(Brzinsky-Fay, 2007, 2018)} measures the capacity to integrate the selected state within the sequence, i.e. the tendency to reach the selected state and end up in it. The index is defined as the sum of the position numbers occupied by the selected state in the sequence over the sum of all position numbers. Formally, for a sequence \eqn{s} of length \eqn{L}, and numbering the positions \eqn{i} from 1 to \eqn{L}, the index is
\deqn{integr = \sum_{(i | s_i = state)} i^{pow} / \sum_i i^{pow}}{integr = sum (s_i == state)*i^pow / sum i^pow}
where \eqn{state} is the selected state. This same index has also been independently developed by \cite{Manzoni and Mooi-Reci (2018)} under the name of quality index.
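For example, for the three-position sequence A-B-B and \code{state = "B"} with \code{pow = 1}, the selected state occupies positions 2 and 3, giving \eqn{integr = (2+3)/(1+2+3) = 5/6}.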
The recency exponent \eqn{pow} permits to control the focus given on the latest positions in the sequence. The higher \code{pow}, the higher the importance of the last positions relative to the first ones.
When \code{with.missing = FALSE}, the index is obtained by using the sum of the position numbers of the non-missing elements as the denominator. To compute the index for the missing state, \code{with.missing} should be set to \code{TRUE}.
For capability to integrate a set of states see \code{\link{seqipos}}.
}
\value{
When \code{state=NULL}, a numeric matrix with a row for each sequence and a column per state. When a state is provided, a single column.
}
\references{
Brzinsky-Fay, C. (2007) Lost in Transition? Labour Market Entry Sequences of School Leavers in Europe, \emph{European Sociological Review}, 23(4). \doi{10.1093/esr/jcm011}
Brzinsky-Fay, C. (2018) Unused Resources: Sequence and Trajectory Indicators. International Symposium on Sequence Analysis and Related Methods, Monte Verita, TI, Switzerland, October 10-12, 2018.
Manzoni, A and I. Mooi-Reci (2018) Measuring Sequence Quality, in Ritschard and Studer (eds), \emph{Sequence Analysis and Related Approaches. Innovative Methods and Applications}, Springer, 2018, pp 261-278.
Ritschard, G. (2021), "Measuring the nature of individual sequences", \emph{Sociological Methods and Research}, \doi{10.1177/00491241211036156}.
}
\seealso{
\code{\link{seqipos}}, \code{\link{seqivolatility}}, \code{\link{seqindic}}
}
\examples{
data(ex1)
sx <- seqdef(ex1[,1:13], right="DEL")
seqintegr(sx)
seqintegr(sx, with.missing=TRUE)
seqintegr(sx, state="B")
seqintegr(sx, state="B", pow=1.5)
}
\keyword{Longitudinal characteristics}
| /man/seqintegr.Rd | no_license | cran/TraMineR | R | false | false | 3,510 | rd |
# Load functions
source(file.path(getwd(),"code/lib/helper-functions-2.R"))
source(file.path(getwd(),"code/lib/fit_cholesky_PS.R"))
source(file.path(getwd(),"code/lib/help-functions.R"))
source(file.path(getwd(),"code/lib/quadratic-loss.R"))
source(file.path(getwd(),"code/lib/entropy-loss.R"))
source(file.path(getwd(),"code/lib/build-grid.r"))
cl <- makeCluster(detectCores()-1)
registerDoParallel(cl)
clusterCall(cl,function() {
.libPaths("~/Rlibs/lib")
library(doBy)
library(splines)
library(lattice)
library(MASS)
library(grDevices)
library(magrittr)
library(rlist)
library(plyr)
library(stringr)
library(dplyr)
library(doParallel)
})
set.seed(1985)
# Simulation data parameters
M = 20 # within-subject sample size
myGrid <- build_grid(20)
myGrid <- myGrid %>% transform(.,l_s=l/(max(myGrid$l)+1),
m_s=m/(max(myGrid$m)+1))
sigma. = 0.05 # error noise
flag. = TRUE # to monitor Schall algorithm iterations
# B-spline parameters
bdeg = 3 # B-spline degree
nsegl = 14 # number of inner knots to build B-spline bases for x1 and x2
nsegm = 14
div = 1 # divisor to build nested bases for the interaction terms
# = 1 (no nesting) div>1 (nested bases for interaction with
# nseg/div inner knots) To ensure full nesting, nseg/div
# has to be an integer
N <- 30
Sigma <- matrix(0.7,nrow=M,ncol=M) + diag(rep(0.3),M)
Omega <- solve(Sigma)
C <- t(chol(Sigma))
D <- diag(diag(C))
L <- C%*%solve(D)
T_mod <- solve(L)
phi <- as.vector(T_mod[lower.tri(T_mod)])
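# Sanity check (optional): with the modified Cholesky decomposition Sigma = L D^2 L',
# T_mod = solve(L) diagonalises Sigma, so T_mod %*% Sigma %*% t(T_mod) should equal
# D^2 up to numerical error:
# max(abs(T_mod %*% Sigma %*% t(T_mod) - D^2))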
Bl <- bspline(myGrid$l_s, 0, 1, nsegl, bdeg)$B
Bm <- bspline(myGrid$m_s, 0, 1, nsegl, bdeg)$B
n1 <- ncol(Bl)
n2 <- ncol(Bm)
knots.l <- bspline(as.vector(myGrid$l)/max(myGrid$l), 0, 1, nsegl, bdeg)$knots
interior.knots.l <- knots.l[1:(length(knots.l)-(bdeg+1))]
knots.m <- bspline(as.vector(myGrid$m)/max(myGrid$m), 0, 1, nsegm, bdeg)$knots
interior.knots.m <- knots.m[1:(length(knots.m)-(bdeg+1))]
B. <- kronecker(Bm,
t(as.vector(rep(1,ncol(Bl))))) * kronecker(t(as.vector(rep(1,ncol(Bm)))),
Bl)
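# `knot_grid` is used in the plot below but is not created anywhere in this script; a
# plausible reconstruction (an assumption, not original code) is the grid of interior knot
# locations underlying the tensor-product basis, one row per column of B.:
knot_grid <- expand.grid(l = interior.knots.l, m = interior.knots.m)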
ggplot(myGrid,aes(l_s,m_s)) +
geom_point(size=0.8) +
theme_minimal() +
xlab("l") +
ylab("m") +
geom_point(data=knot_grid[which(colSums(B. != 0)==0),],
aes(x=l,y=m),size=0.5,colour="red")
# Create a function interpolating colors in the range of specified colors
jet.colors <- colorRampPalette( c("blue", "green") )
# Generate the desired number of colors from this palette
nbcol <- 100
color <- jet.colors(nbcol)
# Compute the z-value at the facet centres
z <- matrix(Phi,
nrow=100,
ncol=100,
byrow = FALSE)
if(Type.!="f05"){
zfacet <- z[-1, -1] + z[-1, -100] + z[-100, -1] + z[-100, -100]
# Recode facet z-values into color indices
facetcol <- cut(zfacet, nbcol)
plotColor <- color[facetcol]
} else {plotColor <- color}
persp(x=seq(0,1,length.out=100),
y=seq(0,1,length.out=100),
z=z,
phi=15,
theta=25,
xlab="l",
ylab="m",
col = plotColor,
# zlab=expression(phi^"\u2217"),
#zlab=expression(paste(phi,"^",*)),
ticktype = "detailed",
main="True Cholesky surface",
expand = 0.5,
zlab="")
N <- 50
y <- matrix(data=NA, nrow=N, ncol=M)
y[,1] <- rnorm(N,sd=sigma.)
for(this_t in 2:M){
y[,this_t] <- phi[myGrid$t==this_t & myGrid$s == 1]*y[,1] + rnorm(N,sd=sigma.)
for(this_s in 1:(this_t-1)){
y[,this_t] <- y[,this_t] + phi[myGrid$t==this_t & myGrid$s == this_s]*y[,this_s]
}
}
matplot(t(y),type="l")
T_mat <- diag(M)
T_mat[lower.tri(T_mat)] <- -phi
if(Type. != "f05"){
Tfacet <- (T_mat-diag(M))[-1, -1] + (T_mat-diag(M))[-1, -M] + (T_mat-diag(M))[-M, -1] + (T_mat-diag(M))[-M, -M]
# Recode facet z-values into color indices
facetcol <- cut(Tfacet, nbcol)
persp(z=T_mat-diag(M),
phi=15,
theta=20,
xlab="l",
ylab="m",
col = color[facetcol],
zlab="phi",
ticktype = "detailed",
expand = 0.5)
}
W <- matrix(data=0,nrow=N*(M-1),ncol=choose(M,2))
no.skip <- 0
for (t in 2:M){
W[((0:(N-1))*(M-1)) + t-1,(no.skip+1):(no.skip+t-1)] <- y[,1:(t-1)]
no.skip <- no.skip + t - 1
}
yVec <- as.vector(y[,-1])
start1 <- proc.time()[3]
# Build mixed model bases. For nested bases for the interaction terms, create a new
# mixed model basis whose number of segments is an integer divisor of nsegl and nsegm;
# for div = 1, G1 = G1n and G2 = G2n.
pordl <- 3
pordm <- 2
no_support <- which(colSums(B. != 0)==0)
dm <- 2
if ( dm > 0 ){
Dm <- diff(diag(n2),differences = dm)
} else {
Dm <- diag(n2)
}
Pm <- kronecker(t(Dm)%*%Dm, diag(n1))
dl <- 3
if ( dl > 0 ){
Dl <- diff(diag(n1),differences = dl)
} else {
Dl <- diag(n1)
}
Pl <- kronecker(diag(n2),t(Dl)%*%Dl)
difference_rows_remove <- which(apply((abs(Pl) + abs(Pm))[,no_support],
MARGIN=1,
FUN=max)>0)
Pl <- Pl[-difference_rows_remove,-no_support]
Pm <- Pm[-difference_rows_remove,-no_support]
y <- mvrnorm(n=N,mu=rep(0,M),Sigma=Sigma)
y_vec <- as.vector(t(y[,-1]))
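# Design matrix for the autoregressive (Cholesky) representation: the row block for subject i
# and response time t holds the preceding responses y[i, 1:(t-1)], so that y_vec is modelled
# as X %*% phi with phi the vector of generalised autoregressive coefficients.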
X <- matrix(data=0,nrow=N*(M-1),ncol=choose(M,2))
no.skip <- 0
for (t in 2:M){
X[((0:(N-1))*(M-1)) + t-1,(no.skip+1):(no.skip+t-1)] <- y[,1:(t-1)]
no.skip <- no.skip + t - 1
}
U. <- X%*%B.[,-no_support]
lambdas <- expand.grid(lam_l=exp(seq(-5,11,length.out=30)),
lam_m=exp(seq(-5,11,length.out=30)))
clusterExport(cl,c("fit_cholesky_PS",
"Sigma",
"N",
"m",
"M",
"Bl",
"Bm",
"B.",
"Pl",
"Pm",
"lambdas",
"dl",
"dm",
"C",
"U."))
PS_fit_sim <- foreach(l=iter(lambdas,by="row")) %dopar% {
Fit <- fit_cholesky_PS(y_vec,
N=N,
U.,
D=diag(diag(C)),
Pl,l$lam_l,
Pm,l$lam_m,
0.00000000001)
list(fit = Fit,
lambda_l = l$lam_l,
lambda_m = l$lam_m)
}
fit_list <- lapply(PS_fit_sim,function(l){
Phi <- B.[,-no_support] %*% l$fit$coef
T_mat <- diag(M)
T_mat[lower.tri(T_mat)] <- -Phi
list(fit=Phi,
T_mod=T_mat)
})
omega_list <- lapply(fit_list,function(l){
Omega_hat <- t(l$T_mod)%*%diag(1/diag(C)^2)%*%l$T_mod
})
quad_loss <- lapply(omega_list, function(omega_hat){
quadratic_loss(omega_hat,Sigma)
}) %>%
unlist
entrpy_loss <- lapply(omega_list, function(omega_hat){
entropy_loss(omega_hat,Sigma)
}) %>%
unlist
sse <- lapply(fit_list,function(l){
t(y_vec - X %*% l$fit) %*% diag(1/rep(diag(C^2)[-1],N)) %*% (y_vec - X %*% l$fit)
}) %>% unlist
ed <- lapply(PS_fit_sim,function(l){
l$fit$eff.dim
}) %>% unlist
loglik <- lapply(fit_list,function(l){
sum(apply(y, MARGIN=1, FUN=function(x){
log_lik( as.numeric(x), l$T_mod, diag(diag(C^2)))
}))
}) %>% unlist
aic <- -2*loglik + 2*ed
image.plot(y=log(sort(unique(lambdas$lam_l))),
x=log(sort(unique(lambdas$lam_m))[-1]),
z=matrix(aic[lambdas$lam_m>min(lambdas$lam_m)],
ncol=length(unique(lambdas$lam_l)),
nrow=length(unique(lambdas$lam_m))-1,
byrow=TRUE),
ylab=expression("log "~lambda["l"]),
xlab=expression("log "~lambda["m"]))
persp(y=log(sort(unique(lambdas$lam_l))),
x=log(sort(unique(lambdas$lam_m))[-1]),
z=matrix(aic[lambdas$lam_m>min(lambdas$lam_m)],
ncol=length(unique(lambdas$lam_l)),
nrow=length(unique(lambdas$lam_m))-1,
byrow=TRUE),
phi=15,
theta=90,
ylab="lambda l",
xlab="lambda m",
col = "light blue",
main="",
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6,
zlab="aic")
persp(y=log(sort(unique(lambdas$lam_l))),
x=log(sort(unique(lambdas$lam_m))[-1]),
z=matrix(ed[lambdas$lam_m>min(lambdas$lam_m)],
ncol=length(unique(lambdas$lam_l)),
nrow=length(unique(lambdas$lam_m))-1,
byrow=TRUE),
phi=15,
theta=90,
ylab="log lambda l",
xlab="log lambda m",
col = "light blue",
main="effective model dimension",
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6,
zlab="ed")
persp(z=diag(M)-T_mod,
phi=15,
theta=10,
xlab="s",
ylab="t",
col = "light blue",
main="true Cholesky surface",
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6,
zlab="")
#png(file.path(getwd(),"TeX","img","compound-symmetry-estimated-cholesky.png"))
persp(z=diag(M)-fit_list[[which.min(aic)]]$T_mod,
phi=15,
theta=10,
xlab="s",
ylab="t",
col = "light blue",
main="Estimated Cholesky surface",
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6,
zlab="")
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=Omega,
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(Sigma^{-1})),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=omega_list[[which.min(aic)]],
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma)^{-1})),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=omega_list[[which.min(entrpy_loss)]],
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma)^{-1})),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=omega_list[[which.min(entrpy_loss)]]%*%Sigma,
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma)^{-1},Sigma)),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=omega_list[[900]],
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma)^{-1})),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
png(file.path(getwd(),"TeX","img","compound-symmetry-true-covariance.png"))
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=Sigma,
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(Sigma)),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6,
zlim=c(0.4,1.1))
dev.off()
png(file.path(getwd(),"TeX","img","compound-symmetry-estimated-covariance.png"))
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=solve(omega_list[[which.min(aic)]]),
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma))),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
dev.off()
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=solve(omega_list[[900]]),
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma))),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
S <- outer(y[1,],y[1,])
for(subject in 2:nrow(y)){
S <- S + outer(y[subject,],y[subject,])
}
S <- S/N
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=S,
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(tilde(Sigma))),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
entropy_loss(solve(S),Sigma)
min(entrpy_loss)
| /code/src/compound-symmetry-tensor-PS-fit.R | no_license | taylerablake/JSM-2017 | R | false | false | 12,825 | r |
# Load functions
source(file.path(getwd(),"code/lib/helper-functions-2.R"))
source(file.path(getwd(),"code/lib/fit_cholesky_PS.R"))
source(file.path(getwd(),"code/lib/help-functions.R"))
source(file.path(getwd(),"code/lib/quadratic-loss.R"))
source(file.path(getwd(),"code/lib/entropy-loss.R"))
source(file.path(getwd(),"code/lib/build-grid.r"))
cl <- makeCluster(detectCores()-1)
registerDoParallel(cl)
clusterCall(cl,function() {
.libPaths("~/Rlibs/lib")
library(doBy)
library(splines)
library(lattice)
library(MASS)
library(grDevices)
library(magrittr)
library(rlist)
library(plyr)
library(stringr)
library(dplyr)
library(doParallel)
})
set.seed(1985)
# Simulation data parameters
M = 20 # within-subject sample size
myGrid <- build_grid(20)
myGrid <- myGrid %>% transform(.,l_s=l/(max(myGrid$l)+1),
m_s=m/(max(myGrid$m)+1))
sigma. = 0.05 # error noise
flag. = TRUE # to monitor Schall algorithm iterations
# B-spline parameters
bdeg = 3 # B-spline degree
nsegl = 14 # number of inner knots to build B-spline bases for x1 and x2
nsegm = 14
div = 1 # divisor to build nested bases for the interaction terms
# = 1 (no nesting) div>1 (nested bases for interaction with
# nseg/div inner knots) To ensure full nesting, nseg/div
# has to be an integer
N <- 30
Sigma <- matrix(0.7,nrow=M,ncol=M) + diag(rep(0.3),M)
Omega <- solve(Sigma)
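# modified Cholesky decomposition of Sigma: C is the lower-triangular Cholesky
# factor, D its diagonal, L = C %*% solve(D) is unit lower triangular, and
# T = solve(L) satisfies T %*% Sigma %*% t(T) = D^2; phi holds the strictly
# lower-triangular entries of T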
C <- t(chol(Sigma))
D <- diag(diag(C))
L <- C%*%solve(D)
T_mod <- solve(L)
phi <- as.vector(T_mod[lower.tri(T_mod)])
Bl <- bspline(myGrid$l_s, 0, 1, nsegl, bdeg)$B
Bm <- bspline(myGrid$m_s, 0, 1, nsegm, bdeg)$B
n1 <- ncol(Bl)
n2 <- ncol(Bm)
knots.l <- bspline(as.vector(myGrid$l)/max(myGrid$l), 0, 1, nsegl, bdeg)$knots
interior.knots.l <- knots.l[1:(length(knots.l)-(bdeg+1))]
knots.m <- bspline(as.vector(myGrid$m)/max(myGrid$m), 0, 1, nsegm, bdeg)$knots
interior.knots.m <- knots.m[1:(length(knots.m)-(bdeg+1))]
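# row-wise tensor product (box product) of the two marginal B-spline bases:
# row i of B. is the Kronecker product of row i of Bm with row i of Bl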
B. <- kronecker(Bm,
t(as.vector(rep(1,ncol(Bl))))) * kronecker(t(as.vector(rep(1,ncol(Bm)))),
Bl)
ggplot(myGrid,aes(l_s,m_s)) +
geom_point(size=0.8) +
theme_minimal() +
xlab("l") +
ylab("m") +
geom_point(data=knot_grid[which(colSums(B. != 0)==0),],
aes(x=l,y=m),size=0.5,colour="red")
# Create a function interpolating colors in the range of specified colors
jet.colors <- colorRampPalette( c("blue", "green") )
# Generate the desired number of colors from this palette
nbcol <- 100
color <- jet.colors(nbcol)
# Compute the z-value at the facet centres
z <- matrix(Phi,
nrow=100,
ncol=100,
byrow = FALSE)
if(Type.!="f05"){
zfacet <- z[-1, -1] + z[-1, -100] + z[-100, -1] + z[-100, -100]
# Recode facet z-values into color indices
facetcol <- cut(zfacet, nbcol)
plotColor <- color[facetcol]
} else {plotColor <- color}
persp(x=seq(0,1,length.out=100),
y=seq(0,1,length.out=100),
z=z,
phi=15,
theta=25,
xlab="l",
ylab="m",
col = plotColor,
# zlab=expression(phi^"\u2217"),
#zlab=expression(paste(phi,"^",*)),
ticktype = "detailed",
main="True Cholesky surface",
expand = 0.5,
zlab="")
N <- 50
y <- matrix(data=NA,nrow=N,M)
y[,1] <- rnorm(N,sd=sigma.)
for(this_t in 2:M){
y[,this_t] <- phi[myGrid$t==this_t & myGrid$s == 1]*y[,1] + rnorm(N,sd=sigma.)
for(this_s in 1:(this_t-1)){
y[,this_t] <- y[,this_t] + phi[myGrid$t==this_t & myGrid$s == this_s]*y[,this_s]
}
}
matplot(t(y),type="l")
T_mat <- diag(M)
T_mat[lower.tri(T_mat)] <- -phi
if(Type. != "f05"){
Tfacet <- (T_mat-diag(M))[-1, -1] + (T_mat-diag(M))[-1, -M] + (T_mat-diag(M))[-M, -1] + (T_mat-diag(M))[-M, -M]
# Recode facet z-values into color indices
facetcol <- cut(Tfacet, nbcol)
persp(z=T_mat-diag(M),
phi=15,
theta=20,
xlab="l",
ylab="m",
col = color[facetcol],
zlab="phi",
ticktype = "detailed",
expand = 0.5)
}
W <- matrix(data=0,nrow=N*(M-1),ncol=choose(M,2))
no.skip <- 0
for (t in 2:M){
W[((0:(N-1))*(M-1)) + t-1,(no.skip+1):(no.skip+t-1)] <- y[,1:(t-1)]
no.skip <- no.skip + t - 1
}
yVec <- as.vector(y[,-1])
start1 <- proc.time()[3]
# Build mixed-model bases. For nested bases for the interaction terms, create a
# new mixed-model basis with the number of segments an integer divisor of nsegl
# and nsegm; for div = 1 (no nesting), G1 = G1n and G2 = G2n
pordl <- 3
pordm <- 2
no_support <- which(colSums(B. != 0)==0)
dm <- 2
if ( dm > 0 ){
Dm <- diff(diag(n2),differences = dm)
} else {
Dm <- diag(n2)
}
Pm <- kronecker(t(Dm)%*%Dm, diag(n1))
dl <- 3
if ( dl > 0 ){
Dl <- diff(diag(n1),differences = dl)
} else {
Dl <- diag(n1)
}
Pl <- kronecker(diag(n2),t(Dl)%*%Dl)
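# drop the rows of the difference penalties that involve tensor-product basis
# functions with no support on the half grid; the corresponding columns of B.
# are excluded when forming U. below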
difference_rows_remove <- which(apply((abs(Pl) + abs(Pm))[,no_support],
MARGIN=1,
FUN=max)>0)
Pl <- Pl[-difference_rows_remove,-no_support]
Pm <- Pm[-difference_rows_remove,-no_support]
y <- mvrnorm(n=N,mu=rep(0,M),Sigma=Sigma)
y_vec <- as.vector(t(y[,-1]))
X <- matrix(data=0,nrow=N*(M-1),ncol=choose(M,2))
no.skip <- 0
for (t in 2:M){
X[((0:(N-1))*(M-1)) + t-1,(no.skip+1):(no.skip+t-1)] <- y[,1:(t-1)]
no.skip <- no.skip + t - 1
}
U. <- X%*%B.[,-no_support]
lambdas <- expand.grid(lam_l=exp(seq(-5,11,length.out=30)),
lam_m=exp(seq(-5,11,length.out=30)))
clusterExport(cl,c("fit_cholesky_PS",
"Sigma",
"N",
"m",
"M",
"Bl",
"Bm",
"B.",
"Pl",
"Pm",
"lambdas",
"dl",
"dm",
"C",
"U."))
PS_fit_sim <- foreach(l=iter(lambdas,by="row")) %dopar% {
Fit <- fit_cholesky_PS(y_vec,
N=N,
U.,
D=diag(diag(C)),
Pl,l$lam_l,
Pm,l$lam_m,
0.00000000001)
list(fit = Fit,
lambda_l = l$lam_l,
lambda_m = l$lam_m)
}
fit_list <- lapply(PS_fit_sim,function(l){
Phi <- B.[,-no_support] %*% l$fit$coef
T_mat <- diag(M)
T_mat[lower.tri(T_mat)] <- -Phi
list(fit=Phi,
T_mod=T_mat)
})
omega_list <- lapply(fit_list,function(l){
Omega_hat <- t(l$T_mod)%*%diag(1/diag(C)^2)%*%l$T_mod
})
quad_loss <- lapply(omega_list, function(omega_hat){
quadratic_loss(omega_hat,Sigma)
}) %>%
unlist
entrpy_loss <- lapply(omega_list, function(omega_hat){
entropy_loss(omega_hat,Sigma)
}) %>%
unlist
sse <- lapply(fit_list,function(l){
t(y_vec - X %*% l$fit) %*% diag(1/rep(diag(C^2)[-1],N)) %*% (y_vec - X %*% l$fit)
}) %>% unlist
ed <- lapply(PS_fit_sim,function(l){
l$fit$eff.dim
}) %>% unlist
loglik <- lapply(fit_list,function(l){
sum(apply(y, MARGIN=1, FUN=function(x){
log_lik( as.numeric(x), l$T_mod, diag(diag(C^2)))
}))
}) %>% unlist
aic <- -2*loglik + 2*ed
image.plot(y=log(sort(unique(lambdas$lam_l))),
x=log(sort(unique(lambdas$lam_m))[-1]),
z=matrix(aic[lambdas$lam_m>min(lambdas$lam_m)],
ncol=length(unique(lambdas$lam_l)),
nrow=length(unique(lambdas$lam_m))-1,
byrow=TRUE),
ylab=expression("log "~lambda["l"]),
xlab=expression("log "~lambda["m"]))
persp(y=log(sort(unique(lambdas$lam_l))),
x=log(sort(unique(lambdas$lam_m))[-1]),
z=matrix(aic[lambdas$lam_m>min(lambdas$lam_m)],
ncol=length(unique(lambdas$lam_l)),
nrow=length(unique(lambdas$lam_m))-1,
byrow=TRUE),
phi=15,
theta=90,
ylab="lambda l",
xlab="lambda m",
col = "light blue",
main="",
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6,
zlab="aic")
persp(y=log(sort(unique(lambdas$lam_l))),
x=log(sort(unique(lambdas$lam_m))[-1]),
z=matrix(ed[lambdas$lam_m>min(lambdas$lam_m)],
ncol=length(unique(lambdas$lam_l)),
nrow=length(unique(lambdas$lam_m))-1,
byrow=TRUE),
phi=15,
theta=90,
ylab="log lambda l",
xlab="log lambda m",
col = "light blue",
main="effective model dimension",
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6,
zlab="ed")
persp(z=diag(M)-T_mod,
phi=15,
theta=10,
xlab="s",
ylab="t",
col = "light blue",
main="true Cholesky surface",
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6,
zlab="")
#png(file.path(getwd(),"TeX","img","compound-symmetry-estimated-cholesky.png"))
persp(z=diag(M)-fit_list[[which.min(aic)]]$T_mod,
phi=15,
theta=10,
xlab="s",
ylab="t",
col = "light blue",
main="Estimated Cholesky surface",
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6,
zlab="")
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=Omega,
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(Sigma^{-1})),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=omega_list[[which.min(aic)]],
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma)^{-1})),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=omega_list[[which.min(entrpy_loss)]],
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma)^{-1})),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=omega_list[[which.min(entrpy_loss)]]%*%Sigma,
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma)^{-1},Sigma)),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=omega_list[[900]],
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma)^{-1})),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
png(file.path(getwd(),"TeX","img","compound-symmetry-true-covariance.png"))
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=Sigma,
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(Sigma)),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6,
zlim=c(0.4,1.1))
dev.off()
png(file.path(getwd(),"TeX","img","compound-symmetry-estimated-covariance.png"))
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=solve(omega_list[[which.min(aic)]]),
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma))),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
dev.off()
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=solve(omega_list[[900]]),
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(hat(Sigma))),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
S <- outer(y[1,],y[1,])
for(subject in 2:nrow(y)){
S <- S + outer(y[subject,],y[subject,])
}
S <- S/N
persp(x=seq(0,1,length.out=M),
y=seq(0,1,length.out=M),
z=S,
phi=15,
theta=15,
xlab="s",
ylab="t",
col = "light blue",
zlab="",
main=expression(paste(tilde(Sigma))),
ticktype = "detailed",
expand = 0.7,
cex.axis=0.6,
cex.lab=0.6)
entropy_loss(solve(S),Sigma)
min(entrpy_loss)
|
#
# This is a Shiny web application. You can run the application by clicking
# the 'Run App' button above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
#
library(shiny)
library(shinydashboard)
library(ggplot2)
library(lubridate)
library(DT)
library(leaflet)
library(scales)
library(stringi)
hurdata <- read.csv("treated-data-atlantic.csv", header=TRUE, dec=',', row.names=NULL)
#making col of proper date and of years
hurdata$newDate <-ymd(hurdata$date)
hurdata$year <- year(hurdata$newDate)
#only hurricanes between 2005-2018
# hurdata <- subset(hurdata, (year > 2004 & year < 2019) )
# hurdata$newlat <- substr(hurdata$lat, 2, 5)
# hurdata$newlon <- substr(hurdata$lon, 3, 6)
# hurdata$newlat <- as.numeric(as.character(hurdata$newlat))
# hurdata$newlon <- as.numeric(as.character(hurdata$newlon ))
hurdata$newlat <- as.numeric(as.character(hurdata$lat))
hurdata$newlon <- as.numeric(as.character(hurdata$lon ))
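# the source file appears to store longitude as positive degrees west, so flip
# the sign to get the east-positive values leaflet expects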
hurdata$newlon <- hurdata$newlon * -1
pal <- colorFactor(
palette = 'Dark2',
domain = hurdata$name
)
# maximum wind speed recorded for each cyclone, sorted in decreasing order
hurdata.windSpeed <- aggregate(maxWindSpeed ~ cycloneNumber, data = hurdata, FUN = max)
hurdata.windSpeed <- hurdata.windSpeed[order(hurdata.windSpeed$maxWindSpeed, decreasing = TRUE), ]
colnames(hurdata.windSpeed) <- c("Cyclone ID", "Wind Speed")
#
# m = leaflet(hurdata) %>%
# addTiles() %>%
# # addPolylines(lng = ~newlon, lat = ~newlat, color = "red", opacity = 0.25, weight = 1) %>%
# addCircles(lng = ~newlon, lat = ~newlat, weight = 1,
# radius = ~sqrt(maxWindSpeed) * 1750, color = ~pal(name),
#
# popup=paste("Hurricane Name: ", hurdata$sirName, "<br>",
# "Date: ", hurdata$newDate, "<br>",
# "Time: ", hurdata$time, "<br>",
# "Max Wind Speed: ", hurdata$maxWindSpeed, "<br>",
# "Min Wind Pressure: ", hurdata$minWindPressure, "<br>",
# "[Lat, Lon]: [", hurdata$newlat, ", ", hurdata$newlon, "]")
# )
#
# Define UI for application that draws a histogram
ui <- dashboardPage(
dashboardHeader(title="CS 424 - Project 2 Alpha"),
dashboardSidebar(
sidebarMenu(
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL)
# selectInput("Date", "Select a date:", hurdata$date)
)
),
dashboardBody(
column(12,
fluidRow(
box(title="Map", solidHeader = TRUE, status="primary", width=12,
leafletOutput("map", height=400)
)
),
fluidRow(
box(title="List by max windspeed", solidHeader=TRUE, status="primary", width=12,
dataTableOutput("windspd", height=100))
),
fluidRow(
box(title="About", solidHeader=TRUE, status="primary", width=12,
htmlOutput("about", height=200))
)
)
)
)
# Define server logic required to draw a histogram
server <- function(input, output) {
output$map <- renderLeaflet(
{
# m
leaflet(hurdata) %>%
addTiles() %>%
addPolylines(lng = ~newlon, lat = ~newlat, color = "red", opacity = 0.25, weight = 1) %>%
addCircles(lng = ~newlon, lat = ~newlat, weight = 1,
radius = ~sqrt(maxWindSpeed) * 1750, color = ~pal(name),
popup=paste("Hurricane Name: ", hurdata$sirName, "<br>",
"Date: ", hurdata$newDate, "<br>",
"Time: ", hurdata$time, "<br>",
"Max Wind Speed: ", hurdata$maxWindSpeed, "<br>",
"Min Wind Pressure: ", hurdata$minWindPressure, "<br>",
"[Lat, Lon]: [", hurdata$newlat, ", ", hurdata$newlon, "]")
) %>%
setView(lng=-35, lat=6, zoom=4)
}
)
output$windspd <- DT::renderDataTable(
DT::datatable(
{
hurdata.windSpeed
},
options = list(searching=FALSE, pageLength=10, lengthChange=FALSE, rownames=FALSE)
)
)
output$about <- renderUI({
str0 <- paste("- Dashboard made by Brandon Graver, Nicholas Abbasi and Ho Chon")
str1 <- paste("- Libraries used: Shiny, Shinydashboard, leaflet, DT, ggplot2, lubridate, stringr.")
HTML(paste(str0, str1, sep='<br>'))
})
}
# Run the application
shinyApp(ui = ui, server = server, options = list(port = 9001))
| /app.R | no_license | bgraver/cs424-project2 | R | false | false | 5,127 | r | #
# This is a Shiny web application. You can run the application by clicking
# the 'Run App' button above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
#
library(shiny)
library(shinydashboard)
library(ggplot2)
library(lubridate)
library(DT)
library(leaflet)
library(scales)
library(stringi)
hurdata <- read.csv("treated-data-atlantic.csv", header=TRUE, dec=',', row.names=NULL)
#making col of proper date and of years
hurdata$newDate <-ymd(hurdata$date)
hurdata$year <- year(hurdata$newDate)
#only hurricanes between 2005-2018
# hurdata <- subset(hurdata, (year > 2004 & year < 2019) )
# hurdata$newlat <- substr(hurdata$lat, 2, 5)
# hurdata$newlon <- substr(hurdata$lon, 3, 6)
# hurdata$newlat <- as.numeric(as.character(hurdata$newlat))
# hurdata$newlon <- as.numeric(as.character(hurdata$newlon ))
hurdata$newlat <- as.numeric(as.character(hurdata$lat))
hurdata$newlon <- as.numeric(as.character(hurdata$lon ))
hurdata$newlon <- hurdata$newlon * -1
pal <- colorFactor(
palette = 'Dark2',
domain = hurdata$name
)
# maximum wind speed recorded for each cyclone, sorted in decreasing order
hurdata.windSpeed <- aggregate(maxWindSpeed ~ cycloneNumber, data = hurdata, FUN = max)
hurdata.windSpeed <- hurdata.windSpeed[order(hurdata.windSpeed$maxWindSpeed, decreasing = TRUE), ]
colnames(hurdata.windSpeed) <- c("Cyclone ID", "Wind Speed")
#
# m = leaflet(hurdata) %>%
# addTiles() %>%
# # addPolylines(lng = ~newlon, lat = ~newlat, color = "red", opacity = 0.25, weight = 1) %>%
# addCircles(lng = ~newlon, lat = ~newlat, weight = 1,
# radius = ~sqrt(maxWindSpeed) * 1750, color = ~pal(name),
#
# popup=paste("Hurricane Name: ", hurdata$sirName, "<br>",
# "Date: ", hurdata$newDate, "<br>",
# "Time: ", hurdata$time, "<br>",
# "Max Wind Speed: ", hurdata$maxWindSpeed, "<br>",
# "Min Wind Pressure: ", hurdata$minWindPressure, "<br>",
# "[Lat, Lon]: [", hurdata$newlat, ", ", hurdata$newlon, "]")
# )
#
# Define UI for application that draws a histogram
ui <- dashboardPage(
dashboardHeader(title="CS 424 - Project 2 Alpha"),
dashboardSidebar(
sidebarMenu(
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL),
menuItem("", tabName="Empty", icon=NULL)
# selectInput("Date", "Select a date:", hurdata$date)
)
),
dashboardBody(
column(12,
fluidRow(
box(title="Map", solidHeader = TRUE, status="primary", width=12,
leafletOutput("map", height=400)
)
),
fluidRow(
box(title="List by max windspeed", solidHeader=TRUE, status="primary", width=12,
dataTableOutput("windspd", height=100))
),
fluidRow(
box(title="About", solidHeader=TRUE, status="primary", width=12,
htmlOutput("about", height=200))
)
)
)
)
# Define server logic required to draw a histogram
server <- function(input, output) {
output$map <- renderLeaflet(
{
# m
leaflet(hurdata) %>%
addTiles() %>%
addPolylines(lng = ~newlon, lat = ~newlat, color = "red", opacity = 0.25, weight = 1) %>%
addCircles(lng = ~newlon, lat = ~newlat, weight = 1,
radius = ~sqrt(maxWindSpeed) * 1750, color = ~pal(name),
popup=paste("Hurricane Name: ", hurdata$sirName, "<br>",
"Date: ", hurdata$newDate, "<br>",
"Time: ", hurdata$time, "<br>",
"Max Wind Speed: ", hurdata$maxWindSpeed, "<br>",
"Min Wind Pressure: ", hurdata$minWindPressure, "<br>",
"[Lat, Lon]: [", hurdata$newlat, ", ", hurdata$newlon, "]")
) %>%
setView(lng=-35, lat=6, zoom=4)
}
)
output$windspd <- DT::renderDataTable(
DT::datatable(
{
hurdata.windSpeed
},
options = list(searching=FALSE, pageLength=10, lengthChange=FALSE, rownames=FALSE)
)
)
output$about <- renderUI({
str0 <- paste("- Dashboard made by Brandon Graver, Nicholas Abbasi and Ho Chon")
str1 <- paste("- Libraries used: Shiny, Shinydashboard, leaflet, DT, ggplot2, lubridate, stringr.")
HTML(paste(str0, str1, sep='<br>'))
})
}
# Run the application
shinyApp(ui = ui, server = server, options = list(port = 9001))
|
tm <- vector("list", 11)
r <- names(tm) <- seq(0, 0.5, by=0.05)
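# loadtm parses a tab-delimited transition-matrix file: each state label ending
# in '::' is followed by 'state:probability' pairs giving the transition
# probabilities out of that state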
loadtm <-
function(file)
{
x <- scan(file, sep="\t", what=character())
wh <- grep("::$", x)
y <- vector("list", length(wh))
names(y) <- sub("::", "", x[wh])
wh <- c(wh, length(x)+1)
for(i in 1:(length(wh)-1)) {
y[[i]] <- as.numeric(x[seq(wh[i]+2, wh[i+1], by=2)])
names(y[[i]]) <- sub(":", "", x[seq(wh[i]+1, wh[i+1]-1, by=2)])
}
y
}
for(i in seq(along=r)) {
j <- r[i]*100
if(j < 10) j <- paste("0", j, sep="")
tm[[i]] <- loadtm(paste("tm4A_", j, ".txt", sep=""))
}
rn <- names(tm[[1]])
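# split each state label 'h1|h2xh3|h4' into the four parental haplotypes and
# then into their alleles at the first (suffix a) and second (suffix b) locus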
ind1 <- sapply(strsplit(rn, "x"), function(a) a[1])
ind2 <- sapply(strsplit(rn, "x"), function(a) a[2])
hap11 <- sapply(strsplit(ind1, "\\|"), function(a) a[1])
hap12 <- sapply(strsplit(ind1, "\\|"), function(a) a[2])
hap21 <- sapply(strsplit(ind2, "\\|"), function(a) a[1])
hap22 <- sapply(strsplit(ind2, "\\|"), function(a) a[2])
hap11a <- sapply(hap11, substr, 1, 1)
hap11b <- sapply(hap11, substr, 2, 2)
hap12a <- sapply(hap12, substr, 1, 1)
hap12b <- sapply(hap12, substr, 2, 2)
hap21a <- sapply(hap21, substr, 1, 1)
hap21b <- sapply(hap21, substr, 2, 2)
hap22a <- sapply(hap22, substr, 1, 1)
hap22b <- sapply(hap22, substr, 2, 2)
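# B spreads each full two-locus state over the 17 reduced patterns named in its
# columns; the 0.25/0.5 increments below appear to weight the parental haplotype
# configurations compatible with each pattern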
B <- matrix(0, nrow=length(tm[[1]]), ncol=17)
dimnames(B) <- list(rn, c("AA|ABx--|--", "AA|--xAB|--", "AA|--xA-|-B",
"A-|-AxAB|--", "A-|-AxA-|-B", "AA|-Bx--|--",
"AA|A-x-B|--", "AA|--x-B|--", "AA|-BxA-|--",
"AB|-AxA-|--", "AB|A-x-A|--", "AB|--x-A|--",
"A-|-Bx-A|--", "A-|A-x-A|-B", "A-|--x-A|-B",
"A-|-Ax-B|--", "AB|-Ax--|--"))
B[hap11=="AA" & hap12=="AB",1] <- B[hap11=="AA" & hap12=="AB",1] + 0.5
B[hap21=="AA" & hap22=="AB",1] <- B[hap21=="AA" & hap22=="AB",1] + 0.5
B[hap11=="AA" & hap21=="AB",2] <- B[hap11=="AA" & hap21=="AB",2] + 0.25
B[hap11=="AA" & hap22=="AB",2] <- B[hap11=="AA" & hap22=="AB",2] + 0.25
B[hap12=="AA" & hap21=="AB",2] <- B[hap12=="AA" & hap21=="AB",2] + 0.25
B[hap12=="AA" & hap22=="AB",2] <- B[hap12=="AA" & hap22=="AB",2] + 0.25
B[hap21=="AA" & hap11=="AB",2] <- B[hap21=="AA" & hap11=="AB",2] + 0.25
B[hap21=="AA" & hap12=="AB",2] <- B[hap21=="AA" & hap12=="AB",2] + 0.25
B[hap22=="AA" & hap11=="AB",2] <- B[hap22=="AA" & hap11=="AB",2] + 0.25
B[hap22=="AA" & hap12=="AB",2] <- B[hap22=="AA" & hap12=="AB",2] + 0.25
B[hap11=="AA" & hap21a=="A" & hap22b=="B",3] <- B[hap11=="AA" & hap21a=="A" & hap22b=="B",3] + 0.25
B[hap11=="AA" & hap22a=="A" & hap21b=="B",3] <- B[hap11=="AA" & hap22a=="A" & hap21b=="B",3] + 0.25
B[hap12=="AA" & hap21a=="A" & hap22b=="B",3] <- B[hap12=="AA" & hap21a=="A" & hap22b=="B",3] + 0.25
B[hap12=="AA" & hap22a=="A" & hap21b=="B",3] <- B[hap12=="AA" & hap22a=="A" & hap21b=="B",3] + 0.25
B[hap21=="AA" & hap11a=="A" & hap12b=="B",3] <- B[hap21=="AA" & hap11a=="A" & hap12b=="B",3] + 0.25
B[hap21=="AA" & hap12a=="A" & hap11b=="B",3] <- B[hap21=="AA" & hap12a=="A" & hap11b=="B",3] + 0.25
B[hap22=="AA" & hap11a=="A" & hap12b=="B",3] <- B[hap22=="AA" & hap11a=="A" & hap12b=="B",3] + 0.25
B[hap22=="AA" & hap12a=="A" & hap11b=="B",3] <- B[hap22=="AA" & hap12a=="A" & hap11b=="B",3] + 0.25
B[hap11=="AB" & hap21a=="A" & hap22b=="A",4] <- B[hap11=="AB" & hap21a=="A" & hap22b=="A",4] + 0.25
B[hap11=="AB" & hap22a=="A" & hap21b=="A",4] <- B[hap11=="AB" & hap22a=="A" & hap21b=="A",4] + 0.25
B[hap12=="AB" & hap21a=="A" & hap22b=="A",4] <- B[hap12=="AB" & hap21a=="A" & hap22b=="A",4] + 0.25
B[hap12=="AB" & hap22a=="A" & hap21b=="A",4] <- B[hap12=="AB" & hap22a=="A" & hap21b=="A",4] + 0.25
B[hap21=="AB" & hap11a=="A" & hap12b=="A",4] <- B[hap21=="AB" & hap11a=="A" & hap12b=="A",4] + 0.25
B[hap21=="AB" & hap12a=="A" & hap11b=="A",4] <- B[hap21=="AB" & hap12a=="A" & hap11b=="A",4] + 0.25
B[hap22=="AB" & hap11a=="A" & hap12b=="A",4] <- B[hap22=="AB" & hap11a=="A" & hap12b=="A",4] + 0.25
B[hap22=="AB" & hap12a=="A" & hap11b=="A",4] <- B[hap22=="AB" & hap12a=="A" & hap11b=="A",4] + 0.25
B[hap11a=="A" & hap12b=="A" & hap21a=="A" & hap22b=="B",5] <- B[hap11a=="A" & hap12b=="A" & hap21a=="A" & hap22b=="B",5] + 0.25
B[hap11a=="A" & hap12b=="A" & hap22a=="A" & hap21b=="B",5] <- B[hap11a=="A" & hap12b=="A" & hap22a=="A" & hap21b=="B",5] + 0.25
B[hap12a=="A" & hap11b=="A" & hap21a=="A" & hap22b=="B",5] <- B[hap12a=="A" & hap11b=="A" & hap21a=="A" & hap22b=="B",5] + 0.25
B[hap12a=="A" & hap11b=="A" & hap22a=="A" & hap21b=="B",5] <- B[hap12a=="A" & hap11b=="A" & hap22a=="A" & hap21b=="B",5] + 0.25
B[hap21a=="A" & hap22b=="A" & hap11a=="A" & hap12b=="B",5] <- B[hap21a=="A" & hap22b=="A" & hap11a=="A" & hap12b=="B",5] + 0.25
B[hap21a=="A" & hap22b=="A" & hap12a=="A" & hap11b=="B",5] <- B[hap21a=="A" & hap22b=="A" & hap12a=="A" & hap11b=="B",5] + 0.25
B[hap22a=="A" & hap21b=="A" & hap11a=="A" & hap12b=="B",5] <- B[hap22a=="A" & hap21b=="A" & hap11a=="A" & hap12b=="B",5] + 0.25
B[hap22a=="A" & hap21b=="A" & hap12a=="A" & hap11b=="B",5] <- B[hap22a=="A" & hap21b=="A" & hap12a=="A" & hap11b=="B",5] + 0.25
B[hap11=="AA" & hap12b=="B",6] <- B[hap11=="AA" & hap12b=="B",6] + 0.5
B[hap12=="AA" & hap11b=="B",6] <- B[hap12=="AA" & hap11b=="B",6] + 0.5
B[hap21=="AA" & hap22b=="B",6] <- B[hap21=="AA" & hap22b=="B",6] + 0.5
B[hap22=="AA" & hap21b=="B",6] <- B[hap22=="AA" & hap21b=="B",6] + 0.5
B[hap11=="AA" & hap12a=="A" & hap21b=="B",7] <- B[hap11=="AA" & hap12a=="A" & hap21b=="B",7] + 0.25
B[hap11=="AA" & hap12a=="A" & hap22b=="B",7] <- B[hap11=="AA" & hap12a=="A" & hap22b=="B",7] + 0.25
B[hap12=="AA" & hap11a=="A" & hap21b=="B",7] <- B[hap12=="AA" & hap11a=="A" & hap21b=="B",7] + 0.25
B[hap12=="AA" & hap11a=="A" & hap22b=="B",7] <- B[hap12=="AA" & hap11a=="A" & hap22b=="B",7] + 0.25
B[hap21=="AA" & hap22a=="A" & hap11b=="B",7] <- B[hap21=="AA" & hap22a=="A" & hap11b=="B",7] + 0.25
B[hap21=="AA" & hap22a=="A" & hap12b=="B",7] <- B[hap21=="AA" & hap22a=="A" & hap12b=="B",7] + 0.25
B[hap22=="AA" & hap21a=="A" & hap11b=="B",7] <- B[hap22=="AA" & hap21a=="A" & hap11b=="B",7] + 0.25
B[hap22=="AA" & hap21a=="A" & hap12b=="B",7] <- B[hap22=="AA" & hap21a=="A" & hap12b=="B",7] + 0.25
B[hap11=="AA" & hap21b=="B",8] <- B[hap11=="AA" & hap21b=="B",8] + 0.25
B[hap11=="AA" & hap22b=="B",8] <- B[hap11=="AA" & hap22b=="B",8] + 0.25
B[hap12=="AA" & hap21b=="B",8] <- B[hap12=="AA" & hap21b=="B",8] + 0.25
B[hap12=="AA" & hap22b=="B",8] <- B[hap12=="AA" & hap22b=="B",8] + 0.25
B[hap21=="AA" & hap11b=="B",8] <- B[hap21=="AA" & hap11b=="B",8] + 0.25
B[hap21=="AA" & hap12b=="B",8] <- B[hap21=="AA" & hap12b=="B",8] + 0.25
B[hap22=="AA" & hap11b=="B",8] <- B[hap22=="AA" & hap11b=="B",8] + 0.25
B[hap22=="AA" & hap12b=="B",8] <- B[hap22=="AA" & hap12b=="B",8] + 0.25
B[hap11=="AA" & hap12b=="B" & hap21a=="A",9] <- B[hap11=="AA" & hap12b=="B" & hap21a=="A",9] + 0.25
B[hap11=="AA" & hap12b=="B" & hap22a=="A",9] <- B[hap11=="AA" & hap12b=="B" & hap22a=="A",9] + 0.25
B[hap12=="AA" & hap11b=="B" & hap21a=="A",9] <- B[hap12=="AA" & hap11b=="B" & hap21a=="A",9] + 0.25
B[hap12=="AA" & hap11b=="B" & hap22a=="A",9] <- B[hap12=="AA" & hap11b=="B" & hap22a=="A",9] + 0.25
B[hap21=="AA" & hap22b=="B" & hap11a=="A",9] <- B[hap21=="AA" & hap22b=="B" & hap11a=="A",9] + 0.25
B[hap21=="AA" & hap22b=="B" & hap12a=="A",9] <- B[hap21=="AA" & hap22b=="B" & hap12a=="A",9] + 0.25
B[hap22=="AA" & hap21b=="B" & hap11a=="A",9] <- B[hap22=="AA" & hap21b=="B" & hap11a=="A",9] + 0.25
B[hap22=="AA" & hap21b=="B" & hap12a=="A",9] <- B[hap22=="AA" & hap21b=="B" & hap12a=="A",9] + 0.25
B[hap11=="AB" & hap12b=="A" & hap21a=="A",10] <- B[hap11=="AB" & hap12b=="A" & hap21a=="A",10] + 0.25
B[hap11=="AB" & hap12b=="A" & hap22a=="A",10] <- B[hap11=="AB" & hap12b=="A" & hap22a=="A",10] + 0.25
B[hap12=="AB" & hap11b=="A" & hap21a=="A",10] <- B[hap12=="AB" & hap11b=="A" & hap21a=="A",10] + 0.25
B[hap12=="AB" & hap11b=="A" & hap22a=="A",10] <- B[hap12=="AB" & hap11b=="A" & hap22a=="A",10] + 0.25
B[hap21=="AB" & hap22b=="A" & hap11a=="A",10] <- B[hap21=="AB" & hap22b=="A" & hap11a=="A",10] + 0.25
B[hap21=="AB" & hap22b=="A" & hap12a=="A",10] <- B[hap21=="AB" & hap22b=="A" & hap12a=="A",10] + 0.25
B[hap22=="AB" & hap21b=="A" & hap11a=="A",10] <- B[hap22=="AB" & hap21b=="A" & hap11a=="A",10] + 0.25
B[hap22=="AB" & hap21b=="A" & hap12a=="A",10] <- B[hap22=="AB" & hap21b=="A" & hap12a=="A",10] + 0.25
B[hap11=="AB" & hap12a=="A" & hap21b=="A",11] <- B[hap11=="AB" & hap12a=="A" & hap21b=="A",11] + 0.25
B[hap11=="AB" & hap12a=="A" & hap22b=="A",11] <- B[hap11=="AB" & hap12a=="A" & hap22b=="A",11] + 0.25
B[hap12=="AB" & hap11a=="A" & hap21b=="A",11] <- B[hap12=="AB" & hap11a=="A" & hap21b=="A",11] + 0.25
B[hap12=="AB" & hap11a=="A" & hap22b=="A",11] <- B[hap12=="AB" & hap11a=="A" & hap22b=="A",11] + 0.25
B[hap21=="AB" & hap22a=="A" & hap11b=="A",11] <- B[hap21=="AB" & hap22a=="A" & hap11b=="A",11] + 0.25
B[hap21=="AB" & hap22a=="A" & hap12b=="A",11] <- B[hap21=="AB" & hap22a=="A" & hap12b=="A",11] + 0.25
B[hap22=="AB" & hap21a=="A" & hap11b=="A",11] <- B[hap22=="AB" & hap21a=="A" & hap11b=="A",11] + 0.25
B[hap22=="AB" & hap21a=="A" & hap12b=="A",11] <- B[hap22=="AB" & hap21a=="A" & hap12b=="A",11] + 0.25
B[hap11=="AB" & hap21b=="A",12] <- B[hap11=="AB" & hap21b=="A",12] + 0.25
B[hap11=="AB" & hap22b=="A",12] <- B[hap11=="AB" & hap22b=="A",12] + 0.25
B[hap12=="AB" & hap21b=="A",12] <- B[hap12=="AB" & hap21b=="A",12] + 0.25
B[hap12=="AB" & hap22b=="A",12] <- B[hap12=="AB" & hap22b=="A",12] + 0.25
B[hap21=="AB" & hap11b=="A",12] <- B[hap21=="AB" & hap11b=="A",12] + 0.25
B[hap21=="AB" & hap12b=="A",12] <- B[hap21=="AB" & hap12b=="A",12] + 0.25
B[hap22=="AB" & hap11b=="A",12] <- B[hap22=="AB" & hap11b=="A",12] + 0.25
B[hap22=="AB" & hap12b=="A",12] <- B[hap22=="AB" & hap12b=="A",12] + 0.25
B[hap11a=="A" & hap12b=="B" & hap21b=="A",13] <- B[hap11a=="A" & hap12b=="B" & hap21b=="A",13] + 0.25
B[hap11a=="A" & hap12b=="B" & hap22b=="A",13] <- B[hap11a=="A" & hap12b=="B" & hap22b=="A",13] + 0.25
B[hap12a=="A" & hap11b=="B" & hap21b=="A",13] <- B[hap12a=="A" & hap11b=="B" & hap21b=="A",13] + 0.25
B[hap12a=="A" & hap11b=="B" & hap22b=="A",13] <- B[hap12a=="A" & hap11b=="B" & hap22b=="A",13] + 0.25
B[hap21a=="A" & hap22b=="B" & hap11b=="A",13] <- B[hap21a=="A" & hap22b=="B" & hap11b=="A",13] + 0.25
B[hap21a=="A" & hap22b=="B" & hap12b=="A",13] <- B[hap21a=="A" & hap22b=="B" & hap12b=="A",13] + 0.25
B[hap22a=="A" & hap21b=="B" & hap11b=="A",13] <- B[hap22a=="A" & hap21b=="B" & hap11b=="A",13] + 0.25
B[hap22a=="A" & hap21b=="B" & hap12b=="A",13] <- B[hap22a=="A" & hap21b=="B" & hap12b=="A",13] + 0.25
B[hap11a=="A" & hap12a=="A" & hap21b=="A" & hap22b=="B",14] <- B[hap11a=="A" & hap12a=="A" & hap21b=="A" & hap22b=="B",14] + 0.5
B[hap11a=="A" & hap12a=="A" & hap21b=="B" & hap22b=="A",14] <- B[hap11a=="A" & hap12a=="A" & hap21b=="B" & hap22b=="A",14] + 0.5
B[hap21a=="A" & hap22a=="A" & hap11b=="A" & hap12b=="B",14] <- B[hap21a=="A" & hap22a=="A" & hap11b=="A" & hap12b=="B",14] + 0.5
B[hap21a=="A" & hap22a=="A" & hap11b=="B" & hap12b=="A",14] <- B[hap21a=="A" & hap22a=="A" & hap11b=="B" & hap12b=="A",14] + 0.5
B[hap11a=="A" & hap21b=="A" & hap22b=="B",15] <- B[hap11a=="A" & hap21b=="A" & hap22b=="B",15] + 0.25
B[hap11a=="A" & hap21b=="B" & hap22b=="A",15] <- B[hap11a=="A" & hap21b=="B" & hap22b=="A",15] + 0.25
B[hap21a=="A" & hap11b=="A" & hap12b=="B",15] <- B[hap21a=="A" & hap11b=="A" & hap12b=="B",15] + 0.25
B[hap21a=="A" & hap11b=="B" & hap12b=="A",15] <- B[hap21a=="A" & hap11b=="B" & hap12b=="A",15] + 0.25
B[hap12a=="A" & hap21b=="A" & hap22b=="B",15] <- B[hap12a=="A" & hap21b=="A" & hap22b=="B",15] + 0.25
B[hap12a=="A" & hap21b=="B" & hap22b=="A",15] <- B[hap12a=="A" & hap21b=="B" & hap22b=="A",15] + 0.25
B[hap22a=="A" & hap11b=="A" & hap12b=="B",15] <- B[hap22a=="A" & hap11b=="A" & hap12b=="B",15] + 0.25
B[hap22a=="A" & hap11b=="B" & hap12b=="A",15] <- B[hap22a=="A" & hap11b=="B" & hap12b=="A",15] + 0.25
B[hap11a=="A" & hap12b=="A" & hap21b=="B",16] <- B[hap11a=="A" & hap12b=="A" & hap21b=="B",16] + 0.25
B[hap11a=="A" & hap12b=="A" & hap22b=="B",16] <- B[hap11a=="A" & hap12b=="A" & hap22b=="B",16] + 0.25
B[hap12a=="A" & hap11b=="A" & hap21b=="B",16] <- B[hap12a=="A" & hap11b=="A" & hap21b=="B",16] + 0.25
B[hap12a=="A" & hap11b=="A" & hap22b=="B",16] <- B[hap12a=="A" & hap11b=="A" & hap22b=="B",16] + 0.25
B[hap21a=="A" & hap22b=="A" & hap11b=="B",16] <- B[hap21a=="A" & hap22b=="A" & hap11b=="B",16] + 0.25
B[hap21a=="A" & hap22b=="A" & hap12b=="B",16] <- B[hap21a=="A" & hap22b=="A" & hap12b=="B",16] + 0.25
B[hap22a=="A" & hap21b=="A" & hap11b=="B",16] <- B[hap22a=="A" & hap21b=="A" & hap11b=="B",16] + 0.25
B[hap22a=="A" & hap21b=="A" & hap12b=="B",16] <- B[hap22a=="A" & hap21b=="A" & hap12b=="B",16] + 0.25
B[hap11=="AB" & hap12b=="A",17] <- B[hap11=="AB" & hap12b=="A",17] + 0.5
B[hap12=="AB" & hap11b=="A",17] <- B[hap12=="AB" & hap11b=="A",17] + 0.5
B[hap21=="AB" & hap22b=="A",17] <- B[hap21=="AB" & hap22b=="A",17] + 0.5
B[hap22=="AB" & hap21b=="A",17] <- B[hap22=="AB" & hap21b=="A",17] + 0.5
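# z[j, , i] collapses row j of the transition matrix at recombination fraction
# r[i] onto the reduced patterns via B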
z <- array(0, dim=c(length(rn), ncol(B), length(tm)))
dimnames(z) <- list(rn, colnames(B), names(tm))
#for(i in seq(along=tm)) {
for(i in 2:2) {
for(j in seq(along=tm[[i]])) {
if(j==round(j,-3)) cat(i,j,"\n")
zr <- rep(0, length(rn))
names(zr) <- rn
zr[names(tm[[i]][[j]])] <- tm[[i]][[j]]
z[j,,i] <- zr %*% B
}
}
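# recover the transition matrix A among the reduced patterns by regressing each
# collapsed column of z on B, i.e. solving z = B %*% A one column at a time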
A <- array(dim=c(ncol(B), ncol(B), length(tm)))
dimnames(A) <- list(colnames(B), colnames(B), names(tm))
#for(i in seq(along=tm)) {
for(i in 2:2) {
cat(i,"\n")
for(j in 1:ncol(B)) {
A[,j,i] <- lm(I(z[,j,i]) ~ -1 + B)$coef
}
}
A[abs(A) < 1e-12] <- 0
#tm05 <- matrix(0,ncol=length(rn), nrow=length(rn))
#dimnames(tm05) <- list(rn,rn)
#for(i in seq(along=tm[[2]]))
# tm05[i,names(tm[[2]][[i]])] <- tm[[2]][[i]]
#pi0 <- rep(0, length(rn))
#names(pi0) <- rn
#pi0["AA|BBxCC|DD"] <- 1
#
#pik <- rep(0, 10)
#pik[1] <- pi0 %*% B[,1]
#D <- tm05
#pik[2] <- pi0 %*% tm05 %*% B[,1]
#for(k in 3:10) {
# D <- D %*% tm05
# pik[k] <- pi0 %*% D %*% B[,1]
#}
| /Calculations/TwolociA4/StudyAAAB/studyAAAB.R | permissive | kbroman/preCCProbPaper | R | false | false | 14,011 | r | tm <- vector("list", 11)
r <- names(tm) <- seq(0, 0.5, by=0.05)
loadtm <-
function(file)
{
x <- scan(file, sep="\t", what=character())
wh <- grep("::$", x)
y <- vector("list", length(wh))
names(y) <- sub("::", "", x[wh])
wh <- c(wh, length(x)+1)
for(i in 1:(length(wh)-1)) {
y[[i]] <- as.numeric(x[seq(wh[i]+2, wh[i+1], by=2)])
names(y[[i]]) <- sub(":", "", x[seq(wh[i]+1, wh[i+1]-1, by=2)])
}
y
}
for(i in seq(along=r)) {
j <- r[i]*100
if(j < 10) j <- paste("0", j, sep="")
tm[[i]] <- loadtm(paste("tm4A_", j, ".txt", sep=""))
}
rn <- names(tm[[1]])
ind1 <- sapply(strsplit(rn, "x"), function(a) a[1])
ind2 <- sapply(strsplit(rn, "x"), function(a) a[2])
hap11 <- sapply(strsplit(ind1, "\\|"), function(a) a[1])
hap12 <- sapply(strsplit(ind1, "\\|"), function(a) a[2])
hap21 <- sapply(strsplit(ind2, "\\|"), function(a) a[1])
hap22 <- sapply(strsplit(ind2, "\\|"), function(a) a[2])
hap11a <- sapply(hap11, substr, 1, 1)
hap11b <- sapply(hap11, substr, 2, 2)
hap12a <- sapply(hap12, substr, 1, 1)
hap12b <- sapply(hap12, substr, 2, 2)
hap21a <- sapply(hap21, substr, 1, 1)
hap21b <- sapply(hap21, substr, 2, 2)
hap22a <- sapply(hap22, substr, 1, 1)
hap22b <- sapply(hap22, substr, 2, 2)
B <- matrix(0, nrow=length(tm[[1]]), ncol=17)
dimnames(B) <- list(rn, c("AA|ABx--|--", "AA|--xAB|--", "AA|--xA-|-B",
"A-|-AxAB|--", "A-|-AxA-|-B", "AA|-Bx--|--",
"AA|A-x-B|--", "AA|--x-B|--", "AA|-BxA-|--",
"AB|-AxA-|--", "AB|A-x-A|--", "AB|--x-A|--",
"A-|-Bx-A|--", "A-|A-x-A|-B", "A-|--x-A|-B",
"A-|-Ax-B|--", "AB|-Ax--|--"))
B[hap11=="AA" & hap12=="AB",1] <- B[hap11=="AA" & hap12=="AB",1] + 0.5
B[hap21=="AA" & hap22=="AB",1] <- B[hap21=="AA" & hap22=="AB",1] + 0.5
B[hap11=="AA" & hap21=="AB",2] <- B[hap11=="AA" & hap21=="AB",2] + 0.25
B[hap11=="AA" & hap22=="AB",2] <- B[hap11=="AA" & hap22=="AB",2] + 0.25
B[hap12=="AA" & hap21=="AB",2] <- B[hap12=="AA" & hap21=="AB",2] + 0.25
B[hap12=="AA" & hap22=="AB",2] <- B[hap12=="AA" & hap22=="AB",2] + 0.25
B[hap21=="AA" & hap11=="AB",2] <- B[hap21=="AA" & hap11=="AB",2] + 0.25
B[hap21=="AA" & hap12=="AB",2] <- B[hap21=="AA" & hap12=="AB",2] + 0.25
B[hap22=="AA" & hap11=="AB",2] <- B[hap22=="AA" & hap11=="AB",2] + 0.25
B[hap22=="AA" & hap12=="AB",2] <- B[hap22=="AA" & hap12=="AB",2] + 0.25
B[hap11=="AA" & hap21a=="A" & hap22b=="B",3] <- B[hap11=="AA" & hap21a=="A" & hap22b=="B",3] + 0.25
B[hap11=="AA" & hap22a=="A" & hap21b=="B",3] <- B[hap11=="AA" & hap22a=="A" & hap21b=="B",3] + 0.25
B[hap12=="AA" & hap21a=="A" & hap22b=="B",3] <- B[hap12=="AA" & hap21a=="A" & hap22b=="B",3] + 0.25
B[hap12=="AA" & hap22a=="A" & hap21b=="B",3] <- B[hap12=="AA" & hap22a=="A" & hap21b=="B",3] + 0.25
B[hap21=="AA" & hap11a=="A" & hap12b=="B",3] <- B[hap21=="AA" & hap11a=="A" & hap12b=="B",3] + 0.25
B[hap21=="AA" & hap12a=="A" & hap11b=="B",3] <- B[hap21=="AA" & hap12a=="A" & hap11b=="B",3] + 0.25
B[hap22=="AA" & hap11a=="A" & hap12b=="B",3] <- B[hap22=="AA" & hap11a=="A" & hap12b=="B",3] + 0.25
B[hap22=="AA" & hap12a=="A" & hap11b=="B",3] <- B[hap22=="AA" & hap12a=="A" & hap11b=="B",3] + 0.25
B[hap11=="AB" & hap21a=="A" & hap22b=="A",4] <- B[hap11=="AB" & hap21a=="A" & hap22b=="A",4] + 0.25
B[hap11=="AB" & hap22a=="A" & hap21b=="A",4] <- B[hap11=="AB" & hap22a=="A" & hap21b=="A",4] + 0.25
B[hap12=="AB" & hap21a=="A" & hap22b=="A",4] <- B[hap12=="AB" & hap21a=="A" & hap22b=="A",4] + 0.25
B[hap12=="AB" & hap22a=="A" & hap21b=="A",4] <- B[hap12=="AB" & hap22a=="A" & hap21b=="A",4] + 0.25
B[hap21=="AB" & hap11a=="A" & hap12b=="A",4] <- B[hap21=="AB" & hap11a=="A" & hap12b=="A",4] + 0.25
B[hap21=="AB" & hap12a=="A" & hap11b=="A",4] <- B[hap21=="AB" & hap12a=="A" & hap11b=="A",4] + 0.25
B[hap22=="AB" & hap11a=="A" & hap12b=="A",4] <- B[hap22=="AB" & hap11a=="A" & hap12b=="A",4] + 0.25
B[hap22=="AB" & hap12a=="A" & hap11b=="A",4] <- B[hap22=="AB" & hap12a=="A" & hap11b=="A",4] + 0.25
B[hap11a=="A" & hap12b=="A" & hap21a=="A" & hap22b=="B",5] <- B[hap11a=="A" & hap12b=="A" & hap21a=="A" & hap22b=="B",5] + 0.25
B[hap11a=="A" & hap12b=="A" & hap22a=="A" & hap21b=="B",5] <- B[hap11a=="A" & hap12b=="A" & hap22a=="A" & hap21b=="B",5] + 0.25
B[hap12a=="A" & hap11b=="A" & hap21a=="A" & hap22b=="B",5] <- B[hap12a=="A" & hap11b=="A" & hap21a=="A" & hap22b=="B",5] + 0.25
B[hap12a=="A" & hap11b=="A" & hap22a=="A" & hap21b=="B",5] <- B[hap12a=="A" & hap11b=="A" & hap22a=="A" & hap21b=="B",5] + 0.25
B[hap21a=="A" & hap22b=="A" & hap11a=="A" & hap12b=="B",5] <- B[hap21a=="A" & hap22b=="A" & hap11a=="A" & hap12b=="B",5] + 0.25
B[hap21a=="A" & hap22b=="A" & hap12a=="A" & hap11b=="B",5] <- B[hap21a=="A" & hap22b=="A" & hap12a=="A" & hap11b=="B",5] + 0.25
B[hap22a=="A" & hap21b=="A" & hap11a=="A" & hap12b=="B",5] <- B[hap22a=="A" & hap21b=="A" & hap11a=="A" & hap12b=="B",5] + 0.25
B[hap22a=="A" & hap21b=="A" & hap12a=="A" & hap11b=="B",5] <- B[hap22a=="A" & hap21b=="A" & hap12a=="A" & hap11b=="B",5] + 0.25
B[hap11=="AA" & hap12b=="B",6] <- B[hap11=="AA" & hap12b=="B",6] + 0.5
B[hap12=="AA" & hap11b=="B",6] <- B[hap12=="AA" & hap11b=="B",6] + 0.5
B[hap21=="AA" & hap22b=="B",6] <- B[hap21=="AA" & hap22b=="B",6] + 0.5
B[hap22=="AA" & hap21b=="B",6] <- B[hap22=="AA" & hap21b=="B",6] + 0.5
B[hap11=="AA" & hap12a=="A" & hap21b=="B",7] <- B[hap11=="AA" & hap12a=="A" & hap21b=="B",7] + 0.25
B[hap11=="AA" & hap12a=="A" & hap22b=="B",7] <- B[hap11=="AA" & hap12a=="A" & hap22b=="B",7] + 0.25
B[hap12=="AA" & hap11a=="A" & hap21b=="B",7] <- B[hap12=="AA" & hap11a=="A" & hap21b=="B",7] + 0.25
B[hap12=="AA" & hap11a=="A" & hap22b=="B",7] <- B[hap12=="AA" & hap11a=="A" & hap22b=="B",7] + 0.25
B[hap21=="AA" & hap22a=="A" & hap11b=="B",7] <- B[hap21=="AA" & hap22a=="A" & hap11b=="B",7] + 0.25
B[hap21=="AA" & hap22a=="A" & hap12b=="B",7] <- B[hap21=="AA" & hap22a=="A" & hap12b=="B",7] + 0.25
B[hap22=="AA" & hap21a=="A" & hap11b=="B",7] <- B[hap22=="AA" & hap21a=="A" & hap11b=="B",7] + 0.25
B[hap22=="AA" & hap21a=="A" & hap12b=="B",7] <- B[hap22=="AA" & hap21a=="A" & hap12b=="B",7] + 0.25
B[hap11=="AA" & hap21b=="B",8] <- B[hap11=="AA" & hap21b=="B",8] + 0.25
B[hap11=="AA" & hap22b=="B",8] <- B[hap11=="AA" & hap22b=="B",8] + 0.25
B[hap12=="AA" & hap21b=="B",8] <- B[hap12=="AA" & hap21b=="B",8] + 0.25
B[hap12=="AA" & hap22b=="B",8] <- B[hap12=="AA" & hap22b=="B",8] + 0.25
B[hap21=="AA" & hap11b=="B",8] <- B[hap21=="AA" & hap11b=="B",8] + 0.25
B[hap21=="AA" & hap12b=="B",8] <- B[hap21=="AA" & hap12b=="B",8] + 0.25
B[hap22=="AA" & hap11b=="B",8] <- B[hap22=="AA" & hap11b=="B",8] + 0.25
B[hap22=="AA" & hap12b=="B",8] <- B[hap22=="AA" & hap12b=="B",8] + 0.25
B[hap11=="AA" & hap12b=="B" & hap21a=="A",9] <- B[hap11=="AA" & hap12b=="B" & hap21a=="A",9] + 0.25
B[hap11=="AA" & hap12b=="B" & hap22a=="A",9] <- B[hap11=="AA" & hap12b=="B" & hap22a=="A",9] + 0.25
B[hap12=="AA" & hap11b=="B" & hap21a=="A",9] <- B[hap12=="AA" & hap11b=="B" & hap21a=="A",9] + 0.25
B[hap12=="AA" & hap11b=="B" & hap22a=="A",9] <- B[hap12=="AA" & hap11b=="B" & hap22a=="A",9] + 0.25
B[hap21=="AA" & hap22b=="B" & hap11a=="A",9] <- B[hap21=="AA" & hap22b=="B" & hap11a=="A",9] + 0.25
B[hap21=="AA" & hap22b=="B" & hap12a=="A",9] <- B[hap21=="AA" & hap22b=="B" & hap12a=="A",9] + 0.25
B[hap22=="AA" & hap21b=="B" & hap11a=="A",9] <- B[hap22=="AA" & hap21b=="B" & hap11a=="A",9] + 0.25
B[hap22=="AA" & hap21b=="B" & hap12a=="A",9] <- B[hap22=="AA" & hap21b=="B" & hap12a=="A",9] + 0.25
B[hap11=="AB" & hap12b=="A" & hap21a=="A",10] <- B[hap11=="AB" & hap12b=="A" & hap21a=="A",10] + 0.25
B[hap11=="AB" & hap12b=="A" & hap22a=="A",10] <- B[hap11=="AB" & hap12b=="A" & hap22a=="A",10] + 0.25
B[hap12=="AB" & hap11b=="A" & hap21a=="A",10] <- B[hap12=="AB" & hap11b=="A" & hap21a=="A",10] + 0.25
B[hap12=="AB" & hap11b=="A" & hap22a=="A",10] <- B[hap12=="AB" & hap11b=="A" & hap22a=="A",10] + 0.25
B[hap21=="AB" & hap22b=="A" & hap11a=="A",10] <- B[hap21=="AB" & hap22b=="A" & hap11a=="A",10] + 0.25
B[hap21=="AB" & hap22b=="A" & hap12a=="A",10] <- B[hap21=="AB" & hap22b=="A" & hap12a=="A",10] + 0.25
B[hap22=="AB" & hap21b=="A" & hap11a=="A",10] <- B[hap22=="AB" & hap21b=="A" & hap11a=="A",10] + 0.25
B[hap22=="AB" & hap21b=="A" & hap12a=="A",10] <- B[hap22=="AB" & hap21b=="A" & hap12a=="A",10] + 0.25
B[hap11=="AB" & hap12a=="A" & hap21b=="A",11] <- B[hap11=="AB" & hap12a=="A" & hap21b=="A",11] + 0.25
B[hap11=="AB" & hap12a=="A" & hap22b=="A",11] <- B[hap11=="AB" & hap12a=="A" & hap22b=="A",11] + 0.25
B[hap12=="AB" & hap11a=="A" & hap21b=="A",11] <- B[hap12=="AB" & hap11a=="A" & hap21b=="A",11] + 0.25
B[hap12=="AB" & hap11a=="A" & hap22b=="A",11] <- B[hap12=="AB" & hap11a=="A" & hap22b=="A",11] + 0.25
B[hap21=="AB" & hap22a=="A" & hap11b=="A",11] <- B[hap21=="AB" & hap22a=="A" & hap11b=="A",11] + 0.25
B[hap21=="AB" & hap22a=="A" & hap12b=="A",11] <- B[hap21=="AB" & hap22a=="A" & hap12b=="A",11] + 0.25
B[hap22=="AB" & hap21a=="A" & hap11b=="A",11] <- B[hap22=="AB" & hap21a=="A" & hap11b=="A",11] + 0.25
B[hap22=="AB" & hap21a=="A" & hap12b=="A",11] <- B[hap22=="AB" & hap21a=="A" & hap12b=="A",11] + 0.25
B[hap11=="AB" & hap21b=="A",12] <- B[hap11=="AB" & hap21b=="A",12] + 0.25
B[hap11=="AB" & hap22b=="A",12] <- B[hap11=="AB" & hap22b=="A",12] + 0.25
B[hap12=="AB" & hap21b=="A",12] <- B[hap12=="AB" & hap21b=="A",12] + 0.25
B[hap12=="AB" & hap22b=="A",12] <- B[hap12=="AB" & hap22b=="A",12] + 0.25
B[hap21=="AB" & hap11b=="A",12] <- B[hap21=="AB" & hap11b=="A",12] + 0.25
B[hap21=="AB" & hap12b=="A",12] <- B[hap21=="AB" & hap12b=="A",12] + 0.25
B[hap22=="AB" & hap11b=="A",12] <- B[hap22=="AB" & hap11b=="A",12] + 0.25
B[hap22=="AB" & hap12b=="A",12] <- B[hap22=="AB" & hap12b=="A",12] + 0.25
B[hap11a=="A" & hap12b=="B" & hap21b=="A",13] <- B[hap11a=="A" & hap12b=="B" & hap21b=="A",13] + 0.25
B[hap11a=="A" & hap12b=="B" & hap22b=="A",13] <- B[hap11a=="A" & hap12b=="B" & hap22b=="A",13] + 0.25
B[hap12a=="A" & hap11b=="B" & hap21b=="A",13] <- B[hap12a=="A" & hap11b=="B" & hap21b=="A",13] + 0.25
B[hap12a=="A" & hap11b=="B" & hap22b=="A",13] <- B[hap12a=="A" & hap11b=="B" & hap22b=="A",13] + 0.25
B[hap21a=="A" & hap22b=="B" & hap11b=="A",13] <- B[hap21a=="A" & hap22b=="B" & hap11b=="A",13] + 0.25
B[hap21a=="A" & hap22b=="B" & hap12b=="A",13] <- B[hap21a=="A" & hap22b=="B" & hap12b=="A",13] + 0.25
B[hap22a=="A" & hap21b=="B" & hap11b=="A",13] <- B[hap22a=="A" & hap21b=="B" & hap11b=="A",13] + 0.25
B[hap22a=="A" & hap21b=="B" & hap12b=="A",13] <- B[hap22a=="A" & hap21b=="B" & hap12b=="A",13] + 0.25
B[hap11a=="A" & hap12a=="A" & hap21b=="A" & hap22b=="B",14] <- B[hap11a=="A" & hap12a=="A" & hap21b=="A" & hap22b=="B",14] + 0.5
B[hap11a=="A" & hap12a=="A" & hap21b=="B" & hap22b=="A",14] <- B[hap11a=="A" & hap12a=="A" & hap21b=="B" & hap22b=="A",14] + 0.5
B[hap21a=="A" & hap22a=="A" & hap11b=="A" & hap12b=="B",14] <- B[hap21a=="A" & hap22a=="A" & hap11b=="A" & hap12b=="B",14] + 0.5
B[hap21a=="A" & hap22a=="A" & hap11b=="B" & hap12b=="A",14] <- B[hap21a=="A" & hap22a=="A" & hap11b=="B" & hap12b=="A",14] + 0.5
B[hap11a=="A" & hap21b=="A" & hap22b=="B",15] <- B[hap11a=="A" & hap21b=="A" & hap22b=="B",15] + 0.25
B[hap11a=="A" & hap21b=="B" & hap22b=="A",15] <- B[hap11a=="A" & hap21b=="B" & hap22b=="A",15] + 0.25
B[hap21a=="A" & hap11b=="A" & hap12b=="B",15] <- B[hap21a=="A" & hap11b=="A" & hap12b=="B",15] + 0.25
B[hap21a=="A" & hap11b=="B" & hap12b=="A",15] <- B[hap21a=="A" & hap11b=="B" & hap12b=="A",15] + 0.25
B[hap12a=="A" & hap21b=="A" & hap22b=="B",15] <- B[hap12a=="A" & hap21b=="A" & hap22b=="B",15] + 0.25
B[hap12a=="A" & hap21b=="B" & hap22b=="A",15] <- B[hap12a=="A" & hap21b=="B" & hap22b=="A",15] + 0.25
B[hap22a=="A" & hap11b=="A" & hap12b=="B",15] <- B[hap22a=="A" & hap11b=="A" & hap12b=="B",15] + 0.25
B[hap22a=="A" & hap11b=="B" & hap12b=="A",15] <- B[hap22a=="A" & hap11b=="B" & hap12b=="A",15] + 0.25
B[hap11a=="A" & hap12b=="A" & hap21b=="B",16] <- B[hap11a=="A" & hap12b=="A" & hap21b=="B",16] + 0.25
B[hap11a=="A" & hap12b=="A" & hap22b=="B",16] <- B[hap11a=="A" & hap12b=="A" & hap22b=="B",16] + 0.25
B[hap12a=="A" & hap11b=="A" & hap21b=="B",16] <- B[hap12a=="A" & hap11b=="A" & hap21b=="B",16] + 0.25
B[hap12a=="A" & hap11b=="A" & hap22b=="B",16] <- B[hap12a=="A" & hap11b=="A" & hap22b=="B",16] + 0.25
B[hap21a=="A" & hap22b=="A" & hap11b=="B",16] <- B[hap21a=="A" & hap22b=="A" & hap11b=="B",16] + 0.25
B[hap21a=="A" & hap22b=="A" & hap12b=="B",16] <- B[hap21a=="A" & hap22b=="A" & hap12b=="B",16] + 0.25
B[hap22a=="A" & hap21b=="A" & hap11b=="B",16] <- B[hap22a=="A" & hap21b=="A" & hap11b=="B",16] + 0.25
B[hap22a=="A" & hap21b=="A" & hap12b=="B",16] <- B[hap22a=="A" & hap21b=="A" & hap12b=="B",16] + 0.25
B[hap11=="AB" & hap12b=="A",17] <- B[hap11=="AB" & hap12b=="A",17] + 0.5
B[hap12=="AB" & hap11b=="A",17] <- B[hap12=="AB" & hap11b=="A",17] + 0.5
B[hap21=="AB" & hap22b=="A",17] <- B[hap21=="AB" & hap22b=="A",17] + 0.5
B[hap22=="AB" & hap21b=="A",17] <- B[hap22=="AB" & hap21b=="A",17] + 0.5
z <- array(0, dim=c(length(rn), ncol(B), length(tm)))
dimnames(z) <- list(rn, colnames(B), names(tm))
#for(i in seq(along=tm)) {
for(i in 2:2) {
for(j in seq(along=tm[[i]])) {
if(j==round(j,-3)) cat(i,j,"\n")
zr <- rep(0, length(rn))
names(zr) <- rn
zr[names(tm[[i]][[j]])] <- tm[[i]][[j]]
z[j,,i] <- zr %*% B
}
}
A <- array(dim=c(ncol(B), ncol(B), length(tm)))
dimnames(A) <- list(colnames(B), colnames(B), names(tm))
#for(i in seq(along=tm)) {
for(i in 2:2) {
cat(i,"\n")
for(j in 1:ncol(B)) {
A[,j,i] <- lm(I(z[,j,i]) ~ -1 + B)$coef
}
}
A[abs(A) < 1e-12] <- 0
#tm05 <- matrix(0,ncol=length(rn), nrow=length(rn))
#dimnames(tm05) <- list(rn,rn)
#for(i in seq(along=tm[[2]]))
# tm05[i,names(tm[[2]][[i]])] <- tm[[2]][[i]]
#pi0 <- rep(0, length(rn))
#names(pi0) <- rn
#pi0["AA|BBxCC|DD"] <- 1
#
#pik <- rep(0, 10)
#pik[1] <- pi0 %*% B[,1]
#D <- tm05
#pik[2] <- pi0 %*% tm05 %*% B[,1]
#for(k in 3:10) {
# D <- D %*% tm05
# pik[k] <- pi0 %*% D %*% B[,1]
#}
|
\name{RandomCluster}
\alias{RandomCluster}
\alias{RANDOMCLUSTER}
\title{randomize dates among tips in the BEAST input file}
\description{
This function is similar to "RandomDates" except that in "RandomCluster", samples are grouped into clusters
and the shuffling procedure randomizes dates among the clusters but not within them
(see the manual for more details on this procedure). There are two distinct ways to group the samples into clusters.
The first one is through the upload of a csv file containing the names of the samples (as given in the XML)
and a cluster number. Any positive integer can be used to identify a cluster;
if a "0" is given to any sample, it is excluded from the procedure.
The file containing the classification should be labelled: clusters."name".csv.
An example of such a file, for the Influenza dataset, is distributed with the package.
In a second approach, a model-based clustering classification is automatically performed using the mclust library.
In this case, the option loadCluster should be set to FALSE (loadCluster = F).
If this option is chosen, the new classification is written in a csv file labelled: clusters."name".csv.
}
\usage{
RandomCluster(name, reps = 20, loadCluster = T, writeTrees = T)
}
\arguments{
\item{name}{
The name of the original XML-formatted input file on which to apply the date-randomization procedure.
Quote the name ("example"). The .xml extension should not be included.
}
\item{reps}{
The number of repetitions required by the user. There will be as many date-randomized datasets produced
as the value of reps (default = 20).
}
\item{loadCluster}{
F or T (default T). If T, clusters are loaded from a cluster structure file.
The file containing the cluster structure needs to follow the example provided.
Any tip assigned to cluster "0" will not be included in any randomization.
Tip dates will only be randomized between (and not within) clusters.
The cluster file should be named "clusters.NAME.csv" where NAME is the XML file name.
If F, clusters are calculated using the package "mclust" procedure and an output csv file
containing the cluster structure is produced.
}
\item{writeTrees}{
Set to False (F) if you do not want the trees to be written when running the date-randomized
datasets in BEAST. To make the DRT, only the log files are needed (default = T).
}
}
\details{
The function works only with a .xml file generated with BEAUti
}
\value{
The function returns one or many files (the number is set by the "reps" argument; default is 20)
In each new file, the date values are randomized among tips.
}
\references{
Rieux A & Khatchikian, C. Unpublished.
Drummond AJ, Suchard MA, Xie D & Rambaut A (2012) Bayesian phylogenetics with BEAUti and the BEAST 1.7.
Molecular Biology And Evolution 29: 1969-1973.
Fraley C & Raftery AE (2002) Model-based clustering, discriminant analysis, and density
estimation. Journal of the American Statistical Association 97: 611-631.
}
\examples{
\dontrun{
# using the example files "Flu_BEAST_1.8.xml" and "clusters.Flu.csv" found in example folder
RandomCluster("Flu_BEAST_1.8", reps = 20, loadCluster = T, writeTrees = F)
# produce 20 replicate input files (.xml) in working directory
}
}
\keyword{BEAST Software}
\keyword{phylogenetics}
| /man/RandomCluster.Rd | no_license | Xevkin/TipDatingBeast | R | false | false | 3,474 | rd | \name{RandomCluster}
\alias{RandomCluster}
\alias{RANDOMCLUSTER}
\title{randomize dates among tips in the BEAST input file}
\description{
This function is similar to "RandomDates" except that in "RandomCluster", samples are grouped into clusters
and the shuffling procedure randomizes dates among the clusters but not within them
(see the manual for more details on this procedure). There are two distinct ways to group the samples into clusters.
The first one is through the upload of a csv file containing the names of the samples (as given in the XML)
and a cluster number. Any positive integer can be used to identify a cluster;
if a "0" is given to any sample, it is excluded from the procedure.
The file containing the classification should be labelled: clusters."name".csv.
An example of such a file, for the Influenza dataset, is distributed with the package.
In a second approach, a model-based clustering classification is automatically performed using the mclust library.
In this case, the option loadCluster should be set to FALSE (loadCluster = F).
If this option is chosen, the new classification is written in a csv file labelled: clusters."name".csv.
}
\usage{
RandomCluster(name, reps = 20, loadCluster = T, writeTrees = T)
}
\arguments{
\item{name}{
The name of the original XML-formatted input file on which to apply the date-randomization procedure.
Quote the name ("example"). The .xml extension should not be included.
}
\item{reps}{
The number of repetitions required by the user. There will be as many date-randomized datasets produced
as the value of reps (default = 20).
}
\item{loadCluster}{
F or T (default T). If T, clusters are loaded from a cluster structure file.
The file containing the cluster structure needs to follow the example provided.
Any tip assigned to cluster "0" will not be included in any randomization.
Tip dates will only be randomized between (and not within) clusters.
The cluster file should be named "clusters.NAME.csv" where NAME is the XML file name.
If F, clusters are calculated using the package "mclust" procedure and an output csv file
containing the cluster structure is produced.
}
\item{writeTrees}{
Set to False (F) if you do not want the trees to be written when running the date-randomized
datasets in BEAST. To make the DRT, only the log files are needed (default = T).
}
}
\details{
The function works only with a .xml file generated with BEAUti
}
\value{
The function returns one or many files (the number is set by the "reps" argument; default is 20)
In each new file, the date values are randomized among tips.
}
\references{
Rieux A & Khatchikian, C. Unpublished.
Drummond AJ, Suchard MA, Xie D & Rambaut A (2012) Bayesian phylogenetics with BEAUti and the BEAST 1.7.
Molecular Biology And Evolution 29: 1969-1973.
Fraley C & Raftery AE (2002) Model-based clustering, discriminant analysis, and density
estimation. Journal of the American Statistical Association 97: 611-631.
}
\examples{
\dontrun{
# using the example files "Flu_BEAST_1.8.xml" and "clusters.Flu.csv" found in example folder
RandomCluster("Flu_BEAST_1.8", reps = 20, loadCluster = T, writeTrees = F)
# produce 20 replicate input files (.xml) in working directory
}
}
\keyword{BEAST Software}
\keyword{phylogenetics}
|
context("Atk output")
skip_on_cran()
# function from the IC2 library 1.0-1
# removed from CRAN but available at
# https://cran.r-project.org/src/contrib/Archive/IC2/
calcAtkinson<-function(x, w=NULL, epsilon=1)
{
if (epsilon<0) return(NULL)
if (!is.numeric(x)) return(NULL)
xNA<-sum(as.numeric(is.na(x)))
weighted<-FALSE
wNA<-NULL
if (is.null(w)) w<-rep(1, length(x))
else
{
if (!is.numeric(w)) return(NULL)
weighted<-TRUE
wNA<-sum(as.numeric(is.na(w)))
}
df<-cbind("x"=x,"w"=w)
df<-df[complete.cases(df),, drop=FALSE]
if(nrow(df)==0) return (NULL)
if(any(df[,"x"]<0) || sum(df[,"x"])==0) return(NULL)
if (any(df[,"x"]==0) && epsilon==1) return(NULL)
index<-0
names(index)<-"Atk"
names(epsilon)<-"epsilon"
Atk<-list(ineq= list(index=index,
parameter=epsilon),
nas= list(xNA=xNA, wNA=wNA, totalNA=length(x)-nrow(df)))
class(Atk)<-"ICI"
if(nrow(df)==1) return(Atk)
if (epsilon==1)
{
w<-df[,"w"]/sum(df[,"w"])
xMean<-weighted.mean(df[,"x"],w)
index<-1-(prod(exp(w*log(df[,"x"])))/xMean)
}
else
{
xMean<-weighted.mean(df[,"x"], df[,"w"])
x1<-df[,"x"]/xMean
w<-df[,"w"]/sum(df[,"w"])
param<-1-epsilon
index<-1-(weighted.mean((x1^param), w)^(1/param))
}
names(index)<-"Atk"
Atk[["ineq"]][["index"]]<-index
return(Atk)
}
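# In formula terms, with weights w_i normalized to sum to one and weighted mean
# mu, the index computed above is
#   epsilon != 1:  A = 1 - [ sum_i w_i * (x_i / mu)^(1 - epsilon) ]^(1 / (1 - epsilon))
#   epsilon == 1:  A = 1 - exp( sum_i w_i * log(x_i) ) / mu
# i.e. one minus the ratio of a generalized (geometric, for epsilon = 1) weighted
# mean to the weighted arithmetic mean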
library(survey)
library(laeken)
data(eusilc)
dati = data.frame(IDd = seq( 10000 , 10000 + nrow( eusilc ) - 1 ) , eusilc)
dati_nz <- subset(dati, eqIncome > 0)
des_eusilc <- svydesign(ids = ~rb030, strata =~db040, weights = ~rb050, data = eusilc)
des_eusilc <- convey_prep(des_eusilc)
convey_atk <- svyatk(~eqIncome, subset(des_eusilc, eqIncome > 0) )
IC2_atk <- calcAtkinson( x = dati_nz$eqIncome, w = dati_nz$rb050 )$ineq$index
# point estimate
vardest <- as.numeric( IC2_atk )
convest <- as.numeric(coef(convey_atk)[1])
#domain
# IC2 point estimates
vardestd <- sapply( split(dati_nz, dati_nz$hsize), function(x){ calcAtkinson( x = x$eqIncome, w = x$rb050 )$ineq$index[[1]] } )
vardestd <- as.numeric( vardestd )
# convey point estimates
convestd <- as.numeric( coef( svyby(~eqIncome, ~factor(hsize), subset(des_eusilc, eqIncome > 0), svyatk) ) )
test_that("compare results convey vs IC2",{
expect_equal(vardest,convest)
expect_equal(vardestd, convestd)
})
| /tests/testthat/test-svyatk.R | no_license | cran/convey | R | false | false | 2,433 | r |
dcdf.prop <- function(g, wgt, cluster.ind, cluster, wgt1) {
################################################################################
# Function: dcdf.prop
# Programmer: Tom Kincaid
# Date: December 3, 2002
# Last Revised: January 27, 2004
# Description:
# This function calculates an estimate of the deconvoluted cumulative
# distribution function (CDF) for the proportion of a discrete or an extensive
# resource. The simulation extrapolation deconvolution method (Stefanski and
# Bay, 1996) is used to deconvolute measurement error variance from the
# response. The Horvitz-Thompson ratio estimator, i.e., the ratio of two
# Horvitz-Thompson estimators, is used to calculate the estimate. The
# numerator of the ratio estimates the total of the resource equal to or less
# than a specified value. The denominator of the ratio estimates the size of
# the resource. For a discrete resource size is the number of units in the
# resource. For an extensive resource size is the extent (measure) of the
# resource, i.e., length, area, or volume. The function can accommodate
# single-stage and two-stage samples.
# Arguments:
# g = values of the deconvolution function g(.) evaluated at a specified
# value for the response value for each site.
# wgt = the final adjusted weight (inverse of the sample inclusion
# probability) for each site, which is either the weight for a single-stage
# sample or the stage two weight for a two-stage sample.
# cluster.ind = a logical value that indicates whether the sample is a two-
# stage sample, where TRUE = a two-stage sample and FALSE = not a two-stage
# sample.
# cluster = the stage one sampling unit (primary sampling unit or cluster)
# code for each site.
# wgt1 = the final adjusted stage one weight for each site.
# Output:
# The deconvoluted CDF estimate.
# Other Functions Required:
# None
################################################################################
# Calculate additional required values
if (cluster.ind) {
cluster <- factor(cluster)
ncluster <- length(levels(cluster))
wgt2.lst <- split(wgt, cluster)
wgt1.u <- as.vector(tapply(wgt1, cluster, unique))
}
# Calculate the cdf estimate
if (cluster.ind) {
temp <- array(0, c(ncluster, dim(g[[1]])[2]))
for (i in 1:ncluster) {
temp[i,] <- apply(g[[i]]*wgt2.lst[[i]], 2, sum)
}
cdf <- apply(wgt1.u*temp, 2, sum)
} else {
cdf <- apply(wgt*g, 2, sum)
}
if (cluster.ind)
cdf <- cdf/sum(wgt1*wgt)
else
cdf <- cdf/sum(wgt)
# Return the estimate
cdf
}
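# Illustrative usage sketch (added for clarity; not part of the original
# spsurvey source). The objects g_demo and wgt_demo are hypothetical. For a
# single-stage sample, g is an n-by-k matrix holding the deconvolution
# function evaluated at k CDF values for each of the n sites, and wgt is the
# vector of final adjusted design weights. Wrapped in if (FALSE) so it never
# executes when this file is sourced.
if (FALSE) {
  set.seed(1)
  g_demo <- matrix(runif(5 * 3), nrow = 5, ncol = 3)   # hypothetical g values
  wgt_demo <- c(10, 12, 8, 15, 9)                      # hypothetical weights
  dcdf.prop(g_demo, wgt_demo, cluster.ind = FALSE, cluster = NULL, wgt1 = NULL)
}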
| /R/dcdf.prop.R | no_license | ledugan/spsurvey | R | false | false | 2,719 | r |
# Write the files into directories, one folder per N0 subsidiary
for (file in files_stk) {
df <- get(file)
repo <- "C:/Users/ottavig/Desktop/stock_ods/"
filiale <- substr(file,2,4)
repo <- paste0(repo,filiale)
if(!dir.exists(repo)) dir.create(repo)
fich <- paste0(repo,"/wrld_stkn_",filiale,".txt")
write.table(x = df,file = fich, sep =";",quote = F,row.names = F,col.names= F)
}
for (file in files_exp) {
df <- get(file)
repo <- "C:/Users/ottavig/Desktop/stock_ods/"
filiale <- substr(file,2,4)
repo <- paste0(repo,filiale)
if(!dir.exists(repo)) dir.create(repo)
fich <- paste0(repo,"/wrld_tra_",filiale,".csv")
write.table(x = df,file = fich, sep =";",quote = F,row.names = F,col.names = TRUE)
}
rm(list = ls())
print("#######################################################")
print("#######################################################")
print("############# EXECUTION TERMINE #############")
print("#######################################################")
print("#######################################################")
rm(list = ls())
| /Stock/stock_ecriture.R | no_license | Gottavianoni/R | R | false | false | 1,138 | r |
context("SOAP API")
test_that("testing SOAP API Functionality", {
n <- 2
object <- "Contact"
prefix <- paste0("SOAP-", as.integer(runif(1,1,100000)), "-")
new_contacts <- tibble(FirstName = rep("Test", n),
LastName = paste0("SOAP-Contact-Create-", 1:n),
My_External_Id__c=paste0(prefix, letters[1:n]))
# sf_create ------------------------------------------------------------------
created_records <- sf_create(new_contacts, object_name=object, api_type="SOAP")
expect_is(created_records, "tbl_df")
expect_equal(names(created_records), c("id", "success"))
expect_equal(nrow(created_records), n)
expect_is(created_records$success, "logical")
# sf_create error ------------------------------------------------------------
new_campaign_members <- tibble(CampaignId = "",
ContactId = "0036A000002C6MbQAK")
create_error_records <- sf_create(new_campaign_members,
object_name = "CampaignMember",
api_type = "SOAP")
expect_is(create_error_records, "tbl_df")
expect_equal(names(create_error_records), c("success", "errors"))
expect_equal(nrow(create_error_records), 1)
expect_is(create_error_records$errors, "list")
expect_equal(length(create_error_records$errors[1][[1]]), 2)
expect_equal(sort(names(create_error_records$errors[1][[1]][[1]])),
c("message", "statusCode"))
new_campaign_members <- tibble(CampaignId = "7013s000000j6n1AAA",
ContactId = "0036A000002C6MbQAK")
create_error_records <- sf_create(new_campaign_members,
object_name = "CampaignMember",
api_type = "SOAP")
expect_is(create_error_records, "tbl_df")
expect_equal(names(create_error_records), c("success", "errors"))
expect_equal(nrow(create_error_records), 1)
expect_is(create_error_records$errors, "list")
expect_equal(length(create_error_records$errors[1][[1]]), 1)
expect_equal(sort(names(create_error_records$errors[1][[1]][[1]])),
c("message", "statusCode"))
# sf_create duplicate --------------------------------------------------------
dupe_n <- 3
prefix <- paste0("KEEP-", as.integer(runif(1,1,100000)), "-")
new_contacts <- tibble(FirstName = rep("KEEP", dupe_n),
LastName = paste0("Test-Contact-Dupe", 1:dupe_n),
Email = rep("keeptestcontact@gmail.com", dupe_n),
Phone = rep("(123) 456-7890", dupe_n),
test_number__c = rep(999.9, dupe_n),
My_External_Id__c = paste0(prefix, 1:dupe_n, "ZZZ"))
dupe_records <- sf_create(new_contacts,
object_name = "Contact",
api_type = "SOAP",
control = list(allowSave = FALSE,
includeRecordDetails = TRUE,
runAsCurrentUser = TRUE))
expect_is(dupe_records, "tbl_df")
expect_equal(names(dupe_records), c("success", "errors"))
expect_equal(nrow(dupe_records), dupe_n)
expect_is(dupe_records$errors, "list")
expect_equal(length(dupe_records$errors[1][[1]]), 1)
expect_equal(sort(names(dupe_records$errors[1][[1]][[1]])),
c("duplicateResult", "message", "statusCode"))
# sf_retrieve ----------------------------------------------------------------
retrieved_records <- sf_retrieve(ids = created_records$id,
fields = c("FirstName", "LastName"),
object_name = object,
api_type = "SOAP")
expect_is(retrieved_records, "tbl_df")
expect_equal(names(retrieved_records), c("sObject", "Id", "FirstName", "LastName"))
expect_equal(nrow(retrieved_records), n)
# FYI: Will not find newly created records because records need to be indexed
# Just search for some default records
my_sosl <- paste("FIND {(336)} in phone fields returning",
"contact(id, firstname, lastname, my_external_id__c),",
"lead(id, firstname, lastname)")
# sf_search ------------------------------------------------------------------
searched_records <- sf_search(my_sosl, is_sosl=TRUE, api_type="SOAP")
expect_is(searched_records, "tbl_df")
expect_equal(names(searched_records), c("sObject", "Id", "FirstName", "LastName"))
expect_equal(nrow(searched_records), 3)
my_soql <- sprintf("SELECT Id,
FirstName,
LastName,
My_External_Id__c
FROM Contact
WHERE Id in ('%s')",
paste0(created_records$id , collapse="','"))
# sf_query -------------------------------------------------------------------
queried_records <- sf_query(my_soql, api_type="SOAP")
expect_is(queried_records, "tbl_df")
expect_equal(names(queried_records), c("Id", "FirstName", "LastName", "My_External_Id__c"))
expect_equal(nrow(queried_records), n)
queried_records <- queried_records %>%
mutate(FirstName = "TestTest")
# sf_update ------------------------------------------------------------------
updated_records <- sf_update(queried_records, object_name=object, api_type="SOAP")
expect_is(updated_records, "tbl_df")
expect_equal(names(updated_records), c("id", "success"))
expect_equal(nrow(updated_records), n)
expect_is(updated_records$success, "logical")
new_record <- tibble(FirstName = "Test",
LastName = paste0("SOAP-Contact-Upsert-", n+1),
My_External_Id__c=paste0(prefix, letters[n+1]))
upserted_contacts <- bind_rows(queried_records %>% select(-Id), new_record)
# sf_upsert ------------------------------------------------------------------
upserted_records <- sf_upsert(input_data=upserted_contacts,
object_name=object,
external_id_fieldname="My_External_Id__c",
api_type = "SOAP")
expect_is(upserted_records, "tbl_df")
expect_equal(names(upserted_records), c("id", "success", "created"))
  expect_equal(nrow(upserted_records), nrow(upserted_contacts))
expect_equal(upserted_records$success, c(TRUE, TRUE, TRUE))
expect_equal(upserted_records$created, c(FALSE, FALSE, TRUE))
# sf_create_attachment -------------------------------------------------------
attachment_details <- tibble(Name = c("salesforcer Logo"),
Body = system.file("extdata", "logo.png", package="salesforcer"),
ContentType = c("image/png"),
ParentId = upserted_records$id[1]) #"0016A0000035mJ5"
attachment_records <- sf_create_attachment(attachment_details, api_type="SOAP")
expect_is(attachment_records, "tbl_df")
expect_equal(names(attachment_records), c("id", "success"))
expect_equal(nrow(attachment_records), 1)
expect_true(attachment_records$success)
# sf_update_attachment -------------------------------------------------------
temp_f <- tempfile(fileext = ".zip")
zipr(temp_f, system.file("extdata", "logo.png", package="salesforcer"))
attachment_details2 <- tibble(Id = attachment_records$id[1],
Name = "logo.png.zip",
Body = temp_f)
attachment_records_update <- sf_update_attachment(attachment_details2, api_type="SOAP")
expect_is(attachment_records_update, "tbl_df")
expect_equal(names(attachment_records_update), c("id", "success"))
expect_true(attachment_records_update$success)
expect_equal(nrow(attachment_records_update), 1)
# sf_delete_attachment -------------------------------------------------------
deleted_attachments <- sf_delete_attachment(attachment_records$id, api_type = "SOAP")
expect_is(deleted_attachments, "tbl_df")
expect_equal(names(deleted_attachments), c("id", "success"))
expect_equal(nrow(deleted_attachments), 1)
expect_true(deleted_attachments$success)
# sf_delete ------------------------------------------------------------------
ids_to_delete <- unique(c(upserted_records$id[!is.na(upserted_records$id)], queried_records$Id))
deleted_records <- sf_delete(ids_to_delete, object_name=object, api_type = "SOAP")
expect_is(deleted_records, "tbl_df")
expect_equal(names(deleted_records), c("id", "success"))
expect_equal(nrow(deleted_records), length(ids_to_delete))
expect_is(created_records$success, "logical")
expect_true(all(deleted_records$success))
})
| /tests/testthat/test-soap.R | permissive | carlganz/salesforcer | R | false | false | 8,772 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/parse_lat.R
\name{parse_lat}
\alias{parse_lat}
\title{Parse latitude values}
\usage{
parse_lat(lat, format = NULL)
}
\arguments{
\item{lat}{(numeric/integer/character) one or more latitude values}
\item{format}{(character) format, default often works}
}
\value{
numeric vector
}
\description{
Parse latitude values
}
\section{Errors}{
Throws warnings on parsing errors, and returns \code{NaN} in each case.
Types of errors:
\itemize{
\item invalid argument: e.g., letters passed instead of numbers,
see \url{https://en.cppreference.com/w/cpp/error/invalid_argument}
\item out of range: numbers of out acceptable range, see
\url{https://en.cppreference.com/w/cpp/error/out_of_range}
\item out of latitude range: not within -90/90 range
}
}
\examples{
parse_lat("")
parse_lat("-91")
parse_lat("95")
parse_lat("asdfaf")
parse_lat("45")
parse_lat("-45")
parse_lat("-45.2323")
# out of range with std::stod?
parse_lat("-45.23232e24")
parse_lat("-45.23232e2")
# numeric input
parse_lat(1:10)
parse_lat(85:94)
# different formats
parse_lat("40.4183318 N")
parse_lat("40.4183318 S")
parse_lat("40 25 5.994") # => 40.41833
parse_lat("40.4183318N")
parse_lat("N40.4183318")
parse_lat("40.4183318S")
parse_lat("S40.4183318")
parse_lat("N 39 21.440") # => 39.35733
parse_lat("S 56 1.389") # => -56.02315
parse_lat("N40°25’5.994") # => 40.41833
parse_lat("40° 25´ 5.994\\" N") # => 40.41833
parse_lat("40:25:6N")
parse_lat("40:25:5.994N")
parse_lat("40d 25’ 6\\" N")
# user specified format
# \%C, \%c, \%H \%h \%D, \%d, \%M, \%m, \%S, and \%s
# parse_lat("40.255994", "")
}
| /man/parse_lat.Rd | permissive | mvickm/parzer | R | false | true | 1,657 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/sts.R
\name{sts_semi_local_linear_trend_state_space_model}
\alias{sts_semi_local_linear_trend_state_space_model}
\title{State space model for a semi-local linear trend.}
\usage{
sts_semi_local_linear_trend_state_space_model(
num_timesteps,
level_scale,
slope_mean,
slope_scale,
autoregressive_coef,
initial_state_prior,
observation_noise_scale = 0,
initial_step = 0,
validate_args = FALSE,
allow_nan_stats = TRUE,
name = NULL
)
}
\arguments{
\item{num_timesteps}{Scalar \code{integer} \code{tensor} number of timesteps to model
with this distribution.}
\item{level_scale}{Scalar (any additional dimensions are treated as batch
dimensions) \code{float} \code{tensor} indicating the standard deviation of the
level transitions.}
\item{slope_mean}{Scalar (any additional dimensions are treated as batch
dimensions) \code{float} \code{tensor} indicating the expected long-term mean of
the latent slope.}
\item{slope_scale}{Scalar (any additional dimensions are treated as batch
dimensions) \code{float} \code{tensor} indicating the standard deviation of the
slope transitions.}
\item{autoregressive_coef}{Scalar (any additional dimensions are treated as
batch dimensions) \code{float} \code{tensor} defining the AR1 process on the latent slope.}
\item{initial_state_prior}{instance of \code{tfd_multivariate_normal}
representing the prior distribution on latent states. Must have
event shape \verb{[2]} (as \code{tfd_linear_gaussian_state_space_model} requires a
rank-1 event shape).}
\item{observation_noise_scale}{Scalar (any additional dimensions are
treated as batch dimensions) \code{float} \code{tensor} indicating the standard
deviation of the observation noise.}
\item{initial_step}{Optional scalar \code{integer} \code{tensor} specifying the starting
timestep. Default value: 0.}
\item{validate_args}{\code{logical}. Whether to validate input
with asserts. If \code{validate_args} is \code{FALSE}, and the inputs are
invalid, correct behavior is not guaranteed. Default value: \code{FALSE}.}
\item{allow_nan_stats}{\code{logical}. If \code{FALSE}, raise an
exception if a statistic (e.g. mean/mode/etc...) is undefined for any
batch member. If \code{TRUE}, batch members with valid parameters leading to
undefined statistics will return NaN for this statistic. Default value: \code{TRUE}.}
\item{name}{\code{string} prefixed to ops created by this class.
Default value: "SemiLocalLinearTrendStateSpaceModel".}
}
\value{
an instance of \code{LinearGaussianStateSpaceModel}.
}
\description{
A state space model (SSM) posits a set of latent (unobserved) variables that
evolve over time with dynamics specified by a probabilistic transition model
\code{p(z[t+1] | z[t])}. At each timestep, we observe a value sampled from an
observation model conditioned on the current state, \code{p(x[t] | z[t])}. The
special case where both the transition and observation models are Gaussians
with mean specified as a linear function of the inputs, is known as a linear
Gaussian state space model and supports tractable exact probabilistic
calculations; see \code{tfd_linear_gaussian_state_space_model} for details.
}
\details{
The semi-local linear trend model is a special case of a linear Gaussian
SSM, in which the latent state posits a \code{level} and \code{slope}. The \code{level}
evolves via a Gaussian random walk centered at the current \code{slope}, while
the \code{slope} follows a first-order autoregressive (AR1) process with
mean \code{slope_mean}:
\if{html}{\out{<div class="sourceCode">}}\preformatted{level[t] = level[t-1] + slope[t-1] + Normal(0, level_scale)
slope[t] = (slope_mean + autoregressive_coef * (slope[t-1] - slope_mean) +
Normal(0., slope_scale))
}\if{html}{\out{</div>}}
The latent state is the two-dimensional tuple \verb{[level, slope]}. The
\code{level} is observed at each timestep.
The parameters \code{level_scale}, \code{slope_mean}, \code{slope_scale},
\code{autoregressive_coef}, and \code{observation_noise_scale} are each (a batch of)
scalars. The batch shape of this \code{Distribution} is the broadcast batch shape
of these parameters and of the \code{initial_state_prior}.
Mathematical Details
The semi-local linear trend model implements a
\code{tfp.distributions.LinearGaussianStateSpaceModel} with \code{latent_size = 2}
and \code{observation_size = 1}, following the transition model:
\if{html}{\out{<div class="sourceCode">}}\preformatted{transition_matrix = [[1., 1.]
[0., autoregressive_coef]]
transition_noise ~ N(loc=slope_mean - autoregressive_coef * slope_mean,
scale=diag([level_scale, slope_scale]))
}\if{html}{\out{</div>}}
which implements the evolution of \verb{[level, slope]} described above, and
the observation model:
\if{html}{\out{<div class="sourceCode">}}\preformatted{observation_matrix = [[1., 0.]]
observation_noise ~ N(loc=0, scale=observation_noise_scale)
}\if{html}{\out{</div>}}
which picks out the first latent component, i.e., the \code{level}, as the
observation at each timestep.
}
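\examples{
\dontrun{
# Hypothetical usage sketch (not supplied by the package authors); the
# parameter values and the diagonal-normal initial state prior are
# illustrative assumptions only.
model <- sts_semi_local_linear_trend_state_space_model(
  num_timesteps = 50,
  level_scale = 0.5,
  slope_mean = 0.2,
  slope_scale = 0.1,
  autoregressive_coef = 0.9,
  initial_state_prior = tfd_multivariate_normal_diag(scale_diag = c(1, 1))
)
y <- tfd_sample(model)
tfd_log_prob(model, y)
}
}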
\seealso{
Other sts:
\code{\link{sts_additive_state_space_model}()},
\code{\link{sts_autoregressive_state_space_model}()},
\code{\link{sts_autoregressive}()},
\code{\link{sts_constrained_seasonal_state_space_model}()},
\code{\link{sts_dynamic_linear_regression_state_space_model}()},
\code{\link{sts_dynamic_linear_regression}()},
\code{\link{sts_linear_regression}()},
\code{\link{sts_local_level_state_space_model}()},
\code{\link{sts_local_level}()},
\code{\link{sts_local_linear_trend_state_space_model}()},
\code{\link{sts_local_linear_trend}()},
\code{\link{sts_seasonal_state_space_model}()},
\code{\link{sts_seasonal}()},
\code{\link{sts_semi_local_linear_trend}()},
\code{\link{sts_smooth_seasonal_state_space_model}()},
\code{\link{sts_smooth_seasonal}()},
\code{\link{sts_sparse_linear_regression}()},
\code{\link{sts_sum}()}
}
\concept{sts}
| /man/sts_semi_local_linear_trend_state_space_model.Rd | no_license | cran/tfprobability | R | false | true | 5,973 | rd |
### Render various outputs for Ensemble App
###############################################################################
### Reactive functions that return tables are in server_render_tables.R
###############################################################################
##### Load Models tab #####
###########################################################
### Load saved environment output
output$load_envir_text <- renderText({
load_envir()
})
###########################################################
# Created spdf messages
### Created spdf message for csv
output$create_spdf_csv_text <- renderText({
req(read_model_csv())
create_spdf_csv()
})
### Created spdf message for gis raster
output$create_spdf_gis_raster_text <- renderText({
req(read_model_gis_raster())
create_spdf_gis_raster()
})
### Created spdf message for gis shp
output$create_spdf_gis_shp_text <- renderText({
req(read_model_gis_shp())
create_spdf_gis_shp()
})
### Created spdf message for gis gdb
output$create_spdf_gis_gdb_text <- renderText({
req(read_model_gis_gdb())
create_spdf_gis_gdb()
})
###########################################################
# Tables
### Table of loaded original model preds
output$models_loaded_table <- DT::renderDataTable({
table_orig()[, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'), selection = "multiple")
### Table of stats of loaded original model preds
output$models_loaded_table_stats <- DT::renderDataTable({
table_orig_stats()
},
options = list(dom = 't'), selection = "none")
###########################################################
### Plot/preview of individual model
output$model_pix_preview_plot <- renderPlot({
grid.arrange(model_pix_preview_event())
})
###############################################################################
##### Overlay tab #####
###########################################################
# Tables
### Table of loaded model predictions
output$overlay_loaded_table <- DT::renderDataTable({
table_orig()[, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'), selection = "single")
### Table of stats of loaded model predictions
output$overlay_loaded_stats_table <- DT::renderDataTable({
table_orig_stats()
},
options = list(dom = 't'), selection = "none")
###########################################################
# Polygon error outputs and loaded messages
### Boundary (study area) polygon error outputs
output$overlay_bound_csv_text <- renderText(overlay_bound_csv())
output$overlay_bound_gis_shp_text <- renderText(overlay_bound_gis_shp())
output$overlay_bound_gis_gdb_text <- renderText(overlay_bound_gis_gdb())
### Boundary polygon loaded messages
output$overlay_bound_csv_message <- renderText({
if (!is.null(vals$overlay.bound)) "A study area polygon is loaded"
})
output$overlay_bound_gis_shp_message <- renderText({
if (!is.null(vals$overlay.bound)) "A study area polygon is loaded"
})
output$overlay_bound_gis_gdb_message <- renderText({
if (!is.null(vals$overlay.bound)) "A study area polygon is loaded"
})
### Land polygon error outputs
output$overlay_land_csv_text <- renderText(overlay_land_csv())
output$overlay_land_gis_shp_text <- renderText(overlay_land_gis_shp())
output$overlay_land_gis_gdb_text <- renderText(overlay_land_gis_gdb())
### Land polygon loaded messages
output$overlay_land_provided_message <- renderText({
if (!is.null(vals$overlay.land)) "A land polygon is loaded"
})
output$overlay_land_csv_message <- renderText({
if (!is.null(vals$overlay.land)) "A land polygon is loaded"
})
output$overlay_land_gis_shp_message <- renderText({
if (!is.null(vals$overlay.land)) "A land polygon is loaded"
})
output$overlay_land_gis_gdb_message <- renderText({
if (!is.null(vals$overlay.land)) "A land polygon is loaded"
})
###########################################################
# Overlaying process outputs
output$overlay_overlaid_models_message <- renderText({
if (length(vals$overlaid.model) > 0) "Overlaid models are created"
})
#######################################
### Samegrid overlay
output$overlay_samegrid_warning_text <- renderUI({
HTML(overlay_samegrid_warning())
})
output$overlay_samegrid_all_text <- renderText({
overlay_samegrid_all()
})
#######################################
### Standard overlay
output$overlay_overlay_all_text <- renderText({
overlay_all()
})
###########################################################
# Previews
### Preview of base grid
# 'suspendWhenHidden = FALSE' in server_hide+show.R
output$overlay_preview_base <- renderPlot({
plot_overlay_preview_base()
})
### Preview of overlaid model predictions
# 'suspendWhenHidden = FALSE' in server_hide+show.R
output$overlay_preview_overlaid <- renderPlot({
grid.arrange(plot_overlay_preview_overlaid())
})
###############################################################################
##### Create Ensembles tab #####
###########################################################
# Tables
### Display table of overlaid predictions and info
output$create_ens_table <- renderTable({
table_overlaid()[, -3] #'[, -3]' is to remove Error column
},
rownames = TRUE)
### Datatable of overlaid predictions and info
output$create_ens_datatable <- DT::renderDataTable({
table_overlaid()[, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'))
###########################################################
# Weights outputs
### Table of metric values to be used as weights
output$create_ens_weights_metric_table_out <- renderTable({
create_ens_weights_metric_table()
},
rownames = FALSE, digits = 3)
### Table of if overlaid models have spatial pixel weights
output$create_ens_weights_pix_table_out <- renderTable({
create_ens_weights_pix_table()
},
rownames = FALSE, align = "lr")
### Table summarizing overlaid models and their polygon weights
output$create_ens_weights_poly_table_out <- renderTable({
create_ens_weights_poly_table()
},
rownames = FALSE)
### Preview plot of weight polygons
output$create_ens_weights_poly_preview_plot <- renderPlot({
create_ens_weights_poly_preview()
})
### Text output for removing loaded weight polygons
output$create_ens_weights_poly_remove_text <- renderText({
create_ens_weights_poly_remove()
})
### Output for adding polygon weight(s) to reactiveValues
output$create_ens_weights_poly_add_text <- renderText({
create_ens_weights_poly_add()
})
###########################################################
### Create ensemble error/completion output
output$ens_create_ensemble_text <- renderUI({
HTML(create_ensemble())
})
###########################################################
# Created ensemble things
### Table of created ensemble predictions
output$ens_datatable_ensembles <- DT::renderDataTable({
table_ensembles()
},
options = list(dom = 't'))
### Remove ensemble error output
output$ens_remove_text <- renderUI({
HTML(ens_remove())
})
### Plot preview of ensemble predictions
# 'suspendWhenHidden = FALSE' in server_hide+show.R
output$ens_pix_preview_plot <- renderPlot({
grid.arrange(ens_pix_preview_event())
})
### Table of abundances of created ensemble predictions
output$ens_abund_table_out <- renderTable({
table_ens_abund()
},
rownames = TRUE, colnames = FALSE, align = "r")
###############################################################################
##### Model Evaluation Metrics tab #####
###########################################################
# Tables
### Table of orig model predictions
output$eval_models_table_orig_out <- DT::renderDataTable({
table_orig()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'))
### Table of overlaid model predictions
output$eval_models_table_over_out <- DT::renderDataTable({
table_overlaid()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'))
### Table of ensemble models
output$eval_models_table_ens_out <- DT::renderDataTable({
table_ensembles()
},
options = list(dom = 't'))
###########################################################
# Presence/absence loaded message, error outputs, and table
# Presence and absence points
output$eval_data_1_message <- renderText({
eval.data.list <- vals$eval.data.list
ifelse(!(is.na(eval.data.list)[1] & is.na(eval.data.list)[2]),
"Validation data loaded",
"")
})
# Text outputs
output$eval_csv_data_1_text <- renderText({ eval_data_1_csv() })
output$eval_data_1_gis_text <- renderText({
pa.spdf.list <- vals$eval.data.gis.file.1
req(length(pa.spdf.list) > 0)
req(pa.spdf.list[[1]], pa.spdf.list[[2]] == input$eval_load_type_1)
eval_data_1_gis()
})
output$eval_metrics_text <- renderText({ eval_metrics() })
# Presence/absence table
output$table_pa_pts_out <- renderTable({
table_data_pts()
},
colnames = FALSE)
###########################################################
### Metrics table
output$table_eval_metrics_out <- renderTable({
table_eval_metrics()
},
rownames = FALSE, digits = 3)
###############################################################################
##### High Quality Maps #####
###########################################################
# Tables
### Table of orig model predictions
output$pretty_table_orig_out <- DT::renderDataTable({
table_orig()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'))
### Table of overlaid model predictions
output$pretty_table_over_out <- DT::renderDataTable({
table_overlaid()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'))
### Table of ensemble model predictions
output$pretty_table_ens_out <- DT::renderDataTable({
table_ensembles()
},
options = list(dom = 't'))
###########################################################
# Outputs
### Color wheel for preview of color palette
output$pretty_plot_color_preview_plot <- renderPlot({
pretty_plot_color_preview()
})
### Pretty plot error output
output$pretty_plot_values_event_text <- renderText({
pretty_plot_values_event()
})
### Pretty plot
# 'suspendWhenHidden = FALSE' in server_hide+show.R
output$pretty_plot_plot <- renderPlot({
print(pretty_plot_generate())
})
###############################################################################
##### Export Model Predictions #####
###########################################################
# Tables
### Table of orig model predictions
output$export_table_orig_out <- DT::renderDataTable({
table_orig()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'), selection = "single")
### Table of overlaid model predictions
output$export_table_over_out <- DT::renderDataTable({
table_overlaid()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'), selection = "single")
### Table of ensemble models
output$export_table_ens_out <- DT::renderDataTable({
table_ensembles()
},
options = list(dom = 't'), selection = "single")
###############################################################################
##### Submit Feedback #####
### Feedback function text
output$feedback_submit_text <- renderText({
feedback_submit()
})
### Warning message if no internet connection is detected
output$feedback_internet_connection_text <- renderText({
feedback_internet_connection()
})
###############################################################################
| /shiny/server_other/server_render.R | no_license | GRSEB9S/eSDM | R | false | false | 11,405 | r | ### Render various outputs for Ensemble App
###############################################################################
### Reactive functions that return tables are in server_render_tables.R
###############################################################################
##### Load Models tab #####
###########################################################
### Load saved environment output
output$load_envir_text <- renderText({
load_envir()
})
###########################################################
# Created spdf messages
### Created spdf message for csv
output$create_spdf_csv_text <- renderText({
req(read_model_csv())
create_spdf_csv()
})
### Created spdf message for gis raster
output$create_spdf_gis_raster_text <- renderText({
req(read_model_gis_raster())
create_spdf_gis_raster()
})
### Created spdf message for gis shp
output$create_spdf_gis_shp_text <- renderText({
req(read_model_gis_shp())
create_spdf_gis_shp()
})
### Created spdf message for gis gdb
output$create_spdf_gis_gdb_text <- renderText({
req(read_model_gis_gdb())
create_spdf_gis_gdb()
})
###########################################################
# Tables
### Table of loaded original model preds
output$models_loaded_table <- DT::renderDataTable({
table_orig()[, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'), selection = "multiple")
### Table of stats of loaded original model preds
output$models_loaded_table_stats <- DT::renderDataTable({
table_orig_stats()
},
options = list(dom = 't'), selection = "none")
###########################################################
### Plot/preview of individual model
output$model_pix_preview_plot <- renderPlot({
grid.arrange(model_pix_preview_event())
})
###############################################################################
##### Overlay tab #####
###########################################################
# Tables
### Table of loaded model predictions
output$overlay_loaded_table <- DT::renderDataTable({
table_orig()[, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'), selection = "single")
### Table of stats of loaded model predictions
output$overlay_loaded_stats_table <- DT::renderDataTable({
table_orig_stats()
},
options = list(dom = 't'), selection = "none")
###########################################################
# Polygon error outputs and loaded messages
### Boundary (study area) polygon error outputs
output$overlay_bound_csv_text <- renderText(overlay_bound_csv())
output$overlay_bound_gis_shp_text <- renderText(overlay_bound_gis_shp())
output$overlay_bound_gis_gdb_text <- renderText(overlay_bound_gis_gdb())
### Boundary polygon loaded messages
output$overlay_bound_csv_message <- renderText({
if (!is.null(vals$overlay.bound)) "A study area polygon is loaded"
})
output$overlay_bound_gis_shp_message <- renderText({
if (!is.null(vals$overlay.bound)) "A study area polygon is loaded"
})
output$overlay_bound_gis_gdb_message <- renderText({
if (!is.null(vals$overlay.bound)) "A study area polygon is loaded"
})
### Land polygon error outputs
output$overlay_land_csv_text <- renderText(overlay_land_csv())
output$overlay_land_gis_shp_text <- renderText(overlay_land_gis_shp())
output$overlay_land_gis_gdb_text <- renderText(overlay_land_gis_gdb())
### Land polygon loaded messages
output$overlay_land_provided_message <- renderText({
if (!is.null(vals$overlay.land)) "A land polygon is loaded"
})
output$overlay_land_csv_message <- renderText({
if (!is.null(vals$overlay.land)) "A land polygon is loaded"
})
output$overlay_land_gis_shp_message <- renderText({
if (!is.null(vals$overlay.land)) "A land polygon is loaded"
})
output$overlay_land_gis_gdb_message <- renderText({
if (!is.null(vals$overlay.land)) "A land polygon is loaded"
})
###########################################################
# Overlaying process outputs
output$overlay_overlaid_models_message <- renderText({
if (length(vals$overlaid.model) > 0) "Overlaid models are created"
})
#######################################
### Samegrid overlay
output$overlay_samegrid_warning_text <- renderUI({
HTML(overlay_samegrid_warning())
})
output$overlay_samegrid_all_text <- renderText({
overlay_samegrid_all()
})
#######################################
### Standard overlay
output$overlay_overlay_all_text <- renderText({
overlay_all()
})
###########################################################
# Previews
### Preview of base grid
# 'suspendWhenHidden = FALSE' in server_hide+show.R
output$overlay_preview_base <- renderPlot({
plot_overlay_preview_base()
})
### Preview of overlaid model predictions
# 'suspendWhenHidden = FALSE' in server_hide+show.R
output$overlay_preview_overlaid <- renderPlot({
grid.arrange(plot_overlay_preview_overlaid())
})
###############################################################################
##### Create Ensembles tab #####
###########################################################
# Tables
### Display table of overlaid predictions and info
output$create_ens_table <- renderTable({
table_overlaid()[, -3] #'[, -3]' is to remove Error column
},
rownames = TRUE)
### Datatable of overlaid predictions and info
output$create_ens_datatable <- DT::renderDataTable({
table_overlaid()[, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'))
###########################################################
# Weights outputs
### Table of metric values to be used as weights
output$create_ens_weights_metric_table_out <- renderTable({
create_ens_weights_metric_table()
},
rownames = FALSE, digits = 3)
### Table of if overlaid models have spatial pixel weights
output$create_ens_weights_pix_table_out <- renderTable({
create_ens_weights_pix_table()
},
rownames = FALSE, align = "lr")
### Table summarizing overlaid models and their polygon weights
output$create_ens_weights_poly_table_out <- renderTable({
create_ens_weights_poly_table()
},
rownames = FALSE)
### Preview plot of weight polygons
output$create_ens_weights_poly_preview_plot <- renderPlot({
create_ens_weights_poly_preview()
})
### Text output for removing loaded weight polygons
output$create_ens_weights_poly_remove_text <- renderText({
create_ens_weights_poly_remove()
})
### Output for adding polygon weight(s) to reactiveValues
output$create_ens_weights_poly_add_text <- renderText({
create_ens_weights_poly_add()
})
###########################################################
### Create ensemble error/completion output
output$ens_create_ensemble_text <- renderUI({
HTML(create_ensemble())
})
###########################################################
# Created ensemble things
### Table of created ensemble predictions
output$ens_datatable_ensembles <- DT::renderDataTable({
table_ensembles()
},
options = list(dom = 't'))
### Remove ensemble error output
output$ens_remove_text <- renderUI({
HTML(ens_remove())
})
### Plot preview of ensemble predictions
# 'suspendWhenHidden = FALSE' in server_hide+show.R
output$ens_pix_preview_plot <- renderPlot({
grid.arrange(ens_pix_preview_event())
})
### Table of abundances of created ensemble predictions
output$ens_abund_table_out <- renderTable({
table_ens_abund()
},
rownames = TRUE, colnames = FALSE, align = "r")
###############################################################################
##### Model Evaluation Metrics tab #####
###########################################################
# Tables
### Table of orig model predictions
output$eval_models_table_orig_out <- DT::renderDataTable({
table_orig()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'))
### Table of overlaid model predictions
output$eval_models_table_over_out <- DT::renderDataTable({
table_overlaid()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'))
### Table of ensemble models
output$eval_models_table_ens_out <- DT::renderDataTable({
table_ensembles()
},
options = list(dom = 't'))
###########################################################
# Presence/absence loaded message, error outputs, and table
# Presence and absence points
output$eval_data_1_message <- renderText({
eval.data.list <- vals$eval.data.list
ifelse(!(is.na(eval.data.list)[1] & is.na(eval.data.list)[2]),
"Validation data loaded",
"")
})
# Text outputs
output$eval_csv_data_1_text <- renderText({ eval_data_1_csv() })
output$eval_data_1_gis_text <- renderText({
pa.spdf.list <- vals$eval.data.gis.file.1
req(length(pa.spdf.list) > 0)
req(pa.spdf.list[[1]], pa.spdf.list[[2]] == input$eval_load_type_1)
eval_data_1_gis()
})
output$eval_metrics_text <- renderText({ eval_metrics() })
# Presence/absence table
output$table_pa_pts_out <- renderTable({
table_data_pts()
},
colnames = FALSE)
###########################################################
### Metrics table
output$table_eval_metrics_out <- renderTable({
table_eval_metrics()
},
rownames = FALSE, digits = 3)
###############################################################################
##### High Quality Maps #####
###########################################################
# Tables
### Table of orig model predictions
output$pretty_table_orig_out <- DT::renderDataTable({
table_orig()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'))
### Table of overlaid model predictions
output$pretty_table_over_out <- DT::renderDataTable({
table_overlaid()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'))
### Table of ensemble model predictions
output$pretty_table_ens_out <- DT::renderDataTable({
table_ensembles()
},
options = list(dom = 't'))
###########################################################
# Outputs
### Color wheel for preview of color palette
output$pretty_plot_color_preview_plot <- renderPlot({
pretty_plot_color_preview()
})
### Pretty plot error output
output$pretty_plot_values_event_text <- renderText({
pretty_plot_values_event()
})
### Pretty plot
# 'suspendWhenHidden = FALSE' in server_hide+show.R
output$pretty_plot_plot <- renderPlot({
print(pretty_plot_generate())
})
###############################################################################
##### Export Model Predictions #####
###########################################################
# Tables
### Table of orig model predictions
output$export_table_orig_out <- DT::renderDataTable({
table_orig()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'), selection = "single")
### Table of overlaid model predictions
output$export_table_over_out <- DT::renderDataTable({
table_overlaid()[, 1:4][, -3] #'[, -3]' is to remove Error column
},
options = list(dom = 't'), selection = "single")
### Table of ensemble models
output$export_table_ens_out <- DT::renderDataTable({
table_ensembles()
},
options = list(dom = 't'), selection = "single")
###############################################################################
##### Submit Feedback #####
### Feedback function text
output$feedback_submit_text <- renderText({
feedback_submit()
})
### Warning message if no internet connection is detected
output$feedback_internet_connection_text <- renderText({
feedback_internet_connection()
})
###############################################################################
|
# tests for eol_search fxn in taxize
context("eol_search")
test_that("eol_search returns the correct value", {
skip_on_cran()
expect_that(eol_search(terms='Ursus americanus luteolus')[[1]], equals(1273844))
})
test_that("eol_search returns the correct class", {
skip_on_cran()
expect_is(eol_search(terms='Salix')[[1]], "integer")
expect_is(eol_search('Homo'), "data.frame")
})
| /tests/testthat/test-eol_search.R | permissive | arendsee/taxize | R | false | false | 388 | r | # tests for eol_search fxn in taxize
context("eol_search")
test_that("eol_search returns the correct value", {
skip_on_cran()
expect_that(eol_search(terms='Ursus americanus luteolus')[[1]], equals(1273844))
})
test_that("eol_search returns the correct class", {
skip_on_cran()
expect_is(eol_search(terms='Salix')[[1]], "integer")
expect_is(eol_search('Homo'), "data.frame")
})
|
#### ####
# optimisation.table <- data.table(
# id = character(),
# mvn.mean = numeric(),
# mvn = numeric(),
# lmvn = numeric(),
# mvn.sd_const = numeric(),
# lmvn.data = numeric(),
# lmvn.data.norm = numeric(),
# normalise = numeric(),
# normalise_by_priming = numeric()
# )
#### ####
read_optimisation <- function(path, id, names = colnames(optimisation.table)){
if(file.exists(paste(path, "optimisation.csv", sep = "/"))){
optimisation <- read.table(file = paste(path, "optimisation.csv", sep = "/"),
sep = ",", header = TRUE)
colnames(optimisation) <- names
optimisation$id <- id
optimisation <- optimisation[,c(ncol(optimisation), 1:(ncol(optimisation)-1))]
return(optimisation)
}
}
plot_results <- function(data = data.exp.grouped,
path.analysis,
path.output = path.analysis,
data.model.best.list = data.model.list,
optimisation.best,
grid.ncol = 2,
grid.nrow = 2,
plot.title = "opt",
filename.data_model = "data_model.csv"){
plot.list <- list()
for( i in 1:length(optimisation.best) ){
id <- optimisation.best[i]
try({
data.model.best.list[[as.character(id)]] <- read.table(
file = paste(path.analysis, id, filename.data_model, sep = "/"),
sep = ",",
header = TRUE)
plot.title.id <- paste(plot.title, "place:", i, ", id:", id, ",", sep = "")
plot.list[[as.character(id)]] <- compare_models(
data = data,
data.model.list = data.model.best.list[c("single", "receptors", as.character(id))],
plot.title = paste(plot.title, "place:", i, ", id:", id, ",", sep = ""),
filename = paste(path.analysis, plot.title, "_", i, ".pdf", sep = ""),
plot.save = FALSE,
plot.return = TRUE
)
})
}
pdf(file = paste(path.output, plot.title, ".pdf", sep = ""),
width = 20,
height = 12,
useDingbats = FALSE)
print(marrangeGrob(unlist(plot.list, recursive = FALSE), ncol = grid.ncol, nrow = grid.nrow))
dev.off()
}
#### ####
#### ####
# path.optimisation <- paste(path.output, "cmaes/normalize/", sep = "/")
# path.optimisation.data <- paste(path.optimisation, "data/", sep = "/")
# stimulation.list.all <- ((data.exp %>% distinct(stimulation))[-1])$stimulation
#
# data.exp.grouped <- read.table(
# file = paste(path.optimisation, "data_exp_grouped.csv", sep = ""),
# sep = ",",
# header = TRUE)
#
# data.exp.grouped <- data.exp.grouped %>% group_by(priming, stimulation, time) %>% mutate(intensity_sd = var(intensity))
#
# data.model.list <- list()
# path.single <- paste(path.optimisation.data, "single", sep = "/")
# optimisation.table <- read_optimisation(path = path.single,
# id = "single",
# names = names(fun.likelihood.list))
# data.model.list[["single"]] <- read.table(
# file = paste(path.single, "data_model.csv", sep = "/"),
# sep = ",",
# header = TRUE)
#
# path.receptors <- paste(path.optimisation.data, "receptors", sep = "/")
# optimisation.table <- rbind(optimisation.table, read_optimisation(path = path.receptors, id = "receptors", names(fun.likelihood.list)))
# data.model.list[["receptors"]] <- read.table(
# file = paste(path.receptors, "data_model.csv", sep = "/"),
# sep = ",",
# header = TRUE)
| /R/computations/parallel_computing_summary_initialise.R | no_license | stork119/OSigA | R | false | false | 3,549 | r | #### ####
# optimisation.table <- data.table(
# id = character(),
# mvn.mean = numeric(),
# mvn = numeric(),
# lmvn = numeric(),
# mvn.sd_const = numeric(),
# lmvn.data = numeric(),
# lmvn.data.norm = numeric(),
# normalise = numeric(),
# normalise_by_priming = numeric()
# )
#### ####
read_optimisation <- function(path, id, names = colnames(optimisation.table)){
if(file.exists(paste(path, "optimisation.csv", sep = "/"))){
optimisation <- read.table(file = paste(path, "optimisation.csv", sep = "/"),
sep = ",", header = TRUE)
colnames(optimisation) <- names
optimisation$id <- id
optimisation <- optimisation[,c(ncol(optimisation), 1:(ncol(optimisation)-1))]
return(optimisation)
}
}
plot_results <- function(data = data.exp.grouped,
path.analysis,
path.output = path.analysis,
data.model.best.list = data.model.list,
optimisation.best,
grid.ncol = 2,
grid.nrow = 2,
plot.title = "opt",
filename.data_model = "data_model.csv"){
plot.list <- list()
for( i in 1:length(optimisation.best) ){
id <- optimisation.best[i]
try({
data.model.best.list[[as.character(id)]] <- read.table(
file = paste(path.analysis, id, filename.data_model, sep = "/"),
sep = ",",
header = TRUE)
plot.title.id <- paste(plot.title, "place:", i, ", id:", id, ",", sep = "")
plot.list[[as.character(id)]] <- compare_models(
data = data,
data.model.list = data.model.best.list[c("single", "receptors", as.character(id))],
plot.title = paste(plot.title, "place:", i, ", id:", id, ",", sep = ""),
filename = paste(path.analysis, plot.title, "_", i, ".pdf", sep = ""),
plot.save = FALSE,
plot.return = TRUE
)
})
}
pdf(file = paste(path.output, plot.title, ".pdf", sep = ""),
width = 20,
height = 12,
useDingbats = FALSE)
print(marrangeGrob(unlist(plot.list, recursive = FALSE), ncol = grid.ncol, nrow = grid.nrow))
dev.off()
}
#### ####
#### ####
# path.optimisation <- paste(path.output, "cmaes/normalize/", sep = "/")
# path.optimisation.data <- paste(path.optimisation, "data/", sep = "/")
# stimulation.list.all <- ((data.exp %>% distinct(stimulation))[-1])$stimulation
#
# data.exp.grouped <- read.table(
# file = paste(path.optimisation, "data_exp_grouped.csv", sep = ""),
# sep = ",",
# header = TRUE)
#
# data.exp.grouped <- data.exp.grouped %>% group_by(priming, stimulation, time) %>% mutate(intensity_sd = var(intensity))
#
# data.model.list <- list()
# path.single <- paste(path.optimisation.data, "single", sep = "/")
# optimisation.table <- read_optimisation(path = path.single,
# id = "single",
# names = names(fun.likelihood.list))
# data.model.list[["single"]] <- read.table(
# file = paste(path.single, "data_model.csv", sep = "/"),
# sep = ",",
# header = TRUE)
#
# path.receptors <- paste(path.optimisation.data, "receptors", sep = "/")
# optimisation.table <- rbind(optimisation.table, read_optimisation(path = path.receptors, id = "receptors", names(fun.likelihood.list)))
# data.model.list[["receptors"]] <- read.table(
# file = paste(path.receptors, "data_model.csv", sep = "/"),
# sep = ",",
# header = TRUE)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Breast.fit.R
\docType{data}
\name{Breast.fit}
\alias{Breast.fit}
\title{Fit from the Yates et al. cohort of Breast cancers, with REVOLVER. This file also contains clustering assignments and jackknife results.}
\format{An S3 object of class "rev_cohort_fit".}
\usage{
data(Breast.fit)
}
\description{
Driver mutations annotated in the original paper.
}
\examples{
data(Breast.fit)
print(Breast.fit)
}
\references{
Yates et al. (2015) Nat Med. 2015 Jul;21(7):751-9.
}
\keyword{datasets}
| /man/Breast.fit.Rd | no_license | jh2663/revolver | R | false | true | 565 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Breast.fit.R
\docType{data}
\name{Breast.fit}
\alias{Breast.fit}
\title{Fit from the Yates et al. cohort of Breast cancers, with REVOLVER. This file also contains clustering assignments and jackknife results.}
\format{An S3 object of class "rev_cohort_fit".}
\usage{
data(Breast.fit)
}
\description{
Driver mutations annotated in the original paper.
}
\examples{
data(Breast.fit)
print(Breast.fit)
}
\references{
Yates et al. (2015) Nat Med. 2015 Jul;21(7):751-9.
}
\keyword{datasets}
|
# defining the function that creates a dataset for a specific hour (mtime)
make.1h.data = function(dateTime){
  # extracting the tsa_hp1 var at station points for the current mtime
stations.mtime = stations.dyn %>%
# filtering for current time
dplyr::filter(mtime == dateTime) %>%
# adding corresponding grid px
dplyr::left_join(
data.frame(stations.sf)[c("sid", "px")],
by = "sid"
) %>%
# adding tsa_hp1
dplyr::left_join(
(grid.dyn %>%
dplyr::filter(mtime == dateTime) %>%
dplyr::select(c("px", "tsa_hp1"))
),
by = "px"
) %>%
# adding static vars
dplyr::left_join(
stations.static,
by = "sid"
) %>%
# removing useless px
dplyr::select(-px) %>%
# adding the lat and lon as projected Lambert 2008 (EPSG = 3812)
dplyr::left_join(
(data.frame(st_coordinates(st_transform(stations.sf, 3812))) %>%
dplyr::bind_cols(stations.sf["sid"]) %>%
dplyr::select(-geometry)
),
by = "sid"
) %>%
# removing mtime
dplyr::select(-mtime) %>%
# removing tsa_hp1 because missing values
dplyr::select(-tsa_hp1) %>%
# renaming X and Y to x and y
dplyr::rename(x = X) %>%
dplyr::rename(y = Y)
}
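# Illustrative usage sketch (editor addition, not part of the original file):
# assumes the global objects stations.dyn, stations.sf, grid.dyn and
# stations.static are already loaded; the timestamp is a made-up example value.
# data.1h <- make.1h.data(dateTime = as.POSIXct("2018-05-02 14:00:00", tz = "UTC"))
# str(data.1h)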
# defining the function that performs the benchmark
make.1h.bmr = function(task, dateTime, learners){
# make bmr reproducible
set.seed(2585)
# ::FIXME:: coordinates ?
# removing ens for now
# task = dropFeatures(task, "ens")
# removing tsa_hp1 for now
# task = dropFeatures(task, "tsa_hp1")
# input missing values
# grid search for param tuning
ctrl = makeTuneControlGrid()
# absolute number feature selection paramset for fusing learner with the filter method
ps = makeParamSet(makeDiscreteParam("fw.abs", values = seq_len(getTaskNFeats(task))))
# inner resampling loop
inner = makeResampleDesc("LOO")
# outer resampling loop
outer = makeResampleDesc("LOO")
# benchmarking
res = benchmark(
measures = list(mae, mse, rmse, timetrain),
tasks = task,
learners = learners,
resamplings = outer,
show.info = FALSE,
models = FALSE
)
bmr = list(
res = res,
task = task
)
}
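# Illustrative usage sketch (editor addition, not part of the original file):
# assumes 'data.1h' comes from make.1h.data() above and contains a numeric
# 'tsa' column to predict; the learner choices are examples only.
# task.1h <- mlr::makeRegrTask(id = "tsa", data = data.1h, target = "tsa")
# learners <- list(mlr::makeLearner("regr.lm"), mlr::makeLearner("regr.rpart"))
# bmr.1h <- make.1h.bmr(task = task.1h, dateTime = as.POSIXct("2018-05-02 14:00:00", tz = "UTC"),
#                       learners = learners)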
leafletize <- function(data.sf, borders, stations){
# to make the map responsive
responsiveness.chr = "\'<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\'"
# Sometimes the interpolation and the stations don't have values in the same domain.
  # This leads to mapping inconsistency (transparent color for stations).
# Thus we create a fullDomain which is a rowbinding of interpolated and original data
fullDomain = c(data.sf$response, stations$tsa)
# defining the color palette for the response
varPal <- leaflet::colorNumeric(
palette = "RdYlBu", #"RdBl",
reverse = TRUE,
domain = fullDomain, #data.sf$response,
na.color = "transparent"
)
# Definition of the function to create whitening
alphaPal <- function(color) {
alpha <- seq(0,1,0.1)
r <- col2rgb(color, alpha = T)
r <- t(apply(r, 1, rep, length(alpha)))
# Apply alpha
r[4,] <- alpha*255
r <- r/255.0
codes <- (rgb(r[1,], r[2,], r[3,], r[4,]))
return(codes)
}
# actually building the map
prediction.map = leaflet::leaflet(data.sf) %>%
# basemaps
addProviderTiles(group = "Stamen",
providers$Stamen.Toner,
options = providerTileOptions(opacity = 0.25)
) %>%
addProviderTiles(group = "Satellite",
providers$Esri.WorldImagery,
options = providerTileOptions(opacity = 1)
) %>%
# centering the map
fitBounds(sf::st_bbox(data.sf)[[1]],
sf::st_bbox(data.sf)[[2]],
sf::st_bbox(data.sf)[[3]],
sf::st_bbox(data.sf)[[4]]
) %>%
# adding layer control button
addLayersControl(baseGroups = c("Stamen", "Satellite"),
overlayGroups = c("prediction", "se", "Stations", "Admin"),
options = layersControlOptions(collapsed = TRUE)
) %>%
# fullscreen button
addFullscreenControl() %>%
# location button
addEasyButton(easyButton(
icon = "fa-crosshairs", title = "Locate Me",
onClick = JS("function(btn, map){ map.locate({setView: true}); }"))) %>%
htmlwidgets::onRender(paste0("
function(el, x) {
$('head').append(",responsiveness.chr,");
}")
) %>%
# predictions
addPolygons(
group = "prediction",
color = "#444444", stroke = FALSE, weight = 1, smoothFactor = 0.8,
opacity = 1.0, fillOpacity = 0.9,
fillColor = ~varPal(response),
highlightOptions = highlightOptions(color = "white", weight = 2,
bringToFront = TRUE),
label = ~htmltools::htmlEscape(as.character(response))
) %>%
addLegend(
position = "bottomright", pal = varPal, values = ~response,
title = "prediction",
group = "prediction",
opacity = 1
)
  # if data.sf contains an 'se' column, add the uncertainty layer
if (!is.null(data.sf$se)) {
uncPal <- leaflet::colorNumeric(
palette = alphaPal("#5af602"),
domain = data.sf$se,
alpha = TRUE
)
prediction.map = prediction.map %>%
addPolygons(
group = "se",
color = "#444444", stroke = FALSE, weight = 1, smoothFactor = 0.5,
opacity = 1.0, fillOpacity = 1,
fillColor = ~uncPal(se),
highlightOptions = highlightOptions(color = "white", weight = 2,
bringToFront = TRUE),
label = ~ paste("prediction:", signif(data.sf$response, 2), "\n","se: ", signif(data.sf$se, 2))
) %>%
addLegend(
group = "se",
position = "bottomleft", pal = uncPal, values = ~se,
title = "se",
opacity = 1
)
}
prediction.map = prediction.map %>%
# admin boundaries
addPolygons(
data = borders,
group = "Admin",
color = "#444444", weight = 1, smoothFactor = 0.5,
opacity = 1, fillOpacity = 0, fillColor = FALSE) %>%
# stations location
addCircleMarkers(
data = stations,
group = "Stations",
color = "black",
weight = 2,
fillColor = ~varPal(tsa),
stroke = TRUE,
fillOpacity = 1,
label = ~htmltools::htmlEscape(as.character(tsa)))
}
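# Illustrative usage sketch (editor addition, not part of the original file):
# object names are assumptions; data.sf needs a 'response' (and optionally 'se')
# column, stations needs a 'tsa' column, and borders is a polygon layer.
# prediction.map <- leafletize(data.sf = prediction.sf, borders = wallonia.sf, stations = stations.sf)
# prediction.map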
 | /functions.R | no_license | pokyah/shinySpatial | R | false | false | 6,195 | r | # defining the function that creates a dataset for a specific hour (mtime)
make.1h.data = function(dateTime){
  # extracting the tsa_hp1 var at station points for the current mtime
stations.mtime = stations.dyn %>%
# filtering for current time
dplyr::filter(mtime == dateTime) %>%
# adding corresponding grid px
dplyr::left_join(
data.frame(stations.sf)[c("sid", "px")],
by = "sid"
) %>%
# adding tsa_hp1
dplyr::left_join(
(grid.dyn %>%
dplyr::filter(mtime == dateTime) %>%
dplyr::select(c("px", "tsa_hp1"))
),
by = "px"
) %>%
# adding static vars
dplyr::left_join(
stations.static,
by = "sid"
) %>%
# removing useless px
dplyr::select(-px) %>%
# adding the lat and lon as projected Lambert 2008 (EPSG = 3812)
dplyr::left_join(
(data.frame(st_coordinates(st_transform(stations.sf, 3812))) %>%
dplyr::bind_cols(stations.sf["sid"]) %>%
dplyr::select(-geometry)
),
by = "sid"
) %>%
# removing mtime
dplyr::select(-mtime) %>%
# removing tsa_hp1 because missing values
dplyr::select(-tsa_hp1) %>%
# renaming X and Y to x and y
dplyr::rename(x = X) %>%
dplyr::rename(y = Y)
}
# defining the function that performs the benchmark
make.1h.bmr = function(task, dateTime, learners){
# make bmr reproducible
set.seed(2585)
# ::FIXME:: coordinates ?
# removing ens for now
# task = dropFeatures(task, "ens")
# removing tsa_hp1 for now
# task = dropFeatures(task, "tsa_hp1")
# input missing values
# grid search for param tuning
ctrl = makeTuneControlGrid()
# absolute number feature selection paramset for fusing learner with the filter method
ps = makeParamSet(makeDiscreteParam("fw.abs", values = seq_len(getTaskNFeats(task))))
# inner resampling loop
inner = makeResampleDesc("LOO")
# outer resampling loop
outer = makeResampleDesc("LOO")
# benchmarking
res = benchmark(
measures = list(mae, mse, rmse, timetrain),
tasks = task,
learners = learners,
resamplings = outer,
show.info = FALSE,
models = FALSE
)
bmr = list(
res = res,
task = task
)
}
leafletize <- function(data.sf, borders, stations){
# to make the map responsive
responsiveness.chr = "\'<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\'"
# Sometimes the interpolation and the stations don't have values in the same domain.
  # This leads to mapping inconsistency (transparent color for stations).
# Thus we create a fullDomain which is a rowbinding of interpolated and original data
fullDomain = c(data.sf$response, stations$tsa)
# defining the color palette for the response
varPal <- leaflet::colorNumeric(
palette = "RdYlBu", #"RdBl",
reverse = TRUE,
domain = fullDomain, #data.sf$response,
na.color = "transparent"
)
# Definition of the function to create whitening
alphaPal <- function(color) {
alpha <- seq(0,1,0.1)
r <- col2rgb(color, alpha = T)
r <- t(apply(r, 1, rep, length(alpha)))
# Apply alpha
r[4,] <- alpha*255
r <- r/255.0
codes <- (rgb(r[1,], r[2,], r[3,], r[4,]))
return(codes)
}
# actually building the map
prediction.map = leaflet::leaflet(data.sf) %>%
# basemaps
addProviderTiles(group = "Stamen",
providers$Stamen.Toner,
options = providerTileOptions(opacity = 0.25)
) %>%
addProviderTiles(group = "Satellite",
providers$Esri.WorldImagery,
options = providerTileOptions(opacity = 1)
) %>%
# centering the map
fitBounds(sf::st_bbox(data.sf)[[1]],
sf::st_bbox(data.sf)[[2]],
sf::st_bbox(data.sf)[[3]],
sf::st_bbox(data.sf)[[4]]
) %>%
# adding layer control button
addLayersControl(baseGroups = c("Stamen", "Satellite"),
overlayGroups = c("prediction", "se", "Stations", "Admin"),
options = layersControlOptions(collapsed = TRUE)
) %>%
# fullscreen button
addFullscreenControl() %>%
# location button
addEasyButton(easyButton(
icon = "fa-crosshairs", title = "Locate Me",
onClick = JS("function(btn, map){ map.locate({setView: true}); }"))) %>%
htmlwidgets::onRender(paste0("
function(el, x) {
$('head').append(",responsiveness.chr,");
}")
) %>%
# predictions
addPolygons(
group = "prediction",
color = "#444444", stroke = FALSE, weight = 1, smoothFactor = 0.8,
opacity = 1.0, fillOpacity = 0.9,
fillColor = ~varPal(response),
highlightOptions = highlightOptions(color = "white", weight = 2,
bringToFront = TRUE),
label = ~htmltools::htmlEscape(as.character(response))
) %>%
addLegend(
position = "bottomright", pal = varPal, values = ~response,
title = "prediction",
group = "prediction",
opacity = 1
)
  # if data.sf contains an 'se' column, add the uncertainty layer
if (!is.null(data.sf$se)) {
uncPal <- leaflet::colorNumeric(
palette = alphaPal("#5af602"),
domain = data.sf$se,
alpha = TRUE
)
prediction.map = prediction.map %>%
addPolygons(
group = "se",
color = "#444444", stroke = FALSE, weight = 1, smoothFactor = 0.5,
opacity = 1.0, fillOpacity = 1,
fillColor = ~uncPal(se),
highlightOptions = highlightOptions(color = "white", weight = 2,
bringToFront = TRUE),
label = ~ paste("prediction:", signif(data.sf$response, 2), "\n","se: ", signif(data.sf$se, 2))
) %>%
addLegend(
group = "se",
position = "bottomleft", pal = uncPal, values = ~se,
title = "se",
opacity = 1
)
}
prediction.map = prediction.map %>%
# admin boundaries
addPolygons(
data = borders,
group = "Admin",
color = "#444444", weight = 1, smoothFactor = 0.5,
opacity = 1, fillOpacity = 0, fillColor = FALSE) %>%
# stations location
addCircleMarkers(
data = stations,
group = "Stations",
color = "black",
weight = 2,
fillColor = ~varPal(tsa),
stroke = TRUE,
fillOpacity = 1,
label = ~htmltools::htmlEscape(as.character(tsa)))
}
|
### R code from vignette source 'vars.Rnw'
###################################################
### code chunk number 1: source
###################################################
###################################################
### Preliminaries
###################################################
library("vars")
data("Canada")
summary(Canada)
plot(Canada, nc = 2, xlab = "")
###################################################
### ADF - Tests
###################################################
adf1 <- summary(ur.df(Canada[, "prod"], type = "trend", lags = 2))
adf2 <-summary(ur.df(diff(Canada[, "prod"]), type = "drift", lags = 1))
adf3 <-summary(ur.df(Canada[, "e"], type = "trend", lags = 2))
adf4 <-summary(ur.df(diff(Canada[, "e"]), type = "drift", lags = 1))
adf5 <-summary(ur.df(Canada[, "U"], type = "drift", lags = 1))
adf6 <-summary(ur.df(diff(Canada[, "U"]), type = "none", lags = 0))
adf7 <-summary(ur.df(Canada[, "rw"], type = "trend", lags = 4))
adf8 <-summary(ur.df(diff(Canada[, "rw"]), type = "drift", lags = 3))
adf9 <-summary(ur.df(diff(Canada[, "rw"]), type = "drift", lags = 0))
###################################################
### Lag-order selection
###################################################
VARselect(Canada, lag.max = 8, type = "both")
###################################################
### VAR(1)
###################################################
Canada <- Canada[, c("prod", "e", "U", "rw")]
p1ct <- VAR(Canada, p = 1, type = "both")
p1ct
summary(p1ct, equation = "e")
plot(p1ct, names = "e")
###################################################
### VAR(2) & VAR(3)
###################################################
p2ct <- VAR(Canada, p = 2, type = "both")
p3ct <- VAR(Canada, p = 3, type = "both")
###################################################
### Diagnostic Tests 1
###################################################
ser11 <- serial.test(p1ct, lags.pt = 16, type = "PT.asymptotic")
ser11$serial
norm1 <-normality.test(p1ct)
norm1$jb.mul
arch1 <- arch.test(p1ct, lags.multi = 5)
arch1$arch.mul
plot(arch1, names = "e")
plot(stability(p1ct), nc = 2)
###################################################
### Diagnostic Tests 2
###################################################
## Serial
ser31 <- serial.test(p3ct, lags.pt = 16, type = "PT.asymptotic")$serial
ser21 <- serial.test(p2ct, lags.pt = 16, type = "PT.asymptotic")$serial
ser11 <- serial.test(p1ct, lags.pt = 16, type = "PT.asymptotic")$serial
ser32 <- serial.test(p3ct, lags.pt = 16, type = "PT.adjusted")$serial
ser22 <- serial.test(p2ct, lags.pt = 16, type = "PT.adjusted")$serial
ser12 <- serial.test(p1ct, lags.pt = 16, type = "PT.adjusted")$serial
## JB
norm3 <- normality.test(p3ct)$jb.mul$JB
norm2 <-normality.test(p2ct)$jb.mul$JB
norm1 <-normality.test(p1ct)$jb.mul$JB
## ARCH
arch3 <- arch.test(p3ct, lags.multi = 5)$arch.mul
arch2 <- arch.test(p2ct, lags.multi = 5)$arch.mul
arch1 <- arch.test(p1ct, lags.multi = 5)$arch.mul
###################################################
### VECM
###################################################
vecm.p3 <- summary(ca.jo(Canada, type = "trace", ecdet = "trend", K = 3, spec = "transitory"))
vecm.p2 <- summary(ca.jo(Canada, type = "trace", ecdet = "trend", K = 2, spec = "transitory"))
###################################################
### VECM r = 1
###################################################
vecm <- ca.jo(Canada[, c("rw", "prod", "e", "U")], type = "trace", ecdet = "trend", K = 3, spec = "transitory")
vecm.r1 <- cajorls(vecm, r = 1)
##
## Calculation of t-values for alpha and beta
##
alpha <- coef(vecm.r1$rlm)[1, ]
names(alpha) <- c("rw", "prod", "e", "U")
alpha
beta <- vecm.r1$beta
beta
resids <- resid(vecm.r1$rlm)
N <- nrow(resids)
sigma <- crossprod(resids) / N
## t-stats for alpha (calculated by hand)
alpha.se <- sqrt(solve(crossprod(cbind(vecm@ZK %*% beta, vecm@Z1)))[1, 1] * diag(sigma))
names(alpha.se) <- c("rw", "prod", "e", "U")
alpha.t <- alpha / alpha.se
alpha.t
## Differ slightly from coef(summary(vecm.r1$rlm))
## due to degrees of freedom adjustment
coef(summary(vecm.r1$rlm))
## t-stats for beta
beta.se <- sqrt(diag(kronecker(solve(crossprod(vecm@RK[, -1])),
solve(t(alpha) %*% solve(sigma) %*% alpha))))
beta.t <- c(NA, beta[-1] / beta.se)
names(beta.t) <- rownames(vecm.r1$beta)
beta.t
###################################################
### SVEC
###################################################
vecm <- ca.jo(Canada[, c("prod", "e", "U", "rw")], type = "trace",
ecdet = "trend", K = 3, spec = "transitory")
SR <- matrix(NA, nrow = 4, ncol = 4)
SR[4, 2] <- 0
LR <- matrix(NA, nrow = 4, ncol = 4)
LR[1, 2:4] <- 0
LR[2:4, 4] <- 0
svec <- SVEC(vecm, LR = LR, SR = SR, r = 1, lrtest = FALSE,
boot = TRUE, runs = 100)
summary(svec)
###################################################
### SR-table
###################################################
SR <- round(svec$SR, 2)
SRt <- round(svec$SR / svec$SRse, 2)
###################################################
### LR-table
###################################################
LR <- round(svec$LR, 2)
LRt <- round(svec$LR / svec$LRse, 2)
###################################################
### Over-identification
###################################################
LR[3, 3] <- 0
svec.oi <- update(svec, LR = LR, lrtest = TRUE, boot = FALSE)
svec.oi$LRover
###################################################
### SVEC - IRF
###################################################
svec.irf <- irf(svec, response = "U", n.ahead = 48, boot = TRUE)
plot(svec.irf)
###################################################
### SVEC - FEVD
###################################################
fevd.U <- fevd(svec, n.ahead = 48)$U
| /code_vars_vignette.R | no_license | ricardomayerb/dde_forecast | R | false | false | 5,766 | r | ### R code from vignette source 'vars.Rnw'
###################################################
### code chunk number 1: source
###################################################
###################################################
### Preliminaries
###################################################
library("vars")
data("Canada")
summary(Canada)
plot(Canada, nc = 2, xlab = "")
###################################################
### ADF - Tests
###################################################
adf1 <- summary(ur.df(Canada[, "prod"], type = "trend", lags = 2))
adf2 <-summary(ur.df(diff(Canada[, "prod"]), type = "drift", lags = 1))
adf3 <-summary(ur.df(Canada[, "e"], type = "trend", lags = 2))
adf4 <-summary(ur.df(diff(Canada[, "e"]), type = "drift", lags = 1))
adf5 <-summary(ur.df(Canada[, "U"], type = "drift", lags = 1))
adf6 <-summary(ur.df(diff(Canada[, "U"]), type = "none", lags = 0))
adf7 <-summary(ur.df(Canada[, "rw"], type = "trend", lags = 4))
adf8 <-summary(ur.df(diff(Canada[, "rw"]), type = "drift", lags = 3))
adf9 <-summary(ur.df(diff(Canada[, "rw"]), type = "drift", lags = 0))
###################################################
### Lag-order selection
###################################################
VARselect(Canada, lag.max = 8, type = "both")
###################################################
### VAR(1)
###################################################
Canada <- Canada[, c("prod", "e", "U", "rw")]
p1ct <- VAR(Canada, p = 1, type = "both")
p1ct
summary(p1ct, equation = "e")
plot(p1ct, names = "e")
###################################################
### VAR(2) & VAR(3)
###################################################
p2ct <- VAR(Canada, p = 2, type = "both")
p3ct <- VAR(Canada, p = 3, type = "both")
###################################################
### Diagnostic Tests 1
###################################################
ser11 <- serial.test(p1ct, lags.pt = 16, type = "PT.asymptotic")
ser11$serial
norm1 <-normality.test(p1ct)
norm1$jb.mul
arch1 <- arch.test(p1ct, lags.multi = 5)
arch1$arch.mul
plot(arch1, names = "e")
plot(stability(p1ct), nc = 2)
###################################################
### Diagnostic Tests 2
###################################################
## Serial
ser31 <- serial.test(p3ct, lags.pt = 16, type = "PT.asymptotic")$serial
ser21 <- serial.test(p2ct, lags.pt = 16, type = "PT.asymptotic")$serial
ser11 <- serial.test(p1ct, lags.pt = 16, type = "PT.asymptotic")$serial
ser32 <- serial.test(p3ct, lags.pt = 16, type = "PT.adjusted")$serial
ser22 <- serial.test(p2ct, lags.pt = 16, type = "PT.adjusted")$serial
ser12 <- serial.test(p1ct, lags.pt = 16, type = "PT.adjusted")$serial
## JB
norm3 <- normality.test(p3ct)$jb.mul$JB
norm2 <-normality.test(p2ct)$jb.mul$JB
norm1 <-normality.test(p1ct)$jb.mul$JB
## ARCH
arch3 <- arch.test(p3ct, lags.multi = 5)$arch.mul
arch2 <- arch.test(p2ct, lags.multi = 5)$arch.mul
arch1 <- arch.test(p1ct, lags.multi = 5)$arch.mul
###################################################
### VECM
###################################################
vecm.p3 <- summary(ca.jo(Canada, type = "trace", ecdet = "trend", K = 3, spec = "transitory"))
vecm.p2 <- summary(ca.jo(Canada, type = "trace", ecdet = "trend", K = 2, spec = "transitory"))
###################################################
### VECM r = 1
###################################################
vecm <- ca.jo(Canada[, c("rw", "prod", "e", "U")], type = "trace", ecdet = "trend", K = 3, spec = "transitory")
vecm.r1 <- cajorls(vecm, r = 1)
##
## Calculation of t-values for alpha and beta
##
alpha <- coef(vecm.r1$rlm)[1, ]
names(alpha) <- c("rw", "prod", "e", "U")
alpha
beta <- vecm.r1$beta
beta
resids <- resid(vecm.r1$rlm)
N <- nrow(resids)
sigma <- crossprod(resids) / N
## t-stats for alpha (calculated by hand)
alpha.se <- sqrt(solve(crossprod(cbind(vecm@ZK %*% beta, vecm@Z1)))[1, 1] * diag(sigma))
names(alpha.se) <- c("rw", "prod", "e", "U")
alpha.t <- alpha / alpha.se
alpha.t
## Differ slightly from coef(summary(vecm.r1$rlm))
## due to degrees of freedom adjustment
coef(summary(vecm.r1$rlm))
## t-stats for beta
beta.se <- sqrt(diag(kronecker(solve(crossprod(vecm@RK[, -1])),
solve(t(alpha) %*% solve(sigma) %*% alpha))))
beta.t <- c(NA, beta[-1] / beta.se)
names(beta.t) <- rownames(vecm.r1$beta)
beta.t
###################################################
### SVEC
###################################################
vecm <- ca.jo(Canada[, c("prod", "e", "U", "rw")], type = "trace",
ecdet = "trend", K = 3, spec = "transitory")
SR <- matrix(NA, nrow = 4, ncol = 4)
SR[4, 2] <- 0
LR <- matrix(NA, nrow = 4, ncol = 4)
LR[1, 2:4] <- 0
LR[2:4, 4] <- 0
svec <- SVEC(vecm, LR = LR, SR = SR, r = 1, lrtest = FALSE,
boot = TRUE, runs = 100)
summary(svec)
###################################################
### SR-table
###################################################
SR <- round(svec$SR, 2)
SRt <- round(svec$SR / svec$SRse, 2)
###################################################
### LR-table
###################################################
LR <- round(svec$LR, 2)
LRt <- round(svec$LR / svec$LRse, 2)
###################################################
### Over-identification
###################################################
LR[3, 3] <- 0
svec.oi <- update(svec, LR = LR, lrtest = TRUE, boot = FALSE)
svec.oi$LRover
###################################################
### SVEC - IRF
###################################################
svec.irf <- irf(svec, response = "U", n.ahead = 48, boot = TRUE)
plot(svec.irf)
###################################################
### SVEC - FEVD
###################################################
fevd.U <- fevd(svec, n.ahead = 48)$U
|
## ----setup, include=FALSE------------------------------------------------
knitr::opts_chunk$set(echo = TRUE)
## ----basic---------------------------------------------------------------
library(tableHTML)
tableHTML(mtcars)
## ----rownames------------------------------------------------------------
tableHTML(mtcars, rownames = FALSE)
## ---- eval = FALSE-------------------------------------------------------
# mytable <- tableHTML(mtcars)
# str(mytable)
# # Classes 'tableHTML', 'html', 'character' atomic [1:1]
# # <table class=table_1901 border=1 style="border-collapse: collapse;">
# # <tr>
# # <th id="tableHTML_header_1"> </th>
# # <th id="tableHTML_header_2">mpg</th>
# # <th id="tableHTML_header_3">cyl</th>
# # truncated...
## ---- eval = FALSE-------------------------------------------------------
# mytable <- tableHTML(mtcars, class = 'myClass')
# str(mytable)
# # Classes 'tableHTML', 'html', 'character' atomic [1:1]
# # <table class=myClass border=1 style="border-collapse: collapse;">
# # <tr>
# # <th id="tableHTML_header_1"> </th>
# # <th id="tableHTML_header_2">mpg</th>
# # <th id="tableHTML_header_3">cyl</th>
# # truncated...
## ----second header-------------------------------------------------------
tableHTML(mtcars, second_headers = list(c(3, 4, 5), c('col1', 'col2', 'col3')))
## ----row groups----------------------------------------------------------
tableHTML(mtcars,
rownames = FALSE,
row_groups = list(c(10, 10, 12), c('Group 1', 'Group 2', 'Group 3')))
## ----widths--------------------------------------------------------------
tableHTML(mtcars,
widths = rep(100, 12),
second_headers = list(c(3, 4, 5), c('col1', 'col2', 'col3')))
## ----border--------------------------------------------------------------
tableHTML(mtcars, border = 0)
## ----caption-------------------------------------------------------------
tableHTML(mtcars, caption = 'This is a table')
## ----footer--------------------------------------------------------------
tableHTML(mtcars, footer = 'This is a footer')
## ----collapse, eval = FALSE----------------------------------------------
# tableHTML(mtcars, collapse = 'separate')
## ----collapse 2----------------------------------------------------------
tableHTML(mtcars, collapse = 'separate_shiny')
## ----spacing 1-----------------------------------------------------------
tableHTML(mtcars, collapse = 'separate_shiny', spacing = '2px')
## ----spacing 2-----------------------------------------------------------
tableHTML(mtcars, collapse = 'separate_shiny', spacing = '5px 2px')
## ----theme-scientific----------------------------------------------------
tableHTML(mtcars, widths = c(140, rep(50, 11)), theme = 'scientific')
## ----theme-rshiny-blue---------------------------------------------------
tableHTML(mtcars, widths = c(140, rep(50, 11)), theme = 'rshiny-blue')
## ----add css row 1-------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_row(css = list(c('background-color', 'border'), c('lightblue', '2px solid lightgray')))
## ----add css row 2-------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_row(css = list(c('background-color', 'border'), c('lightblue', '2px solid lightgray')),
rows = 2:33)
## ----add css row 3-------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_row(css = list('background-color', '#f2f2f2'),
rows = odd(1:33)) %>%
add_css_row(css = list('background-color', '#e6f0ff'),
rows = even(1:33))
## ----add css column 1----------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_column(css = list(c('background-color', 'border'), c('lightblue', '3px solid lightgray')),
columns = c('cyl', 'hp', 'rownames'))
## ----add css column 2----------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_row(css = list('background-color', '#f2f2f2')) %>%
add_css_column(css = list('background-color', 'lightblue'),
columns = c('cyl', 'hp', 'rownames'))
## ---- eval = FALSE-------------------------------------------------------
# mytable <- tableHTML(mtcars)
# print(mytable, viewer = FALSE)
# <table style="border-collapse:collapse;" class=table_2079 border=1>
# <thead>
# <tr>
# <th id="tableHTML_header_1"> </th>
# <th id="tableHTML_header_2">mpg</th>
# <th id="tableHTML_header_3">cyl</th>
# truncated...
## ----add css header 1----------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_header(css = list('background-color', 'lightgray'), headers = c(1, 4))
## ----add css second header 1---------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11)),
second_headers = list(c(3, 4, 5), c('col1', 'col2', 'col3'))) %>%
add_css_second_header(css = list(c('background-color', 'border'),
c('lightgray', '3px solid green')),
second_headers = c(1, 3))
## ----add css caption-----------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11)),
caption = 'This is a table') %>%
add_css_caption(css = list(c('color', 'font-size', 'text-align'), c('blue', '20px', 'left')))
## ----add css footer------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11)),
footer = 'This is a footer') %>%
add_css_footer(css = list(c('color', 'font-size', 'text-align'), c('blue', '20px', 'left')))
## ----add css thead-------------------------------------------------------
mtcars %>%
tableHTML() %>%
add_css_thead(css = list('background-color', 'lightgray'))
## ----add css tbody-------------------------------------------------------
mtcars %>%
tableHTML() %>%
add_css_tbody(css = list('background-color', 'lightgray'))
## ----thead example 1-----------------------------------------------------
mtcars %>%
tableHTML() %>%
add_css_thead(css = list('background-color', 'lightgray')) %>%
add_css_row(css = list('background-color', 'blue'), rows = 1)
## ----thead example 2-----------------------------------------------------
mtcars %>%
tableHTML() %>%
add_css_tbody(css = list('background-color', 'lightgray')) %>%
add_css_row(css = list('background-color', 'blue'), rows = c(4, 6))
## ----add css table-------------------------------------------------------
mtcars %>%
tableHTML() %>%
add_css_table(css = list('background-color', 'lightgray'))
## ----all together--------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11)),
second_headers = list(c(3, 4, 5), c('team1', 'team2', 'team3')),
caption = 'Table of Cars',
footer = 'Figure 1. Stats for famous cars') %>%
add_css_second_header(css = list(c('height', 'background-color', 'font-size'),
c('40px', ' #e6e6e6', '30px')),
second_headers = 1:3) %>%
add_css_header(css = list(c('height', 'background-color'), c('30px', ' #e6e6e6')),
headers = 1:12) %>%
add_css_row(css = list('background-color', '#f2f2f2'),
rows = even(1:34)) %>%
add_css_row(css = list('background-color', '#e6f0ff'),
rows = odd(1:34)) %>%
add_css_column(css = list('text-align', 'center'),
columns = names(mtcars)) %>%
add_css_caption(css = list(c('text-align', 'font-size', 'color'), c('center', '20px', 'black'))) %>%
add_css_footer(css = list(c('text-align', 'color'), c('left', 'black')))
## ----replace html--------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
replace_html(' <td id="tableHTML_column_1">21</td>',
'<td id="mpg" style="background-color:lightyellow">21</td>')
## ----shiny 1, eval = FALSE-----------------------------------------------
# library(shiny)
# shinyApp(
# ui = fluidPage(
# fluidRow(
# #leave some spacing
# br(),
# column(width = 1),
# tableHTML_output("mytable"))
# ),
# server = function(input, output) {
# output$mytable <- render_tableHTML(
# tableHTML(mtcars)
# )}
# )
## ----shiny 2, eval = FALSE-----------------------------------------------
# shinyApp(
# ui = fluidPage(
# fluidRow(
# #leave some spacing
# br(),
# column(width = 1),
# tableHTML_output("mytable"))
# ),
# server = function(input, output) {
# output$mytable <- render_tableHTML(
# mtcars %>%
# tableHTML(widths = c(140, rep(45, 11)),
# second_headers = list(c(3, 4, 5), c('team1', 'team2', 'team3'))) %>%
# add_css_second_header(css = list(c('height', 'background-color', 'font-size', 'text-align'),
# c('40px', ' #e6e6e6', '30px', 'center')),
# second_headers = 1:3) %>%
# add_css_header(css = list(c('height', 'background-color', 'text-align'),
# c('30px', ' #e6e6e6', 'center')),
# headers = 1:12) %>%
# add_css_row(css = list('background-color', '#f2f2f2'),
# rows = even(1:34)) %>%
# add_css_row(css = list('background-color', '#e6f0ff'),
# rows = odd(1:34)) %>%
# add_css_column(css = list('text-align', 'center'),
# columns = names(mtcars))
# )}
# )
## ----shiny css, eval = FALSE---------------------------------------------
# #ui.R
# shinyUI(
# fluidPage(
# fluidRow(
# #leave some spacing
# br(),
# column(width = 1),
# #include css file in shiny
# includeCSS('www/mycss.css'),
# tableHTML_output("mytable"))
# )
# )
#
# #server.R
# shinyServer(
# function(input, output) {
# output$mytable <- render_tableHTML(
# tableHTML(mtcars, second_headers = list(c(3, 4, 5), c('col1', 'col2', 'col3')))
# )}
# )
| /inst/doc/tableHTML.R | no_license | GeorgeKappos/tableHTML | R | false | false | 10,606 | r | ## ----setup, include=FALSE------------------------------------------------
knitr::opts_chunk$set(echo = TRUE)
## ----basic---------------------------------------------------------------
library(tableHTML)
tableHTML(mtcars)
## ----rownames------------------------------------------------------------
tableHTML(mtcars, rownames = FALSE)
## ---- eval = FALSE-------------------------------------------------------
# mytable <- tableHTML(mtcars)
# str(mytable)
# # Classes 'tableHTML', 'html', 'character' atomic [1:1]
# # <table class=table_1901 border=1 style="border-collapse: collapse;">
# # <tr>
# # <th id="tableHTML_header_1"> </th>
# # <th id="tableHTML_header_2">mpg</th>
# # <th id="tableHTML_header_3">cyl</th>
# # truncated...
## ---- eval = FALSE-------------------------------------------------------
# mytable <- tableHTML(mtcars, class = 'myClass')
# str(mytable)
# # Classes 'tableHTML', 'html', 'character' atomic [1:1]
# # <table class=myClass border=1 style="border-collapse: collapse;">
# # <tr>
# # <th id="tableHTML_header_1"> </th>
# # <th id="tableHTML_header_2">mpg</th>
# # <th id="tableHTML_header_3">cyl</th>
# # truncated...
## ----second header-------------------------------------------------------
tableHTML(mtcars, second_headers = list(c(3, 4, 5), c('col1', 'col2', 'col3')))
## ----row groups----------------------------------------------------------
tableHTML(mtcars,
rownames = FALSE,
row_groups = list(c(10, 10, 12), c('Group 1', 'Group 2', 'Group 3')))
## ----widths--------------------------------------------------------------
tableHTML(mtcars,
widths = rep(100, 12),
second_headers = list(c(3, 4, 5), c('col1', 'col2', 'col3')))
## ----border--------------------------------------------------------------
tableHTML(mtcars, border = 0)
## ----caption-------------------------------------------------------------
tableHTML(mtcars, caption = 'This is a table')
## ----footer--------------------------------------------------------------
tableHTML(mtcars, footer = 'This is a footer')
## ----collapse, eval = FALSE----------------------------------------------
# tableHTML(mtcars, collapse = 'separate')
## ----collapse 2----------------------------------------------------------
tableHTML(mtcars, collapse = 'separate_shiny')
## ----spacing 1-----------------------------------------------------------
tableHTML(mtcars, collapse = 'separate_shiny', spacing = '2px')
## ----spacing 2-----------------------------------------------------------
tableHTML(mtcars, collapse = 'separate_shiny', spacing = '5px 2px')
## ----theme-scientific----------------------------------------------------
tableHTML(mtcars, widths = c(140, rep(50, 11)), theme = 'scientific')
## ----theme-rshiny-blue---------------------------------------------------
tableHTML(mtcars, widths = c(140, rep(50, 11)), theme = 'rshiny-blue')
## ----add css row 1-------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_row(css = list(c('background-color', 'border'), c('lightblue', '2px solid lightgray')))
## ----add css row 2-------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_row(css = list(c('background-color', 'border'), c('lightblue', '2px solid lightgray')),
rows = 2:33)
## ----add css row 3-------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_row(css = list('background-color', '#f2f2f2'),
rows = odd(1:33)) %>%
add_css_row(css = list('background-color', '#e6f0ff'),
rows = even(1:33))
## ----add css column 1----------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_column(css = list(c('background-color', 'border'), c('lightblue', '3px solid lightgray')),
columns = c('cyl', 'hp', 'rownames'))
## ----add css column 2----------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_row(css = list('background-color', '#f2f2f2')) %>%
add_css_column(css = list('background-color', 'lightblue'),
columns = c('cyl', 'hp', 'rownames'))
## ---- eval = FALSE-------------------------------------------------------
# mytable <- tableHTML(mtcars)
# print(mytable, viewer = FALSE)
# <table style="border-collapse:collapse;" class=table_2079 border=1>
# <thead>
# <tr>
# <th id="tableHTML_header_1"> </th>
# <th id="tableHTML_header_2">mpg</th>
# <th id="tableHTML_header_3">cyl</th>
# truncated...
## ----add css header 1----------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
add_css_header(css = list('background-color', 'lightgray'), headers = c(1, 4))
## ----add css second header 1---------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11)),
second_headers = list(c(3, 4, 5), c('col1', 'col2', 'col3'))) %>%
add_css_second_header(css = list(c('background-color', 'border'),
c('lightgray', '3px solid green')),
second_headers = c(1, 3))
## ----add css caption-----------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11)),
caption = 'This is a table') %>%
add_css_caption(css = list(c('color', 'font-size', 'text-align'), c('blue', '20px', 'left')))
## ----add css footer------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11)),
footer = 'This is a footer') %>%
add_css_footer(css = list(c('color', 'font-size', 'text-align'), c('blue', '20px', 'left')))
## ----add css thead-------------------------------------------------------
mtcars %>%
tableHTML() %>%
add_css_thead(css = list('background-color', 'lightgray'))
## ----add css tbody-------------------------------------------------------
mtcars %>%
tableHTML() %>%
add_css_tbody(css = list('background-color', 'lightgray'))
## ----thead example 1-----------------------------------------------------
mtcars %>%
tableHTML() %>%
add_css_thead(css = list('background-color', 'lightgray')) %>%
add_css_row(css = list('background-color', 'blue'), rows = 1)
## ----thead example 2-----------------------------------------------------
mtcars %>%
tableHTML() %>%
add_css_tbody(css = list('background-color', 'lightgray')) %>%
add_css_row(css = list('background-color', 'blue'), rows = c(4, 6))
## ----add css table-------------------------------------------------------
mtcars %>%
tableHTML() %>%
add_css_table(css = list('background-color', 'lightgray'))
## ----all together--------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11)),
second_headers = list(c(3, 4, 5), c('team1', 'team2', 'team3')),
caption = 'Table of Cars',
footer = 'Figure 1. Stats for famous cars') %>%
add_css_second_header(css = list(c('height', 'background-color', 'font-size'),
c('40px', ' #e6e6e6', '30px')),
second_headers = 1:3) %>%
add_css_header(css = list(c('height', 'background-color'), c('30px', ' #e6e6e6')),
headers = 1:12) %>%
add_css_row(css = list('background-color', '#f2f2f2'),
rows = even(1:34)) %>%
add_css_row(css = list('background-color', '#e6f0ff'),
rows = odd(1:34)) %>%
add_css_column(css = list('text-align', 'center'),
columns = names(mtcars)) %>%
add_css_caption(css = list(c('text-align', 'font-size', 'color'), c('center', '20px', 'black'))) %>%
add_css_footer(css = list(c('text-align', 'color'), c('left', 'black')))
## ----replace html--------------------------------------------------------
mtcars %>%
tableHTML(widths = c(140, rep(45, 11))) %>%
replace_html(' <td id="tableHTML_column_1">21</td>',
'<td id="mpg" style="background-color:lightyellow">21</td>')
## ----shiny 1, eval = FALSE-----------------------------------------------
# library(shiny)
# shinyApp(
# ui = fluidPage(
# fluidRow(
# #leave some spacing
# br(),
# column(width = 1),
# tableHTML_output("mytable"))
# ),
# server = function(input, output) {
# output$mytable <- render_tableHTML(
# tableHTML(mtcars)
# )}
# )
## ----shiny 2, eval = FALSE-----------------------------------------------
# shinyApp(
# ui = fluidPage(
# fluidRow(
# #leave some spacing
# br(),
# column(width = 1),
# tableHTML_output("mytable"))
# ),
# server = function(input, output) {
# output$mytable <- render_tableHTML(
# mtcars %>%
# tableHTML(widths = c(140, rep(45, 11)),
# second_headers = list(c(3, 4, 5), c('team1', 'team2', 'team3'))) %>%
# add_css_second_header(css = list(c('height', 'background-color', 'font-size', 'text-align'),
# c('40px', ' #e6e6e6', '30px', 'center')),
# second_headers = 1:3) %>%
# add_css_header(css = list(c('height', 'background-color', 'text-align'),
# c('30px', ' #e6e6e6', 'center')),
# headers = 1:12) %>%
# add_css_row(css = list('background-color', '#f2f2f2'),
# rows = even(1:34)) %>%
# add_css_row(css = list('background-color', '#e6f0ff'),
# rows = odd(1:34)) %>%
# add_css_column(css = list('text-align', 'center'),
# columns = names(mtcars))
# )}
# )
## ----shiny css, eval = FALSE---------------------------------------------
# #ui.R
# shinyUI(
# fluidPage(
# fluidRow(
# #leave some spacing
# br(),
# column(width = 1),
# #include css file in shiny
# includeCSS('www/mycss.css'),
# tableHTML_output("mytable"))
# )
# )
#
# #server.R
# shinyServer(
# function(input, output) {
# output$mytable <- render_tableHTML(
# tableHTML(mtcars, second_headers = list(c(3, 4, 5), c('col1', 'col2', 'col3')))
# )}
# )
|
source('functions.R')
dirw = glue('{dird}/10_qc')
#{{{ read in & format
yid = 'rn20j'
res = rnaseq_cpm(yid)
#
th = res$th %>% mutate(gt=str_replace(Genotype,'A632d','A632')) %>%
mutate(tis = factor(Tissue, levels=tiss)) %>%
mutate(gt = factor(gt, levels=gts3)) %>%
arrange(tis, gt, SampleID) %>%
mutate(cond=glue("{tis}.{gt}")) %>%
mutate(cond = as_factor(cond)) %>%
mutate(SampleID = as_factor(SampleID)) %>%
select(SampleID,tis,gt,cond)
tm = res$tm %>% filter(SampleID %in% th$SampleID) %>%
mutate(SampleID = factor(SampleID, levels=levels(th$SampleID))) %>%
mutate(value=asinh(CPM))
fo = glue("{dirw}/01.rds")
r = list(th = th, tm = tm)
saveRDS(r, fo)
#}}}
| /ase_ys/src/st.10.qc.R | permissive | orionzhou/misc | R | false | false | 699 | r | source('functions.R')
dirw = glue('{dird}/10_qc')
#{{{ read in & format
yid = 'rn20j'
res = rnaseq_cpm(yid)
#
th = res$th %>% mutate(gt=str_replace(Genotype,'A632d','A632')) %>%
mutate(tis = factor(Tissue, levels=tiss)) %>%
mutate(gt = factor(gt, levels=gts3)) %>%
arrange(tis, gt, SampleID) %>%
mutate(cond=glue("{tis}.{gt}")) %>%
mutate(cond = as_factor(cond)) %>%
mutate(SampleID = as_factor(SampleID)) %>%
select(SampleID,tis,gt,cond)
tm = res$tm %>% filter(SampleID %in% th$SampleID) %>%
mutate(SampleID = factor(SampleID, levels=levels(th$SampleID))) %>%
mutate(value=asinh(CPM))
fo = glue("{dirw}/01.rds")
r = list(th = th, tm = tm)
saveRDS(r, fo)
#}}}
|
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# functions to support the payments reductions shiny app
# which don't need to be reactive
#
# written by clare betts march 2020
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# function to add a spinner when app parts are loading
spinner<-function(output){
shinycssloaders::withSpinner(output,
color="#808000",
size = 3)
}
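# illustrative usage sketch (editor addition, not part of the original file):
# wrap any shiny output placeholder in the UI; the output id is an example only
# spinner(shiny::plotOutput("reductions_plot"))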
# function to get colours right on graphs
colours.helper <- function(x){
len <- length(unique(x))
if(len == 1){cols <- c("#416146")}
if(len == 2){cols <- c("#416146", "#7AA680")}
if(len == 3){cols <- c("#3D5B41", "#BDD3C0", "#7AA680")}
if(len == 4){cols <- c("#39553D", "#D6E4D8", "#5F8D66", "#9BBBA0")}
if(len == 5){cols <- c("#354F39", "#ACC8B0", "#557F5B", "#DCE8DD", "#7AA680")}
if(len == 6){cols <- c("#354F39", "#B7CFBA", "#66986D", "#DCE8DD", "#4D7352", "#8CB291")}
if(len == 7){cols <- c("#354F39", "#9BBBA0", "#496D4E", "#BDD3C0", "#7AA680", "#DCE8DD", "#5F8D66")}
if(len == 8){cols <- c("#354F39", "#9BBBA0", "#496D4E", "#BDD3C0", "#7AA680", "#DCE8DD", "#5F8D66", "#A3A3A3")}
if(len == 9){cols <- c("#354F39", "#9BBBA0", "#496D4E", "#BDD3C0", "#7AA680", "#DCE8DD", "#5F8D66", "#A3A3A3", "#C4C4C4")}
if(len == 10){cols <- c("#354F39", "#9BBBA0", "#496D4E", "#BDD3C0", "#7AA680", "#DCE8DD", "#5F8D66", "#A3A3A3", "#C4C4C4", "#A3A3A3")}
return(cols)
}
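# Hedged usage sketch (not part of the original app): one hex colour is returned per
# distinct value, e.g. a three-level grouping gives three greens. The vector below is
# made up for illustration; the helper assumes at most 10 distinct values.
# colours.helper(c("arable", "upland", "lowland"))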
# Function to calculate 3 year averages
av_3year <- function(dat, item, years = 2016:2018){
years %>%
substr(start = 3, stop = 4) %>%
paste0("X.", ., item) %>%
subset(dat, select = .) %>%
rowMeans()
}
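# Hedged usage sketch (not from the original app): av_3year() expects `dat` to carry
# year-suffixed columns named like "X.16<item>", "X.17<item>", "X.18<item>"; the tiny
# data frame below is hypothetical.
if (FALSE) {
  example_dat <- data.frame(X.16bps = c(10, 20), X.17bps = c(12, 22), X.18bps = c(14, 24))
  av_3year(example_dat, item = "bps")  # row-wise mean of the 2016-2018 columns
}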
# function to check for small sample sizes
small_sample_checker <- function(x, nobs = "nobs"){
low_sample_rows <- rownames(x)[x[,nobs] < 5]
nums <- unlist(lapply(x, is.numeric))
out <- x
out[low_sample_rows, nums] <- 0
return(out)
} | /R/Payments_reductions_support_functions.R | no_license | clarebetts-code/payments_shiny_app | R | false | false | 1,920 | r | #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# functions to support the payments reductions shiny app
# which don't need to be reactive
#
# written by clare betts march 2020
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# function to add a spinner when app parts are loading
spinner<-function(output){
shinycssloaders::withSpinner(output,
color="#808000",
size = 3)
}
# function to get colours right on graphs
colours.helper <- function(x){
len <- length(unique(x))
if(len == 1){cols <- c("#416146")}
if(len == 2){cols <- c("#416146", "#7AA680")}
if(len == 3){cols <- c("#3D5B41", "#BDD3C0", "#7AA680")}
if(len == 4){cols <- c("#39553D", "#D6E4D8", "#5F8D66", "#9BBBA0")}
if(len == 5){cols <- c("#354F39", "#ACC8B0", "#557F5B", "#DCE8DD", "#7AA680")}
if(len == 6){cols <- c("#354F39", "#B7CFBA", "#66986D", "#DCE8DD", "#4D7352", "#8CB291")}
if(len == 7){cols <- c("#354F39", "#9BBBA0", "#496D4E", "#BDD3C0", "#7AA680", "#DCE8DD", "#5F8D66")}
if(len == 8){cols <- c("#354F39", "#9BBBA0", "#496D4E", "#BDD3C0", "#7AA680", "#DCE8DD", "#5F8D66", "#A3A3A3")}
if(len == 9){cols <- c("#354F39", "#9BBBA0", "#496D4E", "#BDD3C0", "#7AA680", "#DCE8DD", "#5F8D66", "#A3A3A3", "#C4C4C4")}
if(len == 10){cols <- c("#354F39", "#9BBBA0", "#496D4E", "#BDD3C0", "#7AA680", "#DCE8DD", "#5F8D66", "#A3A3A3", "#C4C4C4", "#A3A3A3")}
return(cols)
}
# Function to calculate 3 year averages
av_3year <- function(dat, item, years = 2016:2018){
years %>%
substr(start = 3, stop = 4) %>%
paste0("X.", ., item) %>%
subset(dat, select = .) %>%
rowMeans()
}
# function to check for small sample sizes
small_sample_checker <- function(x, nobs = "nobs"){
low_sample_rows <- rownames(x)[x[,nobs] < 5]
nums <- unlist(lapply(x, is.numeric))
out <- x
out[low_sample_rows, nums] <- 0
return(out)
} |
library(SCENIC)
library(AUCell)
library(RcisTarget)
library(Seurat)
covid <- readRDS("/data1/mashuai/data/COVID-19/Seurat_v4/RDS/Lung.combine.CT.rds")
celltypes <- levels(covid)
deg.up <- c()
for (cell in celltypes){
tmp <- read.table(paste0("/data1/mashuai/data/COVID-19/Seurat_v4/EC/DEGs_0.5_CT/up/", cell, "_PvsC.up.list"))
tmp$gene <- rownames(tmp)
tmp$celltype <- cell
deg.up <- rbind(deg.up, tmp)
}
deg.down <- c()
for (cell in celltypes){
tmp <- read.table(paste0("/data1/mashuai/data/COVID-19/Seurat_v4/EC/DEGs_0.5_CT/down/", cell, "_PvsC.down.list"))
tmp$gene <- rownames(tmp)
tmp$celltype <- cell
deg.down <- rbind(deg.down, tmp)
}
used <- subset(covid, stage1 != "Y-Control")
for (cell in celltypes){
tmp <- subset(used, EC.celltype==cell)
exp.mat <- as.matrix(GetAssayData(tmp, slot='data'))
genes <- c(as.character(subset(deg.up, celltype==cell)$gene), as.character(subset(deg.down, celltype==cell)$gene))
exp.mat <- exp.mat[genes,]
dir.create(paste0('/data5/lijiaming/projects/01_single-cell/10.COVID-19/03_result/SCENIC/EC/PvsO/', cell))
setwd(paste0('/data5/lijiaming/projects/01_single-cell/10.COVID-19/03_result/SCENIC/EC/PvsO/', cell))
org="hgnc"
dbDir="/data2/zhengyandong/SCENIC_cisTarget_databases/hg19_cisTarget_databases"
myDatasetTitle="SCENIC analysis of COVID"
data(defaultDbNames)
dbs <- defaultDbNames[[org]]
scenicOptions <- initializeScenic(org=org, dbDir=dbDir, dbs=dbs, datasetTitle=myDatasetTitle, nCores=10)
saveRDS(scenicOptions, file="int/scenicOptions.Rds")
# Gene filter
genesKept <- geneFiltering(exp.mat, scenicOptions=scenicOptions,
minCountsPerGene=3*.01*ncol(exp.mat),
minSamples=ncol(exp.mat)*.01)
exprMat_filtered <- exp.mat[genesKept, ]
# Run Genie3
runCorrelation(exprMat_filtered, scenicOptions)
runGenie3(exprMat_filtered, scenicOptions)
# Run the remaining
scenicOptions@settings$verbose <- TRUE
scenicOptions@settings$nCores <- 10
scenicOptions@settings$seed <- 15
runSCENIC_1_coexNetwork2modules(scenicOptions)
runSCENIC_2_createRegulons(scenicOptions)
runSCENIC_3_scoreCells(scenicOptions, exprMat_filtered)
print(paste0(cell, " finished"))
}
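# Hedged follow-up sketch (not part of the original script): each cell type's run can be
# reloaded later from the per-cell-type working directory created above, e.g.
# opts <- readRDS(file.path("/data5/lijiaming/projects/01_single-cell/10.COVID-19/03_result/SCENIC/EC/PvsO",
#                           celltypes[1], "int/scenicOptions.Rds"))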
| /snRNA-seq/SCENIC.R | permissive | Bon-jour/COVID-19_Lung_Atlas | R | false | false | 2,315 | r | library(SCENIC)
library(AUCell)
library(RcisTarget)
library(Seurat)
covid <- readRDS("/data1/mashuai/data/COVID-19/Seurat_v4/RDS/Lung.combine.CT.rds")
celltypes <- levels(covid)
deg.up <- c()
for (cell in celltypes){
tmp <- read.table(paste0("/data1/mashuai/data/COVID-19/Seurat_v4/EC/DEGs_0.5_CT/up/", cell, "_PvsC.up.list"))
tmp$gene <- rownames(tmp)
tmp$celltype <- cell
deg.up <- rbind(deg.up, tmp)
}
deg.down <- c()
for (cell in celltypes){
tmp <- read.table(paste0("/data1/mashuai/data/COVID-19/Seurat_v4/EC/DEGs_0.5_CT/down/", cell, "_PvsC.down.list"))
tmp$gene <- rownames(tmp)
tmp$celltype <- cell
deg.down <- rbind(deg.down, tmp)
}
used <- subset(covid, stage1 != "Y-Control")
for (cell in celltypes){
tmp <- subset(used, EC.celltype==cell)
exp.mat <- as.matrix(GetAssayData(tmp, slot='data'))
genes <- c(as.character(subset(deg.up, celltype==cell)$gene), as.character(subset(deg.down, celltype==cell)$gene))
exp.mat <- exp.mat[genes,]
dir.create(paste0('/data5/lijiaming/projects/01_single-cell/10.COVID-19/03_result/SCENIC/EC/PvsO/', cell))
setwd(paste0('/data5/lijiaming/projects/01_single-cell/10.COVID-19/03_result/SCENIC/EC/PvsO/', cell))
org="hgnc"
dbDir="/data2/zhengyandong/SCENIC_cisTarget_databases/hg19_cisTarget_databases"
myDatasetTitle="SCENIC analysis of COVID"
data(defaultDbNames)
dbs <- defaultDbNames[[org]]
scenicOptions <- initializeScenic(org=org, dbDir=dbDir, dbs=dbs, datasetTitle=myDatasetTitle, nCores=10)
saveRDS(scenicOptions, file="int/scenicOptions.Rds")
# Gene filter
genesKept <- geneFiltering(exp.mat, scenicOptions=scenicOptions,
minCountsPerGene=3*.01*ncol(exp.mat),
minSamples=ncol(exp.mat)*.01)
exprMat_filtered <- exp.mat[genesKept, ]
# Run Genie3
runCorrelation(exprMat_filtered, scenicOptions)
runGenie3(exprMat_filtered, scenicOptions)
# Run the remaining
scenicOptions@settings$verbose <- TRUE
scenicOptions@settings$nCores <- 10
scenicOptions@settings$seed <- 15
runSCENIC_1_coexNetwork2modules(scenicOptions)
runSCENIC_2_createRegulons(scenicOptions)
runSCENIC_3_scoreCells(scenicOptions, exprMat_filtered)
print(paste0(cell, " finished"))
}
|
#Fun Riddler for the week
#https://fivethirtyeight.com/features/somethings-fishy-in-the-state-of-the-riddler/
#What words share a letter with every single state but one. Find the longest word.
dictionary= read.table("https://norvig.com/ngrams/word.list")$V1
longest = 0
states= tolower(state.name)
state_letters<-strsplit(states, "")
wordList=c()
state_list=c()
for (i in 1:length(dictionary)){
word = dictionary[i]
if (nchar(word)<longest)next
word_letters<-strsplit(word, split="")[[1]]
counts=sapply(state_letters, function(x)length(intersect(word_letters, x)))
state_flag = which(counts==0)
if (length(state_flag)==1){
if(length(word_letters)==longest){
wordList=c(wordList, word)
state_list = c(state_list, states[state_flag])
print(word)
print(states[state_flag])
}else{
wordList=word
state_list=states[state_flag]
longest=length(word_letters)
print(word)
print(states[state_flag])
}
}
}
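# Hedged follow-up (assumes the loop above has finished): list every tied longest word
# next to the one state it shares no letter with. The file name hints at one classic
# answer, "mackerel", which misses only Ohio.
if (length(wordList) > 0) {
  print(data.frame(word = wordList, missed_state = state_list, stringsAsFactors = FALSE))
}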
| /MackeralRiddle.R | no_license | jntrcs/Fun-in-R | R | false | false | 977 | r | #Fun Riddler for the week
#https://fivethirtyeight.com/features/somethings-fishy-in-the-state-of-the-riddler/
#What words share a letter with every single state but one. Find the longest word.
dictionary= read.table("https://norvig.com/ngrams/word.list")$V1
longest = 0
states= tolower(state.name)
state_letters<-strsplit(states, "")
wordList=c()
state_list=c()
for (i in 1:length(dictionary)){
word = dictionary[i]
if (nchar(word)<longest)next
word_letters<-strsplit(word, split="")[[1]]
counts=sapply(state_letters, function(x)length(intersect(word_letters, x)))
state_flag = which(counts==0)
if (length(state_flag)==1){
if(length(word_letters)==longest){
wordList=c(wordList, word)
state_list = c(state_list, states[state_flag])
print(word)
print(states[state_flag])
}else{
wordList=word
state_list=states[state_flag]
longest=length(word_letters)
print(word)
print(states[state_flag])
}
}
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/grid.R
\name{grid.c3}
\alias{grid.c3}
\title{Add grid lines to a c3 chart}
\usage{
\method{grid}{c3}(c3, axis, ...)
}
\arguments{
\item{axis}{character 'x' or 'y'}
\item{show}{boolean}
\item{lines}{dataframe with options:
\itemize{
\item{value}{: numeric, character or date depending on axis}
\item{text}{: character (optional)}
\item{class}{: character css class (optional)}
\item{position}{: character one of 'start', 'middle', 'end' (optional)}
}}
}
\value{
c3
}
\description{
Adds x- or y-axis grid lines, and optional custom reference lines, to a c3 chart.
}
\examples{
\dontrun{
iris \%>\%
c3(x='Sepal_Length', y='Sepal_Width', group = 'Species') \%>\%
c3_scatter() \%>\%
grid('y') \%>\%
grid('x', show=F, lines = data.frame(value=c(5,6),
text= c('Line 1','Line 2')))
}
}
\seealso{
Other c3: \code{\link{RColorBrewer.c3}}, \code{\link{c3}},
\code{\link{legend.c3}}, \code{\link{region.c3}},
\code{\link{subchart.c3}}, \code{\link{tooltip.c3}},
\code{\link{xAxis.c3}}, \code{\link{zoom.c3}}
Other grid: \code{\link{grid}}
}
| /man/grid.c3.Rd | no_license | drninjamommy/c3 | R | false | true | 1,065 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/grid.R
\name{grid.c3}
\alias{grid.c3}
\title{Title}
\usage{
\method{grid}{c3}(c3, axis, ...)
}
\arguments{
\item{axis}{character 'x' or 'y'}
\item{show}{boolean}
\item{lines}{dataframe with options:
\itemize{
\item{value}{: numeric, character or date depending on axis}
\item{text}{: character (optional)}
\item{class}{: character css class (optional)}
\item{position}{: character one of 'start', 'middle', 'end' (optional)}
}}
}
\value{
c3
}
\description{
Title
}
\examples{
\dontrun{
iris \%>\%
c3(x='Sepal_Length', y='Sepal_Width', group = 'Species') \%>\%
c3_scatter() \%>\%
grid('y') \%>\%
grid('x', show=F, lines = data.frame(value=c(5,6),
text= c('Line 1','Line 2')))
}
}
\seealso{
Other c3: \code{\link{RColorBrewer.c3}}, \code{\link{c3}},
\code{\link{legend.c3}}, \code{\link{region.c3}},
\code{\link{subchart.c3}}, \code{\link{tooltip.c3}},
\code{\link{xAxis.c3}}, \code{\link{zoom.c3}}
Other grid: \code{\link{grid}}
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/geom_relief.R, R/geom_shadow.R
\docType{data}
\name{geom_relief}
\alias{geom_relief}
\alias{GeomRelief}
\alias{geom_shadow}
\alias{GeomShadow}
\title{Relief Shading}
\usage{
geom_relief(mapping = NULL, data = NULL, stat = "identity",
position = "identity", ..., sun.angle = 60, raster = TRUE,
interpolate = TRUE, shadow = FALSE, na.rm = FALSE,
show.legend = NA, inherit.aes = TRUE)
geom_shadow(mapping = NULL, data = NULL, stat = "identity",
position = "identity", ..., sun.angle = 60, range = c(0, 1),
skip = 0, raster = TRUE, interpolate = TRUE, na.rm = FALSE,
show.legend = NA, inherit.aes = TRUE)
}
\arguments{
\item{mapping}{Set of aesthetic mappings created by \code{\link[=aes]{aes()}} or
\code{\link[=aes_]{aes_()}}. If specified and \code{inherit.aes = TRUE} (the
default), it is combined with the default mapping at the top level of the
plot. You must supply \code{mapping} if there is no plot mapping.}
\item{data}{The data to be displayed in this layer. There are three
options:
If \code{NULL}, the default, the data is inherited from the plot
data as specified in the call to \code{\link[=ggplot]{ggplot()}}.
A \code{data.frame}, or other object, will override the plot
data. All objects will be fortified to produce a data frame. See
\code{\link[=fortify]{fortify()}} for which variables will be created.
A \code{function} will be called with a single argument,
the plot data. The return value must be a \code{data.frame}, and
will be used as the layer data. A \code{function} can be created
from a \code{formula} (e.g. \code{~ head(.x, 10)}).}
\item{stat}{The statistical transformation to use on the data for this
layer, as a string.}
\item{position}{Position adjustment, either as a string, or the result of
a call to a position adjustment function.}
\item{...}{Other arguments passed on to \code{\link[=layer]{layer()}}. These are
often aesthetics, used to set an aesthetic to a fixed value, like
\code{colour = "red"} or \code{size = 3}. They may also be parameters
to the paired geom/stat.}
\item{sun.angle}{angle from which the sun is shining, in degrees
counterclockwise from 12 o' clock}
\item{raster}{if \code{TRUE} (the default), uses \link[ggplot2:geom_raster]{ggplot2::geom_raster},
if \code{FALSE}, uses \link[ggplot2:geom_tile]{ggplot2::geom_tile}.}
\item{interpolate}{If \code{TRUE} interpolate linearly, if \code{FALSE}
(the default) don't interpolate.}
\item{shadow}{if TRUE, adds also a layer of \code{geom_shadow()}}
\item{na.rm}{If \code{FALSE}, the default, missing values are removed with
a warning. If \code{TRUE}, missing values are silently removed.}
\item{show.legend}{logical. Should this layer be included in the legends?
\code{NA}, the default, includes if any aesthetics are mapped.
\code{FALSE} never includes, and \code{TRUE} always includes.
It can also be a named logical vector to finely select the aesthetics to
display.}
\item{inherit.aes}{If \code{FALSE}, overrides the default aesthetics,
rather than combining with them. This is most useful for helper functions
that define both data and aesthetics and shouldn't inherit behaviour from
the default plot specification, e.g. \code{\link[=borders]{borders()}}.}
\item{range}{transparency range for shadows}
\item{skip}{data points to skip when casting shadows}
}
\description{
\code{geom_relief()} simulates shading caused by relief. Can be useful when
plotting topographic data because relief shading might give a more intuitive
impression of the shape of the terrain than contour lines or mapping height
to color. \code{geom_shadow()} projects shadows.
}
\details{
\code{light} and \code{dark} must be valid colours determining the light and dark shading
(defaults to "white" and "gray20", respectively).
}
\section{Aesthetics}{
\code{geom_relief()} and \code{geom_shadow()} understands the following aesthetics (required aesthetics are in bold)
\itemize{
\item \strong{x}
\item \strong{y}
\item \strong{z}
\item \code{light}
\item \code{dark}
\item \code{sun.angle}
}
}
\examples{
\dontrun{
library(ggplot2)
ggplot(reshape2::melt(volcano), aes(Var1, Var2)) +
geom_relief(aes(z = value))
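# a second, hypothetical sketch (based on the parameters documented above):
# let geom_relief() also cast shadows via its `shadow` argument
ggplot(reshape2::melt(volcano), aes(Var1, Var2)) +
    geom_relief(aes(z = value), shadow = TRUE)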
}
}
\seealso{
Other ggplot2 helpers: \code{\link{DivideTimeseries}},
\code{\link{MakeBreaks}}, \code{\link{WrapCircular}},
\code{\link{geom_arrow}}, \code{\link{geom_contour2}},
\code{\link{geom_contour_fill}},
\code{\link{geom_label_contour}},
\code{\link{geom_streamline}},
\code{\link{guide_colourstrip}},
\code{\link{map_labels}}, \code{\link{reverselog_trans}},
\code{\link{scale_divergent}},
\code{\link{scale_longitude}}, \code{\link{stat_na}},
\code{\link{stat_subset}}
}
\concept{ggplot2 helpers}
\keyword{datasets}
| /man/geom_relief.Rd | no_license | salvatirehbein/metR | R | false | true | 4,753 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/geom_relief.R, R/geom_shadow.R
\docType{data}
\name{geom_relief}
\alias{geom_relief}
\alias{GeomRelief}
\alias{geom_shadow}
\alias{GeomShadow}
\title{Relief Shading}
\usage{
geom_relief(mapping = NULL, data = NULL, stat = "identity",
position = "identity", ..., sun.angle = 60, raster = TRUE,
interpolate = TRUE, shadow = FALSE, na.rm = FALSE,
show.legend = NA, inherit.aes = TRUE)
geom_shadow(mapping = NULL, data = NULL, stat = "identity",
position = "identity", ..., sun.angle = 60, range = c(0, 1),
skip = 0, raster = TRUE, interpolate = TRUE, na.rm = FALSE,
show.legend = NA, inherit.aes = TRUE)
}
\arguments{
\item{mapping}{Set of aesthetic mappings created by \code{\link[=aes]{aes()}} or
\code{\link[=aes_]{aes_()}}. If specified and \code{inherit.aes = TRUE} (the
default), it is combined with the default mapping at the top level of the
plot. You must supply \code{mapping} if there is no plot mapping.}
\item{data}{The data to be displayed in this layer. There are three
options:
If \code{NULL}, the default, the data is inherited from the plot
data as specified in the call to \code{\link[=ggplot]{ggplot()}}.
A \code{data.frame}, or other object, will override the plot
data. All objects will be fortified to produce a data frame. See
\code{\link[=fortify]{fortify()}} for which variables will be created.
A \code{function} will be called with a single argument,
the plot data. The return value must be a \code{data.frame}, and
will be used as the layer data. A \code{function} can be created
from a \code{formula} (e.g. \code{~ head(.x, 10)}).}
\item{stat}{The statistical transformation to use on the data for this
layer, as a string.}
\item{position}{Position adjustment, either as a string, or the result of
a call to a position adjustment function.}
\item{...}{Other arguments passed on to \code{\link[=layer]{layer()}}. These are
often aesthetics, used to set an aesthetic to a fixed value, like
\code{colour = "red"} or \code{size = 3}. They may also be parameters
to the paired geom/stat.}
\item{sun.angle}{angle from which the sun is shining, in degrees
counterclockwise from 12 o' clock}
\item{raster}{if \code{TRUE} (the default), uses \link[ggplot2:geom_raster]{ggplot2::geom_raster},
if \code{FALSE}, uses \link[ggplot2:geom_tile]{ggplot2::geom_tile}.}
\item{interpolate}{If \code{TRUE} interpolate linearly, if \code{FALSE}
(the default) don't interpolate.}
\item{shadow}{if TRUE, adds also a layer of \code{geom_shadow()}}
\item{na.rm}{If \code{FALSE}, the default, missing values are removed with
a warning. If \code{TRUE}, missing values are silently removed.}
\item{show.legend}{logical. Should this layer be included in the legends?
\code{NA}, the default, includes if any aesthetics are mapped.
\code{FALSE} never includes, and \code{TRUE} always includes.
It can also be a named logical vector to finely select the aesthetics to
display.}
\item{inherit.aes}{If \code{FALSE}, overrides the default aesthetics,
rather than combining with them. This is most useful for helper functions
that define both data and aesthetics and shouldn't inherit behaviour from
the default plot specification, e.g. \code{\link[=borders]{borders()}}.}
\item{range}{transparency range for shadows}
\item{skip}{data points to skip when casting shadows}
}
\description{
\code{geom_relief()} simulates shading caused by relief. Can be useful when
plotting topographic data because relief shading might give a more intuitive
impression of the shape of the terrain than contour lines or mapping height
to color. \code{geom_shadow()} projects shadows.
}
\details{
\code{light} and \code{dark} must be valid colours determining the light and dark shading
(defaults to "white" and "gray20", respectively).
}
\section{Aesthetics}{
\code{geom_relief()} and \code{geom_shadow()} understands the following aesthetics (required aesthetics are in bold)
\itemize{
\item \strong{x}
\item \strong{y}
\item \strong{z}
\item \code{light}
\item \code{dark}
\item \code{sun.angle}
}
}
\examples{
\dontrun{
library(ggplot2)
ggplot(reshape2::melt(volcano), aes(Var1, Var2)) +
geom_relief(aes(z = value))
}
}
\seealso{
Other ggplot2 helpers: \code{\link{DivideTimeseries}},
\code{\link{MakeBreaks}}, \code{\link{WrapCircular}},
\code{\link{geom_arrow}}, \code{\link{geom_contour2}},
\code{\link{geom_contour_fill}},
\code{\link{geom_label_contour}},
\code{\link{geom_streamline}},
\code{\link{guide_colourstrip}},
\code{\link{map_labels}}, \code{\link{reverselog_trans}},
\code{\link{scale_divergent}},
\code{\link{scale_longitude}}, \code{\link{stat_na}},
\code{\link{stat_subset}}
}
\concept{ggplot2 helpers}
\keyword{datasets}
|
/HW7/r1.R | no_license | AlyonaKozyr/Python | R | false | false | 3,278 | r | ||
#!/usr/bin/env Rscript
args = commandArgs(trailingOnly=TRUE)
cormethod="spearman"
numfiles=length(args)
filelist=paste(args,collapse=" ")
cat("unifying intervals\n")
cmdString <- paste("bedtools unionbedg -header -names",filelist,"-filler NOSCORE -i",filelist,"| grep -v NOSCORE")
cmdString.call <- pipe(cmdString, open = "r")
result <- read.delim(cmdString.call, header = T, stringsAsFactors = FALSE)
close(cmdString.call)
bgscores <- result[,4:ncol(result)]
eg=expand.grid(1:numfiles,1:numfiles)
cat("calculating pairwise correlations\n")
pb <- txtProgressBar(min = 0, max = numfiles*numfiles, style = 3)
lc=as.data.frame(
matrix(
unlist(lapply(1:nrow(eg),function(x){
val=cor(bgscores[[eg[x,1]]],bgscores[[eg[x,2]]],method=cormethod)
setTxtProgressBar(pb, x)
return(val)
})),
nrow=numfiles)
)
rownames(lc) <- args
colnames(lc) <- rownames(lc)
write.table(lc,"correlationMatrix.tsv",col.names=T,row.names=T,sep="\t",quote=F)
cat(args)
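# Hedged usage note (not part of the original script): meant to be invoked from the shell
# with two or more bedGraph files as arguments, e.g.
#   ./cormat sample1.bedgraph sample2.bedgraph sample3.bedgraph
# which unions the intervals via bedtools and writes the pairwise Spearman correlation
# matrix to correlationMatrix.tsv in the current directory.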
| /scripts/cormat | no_license | dvera/shart | R | false | false | 1,027 | #!/usr/bin/env Rscript
args = commandArgs(trailingOnly=TRUE)
cormethod="spearman"
numfiles=length(args)
filelist=paste(args,collapse=" ")
cat("unifying intervals\n")
cmdString <- paste("bedtools unionbedg -header -names",filelist,"-filler NOSCORE -i",filelist,"| grep -v NOSCORE")
cmdString.call <- pipe(cmdString, open = "r")
result <- read.delim(cmdString.call, header = T, stringsAsFactors = FALSE)
close(cmdString.call)
bgscores <- result[,4:ncol(result)]
eg=expand.grid(1:numfiles,1:numfiles)
cat("calculating pairwise correlations\n")
pb <- txtProgressBar(min = 0, max = numfiles*numfiles, style = 3)
lc=as.data.frame(
matrix(
unlist(lapply(1:nrow(eg),function(x){
val=cor(bgscores[[eg[x,1]]],bgscores[[eg[x,2]]],method=cormethod)
setTxtProgressBar(pb, x)
return(val)
})),
nrow=numfiles)
)
rownames(lc) <- args
colnames(lc) <- rownames(lc)
write.table(lc,"correlationMatrix.tsv",col.names=T,row.names=T,sep="\t",quote=F)
cat(args)
| |
data1 <- read.table("kq0303.csv",header = TRUE, sep =",")
data1
plot(data1$Xc, data1$Y,xlab="Primary industry share",ylab="Internet penetration rate",main="Industry and Internet penetration rate")
fm <- lm(Y ~ Xc, data = data1)
abline(fm)
summary(fm)
| /exercises/kq0303c.R | no_license | Prunus1350/Econometrics | R | false | false | 256 | r | data1 <- read.table("kq0303.csv",header = TRUE, sep =",")
data1
plot(data1$Xc, data1$Y,xlab="Primary industry share",ylab="Internet penetration rate",main="Industry and Internet penetration rate")
fm <- lm(Y ~ Xc, data = data1)
abline(fm)
summary(fm)
|
library(snow);library(doParallel);library(foreach)
library(Rmpi);
setwd("/glade/u/home/sanjib/FamosHydroModel/lowDim")
# Parallelize
nprocs <-mpi.universe.size() - 1
# nprocs <-4
print(nprocs)
mp_type = "MPI"
# mp_type = "PSOCK"
cl <- parallel::makeCluster(spec = nprocs, type=mp_type)
doParallel::registerDoParallel(cl)
load(file="input/design.RData")
ens<-nrow(parMat); rm(parMat)
# ens<-4
# Values for runs
outputMat<-foreach::foreach(jobNum=1:ens , .combine = "cbind") %dopar% {
source("run/rWrapper_Continuous.R")
source("run/mcmc_source_Tr.R")
load(file="input/design.RData")
inputDir<-"/glade/scratch/sanjib/lowDim/input"
outputDir<-"/glade/scratch/sanjib/lowDim/output"
jobPar<-parMat[jobNum,]
modelEval(par = jobPar , j = jobNum , inputDir =inputDir , outputDir = outputDir)
}
save(outputMat,file = "output/modelRuns.RData")
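# Hedged usage note (not part of the original script): because the cluster size comes from
# mpi.universe.size(), the script is normally submitted through an MPI launcher, e.g.
# something like `mpirun Rscript run/B_run_mpi.R` inside the cluster's job script; the
# exact launcher and flags depend on the scheduler, so treat this as an assumption.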
| /lowDim/run/B_run_mpi.R | no_license | benee55/SharmaEtAl22 | R | false | false | 852 | r | library(snow);library(doParallel);library(foreach)
library(Rmpi);
setwd("/glade/u/home/sanjib/FamosHydroModel/lowDim")
# Parallelize
nprocs <-mpi.universe.size() - 1
# nprocs <-4
print(nprocs)
mp_type = "MPI"
# mp_type = "PSOCK"
cl <- parallel::makeCluster(spec = nprocs, type=mp_type)
doParallel::registerDoParallel(cl)
load(file="input/design.RData")
ens<-nrow(parMat); rm(parMat)
# ens<-4
# Values for runs
outputMat<-foreach::foreach(jobNum=1:ens , .combine = "cbind") %dopar% {
source("run/rWrapper_Continuous.R")
source("run/mcmc_source_Tr.R")
load(file="input/design.RData")
inputDir<-"/glade/scratch/sanjib/lowDim/input"
outputDir<-"/glade/scratch/sanjib/lowDim/output"
jobPar<-parMat[jobNum,]
modelEval(par = jobPar , j = jobNum , inputDir =inputDir , outputDir = outputDir)
}
save(outputMat,file = "output/modelRuns.RData")
|
# reroot a non-dated tree based on tip sampling times, by optimizing the root-to-tip (RTT) regression goodness of fit (regression of tip times vs phylodistances-from-root)
# The precise objective optimized can be "correlation", "R2" or "SSR" (sum of squared residuals)
# The input tree's edge lengths should be measured in substitutions per site, and tip sampling times should be measured in forward time (i.e., younger tips have a greater time).
# Adjusted (and improved) from ape::rtt v5.7-1.
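# Hedged usage sketch (not part of the original file; the tree and dates below are hypothetical):
#   tr     <- ape::read.tree("ml.treefile")          # edge lengths in substitutions/site
#   dates  <- tip_dates_numeric                      # one sampling time per tip, forward in time
#   rooted <- root_via_rtt(tr, tip_times = dates, objective = "R2", Nthreads = 2)$tree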
root_via_rtt = function(tree, # tree of class "phylo". The tree may be rooted or unrooted, and the root placement does not matter.
tip_times, # numeric vector of length Ntips, listing the sampling times of all tips.
objective = "R2",
force_positive_rate = FALSE, # (logical) whether to force the implied mutation rate to be >=0, by constraining the root placements.
Nthreads = 1, # (integer) number of threads to use in parallel, where applicable
optim_algorithm = "nlminb", # (character) either "optimize" or "nlminb". What algorithm to use for fitting.
relative_error = 1e-9){
if(objective == "correlation")
objective_function = function(x, y){
return(stats::cor(y, x))
}
else if(objective == "R2")
objective_function = function(x, y){
fit = stats::lm.fit(cbind(rep(1,Ntips),x), y)
if(force_positive_rate && (fit$coefficients[2]<0)) return(-1e50)
R2 = 1 - mean(fit$residuals^2)/stats::var(y)
return(R2)
}
else if(objective == "SSR"){
objective_function = function(x, y){
fit = stats::lm.fit(cbind(rep(1,Ntips),x), y)
if(force_positive_rate && (fit$coefficients[2]<0)) return(-1e50)
return(-sum(fit$residuals^2) / fit$df.residual)
}
}
Ntips = length(tree$tip.label)
tree = ape::unroot(tree)
phylodistances = castor::get_all_pairwise_distances(tree=tree, check_input=FALSE)[, 1:Ntips] # get all pairwise phylodistances between clades & tips
aux_objective_function_at_edge = function(x, parent, child){
if(!is.finite(x)) return(-1e50)
edge_distaces = x * phylodistances[parent, ] + (1 - x) * phylodistances[child,]
return(objective_function(x=tip_times, y=edge_distaces))
}
aux_fit_single_edge = function(e){
if(optim_algorithm == "optimize"){
fit = stats::optimize(f = function(x) aux_objective_function_at_edge(x, tree$edge[e,1], tree$edge[e,2]),
interval = c(0, 1),
maximum = TRUE,
tol = relative_error)
}else if(optim_algorithm == "nlminb"){
fit = stats::nlminb(start = 0.5,
objective = function(x) -aux_objective_function_at_edge(x, tree$edge[e,1], tree$edge[e,2]),
lower = 0,
upper = 1,
control = list(iter.max = 10000, eval.max = 10000, rel.tol = relative_error))
fit$objective = -fit$objective
}
return(fit$objective)
}
if(Nthreads>1){
obj_edge = unlist(parallel::mclapply(1:nrow(tree$edge), FUN=aux_fit_single_edge, mc.cores=Nthreads, mc.preschedule = TRUE, mc.cleanup = TRUE))
}else{
obj_edge = sapply(1:nrow(tree$edge), FUN=aux_fit_single_edge)
}
valid_edges = which(is.finite(obj_edge))
best.edge = valid_edges[which.max(obj_edge[valid_edges])]
best.edge.parent = tree$edge[best.edge, 1]
best.edge.child = tree$edge[best.edge, 2]
best.edge.length = tree$edge.length[best.edge]
opt.fun = function(x) aux_objective_function_at_edge(x, best.edge.parent, best.edge.child)
best.pos = optimize(opt.fun, c(0, 1), maximum = TRUE, tol = relative_error)$maximum
new_root = list(edge = matrix(c(2L, 1L), 1, 2), tip.label = "new_root", edge.length = 1, Nnode = 1L, root.edge = 1)
class(new_root) = "phylo"
tree = ape::bind.tree(tree, new_root, where = best.edge.child, position = best.pos*best.edge.length)
tree = ape::collapse.singles(tree)
tree = ape::root(tree, "new_root")
tree = ape::drop.tip(tree, "new_root")
return(list(tree=tree))
} | /R/root_via_rtt.R | no_license | cran/castor | R | false | false | 3,881 | r | # reroot a non-dated tree based on tip sampling times, by optimizing the root-to-tip (RTT) regression goodness of fit (regression of tip times vs phylodistances-from-root)
# The precise objective optimized can be "correlation", "R2" or "SSR" (sum of squared residuals)
# The input tree's edge lengths should be measured in substitutions per site, and tip sampling times should be measured in forward time (i.e., younger tips have a greater time).
# Adjusted (and improved) from ape::rtt v5.7-1.
root_via_rtt = function(tree, # tree of class "phylo". The tree may be rooted or unrooted, and the root placement does not matter.
tip_times, # numeric vector of length Ntips, listing the sampling times of all tips.
objective = "R2",
force_positive_rate = FALSE, # (logical) whether to force the implied mutation rate to be >=0, by constraining the root placements.
Nthreads = 1, # (integer) number of threads to use in parallel, where applicable
optim_algorithm = "nlminb", # (character) either "optimize" or "nlminb". What algorithm to use for fitting.
relative_error = 1e-9){
if(objective == "correlation")
objective_function = function(x, y){
return(stats::cor(y, x))
}
else if(objective == "R2")
objective_function = function(x, y){
fit = stats::lm.fit(cbind(rep(1,Ntips),x), y)
if(force_positive_rate && (fit$coefficients[2]<0)) return(-1e50)
R2 = 1 - mean(fit$residuals^2)/stats::var(y)
return(R2)
}
else if(objective == "SSR"){
objective_function = function(x, y){
fit = stats::lm.fit(cbind(rep(1,Ntips),x), y)
if(force_positive_rate && (fit$coefficients[2]<0)) return(-1e50)
return(-sum(fit$residuals^2) / fit$df.residual)
}
}
Ntips = length(tree$tip.label)
tree = ape::unroot(tree)
phylodistances = castor::get_all_pairwise_distances(tree=tree, check_input=FALSE)[, 1:Ntips] # get all pairwise phylodistances between clades & tips
aux_objective_function_at_edge = function(x, parent, child){
if(!is.finite(x)) return(-1e50)
edge_distaces = x * phylodistances[parent, ] + (1 - x) * phylodistances[child,]
return(objective_function(x=tip_times, y=edge_distaces))
}
aux_fit_single_edge = function(e){
if(optim_algorithm == "optimize"){
fit = stats::optimize(f = function(x) aux_objective_function_at_edge(x, tree$edge[e,1], tree$edge[e,2]),
interval = c(0, 1),
maximum = TRUE,
tol = relative_error)
}else if(optim_algorithm == "nlminb"){
fit = stats::nlminb(start = 0.5,
objective = function(x) -aux_objective_function_at_edge(x, tree$edge[e,1], tree$edge[e,2]),
lower = 0,
upper = 1,
control = list(iter.max = 10000, eval.max = 10000, rel.tol = relative_error))
fit$objective = -fit$objective
}
return(fit$objective)
}
if(Nthreads>1){
obj_edge = unlist(parallel::mclapply(1:nrow(tree$edge), FUN=aux_fit_single_edge, mc.cores=Nthreads, mc.preschedule = TRUE, mc.cleanup = TRUE))
}else{
obj_edge = sapply(1:nrow(tree$edge), FUN=aux_fit_single_edge)
}
valid_edges = which(is.finite(obj_edge))
best.edge = valid_edges[which.max(obj_edge[valid_edges])]
best.edge.parent = tree$edge[best.edge, 1]
best.edge.child = tree$edge[best.edge, 2]
best.edge.length = tree$edge.length[best.edge]
opt.fun = function(x) aux_objective_function_at_edge(x, best.edge.parent, best.edge.child)
best.pos = optimize(opt.fun, c(0, 1), maximum = TRUE, tol = relative_error)$maximum
new_root = list(edge = matrix(c(2L, 1L), 1, 2), tip.label = "new_root", edge.length = 1, Nnode = 1L, root.edge = 1)
class(new_root) = "phylo"
tree = ape::bind.tree(tree, new_root, where = best.edge.child, position = best.pos*best.edge.length)
tree = ape::collapse.singles(tree)
tree = ape::root(tree, "new_root")
tree = ape::drop.tip(tree, "new_root")
return(list(tree=tree))
} |
### . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..
### Setup ####
pacman::p_load(
dplyr,
purrr,
ggplot2,
gridExtra,
scales,
latex2exp
)
library(asp21bridge)
### . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..
### Simulate Data ####
set.seed(142)
x1 <- rnorm(n = 50, mean = 1, sd = 1)
x2 <- rnorm(n = 50, mean = 2, sd = 1)
z1 <- rnorm(n = 50, mean = 5, sd = 1)
z2 <- rnorm(n = 50, mean = 3, sd = 1)
X <- cbind(x1, x2)
Z <- cbind(z1, z2)
# standardize columns in X and Z
X <- apply(X, MARGIN = 2, FUN = function(x) (x - mean(x)) / sd(x))
Z <- apply(Z, MARGIN = 2, FUN = function(x) (x - mean(x)) / sd(x))
x1 <- X[, 1]
x2 <- X[, 2]
z1 <- Z[, 1]
z2 <- Z[, 2]
beta <- c(-1, 4) # True: beta_0 = 0, beta_1 = -1, beta_2 = 4,
gamma <- c(-2, 1) # True: gamma_0 = 0, gamma_1 = -2, gamma_2 = 1
y <- vector(mode = "numeric", length = 50)
for (i in seq_along(y)) {
mu <- sum(X[i, ] * beta)
sigma <- exp(sum(Z[i, ] * gamma))
y[i] <- rnorm(n = 1, mean = mu, sd = sigma)
}
# Values for hyper parameters
hyppar_val <- c(-1, 0, 0.5, 1, 2, 10, 50, 100, 200)
### Create lmls model
model <- lmls(
location = y ~ x1 + x2, scale = ~ z1 + z2,
light = FALSE
)
### . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..
### Create initial Tibbles ####
# One tibble per hyperpar-vector
# Default for other hyper-pars = 1
# Choosed values to have specific ratios
a_tau_data <- tibble(hyppar_val = hyppar_val) %>%
mutate(hyppar = "a_tau") %>%
mutate(samples = map(
.x = hyppar_val, .f = ~ mcmc_ridge(m = model, a_tau = .x, num_sim = 100)
)) %>%
mutate(acc_rate = map_dbl(
.x = samples, .f = ~ .x$mcmc_ridge$acceptance_rate
)) %>%
mutate(results = map(
.x = samples,
.f = ~ summary_complete(.x) %>% select(Parameter, `Posterior Mean`)
))
b_tau_data <- tibble(hyppar_val = hyppar_val) %>%
mutate(hyppar = "b_tau") %>%
mutate(samples = map(
.x = hyppar_val, .f = ~ mcmc_ridge(m = model, b_tau = .x, num_sim = 100)
)) %>%
mutate(acc_rate = map_dbl(
.x = samples, .f = ~ .x$mcmc_ridge$acceptance_rate
)) %>%
mutate(results = map(
.x = samples,
.f = ~ summary_complete(.x) %>% select(Parameter, `Posterior Mean`)
))
a_xi_data <- tibble(hyppar_val = hyppar_val) %>%
mutate(hyppar = "a_xi") %>%
mutate(samples = map(
.x = hyppar_val, .f = ~ mcmc_ridge(m = model, a_xi = .x, num_sim = 100)
)) %>%
mutate(acc_rate = map_dbl(
.x = samples, .f = ~ .x$mcmc_ridge$acceptance_rate
)) %>%
mutate(results = map(
.x = samples,
.f = ~ summary_complete(.x) %>% select(Parameter, `Posterior Mean`)
))
b_xi_data <- tibble(hyppar_val = hyppar_val) %>%
mutate(hyppar = "b_xi") %>%
mutate(samples = map(
.x = hyppar_val, .f = ~ mcmc_ridge(m = model, b_xi = .x, num_sim = 100)
)) %>%
mutate(acc_rate = map_dbl(
.x = samples, .f = ~ .x$mcmc_ridge$acceptance_rate
)) %>%
mutate(results = map(
.x = samples,
.f = ~ summary_complete(.x) %>% select(Parameter, `Posterior Mean`)
))
### . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..
### Analysis ####
# Creating results tables
results_a_tau_data <- a_tau_data %>%
select(hyppar, hyppar_val, results) %>%
tidyr::unnest(results)
results_b_tau_data <- b_tau_data %>%
select(hyppar, hyppar_val, results) %>%
tidyr::unnest(results)
results_a_xi_data <- a_xi_data %>%
select(hyppar, hyppar_val, results) %>%
tidyr::unnest(results)
results_b_xi_data <- b_xi_data %>%
select(hyppar, hyppar_val, results) %>%
tidyr::unnest(results)
# True value of -1, no variance of posterior means
results_a_tau_data %>%
filter(Parameter == "beta_1") %>%
print(n = Inf)
results_b_tau_data %>%
filter(Parameter == "beta_1") %>%
print(n = Inf)
results_b_xi_data %>%
filter(Parameter == "beta_1") %>%
print(n = Inf)
# True value of -2, small variance but steady small overestimation
results_a_tau_data %>%
filter(Parameter == "gamma_1") %>%
print(n = Inf)
results_b_xi_data %>%
filter(Parameter == "gamma_1") %>%
print(n = Inf)
results_b_xi_data %>%
filter(Parameter == "gamma_1") %>%
print(n = Inf)
# Adding absolute deviation (estimated - true value)
deviation_a_tau_data <- results_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta|gamma")) %>%
mutate(truth = rep(c(0, -1, 4, 0, -2, 1), times = n_distinct(hyppar_val))) %>%
mutate(deviation = abs(`Posterior Mean` - truth))
deviation_b_tau_data <- results_b_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta|gamma")) %>%
mutate(truth = rep(c(0, -1, 4, 0, -2, 1), times = n_distinct(hyppar_val))) %>%
mutate(deviation = abs(`Posterior Mean` - truth))
deviation_a_xi_data <- results_a_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta|gamma")) %>%
mutate(truth = rep(c(0, -1, 4, 0, -2, 1), times = n_distinct(hyppar_val))) %>%
mutate(deviation = abs(`Posterior Mean` - truth))
deviation_b_xi_data <- results_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta|gamma")) %>%
mutate(truth = rep(c(0, -1, 4, 0, -2, 1), times = n_distinct(hyppar_val))) %>%
mutate(deviation = abs(`Posterior Mean` - truth))
# two biggest and two smallest deviations for each parameter and a_tau / b_xi
# Evenly spread values of a_tau, only 100 is always at small, never at biggest
# No pattern of distinct hyperpar values obtainable at the betas
# At the gammas, values 0 < a_tau <= 1 are always at smallest
# The gammas have a higher deviation, the betas slightly none
bind_rows(
biggest = deviation_a_tau_data %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 2) %>%
ungroup(),
smallest = deviation_a_tau_data %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 2) %>%
ungroup(),
.id = "diff"
) %>%
select(hyppar, Parameter, diff, hyppar_val, deviation) %>%
arrange(Parameter) %>%
print(n = Inf)
# for beta, the values of b_xi are evenly spread
# for gamma, always b_xi = 100 are at smallest
# The gammas have a higher deviation, the betas slightly none
bind_rows(
biggest = deviation_b_xi_data %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 2) %>%
ungroup(),
smallest = deviation_b_xi_data %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 2) %>%
ungroup(),
.id = "diff"
) %>%
select(hyppar, Parameter, diff, hyppar_val, deviation) %>%
arrange(Parameter) %>%
print(n = Inf)
# how often is each hyper parameter among the four biggest or four smallest
# deviations from the true value across all parameters and a_tau / b _xi?
# Here, I moreover SPLIT b/w beta and gamma as parameters.
# small values have smaller deviation more often, while large values have larger deviation
# no remarkable differences b/w beta and gamma
bind_rows(
biggest_beta = deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 4) %>%
ungroup(),
smallest_beta = deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 4) %>%
ungroup(),
biggest_gamma = deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 4) %>%
ungroup(),
smallest_gamma = deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 4) %>%
ungroup(),
.id = "diff"
) %>%
select(hyppar, diff, hyppar_val, Parameter, deviation) %>%
count(diff, hyppar_val) %>%
tidyr::pivot_wider(names_from = diff, values_from = n) %>%
arrange(hyppar_val) %>%
print(n = Inf)
# beta's and gamma's deviation both is large for small values of hyperpar
# and small for large values
bind_rows(
biggest_beta = deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 4) %>%
ungroup(),
smallest_beta = deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 4) %>%
ungroup(),
biggest_gamma = deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 4) %>%
ungroup(),
smallest_gamma = deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 4) %>%
ungroup(),
.id = "diff"
) %>%
select(hyppar, diff, hyppar_val, Parameter, deviation) %>%
count(diff, hyppar_val) %>%
tidyr::pivot_wider(names_from = diff, values_from = n) %>%
arrange(hyppar_val) %>%
print(n = Inf)
# plot deviations for beta vs gamma Parameter for a_tau / b_xi
# largest influence in b_tau / gamma plot
p1 <- deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
title = TeX("Absolute deviations from true $\\beta$"),
x = TeX("$a_\\tau$"), y = "| Deviation |", color = NULL
) +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p2 <- deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
title = TeX("Absolute deviations from true $\\gamma$"),
x = TeX("$a_\\tau$"), y = NULL, color = NULL
) +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p3 <- deviation_b_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
# title = "Absolute deviations from true beta (1000 simulations)",
x = TeX("$b_\\tau$"), y = "| Deviation |", color = NULL
) +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p4 <- deviation_b_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
# title = "Absolute deviations from true gamma (1000 simulations)",
x = TeX("$b_\\tau$"), y = NULL, color = NULL
) +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
# tau_dev_plot <- grid.arrange(p1, p3, p2, p4, nrow = 2)
# plot deviations for beta vs gamma Parameter for hypperparams of xi
# largest influence in b_xi / gamma plot
p5 <- deviation_a_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
# title = "Absolute deviations from true gamma (1000 simulations)",
x = TeX("$a_\\xi$"), y = "| Deviation |", color = NULL
) +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p6 <- deviation_a_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
# title = "Abs. dev. from true gamma (#sim = 1000)",
x = TeX("$a_\\xi$"), y = NULL
) +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p7 <- deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
# title = "Abs. dev. from true beta (#sim = 1000)",
x = TeX("$b_\\xi$"), y = "| Deviation |"
) +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p8 <- deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
x = TeX("$b_\\xi$"), y = NULL
) +
theme_light() +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
# xi_dev_plot <- grid.arrange(p5, p7, p6, p8, nrow = 2)
# highest and lowest acceptance rates for a_tau
a_tau_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_max(order_by = acc_rate, n = 3)
a_tau_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_min(order_by = acc_rate, n = 3)
b_tau_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_max(order_by = acc_rate, n = 3)
b_tau_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_min(order_by = acc_rate, n = 3)
# highest and lowest acceptance rates for b_xi
a_xi_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_max(order_by = acc_rate, n = 3)
a_xi_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_min(order_by = acc_rate, n = 3)
b_xi_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_max(order_by = acc_rate, n = 3)
b_xi_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_min(order_by = acc_rate, n = 3)
# correlation of hyperparams over covariates
plot_cor <- function(covariate) {
# Plots hyperparameters' correlations against each others
covariate <- as.character(covariate)
deviation_binded <- tibble(a_tau = deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = covariate)) %>%
select(deviation)) %>%
mutate(b_tau = deviation_b_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = covariate)) %>%
select(deviation)) %>%
mutate(a_xi = deviation_a_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = covariate)) %>%
select(deviation)) %>%
mutate(b_xi = deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = covariate)) %>%
select(deviation))
panel.cor <- function(x, y, digits = 2, prefix = "", cex.cor, ...) # helper function from pairs() help-page
{
usr <- par("usr")
on.exit(par(usr))
par(usr = c(0, 1, 0, 1))
r <- cor(x, y)
txt <- format(c(r, 0.123456789), digits = digits)[1]
txt <- paste0(prefix, txt)
if (missing(cex.cor)) cex.cor <- 0.8 / strwidth(txt)
text(0.5, 0.5, txt, cex = 2)
}
return(pairs(as.matrix(deviation_binded),
upper.panel = panel.cor,
lower.panel = panel.smooth,
main = paste("Correlation plot of hyperparameters for", covariate)
))
}
# beta0_cor_plot <- plot_cor("beta_0")
# gamma0_cor_plot <- plot_cor("gamma_0")
# ____________________________________________________________________________
# Save data for Second Report ####
readr::write_rds(
x = list(
a_tau_data = a_tau_data,
b_tau_data = b_tau_data,
a_xi_data = a_xi_data,
b_xi_data = b_xi_data,
results_a_tau_data = results_a_tau_data,
results_b_tau_data = results_b_tau_data,
results_a_xi_data = results_a_xi_data,
results_b_xi_data = results_b_xi_data,
deviation_a_tau_data = deviation_a_tau_data,
deviation_b_tau_data = deviation_b_tau_data,
deviation_a_xi_data = deviation_a_xi_data,
deviation_b_xi_data = deviation_b_xi_data,
# tau_dev_plot = tau_dev_plot,
# xi_dev_plot = xi_dev_plot,
p1 = p1,
p2 = p2,
p3 = p3,
p4 = p4,
p5 = p5,
p6 = p6,
p7 = p7,
p8 = p8
# beta0_cor_plot = beta0_cor_plot,
# gamma0_cor_plot = gamma0_cor_plot
),
file = here::here(
"simulation-studies",
"hyperparameters.rds"
)
)
| /simulation-studies/hyperparameters.R | permissive | joel-beck/asp21bridge | R | false | false | 18,849 | r | ### . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..
### Setup ####
pacman::p_load(
dplyr,
purrr,
ggplot2,
gridExtra,
scales,
latex2exp
)
library(asp21bridge)
### . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..
### Simulate Data ####
set.seed(142)
x1 <- rnorm(n = 50, mean = 1, sd = 1)
x2 <- rnorm(n = 50, mean = 2, sd = 1)
z1 <- rnorm(n = 50, mean = 5, sd = 1)
z2 <- rnorm(n = 50, mean = 3, sd = 1)
X <- cbind(x1, x2)
Z <- cbind(z1, z2)
# standardize columns in X and Z
X <- apply(X, MARGIN = 2, FUN = function(x) (x - mean(x)) / sd(x))
Z <- apply(Z, MARGIN = 2, FUN = function(x) (x - mean(x)) / sd(x))
x1 <- X[, 1]
x2 <- X[, 2]
z1 <- Z[, 1]
z2 <- Z[, 2]
beta <- c(-1, 4) # True: beta_0 = 0, beta_1 = -1, beta_2 = 4,
gamma <- c(-2, 1) # True: gamma_0 = 0, gamma_1 = -2, gamma_2 = 1
y <- vector(mode = "numeric", length = 50)
for (i in seq_along(y)) {
mu <- sum(X[i, ] * beta)
sigma <- exp(sum(Z[i, ] * gamma))
y[i] <- rnorm(n = 1, mean = mu, sd = sigma)
}
# Values for hyper parameters
hyppar_val <- c(-1, 0, 0.5, 1, 2, 10, 50, 100, 200)
### Create lmls model
model <- lmls(
location = y ~ x1 + x2, scale = ~ z1 + z2,
light = FALSE
)
### . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..
### Create initial Tibbles ####
# One tibble per hyperpar-vector
# Default for other hyper-pars = 1
# Choosed values to have specific ratios
a_tau_data <- tibble(hyppar_val = hyppar_val) %>%
mutate(hyppar = "a_tau") %>%
mutate(samples = map(
.x = hyppar_val, .f = ~ mcmc_ridge(m = model, a_tau = .x, num_sim = 100)
)) %>%
mutate(acc_rate = map_dbl(
.x = samples, .f = ~ .x$mcmc_ridge$acceptance_rate
)) %>%
mutate(results = map(
.x = samples,
.f = ~ summary_complete(.x) %>% select(Parameter, `Posterior Mean`)
))
b_tau_data <- tibble(hyppar_val = hyppar_val) %>%
mutate(hyppar = "b_tau") %>%
mutate(samples = map(
.x = hyppar_val, .f = ~ mcmc_ridge(m = model, b_tau = .x, num_sim = 100)
)) %>%
mutate(acc_rate = map_dbl(
.x = samples, .f = ~ .x$mcmc_ridge$acceptance_rate
)) %>%
mutate(results = map(
.x = samples,
.f = ~ summary_complete(.x) %>% select(Parameter, `Posterior Mean`)
))
a_xi_data <- tibble(hyppar_val = hyppar_val) %>%
mutate(hyppar = "a_xi") %>%
mutate(samples = map(
.x = hyppar_val, .f = ~ mcmc_ridge(m = model, a_xi = .x, num_sim = 100)
)) %>%
mutate(acc_rate = map_dbl(
.x = samples, .f = ~ .x$mcmc_ridge$acceptance_rate
)) %>%
mutate(results = map(
.x = samples,
.f = ~ summary_complete(.x) %>% select(Parameter, `Posterior Mean`)
))
b_xi_data <- tibble(hyppar_val = hyppar_val) %>%
mutate(hyppar = "b_xi") %>%
mutate(samples = map(
.x = hyppar_val, .f = ~ mcmc_ridge(m = model, b_xi = .x, num_sim = 100)
)) %>%
mutate(acc_rate = map_dbl(
.x = samples, .f = ~ .x$mcmc_ridge$acceptance_rate
)) %>%
mutate(results = map(
.x = samples,
.f = ~ summary_complete(.x) %>% select(Parameter, `Posterior Mean`)
))
### . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..
### Analysis ####
# Creating results tables
results_a_tau_data <- a_tau_data %>%
select(hyppar, hyppar_val, results) %>%
tidyr::unnest(results)
results_b_tau_data <- b_tau_data %>%
select(hyppar, hyppar_val, results) %>%
tidyr::unnest(results)
results_a_xi_data <- a_xi_data %>%
select(hyppar, hyppar_val, results) %>%
tidyr::unnest(results)
results_b_xi_data <- b_xi_data %>%
select(hyppar, hyppar_val, results) %>%
tidyr::unnest(results)
# True value of -1, no variance of posterior means
results_a_tau_data %>%
filter(Parameter == "beta_1") %>%
print(n = Inf)
results_b_tau_data %>%
filter(Parameter == "beta_1") %>%
print(n = Inf)
results_b_xi_data %>%
filter(Parameter == "beta_1") %>%
print(n = Inf)
# True value of -2, small variance but steady small overestimation
results_a_tau_data %>%
filter(Parameter == "gamma_1") %>%
print(n = Inf)
results_b_xi_data %>%
filter(Parameter == "gamma_1") %>%
print(n = Inf)
results_b_xi_data %>%
filter(Parameter == "gamma_1") %>%
print(n = Inf)
# Adding absolute deviation (estimated - true value)
deviation_a_tau_data <- results_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta|gamma")) %>%
mutate(truth = rep(c(0, -1, 4, 0, -2, 1), times = n_distinct(hyppar_val))) %>%
mutate(deviation = abs(`Posterior Mean` - truth))
deviation_b_tau_data <- results_b_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta|gamma")) %>%
mutate(truth = rep(c(0, -1, 4, 0, -2, 1), times = n_distinct(hyppar_val))) %>%
mutate(deviation = abs(`Posterior Mean` - truth))
deviation_a_xi_data <- results_a_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta|gamma")) %>%
mutate(truth = rep(c(0, -1, 4, 0, -2, 1), times = n_distinct(hyppar_val))) %>%
mutate(deviation = abs(`Posterior Mean` - truth))
deviation_b_xi_data <- results_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta|gamma")) %>%
mutate(truth = rep(c(0, -1, 4, 0, -2, 1), times = n_distinct(hyppar_val))) %>%
mutate(deviation = abs(`Posterior Mean` - truth))
# two biggest and two smallest deviations for each parameter and a_tau / b_xi
# Evenly spread values of a_tau, only 100 is always at small, never at biggest
# No pattern of distinct hyperpar values obtainable at the betas
# At the gammas, values 0 < a_tau <= 1 are always at smallest
# The gammas have a higher deviation, the betas slightly none
bind_rows(
biggest = deviation_a_tau_data %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 2) %>%
ungroup(),
smallest = deviation_a_tau_data %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 2) %>%
ungroup(),
.id = "diff"
) %>%
select(hyppar, Parameter, diff, hyppar_val, deviation) %>%
arrange(Parameter) %>%
print(n = Inf)
# for beta, the values of b_xi are evenly spread
# for gamma, always b_xi = 100 are at smallest
# The gammas have a higher deviation, the betas slightly none
bind_rows(
biggest = deviation_b_xi_data %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 2) %>%
ungroup(),
smallest = deviation_b_xi_data %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 2) %>%
ungroup(),
.id = "diff"
) %>%
select(hyppar, Parameter, diff, hyppar_val, deviation) %>%
arrange(Parameter) %>%
print(n = Inf)
# how often is each hyper parameter among the four biggest or four smallest
# deviations from the true value across all parameters and a_tau / b _xi?
# Here, I moreover SPLIT b/w beta and gamma as parameters.
# small values have smaller deviation more often, while large values have larger deviation
# no remarkable differences b/w beta and gamma
bind_rows(
biggest_beta = deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 4) %>%
ungroup(),
smallest_beta = deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 4) %>%
ungroup(),
biggest_gamma = deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 4) %>%
ungroup(),
smallest_gamma = deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 4) %>%
ungroup(),
.id = "diff"
) %>%
select(hyppar, diff, hyppar_val, Parameter, deviation) %>%
count(diff, hyppar_val) %>%
tidyr::pivot_wider(names_from = diff, values_from = n) %>%
arrange(hyppar_val) %>%
print(n = Inf)
# both beta's and gamma's deviations are large for small hyperparameter values
# and small for large values
bind_rows(
biggest_beta = deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 4) %>%
ungroup(),
smallest_beta = deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 4) %>%
ungroup(),
biggest_gamma = deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
group_by(Parameter) %>%
slice_max(order_by = deviation, n = 4) %>%
ungroup(),
smallest_gamma = deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
group_by(Parameter) %>%
slice_min(order_by = deviation, n = 4) %>%
ungroup(),
.id = "diff"
) %>%
select(hyppar, diff, hyppar_val, Parameter, deviation) %>%
count(diff, hyppar_val) %>%
tidyr::pivot_wider(names_from = diff, values_from = n) %>%
arrange(hyppar_val) %>%
print(n = Inf)
# plot deviations for beta vs gamma Parameter for the hyperparameters of tau (a_tau / b_tau)
# largest influence in the b_tau / gamma plot
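# (TeX() axis labels come from the latex2exp package and pseudo_log_trans() from the scales
# package; both are assumed to be loaded earlier in the script)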
p1 <- deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
title = TeX("Absolute deviations from true $\\beta$"),
x = TeX("$a_\\tau$"), y = "| Deviation |", color = NULL
) +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p2 <- deviation_a_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
title = TeX("Absolute deviations from true $\\gamma$"),
x = TeX("$a_\\tau$"), y = NULL, color = NULL
) +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p3 <- deviation_b_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
# title = "Absolute deviations from true beta (1000 simulations)",
x = TeX("$b_\\tau$"), y = "| Deviation |", color = NULL
) +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p4 <- deviation_b_tau_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
# title = "Absolute deviations from true gamma (1000 simulations)",
x = TeX("$b_\\tau$"), y = NULL, color = NULL
) +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
# tau_dev_plot <- grid.arrange(p1, p3, p2, p4, nrow = 2)
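# (grid.arrange() is from the gridExtra package; the combined 2x2 deviation plots are left commented out here)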
# plot deviations for beta vs gamma Parameter for the hyperparameters of xi (a_xi / b_xi)
# largest influence in the b_xi / gamma plot
p5 <- deviation_a_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
# title = "Absolute deviations from true gamma (1000 simulations)",
x = TeX("$a_\\xi$"), y = "| Deviation |", color = NULL
) +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p6 <- deviation_a_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
# title = "Abs. dev. from true gamma (#sim = 1000)",
x = TeX("$a_\\xi$"), y = NULL
) +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p7 <- deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "beta")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
# title = "Abs. dev. from true beta (#sim = 1000)",
x = TeX("$b_\\xi$"), y = "| Deviation |"
) +
guides(color = "none") +
theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
p8 <- deviation_b_xi_data %>%
filter(stringr::str_detect(string = Parameter, pattern = "gamma")) %>%
ggplot(mapping = aes(x = hyppar_val, y = deviation, color = Parameter)) +
geom_line(size = 0.5) +
geom_point(size = 1) +
labs(
x = TeX("$b_\\xi$"), y = NULL
) +
  guides(color = "none") +
  theme_light(base_size = 9) +
theme(
panel.grid.minor = element_blank(),
legend.position = "top",
plot.title = element_text(hjust = 0.5)
) +
geom_smooth(
method = "lm", aes(colour = "linear trend"), linetype = "dashed",
se = FALSE
) +
scale_x_continuous(trans = pseudo_log_trans(), breaks = c(-1, 0, 1, 2, 10, 50, 100, 200))
# xi_dev_plot <- grid.arrange(p5, p7, p6, p8, nrow = 2)
# highest and lowest acceptance rates for a_tau and b_tau
a_tau_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_max(order_by = acc_rate, n = 3)
a_tau_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_min(order_by = acc_rate, n = 3)
b_tau_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_max(order_by = acc_rate, n = 3)
b_tau_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_min(order_by = acc_rate, n = 3)
# highest and lowest acceptance rates for a_xi and b_xi
a_xi_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_max(order_by = acc_rate, n = 3)
a_xi_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_min(order_by = acc_rate, n = 3)
b_xi_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_max(order_by = acc_rate, n = 3)
b_xi_data %>%
select(hyppar, hyppar_val, acc_rate) %>%
slice_min(order_by = acc_rate, n = 3)
# correlation of hyperparams over covariates
plot_cor <- function(covariate) {
  # Plots the hyperparameters' deviation correlations against each other
  covariate <- as.character(covariate)
  deviation_binded <- tibble(
    a_tau = deviation_a_tau_data %>%
      filter(stringr::str_detect(string = Parameter, pattern = covariate)) %>%
      pull(deviation),
    b_tau = deviation_b_tau_data %>%
      filter(stringr::str_detect(string = Parameter, pattern = covariate)) %>%
      pull(deviation),
    a_xi = deviation_a_xi_data %>%
      filter(stringr::str_detect(string = Parameter, pattern = covariate)) %>%
      pull(deviation),
    b_xi = deviation_b_xi_data %>%
      filter(stringr::str_detect(string = Parameter, pattern = covariate)) %>%
      pull(deviation)
  )
panel.cor <- function(x, y, digits = 2, prefix = "", cex.cor, ...) # helper function from pairs() help-page
{
usr <- par("usr")
on.exit(par(usr))
par(usr = c(0, 1, 0, 1))
r <- cor(x, y)
txt <- format(c(r, 0.123456789), digits = digits)[1]
txt <- paste0(prefix, txt)
if (missing(cex.cor)) cex.cor <- 0.8 / strwidth(txt)
text(0.5, 0.5, txt, cex = 2)
}
return(pairs(as.matrix(deviation_binded),
upper.panel = panel.cor,
lower.panel = panel.smooth,
main = paste("Correlation plot of hyperparameters for", covariate)
))
}
# beta0_cor_plot <- plot_cor("beta_0")
# gamma0_cor_plot <- plot_cor("gamma_0")
# ____________________________________________________________________________
# Save data for Second Report ####
readr::write_rds(
x = list(
a_tau_data = a_tau_data,
b_tau_data = b_tau_data,
a_xi_data = a_xi_data,
b_xi_data = b_xi_data,
results_a_tau_data = results_a_tau_data,
results_b_tau_data = results_b_tau_data,
results_a_xi_data = results_a_xi_data,
results_b_xi_data = results_b_xi_data,
deviation_a_tau_data = deviation_a_tau_data,
deviation_b_tau_data = deviation_b_tau_data,
deviation_a_xi_data = deviation_a_xi_data,
deviation_b_xi_data = deviation_b_xi_data,
# tau_dev_plot = tau_dev_plot,
# xi_dev_plot = xi_dev_plot,
p1 = p1,
p2 = p2,
p3 = p3,
p4 = p4,
p5 = p5,
p6 = p6,
p7 = p7,
p8 = p8
# beta0_cor_plot = beta0_cor_plot,
# gamma0_cor_plot = gamma0_cor_plot
),
file = here::here(
"simulation-studies",
"hyperparameters.rds"
)
)
|
#Elizabeth E. Esterly, Danny Byrd
#Last revision 09/25/2017
#treeTest.R
library(tidyverse)
source("util.R")
source("classes.R")
#THESE WILL COME IN AS SCRIPT ARGUMENTS
#IF FALSE, THEN THE GINI INDEX WILL BE USED
USEINFOGAIN <- FALSE
PVAL <- 0.95
args <- commandArgs(trailingOnly = TRUE)
#read training data
training <- read_csv("weatherTraining.csv")
totalEntries <- nrow(training)
#calculate entropy of data set
outcomes <- unique(training$Class)
dEnt <- datasetEntropy(training$Class, outcomes, totalEntries)
#copy the training data as a backup as we will start to do a ton of operations on it
copiedTraining <- training
#----------------begin recursive tree-builder function
tree <- buildTree(PVAL, copiedTraining, 1)
print(tree)
| /treeTest.R | no_license | elyzabethellen/DecisionTreeHandImplementation | R | false | false | 746 | r |
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/predict.R
\name{predict.ConsReg}
\alias{predict.ConsReg}
\title{Predict or fitted values of object \code{ConsReg}}
\usage{
\method{predict}{ConsReg}(object, newdata = NULL, components = F, ...)
}
\arguments{
\item{object}{object of class \code{ConsReg}}
\item{newdata}{New data for which to predict the objective function. If it is NULL (the default),
then the fitted values will be returned}
\item{components}{if it is \code{TRUE}, the predictions for each regression component will be returned}
\item{...}{Additional arguments passed to the family. In particular, at this moment,
if type = 'link', then for the binomial family the link values will be returned}
}
\value{
predictions
}
\description{
Predict or fitted values of object \code{ConsReg}
}
\examples{
data('fake_data')
data = fake_data
data$y = 1/(1+exp(-data$y))
data$y = ifelse(data$y > .5, 1, 0)
table(data$y)
fit5 = ConsReg(y~x1+x2+x3+x4, data = data,
family = 'binomial', penalty = 10000,
LOWER = -.5, UPPER = .2,
optimizer = 'gosolnp')
pr = predict(fit5, newdata = data[1:3,], type = 'probability')
pr
}
| /man/predict.ConsReg.Rd | no_license | puigjos/ConsReg | R | false | true | 1,170 | rd |
|
\name{ssize.fdr-package}
\alias{ssize.fdr-package}
\alias{ssize.fdr}
\docType{package}
\title{Sample Size Calculations for Microarray Experiments}
\description{
This package calculates appropriate sample sizes for one-sample,
two-sample, and multi-sample microarray experiments for a desired power
of the test. Sample sizes are calculated under controlled false
discovery rates and fixed proportions of non-differentially expressed
genes. Outputs a graph of power versus sample size.
}
\details{
\tabular{ll}{
Package: \tab ssize.fdr\cr
Type: \tab Package\cr
Version: \tab 1.3\cr
Date: \tab 2022-06-05\cr
License: \tab GPL-3\cr
}
For all functions, the user inputs the desired power, the false
discovery rate to be controlled, the proportion(s) of
non-differentially expressed genes, and the maximum possible sample size
to be used in calculations. If the user inputs a vector of proportions
of non-differentially expressed genes, sample size calculations are
performed for each proportion. For the function \code{\link{ssize.twoSamp}},
the user must additionally input the common difference in mean treatment
expressions as well as the common standard deviation for all genes.
This becomes the common effect size and common standard deviation for
all genes when using the function \code{\link{ssize.oneSamp}}. For the
function \code{\link{ssize.twoSampVary}} (\code{\link{ssize.oneSampVary}})
the differences in mean treatment expressions (effect sizes) are assumed
to follow a normal distribution and the variances among genes are assumed
to follow an inverse gamma distribution, so parameters for these
distributions must be entered. For the function \code{\link{ssize.F}},
the design matrix of the experiment, the parameter vector, and an optional
coefficient matrix or vector of linear contrasts of interest must also be
entered. The function \code{\link{ssize.Fvary}} allows the variances of
the genes to follow an inverse gamma distribution, so the shape and scale
parameters must be specified by the user.
}
\author{Megan Orr <megan.orr@ndsu.edu>, Peng Liu <pliu@iastate.edu>}
\references{Liu, Peng and J. T. Gene Hwang. 2007. Quick calculation for
sample size while controlling false discovery rate with application
to microarray analysis. \emph{Bioinformatics} 23(6): 739-746.}
\keyword{ package }
\examples{
a<-0.05 ##false discovery rate to be controlled
pwr<-0.8 ##desired power
p0<-c(0.5,0.9,0.95) ##proportions of non-differentially expressed genes
N<-20; N1<-35 ##maximum sample size for calculations
##Example of function ssize.oneSamp
d<-1 ##effect size
s<-0.5 ##standard deviation
os<-ssize.oneSamp(delta=d,sigma=s,fdr=a,power=pwr,pi0=p0,maxN=N,side="two-sided")
os$ssize ##first sample sizes to reach desired power
os$power ##calculated power for each sample size
os$crit.vals ##calculated critical value for each sample size
##Example of function ssize.oneSampVary
dm<-2; ds<-1 ##the effect sizes of the genes follow a Normal(2,1) distribution
alph<-3; beta<-1 ##the variances of the genes follow an Inverse Gamma(3,1) distribution.
osv<-ssize.oneSampVary(deltaMean=dm,deltaSE=ds,a=alph,b=beta,fdr=a,power=pwr,
pi0=p0,maxN=N1,side="two-sided")
osv$ssize ##first sample sizes to reach desired power
osv$power ##calculated power for each sample size
osv$crit.vals ##calculated critical value for each sample size
##Example of function ssize.twoSamp
##Calculates sample sizes for two-sample microarray experiments
##See Figure 1.(a) of Liu & Hwang (2007)
d1<-1 ##difference in differentially expressed genes to be detected
s1<-0.5 ##standard deviation
ts<-ssize.twoSamp(delta=d1,sigma=s1,fdr=a,power=pwr,pi0=p0,maxN=N,side="two-sided")
ts$ssize ##first sample sizes to reach desired power
ts$power ##calculated power for each sample size
ts$crit.vals ##calculated critical value for each sample size
##Example of function ssize.twoSampVary
##Calculates sample sizes for multi-sample microarray experiments in which both the differences in
##expressions between treatments and the standard deviations vary among genes.
##See Figure 3.(a) of Liu & Hwang (2007)
dm<-2 ##mean parameter of normal distribution of differences
##between treatments among genes
ds<-1 ##standard deviation parameter of normal distribution
##of differences between treatments among genes
alph<-3 ##shape parameter of inverse gamma distribution followed
##by standard deviations of genes
beta<-1 ##scale parameter of inverse gamma distribution followed
##by standard deviations of genes
tsv<-ssize.twoSampVary(deltaMean=dm,deltaSE=ds,a=alph,b=beta,
fdr=a,power=pwr,pi0=p0,maxN=N1,side="two-sided")
tsv$ssize ##first sample sizes to reach desired power
tsv$power ##calculated power for each sample size
tsv$crit.vals ##calculated critical value for each sample size
##Example of function ssize.F
##Sample size calculation for three-treatment loop design microarray experiment
##See Figure S2. of Liu & Hwang (2007)
des<-matrix(c(1,-1,0,0,1,-1),ncol=2,byrow=FALSE) ##design matrix of loop design experiment
b<-c(1,-0.5) ##difference between first two treatments is 1 and
##second and third treatments is -0.5
df<-function(n){3*n-2} ##degrees of freedom for this design is 3n-2
s<-1 ##standard deviation
p0.F<-c(0.5,0.9,0.95,0.995) ##proportions of non-differentially expressed genes
ft<-ssize.F(X=des,beta=b,dn=df,sigma=s,fdr=a,power=pwr,pi0=p0.F,maxN=N)
ft$ssize ##first sample sizes to reach desired power
ft$power ##calculated power for each sample size
ft$crit.vals ##calculated critical value for each sample size
##Example of function ssize.Fvary
##Sample size calculation for three-treatment loop design microarray experiment
des<-matrix(c(1,-1,0,0,1,-1),ncol=2,byrow=FALSE) ##design matrix of loop design experiment
b<-c(1,-0.5) ##difference between first two treatments is 1 and
##second and third treatments is -0.5
df<-function(n){3*n-2} ##degrees of freedom for this design is 3n-2
alph<-3;beta<-1 ##variances among genes follow an Inverse Gamma(3,1)
a1<-0.05 ##fdr to be fixed
p0.F<-c(0.9,0.95,0.995) ##proportions of non-differentially expressed genes
ftv<-ssize.Fvary(X=des,beta=b,dn=df,a=alph,b=beta,fdr=a1,power=pwr,pi0=p0.F,maxN=N1)
ftv$ssize ##first sample sizes to reach desired power
ftv$power ##calculated power for each sample size
ftv$crit.vals ##calculated critical value for each sample size
}
| /man/ssize.fdr-package.Rd | no_license | cran/ssize.fdr | R | false | false | 6,600 | rd |
|
## Data taken from https://www.kaggle.com/dhruvildave/new-york-times-best-sellers
rawData <- read.csv("bestsellers2.csv", stringsAsFactors = FALSE)
data <- rawData[rawData$list_name == "Combined Print and E-Book Nonfiction",]
weeks <- unique(data$published_date)
wasTitleInPreviousWeek <- function(title, week){
  previousWeek <- as.Date(week) - 7
  previousWeekTitles <- data[data$published_date == previousWeek,]$title
  result <- title %in% previousWeekTitles
  return(result)
}
titlesInListAlready <- c()
submarineTitles <- c()
rollercoasterTitles <- c()
counterTotalPositions <- 0
counterSubmarines <- 0
counterRollerCoasters <- 0
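# A title counts as a "submarine" the first time it re-enters the list after being absent for at
# least one week; if a submarine re-enters again it is reclassified as a "rollercoaster"
# (further re-entries of a rollercoaster are not counted again).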
for(week in weeks){
print(week)
titles <- data[data$published_date == week,]$title
for(title in titles){
if(title %in% titlesInListAlready && !wasTitleInPreviousWeek(title, week)){
if(title %in% submarineTitles){
print("ROLLERCOASTER")
print(title)
rollercoasterTitles <- c(rollercoasterTitles, title)
submarineTitles <- submarineTitles[submarineTitles!=title]
counterRollerCoasters <- counterRollerCoasters + 1
}
if(!(title %in% submarineTitles) && !(title %in% rollercoasterTitles)){
print("SUBMARINE")
print(title)
submarineTitles <- c(submarineTitles, title)
counterSubmarines <- counterSubmarines + 1
}
}
}
counterTotalPositions <- counterTotalPositions + length(titles)
titlesInListAlready <- c(titlesInListAlready, titles)
}
counterTotalPositions
counterSubmarines
counterRollerCoasters
submarineTitles
rollercoasterTitles
| /think_again_rank/NYTbestsellerListHistoricalAnalysis.R | no_license | NunoSempere/gjopen | R | false | false | 1,592 | r |
|