Dataset schema:

content: large_string, lengths 0 to 6.46M
path: large_string, lengths 3 to 331
license_type: large_string, 2 values
repo_name: large_string, lengths 5 to 125
language: large_string, 1 value
is_vendor: bool, 2 classes
is_generated: bool, 2 classes
length_bytes: int64, 4 to 6.46M
extension: large_string, 75 values
text: string, lengths 0 to 6.46M
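The rows below follow the schema above: a `content` cell, then the remaining metadata columns, then a `text` cell. As a minimal sketch of working with rows under this schema, the snippet below filters a toy pandas DataFrame built from two of the metadata records in this dump; the filtering criteria (dropping vendored and generated files) are illustrative assumptions, not part of the dataset itself.

```python
import pandas as pd

# Two sample rows, copied from the metadata records in this dump
# (the large 'content'/'text' cells are omitted for brevity).
rows = [
    {"path": "/man/format_triangle_multi.Rd", "license_type": "permissive",
     "repo_name": "bayesiandemography/demprep", "language": "R",
     "is_vendor": False, "is_generated": True, "length_bytes": 4986,
     "extension": "rd"},
    {"path": "/plot1.R", "license_type": "no_license",
     "repo_name": "dnlbreen/ExData_Plotting1", "language": "R",
     "is_vendor": False, "is_generated": False, "length_bytes": 595,
     "extension": "r"},
]
df = pd.DataFrame(rows)

# Keep only hand-written files: neither vendored nor machine-generated.
hand_written = df[~df["is_vendor"] & ~df["is_generated"]]
print(hand_written["path"].tolist())  # ['/plot1.R']
```

The roxygen2-generated `.Rd` row is excluded because its `is_generated` flag is true; the same boolean masks extend directly to the full dataset.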
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/format_triangle.R
\name{format_triangle_multi}
\alias{format_triangle_multi}
\title{Format labels for multi-year Lexis triangles}
\usage{
format_triangle_multi(
  x,
  age,
  period,
  width = 5,
  break_max = 100,
  open_last = TRUE,
  month_start = "Jan",
  label_year_start = TRUE,
  origin = 2000
)
}
\arguments{
\item{x}{A vector of Lexis triangle labels.}

\item{age}{A vector of age groups, the same length as \code{x}.}

\item{period}{A vector of periods, the same length as \code{x}.}

\item{width}{The width, in whole years, of the triangles to be created. Defaults to 5.}

\item{break_max}{An integer or \code{NULL}. Defaults to 100.}

\item{open_last}{Whether the final age group has no upper limit. Defaults to \code{TRUE}.}

\item{month_start}{An element of \code{\link[base]{month.name}}, or \code{\link[base]{month.abb}}. Periods start on the first day of this month.}

\item{label_year_start}{Logical. Whether a single-year period in \code{x} is labelled using the calendar year at the beginning of the period or the calendar year at the end. Defaults to \code{TRUE}.}

\item{origin}{An integer. Defaults to 2000.}
}
\value{
A factor with the same length as \code{x}.
}
\description{
Format labels for multi-year Lexis triangles to be used with multi-year age groups and periods. These age groups and periods (apart from a possible open age group) all have the same width, which is set by the \code{width} parameter.
}
\details{
\code{age} and \code{period} define the age groups and periods to which the Lexis triangles in \code{x} belong. These age groups and periods can be narrower than \code{width}. Age groups can be single-year (\code{"23"}), multi-year (\code{"20-24"}) or open (\code{"100+"}), and periods can be single-year (\code{"2023"}) or multi-year (\code{"2020-2025"}).

The values for \code{width}, \code{break_max}, \code{open_last}, and \code{origin} together define a new system of Lexis triangles. \code{format_triangle_multi} calculates where the triangles defined by \code{x}, \code{age}, and \code{period} fall within this new system. For instance, if an upper triangle defined by \code{x}, \code{age}, and \code{period} falls entirely within a lower triangle in the new system, then \code{format_triangle_multi} returns \code{"Lower"}.

\code{open_last} determines whether the triangles need to account for an open age group, and \code{break_max} specifies the cut-off for the open age group. See \code{\link{format_age_multi}} for a description of how \code{open_last} and \code{break_max} determine age groups.

\code{x} and \code{period} must be based on the same starting month, so that if \code{x} uses years that start in July and end in June, then \code{period} must do so too. If \code{x} was created using function \code{\link{date_to_triangle_year}} and \code{period} was created using function \code{\link{date_to_period_year}}, then both should have used the same value for \code{month_start}. If \code{x} and \code{period} were not calculated from raw dates data, then it may be necessary to check the documentation for \code{x} and \code{period} to see which months of the year were used.
}
\examples{
## we construct 'x', 'age', and 'period'
## from dates information ourselves before
## calling 'format_triangle_multi'
date_original <- c("2024-03-27", "2022-11-09")
dob_original <- "2020-01-01"
x <- date_to_triangle_year(date = date_original,
                           dob = dob_original,
                           month_start = "Jul")
age <- date_to_age_year(date = date_original,
                        dob = dob_original)
period <- date_to_period_year(date = date_original,
                              month_start = "Jul")
format_triangle_multi(x = x,
                      age = age,
                      period = period)

## someone else has constructed
## 'x', 'age', and 'period' from
## dates information
x_processed <- c("Lower", "Lower", "Lower")
age_processed <- c("10", "20+", "5")
period_processed <- c(2002, 2015, 2011)
format_triangle_multi(x = x_processed,
                      age = age_processed,
                      period = period_processed,
                      break_max = 20)

## alternative value for 'width'
format_triangle_multi(x = x_processed,
                      age = age_processed,
                      period = period_processed,
                      width = 10,
                      break_max = 20)

## alternative value for 'break_max'
format_triangle_multi(x = x_processed,
                      age = age_processed,
                      period = period_processed,
                      break_max = 10)
}
\seealso{
Other functions for reformatting triangle labels are \code{\link{format_triangle_year}}, \code{\link{format_triangle_births}}, \code{\link{format_triangle_quarter}}, and \code{\link{format_triangle_month}}. \code{\link{date_to_triangle_year}} creates Lexis triangles from dates.
}
path: /man/format_triangle_multi.Rd
license_type: permissive
repo_name: bayesiandemography/demprep
language: R
is_vendor: false
is_generated: true
length_bytes: 4,986
extension: rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/name_file.R
\name{name_file}
\alias{name_file}
\title{Define download file name}
\usage{
name_file(select_concept, ma, per1m)
}
\arguments{
\item{select_concept}{A character vector of length 1.}

\item{ma}{A character vector of length 1.}

\item{per1m}{A character vector of length 1.}
}
\value{
A character vector of length 1.
}
\description{
Define download file name
}
\examples{
\dontrun{
name_file("new_conf", "Yes", "No")
}
}
path: /man/name_file.Rd
license_type: permissive
repo_name: mitsuoxv/covid
language: R
is_vendor: false
is_generated: true
length_bytes: 510
extension: rd
#' Prediction and prediction error according to BagKDE
#'
#' @param xx data vector
#' @param grille grid for density evaluation
#' @param B number of histograms to aggregate
#' @param dobs observation
#'
#' @return prediction and prediction error
#' @export
BagKDE.err = function(xx, grille = aa, B = 10, dobs) {
  ## 'kde' comes from the ks package; 'aa' (the default grid) and
  ## 'error()' are assumed to be defined elsewhere in this package.
  fin = 0
  err00 = NULL
  n = length(xx)
  for (i in 1:B) {
    xb = xx[sample(n, replace = TRUE)]             # bootstrap resample
    kk = kde(xb, h = bw.ucv(xb), eval.points = grille)
    fin = fin + kk$estimate                        # running sum of estimates
    previ = fin / i                                # aggregate after i resamples
    err00 = rbind(err00, error(dobs, previ))       # error trace per iteration
  }
  list(prev = fin / B, erreur = err00)
}
path: /BagKDE.err.R
license_type: no_license
repo_name: youla1u/bagdenest-Package--Bagging-of-density-estimation-
language: R
is_vendor: false
is_generated: false
length_bytes: 592
extension: r
require(data.table)

extractPhoneStub <- function(phone) {
  if (substr(phone, 1, 3) == '+61') {
    substr(phone, 4, 6)
  } else if (substr(phone, 1, 1) == '0') {
    substr(phone, 2, 4)
  } else {
    substr(phone, 1, 3)
  }
}

extractAllPhoneStubs <- Vectorize(extractPhoneStub)

# THESE MOBILE PHONE FEATURES ARE BASED ON THE ORIGINAL ISSUER OF THE PHONE
# IN AUSTRALIA. MANY OF THESE NUMBERS WILL HAVE BEEN MOVED BETWEEN CARRIERS
# SO THESE FEATURES ARE ONLY A CRUDE INDICATION OF WHO SOMEONE WAS LIKELY TO
# HAVE GOTTEN THEIR ORIGINAL CONTRACT
##############################################################################
addPhoneFeatures <- function(dfin, phone_col) {
  df <- data.table(dfin)
  df[, phoneStub := extractAllPhoneStubs(df[[phone_col]])]
  telstraMobStubs <- c("400", "407", "408", "409", "417", "418", "419",
                       "427", "428", "429", "437", "438", "439", "447",
                       "448", "455", "456", "457", "458", "459", "467",
                       "472", "473", "474", "475", "476", "477", "484",
                       "487", "488", "490", "491", "497", "498", "499")
  df[, phoneIsTelstra := ifelse(phoneStub %in% telstraMobStubs, 1, 0)]
  optusMobStubs <- c("401", "402", "403", "411", "412", "413", "421",
                     "422", "423", "431", "432", "434", "435", "466",
                     "468", "478", "479", "481", "482")
  df[, phoneIsOptus := ifelse(phoneStub %in% optusMobStubs, 1, 0)]
  vodaMobStubs <- c("404", "405", "406", "414", "415", "416", "424",
                    "425", "426", "430", "433", "449", "450", "451", "452")
  df[, phoneIsVoda := ifelse(phoneStub %in% vodaMobStubs, 1, 0)]
  df[, phoneIsLyca := ifelse(phoneStub %in% c("470", "469"), 1, 0)]
  df[]
}
path: /phoneFeatures.R
license_type: no_license
repo_name: john-hawkins/R_Feature_Gen
language: R
is_vendor: false
is_generated: false
length_bytes: 1,802
extension: r
# plot 1
library(data.table)
library(dplyr)

# read in data and format
power_data <- tbl_df(fread("household_power_consumption.txt",
                           header = TRUE, sep = ';', na.strings = "?"))
power_data <- power_data %>% filter(Date %in% c("1/2/2007", "2/2/2007"))
power_data <- power_data %>%
  mutate(date_time = as.POSIXct(strptime(paste(power_data$Date, power_data$Time, sep = " "),
                                         "%d/%m/%Y %H:%M:%S")))

png("plot1.png", width = 450, height = 450, units = "px")
hist(power_data$Global_active_power, col = "red",
     main = "Global Active Power",
     xlab = "Global Active Power (kilowatts)")
dev.off()
path: /plot1.R
license_type: no_license
repo_name: dnlbreen/ExData_Plotting1
language: R
is_vendor: false
is_generated: false
length_bytes: 595
extension: r
library(testthat)
library(GlmSimulatoR)

test_check("GlmSimulatoR")
path: /tests/testthat.R
license_type: no_license
repo_name: cran/GlmSimulatoR
language: R
is_vendor: false
is_generated: false
length_bytes: 72
extension: r
\name{oklahoma.blkgrp} \Rdversion{1.1} \alias{oklahoma.blkgrp} \docType{data} \title{ oklahoma.blkgrp } \description{ oklahoma.blkgrp is a \code{\link[sp:SpatialPolygonsDataFrame]{SpatialPolygonsDataFrame}} with polygons made from the 2000 US Census tiger/line boundary files (\url{http://www.census.gov/geo/www/tiger/}) for Census Block Groups. It also contains 86 variables from the Summary File 1 (SF 1) which contains the 100-percent data (\url{http://www.census.gov/prod/cen2000/doc/sf1.pdf}). All polygons are projected in CRS("+proj=longlat +datum=NAD83") } \usage{data(oklahoma.blkgrp)} %\format{ %} \details{ \bold{ID Variables} \cr \tabular{ll}{ data field name \tab Full Description \cr state \tab State FIPS code \cr county \tab County FIPS code \cr tract \tab Tract FIPS code \cr blkgrp \tab Blockgroup FIPS code \cr } \bold{Census Variables} \cr \tabular{lll}{ Census SF1 Field Name \tab data field name \tab Full Description \cr (P007001) \tab pop2000 \tab population 2000 \cr (P007002) \tab white \tab white alone \cr (P007003) \tab black \tab black or african american alone \cr (P007004) \tab ameri.es \tab american indian and alaska native alone \cr (P007005) \tab asian \tab asian alone \cr (P007006) \tab hawn.pi \tab native hawaiian and other pacific islander alone \cr (P007007) \tab other \tab some other race alone \cr (P007008) \tab mult.race \tab 2 or more races \cr (P011001) \tab hispanic \tab people who are hispanic or latino \cr (P008002) \tab not.hispanic.t \tab Not Hispanic or Latino \cr (P008003) \tab nh.white \tab White alone \cr (P008004) \tab nh.black \tab Black or African American alone \cr (P008005) \tab nh.ameri.es \tab American Indian and Alaska Native alone \cr (P008006) \tab nh.asian \tab Asian alone \cr (P008007) \tab nh.hawn.pi \tab Native Hawaiian and Other Pacific Islander alone \cr (P008008) \tab nh.other \tab Some other race alone \cr (P008010) \tab hispanic.t \tab Hispanic or Latino \cr (P008011) \tab h.white \tab White alone \cr 
(P008012) \tab h.black \tab Black or African American alone \cr (P008013) \tab h.american.es \tab American Indian and Alaska Native alone \cr (P008014) \tab h.asian \tab Asian alone \cr (P008015) \tab h.hawn.pi \tab Native Hawaiian and Other Pacific Islander alone \cr (P008016) \tab h.other \tab Some other race alone \cr (P012002) \tab males \tab males \cr (P012026) \tab females \tab females \cr (P012003 + P012027) \tab age.under5 \tab male and female under 5 yrs \cr (P012004-006 + P012028-030) \tab age.5.17 \tab male and female 5 to 17 yrs \cr (P012007-009 + P012031-033) \tab age.18.21 \tab male and female 18 to 21 yrs \cr (P012010-011 + P012034-035) \tab age.22.29 \tab male and female 22 to 29 yrs \cr (P012012-013 + P012036-037) \tab age.30.39 \tab male and female 30 to 39 yrs \cr (P012014-015 + P012038-039) \tab age.40.49 \tab male and female 40 to 49 yrs \cr (P012016-019 + P012040-043) \tab age.50.64 \tab male and female 50 to 64 yrs \cr (P012020-025 + P012044-049) \tab age.65.up \tab male and female 65 yrs and over \cr (P013001) \tab med.age \tab median age, both sexes \cr (P013002) \tab med.age.m \tab median age, males \cr (P013003) \tab med.age.f \tab median age, females \cr (P015001) \tab households \tab households \cr (P017001) \tab ave.hh.sz \tab average household size \cr (P018003) \tab hsehld.1.m \tab 1-person household, male householder \cr (P018004) \tab hsehld.1.f \tab 1-person household, female householder \cr (P018008) \tab marhh.chd \tab family households, married-couple family, w/ own children under 18 yrs \cr (P018009) \tab marhh.no.c \tab family households, married-couple family, no own children under 18 yrs \cr (P018012) \tab mhh.child \tab family households, other family, male householder, no wife present, w/ own children under 18 yrs \cr (P018015) \tab fhh.child \tab family households, other family, female householder, no husband present, w/ own children under 18 yrs \cr (H001001) \tab hh.units \tab housing units total \cr (H002002) \tab 
hh.urban \tab urban housing units \cr (H002005) \tab hh.rural \tab rural housing units \cr (H003002) \tab hh.occupied \tab occupied housing units \cr (H003003) \tab hh.vacant \tab vacant housing units \cr (H004002) \tab hh.owner \tab owner occupied housing units \cr (H004003) \tab hh.renter \tab renter occupied housing units \cr (H013002) \tab hh.1person \tab 1-person household \cr (H013003) \tab hh.2person \tab 2-person household \cr (H013004) \tab hh.3person \tab 3-person household \cr (H013005) \tab hh.4person \tab 4-person household \cr (H013006) \tab hh.5person \tab 5-person household \cr (H013007) \tab hh.6person \tab 6-person household \cr (H013008) \tab hh.7person \tab 7-person household \cr (H015I003)+(H015I011) \tab hh.nh.white.1p \tab (white only, not hispanic ) 1-person household \cr (H015I004)+(H015I012) \tab hh.nh.white.2p \tab (white only, not hispanic ) 2-person household \cr (H015I005)+(H015I013) \tab hh.nh.white.3p \tab (white only, not hispanic ) 3-person household \cr (H015I006)+(H015I014) \tab hh.nh.white.4p \tab (white only, not hispanic ) 4-person household \cr (H015I007)+(H015I015) \tab hh.nh.white.5p \tab (white only, not hispanic ) 5-person household \cr (H015I008)+(H015I016) \tab hh.nh.white.6p \tab (white only, not hispanic ) 6-person household \cr (H015I009)+(H015I017) \tab hh.nh.white.7p \tab (white only, not hispanic ) 7-person household \cr (H015H003)+(H015H011) \tab hh.hisp.1p \tab (hispanic) 1-person household \cr (H015H004)+(H015H012) \tab hh.hisp.2p \tab (hispanic) 2-person household \cr (H015H005)+(H015H013) \tab hh.hisp.3p \tab (hispanic) 3-person household \cr (H015H006)+(H015H014) \tab hh.hisp.4p \tab (hispanic) 4-person household \cr (H015H007)+(H015H015) \tab hh.hisp.5p \tab (hispanic) 5-person household \cr (H015H008)+(H015H016) \tab hh.hisp.6p \tab (hispanic) 6-person household \cr (H015H009)+(H015H017) \tab hh.hisp.7p \tab (hispanic) 7-person household \cr (H015B003)+(H015B011) \tab hh.black.1p \tab (black) 1-person 
household \cr (H015B004)+(H015B012) \tab hh.black.2p \tab (black) 2-person household \cr (H015B005)+(H015B013) \tab hh.black.3p \tab (black) 3-person household \cr (H015B006)+(H015B014) \tab hh.black.4p \tab (black) 4-person household \cr (H015B007)+(H015B015) \tab hh.black.5p \tab (black) 5-person household \cr (H015B008)+(H015B016) \tab hh.black.6p \tab (black) 6-person household \cr (H015B009)+(H015B017) \tab hh.black.7p \tab (black) 7-person household \cr (H015D003)+(H015D011) \tab hh.asian.1p \tab (asian) 1-person household \cr (H015D004)+(H015D012) \tab hh.asian.2p \tab (asian) 2-person household \cr (H015D005)+(H015D013) \tab hh.asian.3p \tab (asian) 3-person household \cr (H015D006)+(H015D014) \tab hh.asian.4p \tab (asian) 4-person household \cr (H015D007)+(H015D015) \tab hh.asian.5p \tab (asian) 5-person household \cr (H015D008)+(H015D016) \tab hh.asian.6p \tab (asian) 6-person household \cr (H015D009)+(H015D017) \tab hh.asian.7p \tab (asian) 7-person household \cr } } \source{ Census 2000 Summary File 1 [name of state1 or United States]/prepared by the U.S. Census Bureau, 2001. } \references{ \url{http://www.census.gov/ }\cr \url{http://www2.census.gov/cgi-bin/shapefiles/national-files} \cr \url{http://www.census.gov/prod/cen2000/doc/sf1.pdf} \cr } \examples{ data(oklahoma.blkgrp) ############################################ ## Helper function for handling coloring of the map ############################################ color.map<- function(x,dem,y=NULL){ l.poly<-length(x@polygons) dem.num<- cut(as.numeric(dem) ,breaks=ceiling(quantile(dem)),dig.lab = 6) dem.num[which(is.na(dem.num)==TRUE)]<-levels(dem.num)[1] l.uc<-length(table(dem.num)) if(is.null(y)){ ##commented out, but creates different color schemes ## using runif, may take a couple times to get a good color scheme. 
##col.heat<-rgb( runif(l.uc,0,1), runif(l.uc,0,1) , runif(l.uc,0,1) ) col.heat<-heat.colors(16)[c(14,8,4,1)] ##fixed set of four colors }else{ col.heat<-y } dem.col<-cbind(col.heat,names(table(dem.num))) colors.dem<-vector(length=l.poly) for(i in 1:l.uc){ colors.dem[which(dem.num==dem.col[i,2])]<-dem.col[i,1] } out<-list(colors=colors.dem,dem.cut=dem.col[,2],table.colors=dem.col[,1]) return(out) } ############################################ ## Helper function for handling coloring of the map ############################################ colors.use<-color.map(oklahoma.blkgrp,as.numeric(oklahoma.blkgrp@data$pop2000)) plot(oklahoma.blkgrp,col=colors.use$colors) #text(coordinates(oklahoma.blkgrp),oklahoma.blkgrp@data$name,cex=.3) title(main="Census Block Groups \n of Oklahoma, 2000", sub="Quantiles (equal frequency)") legend("bottomright",legend=colors.use$dem.cut,fill=colors.use$table.colors,bty="o",title="Population Count",bg="white") ############################### ### Alternative way to do the above ############################### \dontrun{ ####This example requires the following additional libraries library(RColorBrewer) library(classInt) library(maps) ####This example requires the following additional libraries data(oklahoma.blkgrp) map('state',region='oklahoma') plotvar <- as.numeric(oklahoma.blkgrp@data$pop2000) nclr <- 4 #BuPu plotclr <- brewer.pal(nclr,"BuPu") class <- classIntervals(plotvar, nclr, style="quantile") colcode <- findColours(class, plotclr) plot(oklahoma.blkgrp, col=colcode, border="transparent",add=TRUE) #transparent title(main="Census Block Groups \n of Oklahoma, 2000", sub="Quantiles (equal frequency)") map.text("county", "oklahoma",cex=.7,add=TRUE) map('county','oklahoma',add=TRUE) legend("bottomright","(x,y)", legend=names(attr(colcode, "table")),fill=attr(colcode, "palette"), cex=0.9, bty="o", title="Population Count",bg="white") } } \keyword{datasets}
path: /man/oklahoma.blkgrp.Rd
license_type: no_license
repo_name: cran/UScensus2000blkgrp
language: R
is_vendor: false
is_generated: false
length_bytes: 9,813
extension: rd
#' # Template program
#'
#' Description: This R-Oxygen template program provides an example layout and
#' organization for making a clean and easily followable html document
#' for your R program.
#'
#' Notes: Hashtags denote hierarchy. The number of hashtags corresponds with
#' document headings as follows (also an example of how to insert numbered
#' lists in R-Oxygen):
#'
#' 1. Title, only use on line 1 of program
#' 2. Main R program headers
#' 3. Sub-headers, I also use these for document preamble and footer
#' 4. Sub-sub-headers, and so forth
#'
#' You can also include LaTeX-formatted equations and symbols such as
#' $\alpha^2$ and $\beta=2x$, **bolded** text, and *italicized* text just as
#' in an R-Markdown document.
#'
#' ### Preamble
#'
#' Load libraries
#+ libraries, message = F, warning = F
library(knitr)    # documentation-related
library(ezknitr)  # documentation-related
library(devtools) # documentation-related
library(doBy)     # analysis-related

#' Clear environment and set seed
#'
#' *Note: for reproducibility, it is important to include these. Clearing the
#' environment ensures that you have specified all pertinent files that need
#' to be loaded, and setting the seed ensures that your analysis is
#' repeatable.*
remove(list = ls())
set.seed(2583722)

#' _____________________________________________________________________________
#' ## Load Data
#'
#' _____________________________________________________________________________
#' ## 1st Section of Code
#'
#' _____________________________________________________________________________
#' ## 2nd Section of Code, and so on
#'
#' _____________________________________________________________________________
#' ## Save files
#'
#' *Note: For reproducibility's sake, it's important to save the processed files
#' into the data folder as .csv or .Rdata files and then load them into the
#' subsequent R programs.*
#'
#' _____________________________________________________________________________
#' ### Footer
#'
#' Session Info
devtools::session_info()

#' This document was "spun" with:
#'
#' ezspin(file = "programs/z_template.R", out_dir = "output", fig_dir = "figures", keep_md = F)
#'
#' *You can run the ezspin() code above to create an html in the "output" folder
#' and any associated figures will get put into the "figures" folder.*
File: /programs/z_template.R | repo: ha0ye/template_project | language: R | license: permissive | 2,372 bytes
#
# This is a Shiny web application. You can run the application by clicking
# the 'Run App' button above.
#
# Find out more about building applications with Shiny here:
#
#    http://shiny.rstudio.com/
#

library(shiny)
library(plotly)   # plotlyOutput() / renderPlotly()
source("plotStartStops.R")

# Define the UI for the bike rental analysis application
ui <- fluidPage(

    # Application title
    titlePanel("Jersey City bike rental analysis"),

    # Sidebar with sliders selecting the hour range and the month
    sidebarLayout(
        sidebarPanel(
            sliderInput("hour",
                        "Choose hours:",
                        min = 0,
                        max = 23,
                        value = c(0, 23)),
            sliderInput("month",
                        "Choose month:",
                        min = 1,
                        max = 12,
                        value = 6)
        ),

        # Show each of the generated plots in its own tab
        mainPanel(
            tabsetPanel(type = "tabs",
                        tabPanel("Map", plotlyOutput("mapPlot", height = "500%")),
                        tabPanel("Top Stations", plotlyOutput("topPlot")),
                        tabPanel("Freeriders", plotlyOutput("freeriders")))
        )
    )
)

# Define the server logic that renders each plot
server <- function(input, output) {

    output$mapPlot <- renderPlotly({
        plotlyStart(input$hour[1], input$hour[2], input$month)
    })

    output$topPlot <- renderPlotly({
        plotlyTopStations(input$hour[1], input$hour[2], input$month)
    })

    output$freeriders <- renderPlotly({
        plotlyFreeriders(input$hour[1], input$hour[2], input$month)
    })
}

# Run the application
shinyApp(ui = ui, server = server)
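The app sources "plotStartStops.R", which is not shown here. Whatever that file contains, it must define plotlyStart(), plotlyTopStations(), and plotlyFreeriders(), each taking the slider's two hour endpoints plus the month, since that is how the server calls them. A hypothetical stub of that contract (the argument names are assumptions; the real implementations read the trip data and return plotly objects):

```r
# Hypothetical stand-ins for the three helpers defined in plotStartStops.R.
# Each receives the slider's hour-range endpoints plus the month, exactly
# as the server unpacks them from input$hour and input$month.
plotlyStart <- function(hour_from, hour_to, month) {
  sprintf("map: hours %d-%d, month %d", hour_from, hour_to, month)
}
plotlyTopStations <- function(hour_from, hour_to, month) {
  sprintf("top stations: hours %d-%d, month %d", hour_from, hour_to, month)
}
plotlyFreeriders <- function(hour_from, hour_to, month) {
  sprintf("freeriders: hours %d-%d, month %d", hour_from, hour_to, month)
}

# Default slider state from the UI: full day, June
hour <- c(0, 23); month <- 6
plotlyStart(hour[1], hour[2], month)
```

Note that "hour" is a range slider (value = c(0, 23)), so input$hour arrives as a length-2 vector, while "month" is a single value.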
File: /Project3/app.R | repo: Dragemil/learning-r-projects | language: R | license: none stated | 1,863 bytes
# Choropleth helper: plots `var` for the given year (`rok`) over the `pov`
# and `voi` layers, which must already exist in the workspace.
library(RColorBrewer)  # brewer.pal()
library(classInt)      # classIntervals(), findColours()
library(GISTools)      # quantileCuts()

map = function(data, year, var){
  variable <- data[data$rok == year, match(var, colnames(data))]
  intervals <- 9
  colors <- brewer.pal(intervals, "BuPu")
  classes <- classIntervals(variable, intervals, style = "fixed",
                            fixedBreaks = c(min(variable),
                                            round(as.numeric(quantileCuts(variable, 10)), 0),
                                            max(variable)))
  color.table <- findColours(classes, colors)
  plot(pov, col = color.table)
  plot(voi, lwd = 2, add = TRUE)
  legend("bottomleft", legend = names(attr(color.table, "table")),
         fill = attr(color.table, "palette"), cex = 0.8, bty = "n")
}
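The fixedBreaks vector above sandwiches interior quantile cut points between the variable's minimum and maximum. The same construction can be sketched with base quantile() standing in for quantileCuts() (assumed here to come from GISTools and to return the interior cut points):

```r
# Toy data standing in for one year's worth of the mapped variable
variable <- c(5, 12, 18, 25, 31, 44, 52, 67, 80, 95, 103, 120)

# Interior deciles between min and max, mirroring
# c(min(variable), round(quantileCuts(variable, 10), 0), max(variable))
breaks <- c(min(variable),
            round(as.numeric(quantile(variable, probs = seq(0.1, 0.9, by = 0.1))), 0),
            max(variable))

length(breaks)  # 11 break points delimit 10 classes
```

A nondecreasing break vector of length k defines k - 1 classes, which is what classIntervals() with style = "fixed" expects.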
File: /scripts/MAP_12052020.R | repo: strebuh/spatial_data_reviewer | language: R | license: none stated | 671 bytes
## These functions cache the inverse of a matrix so that repeated requests
## for the inverse return the stored value instead of recomputing it
## with solve().

## makeCacheMatrix: builds a list of getters/setters around a square matrix
## `x` and a cached inverse `m`, both stored in the closure via `<<-`.
makeCacheMatrix <- function(x = matrix()) {
  if (nrow(x) == ncol(x)) {
    m <- NULL
    set <- function(y) {
      x <<- y
      m <<- NULL   # invalidate the cache when the matrix changes
    }
    get <- function() x
    setsolve <- function(solve) m <<- solve
    getsolve <- function() m
    list(set = set, get = get,
         setsolve = setsolve,
         getsolve = getsolve)
  } else print("Input matrix should be square")
}

## cacheSolve: returns the inverse of the matrix held by `x`, using the
## cached value when available and computing (then caching) it otherwise.
cacheSolve <- function(x, ...) {
  m <- x$getsolve()
  if(!is.null(m)) {
    message("getting cached data")
    return(m)
  }
  data <- x$get()
  m <- solve(data, ...)
  x$setsolve(m)
  m
}

## Calculation example
a <- makeCacheMatrix(x = matrix(1:4, nrow = 2, ncol = 2))
cacheSolve(a)
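The caching above rests on the `<<-` operator, which assigns into the enclosing environment so the stored value outlives any single call. A minimal self-contained sketch of that mechanism on its own (the function name is illustrative, not from the original):

```r
# A closure holding one cached value, mirroring the m <- NULL /
# m <<- ... pattern used by makeCacheMatrix() and cacheSolve()
make_cache <- function() {
  cached <- NULL
  list(get = function() cached,
       set = function(v) cached <<- v)
}

cc <- make_cache()
is.null(cc$get())   # TRUE: nothing cached yet
cc$set(42)
cc$get()            # 42: the value persisted between calls
```

Plain `<-` inside set() would only create a local variable that vanishes when the call returns; `<<-` is what makes the cache stick.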
File: /cachematrix.R | repo: Alexlp1985/ProgrammingAssignment2 | language: R | license: none stated | 828 bytes
## SET WORKING DIRECTORY
setwd("~/Desktop") ### MODIFY THIS FILE PATH FOR YOUR COMPUTER
getwd()

## LOAD DATA FILE
# Load Exam Anxiety.dat. Call it "exam".
exam <- read.delim("Exam Anxiety.dat", header = TRUE)

# Examine structure
str(exam)

## VISUALIZE RELATIONSHIPS
# Create a scatterplot of Exam Anxiety and Exam Performance. Call it scatter.
# Include a line of best fit.
library(ggplot2)

# You want y to be the outcome and x to be the predictor
scatter <- ggplot(exam, aes(x = Exam, y = Anxiety)) +
  geom_point() +
  geom_smooth(method = lm, se = F) +
  labs(x = "Exam Scores", y = "Anxiety Levels") +
  labs(title = "Relationship between Exam Scores and Anxiety Levels") +
  theme(plot.title = element_text(hjust = 0.5))

## CALCULATE CORRELATION COEFFICIENTS
# Specified variables
cor(exam$Exam, exam$Anxiety, use = "complete.obs", method = "pearson")

# Correlation Matrix - Multiple Variables
# Pearson's correlation can't be done on non-numeric variables
cor(exam, use = "complete.obs", method = "pearson") ## WHY DOESN'T THIS WORK?
cor(exam[1:4], use = "complete.obs", method = "pearson")
# or
cor(exam[, names(exam) != "Gender"], use = "complete.obs", method = "pearson")
# or
cor(exam[, -5], use = "complete.obs", method = "pearson")

## CONDUCT HYPOTHESIS TEST ON CORRELATION COEFFICIENT
# TWO-TAILED if you don't know whether the correlation will be positive or negative
cor.test(exam$Exam, exam$Anxiety, method = "pearson")
# This is more conservative (we will use it in this case)

## LEFT-TAILED if you think the correlation will be negative
cor.test(exam$Exam, exam$Anxiety, alternative = "less", method = "pearson")
# One-tailed tests make it easier to reject the null hypothesis, i.e. they are less conservative

## RIGHT-TAILED if you think it will be positive
cor.test(exam$Exam, exam$Anxiety, alternative = "greater", method = "pearson")

# The output has a t-test. The p-value says that the chance of getting this value,
# if it really should have been zero, is very small.
# If the scatterplot clearly shows a negative relationship, you would do a
# directional (left-tailed) test.

# CONDUCT A REGRESSION ANALYSIS TO ASSESS WHETHER EXAM GRADES ARE A SIGNIFICANT
# PREDICTOR OF ANXIETY.
r1 <- lm(exam$Anxiety ~ exam$Exam)
# lm = linear model; the outcome (Anxiety) goes on the left of ~ and the predictor (Exam) on the right
summary(r1)

# Now we want to ask ourselves: is this a better predictor than just relying on the mean?
# y-hat = beta-0 + beta-1 * x
# beta-0 = 90.87
# beta-1 = -0.29
# If the null hypothesis is right and this is not a predictor, then beta-1 will be zero.
# It's -0.29; let's see if that's significantly far from zero. To answer that question,
# we do a t-test, which is included in the summary: if there really were no relationship,
# and -0.29 were really like zero, then the chance of obtaining this data would be less
# than 5% (we know this because we see Pr < 2e-16).
# This tells me that grades are a significant predictor of anxiety.

# To figure out whether the regression model is significantly better than the mean,
# there's an F-test (also included in the summary above):
# F(1, 101) = 24.38, p < 0.00003
# If this regression model were no better than the mean, the chance of getting
# 24.38 would be less than 5%. So the regression model IS better than the mean.

# Finally, r-squared = 0.1945, which indicates that the amount of variability in one
# variable that's accounted for by the other variable is 19.45%. You can explain
# 19% of those differences just by knowing your grade.

# NOW REPORT THE RESULTS (found in slides):
# - the predictor is significant (t-test)
# - amount of variability explained
# - how well the model fits the data (F-test)

## PRACTICE
# Load dataset called "Musical Profits.dat". Call it raw_data.
raw_data <- read.delim("Musical Profits.dat", header = TRUE)

## Inspect your data. What are the variables in this dataset? What type are they?
str(raw_data)
# The variables: adverts, sales, airplay, attract
# Types: num, int, int, int

## VISUALIZE RELATIONSHIPS
#install.packages("ggplot2")
library(ggplot2)

# Create a scatterplot showing the relationship between airplays and sales. Call it scatter_airplay.
# Give it a centered title that says "Relationship between Airplays and Album Sales". Label your axes correctly.
scatter_airplay <- ggplot(raw_data, aes(x = airplay, y = sales)) +
  geom_point() +
  geom_smooth(method = lm, se = F) +
  labs(x = "Airplays", y = "Album Sales") +
  labs(title = "Relationship between Airplays and Album Sales") +
  theme(plot.title = element_text(hjust = 0.5))
# Note: use airplay to predict sales (this should be given in the question)

# Create a scatterplot showing the relationship between attractiveness ratings and advertisements. Call it scatter_adverts.
# Give it a centered title that says "Relationship between Attractiveness Ratings and Advertisements". Label your axes correctly.
scatter_adverts <- ggplot(raw_data, aes(x = adverts, y = attract)) +
  geom_point() +
  geom_smooth(method = lm, se = F) +
  labs(x = "Advertisements", y = "Attractiveness Rating") +
  labs(title = "Relationship between Attractiveness Ratings and Advertisements") +
  theme(plot.title = element_text(hjust = 0.5))
# Note: use adverts as the predictor for attract

## CALCULATE CORRELATION COEFFICIENTS
# Get the full Pearson's correlation matrix.
# Save this to an object called sales_pearsons.
sales_pearsons <- cor(raw_data, use = "complete.obs", method = "pearson")

## CONDUCT TWO-TAILED CORRELATION AND REGRESSION HYPOTHESIS TESTS FOR:
# 1. Between Airplay and Sales
cor.test(raw_data$airplay, raw_data$sales, method = "pearson")
r2 <- lm(raw_data$sales ~ raw_data$airplay)  # sales is the outcome, airplay the predictor
summary(r2)

# 2. Between Attractiveness Ratings and Adverts
cor.test(raw_data$attract, raw_data$adverts, method = "pearson")
r3 <- lm(raw_data$attract ~ raw_data$adverts)
summary(r3)

# State your hypotheses and interpret the results in the context of the hypotheses.
# 1. H0: The true correlation is zero. There is no relationship between the two variables. rho = 0
#    H1: The true correlation is not zero. There is a relationship between the two variables. rho ≠ 0
# NOTE: do the correlation, report everything, then do the regression and report:
#    There is a significant relationship between airplays and sales,
#    r = 0.60, p < .05, two-tailed. Number of album sales goes up as number of airplays increases.
#    Airplays account for 35.87% of variance in album sales.
#    Airplays significantly predict one's album sales, t(198) = 10.52, p < .001, two-tailed.
#    The regression model fits the data well overall, F(1,198) = 110.7, p < .001.
# 2. H0: The true correlation is zero. There is no relationship between the two variables. rho = 0
#    H1: The true correlation is not zero. There is a relationship between the two variables. rho ≠ 0
#    There is no significant relationship between attractiveness ratings and advertisements,
#    r = 0.08, p > .05, two-tailed. Number of advertisements does not go up as attractiveness increases.
#    Advertisements account for 0.65% of variance in attractiveness ratings.
#    Advertisements do not significantly predict one's attractiveness ratings, t(198) = 1.14, p > .05, two-tailed.
#    The regression model does not fit the data better than the mean overall, F(1,198) = 1.3, p > .05.
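In simple regression the R² reported above is just the squared Pearson correlation (0.60² ≈ 0.3587, matching the 35.87% figure for airplay and sales). A quick self-contained check of that identity on simulated data:

```r
# Verify that R-squared from lm() equals the squared Pearson correlation
# for a one-predictor model (simulated data; any x/y pair works)
set.seed(123)
x <- rnorm(100)
y <- 2 * x + rnorm(100)

r  <- cor(x, y)
r2 <- summary(lm(y ~ x))$r.squared

all.equal(r^2, r2)  # TRUE
```

This is why the lesson can move between the correlation output and the regression summary: for one predictor (with an intercept), the two analyses report the same strength of association.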
File: /Correlations_and_Regression.R | repo: LynnWahab/Correlations-and-Regressions-in-R | language: R | license: none stated | 7,212 bytes
\name{roast}
\alias{roast}
\alias{roast.default}
\alias{mroast}
\alias{mroast.default}
\alias{Roast-class}
\alias{show,Roast-method}
\alias{fry}
\alias{fry.default}
\title{Rotation Gene Set Tests}
\description{
Rotation gene set testing for linear models.
}
\usage{
\S3method{roast}{default}(y, index = NULL, design = NULL, contrast = ncol(design),
      set.statistic = "mean", gene.weights = NULL, array.weights = NULL,
      weights = NULL, block = NULL, correlation, var.prior = NULL,
      df.prior = NULL, trend.var = FALSE, nrot = 999, approx.zscore = TRUE, \dots)
\S3method{mroast}{default}(y, index = NULL, design = NULL, contrast = ncol(design),
      set.statistic = "mean", gene.weights = NULL, array.weights = NULL,
      weights = NULL, block = NULL, correlation, var.prior = NULL,
      df.prior = NULL, trend.var = FALSE, nrot = 999, approx.zscore = TRUE,
      adjust.method = "BH", midp = TRUE, sort = "directional", \dots)
\S3method{fry}{default}(y, index = NULL, design = NULL, contrast = ncol(design),
      sort = "directional", geneid = NULL, standardize = "posterior.sd", \dots)
}
\arguments{
  \item{y}{numeric matrix giving log-expression or log-ratio values for a series of microarrays, or any object that can be coerced to a matrix including \code{ExpressionSet}, \code{MAList}, \code{EList} or \code{PLMSet} objects.
  Rows correspond to probes and columns to samples.
  If either \code{var.prior} or \code{df.prior} are null, then \code{y} should contain values for all genes on the arrays.
  If both prior parameters are given, then only \code{y} values for the test set are required.}
  \item{index}{index vector specifying which rows (probes) of \code{y} are in the test set.
  This can be a vector of indices, or a logical vector of the same length as the number of rows of \code{y}, or any vector such that \code{y[index,]} contains the values for the gene set to be tested.
  For \code{mroast}, \code{index} is a list of index vectors.
The list can be made using \link{ids2indices}.} \item{design}{design matrix} \item{contrast}{contrast for which the test is required. Can be an integer specifying a column of \code{design}, or the name of a column of \code{design}, or a numeric contrast vector of length equal to the number of columns of \code{design}.} \item{set.statistic}{summary set statistic. Possibilities are \code{"mean"},\code{"floormean"},\code{"mean50"} or \code{"msq"}.} \item{gene.weights}{optional numeric vector of weights for genes in the set. Can be positive or negative. For \code{mroast} this vector must have length equal to \code{nrow(y)}. For \code{roast}, can be of length \code{nrow(y)} or of length equal to the number of genes in the test set.} \item{array.weights}{optional numeric vector of array weights.} \item{weights}{optional matrix of observation weights. If supplied, should be of same dimensions as \code{y} and all values should be positive. If \code{y} is an \code{EList} or \code{MAList} object containing weights, then those weights will be used.} \item{block}{optional vector of blocks.} \item{correlation}{correlation between blocks.} \item{var.prior}{prior value for residual variances. If not provided, this is estimated from all the data using \code{squeezeVar}.} \item{df.prior}{prior degrees of freedom for residual variances. If not provided, this is estimated using \code{squeezeVar}.} \item{trend.var}{logical, should a trend be estimated for \code{var.prior}? See \code{eBayes} for details. Only used if \code{var.prior} or \code{df.prior} are \code{NULL}.} \item{nrot}{number of rotations used to estimate the p-values.} \item{adjust.method}{method used to adjust the p-values for multiple testing. 
  See \code{\link{p.adjust}} for possible values.}
  \item{midp}{logical, should mid-p-values be used instead of ordinary p-values when adjusting for multiple testing?}
  \item{sort}{character, whether to sort output table by directional p-value (\code{"directional"}), non-directional p-value (\code{"mixed"}), or not at all (\code{"none"}).}
  \item{approx.zscore}{logical, if \code{TRUE} then a fast approximation is used to convert t-statistics into z-scores prior to computing set statistics.
  If \code{FALSE}, z-scores will be exact.}
  \item{geneid}{gene identifiers.
  Either a vector of length \code{nrow(y)} or the name of the column of \code{y$genes} containing the gene identifiers.}
  \item{standardize}{how to standardize the residual effects.
  Residuals can be standardized by \code{"residual.sd"}, by empirical Bayes \code{"posterior.sd"} or not at all (\code{"none"}).}
  \item{\dots}{other arguments not currently used.}
}
\value{
\code{roast} produces an object of class \code{"Roast"}.
This consists of a list with the following components:
  \item{p.value}{data.frame with columns \code{Active.Prop} and \code{P.Value}, giving the proportion of genes in the set contributing materially to significance and estimated p-values, respectively.
Rows correspond to the alternative hypotheses Down, Up, UpOrDown (two-sided) and Mixed.} \item{var.prior}{prior value for residual variances.} \item{df.prior}{prior degrees of freedom for residual variances.} \code{mroast} produces a data.frame with a row for each set and the following columns: \item{NGenes}{number of genes in set} \item{PropDown}{proportion of genes in set with \code{z < -sqrt(2)}} \item{PropUp}{proportion of genes in set with \code{z > sqrt(2)}} \item{Direction}{direction of change, \code{"Up"} or \code{"Down"}} \item{PValue}{two-sided directional p-value} \item{FDR}{two-sided directional false discovery rate} \item{PValue.Mixed}{non-directional p-value} \item{FDR.Mixed}{non-directional false discovery rate} \code{fry} produces the same output format as \code{mroast} but without the columns \code{PropDown} and \code{PropUp}. } \details{ These functions implement the ROAST gene set tests proposed by Wu et al (2010). They perform \emph{self-contained} gene set tests in the sense defined by Goeman and Buhlmann (2007). For \emph{competitive} gene set tests, see \code{\link{camera}}. For a gene set enrichment analysis style analysis using a database of gene sets, see \code{\link{romer}}. \code{roast} and \code{mroast} test whether any of the genes in the set are differentially expressed. They can be used for any microarray experiment which can be represented by a linear model. The design matrix for the experiment is specified as for the \code{\link{lmFit}} function, and the contrast of interest is specified as for the \code{\link{contrasts.fit}} function. This allows users to focus on differential expression for any coefficient or contrast in a linear model. If \code{contrast} is not specified, then the last coefficient in the linear model will be tested. The argument \code{gene.weights} allows directional weights to be set for individual genes in the set. 
This is often useful, because it allows each gene to be flagged as to its direction and magnitude of change based on prior experimentation. A typical use is to make the \code{gene.weights} \code{1} or \code{-1} depending on whether the gene is up or down-regulated in the pathway under consideration. The arguments \code{array.weights}, \code{block} and \code{correlation} have the same meaning as for the \code{\link{lmFit}} function. The arguments \code{df.prior} and \code{var.prior} have the same meaning as in the output of the \code{\link{eBayes}} function. If these arguments are not supplied, they are estimated exactly as is done by \code{eBayes}. The gene set statistics \code{"mean"}, \code{"floormean"}, \code{"mean50"} and \code{"msq"} are defined by Wu et al (2010). The different gene set statistics have different sensitivities when only a small proportion of the genes in the set are differentially expressed. If \code{set.statistic="mean"} then the set will be statistically significant only when the majority of the genes are differentially expressed. \code{"floormean"} and \code{"mean50"} will detect sets in which as few as 25\% of the genes are differentially expressed. \code{"msq"} is sensitive to even smaller proportions of differentially expressed genes, if the effects are reasonably large. The output gives p-values for three possible alternative hypotheses, \code{"Up"} to test whether the genes in the set tend to be up-regulated, with positive t-statistics, \code{"Down"} to test whether the genes in the set tend to be down-regulated, with negative t-statistics, and \code{"Mixed"} to test whether the genes in the set tend to be differentially expressed, without regard for direction. \code{roast} estimates p-values by simulation, specifically by random rotations of the orthogonalized residuals (Langsrud, 2005), so p-values will vary slightly from run to run. To get more precise p-values, increase the number of rotations \code{nrot}. 
The p-value is computed as \code{(b+1)/(nrot+1)} where \code{b} is the number of rotations giving a more extreme statistic than that observed (Phipson and Smyth, 2010). This means that the smallest possible p-value is \code{1/(nrot+1)}. \code{mroast} does roast tests for multiple sets, including adjustment for multiple testing. By default, \code{mroast} reports ordinary p-values but uses mid-p-values (Routledge, 1994) at the multiple testing stage. Mid-p-values are probably a good choice when using false discovery rates (\code{adjust.method="BH"}) but not when controlling the family-wise type I error rate (\code{adjust.method="holm"}). \code{fry} is a fast version of \code{mroast} in the special case that \code{df.prior} is large and \code{set.statistic="mean"}. In this situation, it gives the same result as \code{mroast} with an infinite number of rotations. } \note{ The default setting for the set statistic was changed in limma 3.5.9 (3 June 2010) from \code{"msq"} to \code{"mean"}. } \seealso{ See \link{10.GeneSetTests} for a description of other functions used for gene set testing. } \author{Gordon Smyth and Di Wu} \references{ Goeman, JJ, and Buhlmann, P (2007). Analyzing gene expression data in terms of gene sets: methodological issues. \emph{Bioinformatics} 23, 980-987. Langsrud, O (2005). Rotation tests. \emph{Statistics and Computing} 15, 53-60. Phipson, B, and Smyth, GK (2010). Permutation P-values should never be zero: calculating exact P-values when permutations are randomly drawn. \emph{Statistical Applications in Genetics and Molecular Biology}, Volume 9, Article 39. \url{http://www.statsci.org/smyth/pubs/PermPValuesPreprint.pdf} Routledge, RD (1994). Practicing safe statistics with the mid-p. \emph{Canadian Journal of Statistics} 22, 103-110. Wu, D, Lim, E, Vaillant, F, Asselin-Labat, M-L, Visvader, JE, and Smyth, GK (2010). ROAST: rotation gene set tests for complex microarray experiments. \emph{Bioinformatics} 26, 2176-2182. 
\url{http://bioinformatics.oxfordjournals.org/content/26/17/2176} } \examples{ y <- matrix(rnorm(100*4),100,4) design <- cbind(Intercept=1,Group=c(0,0,1,1)) # First set of 5 genes are all genuinely differentially expressed index1 <- 1:5 y[index1,3:4] <- y[index1,3:4]+3 # Second set of 5 genes contains none that are DE index2 <- 6:10 roast(y,index1,design,contrast=2) mroast(y,index1,design,contrast=2) mroast(y,list(set1=index1,set2=index2),design,contrast=2) } \keyword{gene set test} \concept{gene set test} \concept{gene set enrichment analysis}
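The p-value rule described in the details section is simple enough to sketch directly. The helper below is purely illustrative (`rotation_pvalue` is a hypothetical name, not a limma function): given `b` rotations at least as extreme as the observed set statistic out of `nrot` rotations, the estimated p-value is `(b+1)/(nrot+1)` (Phipson and Smyth, 2010).

```r
# Illustrative only: rotation_pvalue() is a hypothetical helper, not part
# of limma. It applies the (b+1)/(nrot+1) rule used by roast().
rotation_pvalue <- function(b, nrot) (b + 1) / (nrot + 1)

rotation_pvalue(49, 999)  # 49 of 999 rotations more extreme -> p = 0.05
rotation_pvalue(0, 999)   # smallest possible p-value with nrot = 999: 0.001
```

This also shows why increasing `nrot` gives finer p-value resolution: with `nrot = 9999` the floor drops to 1e-04.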
/man/roast.Rd
no_license
warnes/limma
R
false
false
11,502
rd
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/misc.R \name{obpg1_to_obpg2} \alias{obpg1_to_obpg2} \title{Convert OBPG old-style filenames to OBPG new-style filenames} \usage{ obpg1_to_obpg2( x = c("A2004001.L3m_DAY_CHL_chlor_a_4km.nc", "A2004001.L3m_DAY_SST_sst_4km.nc", "AQUA_MODIS.20191130.L3m.DAY.NSST.sst.4km.NRT.nc", "NPP_VIIRS.20190703.L3m.DAY.SST.sst.4km.nc", "Ocean Hack Week") ) } \arguments{ \item{x}{character, one or more obpg style filenames, directory paths and extensions are dropped} } \value{ character vector of obpg2 style filenames without path and extension } \description{ A2004001.L3m_DAY_CHL_chlor_a_4km becomes AQUA_MODIS.20040101.L3m.DAY.CHL.chlor_a.4km }
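The date portion of the renaming shown in the description (day-of-year to YYYYMMDD) can be sketched in base R. `obpg1_date` below is a hypothetical helper, not the package function: it handles only the leading `A<year><day-of-year>` MODIS-Aqua pattern, whereas `obpg1_to_obpg2` also rewrites the sensor prefix and the remaining fields.

```r
# Hypothetical sketch (not part of ohwobpg): convert the "A2004001" style
# date prefix of an old OBPG filename to the new "20040101" style.
obpg1_date <- function(x) {
  year <- substr(x, 2, 5)                 # "A2004001..." -> "2004"
  doy  <- as.integer(substr(x, 6, 8))     # day of year   -> 1
  # day-of-year 1 is Jan 1, so offset from Jan 1 is doy - 1
  format(as.Date(doy - 1, origin = paste0(year, "-01-01")), "%Y%m%d")
}

obpg1_date("A2004001.L3m_DAY_CHL_chlor_a_4km")  # "20040101"
```

Leap years are handled automatically because the arithmetic goes through `as.Date`.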
/man/obpg1_to_obpg2.Rd
permissive
BigelowLab/ohwobpg
R
false
true
724
rd
########################################################## #### FIG.1: maps of predictions in summer & winter ####### ########################################################## library(dplyr) library(here) library(raster) library(viridis) library(maps) library(fields) library(ggplot2) library(gridExtra) library(png) library(cowplot) ######################## # import png images ######################## setwd("/Users/philippinechambault/Documents/POST-DOC/2021/PAPER3") png_bel <- readPNG('./FIGURES/Illustrations/Beluga_background.png') png_bw <- readPNG('./FIGURES/Illustrations/bowhead_transaprent.png') png_nar <- readPNG('./FIGURES/Illustrations/narwhal2-removebg-preview.png') ################################################## # import background map with adequate projection ################################################## north_map = map_data("world") %>% group_by(group) shore = north_map[north_map$region=="Canada" | north_map$region=="Greenland" | north_map$region=="Norway" | north_map$region=="Russia" | north_map$region=="Sweden" | north_map$region=="Denmark" | north_map$region=="Finland" | north_map$region=="Iceland",] ############################################# ########## SUMMER ############ ############################################# ######################################## # load rasters in summer from best algo ######################################## #----------------------------------- # current predictions from MEAN #----------------------------------- bel <- readRDS("./RDATA/5.CARET/Mean_pred_3GCM/Summer/PredMean_3GCMs_ffs_6var_Summer_Bel_West.rds") bwW <- readRDS("./RDATA/5.CARET/Mean_pred_3GCM/Summer/PredMean_3GCMs_ffs_6var_Summer_Bw_West.rds") bwE <- readRDS("./RDATA/5.CARET/Mean_pred_3GCM/Summer/PredMean_3GCMs_ffs_6var_Summer_Bw_East.rds") narW <- readRDS("./RDATA/5.CARET/Mean_pred_3GCM/Summer/PredMean_3GCMs_ffs_6var_Summer_Nar_West.rds") narE <- readRDS("./RDATA/5.CARET/Mean_pred_3GCM/Summer/PredMean_3GCMs_ffs_6var_Summer_Nar_east.rds") 
#-------------------------------------- # predictions in summer 2100 ssp126 #-------------------------------------- bel_2100_126 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bel_West_ssp126.rds") bwW_2100_126 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bw_West_ssp126.rds") bwE_2100_126 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bw_East_ssp126.rds") narW_2100_126 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Nar_West_ssp126.rds") narE_2100_126 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Nar_East_ssp126.rds") #-------------------------------------- # predictions in summer 2100 ssp585 #-------------------------------------- bel_2100_585 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bel_West_ssp585.rds") bwW_2100_585 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bw_West_ssp585.rds") bwE_2100_585 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bw_East_ssp585.rds") narW_2100_585 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Nar_West_ssp585.rds") narE_2100_585 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Nar_East_ssp585.rds") ############################################### # convert rasters to tibble ############################################### #------------------- # belugas in summer #------------------- bel <- as_tibble(rasterToPoints(bel)) bel$ssp = "Present" bel2100_126 <- as_tibble(rasterToPoints(bel_2100_126)) bel2100_126$ssp = "2100 (ssp126)" # bel2100_126$year = "2100" bel2100_585 <- as_tibble(rasterToPoints(bel_2100_585)) 
bel2100_585$ssp = "2100 (ssp585)" # bel2100_585$year = "2100" west_bel = rbind(bel, bel2100_126, bel2100_585) west_bel$ssp = factor(west_bel$ssp, levels=c("Present","2100 (ssp126)", "2100 (ssp585)")) west_bel$side = "West" west_bel$sp = "Belugas" west_bel$sp = factor(west_bel$sp) west_bel$side = factor(west_bel$side) #-------------------------------- # bowheads in summer #-------------------------------- bwW <- as_tibble(rasterToPoints(bwW)) bwW$ssp = "Present" bwW2100_126 <- as_tibble(rasterToPoints(bwW_2100_126)) bwW2100_126$ssp = "2100 (ssp126)" bwW2100_585 <- as_tibble(rasterToPoints(bwW_2100_585)) bwW2100_585$ssp = "2100 (ssp585)" west_bw = rbind(bwW, bwW2100_126, bwW2100_585) west_bw west_bw$ssp = factor(west_bw$ssp, levels=c("Present","2100 (ssp126)", "2100 (ssp585)")) west_bw$side = "West" west_bw$sp = "Bowheads" west_bw$sp = factor(west_bw$sp) west_bw$side = factor(west_bw$side) bwE <- as_tibble(rasterToPoints(bwE)) bwE$ssp = "Present" bwE2100_126 <- as_tibble(rasterToPoints(bwE_2100_126)) bwE2100_126$ssp = "2100 (ssp126)" bwE2100_585 <- as_tibble(rasterToPoints(bwE_2100_585)) bwE2100_585$ssp = "2100 (ssp585)" east_bw = rbind(bwE, bwE2100_126, bwE2100_585) east_bw east_bw$ssp = factor(east_bw$ssp, levels=c("Present","2100 (ssp126)", "2100 (ssp585)")) east_bw$side = "east" east_bw$sp = "Bowheads" east_bw$sp = factor(east_bw$sp) east_bw$side = factor(east_bw$side) #-------------------------------- # narwhals in summer #-------------------------------- narW <- as_tibble(rasterToPoints(narW)) narW$ssp = "Present" narW2100_126 <- as_tibble(rasterToPoints(narW_2100_126)) narW2100_126$ssp = "2100 (ssp126)" narW2100_585 <- as_tibble(rasterToPoints(narW_2100_585)) narW2100_585$ssp = "2100 (ssp585)" west_nar = rbind(narW, narW2100_126, narW2100_585) west_nar west_nar$ssp = factor(west_nar$ssp, levels=c("Present","2100 (ssp126)", "2100 (ssp585)")) west_nar$side = "West" west_nar$sp = "Narwhals" west_nar$sp = factor(west_nar$sp) west_nar$side = factor(west_nar$side) 
narE <- as_tibble(rasterToPoints(narE)) narE$ssp = "Present" narE2100_126 <- as_tibble(rasterToPoints(narE_2100_126)) narE2100_126$ssp = "2100 (ssp126)" narE2100_585 <- as_tibble(rasterToPoints(narE_2100_585)) narE2100_585$ssp = "2100 (ssp585)" east_nar = rbind(narE, narE2100_126, narE2100_585) east_nar east_nar$ssp = factor(east_nar$ssp, levels=c("Present","2100 (ssp126)", "2100 (ssp585)")) east_nar$side = "east" east_nar$sp = "Narwhals" east_nar$sp = factor(east_nar$sp) east_nar$side = factor(east_nar$side) # combine 3 species per side #---------------------------- west = rbind(west_bel, west_bw, west_nar) east = rbind(east_bw, east_nar) # plot belugas #------------- bel = ggplot(shore, aes(long, lat)) + geom_contour_filled(data=west[west$sp=="Belugas",], aes(x,y,z=layer), breaks=seq(0,1,by=0.2)) + scale_fill_brewer(palette="BuPu") + coord_map("azequidistant", xlim=c(-103,-20), ylim=c(60,80)) + scale_y_continuous(breaks=c(60,70), labels=c(60,70)) + geom_polygon(aes(group=group), fill="lightgrey",lwd=1) + facet_grid(sp~ssp, switch='y') + annotate("text", x=-18.5,y=62, label="IS", colour="black",size=2) + annotate(geom="text", x=-42, y=72, label="Greenland", angle=65, size=2,colour="black") + annotate(geom="text", x=-110, y=65, label="Canada", angle=-50, size=2,colour="black") + annotate("text", x=-70,y=67, label="BA", colour="black",size=2, angle=45) + annotate("text", x=33, y=74, label="SVB", colour="black",size=2) + labs(y="",x="",fill="Habitat \nsuitability",title="SUMMER") + theme(axis.text.y = element_blank(), axis.text.x = element_blank(), axis.ticks = element_blank(), strip.text.x = element_text(margin = margin(0,0,0,0, "cm")), panel.spacing = unit(0.1, "lines"), strip.background = element_rect(fill="steelblue4",size=0.1), strip.text = element_text(colour='white',size=10,face="bold", margin=margin(0.1,0.1,0.1,0.1,"cm")), axis.title = element_text(size=12,face="bold", hjust=0.5), panel.border = element_rect(colour="black",fill=NA,size=0.2), legend.key.size = 
unit(0.5,"line"), legend.key.width = unit(0.14,"cm"), legend.key.height = unit(0.3,"cm"), legend.title = element_text(size=7), legend.text = element_text(size=7), legend.box.spacing = unit(0.1,'cm'), legend.margin=margin(t=-0.0, unit='cm'), legend.key = element_blank(), panel.spacing.x = unit(0.07, "lines"), panel.spacing.y = unit(0.07, "lines"), title = element_text(colour="black",size=13,face="bold"), plot.title=element_text(size=14, vjust=-0.5, hjust=0.5, colour="steelblue4"), panel.background = element_blank(), panel.grid.major = element_line(colour="gray52", size=0.1), plot.margin = unit(c(-0.5,0.1,-1.3,-0.4),"cm")) #top,right,bottom,left # plot bowheads #--------------- bw = ggplot(shore, aes(long, lat)) + geom_contour_filled(data=west[west$sp=="Bowheads",], aes(x,y,z=layer), breaks=seq(0,1,by=0.2)) + geom_contour_filled(data=east[east$sp=="Bowheads",], aes(x,y,z=layer), breaks=seq(0,1,by=0.2)) + scale_fill_brewer(palette="BuPu") + coord_map("azequidistant", xlim=c(-105,75), ylim=c(60,80)) + scale_y_continuous(breaks=c(60,70), labels=c(60,70)) + geom_polygon(aes(group=group), fill="lightgrey",lwd=1) + facet_grid(sp~ssp, switch='y') + annotate("text", x=-18.5,y=62, label="IS", colour="black",size=2) + annotate(geom="text", x=-42, y=72, label="Greenland", angle=45, size=2,colour="black") + # annotate(geom="text", x=-106, y=61, label="Canada", angle=0, size=2,colour="black") + annotate(geom="text", x=65, y=63, label="Russia", angle=-50, size=2,colour="black") + annotate("text", x=-70,y=67, label="BA", colour="black",size=2, angle=45) + annotate("text", x=33, y=74, label="SVB", colour="black",size=2) + labs(y="",x="",fill="Habitat \nsuitability",title="") + theme(axis.text.y = element_blank(), axis.text.x = element_blank(), axis.ticks = element_blank(), strip.background.x = element_blank(),#element_rect(fill="gray52", size=0.1), strip.background.y = element_rect(fill="steelblue4", size=0.1), strip.text.x = element_blank(), strip.text.y = 
element_text(colour='white',size=10,face="bold", margin=margin(0.1,0.1,0.1,0.1,"cm")), axis.title = element_text(size=12,face="bold", hjust=0.5), panel.border = element_rect(colour="black",fill=NA,size=0.2), legend.key.size = unit(0.5,"line"), legend.key.width = unit(0.14,"cm"), legend.key.height = unit(0.3,"cm"), legend.title = element_text(size=7), legend.text = element_text(size=7), legend.box.spacing = unit(0.1,'cm'), legend.margin=margin(t=-0.0, unit='cm'), legend.key = element_blank(), panel.spacing.x = unit(0.07, "lines"), panel.spacing.y = unit(0.07, "lines"), title = element_text(colour="black",size=12,face="bold"), plot.title=element_text(size=14, vjust=-0.5, hjust=0), panel.background = element_blank(), panel.grid.major = element_line(colour="gray52", size=0.1), plot.margin = unit(c(-1.3,0.1,-1.5,-0.4),"cm")) #top,right,bottom,left # plot narwhals #--------------- nar = ggplot(shore, aes(long, lat)) + geom_contour_filled(data=west[west$sp=="Narwhals",], aes(x,y,z=layer), breaks=seq(0,1,by=0.2)) + geom_contour_filled(data=east[east$sp=="Narwhals",], aes(x,y,z=layer), breaks=seq(0,1,by=0.2)) + scale_fill_brewer(palette="BuPu") + coord_map("azequidistant", xlim=c(-103,-20), ylim=c(60,80)) + scale_y_continuous(breaks=c(60,70), labels=c(60,70)) + geom_polygon(aes(group=group), fill="lightgrey",lwd=1) + facet_grid(sp~ssp, switch='y') + annotate("text", x=-18.5,y=62, label="IS", colour="black",size=2) + annotate(geom="text", x=-42, y=72, label="Greenland", angle=65, size=2,colour="black") + annotate(geom="text", x=-110, y=65, label="Canada", angle=-50, size=2,colour="black") + annotate("text", x=-70,y=67, label="BA", colour="black",size=2, angle=45) + labs(y="",x="",fill="Habitat \nsuitability",title="") + theme(axis.text.y = element_blank(), axis.text.x = element_blank(), axis.ticks = element_blank(), strip.background.x = element_blank(), strip.background.y = element_rect(fill="steelblue4", size=0.1), strip.text.x = element_blank(), strip.text.y = 
element_text(colour='white',size=10,face="bold", margin=margin(0.1,0.1,0.1,0.1,"cm")), axis.title = element_text(size=12,face="bold", hjust=0.5), panel.border = element_rect(colour="black",fill=NA,size=0.2), legend.key.size = unit(0.5,"line"), legend.key.width = unit(0.14,"cm"), legend.key.height = unit(0.3,"cm"), legend.title = element_text(size=7), legend.text = element_text(size=7), legend.box.spacing = unit(0.1,'cm'), legend.margin=margin(t=-0.0, unit='cm'), legend.key = element_blank(), panel.spacing.x = unit(0.07, "lines"), panel.spacing.y = unit(0.07, "lines"), title = element_text(colour="black",size=14,face="bold"), plot.title=element_text(size=14, vjust=-0.5, hjust=0), panel.background = element_blank(), panel.grid.major = element_line(colour="gray52", size=0.1), plot.margin = unit(c(-1.7,0.1,-1,-0.4),"cm")) #top,right,bottom,left a1 = grid.arrange(bel, bw, nar, ncol=1) a2 = ggdraw() + draw_plot(a1) + draw_image(png_bel, x=-0.240, y=0.17, scale=.19) + draw_image(png_bel, x=0.040, y=0.17, scale=.19) + draw_image(png_bel, x=0.325, y=0.17, scale=.19) + draw_image(png_bw, x=-0.24, y=-0.12, scale=.14) + draw_image(png_bw, x=0.04, y=-0.12, scale=.14) + draw_image(png_bw, x=0.33, y=-0.12, scale=.14) + draw_image(png_nar, x=-0.25, y=-0.38, scale=.18) + draw_image(png_nar, x=0.04, y=-0.38, scale=.18) + draw_image(png_nar, x=0.32, y=-0.38, scale=.18) setwd("/Users/philippinechambault/Documents/POST-DOC/2021/PAPER3") ggsave(filename="./PAPER/Science/2.Resubmission_March2022/Figures/Fig.1.Summer.pdf", width=6,height=3.5,units="in",dpi=400,family="Helvetica",a2)
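The convert-tag-rbind-factor pattern repeated for each species/side above could be wrapped in one helper. `stack_preds` is a hypothetical refactor sketch (not in the original script); it takes the three tibbles already produced by `rasterToPoints` for one species and side:

```r
# Sketch of a helper factoring out the repeated pattern above; the three
# inputs are data frames with x, y and layer columns, one per scenario.
stack_preds <- function(present, ssp126, ssp585, side, sp) {
  present$ssp <- "Present"
  ssp126$ssp  <- "2100 (ssp126)"
  ssp585$ssp  <- "2100 (ssp585)"
  out <- rbind(present, ssp126, ssp585)
  out$ssp  <- factor(out$ssp,
                     levels = c("Present", "2100 (ssp126)", "2100 (ssp585)"))
  out$side <- factor(side)
  out$sp   <- factor(sp)
  out
}
```

With this helper, e.g. `west_bel` would become `stack_preds(bel, bel2100_126, bel2100_585, "West", "Belugas")`, removing the copy-pasted blocks for each species.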
/R-Scripts/7.MS_Figures/7.1.Fig.1.Predictions.Maps.Summer.R
no_license
pchambault/Arctic-whales-predicted-habitat
R
false
false
15,080
r
##########################################################
#### FIG.1: maps of predictions in summer & winter #######
##########################################################
library(dplyr)
library(here)
library(raster)
library(viridis)
library(maps)
library(fields)
library(ggplot2)
library(gridExtra)
library(png)
library(cowplot)

########################
# import png images
########################
setwd("/Users/philippinechambault/Documents/POST-DOC/2021/PAPER3")
png_bel <- readPNG('./FIGURES/Illustrations/Beluga_background.png')
png_bw  <- readPNG('./FIGURES/Illustrations/bowhead_transaprent.png')
png_nar <- readPNG('./FIGURES/Illustrations/narwhal2-removebg-preview.png')

##################################################
# import background map with adequate projection
##################################################
north_map = map_data("world") %>% group_by(group)
shore = north_map[north_map$region=="Canada"
                  | north_map$region=="Greenland"
                  | north_map$region=="Norway"
                  | north_map$region=="Russia"
                  | north_map$region=="Sweden"
                  | north_map$region=="Denmark"
                  | north_map$region=="Finland"
                  | north_map$region=="Iceland",]

#############################################
##########        SUMMER        ############
#############################################

########################################
# load rasters in summer from best algo
########################################

#-----------------------------------
# current predictions from MEAN
#-----------------------------------
bel  <- readRDS("./RDATA/5.CARET/Mean_pred_3GCM/Summer/PredMean_3GCMs_ffs_6var_Summer_Bel_West.rds")
bwW  <- readRDS("./RDATA/5.CARET/Mean_pred_3GCM/Summer/PredMean_3GCMs_ffs_6var_Summer_Bw_West.rds")
bwE  <- readRDS("./RDATA/5.CARET/Mean_pred_3GCM/Summer/PredMean_3GCMs_ffs_6var_Summer_Bw_East.rds")
narW <- readRDS("./RDATA/5.CARET/Mean_pred_3GCM/Summer/PredMean_3GCMs_ffs_6var_Summer_Nar_West.rds")
narE <- readRDS("./RDATA/5.CARET/Mean_pred_3GCM/Summer/PredMean_3GCMs_ffs_6var_Summer_Nar_east.rds")

#--------------------------------------
# predictions in summer 2100 ssp126
#--------------------------------------
bel_2100_126  <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bel_West_ssp126.rds")
bwW_2100_126  <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bw_West_ssp126.rds")
bwE_2100_126  <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bw_East_ssp126.rds")
narW_2100_126 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Nar_West_ssp126.rds")
narE_2100_126 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Nar_East_ssp126.rds")

#--------------------------------------
# predictions in summer 2100 ssp585
#--------------------------------------
bel_2100_585  <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bel_West_ssp585.rds")
bwW_2100_585  <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bw_West_ssp585.rds")
bwE_2100_585  <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Bw_East_ssp585.rds")
narW_2100_585 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Nar_West_ssp585.rds")
narE_2100_585 <- readRDS("./RDATA/6.Climatic_projections/Mean_pred_3GCM/Summer/ProjMean_3GCMs_ffs_6var_Summer2100_Nar_East_ssp585.rds")

###############################################
# convert rasters to tibble
###############################################

#-------------------
# belugas in summer
#-------------------
bel <- as_tibble(rasterToPoints(bel))
bel$ssp = "Present"
bel2100_126 <- as_tibble(rasterToPoints(bel_2100_126))
bel2100_126$ssp = "2100 (ssp126)"
# bel2100_126$year = "2100"
bel2100_585 <- as_tibble(rasterToPoints(bel_2100_585))
bel2100_585$ssp = "2100 (ssp585)"
# bel2100_585$year = "2100"
west_bel = rbind(bel, bel2100_126, bel2100_585)
west_bel$ssp = factor(west_bel$ssp,
                      levels=c("Present","2100 (ssp126)","2100 (ssp585)"))
west_bel$side = "West"
west_bel$sp   = "Belugas"
west_bel$sp   = factor(west_bel$sp)
west_bel$side = factor(west_bel$side)

#--------------------------------
# bowheads in summer
#--------------------------------
bwW <- as_tibble(rasterToPoints(bwW))
bwW$ssp = "Present"
bwW2100_126 <- as_tibble(rasterToPoints(bwW_2100_126))
bwW2100_126$ssp = "2100 (ssp126)"
bwW2100_585 <- as_tibble(rasterToPoints(bwW_2100_585))
bwW2100_585$ssp = "2100 (ssp585)"
west_bw = rbind(bwW, bwW2100_126, bwW2100_585)
west_bw
west_bw$ssp = factor(west_bw$ssp,
                     levels=c("Present","2100 (ssp126)","2100 (ssp585)"))
west_bw$side = "West"
west_bw$sp   = "Bowheads"
west_bw$sp   = factor(west_bw$sp)
west_bw$side = factor(west_bw$side)

bwE <- as_tibble(rasterToPoints(bwE))
bwE$ssp = "Present"
bwE2100_126 <- as_tibble(rasterToPoints(bwE_2100_126))
bwE2100_126$ssp = "2100 (ssp126)"
bwE2100_585 <- as_tibble(rasterToPoints(bwE_2100_585))
bwE2100_585$ssp = "2100 (ssp585)"
east_bw = rbind(bwE, bwE2100_126, bwE2100_585)
east_bw
east_bw$ssp = factor(east_bw$ssp,
                     levels=c("Present","2100 (ssp126)","2100 (ssp585)"))
east_bw$side = "east"
east_bw$sp   = "Bowheads"
east_bw$sp   = factor(east_bw$sp)
east_bw$side = factor(east_bw$side)

#--------------------------------
# narwhals in summer
#--------------------------------
narW <- as_tibble(rasterToPoints(narW))
narW$ssp = "Present"
narW2100_126 <- as_tibble(rasterToPoints(narW_2100_126))
narW2100_126$ssp = "2100 (ssp126)"
narW2100_585 <- as_tibble(rasterToPoints(narW_2100_585))
narW2100_585$ssp = "2100 (ssp585)"
west_nar = rbind(narW, narW2100_126, narW2100_585)
west_nar
west_nar$ssp = factor(west_nar$ssp,
                      levels=c("Present","2100 (ssp126)","2100 (ssp585)"))
west_nar$side = "West"
west_nar$sp   = "Narwhals"
west_nar$sp   = factor(west_nar$sp)
west_nar$side = factor(west_nar$side)

narE <- as_tibble(rasterToPoints(narE))
narE$ssp = "Present"
narE2100_126 <- as_tibble(rasterToPoints(narE_2100_126))
narE2100_126$ssp = "2100 (ssp126)"
narE2100_585 <- as_tibble(rasterToPoints(narE_2100_585))
narE2100_585$ssp = "2100 (ssp585)"
east_nar = rbind(narE, narE2100_126, narE2100_585)
east_nar
east_nar$ssp = factor(east_nar$ssp,
                      levels=c("Present","2100 (ssp126)","2100 (ssp585)"))
east_nar$side = "east"
east_nar$sp   = "Narwhals"
east_nar$sp   = factor(east_nar$sp)
east_nar$side = factor(east_nar$side)

# combine 3 species per side
#----------------------------
west = rbind(west_bel, west_bw, west_nar)
east = rbind(east_bw, east_nar)

# plot belugas
#-------------
bel = ggplot(shore, aes(long, lat)) +
  geom_contour_filled(data=west[west$sp=="Belugas",],
                      aes(x,y,z=layer), breaks=seq(0,1,by=0.2)) +
  scale_fill_brewer(palette="BuPu") +
  coord_map("azequidistant", xlim=c(-103,-20), ylim=c(60,80)) +
  scale_y_continuous(breaks=c(60,70), labels=c(60,70)) +
  geom_polygon(aes(group=group), fill="lightgrey", lwd=1) +
  facet_grid(sp~ssp, switch='y') +
  annotate("text", x=-18.5, y=62, label="IS", colour="black", size=2) +
  annotate(geom="text", x=-42, y=72, label="Greenland", angle=65, size=2, colour="black") +
  annotate(geom="text", x=-110, y=65, label="Canada", angle=-50, size=2, colour="black") +
  annotate("text", x=-70, y=67, label="BA", colour="black", size=2, angle=45) +
  annotate("text", x=33, y=74, label="SVB", colour="black", size=2) +
  labs(y="", x="", fill="Habitat \nsuitability", title="SUMMER") +
  theme(axis.text.y = element_blank(),
        axis.text.x = element_blank(),
        axis.ticks = element_blank(),
        strip.text.x = element_text(margin = margin(0,0,0,0,"cm")),
        panel.spacing = unit(0.1, "lines"),
        strip.background = element_rect(fill="steelblue4", size=0.1),
        strip.text = element_text(colour='white', size=10, face="bold",
                                  margin=margin(0.1,0.1,0.1,0.1,"cm")),
        axis.title = element_text(size=12, face="bold", hjust=0.5),
        panel.border = element_rect(colour="black", fill=NA, size=0.2),
        legend.key.size = unit(0.5,"line"),
        legend.key.width = unit(0.14,"cm"),
        legend.key.height = unit(0.3,"cm"),
        legend.title = element_text(size=7),
        legend.text = element_text(size=7),
        legend.box.spacing = unit(0.1,'cm'),
        legend.margin = margin(t=-0.0, unit='cm'),
        legend.key = element_blank(),
        panel.spacing.x = unit(0.07, "lines"),
        panel.spacing.y = unit(0.07, "lines"),
        title = element_text(colour="black", size=13, face="bold"),
        plot.title = element_text(size=14, vjust=-0.5, hjust=0.5, colour="steelblue4"),
        panel.background = element_blank(),
        panel.grid.major = element_line(colour="gray52", size=0.1),
        plot.margin = unit(c(-0.5,0.1,-1.3,-0.4),"cm")) # top,right,bottom,left

# plot bowheads
#---------------
bw = ggplot(shore, aes(long, lat)) +
  geom_contour_filled(data=west[west$sp=="Bowheads",],
                      aes(x,y,z=layer), breaks=seq(0,1,by=0.2)) +
  geom_contour_filled(data=east[east$sp=="Bowheads",],
                      aes(x,y,z=layer), breaks=seq(0,1,by=0.2)) +
  scale_fill_brewer(palette="BuPu") +
  coord_map("azequidistant", xlim=c(-105,75), ylim=c(60,80)) +
  scale_y_continuous(breaks=c(60,70), labels=c(60,70)) +
  geom_polygon(aes(group=group), fill="lightgrey", lwd=1) +
  facet_grid(sp~ssp, switch='y') +
  annotate("text", x=-18.5, y=62, label="IS", colour="black", size=2) +
  annotate(geom="text", x=-42, y=72, label="Greenland", angle=45, size=2, colour="black") +
  # annotate(geom="text", x=-106, y=61, label="Canada", angle=0, size=2, colour="black") +
  annotate(geom="text", x=65, y=63, label="Russia", angle=-50, size=2, colour="black") +
  annotate("text", x=-70, y=67, label="BA", colour="black", size=2, angle=45) +
  annotate("text", x=33, y=74, label="SVB", colour="black", size=2) +
  labs(y="", x="", fill="Habitat \nsuitability", title="") +
  theme(axis.text.y = element_blank(),
        axis.text.x = element_blank(),
        axis.ticks = element_blank(),
        strip.background.x = element_blank(), # element_rect(fill="gray52", size=0.1)
        strip.background.y = element_rect(fill="steelblue4", size=0.1),
        strip.text.x = element_blank(),
        strip.text.y = element_text(colour='white', size=10, face="bold",
                                    margin=margin(0.1,0.1,0.1,0.1,"cm")),
        axis.title = element_text(size=12, face="bold", hjust=0.5),
        panel.border = element_rect(colour="black", fill=NA, size=0.2),
        legend.key.size = unit(0.5,"line"),
        legend.key.width = unit(0.14,"cm"),
        legend.key.height = unit(0.3,"cm"),
        legend.title = element_text(size=7),
        legend.text = element_text(size=7),
        legend.box.spacing = unit(0.1,'cm'),
        legend.margin = margin(t=-0.0, unit='cm'),
        legend.key = element_blank(),
        panel.spacing.x = unit(0.07, "lines"),
        panel.spacing.y = unit(0.07, "lines"),
        title = element_text(colour="black", size=12, face="bold"),
        plot.title = element_text(size=14, vjust=-0.5, hjust=0),
        panel.background = element_blank(),
        panel.grid.major = element_line(colour="gray52", size=0.1),
        plot.margin = unit(c(-1.3,0.1,-1.5,-0.4),"cm")) # top,right,bottom,left

# plot narwhals
#---------------
nar = ggplot(shore, aes(long, lat)) +
  geom_contour_filled(data=west[west$sp=="Narwhals",],
                      aes(x,y,z=layer), breaks=seq(0,1,by=0.2)) +
  geom_contour_filled(data=east[east$sp=="Narwhals",],
                      aes(x,y,z=layer), breaks=seq(0,1,by=0.2)) +
  scale_fill_brewer(palette="BuPu") +
  coord_map("azequidistant", xlim=c(-103,-20), ylim=c(60,80)) +
  scale_y_continuous(breaks=c(60,70), labels=c(60,70)) +
  geom_polygon(aes(group=group), fill="lightgrey", lwd=1) +
  facet_grid(sp~ssp, switch='y') +
  annotate("text", x=-18.5, y=62, label="IS", colour="black", size=2) +
  annotate(geom="text", x=-42, y=72, label="Greenland", angle=65, size=2, colour="black") +
  annotate(geom="text", x=-110, y=65, label="Canada", angle=-50, size=2, colour="black") +
  annotate("text", x=-70, y=67, label="BA", colour="black", size=2, angle=45) +
  labs(y="", x="", fill="Habitat \nsuitability", title="") +
  theme(axis.text.y = element_blank(),
        axis.text.x = element_blank(),
        axis.ticks = element_blank(),
        strip.background.x = element_blank(),
        strip.background.y = element_rect(fill="steelblue4", size=0.1),
        strip.text.x = element_blank(),
        strip.text.y = element_text(colour='white', size=10, face="bold",
                                    margin=margin(0.1,0.1,0.1,0.1,"cm")),
        axis.title = element_text(size=12, face="bold", hjust=0.5),
        panel.border = element_rect(colour="black", fill=NA, size=0.2),
        legend.key.size = unit(0.5,"line"),
        legend.key.width = unit(0.14,"cm"),
        legend.key.height = unit(0.3,"cm"),
        legend.title = element_text(size=7),
        legend.text = element_text(size=7),
        legend.box.spacing = unit(0.1,'cm'),
        legend.margin = margin(t=-0.0, unit='cm'),
        legend.key = element_blank(),
        panel.spacing.x = unit(0.07, "lines"),
        panel.spacing.y = unit(0.07, "lines"),
        title = element_text(colour="black", size=14, face="bold"),
        plot.title = element_text(size=14, vjust=-0.5, hjust=0),
        panel.background = element_blank(),
        panel.grid.major = element_line(colour="gray52", size=0.1),
        plot.margin = unit(c(-1.7,0.1,-1,-0.4),"cm")) # top,right,bottom,left

a1 = grid.arrange(bel, bw, nar, ncol=1)
a2 = ggdraw() +
  draw_plot(a1) +
  draw_image(png_bel, x=-0.240, y=0.17,  scale=.19) +
  draw_image(png_bel, x=0.040,  y=0.17,  scale=.19) +
  draw_image(png_bel, x=0.325,  y=0.17,  scale=.19) +
  draw_image(png_bw,  x=-0.24,  y=-0.12, scale=.14) +
  draw_image(png_bw,  x=0.04,   y=-0.12, scale=.14) +
  draw_image(png_bw,  x=0.33,   y=-0.12, scale=.14) +
  draw_image(png_nar, x=-0.25,  y=-0.38, scale=.18) +
  draw_image(png_nar, x=0.04,   y=-0.38, scale=.18) +
  draw_image(png_nar, x=0.32,   y=-0.38, scale=.18)

setwd("/Users/philippinechambault/Documents/POST-DOC/2021/PAPER3")
ggsave(filename="./PAPER/Science/2.Resubmission_March2022/Figures/Fig.1.Summer.pdf",
       width=6, height=3.5, units="in", dpi=400, family="Helvetica", a2)
<?xml version="1.0" encoding="utf-8"?> <serviceModel xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="AzureCloudService" generation="1" functional="0" release="0" Id="e150ccba-ecc7-4ed4-afc1-6ee234d46c60" dslVersion="1.2.0.0" xmlns="http://schemas.microsoft.com/dsltools/RDSM"> <groups> <group name="AzureCloudServiceGroup" generation="1" functional="0" release="0"> <componentports> <inPort name="WebRole1:Endpoint1" protocol="http"> <inToChannel> <lBChannelMoniker name="/AzureCloudService/AzureCloudServiceGroup/LB:WebRole1:Endpoint1" /> </inToChannel> </inPort> <inPort name="WebRole1:Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput" protocol="tcp"> <inToChannel> <lBChannelMoniker name="/AzureCloudService/AzureCloudServiceGroup/LB:WebRole1:Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput" /> </inToChannel> </inPort> </componentports> <settings> <aCS name="Certificate|WebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" defaultValue=""> <maps> <mapMoniker name="/AzureCloudService/AzureCloudServiceGroup/MapCertificate|WebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" /> </maps> </aCS> <aCS name="WebRole1:Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" defaultValue=""> <maps> <mapMoniker name="/AzureCloudService/AzureCloudServiceGroup/MapWebRole1:Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" /> </maps> </aCS> <aCS name="WebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" defaultValue=""> <maps> <mapMoniker name="/AzureCloudService/AzureCloudServiceGroup/MapWebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" /> </maps> </aCS> <aCS name="WebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" defaultValue=""> <maps> <mapMoniker name="/AzureCloudService/AzureCloudServiceGroup/MapWebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" /> </maps> </aCS> 
<aCS name="WebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" defaultValue=""> <maps> <mapMoniker name="/AzureCloudService/AzureCloudServiceGroup/MapWebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" /> </maps> </aCS> <aCS name="WebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" defaultValue=""> <maps> <mapMoniker name="/AzureCloudService/AzureCloudServiceGroup/MapWebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" /> </maps> </aCS> <aCS name="WebRole1:Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" defaultValue=""> <maps> <mapMoniker name="/AzureCloudService/AzureCloudServiceGroup/MapWebRole1:Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" /> </maps> </aCS> <aCS name="WebRole1Instances" defaultValue="[1,1,1]"> <maps> <mapMoniker name="/AzureCloudService/AzureCloudServiceGroup/MapWebRole1Instances" /> </maps> </aCS> </settings> <channels> <lBChannel name="LB:WebRole1:Endpoint1"> <toPorts> <inPortMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Endpoint1" /> </toPorts> </lBChannel> <lBChannel name="LB:WebRole1:Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput"> <toPorts> <inPortMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput" /> </toPorts> </lBChannel> <sFSwitchChannel name="SW:WebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp"> <toPorts> <inPortMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp" /> </toPorts> </sFSwitchChannel> </channels> <maps> <map name="MapCertificate|WebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" kind="Identity"> <certificate> <certificateMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" /> </certificate> </map> <map name="MapWebRole1:Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" 
kind="Identity"> <setting> <aCSMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" /> </setting> </map> <map name="MapWebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" kind="Identity"> <setting> <aCSMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" /> </setting> </map> <map name="MapWebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" kind="Identity"> <setting> <aCSMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" /> </setting> </map> <map name="MapWebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" kind="Identity"> <setting> <aCSMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" /> </setting> </map> <map name="MapWebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" kind="Identity"> <setting> <aCSMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" /> </setting> </map> <map name="MapWebRole1:Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" kind="Identity"> <setting> <aCSMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" /> </setting> </map> <map name="MapWebRole1Instances" kind="Identity"> <setting> <sCSPolicyIDMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1Instances" /> </setting> </map> </maps> <components> <groupHascomponents> <role name="WebRole1" generation="1" functional="0" release="0" software="C:\Users\deniso\Documents\My Received Files\NewCloudRepo-master\NewCloudRepo-master\AzureCloudService\csx\Release\roles\WebRole1" entryPoint="base\x64\WaHostBootstrapper.exe" parameters="base\x64\WaIISHost.exe " memIndex="-1" 
hostingEnvironment="frontendadmin" hostingEnvironmentVersion="2"> <componentports> <inPort name="Endpoint1" protocol="http" portRanges="80" /> <inPort name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput" protocol="tcp" /> <inPort name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp" protocol="tcp" portRanges="3389" /> <outPort name="WebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp" protocol="tcp"> <outToChannel> <sFSwitchChannelMoniker name="/AzureCloudService/AzureCloudServiceGroup/SW:WebRole1:Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp" /> </outToChannel> </outPort> </componentports> <settings> <aCS name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" defaultValue="" /> <aCS name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" defaultValue="" /> <aCS name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" defaultValue="" /> <aCS name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" defaultValue="" /> <aCS name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" defaultValue="" /> <aCS name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" defaultValue="" /> <aCS name="__ModelData" defaultValue="&lt;m role=&quot;WebRole1&quot; xmlns=&quot;urn:azure:m:v1&quot;&gt;&lt;r name=&quot;WebRole1&quot;&gt;&lt;e name=&quot;Endpoint1&quot; /&gt;&lt;e name=&quot;Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp&quot; /&gt;&lt;e name=&quot;Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput&quot; /&gt;&lt;/r&gt;&lt;/m&gt;" /> </settings> <resourcereferences> <resourceReference name="DiagnosticStore" defaultAmount="[4096,4096,4096]" defaultSticky="true" kind="Directory" /> <resourceReference name="EventStore" defaultAmount="[1000,1000,1000]" defaultSticky="false" kind="LogStore" /> </resourcereferences> <storedcertificates> <storedCertificate name="Stored0Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" certificateStore="My" certificateLocation="System"> <certificate> 
<certificateMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1/Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" /> </certificate> </storedCertificate> </storedcertificates> <certificates> <certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" /> </certificates> </role> <sCSPolicy> <sCSPolicyIDMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1Instances" /> <sCSPolicyUpdateDomainMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1UpgradeDomains" /> <sCSPolicyFaultDomainMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1FaultDomains" /> </sCSPolicy> </groupHascomponents> </components> <sCSPolicy> <sCSPolicyUpdateDomain name="WebRole1UpgradeDomains" defaultPolicy="[5,5,5]" /> <sCSPolicyFaultDomain name="WebRole1FaultDomains" defaultPolicy="[2,2,2]" /> <sCSPolicyID name="WebRole1Instances" defaultPolicy="[1,1,1]" /> </sCSPolicy> </group> </groups> <implements> <implementation Id="63885408-1071-4c3b-ad11-6c43bcf45213" ref="Microsoft.RedDog.Contract\ServiceContract\AzureCloudServiceContract@ServiceDefinition"> <interfacereferences> <interfaceReference Id="0e2b8209-dcc0-4e37-a689-94e452e6c7e8" ref="Microsoft.RedDog.Contract\Interface\WebRole1:Endpoint1@ServiceDefinition"> <inPort> <inPortMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1:Endpoint1" /> </inPort> </interfaceReference> <interfaceReference Id="4991b202-aebd-455e-95cd-80c8e1200d62" ref="Microsoft.RedDog.Contract\Interface\WebRole1:Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput@ServiceDefinition"> <inPort> <inPortMoniker name="/AzureCloudService/AzureCloudServiceGroup/WebRole1:Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput" /> </inPort> </interfaceReference> </interfacereferences> </implementation> </implements> </serviceModel>
/NewCloudRepo-master/NewCloudRepo-master/AzureCloudService/csx/Release/ServiceDefinition.rd
no_license
ericbehughes/NewCloudRepo
R
false
false
11,608
rd
#' Generates a sample-wise PCA plot
#'
#' @param dat Data matrix to use
#' @param color_var Variable to color points by
#' @param scale Whether or not variables should be scaled to have unit variance
#'   before performing the PCA.
plot_sample_pca <- function(dat, color_var=NULL, color_var_name='color',
                            point_size=4, scale=FALSE) {
    # PCA
    pca <- prcomp(t(dat), scale=scale)

    # Variance explained by each PC
    var_explained <- round(summary(pca)$importance[2,] * 100, 2)

    # Axis labels
    xl <- sprintf("PC1 (%.2f%% variance)", var_explained[1])
    yl <- sprintf("PC2 (%.2f%% variance)", var_explained[2])

    # Create a dataframe containing the first two PCs
    df <- data.frame(sample_id=colnames(dat), pc1=pca$x[,1], pc2=pca$x[,2])

    # Extend with color information
    if (!is.null(color_var)) {
        df[,color_var_name] <- color_var
    }

    # Generate plot
    plt <- ggplot(df, aes(pc1, pc2)) +
        geom_point(stat="identity", size=point_size) +
        geom_text(aes(label=sample_id), angle=45, size=4, vjust=2) +
        xlab(xl) + ylab(yl) +
        theme(axis.ticks=element_blank(), axis.text.x=element_text(angle=-90))

    if (!is.null(color_var)) {
        plt <- plt + geom_point(stat='identity', size=point_size,
                                aes(color=get(color_var_name)))
    }

    plot(plt)
}

#' Filters low-count genes from an RNA-Seq count table
#'
#' @param counts Count matrix
#' @param threshold Minimum number of reads a sample must have to contribute
#'   towards the filtering criterion
#' @param min_samples Minimum number of samples which have to have at least
#'   `threshold` reads for the gene to be kept.
filter_low_counts <- function(counts, threshold=2, min_samples=2) {
    counts[rowSums(counts > threshold) >= min_samples,]
}
# Source: khughitt/Pharmacogenomics_Prediction_Pipeline_P3, /tools/shared/visualization.R (permissive license)
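A small self-contained check of `filter_low_counts()`, the base-R helper above; the count matrix here is synthetic and chosen so each row exercises a different branch of the filtering rule.

```r
filter_low_counts <- function(counts, threshold = 2, min_samples = 2) {
  counts[rowSums(counts > threshold) >= min_samples, ]
}

counts <- matrix(c(0, 1, 0, 1,    # gene1: never above threshold -> dropped
                   5, 6, 0, 0,    # gene2: exactly 2 samples above threshold -> kept
                   9, 9, 9, 9),   # gene3: all samples above threshold -> kept
                 nrow = 3, byrow = TRUE,
                 dimnames = list(paste0("gene", 1:3), paste0("s", 1:4)))
filter_low_counts(counts)  # keeps gene2 and gene3
```

`plot_sample_pca()` additionally requires ggplot2 to be attached before it is called.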
#######################################################################
#######################################################################
## This function performs survival analysis for input genes.
## For reference on the format of expression and phenotypeData,
## please see the example analysis in getGeoDataForSurvivalAnalysis.Rmd
## that can be found on this GitHub page.
#######################################################################
survivalAnalysis = function(geneOfInterest, expressionData, phenotypeData,
                            survivalPlot = FALSE, survivalData = FALSE,
                            writeDirectory = '', ...){
  print(paste('Performing analysis for', geneOfInterest))

  ########
  # first we select our gene of interest, and sort based on median expression across all patients
  geneOfInterestExprs = expressionData %>%
    filter(grepl(geneOfInterest, symbol))
  geneOfInterestExprs$medExprs = apply(geneOfInterestExprs[,4:ncol(geneOfInterestExprs)], 1,
                                       function(x) median(x, na.rm = TRUE))
  geneOfInterestSort = geneOfInterestExprs %>%
    arrange(desc(medExprs))

  ########
  # keep the top probe for the gene, based on the expression calculated above
  geneSurvivalInput = geneOfInterestSort[1,1:(ncol(geneOfInterestSort) - 1)] %>%
    pivot_longer(cols = colnames(geneOfInterestSort)[4]:colnames(geneOfInterestSort)[(ncol(geneOfInterestSort) - 1)],
                 names_to = 'geo_accession', values_to = 'rnaExprs') %>%
    right_join(phenotypeData) %>%
    arrange(desc(rnaExprs))

  ########
  # sorting of the patients into low and high expression based on the top and bottom 25%
  geneSurvivalInput$geneLevel = ''
  geneSurvivalInput[1:round(nrow(geneSurvivalInput) * 0.25), 'geneLevel'] = 'high'
  geneSurvivalInput[round(nrow(geneSurvivalInput) * 0.75):nrow(geneSurvivalInput), 'geneLevel'] = 'low'
  geneSurvivalInput$geneLevel = factor(geneSurvivalInput$geneLevel, levels = c('low','high'))

  ########
  # calculation of the different survival metrics based on our data
  survivalFit = survfit(Surv(ovs, status) ~ geneLevel, data = geneSurvivalInput)
  survivalDiff = survdiff(Surv(ovs, status) ~ geneLevel, data = geneSurvivalInput)
  survivalPValue = 1 - pchisq(survivalDiff$chisq, length(survivalDiff$n) - 1)
  survivalSummary = surv_summary(survivalFit, data = geneSurvivalInput)
  coxStats = coxph(Surv(ovs, status) ~ geneLevel, data = geneSurvivalInput)
  coxZScore = coef(coxStats)/sqrt(diag(vcov(coxStats)))

  ########
  # this will output a survival plot
  if (survivalPlot == TRUE){
    ggsurvplot(survivalSummary,
               pval = survivalPValue,
               conf.int = FALSE,
               risk.table = FALSE,
               linetype = "strata",
               ggtheme = theme_classic(),
               palette = brewer.pal(8,'RdBu')[c(8,1)])
    ggsave(paste(baseRepository, writeDirectory, '/survival_', geneOfInterest, '.pdf', sep = ''),
           width = 4, height = 4, useDingbats = FALSE)
  }

  ########
  # this will output a text file with the survival results
  if (survivalData == TRUE){
    survivalOutput = summary(survivalFit)$table
    write.table(survivalOutput,
                paste(baseRepository, writeDirectory, '/survival_', geneOfInterest, '.csv', sep = ''),
                col.names = TRUE, row.names = TRUE, quote = FALSE, sep = ',')
  }

  ########
  # lastly we output the Z-score
  return(coxZScore)
}
#######################################################################
#######################################################################
# Source: chrishuges/protocolsDryLab, /relatedToPublishedData/getGeoDataForSurvivalAnalysisFunctions.R (no license)
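A minimal stand-in for the Cox z-score step above, using the survival package's built-in `lung` data in place of the GEO expression/phenotype inputs (which are not included here). The z-score formula is the same one `survivalAnalysis()` returns as `coxZScore`.

```r
library(survival)

# Fit a Cox model on a built-in dataset; sex stands in for geneLevel.
fit <- coxph(Surv(time, status) ~ sex, data = lung)

# Same z-score formula as in survivalAnalysis(): coefficient over its
# standard error, taken from the variance-covariance matrix.
z <- coef(fit) / sqrt(diag(vcov(fit)))
z
```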
source("loadAndPrepareData.R")

filePath <- "./data/household_power_consumption.txt"
destFileName <- "plot3.png"

# Load data...
relevantData <- loadAndPrepareData(filePath)

# Create plot file...
originalLocale <- Sys.getlocale("LC_TIME")
Sys.setlocale("LC_TIME", "English")  # Set the locale to English to make sure that weekdays are in the proper language.

png(filename = destFileName, width = 480, height = 480, units = "px")  # Set graphics device to PNG.
par(bg = "transparent")

with(relevantData, {
  plot(Time, Sub_metering_1, type = "l", main = "", xlab = "", ylab = "Energy sub metering")
  lines(Time, Sub_metering_2, type = "l", col = "red")
  lines(Time, Sub_metering_3, type = "l", col = "blue")
})
legend("topright", lwd = 1, col = c("black", "red", "blue"),
       legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"))

dev.off()  # Close the PNG graphics device again.
Sys.setlocale("LC_TIME", originalLocale)  # Set locale back to original.
# Source: dominiklanger/ExData_Plotting1, /plot3.R (no license)
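The same base-graphics pattern can be run self-contained with synthetic data, since `household_power_consumption.txt` and `loadAndPrepareData()` are not included in this file; the data frame below is invented for illustration only.

```r
# Synthetic stand-in for the prepared power-consumption data.
df <- data.frame(Time           = 1:100,
                 Sub_metering_1 = cumsum(abs(rnorm(100))),
                 Sub_metering_2 = cumsum(abs(rnorm(100))),
                 Sub_metering_3 = cumsum(abs(rnorm(100))))

png("plot3_demo.png", width = 480, height = 480)  # open the PNG device
with(df, {
  plot(Time, Sub_metering_1, type = "l", ylab = "Energy sub metering")
  lines(Time, Sub_metering_2, col = "red")
  lines(Time, Sub_metering_3, col = "blue")
})
legend("topright", lwd = 1, col = c("black", "red", "blue"),
       legend = paste0("Sub_metering_", 1:3))
dev.off()                                          # close the device
```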
library(tangram)

### Name: rmd
### Title: Generate an Rmd table entry from a cell object
### Aliases: rmd rmd.default rmd.cell rmd.cell_iqr rmd.cell_estimate
###   rmd.cell_fstat rmd.cell_fraction rmd.cell_chi2 rmd.cell_studentt
###   rmd.cell_spearman rmd.cell_n rmd.tangram rmd.table_builder

### ** Examples

rmd(tangram(drug ~ bili, pbc))
# Source: surayaaramli/typeRrh, /data/genthat_extracted_code/tangram/examples/rmd.Rd.R (no license)
Laplacian_GridOperator <- function(var="x", grid_dataframe, h=1){
  # Compute grid distances
  DeltaX = mean( diff(sort(unique(grid_dataframe[,'x']))) )
  DeltaY = mean( diff(sort(unique(grid_dataframe[,'y']))) )

  # Compute adjacency matrix
  d <- approx_equals(as.matrix(dist(grid_dataframe)), c("x"=DeltaX,"y"=DeltaY)[var]) *
       approx_equals(as.matrix(dist(grid_dataframe[[var]])), c("x"=DeltaX,"y"=DeltaY)[var])
  diag(d) <- -colSums(d)  ## Mass conservation

  as(d/h^2, "dsCMatrix")  ## Symmetric sparse matrix
}
# Source: jin-gao/movement_tools, /R/Laplacian_GridOperator.R (no license)
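`approx_equals()` is defined elsewhere in the repository, so as a self-contained illustration here is the dense 1-D analogue of the operator built above: nearest-neighbour adjacency with step `h`, with the same `diag(d) <- -colSums(d)` trick enforcing mass conservation (every column of the resulting operator sums to zero).

```r
laplacian_1d <- function(n, h = 1) {
  d <- matrix(0, n, n)
  d[cbind(1:(n - 1), 2:n)] <- 1   # adjacency: right neighbours
  d[cbind(2:n, 1:(n - 1))] <- 1   # adjacency: left neighbours
  diag(d) <- -colSums(d)          # mass conservation
  d / h^2
}

L <- laplacian_1d(4)
colSums(L)  # all zero: the operator conserves mass
```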
#' Elementary Scoring Function for Expectiles and Quantiles
#'
#' Weighted average of the elementary scoring function for expectiles resp. quantiles
#' at level \code{alpha} with parameter \code{theta}, see [1].
#' Every choice of \code{theta} gives a scoring function consistent for the
#' expectile resp. quantile at level \code{alpha}.
#' Note that the expectile at level \code{alpha = 0.5} is the expectation (mean).
#' The smaller the score, the better.
#'
#' @name elementary_score
#' @param actual Observed values.
#' @param predicted Predicted values.
#' @param w Optional case weights.
#' @param alpha Optional level of expectile resp. quantile.
#' @param theta Optional parameter.
#' @param ... Further arguments passed to \code{weighted_mean}.
#' @return A numeric vector of length one.
#' @references
#'   [1] Ehm, W., Gneiting, T., Jordan, A. and Krüger, F. (2016), Of quantiles and
#'   expectiles: consistent scoring functions, Choquet representations and forecast
#'   rankings. J. R. Stat. Soc. B, 78: 505-562, <doi.org/10.1111/rssb.12154>.
#' @examples
#' elementary_score_expectile(1:10, c(1:9, 12), alpha = 0.5, theta = 11)
#' elementary_score_expectile(1:10, c(1:9, 12), alpha = 0.5, theta = 11, w = rep(1, 10))
#' elementary_score_quantile(1:10, c(1:9, 12), alpha = 0.5, theta = 11, w = rep(1, 10))
NULL

#' @rdname elementary_score
#' @export
elementary_score_expectile <- function(actual, predicted, w = NULL,
                                       alpha = 0.5, theta = 0, ...) {
  stopifnot(length(alpha) == 1L, alpha >= 0, alpha <= 1,
            length(theta) == 1L,
            length(actual) == length(predicted))
  score <- abs((actual < predicted) - alpha) *
    (.positive_part(actual - theta) - .positive_part(predicted - theta) -
       (actual - predicted) * (theta < predicted))
  weighted_mean(x = score, w = w, ...)
}

#' @rdname elementary_score
#' @export
elementary_score_quantile <- function(actual, predicted, w = NULL,
                                      alpha = 0.5, theta = 0, ...) {
  stopifnot(length(alpha) == 1L, alpha >= 0, alpha <= 1,
            length(theta) == 1L,
            length(actual) == length(predicted))
  score <- ((actual < predicted) - alpha) * ((theta < predicted) - (theta < actual))
  weighted_mean(x = score, w = w, ...)
}

# Helper function
.positive_part <- function(x) {
  (x + abs(x)) / 2
}
# Source: JosepER/MetricsWeighted, /R/elementary_score.R (no license)
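A worked check of the quantile score on the documented example inputs, with `weighted_mean` replaced by a plain `mean` for self-containment. With `theta = 11` above every actual value and every prediction except the last one, only the last observation contributes to the average.

```r
actual    <- 1:10
predicted <- c(1:9, 12)
alpha <- 0.5
theta <- 11

# Elementary quantile score, term by term:
# first 9 pairs: (FALSE - 0.5) * (0 - 0) = 0
# last pair:     (TRUE  - 0.5) * (1 - 0) = 0.5
score <- ((actual < predicted) - alpha) * ((theta < predicted) - (theta < actual))
mean(score)  # 0.05
```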
# Convert an R package name to a valid Nix attribute
r_to_nix <- function(name) {
  gsub(".", "_", sub("^(import|assert|inherit|with)$", "r_\\1", name), fixed = TRUE)
}

line <- function(x) cat(x, sep = "\n")

base_packages <- rownames(installed.packages(priority = "base"))

args <- commandArgs(trailingOnly = TRUE)
if (length(args) != 1) {
  stop("invalid argument")
}
snapshot <- as.Date(args)
now <- as.POSIXlt(Sys.time(), tz = "UTC")

# Fetch packages from CRAN
db <- available.packages(
  sprintf("https://mran.revolutionanalytics.com/snapshot/%s/src/contrib", snapshot),
  filters = c("R_version", "OS_type", "duplicates"),
  method = "curl"
)

cran_package_names <- db[, "Package"]
nix_package_names <- r_to_nix(cran_package_names)
versions <- db[, "Version"]
dependencies <- tools::package_dependencies(db[, "Package"], db)
md5_sums <- db[, "MD5sum"]

nix_dependencies <- lapply(dependencies, function(deps) {
  deps <- deps[!deps %in% base_packages]  # Ignore base packages as not on CRAN
  if (length(deps) > 0) {
    paste0("[ ", paste0(r_to_nix(deps), collapse = " "), " ]")
  } else {
    "[ ]"
  }
})

line(sprintf("# This file was generated by cran2nix.R at %s. Do not edit.", now))
line("{ self, mkCranDerive }:")
line(sprintf("let\n  derive = mkCranDerive { snapshot = \"%s\"; };\nin", snapshot))
line("with self; {")
line(
  sprintf(
    "  \"%s\" = derive { name = \"%s\"; version = \"%s\"; md5 = \"%s\"; depends = %s; };",
    nix_package_names, cran_package_names, versions, md5_sums, nix_dependencies
  )
)
line("}")
# Source: james-atkins/dotfiles, /pkgs/cran/cran2nix.R (no license)
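The name-mangling rule can be seen in isolation: dots become underscores (Nix attribute names cannot contain a bare `.` without quoting in this scheme), and names that collide with Nix keywords get an `r_` prefix first, so the keyword check runs before the dot substitution.

```r
r_to_nix <- function(name) {
  gsub(".", "_", sub("^(import|assert|inherit|with)$", "r_\\1", name), fixed = TRUE)
}

r_to_nix("data.table")  # "data_table"
r_to_nix("import")      # "r_import"
r_to_nix("ggplot2")     # "ggplot2" (unchanged)
```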
# Approximating a probability by simulating a Bernoulli
# distribution with parameter p = 0.5
library(tidyverse)

p = 0.5  # parameter of the Bernoulli distribution
set.seed(1)

# number of random draws
nsim = 4000

# uniform random numbers
rx = runif(nsim)
# rx

# A Bernoulli trial is a single success/failure outcome: F,S,F,S,F,...
# (note: sequences like FS, FFS, ... lead to the binomial/geometric setup)

# Bernoulli distribution
rx = runif(nsim) <= p
# rx

# theoretical mean = 0.5
# E(x) = p -- approximation
mean(rx)

# plot the evolution of the approximation
n = 1:nsim
est = cumsum(rx)/n  # running proportion (frequency table)
plot(est, type="l", lwd=2,
     xlab="Number of random draws",
     ylab="Sample proportion")
# sample mean E(x)
abline(h=mean(rx), col="blue", lty=2)
# theoretical mean (mu)
abline(h=p, col="red")

# Stability of the binomial probability function
# parameters of the binomial distribution
p = 0.1
n1 = 10
set.seed(1)
nsim = 5000

# generate the random numbers
rx = rbinom(nsim, n1, p)

# approximation of the mean:
muteo = p*n1
# sample mean
mean(rx)

# plot the evolution of the approximation
n = 1:nsim
est = cumsum(rx)/n
plot(est, type="l", lwd=2,
     xlab="Number of random draws",
     ylab="Sample mean")
abline(h=mean(rx), col="blue", lty=2)  # sample mean E(x)
abline(h=muteo, col="red")             # theoretical mean (mu)

# compute the standard error
esterr = sqrt(cumsum((rx-est)^2))/n
# confidence interval, (1 - alpha) = 0.9772
lines(est+2*esterr, lty=3, col="black")  # upper limit
lines(est-2*esterr, lty=3, col="black")  # lower limit
# qnorm(0.9772)

# Solution to the challenge
# initial parameters
set.seed(1)
xsd = 1
xmed = 0
# number of random draws
nsim = 10^3*4

# generate normal random numbers
rx = rnorm(nsim, xmed, xsd)

# vectors to store data such as the test statistics
s = numeric(1)
method = numeric(1)
statistic = numeric(1)
p.value = numeric(1)
se1 = numeric(1)
cumpl = numeric(1)

# load the first 30 elements
s = rx[1:30]

# count the number of samples with a normal distribution
j = 1
alf = 0.05
for(i in 31:nsim){
  nl = length(s)
  se = sd(s)/sqrt(nl)
  if (nl > 25){
    x = nortest::lillie.test(s)
  } else {
    x = shapiro.test(s)
  }
  method[j] = x$method
  statistic[j] = x$statistic
  p.value[j] = x$p.value
  se1[j] = se
  cumpl = ifelse(x$p.value < alf, "Not normal", "Normal")
  s[i] = rx[i]
  if(se < 0.1){
    bf = data.frame(method, statistic, p.value, se1, cumpl)
    break
  }
  j = j+1
}
mean(rx)
sd(s)
mean(s)
table(bf$cumpl)

# stability of the sample mean
# approximation of the mean:
muteo = 0
# sample mean
mean(rx)

# plot the evolution of the approximation
n = 1:nsim
est = cumsum(rx)/n
plot(est, type="l", lwd=2,
     xlab="Number of random draws",
     ylab="Sample mean")
abline(h=mean(rx), col="blue", lty=2)  # sample mean E(x)
abline(h=muteo, col="red")             # theoretical mean (mu)

# compute the standard error
esterr = sqrt(cumsum((rx-est)^2))/n
# confidence interval, (1 - alpha) = 0.9772
lines(est+2*esterr, lty=3, col="black")  # upper limit
lines(est-2*esterr, lty=3, col="black")  # lower limit

## solution to challenge 2
pracma::rand(1,10)

# create a function that generates the numbers and sums them
genN = function(x){
  N = numeric(1)
  for(i in 1:x){
    v = head(cumsum(pracma::rand(1,200)))
    N[i] = 1 + sum(v > 1)
  }
  N
}

# compute E(x) and sd(x) for n = 1000
N = genN(1000)
mean(N)
var(N)

# compute the sample size at 95%
qnorm(0.05/2)^2*var(N)/(0.01^2)
# Source: MisaelErikson/ComputacionEstadistica2020-2, /Análisis estadístico de datos simulados/clase9.R (no license)
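The first experiment above is the law of large numbers in miniature: the running proportion of Bernoulli successes converges to p. A compact, self-contained restatement:

```r
set.seed(1)
p  <- 0.5
rx <- runif(4000) <= p            # 4000 Bernoulli(p) trials

# Running proportion after each draw.
est <- cumsum(rx) / seq_along(rx)

tail(est, 1)  # close to p for a large number of draws
```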
# Shiny practice app
library(shiny)
library(shinydashboard)
library(png)
library(shinythemes)
library(devtools)

# Load the file containing the database and the noise functions
source("database.R")
source("noisesNoReactive.R")

# Draw a random index for the number to display
a = sample(500, 1)

ui = dashboardPage(
  dashboardHeader(title = "Perceptual Experiment"),
  dashboardSidebar(
    sidebarMenu(
      menuItem("Play", tabName = "play", icon = icon("play")),
      menuItem("Design", tabName = "design"),
      menuItem("Results", tabName = "results")
    )
  ),
  dashboardBody(
    tabItems(
      # Tab where we see the image
      tabItem(tabName = "play",
              box(
                plotOutput("imagen")
              ),
              box(
                numericInput("guess", "Guess the number", value = 0),
                actionButton("submit", label = "Play!"),
                verbatimTextOutput("success"),
                verbatimTextOutput("real_num")
                # actionButton("again", "Play again!")
              )),  # the number's label goes here; it is reactive because it only appears once a number has been entered
      # Tab where we choose the noise types
      tabItem(tabName = "design",
              box(title = "Types of noise",
                  checkboxInput("white_noise", "White noise"),
                  conditionalPanel("input.white_noise == true",
                                   numericInput("white_p", label = "Percentage", value = 0),
                                   selectInput("noises", label = "Noise",
                                               choices = list("Normal noise", "Uniform noise", "Pure black"))),
                  checkboxInput("zeroing_noise", "Zeroing noise"),
                  conditionalPanel("input.zeroing_noise == true",
                                   numericInput("zeroing_p", label = "Percentage", value = 0)),
                  checkboxInput("vertical_lines", "Vertical lines"),
                  conditionalPanel("input.vertical_lines == true",
                                   numericInput("vertical_p", label = "Percentage", value = 0)),
                  actionButton("apply_ch", "Apply Changes")
              ),
              box(title = "Number of conditions",
                  verbatimTextOutput("no_conditions"))
      ),
      # Tab with the table showing the numeric results
      tabItem(tabName = "results"
              # , renderDataTable()
      )
    )
  )
)

server = function(input, output, session) {
  true_label = reactiveValues(number = NULL)

  # Clicking "Play!" triggers:
  observeEvent(input$submit, {
    true_label$number = labels[[a]]
    output$success = renderText({
      # Compare the entered number with the label's number
      ifelse(input$guess == as.numeric(true_label$number),
             "Correct!", "Sorry, that's not the number!")
    })
    output$real_num = renderText({
      # Show the label's number
      ifelse(input$guess == as.numeric(true_label$number),
             "That's the number!",
             paste0("The actual number is", " ", true_label$number))
    })
  })

  # observeEvent(input$submit, {
  #   a = sample(500, 1)
  # })

  output$imagen <- renderPlot({
    img <- imagenes[[a]]
    img = normalNoise(img)
    # img = uniformNoise(img)
    img = verticalLinesNoise(img)
    image(img, axes=FALSE)
  })

  observe({
    input$submit
    isolate({
      a = as.integer(imagenes[[sample(500, 1)]])
    })
  })
}

shinyApp(ui, server)
# Source: MarcosBarrera/Shiny, /PracticaShinyR.R (no license)
library(leaflet)

# Choices for drop-downs
vars <- c("Points", "Regions")

ui <- navbarPage("STAT209", id = "nav",
  tabPanel("Interactive Map",
    div(class = "outer",
      tags$head(
        # Include our custom CSS
        includeCSS("styles.css"),
        includeScript("gomap.js")
      ),
      leafletOutput("map", width = "100%", height = "100%"),
      fluidRow(verbatimTextOutput("map_marker_click")),
      absolutePanel(id = "controls", class = "panel panel-default",
                    fixed = TRUE, draggable = TRUE,
                    top = 60, left = "auto", right = 20, bottom = "auto",
                    width = 330, height = "auto",
        h2("Plot Type"),
        selectInput("type", "Type", vars, selected = "Total"),
        sliderInput("slider", "Time",
                    min = as.Date("2020-01-01"), max = as.Date("2021-05-01"),
                    value = as.Date("2021-04-01"), step = 16,
                    timeFormat = "%b %Y"),
        textOutput("SliderText"),
        plotOutput("histPo", height = 200)
        # conditionalPanel("input.color == 'superzip' || input.size == 'superzip'"
        # Only prompt for threshold when coloring or sizing by superzip
        # numericInput("threshold", "Country", world_spdf@data$ISO3)
      )
      # plotOutput("scatterGDP", height = 250)
    )
    # tags$div(id=Pop$id,
    #   'Data compiled for ', tags$em('Coming Apart: The State of White America, 1960–2010'),
    #   ' by Charles Murray (Crown Forum, 2012).'
    # )
    # )
  )
)
/Shiny-Project/ui.R
no_license
Vanessa-feng/Mass-States-of-Mind-during-COVID-19-by-Twitter-Dataset
R
false
false
1,426
r
# load libraries, set wd
source("global.R")

# load PostgreSQL db connection..
source(file.path("hidden", "DBconnection.hidden"))
# ..or create one:
con <- dbConnect(RPostgres::Postgres(),
                 dbname = 'DB_NAME',
                 host = 'DB_HOST',
                 port = 'DB_PORT',
                 user = 'USERNAME',
                 password = 'PASSWORD')

# set system locale to Lithuanian
Sys.setlocale(, 'Lithuanian')

# make DB copy as csv file
dbReadTable(con, "delfitest") %>%
  .[!duplicated(.[c("pavadinimas", "linkas")]), -c(5, 6)] %>%
  write.csv("data/delfiDB.csv")

# read data directly from DB..
delfidb <- dbReadTable(con, "delfitest") %>%
  .[!duplicated(.[c("pavadinimas", "linkas")]), -c(5, 6)] %>%
  mutate(data = ymd(data),
         valanda = hour(hms(laikas)),
         metai = year(data),
         menuo = month(data),
         ym = format(data, format = "%Y-%m-01"),
         sav_diena = wday(as.Date(data) - 1),
         savaite = format(as.Date(data) - wday(data) + 1, format = "%Y-%m-%d"))

# ..or from csv file
delfidb <- read.csv("delfiDB.csv") %>%
  mutate(data = ymd(data),
         metai = year(data),
         menuo = month(data),
         ym = format(data, format = "%Y-%m-01"),
         sav_diena = wday(as.Date(data) - 1),
         savaite = format(as.Date(data) - wday(data) + 1, format = "%Y-%m-%d"))

# create data frame:
# - metai as year
# - text as all article names concatenated to one long string
by.metai <- NULL
for (metai in unique(delfidb$metai)) {
  subset <- delfidb[delfidb$metai == metai, ]
  text <- str_c(subset$pavadinimas, collapse = " ")
  row <- data.frame(metai, text, stringsAsFactors = FALSE)
  by.metai <- rbind(by.metai, row)
  print(metai)
}

# create text corpus using myReader as control mapping
myReader <- readTabular(mapping = list(content = "text", id = "metai"))
corpus <- Corpus(DataframeSource(by.metai),
                 readerControl = list(reader = myReader))

# transform corpus:
# - change all letters to lowercase
# - remove punctuation
# - remove Lithuanian punctuation
# - remove numbers
# - strip whitespaces
corpus %>%
  tm_map(content_transformer(tolower)) %>%
  tm_map(content_transformer(removePunctuation)) %>%
  tm_map(content_transformer(function(x) gsub("„", "", x))) %>%
  tm_map(content_transformer(function(x) gsub("“", "", x))) %>%
  tm_map(content_transformer(function(x) gsub("”", "", x))) %>%
  tm_map(content_transformer(function(x) gsub("–", "", x))) %>%
  tm_map(content_transformer(removeNumbers)) %>%
  tm_map(content_transformer(stripWhitespace)) -> corpus

# save transformed corpus as R data file
saveRDS(corpus, "data/corpus.rds")

# if RAM size is enough, corpus word tokens can be created and transformed to
# term document matrix
allTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 2))
delfi.tdm <- TermDocumentMatrix(corpus, control = list(tokenize = allTokenizer))
# .. or transformed to term document matrix without tokens
delfi.tdm <- TermDocumentMatrix(corpus)

# sparse terms can be removed
delfi.tdm.01 <- removeSparseTerms(delfi.tdm, 0.1)

# save final term document matrix as R data file
saveRDS(delfi.tdm, "data/delfiTdm.rds")

# create data frame from TDM object and save to csv file
delfiTdmDataFrame <- data.frame(inspect(delfi.tdm))
delfiTdmDataFrame$zodis <- row.names(delfiTdmDataFrame)
write.csv(delfiTdmDataFrame, "data/delfiTDMdf.csv", row.names = FALSE)
/storeData.R
no_license
ubbikas/ubbiR_delfi_tm
R
false
false
3,647
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/vld.R
\name{vld_mcmcr}
\alias{vld_mcmcr}
\alias{vld_mcmcarray}
\alias{vld_mcmcrs}
\title{Validate MCMC Objects}
\usage{
vld_mcmcarray(x)

vld_mcmcr(x)

vld_mcmcrs(x)
}
\arguments{
\item{x}{The object to check.}
}
\value{
A flag indicating whether the object was validated.
}
\description{
Validates class and structure of MCMC objects.
}
\details{
To just validate class use \code{\link[chk]{vld_s3_class}()}.
}
\section{Functions}{
\itemize{
\item \code{vld_mcmcarray}: Validate [mcmcarray-object()]

\item \code{vld_mcmcr}: Validate [mcmcr-object()]

\item \code{vld_mcmcrs}: Validate [mcmcrs-object()]
}}

\examples{
# vld_mcmcarray
vld_mcmcarray(1)

# vld_mcmcr
vld_mcmcr(1)
vld_mcmcr(mcmcr::mcmcr_example)

# vld_mcmcrs
vld_mcmcrs(1)
}
\seealso{
[chk_mcmcr()]
}
/man/vld_mcmcr.Rd
permissive
krlmlr/mcmcr
R
false
true
846
rd
setwd(normalizePath(dirname(R.utils::commandArgs(asValues = TRUE)$"f")))
source('../h2o-runit.R')

test.mergecat <- function() {
  census_path <- locate("smalldata/chicago/chicagoCensus.csv")
  crimes_path <- locate("smalldata/chicago/chicagoCrimes10k.csv.zip")

  Log.info("Import Chicago census data...")
  census_raw <- h2o.importFile(census_path, parse = FALSE)
  census_setup <- h2o.parseSetup(census_raw)
  census_setup$column_types[2] <- "Enum"  # change from String -> Enum
  census <- h2o.parseRaw(census_raw, col.types = census_setup$column_types)

  Log.info("Import Chicago crimes data...")
  crimes <- h2o.importFile(crimes_path)

  Log.info("Set column names to be syntactically valid for R")
  names(census) <- make.names(names(census))
  names(crimes) <- make.names(names(crimes))
  print(summary(census))
  print(summary(crimes))

  Log.info("Merge crimes and census data on community area number")
  names(census)[names(census) == "Community.Area.Number"] <- "Community.Area"
  crimeMerge <- h2o.merge(crimes, census)
  print(summary(crimeMerge))

  testEnd()
}

doTest("Merging H2O Frames that contain categorical columns", test.mergecat)
/h2o-r/tests/testdir_misc/runit_mergecat.R
permissive
brightchen/h2o-3
R
false
false
1,160
r
nc <- sf::read_sf(system.file("shape/nc.shp", package = "sf"))
nc <- sf::st_transform(nc, 3857)

basemap <- ggplot() +
  layer_location_data(
    data = nc,
    fill = NA
  )

basemap +
  layer_markers(
    data = nc[1:10],
    mapping = aes(size = AREA),
    make = TRUE
  )

large_nc <- getdata::get_location_data(
  data = nc,
  fn = ~ dplyr::filter(.x, AREA > 0.2)
)

large_nc$number <- 1
large_nc$dist <- 2

basemap +
  layer_numbers(
    data = large_nc,
    mapping = aes(fill = NAME),
    sort = "dist_xmax_ymin",
    num_style = "Roman",
    geom = "label",
    size = 3
  ) +
  guides(fill = "none")
/examples/layer_markers.R
permissive
elipousson/maplayer
R
false
false
623
r
##' align 454 contigs and mira sequence to reference sequence.
##'
##'
##' @title doAlign
##' @param x file names of 454 contig, reference and mira sequences
##' @return fasta S3 object
##' @author ygc
##' @importFrom muscle muscle
##' @importFrom Biostrings readDNAStringSet
##' @importFrom Biostrings reverseComplement
##' @importFrom magrittr %<>%
##' @export
doAlign <- function(x) {
    ## suppose x[1] is 454 contigs
    ## x[2] is reference sequence
    ## x[3] is mira sequence, that is, the read assembly mapped to the reference.
    ## f454 <- read.fasta(x[1])
    f454 <- readDNAStringSet(x[1])
    fref <- readDNAStringSet(x[2])
    fref <- fref[1]
    ## fref$seqs <- fref$seqs[1,]
    ## fref$num <- 1
    ## fmira <- read.fasta(x[3])
    fmira <- readDNAStringSet(x[3])

    ## for (i in 1:nrow(f454$seqs)) {
    for (i in 1:length(f454)) {
        ## fa <- list(seqs=rbind(f454$seqs[i,], fref$seqs), num=fref$num+1)
        ## class(fa) <- "fasta"
        ## aln <- muscle(fa, quiet = TRUE)
        fa <- c(f454[i], fref)
        aln <- muscle(fa, quiet = TRUE)
        fa2 <- fa
        fa2[1] <- reverseComplement(fa2[1])
        ## fa2$seqs[1,2] <- revcomp(fa2$seqs[1,2])
        aln2 <- muscle(fa2, quiet = TRUE)
        if (identityRatio(aln) < identityRatio(aln2)) {
            ## f454$seqs[i,1] <- paste("RC", f454$seqs[i,1], sep="_")
            ## f454$seqs[i,2] <- revcomp(f454$seqs[i,2])
            f454[i] <- reverseComplement(f454[i])
            names(f454)[i] <- paste("RC", names(f454)[i], sep="_")
        }
    }

    ## fasta <- list(seqs=rbind(fref$seqs, fmira$seqs, f454$seqs),
    ##               num=fref$num+fmira$num+f454$num)
    ## class(fasta) <- "fasta"
    fasta <- c(fref, fmira, f454)

    ## if (f454$num == 1) {
    if (length(f454) == 1) {
        res2 <- muscle(fasta, quiet = TRUE)
        ## ii <- getIdx(fasta$seqs[,1], res2$seqs[,1])
        ## res2$seqs <- res2$seqs[ii,]
        result <- DNAStringSet(res2)
        ii <- getIdx(names(fasta), names(result))
        result <- result[ii]
        return(result)
    }

    ## res <- lapply(1:f454$num, function(i) {
    res <- lapply(1:length(f454), function(i) {
        ## fa <- list(seqs=rbind(fref$seqs, fmira$seqs, f454$seqs[i,]),
        ##            num=3)
        ## class(fa) <- "fasta"
        fa <- c(fref, fmira, f454[i])
        xx <- muscle(fa, quiet = TRUE)
        yy <- DNAStringSet(xx)
        ## jj <- getIdx(fasta$seqs[1,1], xx$seqs[,1])
        jj <- getIdx(names(fasta)[1], names(yy))
        ## ii <- 1:nrow(xx$seqs)
        ii <- 1:length(yy)
        yy <- yy[c(jj, ii[-jj])]
        ## xx$seqs <- xx$seqs[c(jj, ii[-jj]),]
        ## return(xx)
        return(yy)
    })

    ## if (length(unique(sapply(res, function(x) x$length))) != 1) {
    if (length(unique(sapply(res, width)[1,])) != 1) {
        ## yy <- list(seqs=do.call("rbind", lapply(res, function(x) x$seqs[1,])),
        ##            num=length(x))
        ## yy$seqs[,2] %<>% gsub("-", "X", .)
        ## class(yy) <- "fasta"
        yy <- toString(res[[1]][1])
        for (i in 2:length(res)) {
            yy <- c(yy, toString(res[[i]][1]))
        }
        yy %<>% gsub("-", "N", .)
        yy <- DNAStringSet(yy)
        yres <- muscle(yy, quiet = TRUE)
        yres <- DNAStringSet(yres)
        for (j in seq_along(res)) {
            res_item <- res[[j]]
            ## res_seq <- res_item$seqs[,2]
            res_seq <- sapply(res_item, toString)
            ## seq <- yres$seqs[j, 2]
            seq <- toString(yres[j])
            idx <- gregexpr("-+", seq)
            idx <- idx[[1]]
            if (length(idx) == 1 && idx < 1) {
                next
            }
            for (i in seq_along(idx)) {
                for (k in seq_along(res_seq)) {
                    if (idx[i] == 1) {
                        res_seq[k] <- paste0(
                            paste0(rep("-", attr(idx, "match.length")[i]), collapse = ""),
                            res_seq[k], collapse = "")
                    } else if (idx[i] > nchar(res_seq[k])) {
                        res_seq[k] <- paste0(res_seq[k],
                                             paste0(rep("-", attr(idx, "match.length")[i]), collapse = ""),
                                             collapse = "")
                    } else {
                        res_seq[k] <- paste0(substring(res_seq[k], 1, idx[i]-1),
                                             paste0(rep("-", attr(idx, "match.length")[i]), collapse = ""),
                                             substring(res_seq[k], idx[i]),
                                             collapse = "")
                    }
                }
            }
            ## res_item$seqs[,2] <- res_seq
            ## res[[j]] <- res_item
            res[[j]] <- DNAStringSet(res_seq)
            ## res[[j]]$length <- nchar(res_seq[1])
        }
    }

    ## if (length(unique(sapply(res, function(x) x$length))) == 1) {
    if (length(unique(sapply(res, width)[1,])) == 1) {
        ## jj <- sapply(res, function(x) getIdx(fasta$seqs[1,1], x$seqs[,1]))
        jj <- sapply(res, function(x) getIdx(names(fasta)[1], names(x)))
        ## r1 <- res[[1]]$seqs[jj[1],2]
        r1 <- res[[1]][jj[1]]
        flag <- FALSE
        for (i in 2:length(jj)) { ## jj should >= 2
            ## if (r1 != res[[i]]$seqs[jj[i],2]) {
            if (r1 != res[[i]][jj[i]]) {
                flag <- TRUE
                break
            }
        }
        if (flag == TRUE) {
            res2 <- muscle(fasta, quiet = TRUE)
            result <- DNAStringSet(res2)
        } else {
            ## seqs <- lapply(res, function(x) x$seqs)
            ## seqs <- do.call("rbind", seqs)
            ## seqs <- unique(seqs)
            seqs <- res[[1]]
            for (i in 2:length(res)) {
                seqs <- c(seqs, res[[i]])
            }
            ## seqs <- unique(seqs)
            ## unique will remove mira sequence if it is identical to reference
            seqs <- seqs[!duplicated(names(seqs))]
            ## sn <- seqs[,1]
            sn <- names(seqs)
            if (length(sn) != length(unique(sn))) {
                k <- sapply(unique(sn), function(i) which(i == sn)[1])
                ## seqs <- seqs[k,]
                seqs <- seqs[k]
            }
            ## res2 <- list(seqs=seqs, num=nrow(seqs), length=res[[1]]$length)
            result <- seqs
        }
    } else {
        res2 <- muscle(fasta, quiet = TRUE)
        result <- DNAStringSet(res2)
    }
    ## ii <- getIdx(fasta$seqs[,1], res2$seqs[,1])
    ii <- getIdx(names(fasta), names(result))
    ## res2$seqs <- res2$seqs[ii,]
    result <- result[ii]
    return(result)
}

##' align multiple sequences stored in several fasta files.
##'
##'
##' @title doAlign2
##' @param files fasta files
##' @return alignment
##' @author ygc
##' @importFrom Biostrings readDNAStringSet
##' @importFrom muscle muscle
##' @export
doAlign2 <- function(files) {
    seqs <- lapply(files, readDNAStringSet)
    fa <- seqs[[1]]
    if (length(seqs) > 1) {
        for (i in 2:length(seqs)) {
            fa <- c(fa, seqs[[i]])
        }
    }
    ## str <- unlist(lapply(seqs, toString))
    ## nn <- unlist(lapply(seqs, names))
    ## fa <- list(seqs=data.frame(V1=nn, V2=str), num=length(nn))
    ## class(fa) <- "fasta"
    aln <- muscle(fa)
    return(aln)
}

##' write aligned sequence to fasta file
##'
##'
##' @title writeAlignedSeq
##' @param aln alignment
##' @param output out file
##' @return NULL
##' @author ygc
##' @importFrom Biostrings DNAStringSet
##' @importFrom Biostrings writeXStringSet
##' @export
writeAlignedSeq <- function(aln, output) {
    ## fa <- DNAStringSet(aln$seqs[,2])
    ## names(fa) <- aln$seqs[,1]
    ## writeXStringSet(fa, output)
    writeXStringSet(DNAStringSet(aln), output)
}
/R/doAlign.R
no_license
GuangchuangYu/skleid
R
false
false
7,777
r
#' @title Download and import the raw dataset.
#'
#' @description Download the dataset from GEO, filter, and create a
#' \code{SingleCellExperiment} object
#'
#' @export
#' @import BiocFileCache SingleCellExperiment rappdirs slingshot HDF5Array
importRawData <- function(){
  url <- "https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE114687&format=file&file=GSE114687%5Fpseudospace%5Fcds%2Erds%2Egz"
  path <- paste0(rappdirs::user_cache_dir(), basename(url))
  bfc <- BiocFileCache::BiocFileCache(path, ask = FALSE)
  addCds <- bfcadd(bfc, "cds", fpath = url)
  con <- gzcon(gzfile(addCds))
  cds <- readRDS(con)
  # Extract useful info from the cellDataSet object
  counts <- cds@assayData$exprs
  phenoData <- pData(cds@phenoData)
  rd <- SimpleList(
    tSNEorig = cbind(cds@phenoData@data$TSNE.1,
                     cds@phenoData@data$TSNE.2)
  )
  rm(cds); gc(verbose = FALSE)
  filt <- apply(counts, 1, function(x){
    sum(x >= 2) >= 15
  })
  counts <- counts[filt, ]
  sce <- SingleCellExperiment::SingleCellExperiment(
    assays = list(counts = counts),
    colData = phenoData,
    reducedDims = rd)
  return(sce)
}
/R/import_raw_data.R
permissive
wjbsb/bioc2020trajectories
R
false
false
1,112
r
## ----------------------------------------------
# Reference URL: https://davetang.org/muse/2013/04/06/using-the-r_twitter-package/

# install.packages("twitteR")
# install.packages("wordcloud")
# install.packages("tm")
library("twitteR")
library("wordcloud")
library("tm")

###
# The reference site says it's necessary, but
# I didn't find it used anywhere.
###
#download.file(url="http://curl.haxx.se/ca/cacert.pem", destfile="cacert.pem")

#to get your consumerKey and consumerSecret see the twitteR documentation for instructions
consumer_key <- 'cr3eQVg70a8BiwrYvtdLpmDKO'
consumer_secret <- '9LBgn6FQF2VUGH0pRrf5BvG0Obqc564MmTrNn7g8FbwrXzh3T6'
access_token <- '67892304-4KcGY8R0FbBxe7zuyyFfNgUY2udUY5cWNW2ez49n0'
access_secret <- 'ZciIo4pgZQvnI0jYl7fPkJofBEbn1FSFxHFT3Io1ERNdj'
setup_twitter_oauth(consumer_key, consumer_secret, access_token, access_secret)

###
# Get the tweets with hash tag 'Rstats', at most 1500 tweets
###
r_stats <- searchTwitter("#Rstats", n = 1500)

# Check the number of tweets we received.
length(r_stats)
#[1] 1500

#save text
r_stats_text <- sapply(r_stats, function(x) x$getText())

#create corpus
r_stats_text_corpus <- Corpus(VectorSource(r_stats_text))

# Convert into UTF-8 format
r_stats_text_corpus <- tm_map(r_stats_text_corpus,
                              content_transformer(function(x) iconv(x, to='UTF-8', sub='byte')))

# clean up
r_stats_text_corpus <- tm_map(r_stats_text_corpus, content_transformer(tolower))
r_stats_text_corpus <- tm_map(r_stats_text_corpus, removePunctuation)
r_stats_text_corpus <- tm_map(r_stats_text_corpus, function(x) removeWords(x, stopwords()))

wordcloud(r_stats_text_corpus, min.freq = 15, random.order = FALSE, random.color = TRUE)

## ------------------------------
library(RColorBrewer)

###
# Get the tweets with hash tag 'bioinformatics', at most 1500 tweets
###
bioinformatics <- searchTwitter("#bioinformatics", n = 1500)
bioinformatics_text <- sapply(bioinformatics, function(x) x$getText())
bioinformatics_text_corpus <- Corpus(VectorSource(bioinformatics_text))
bioinformatics_text_corpus <- tm_map(bioinformatics_text_corpus,
                                     content_transformer(function(x) iconv(x, to='UTF-8', sub='byte')))
bioinformatics_text_corpus <- tm_map(bioinformatics_text_corpus, content_transformer(tolower))
bioinformatics_text_corpus <- tm_map(bioinformatics_text_corpus, removePunctuation)
bioinformatics_text_corpus <- tm_map(bioinformatics_text_corpus, function(x) removeWords(x, stopwords()))

pal2 <- brewer.pal(8, "Dark2")
wordcloud(bioinformatics_text_corpus, min.freq = 2, max.words = 100,
          random.order = T, colors = pal2)
/Unsupervised/Twitter/firstTwitterApp.R
no_license
ujjwal82/R-practice
R
false
false
2,748
r
#' Traverse outward node-by-node until stopping conditions are met
#'
#' @description From a graph object of class \code{dgr_graph}, move along
#' outward edges from one or more nodes present in a selection to other
#' connected nodes, replacing the current nodes in the selection with those
#' nodes traversed to until reaching nodes that satisfy one or more
#' conditions.
#' @param graph a graph object of class \code{dgr_graph}.
#' @param conditions an option to use a stopping condition for the traversal.
#' If the condition is met during the traversal (i.e., the node(s) traversed
#' to match the condition), then those traversals will terminate at those
#' nodes. Otherwise, traversals will continue and terminate when the number
#' of steps provided in \code{max_steps} is reached.
#' @param max_steps the maximum number of \code{trav_out()} steps (i.e.,
#' node-to-node traversals in the outward direction) to allow before
#' stopping.
#' @param exclude_unmatched if \code{TRUE} (the default value) then any nodes
#' not satisfying the conditions provided in \code{conditions} that are in
#' the ending selection are excluded.
#' @param add_to_selection if \code{TRUE} then every node traversed will be
#' part of the final selection of nodes. If \code{FALSE} (the default value)
#' then only the nodes finally traversed to will be part of the final node
#' selection.
#' @return a graph object of class \code{dgr_graph}.
#' @examples
#' # Create a path graph and add
#' # values of 1 to 10 across the
#' # nodes from beginning to end;
#' # select the first path node
#' graph <-
#'   create_graph() %>%
#'   add_path(
#'     n = 10,
#'     node_data = node_data(
#'       value = 1:10)) %>%
#'   select_nodes_by_id(
#'     nodes = 1)
#'
#' # Traverse outward, node-by-node
#' # until stopping at a node where
#' # the `value` attribute is 8
#' graph <-
#'   graph %>%
#'   trav_out_until(
#'     conditions =
#'       value == 8)
#'
#' # Get the graph's node selection
#' graph %>%
#'   get_selection()
#' #> [1] 8
#'
#' # Create two cycles in the graph and
#' # add values of 1 to 6 to the
#' # first cycle, and values 7 to
#' # 12 in the second; select nodes
#' # `1` and `7`
#' graph <-
#'   create_graph() %>%
#'   add_cycle(
#'     n = 6,
#'     node_data = node_data(
#'       value = 1:6)) %>%
#'   add_cycle(
#'     n = 6,
#'     node_data = node_data(
#'       value = 7:12)) %>%
#'   select_nodes_by_id(
#'     nodes = c(1, 7))
#'
#' # Traverse outward, node-by-node
#' # from `1` and `7` until stopping
#' # at the first nodes where the
#' # `value` attribute is 5, 6, or 9;
#' # specify that we should only
#' # keep the finally traversed to
#' # nodes that satisfy the conditions
#' graph <-
#'   graph %>%
#'   trav_out_until(
#'     conditions =
#'       value %in% c(5, 6, 9),
#'     exclude_unmatched = TRUE)
#'
#' # Get the graph's node selection
#' graph %>%
#'   get_selection()
#' #> [1] 5 9
#' @importFrom rlang enquo UQ
#' @importFrom igraph all_simple_paths
#' @importFrom purrr map
#' @export trav_out_until
trav_out_until <- function(graph,
                           conditions,
                           max_steps = 30,
                           exclude_unmatched = TRUE,
                           add_to_selection = FALSE) {

  conditions <- rlang::enquo(conditions)

  # Get the time of function start
  time_function_start <- Sys.time()

  # Validation: Graph object is valid
  if (graph_object_valid(graph) == FALSE) {
    stop("The graph object is not valid.")
  }

  # Validation: Graph contains nodes
  if (graph_contains_nodes(graph) == FALSE) {
    stop("The graph contains no nodes, so, no traversal can occur.")
  }

  # Validation: Graph contains edges
  if (graph_contains_edges(graph) == FALSE) {
    stop("The graph contains no edges, so, no traversal can occur.")
  }

  # Validation: Graph object has valid node selection
  if (graph_contains_node_selection(graph) == FALSE) {
    stop("There is no selection of nodes, so, no traversal can occur.")
  }

  # Initialize the node stack and the step count
  node_stack <- vector(mode = "integer")
  step <- 0

  starting_nodes <- graph %>% get_selection()

  # Determine which nodes satisfy the conditions provided
  all_nodes_conditions_met <-
    get_node_ids(x = graph, conditions = rlang::UQ(conditions))

  if (exclude_unmatched & all(is.na(all_nodes_conditions_met))) {

    # Clear the active selection
    graph <- graph %>% clear_selection()

    # Remove action from graph log
    graph$graph_log <- graph$graph_log[-nrow(graph$graph_log), ]

    # Update the `graph_log` df with an action
    graph$graph_log <-
      add_action_to_log(
        graph_log = graph$graph_log,
        version_id = nrow(graph$graph_log) + 1,
        function_used = "trav_out_until",
        time_modified = time_function_start,
        duration = graph_function_duration(time_function_start),
        nodes = nrow(graph$nodes_df),
        edges = nrow(graph$edges_df))

    # Perform graph actions, if any are available
    if (nrow(graph$graph_actions) > 0) {
      graph <- graph %>% trigger_graph_actions()
    }

    # Write graph backup if the option is set
    if (graph$graph_info$write_backups) {
      save_graph_as_rds(graph = graph)
    }

    return(graph)
  }

  repeat {

    # Perform traversal
    graph <- graph %>% trav_out()

    # Remove action from graph log
    graph$graph_log <- graph$graph_log[-nrow(graph$graph_log), ]

    # If any nodes are `all_nodes_conditions_met` nodes,
    # deselect those nodes and save them in a stack
    if (any(graph %>% get_selection() %in% all_nodes_conditions_met)) {

      node_stack <-
        c(node_stack,
          intersect(
            graph %>% get_selection(),
            all_nodes_conditions_met))

      # Remove the node from the active selection
      graph <- graph %>% deselect_nodes(nodes = node_stack)

      # Remove action from graph log
      graph$graph_log <- graph$graph_log[-nrow(graph$graph_log), ]
    }

    if (all(is.na(get_selection(graph)))) break

    step <- step + 1

    if (step == max_steps) break
  }

  if (length(node_stack) > 0) {

    if (add_to_selection) {

      if (exclude_unmatched) {
        node_stack <- intersect(node_stack, all_nodes_conditions_met)
      }

      path_nodes <-
        node_stack %>%
        purrr::map(
          .f = function(x) {
            graph %>%
              to_igraph() %>%
              igraph::all_simple_paths(
                from = x,
                to = starting_nodes,
                mode = "in") %>%
              unlist() %>%
              as.integer()}) %>%
        unlist() %>%
        unique()

      graph <- graph %>% select_nodes_by_id(unique(path_nodes))

    } else {
      graph <- graph %>% select_nodes_by_id(unique(node_stack))
    }

    # Remove action from graph log
    graph$graph_log <- graph$graph_log[-nrow(graph$graph_log), ]

  } else if (length(node_stack) < 1) {

    if (exclude_unmatched & !all(is.na(get_selection(graph)))) {

      new_selection <- get_selection(graph)

      graph <-
        graph %>%
        clear_selection() %>%
        select_nodes_by_id(
          intersect(new_selection, all_nodes_conditions_met))

      # Remove actions from graph log (one entry for each of
      # the two selection calls above)
      graph$graph_log <- graph$graph_log[-nrow(graph$graph_log), ]
      graph$graph_log <- graph$graph_log[-nrow(graph$graph_log), ]
    }
  }

  # Update the `graph_log` df with an action
  graph$graph_log <-
    add_action_to_log(
      graph_log = graph$graph_log,
      version_id = nrow(graph$graph_log) + 1,
      function_used = "trav_out_until",
      time_modified = time_function_start,
      duration = graph_function_duration(time_function_start),
      nodes = nrow(graph$nodes_df),
      edges = nrow(graph$edges_df))

  # Perform graph actions, if any are available
  if (nrow(graph$graph_actions) > 0) {
    graph <- graph %>% trigger_graph_actions()
  }

  # Write graph backup if the option is set
  if (graph$graph_info$write_backups) {
    save_graph_as_rds(graph = graph)
  }

  graph
}
/R/trav_out_until.R
permissive
alarmcom/DiagrammeR
R
false
false
8,272
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/launchApp.R
\name{launchApp}
\alias{launchApp}
\title{Launches the TESTshinyapp app}
\usage{
launchApp()
}
\value{
A shiny application object.
}
\description{
Displays a shiny user interface.
}
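The Rd entry above documents `launchApp()` without an example; a minimal usage sketch (assuming the TESTshinyapp package is installed and exports the function as documented) might look like:

```r
# Hypothetical usage sketch -- assumes the TESTshinyapp package is installed
library(TESTshinyapp)

app <- launchApp()  # returns a shiny application object
# Printing the returned object (or passing it to shiny::runApp) starts the app
# print(app)
```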
/man/launchApp.Rd
no_license
FabienNicol/TESTshinyapp
R
false
true
267
rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/model_child_care_subsidy.R
\name{model_child_care_subsidy}
\alias{model_child_care_subsidy}
\title{Model Child Care Subsidy}
\usage{
model_child_care_subsidy(
  sample_file,
  Cbdc_hourly_cap = NULL,
  Fdc_hourly_cap = NULL,
  Oshc_hourly_cap = NULL,
  Ihc_hourly_cap = NULL,
  Annual_cap_income = NULL,
  Annual_cap_subsidy = NULL,
  Income_test_bracket_1 = NULL,
  Income_test_bracket_2 = NULL,
  Income_test_bracket_3 = NULL,
  Income_test_bracket_4 = NULL,
  Income_test_bracket_5 = NULL,
  Taper_1 = NULL,
  Taper_2 = NULL,
  Taper_3 = NULL,
  Activity_test_1_brackets = NULL,
  Activity_test_1_hours = NULL,
  calc_baseline_ccs = TRUE,
  return. = c("sample_file", "new_ccs", "sample_file.int")
)
}
\arguments{
\item{sample_file}{A sample file having the same variables as the
data.frame in the example.}

\item{Cbdc_hourly_cap, Fdc_hourly_cap, Oshc_hourly_cap, Ihc_hourly_cap}{(numeric)
The lower of `cost_hour` or the relevant `hourly_cap` will be used in the
calculation of the subsidy.}

\item{Annual_cap_income}{(numeric) The minimum family income from which the
`Annual_cap_subsidy` applies.}

\item{Annual_cap_subsidy}{(numeric) Amount at which annual subsidies are
capped for those who earn more than `Annual_cap_income`.}

\item{Income_test_bracket_1, Income_test_bracket_2, Income_test_bracket_3, Income_test_bracket_4, Income_test_bracket_5}{(numeric)
The steps at which income test 1 changes rates. Note the strange structure
\url{https://www.humanservices.gov.au/individuals/services/centrelink/child-care-subsidy/payments/how-your-income-affects-it}.}

\item{Taper_1, Taper_2, Taper_3}{(numeric) The proportion of the hourly cap
retained. Note that the rate only decreases between each odd bracket.}

\item{Activity_test_1_brackets}{(numeric vector) The activity levels at
which the activity test increases.}

\item{Activity_test_1_hours}{(numeric vector) The hours corresponding to
the step increase in `activity_test_1_brackets`.}

\item{calc_baseline_ccs}{(logical, default: \code{TRUE}) Should the current
child care subsidy be included as a column in the result?}

\item{return.}{What should the function return? One of \code{subsidy},
\code{sample_file}, or \code{sample_file.int}. If \code{subsidy}, the
subsidy received under the settings; if \code{sample_file}, the
\code{sample_file}, but with variables \code{subsidy} and possibly
\code{new_subsidy}; if \code{sample_file.int}, same as \code{sample_file}
but \code{new_subsidy} is coerced to integer.}
}
\description{
The child care subsidy if thresholds and rates are changed. (See
\code{\link{child_care_subsidy}}.)
}
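A sketch of how the documented function might be called. The parameter names come from the \usage block above; the sample data object and the specific threshold values are invented for illustration:

```r
# Illustrative only -- `my_sample_file` is a hypothetical data.frame carrying
# the variables the grattan package expects; the caps below are made up
library(grattan)

out <- model_child_care_subsidy(
  sample_file        = my_sample_file,
  Annual_cap_income  = 190e3,        # hypothetical income threshold
  Annual_cap_subsidy = 10e3,         # hypothetical annual subsidy cap
  calc_baseline_ccs  = TRUE,         # also compute the current-policy subsidy
  return.            = "sample_file" # return the input plus subsidy columns
)
```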
/grattan/man/model_child_care_subsidy.Rd
no_license
akhikolla/TestedPackages-NoIssues
R
false
true
2,685
rd
library(shiny)

ui <- navbarPage(
  "GAME",
  tabPanel(title = "BATTLE",
           sidebarLayout(
             sidebarPanel(
               sliderInput(inputId = "battle_num",
                           label = "Choose a number",
                           value = 19, min = 5, max = 99, step = 2),
               actionButton("click_battle", "Play")
             ),
             mainPanel(plotOutput("battle"))
           )
  ),
  tabPanel(title = "COMPUTER",
           sidebarLayout(
             sidebarPanel(
               sliderInput(inputId = "computer_num",
                           label = "Choose a number",
                           value = 19, min = 5, max = 99, step = 2),
               radioButtons("color", "Choose Your Color",
                            choices = c("BLACK", "WHITE"), selected = "BLACK"),
               radioButtons("level", "Choose Level",
                            choices = c("EASY", "HARD"), selected = "EASY"),
               actionButton("click_computer", "Play")
             ),
             mainPanel(plotOutput("computer"))
           )
  ),
  tabPanel(title = "RECORD",
           sidebarLayout(
             sidebarPanel(
               actionButton("refresh", "Refresh")
             ),
             mainPanel(tableOutput("record"))
           )
  ),
  tabPanel(title = "MINESWEEPER",
           sidebarLayout(
             sidebarPanel(
               sliderInput("WidthInput", "Width", 5, 50, 10),   # slider for width input
               sliderInput("LengthInput", "Length", 5, 50, 10), # slider for length input
               sliderInput("MinesInput", "Mines", 1, 100, 5),   # slider for mine count
               actionButton("start", "Start")
               # selectInput("restartOption", "Restart", c("SELECT CHOICE", "YES", "NO"))
             ),
             mainPanel(plotOutput("mine")) # show the main game panel
           )
  )
)

server <- function(input, output) {

  source("/gomoku/gomoku_packages.R")
  source("/gomoku/gomoku_backstage_functions.R")
  source("/gomoku/gomoku_battle.R")
  source("/gomoku/gomoku_easy.R")
  source("/gomoku/gomoku_main_function.R")
  source("/gomoku/gomoku_plot.R")
  source("/gomoku/gomoku_harder.R")
  source("/gomoku/gomoku_hard.R")
  source("/minesweeper/mine_sweeper.R")

  observeEvent(input$click_battle, {
    output$battle = renderPlot({
      gomoku_battle(n = input$battle_num)
    })
  })

  observeEvent(input$click_computer, {
    output$computer = renderPlot({
      if (input$color == "BLACK") {
        if (input$level == "EASY") {gomoku_easy(n = input$computer_num, choose = 1)}
        if (input$level == "HARD") {gomoku_hard(n = input$computer_num, choose = 1)}
      }
      if (input$color == "WHITE") {
        if (input$level == "EASY") {gomoku_easy(n = input$computer_num, choose = 2)}
        if (input$level == "HARD") {gomoku_hard(n = input$computer_num, choose = 2)}
      }
    })
  })

  # Trigger of the starting button; an observeEvent (rather than an
  # eventReactive whose result is never read) is needed so the render
  # actually runs when the button is clicked
  observeEvent(input$start, {
    output$mine = renderPlot({
      mine_sweeper(input$WidthInput, input$LengthInput, input$MinesInput)
    })
  })
}

shinyApp(ui, server)
/03_Fall_2017/05-little-games/minesweeper/app_gomoku.R
no_license
ranjankislay/Final_Projects
R
false
false
3,135
r
#
# Copyright 2007-2015 The OpenMx Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

setClass(Class = "MxRAMMetaData",
	representation = representation(
		A = "MxCharOrNumber",
		S = "MxCharOrNumber",
		F = "MxCharOrNumber",
		M = "MxCharOrNumber",
		depth = "integer"),
	contains = "MxBaseObjectiveMetaData")

setMethod("initialize", "MxRAMMetaData",
	function(.Object, A, S, F, M, depth) {
		.Object@A <- A
		.Object@S <- S
		.Object@F <- F
		.Object@M <- M
		.Object@depth <- depth
		return(.Object)
	}
)
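As a sketch of how this S4 class would be instantiated. It assumes the `MxCharOrNumber` class union and the `MxBaseObjectiveMetaData` class, which are defined elsewhere in OpenMx, are already available in the session; the slot values are purely illustrative:

```r
# Hypothetical construction -- the matrix names and depth are made-up values
meta <- new("MxRAMMetaData",
            A = "A", S = "S", F = "F", M = "M",
            depth = 2L)

# The custom initialize method copies each argument into its slot
meta@depth
```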
/R/MxRAMMetaData.R
permissive
RMKirkpatrick/OpenMx
R
false
false
1,038
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/build_demographics_race2010standard.R
\name{build_demographics_race2010standard}
\alias{build_demographics_race2010standard}
\title{Builds a synthetic variable for race (2010 standard)}
\usage{
build_demographics_race2010standard(Data)
}
\description{
Builds a synthetic variable for race (2010 standard)
}
/man/build_demographics_race2010standard.Rd
no_license
arthurwelle/harmonizePNAD
R
false
true
385
rd
# Setting working directory
setwd("EDA/project2")

# Reading the NEI & SCC data
NEI <- readRDS("summaryScc_PM25.rds")
SCC <- readRDS("Source_classification_Code.rds")

# Subsetting NEI data for Baltimore's FIPS code
baltimoreNEI <- NEI[NEI$fips == "24510", ]

# Aggregate (sum) the Baltimore emissions data by year
TotalBaltimore <- aggregate(Emissions ~ year, baltimoreNEI, sum)

png("plot2.png", width = 480, height = 480)

barplot(
  TotalBaltimore$Emissions,
  names.arg = TotalBaltimore$year,
  xlab = "Year",
  ylab = "PM2.5 Emissions (Tons)",
  main = "Total PM2.5 Emissions From all Baltimore City Sources",
  col = "green"
)

dev.off()
/plot2.R
no_license
kumar-amit05/Exploratory-Data-Analysis-R-Project
R
false
false
625
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/continuous_functions_1-1.R
\name{pchaz}
\alias{pchaz}
\title{Calculate survival for piecewise constant hazard}
\usage{
pchaz(Tint, lambda)
}
\arguments{
\item{Tint}{vector of length \eqn{k+1}, giving the boundaries of \eqn{k} time
intervals (presumably in days) with piecewise constant hazard. The boundaries
should be increasing, the first one should be \code{0}, and the last one
should be larger than the assumed trial duration.}

\item{lambda}{vector of length \eqn{k} with the piecewise constant hazards
for the intervals specified via \code{Tint}.}
}
\value{
A list with class \code{mixpch} containing the following components:
\describe{
\item{\code{haz}}{Values of the hazard function over discrete times t.}
\item{\code{cumhaz}}{Values of the cumulative hazard function over discrete times t.}
\item{\code{S}}{Values of the survival function over discrete times t.}
\item{\code{F}}{Values of the distribution function over discrete times t.}
\item{\code{t}}{Time points for which the values of the different functions are calculated.}
\item{\code{Tint}}{Input vector of boundaries of time intervals.}
\item{\code{lambda}}{Input vector of piecewise constant hazards.}
\item{\code{funs}}{A list with functions to calculate the hazard, cumulative
hazard, survival, pdf and cdf over arbitrary continuous times.}
}
}
\description{
Calculates the hazard, cumulative hazard, survival and distribution function
based on hazards that are constant over pre-specified time intervals.
}
\details{
Given \eqn{k} time intervals \eqn{[t_{j-1},t_j), j=1,\dots,k} with
\eqn{0 = t_0 < t_1 \dots < t_k}, the function assumes a constant hazard
\eqn{\lambda_{j}} on each interval.
The resulting hazard function is
\eqn{\lambda(t) = \sum_{j=1}^k \lambda_{j} {1}_{t \in [t_{j-1},t_j)}},
the cumulative hazard function is
\eqn{\Lambda(t) = \int_0^t \lambda(s) ds = \sum_{j=1}^k \left( (t_j-t_{j-1})\lambda_{j} {1}_{t > t_j} + (t-t_{j-1}) \lambda_{j} {1}_{t \in [t_{j-1},t_j)} \right)}
and the survival function is \eqn{S(t) = e^{-\Lambda(t)}}.
The output includes the function values calculated for all integer time points
between 0 and the maximum of \code{Tint}. Additionally, a list of functions is
given to calculate the values at any arbitrary point \eqn{t}.
}
\examples{
pchaz(Tint = c(0, 40, 100), lambda = c(.02, .05))
}
\references{
Robin Ristl, Nicolas Ballarini, Heiko Götte, Armin Schüler, Martin Posch,
Franz König. Delayed treatment effects, treatment switching and heterogeneous
patient populations: How to design and analyze RCTs in oncology.
Pharmaceutical Statistics. 2021; 20(1):129-145.
}
\seealso{
\code{\link{subpop_pchaz}}, \code{\link{pop_pchaz}}, \code{\link{plot.mixpch}}
}
\author{
Robin Ristl, \email{robin.ristl@meduniwien.ac.at}, Nicolas Ballarini
}
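As a quick sanity check of the \details formulas, the cumulative hazard and survival function for a piecewise constant hazard can be evaluated in a few lines of base R. This is a minimal sketch of the math using the values from \examples, not the package's implementation:

```r
# Piecewise constant hazard from the \examples call:
# two intervals [0, 40) and [40, 100) with hazards 0.02 and 0.05.
Tint   <- c(0, 40, 100)
lambda <- c(0.02, 0.05)

# Lambda(t): each interval contributes lambda_j times its overlap with [0, t)
cumhaz <- function(t) sum(lambda * pmax(0, pmin(t, Tint[-1]) - head(Tint, -1)))
surv   <- function(t) exp(-cumhaz(t))  # S(t) = exp(-Lambda(t))

cumhaz(50)  # 40 * 0.02 + 10 * 0.05 = 1.3
surv(50)    # exp(-1.3)
```

At t = 50 the first interval is fully elapsed (contributing 40 * 0.02) and the second contributes for the remaining 10 days (10 * 0.05), matching the indicator terms in the \eqn{\Lambda(t)} sum above.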
/man/pchaz.Rd
no_license
cran/nph
R
false
true
2,841
rd
context("eol_dataobjects")

test_that("eol_dataobjects with taxonomy TRUE", {
  skip_on_cran()

  vcr::use_cassette("eol_dataobjects", {
    temp <- sm(eol_dataobjects(id = "7561533", verbose = FALSE))
  }, preserve_exact_body_bytes = TRUE)

  expect_is(temp, "list")
  expect_is(temp$taxonconcepts$scientificname, "character")
  expect_is(temp$taxonconcepts$identifier, "integer")
  expect_is(temp$taxonconcepts, "data.frame")
})

test_that("eol_dataobjects with taxonomy FALSE", {
  skip_on_cran()

  vcr::use_cassette("eol_dataobjects_taxonomy_param", {
    temp2 <- sm(eol_dataobjects(id = "7561533", taxonomy = FALSE, verbose = FALSE))
  })

  expect_is(temp2, "list")
  expect_null(temp2$taxonconcepts$scientificname)
  expect_null(temp2$taxonconcepts$identifier)
})

test_that("eol_dataobjects fails well", {
  skip_on_cran()

  expect_error(eol_dataobjects(), "argument \"id\" is missing")
})
/tests/testthat/test-eol_dataobjects.R
permissive
ropensci/taxize
R
false
false
901
r
# name        : utils.R
# description : Utility file (to load libraries etc.)
# maintainer  : Arnob L. Alam <aa2288a@student.american.edu>
# updated     : 2016-05-11
#
# Utility file to load required libraries, helpful functions, etc.

message("Processing util files")

# Load the logging library
library(futile.logger)

flog.info("Loading required libraries")
library(igraph)
library(dplyr)

# Infix string concatenation operator
`%+%` <- function(a, b) paste0(a, b)

sim_size <- 100E3

flog.info("Simulation Size is " %+% sim_size)
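The custom `%+%` operator defined above is just `paste0` wrapped as an infix operator, which lets log messages be assembled without nested calls. A small illustration (note that R prints large doubles in scientific notation, so the `sim_size` of `100E3` is rendered as `1e+05` in the logged message):

```r
# Same definition as in utils.R: infix wrapper around paste0
`%+%` <- function(a, b) paste0(a, b)

"Simulation Size is " %+% 100E3  # "Simulation Size is 1e+05"
"risk" %+% "_" %+% "sharing"     # "risk_sharing"
```

To log the number without scientific notation, the value could be passed through `format(sim_size, scientific = FALSE)` first.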
/R/utils.R
no_license
arnoblalam/risk_sharing
R
false
false
501
r
# Data from WHO
# http://apps.who.int/gho/data/view.main.CBDR2040
library(openxlsx)
library(SmarterPoland)
library(ggplot2)  # added: needed for the ggplot()/ggsave() calls below
head(countries)

# dotplot
plD <- ggplot(countries, aes(x = continent, y = birth.rate)) +
  geom_dotplot(binaxis = "y", stackdir = "center", binwidth = 0.7) +
  theme_bw()
ggsave(plD, filename = "geomDotplot.pdf", width = 7, height = 7, useDingbats = FALSE)

# scatterplot
plP <- ggplot(countries, aes(x = birth.rate, y = death.rate)) +
  geom_point() +
  theme_bw()
ggsave(plP, filename = "geomPoint.pdf", width = 7, height = 7, useDingbats = FALSE)

# jitter
plJ <- ggplot(countries, aes(x = continent, y = birth.rate)) +
  geom_jitter(position = position_jitter(width = .2)) +
  theme_bw()
ggsave(plJ, filename = "geomJitter.pdf", width = 7, height = 7, useDingbats = FALSE)

# Various aesthetic mappings
plP <- ggplot() +
  geom_point(data = countries,
             aes(x = birth.rate, y = death.rate, shape = continent), size = 4) +
  theme_bw() +
  scale_shape_manual(values = c("F", "A", "S", "E", "O")) +
  theme(legend.position = c(0.9, 0.17))
plP
ggsave(plP, filename = "geomPointShape.pdf", width = 7, height = 7, useDingbats = FALSE)

plC <- ggplot() +
  geom_point(data = countries,
             aes(x = birth.rate, y = death.rate, color = continent), size = 4, shape = 19) +
  theme_bw() +
  scale_color_brewer(type = "qual", palette = 6) +
  theme(legend.position = c(0.9, 0.17))
plC
ggsave(plC, filename = "geomPointColor.pdf", width = 7, height = 7, useDingbats = FALSE)

library(scales)
plS <- ggplot() +
  geom_point(data = countries,
             aes(x = birth.rate, y = death.rate, size = population)) +
  scale_size_continuous(trans = "sqrt", label = comma, limits = c(0, 1500000)) +
  theme_bw() +
  theme(legend.position = "none")
ggsave(plS, filename = "geomPointSize.pdf", width = 7, height = 7, useDingbats = FALSE)
/GrammarOfGraphics/birth_death_rate/punkty.R
no_license
pbiecek/Eseje
R
false
false
1,754
r
library(dplyr)
library(sf)

# Loading topography data frame ----
topoESQ_comp <- read.csv("Files/Topo_estacas_E.csv") %>%
  select(-1) %>%
  rename(height = Altura, side = Lado)

# Joins by topography ----
Estacas_1m <- as_tibble(st_read("Files/ESTACAS_1m_EFC_0-871_Policonic_SIRGAS.shp"))

ESQ <- left_join(topoESQ_comp, Estacas_1m, by = c("from_km" = "km_calc")) %>%
  left_join(., Estacas_1m, by = c("to_km" = "km_calc")) %>%
  filter(!is.na(ID.y)) %>%
  rename(ID_from = ID.x,
         ID_to = ID.y,
         geometry_from = geometry.x,
         geometry_to = geometry.y) %>%
  mutate(X_from = do.call(rbind, st_geometry(geometry_from))[, 1],
         Y_from = do.call(rbind, st_geometry(geometry_from))[, 2],
         X_to = do.call(rbind, st_geometry(geometry_to))[, 1],
         Y_to = do.call(rbind, st_geometry(geometry_to))[, 2]) %>%
  select(-starts_with("geometry"))

ESQ_coords_from <- data.frame(X = ESQ$X_from, Y = ESQ$Y_from)
ESQ_coords_to <- data.frame(X = ESQ$X_to, Y = ESQ$Y_to)

# Build one linestring per from/to coordinate pair
ESQ_l_sf <- vector("list", nrow(ESQ_coords_from))
for (i in seq_along(ESQ_l_sf)) {
  ESQ_l_sf[[i]] <- st_linestring(as.matrix(rbind(ESQ_coords_from[i, ], ESQ_coords_to[i, ])))
}

ESQ_l_sfc <- st_sfc(ESQ_l_sf, crs = 5880)

plot(ESQ_l_sfc)
plot(Estacas_1m)

st_bind_cols(ESQ_l_sfc, ESQ) %>%
  st_write(., dsn = "Files/Topo_esq.shp", delete_dsn = TRUE)
/Topography.R
no_license
rdornas/raileco
R
false
false
1,364
r
\name{evilRegex}
\alias{evilRegex}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Evil Regular Expressions
}
\description{
%%  ~~ A concise (1-5 lines) description of what the function does. ~~
}
\usage{
evilRegex(x)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{x}{
%%     ~~Describe \code{x} here~~
}
}
\details{
%%  ~~ If necessary, more details than the description above ~~
}
\value{
%%  ~Describe the value returned
%%  If it is a LIST, use
%%  \item{comp1 }{Description of 'comp1'}
%%  \item{comp2 }{Description of 'comp2'}
%% ...
}
\references{
%% ~put references to the literature/web site here ~
}
\author{
%%  ~~who you are~~
}
\note{
%%  ~~further notes~~
}
%% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
%% ~~objects to See Also as \code{\link{help}}, ~~~
}
\examples{
##---- Should be DIRECTLY executable !! ----
##-- ==> Define data, use random,
##--    or do  help(data=index)  for the standard data sets.

## The function is currently defined as
function (x)
{
  }
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory (show via RShowDoc("KEYWORDS")):
% \keyword{ ~kwd1 }
% \keyword{ ~kwd2 }
% Use only one keyword per line.
% For non-standard keywords, use \concept instead of \keyword:
% \concept{ ~cpt1 }
% \concept{ ~cpt2 }
% Use only one concept per line.
/evilRegex/man/evilRegex.Rd
no_license
Mark-Nawar/evilRegex
R
false
false
1,471
rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/analytics-report.R
\name{sf_create_report}
\alias{sf_create_report}
\title{Create a report}
\usage{
sf_create_report(
  name = NULL,
  report_type = NULL,
  report_metadata = NULL,
  verbose = FALSE
)
}
\arguments{
\item{name}{\code{character}; a user-specified name for the report.}

\item{report_type}{\code{character}; a character representing the type of
report to retrieve the metadata information on. A list of valid report types
that can be created using this function is available in the
\code{reportTypes.type} column of the results returned by
\code{\link{sf_list_report_types}} (e.g. \code{AccountList},
\code{AccountContactRole}, \code{OpportunityHistory}, etc.).}

\item{report_metadata}{\code{list}; a list representing the properties to
create the report with. The names of the list must be one or more of the 3
accepted metadata properties: \code{reportMetadata},
\code{reportTypeMetadata}, \code{reportExtendedMetadata}.}

\item{verbose}{\code{logical}; an indicator of whether to print additional
detail for each API call, which is useful for debugging. More specifically,
when set to \code{TRUE} the URL, header, and body will be printed for each
request, along with additional diagnostic information where available.}
}
\value{
\code{list} representing the newly created report with up to 4 properties
that describe the report:
\describe{
  \item{attributes}{Report type along with the URL to retrieve common objects and joined metadata.}
  \item{reportMetadata}{Unique identifiers for groupings and summaries.}
  \item{reportTypeMetadata}{Fields in each section of a report type plus filter information for those fields.}
  \item{reportExtendedMetadata}{Additional information about summaries and groupings.}
}
}
\description{
\ifelse{html}{\out{<a href='https://www.tidyverse.org/lifecycle/#experimental'><img src='figures/lifecycle-experimental.svg' alt='Experimental lifecycle'></a>}}{\strong{Experimental}}

Create a new report using a POST request. To create a report, you only have
to specify a name and report type; all other metadata properties are
optional. It is recommended to use the metadata from existing reports pulled
using \code{\link{sf_describe_report}} as a guide on how to specify the
properties of a new report.
}
\section{Salesforce Documentation}{
\itemize{
  \item \href{https://developer.salesforce.com/docs/atlas.en-us.api_analytics.meta/api_analytics/analytics_api_report_example_post_report.htm}{Documentation}
}
}
\examples{
\dontrun{
# creating a blank report using just the name and type
my_new_report <- sf_create_report("Top Accounts Report", "AccountList")

# creating a report with additional metadata by grabbing an existing report
# and modifying it slightly (only the name in this case)

# first, grab all possible reports in your Org
all_reports <- sf_query("SELECT Id, Name FROM Report")

# second, get the id of the report to copy
this_report_id <- all_reports$Id[1]

# third, pull down its metadata and update the name
report_describe_list <- sf_describe_report(this_report_id)
report_describe_list$reportMetadata$name <- "TEST API Report Creation"

# fourth, create the report by passing the metadata
my_new_report <- sf_create_report(report_metadata = report_describe_list)
}
}
\seealso{
Other Report functions:
\code{\link{sf_copy_report}()},
\code{\link{sf_delete_report}()},
\code{\link{sf_describe_report_type}()},
\code{\link{sf_describe_report}()},
\code{\link{sf_execute_report}()},
\code{\link{sf_list_report_fields}()},
\code{\link{sf_list_report_filter_operators}()},
\code{\link{sf_list_report_types}()},
\code{\link{sf_list_reports}()},
\code{\link{sf_query_report}()},
\code{\link{sf_run_report}()},
\code{\link{sf_update_report}()}
}
\concept{Report functions}
/man/sf_create_report.Rd
permissive
carlganz/salesforcer
R
false
true
3,824
rd
% Generated by roxygen2 (4.0.2): do not edit by hand
\name{summarizeAlgoPerf}
\alias{summarizeAlgoPerf}
\title{Creates summary data.frame for algorithm performance values across all instances.}
\usage{
summarizeAlgoPerf(asscenario, measure)
}
\arguments{
\item{asscenario}{[\code{\link{ASScenario}}]\cr
Algorithm selection scenario.}

\item{measure}{[\code{character(1)}]\cr
Selected measure.
Default is first measure in scenario.}
}
\value{
[\code{data.frame}].
}
\description{
Creates summary data.frame for algorithm performance values across all instances.
}
/algselbench/man/summarizeAlgoPerf.Rd
no_license
berndbischl/coseal-algsel-benchmark-repo
R
false
false
564
rd
#' Fast GlmmSeq Analysis
#'
#' @author George Gruehnagen (Georgia Institute of Technology) and Zachary Johnson (Emory)
#'
#' @param obj Seurat object
#' @param cells cells to run the analysis on
#' @param subset.genes.by genes to subset results by
#' @param sequenced.genes the gene_list parameter from dndscv, which is a list of genes to restrict the analysis (use for targeted sequencing studies)
#' @param ... other parameters passed to dndscv; all defaults from dndscv are used except max_muts_per_gene_per_sample, which is set to Infinity
#'
#' @return coselens returns a list containing six objects: (1) "substitutions", a summary table of gene-level conditional selection for single-nucleotide substitutions (including missense, nonsense, and essential splice site mutations); (2) "indels", same for small indels; (3) "missense_sub", same for missense substitutions only; (4) "truncating_sub", same for truncating substitutions only (nonsense and essential splice site mutations); (5) "overall_mut", a summary table with the combined analysis of single-nucleotide substitutions and indels; and (6) "dndscv", a list of objects with the complete output of (non-conditional) selection analyses separately run on each group of samples, as provided by the dndscv package. The first table should be sufficient for most users. If interested in indels, please note that the indel analysis uses a different null model that makes the test for conditional selection notably less sensitive than in the case of substitutions. Such lower sensitivity also extends to the "overall_mut" table.
#' The dataframes (1-5) contain the following:
#' @return - gene_name: name of the gene that was tested for conditional selection
#' @return - num.driver.group1: estimate of the number of drivers per sample per gene in group 1
#' @return - num.driver.group2: estimate of the number of drivers per sample per gene in group 2
#' @return - Delta.Nd: absolute difference in the average number of driver mutations per sample (group 1 minus group 2)
#' @return - classification: classification of conditional selection. The most frequent classes are strict dependence (drivers only in group 1), facilitation (drivers more frequent in group 1), independence, inhibition (drivers less frequent in group 1), and strict inhibition (drivers absent from group 1). If negative selection is present, other possibilities are strict dependence with sign change (drivers positively selected in group 1 but negatively selected in group 2), strict inhibition with sign change (drivers positively selected in group 2 but negatively selected in group 1), aggravation (purifying selection against mutations becomes stronger in group 1), and relaxation (purifying selection against mutations becomes weaker in group 1).
#' @return - dependency: dependency index, measuring the association between the grouping variable (group 1 or 2) and the average number of drivers observed in a gene. It serves as a quantitative measure of the qualitative effect described in "classification". In the most common cases, a value of 1 indicates strict dependence or inhibition (drivers only observed in one group) and a value of 0 (or NA) indicates independence.
#' @return - pval: p-value for conditional selection
#' @return - qval: q-value for conditional selection using Benjamini-Hochberg correction of false discovery rate.
#'
#' @return The "dndscv" list contains two objects. Please, read the documentation of the dndscv package for further information about these objects.
#' @return - dndscv_group1: output of dndscv for group 1
#' @return - dndscv_group2: output of dndscv for group 2
#'
#' @export
fastGlmm = function(obj, cells, my_formula, return_means = TRUE, do_contrasts = FALSE,
                    my_formula2 = NULL, calc_pct = FALSE, num_cores = 24, out_path = NULL) {
  this_counts = obj@assays$RNA@counts[, cells]
  this_meta = data.frame(obj@meta.data[cells, ])
  if (!"pair" %in% colnames(this_meta)) {
    message("Required metadata column 'pair' not found in metadata. Exiting.")
    return(NULL)
  }

  # Keep only genes detected in every pair
  gene_not_present_in_pairs = lapply(unique(this_meta$pair),
                                     function(x) rowSums(this_counts[, which(this_meta$pair == x)]) == 0)
  gene_not_present_in_pairs = Reduce(`+`, gene_not_present_in_pairs)
  genes_present_in_all_pairs = names(gene_not_present_in_pairs[which(gene_not_present_in_pairs == 0)])
  this_counts = this_counts[genes_present_in_all_pairs, ]

  this_max_cluster_size = max(table(this_meta$seurat_clusters))  # TODO make clusters a variable
  multicoreParam <- MulticoreParam(workers = num_cores)

  # Estimate size factors and gene-wise dispersions via DESeq2/scran
  if (is.null(my_formula2)) { my_formula2 = ~ cond }
  dds = DESeqDataSetFromMatrix(countData = this_counts, colData = this_meta, design = my_formula2)
  size_factors = calculateSumFactors(this_counts, BPPARAM = multicoreParam,
                                     max.cluster.size = this_max_cluster_size, clusters = NULL,
                                     ref.clust = NULL, positive = TRUE, scaling = NULL,
                                     min.mean = NULL, subset.row = NULL)
  sizeFactors(dds) = size_factors
  dds = estimateDispersions(dds, fitType = "glmGamPoi", useCR = TRUE, maxit = 100,
                            weightThreshold = 0.01, quiet = FALSE, modelMatrix = NULL,
                            minmu = 1e-06)
  disp = as.matrix(mcols(dds))
  disp = disp[, 11]
  names(disp) = genes_present_in_all_pairs

  results <- glmmSeq(my_formula, id = "subject", countdata = this_counts, metadata = this_meta,
                     dispersion = disp, sizeFactors = size_factors, removeSingles = FALSE,
                     progress = TRUE, cores = num_cores)
  results_df = data.frame(summary(results))
  results_df$gene = rownames(results_df)

  # Calculate pct.1 and pct.2
  if (calc_pct) {
    plk_num = rowSums(this_counts[, which(this_meta$cond == "plk")])
    con_num = rowSums(this_counts[, which(this_meta$cond == "con")])
    results_df$num_plk = plk_num
    results_df$con_num = con_num
    results_df$pct_plk = plk_num / length(which(this_meta$cond == "plk"))
    results_df$pct_con = con_num / length(which(this_meta$cond == "con"))
    results_df$up_pct = results_df$pct_plk
    results_df$up_pct[which(results_df$condplk < 0)] = results_df$pct_con[which(results_df$condplk < 0)]
  }

  if (do_contrasts) {
    my_contrasts = parallel::mclapply(1:nrow(results_df), function(x) {
      this_gene = rownames(results_df)[x]
      fit <- glmmRefit(results, gene = this_gene)
      this_df = data.frame(emmeans::emmeans(fit, specs = pairwise ~ exp)$contrasts)
      this_df$gene = this_gene
      return(this_df)
    }, mc.cores = num_cores)
    my_contrasts = data.frame(data.table::rbindlist(my_contrasts))
    # my_contrasts = do.call('rbind', my_contrasts)
    # my_contrasts_melt = reshape2::melt(my_contrasts, id.var = "gene")
    # my_contrasts_df = reshape2::dcast(my_contrasts, gene ~ contrast, value.var = "p.value")
    my_contrasts_p = reshape2::dcast(my_contrasts, gene ~ contrast, value.var = "p.value")
    my_contrasts_est = reshape2::dcast(my_contrasts, gene ~ contrast, value.var = "estimate")
    results_df[, paste0("p - ", colnames(my_contrasts_p)[2:ncol(my_contrasts_p)])] =
      my_contrasts_p[match(rownames(results_df), my_contrasts_p$gene),
                     colnames(my_contrasts_p)[2:ncol(my_contrasts_p)]]
    results_df[, paste0("estimate - ", colnames(my_contrasts_est)[2:ncol(my_contrasts_est)])] =
      my_contrasts_est[match(rownames(results_df), my_contrasts_est$gene),
                       colnames(my_contrasts_est)[2:ncol(my_contrasts_est)]]
  }

  if (return_means) {
    mean_cols = colnames(results@predict)[which(startsWith(colnames(results@predict), "y_"))]
    results_df[, mean_cols] = results@predict[, mean_cols]
  }

  if (!is.null(out_path)) { write.csv(results_df, out_path) }
  return(results_df)
}
/plk_glmmseq.R
no_license
ggruenhagen3/tooth_scripts
R
false
false
7,672
r
#' Fast GlmmSeq Analysis
#'
#' @author George Gruehnagen (Georgia Institute of Technology) and Zachary Johnson (Emory)
#'
#' @param obj Seurat object
#' @param cells cells to run the analysis on
#' @param subset.genes.by genes to subset results by
#' @param sequenced.genes the gene_list parameter from dndscv, which is a list of genes to restrict the analysis (use for targeted sequencing studies)
#' @param ... other parameters passed to dndscv; all defaults from dndscv are used except max_muts_per_gene_per_sample, which is set to Infinity
#'
#' @return coselens returns a list containing six objects: (1) "substitutions", a summary table of gene-level conditional selection for single-nucleotide substitutions (including missense, nonsense, and essential splice site mutations); (2) "indels", same for small indels; (3) "missense_sub", same for missense substitutions only; (4) "truncating_sub", same for truncating substitutions only (nonsense and essential splice site mutations); (5) "overall_mut", a summary table with the combined analysis of single-nucleotide substitutions and indels; and (6) "dndscv", a list of objects with the complete output of (non-conditional) selection analyses separately run on each group of samples, as provided by the dndscv package. The first table should be sufficient for most users. If interested in indels, please note that the indel analysis uses a different null model that makes the test for conditional selection notably less sensitive than in the case of substitutions. Such lower sensitivity also extends to the "overall_mut" table. The data frames (1-5) contain the following:
#' @return - gene_name: name of the gene that was tested for conditional selection
#' @return - num.driver.group1: estimate of the number of drivers per sample per gene in group 1
#' @return - num.driver.group2: estimate of the number of drivers per sample per gene in group 2
#' @return - Delta.Nd: absolute difference in the average number of driver mutations per sample (group 1 minus group 2)
#' @return - classification: classification of conditional selection. The most frequent classes are strict dependence (drivers only in group 1), facilitation (drivers more frequent in group 1), independence, inhibition (drivers less frequent in group 1), and strict inhibition (drivers absent from group 1). If negative selection is present, other possibilities are strict dependence with sign change (drivers positively selected in group 1 but negatively selected in group 2), strict inhibition with sign change (drivers positively selected in group 2 but negatively selected in group 1), aggravation (purifying selection against mutations becomes stronger in group 1), and relaxation (purifying selection against mutations becomes weaker in group 1).
#' @return - dependency: dependency index, measuring the association between the grouping variable (group 1 or 2) and the average number of drivers observed in a gene. It serves as a quantitative measure of the qualitative effect described in "classification". In the most common cases, a value of 1 indicates strict dependence or inhibition (drivers only observed in one group) and a value of 0 (or NA) indicates independence.
#' @return - pval: p-value for conditional selection
#' @return - qval: q-value for conditional selection using Benjamini-Hochberg correction of the false discovery rate
#'
#' @return The "dndscv" list contains two objects. Please read the documentation of the dndscv package for further information about them.
#' @return - dndscv_group1: output of dndscv for group 1
#' @return - dndscv_group2: output of dndscv for group 2
#'
#' @export
fastGlmm = function(obj, cells, my_formula, return_means = TRUE, do_contrasts = FALSE,
                    my_formula2 = NULL, calc_pct = FALSE, num_cores = 24, out_path = NULL) {
    this_counts = obj@assays$RNA@counts[, cells]
    this_meta = data.frame(obj@meta.data[cells, ])
    if (!"pair" %in% colnames(this_meta)) {
        message("Required metadata column 'pair' not found in metadata. Exiting.")
        return(NULL)
    }

    # Keep only genes detected in every pair
    gene_not_present_in_pairs = lapply(unique(this_meta$pair),
                                       function(x) rowSums(this_counts[, which(this_meta$pair == x)]) == 0)
    gene_not_present_in_pairs = Reduce(`+`, gene_not_present_in_pairs)
    genes_present_in_all_pairs = names(gene_not_present_in_pairs[which(gene_not_present_in_pairs == 0)])
    this_counts = this_counts[genes_present_in_all_pairs, ]

    this_max_cluster_size = max(table(this_meta$seurat_clusters))  # TODO: make clusters a variable
    multicoreParam <- MulticoreParam(workers = num_cores)
    if (is.null(my_formula2)) { my_formula2 = ~ cond }

    # Size factors and dispersions
    dds = DESeqDataSetFromMatrix(countData = this_counts, colData = this_meta, design = my_formula2)
    size_factors = calculateSumFactors(this_counts, BPPARAM = multicoreParam,
                                       max.cluster.size = this_max_cluster_size, clusters = NULL,
                                       ref.clust = NULL, positive = TRUE, scaling = NULL,
                                       min.mean = NULL, subset.row = NULL)
    sizeFactors(dds) = size_factors
    dds = estimateDispersions(dds, fitType = "glmGamPoi", useCR = TRUE, maxit = 100,
                              weightThreshold = 0.01, quiet = FALSE, modelMatrix = NULL, minmu = 1e-06)
    disp = as.matrix(mcols(dds))
    disp = disp[, 11]
    names(disp) = genes_present_in_all_pairs

    # Fit the mixed models
    results <- glmmSeq(my_formula, id = "subject", countdata = this_counts, metadata = this_meta,
                       dispersion = disp, sizeFactors = size_factors, removeSingles = FALSE,
                       progress = TRUE, cores = num_cores)
    results_df = data.frame(summary(results))
    results_df$gene = rownames(results_df)

    # Calculate pct.1 and pct.2
    if (calc_pct) {
        plk_num = rowSums(this_counts[, which(this_meta$cond == "plk")])
        con_num = rowSums(this_counts[, which(this_meta$cond == "con")])
        results_df$num_plk = plk_num
        results_df$con_num = con_num
        results_df$pct_plk = plk_num / length(which(this_meta$cond == "plk"))
        results_df$pct_con = con_num / length(which(this_meta$cond == "con"))
        results_df$up_pct = results_df$pct_plk
        results_df$up_pct[which(results_df$condplk < 0)] = results_df$pct_con[which(results_df$condplk < 0)]
    }

    if (do_contrasts) {
        my_contrasts = parallel::mclapply(1:nrow(results_df), function(x) {
            this_gene = rownames(results_df)[x]
            fit <- glmmRefit(results, gene = this_gene)
            this_df = data.frame(emmeans::emmeans(fit, specs = pairwise ~ exp)$contrasts)
            this_df$gene = this_gene
            return(this_df)
        }, mc.cores = num_cores)
        my_contrasts = data.frame(data.table::rbindlist(my_contrasts))
        # my_contrasts = do.call('rbind', my_contrasts)
        # my_contrasts_melt = reshape2::melt(my_contrasts, id.var = "gene")
        # my_contrasts_df = reshape2::dcast(my_contrasts, gene ~ contrast, value.var = "p.value")
        my_contrasts_p   = reshape2::dcast(my_contrasts, gene ~ contrast, value.var = "p.value")
        my_contrasts_est = reshape2::dcast(my_contrasts, gene ~ contrast, value.var = "estimate")
        results_df[, paste0("p - ", colnames(my_contrasts_p)[2:ncol(my_contrasts_p)])] =
            my_contrasts_p[match(rownames(results_df), my_contrasts_p$gene),
                           colnames(my_contrasts_p)[2:ncol(my_contrasts_p)]]
        results_df[, paste0("estimate - ", colnames(my_contrasts_est)[2:ncol(my_contrasts_est)])] =
            my_contrasts_est[match(rownames(results_df), my_contrasts_est$gene),
                             colnames(my_contrasts_est)[2:ncol(my_contrasts_est)]]
    }

    if (return_means) {
        mean_cols = colnames(results@predict)[which(startsWith(colnames(results@predict), "y_"))]
        results_df[, mean_cols] = results@predict[, mean_cols]
    }

    if (!is.null(out_path)) { write.csv(results_df, out_path) }
    return(results_df)
}
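A hypothetical call, for orientation only — `seu` and the metadata columns (`pair`, `cond`, `subject`, `seurat_clusters`) are assumptions inferred from the function body, not a documented example:

```r
# Sketch only: assumes a Seurat object 'seu' whose meta.data contains
# 'pair', 'cond', 'subject', and 'seurat_clusters' columns.
res <- fastGlmm(
  obj        = seu,
  cells      = colnames(seu),
  my_formula = ~ cond + (1 | subject),  # glmmSeq formula with a per-subject random intercept
  calc_pct   = TRUE,
  num_cores  = 4,
  out_path   = "fastGlmm_results.csv"   # hypothetical output path
)
```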
Rates.BTC <- list()
for (n in names(Rates)) {
  Rates.BTC[[n]] <- Rates[[n]]$BTC[
    , .(Cur = as.character(n),
        DateTime.c = as.character(DateTime),
        RateSat = as.integer(as.numeric(Close) * 1e8),
        RateBTC = as.numeric(Close))]
}
Rates.BTC <- rbindlist(Rates.BTC)
Rates.BTC[, DateTime := as.POSIXct(DateTime.c)]
Rates.BTC[, DateTime.c := NULL]

Rates.USD <- list()
for (n in names(Rates)) {
  Rates.USD[[n]] <- Rates[[n]]$USD[
    , .(Cur = as.character(n),
        DateTime.c = as.character(DateTime),
        RateUSD = as.numeric(Close))]
}
Rates.USD <- rbindlist(Rates.USD)
Rates.USD[, DateTime := as.POSIXct(DateTime.c)]
Rates.USD[, DateTime.c := NULL]

d0 <- min(Rates.BTC$DateTime)
Rates.BTC[Cur == "BTC", DateTime := d0]
Rates.USD[Cur == "USD", DateTime := d0]
setkey(Rates.BTC, Cur, DateTime)
setkey(Rates.USD, Cur, DateTime)

# Usage:
# add_rates(Combined.D[Cur=="NEO"], Rates.USD[Cur=="NEO"], "NEO")
add_rates <- function(DT, Rates, cur) {
  C <- DT[order(-DateTime)]
  R <- Rates[Cur == cur]
  setkey(C, DateTime)
  setkey(R, DateTime)
  RC <- R[C, roll = -Inf]
  RC[, Cur := NULL]
  return(RC)
}
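`add_rates` hinges on `R[C, roll = -Inf]`, data.table's backward rolling join: each row of `C` picks up the next available rate at or after its `DateTime` (next observation carried backward). A minimal sketch of that behavior, with made-up timestamps and a hypothetical `Rate` column:

```r
library(data.table)

# Hypothetical rates known at 10:00 and 12:00
R <- data.table(DateTime = as.POSIXct(c("2021-01-01 10:00", "2021-01-01 12:00"), tz = "UTC"),
                Rate = c(100, 110))
# Observations falling just before each rate timestamp
C <- data.table(DateTime = as.POSIXct(c("2021-01-01 09:30", "2021-01-01 11:30"), tz = "UTC"),
                Qty = c(1, 2))
setkey(R, DateTime)
setkey(C, DateTime)

# roll = -Inf rolls the *next* rate backward onto each observation:
# 09:30 receives the 10:00 rate (100), 11:30 the 12:00 rate (110).
R[C, roll = -Inf]
```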
/RatesAgg.R
no_license
Ming-Tang/CryptoBalances
R
false
false
1,099
r
testlist <- list(
  Rs   = numeric(0),
  atmp = numeric(0),
  relh = c(-5.59219752037005e+72, -1.06823407896587e-87, -4.25255837648531e+71,
           1.7951202446173e-155, -8.18790345762274e-12, -4.84876319029531e+202,
           -3.96895588925774e+304, -1.15261897385914e+41, -4.16286459815484e-108,
           -2.95899697222989e+94, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0),
  temp = -Inf
)
result <- do.call(meteor:::ET0_Makkink, testlist)
str(result)
/meteor/inst/testfiles/ET0_Makkink/AFL_ET0_Makkink/ET0_Makkink_valgrind_files/1615860188-test.R
no_license
akhikolla/updatedatatype-list3
R
false
false
411
r
# File-Name:      r_tests.R
# Date:           2009-07-23
# Author:         Drew Conway
# Purpose:        Test the speed of R's igraph package on CPU-taxing network operations
# Data Used:      BA_10000.txt
# Packages Used:  igraph
# Output File:
# Data Output:
# Machine:

library(igraph)

# G={V: 2,500, E: 4,996} generated with networkx.generators.barabasi_albert_graph(2500, 2)
G <- read.graph('BA_2500.txt', format = 'edgelist')
# Convert to undirected graph
G <- as.undirected(G)

# Test how long it takes igraph to calculate betweenness centrality on graph G
betweenness_test <- function(graph) {
  return(betweenness(graph))
}

# Test how long it takes igraph to calculate a Fruchterman-Reingold force-directed
# layout on graph G
layout_test <- function(graph, i = 50) {
  return(layout.fruchterman.reingold(graph, niter = i))
}

# Test how long it takes igraph to find the diameter (maximum shortest path)
# of graph G
diameter_test <- function(graph) {
  return(diameter(graph))
}

# Test how long it takes igraph to find the maximal cliques of graph G
max_clique_test <- function(graph) {
  return(maximal.cliques(graph))
}

# Test and print results to stdout
print('Betweenness...')
print(system.time(B <- betweenness_test(G)))
print('Fruchterman-Reingold...')
print(system.time(v <- layout_test(G)))
print('Diameter...')
print(system.time(D <- diameter_test(G)))
print('Maximal cliques...')
print(system.time(M <- max_clique_test(G)))
/R/SNA in R Talk - Supporting Code/r_tests.R
permissive
ktargows/ZIA
R
false
false
1,626
r
# library(testthat); library(parallelRemake);
context("Collation")
source("utils.R")

files = c("code.R", "data.csv", "Makefile", paste0("plot", 1:3, ".pdf"),
          paste0("remake", c("", 2:5, "_data"), ".yml"), "collated_remake.yml")

collation = function(){
  any(grepl("collated", list.files()))
}

write_collation_files = function(){
  for(file in c("code.R", paste0("remake", c("", 2:5, "_data"), ".yml"))){
    x = readLines(file.path("..", "test-collation", file))
    write(x, file)
  }
}

diri = function(i) paste("collation-proper-time-", i)

test_that("Collated workflow runs smoothly.", {
  testwd("collation-ok")
  write_collation_files()
  write_makefile(remakefiles = c("remake.yml", "remake_data.yml"))
  expect_true(file.exists("collated_remake.yml"))
  system("make")
  expect_true(all(files %in% list.files()))
  expect_true(all(recallable() == paste0("processed", 1:3)))
  testrm("collation-ok")
})

test_that("Collation happens at the proper time.", {
  dir = "collation-proper-time"

  testwd(diri(1))
  write_collation_files()
  write_makefile()
  expect_true(collation())
  testrm(diri(1))

  testwd(diri(2))
  write_collation_files()
  write_makefile(remakefiles = "remake3.yml")
  expect_true(collation())
  testrm(diri(2))

  testwd(diri(3))
  write_collation_files()
  write_makefile(remakefiles = "remake_data.yml")
  expect_false(collation())
  testrm(diri(3))

  testwd(diri(4))
  write_collation_files()
  write_makefile(remakefiles = c("remake_data.yml", "remake_data.yml"))
  expect_false(collation())
  testrm(diri(4))

  testwd(diri(5))
  write_collation_files()
  write_makefile(remakefiles = c("remake_data.yml", "remake5.yml"))
  expect_true(collation())
  testrm(diri(5))
})
/tests/testthat/test-collation.R
no_license
jmpasmoi/parallelRemake
R
false
false
1,721
r
library(dplyr)
library(tidyselect)
library(tidyr)
library(magrittr)
library(similR)
library(igraph)
library(ggplot2)

# List of statistics that will be used
statistics <- c(
  `Hamman (S)`           = "shamann",
  `Hamming (D)`          = "dhamming",
  `Mean Manhattan (D)`   = "dmh",
  `Michael (S)`          = "smichael",
  `Sized Difference (D)` = "dsd"
)

# Reading covariate data -------------------------------------------------------
dat_group      <- haven::read_spss("data-raw/MURI_AllSurveys - FINAL - Group level data_1.sav")
dat_individual <- haven::read_spss("data-raw/MURI_AllSurveys - FINAL_073018.sav")
dat_individual <- dat_individual %>%
  mutate(Group = as.integer(Group))

networks_las        <- readRDS("data/networks_advice_las.rds")
networks_truth      <- readRDS("data/networks_truth.rds")[["3"]]$advice
networks_advice_css <- readRDS("data/networks_advice_css_jen.rds")
networks_sizes      <- readr::read_csv("data-raw/Study1_Group sizes.csv")

# Comparing LAS vs CSS ---------------------------------------------------------
# LAS
accuracy_lasL <- lapply(names(networks_advice_css), function(n) {
  # Computing similarity/distance
  ans <- similR::similarity(
    c(list(networks_las[[n]]), networks_advice_css[[n]]),
    statistic  = statistics,
    normalized = FALSE,
    firstonly  = TRUE,
    exclude_j  = TRUE
  )
  cbind(
    data.frame(
      Group = as.integer(n),
      ID    = rownames(networks_las[[n]]),
      PID   = sprintf("%02i%s", as.integer(n), rownames(networks_las[[n]])),
      stringsAsFactors = FALSE
    ),
    ans
  )
})

accuracy_las <- accuracy_lasL %>%
  bind_rows %>%
  as_tibble %>%
  select(-i, -j)

# Computing range
accuracy_las_range <- lapply(accuracy_lasL, function(d) {
  d[, statistics, drop = FALSE] %>%
    as.data.frame %>%
    gather("statistic", "value") %>%
    group_by(statistic) %>%
    summarize(range = diff(range(value))) %>%
    ungroup %>%
    spread(statistic, range)
}) %>%
  bind_rows %>%
  bind_cols(networks_sizes) %>%
  gather("statistic", "value", -Group, -groupSize) %>%
  group_by(statistic) %>%
  mutate(
    groupSize = as.factor(groupSize),
    value_01  = (value - min(value)) / diff(range(value))
  ) %>%
  ungroup

# Comparing TRUTH vs CSS -------------------------------------------------------
accuracy_truthL <- lapply(names(networks_advice_css), function(n) {
  # Computing similarity/distance
  ans <- similR::similarity(
    c(list(networks_truth[[n]]), networks_advice_css[[n]]),
    statistic  = statistics,
    normalized = FALSE,
    firstonly  = TRUE,
    exclude_j  = TRUE
  )
  cbind(
    data.frame(
      Group = as.integer(n),
      ID    = rownames(networks_las[[n]]),
      PID   = sprintf("%02i%s", as.integer(n), rownames(networks_las[[n]])),
      stringsAsFactors = FALSE
    ),
    ans
  )
})

accuracy_truth <- accuracy_truthL %>%
  bind_rows %>%
  as_tibble %>%
  select(-i, -j)

# Computing range
accuracy_truth_range <- lapply(accuracy_truthL, function(d) {
  d[, statistics, drop = FALSE] %>%
    as.data.frame %>%
    gather("statistic", "value") %>%
    group_by(statistic) %>%
    summarize(range = diff(range(value))) %>%
    ungroup %>%
    spread(statistic, range)
}) %>%
  bind_rows %>%
  bind_cols(networks_sizes) %>%
  gather("statistic", "value", -Group, -groupSize) %>%
  group_by(statistic) %>%
  mutate(
    groupSize = as.factor(groupSize),
    value_01  = (value - min(value)) / diff(range(value))
  ) %>%
  ungroup

# Plotting ---------------------------------------------------------------------

# Setting seed for reproducibility
set.seed(12)

# Generating the violin plots
accuracy_truth_range %>%
  # Adding nicer labels
  mutate(statistic = names(statistics)[match(statistic, statistics)]) %>%
  # Plot
  ggplot(aes(x = statistic, y = value_01)) +
  geom_violin() +
  geom_jitter(height = 0, width = .1, aes(colour = groupSize, shape = groupSize), size = 4) +
  scale_colour_viridis_d(alpha = .7) +
  # Plot labels and titles
  labs(
    y = "Range in Similarity/Distance (normalized to be in [0,1])",
    x = "",
    shape = "Group Size",
    colour = "Group Size") +
  theme(axis.text = element_text(angle = 45, hjust = 1)) +
  labs(
    title = "Distribution of within team ranges of different distance/similarity metrics.",
    subtitle = "Truth vs CSS (comparisons exclude the j-th row+column since these are not reported in the CSS)."
  ) +
  ggsave("data/accuracy_truth.png", width = 7, height = 7)

# Saving the original data -----------------------------------------------------
readr::write_csv(accuracy_las, "data/accuracy_las.csv")
readr::write_csv(accuracy_truth, "data/accuracy_truth.csv")
/data/accuracy.R
no_license
muriteams/css-and-ci
R
false
false
4,675
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/date_to_cohort.R
\name{date_to_cohort_quarter}
\alias{date_to_cohort_quarter}
\title{Convert dates to quarter (three-month) cohorts}
\usage{
date_to_cohort_quarter(date)
}
\arguments{
\item{date}{Dates of events defining cohorts
(typically births). A vector of class
\code{\link[base]{Date}}, or a vector that can be
coerced to class \code{Date} via function
\code{\link[base]{as.Date}}.}
}
\value{
A character vector with the same length as \code{date}.
}
\description{
Assign dates to one-quarter (three-month) cohorts.
Quarters are defined as follows:
\tabular{lll}{
  \strong{Quarter} \tab \strong{Start} \tab \strong{End} \cr
  Q1 \tab 1 January \tab 31 March \cr
  Q2 \tab 1 April \tab 30 June \cr
  Q3 \tab 1 July \tab 30 September \cr
  Q4 \tab 1 October \tab 31 December
}
}
\examples{
date_to_cohort_quarter(date = c("2024-03-27",
                                "2022-11-09",
                                "2023-05-11"))
}
\seealso{
The output from \code{date_to_cohort_quarter}
is often processed further using
\code{\link{format_cohort_quarter}}.

Other functions for creating cohorts from dates
are \code{\link{date_to_cohort_quarter}}
and \code{\link{date_to_cohort_month}}.

Other functions for creating one-quarter
units from dates are
\code{\link{date_to_age_quarter}},
\code{\link{date_to_period_quarter}},
and \code{\link{date_to_triangle_quarter}}.

The interface for \code{date_to_cohort_quarter}
is identical to that of
\code{\link{date_to_period_quarter}}.
}
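The Q1-Q4 table above maps months 1-3, 4-6, 7-9, and 10-12 to quarters. A minimal base-R sketch of that mapping (an illustration only — not the demprep implementation, whose label format may differ):

```r
# Label each date with its quarter cohort, following the Q1-Q4
# month ranges in the table above (illustrative helper, not demprep's).
date_to_quarter_sketch <- function(date) {
  date <- as.Date(date)
  q <- (as.integer(format(date, "%m")) - 1L) %/% 3L + 1L  # months 1-3 -> 1, ..., 10-12 -> 4
  paste0(format(date, "%Y"), " Q", q)
}

date_to_quarter_sketch(c("2024-03-27", "2022-11-09", "2023-05-11"))
# -> "2024 Q1" "2022 Q4" "2023 Q2"
```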
/man/date_to_cohort_quarter.Rd
permissive
bayesiandemography/demprep
R
false
true
1,566
rd
# World_Petroleum Imports to US by Country_1973-2014
#
# ---------------------------------------------------------------
# INPUT DATA ----------------------------------------------------
# ---------------------------------------------------------------

file.loc  = '/Users/MEAS/GitHub/ene505-figures'                                                # location of scripts
data.loc  = '/Users/MEAS/Google Drive/TA Materials/ENE505 - Fall 2015/ENE 505 Charts'          # location of data file(s)
data.file = 'World_Petroleum Imports to US by Country_1973_2014.csv'                           # data file to be used
out.loc   = '/Users/MEAS/Google Drive/TA Materials/ENE505 - Fall 2015/ENE 505 Charts/20170827' # location of where to save figures

country.cols = c("#8DD3C7", "#FFFFB3", "#BEBADA", "#FB8072", "#80B1D3", "#FDB462",
                 "#B3DE69", "#FCCDE5", "#D9D9D9", "#BC80BD", "#CCEBC5", "#FFED6F")
ran.cols     = c("#40516b", "#7bb3c9", "#ca949f", "#c3ba8c", "#68392b", "#4d673a",
                 "#72d1b6", "#6e3363", "#9c92d7", "#c6843c", "#c74b4a", "#6ac46c",
                 "#ce58a4", "#b6be49", "#6248ab")

# ---------------------------------------------------------------
# MAIN SCRIPT ---------------------------------------------------
# ---------------------------------------------------------------

# load libraries -------
library(data.table)
library(ggplot2)
library(hrbrthemes)
library(stringr)
library(plyr)

# load plot functions -----
setwd(file.loc)
source("plotfunctions.R")

# load data ------
setwd(data.loc)
dt_raw <- fread(data.file, skip = 2, header = T)

# change column names ------
colnames(dt_raw) <- c("year", "Total", "Persian Gulf", "OPEC", "Algeria", "Angola",
                      "Ecuador", "Iran", "Iraq", "Kuwait", "Libya", "Nigeria", "Qatar",
                      "Saudi Arabia", "United Arab Emirates", "Venezuela", "Non-OPEC",
                      "Albania", "Argentina", "Aruba", "Australia", "Austria",
                      "Azerbaijan", "Bahama Islands", "Bahrain", "Barbados", "Belarus",
                      "Belgium", "Belize", "Benin", "Bolivia", "Brazil", "Brunei",
                      "Bulgaria", "Burma", "Cameroon", "Canada", "Chad", "Chile",
                      "China", "Colombia", "Congo Brazzaville", "Congo Kinshasa",
                      "Cook Islands", "Costa Rica", "Croatia", "Cyprus", "Czech Republic",
                      "Denmark", "Dominican Republic", "Egypt", "El Salvador",
                      "Equatorial Guinea", "Estonia", "Finland", "France", "Gabon",
                      "Georgia", "Germany", "Ghana", "Gibraltar", "Greece", "Guatemala",
                      "Guinea", "Hong Kong", "Hungary", "India", "Indonesia", "Ireland",
                      "Israel", "Italy", "Ivory Coast", "Jamaica", "Japan", "Kazakhstan",
                      "Korea", "Kyrgyzstan", "Latvia", "Liberia", "Lithuania", "Malaysia",
                      "Malta", "Martinique", "Mauritania", "Mexico", "Midway Islands",
                      "Morocco", "Namibia", "Netherlands", "Netherlands Antilles",
                      "New Zealand", "Nicaragua", "Niue", "Norway", "Oman", "Pakistan",
                      "Panama", "Papua New Guinea", "Peru", "Philippines", "Poland",
                      "Portugal", "Puerto Rico", "Romania", "Russia", "Senegal",
                      "Singapore", "Slovakia", "South Africa", "Spain", "Spratly Islands",
                      "Suriname", "Swaziland", "Sweden", "Switzerland", "Syria", "Taiwan",
                      "Thailand", "Togo", "Tonga", "Trinidad and Tobago", "Tunisia",
                      "Turkey", "Turkmenistan", "Ukraine", "United Kingdom", "Uruguay",
                      "Uzbekistan", "Vietnam", "Virgin Islands", "Yemen")

# melt data table from wide to long format -----
dt_long <- melt(dt_raw, measure.vars = colnames(dt_raw)[2:131],
                variable.name = "country", value.name = "value")

# convert value and year columns to numeric -----
dt_long[, value := as.numeric(value)]
dt_long[, year := as.numeric(as.character(year))]

# order country by value ------
country.levs <- levels(with(dt_long[year == "2014" &
                                      ! country %in% c("Total", "Persian Gulf", "OPEC", "Non-OPEC")],
                            reorder(country, -value)))
country.select <- country.levs[1:14]

# assign category ------
dt_long[country %in% c(country.select, "Total", "Persian Gulf", "OPEC", "Non-OPEC"),
        category := country]
dt_long[! country %in% c(country.select, "Total", "Persian Gulf", "OPEC", "Non-OPEC"),
        category := "Other"]

# aggregate by category -----
dt_fin <- na.omit(dt_long)[, .(value = sum(value)), by = c("category", "year")]

# ---------------------------------------------------------------
# FIGURES -------------------------------------------------------
# ---------------------------------------------------------------

setwd(out.loc)

# AREA PLOT -------------
dt = dt_fin[, category := factor(category,
                                 levels(with(dt_fin[year == "2014"], reorder(category, -value))))][
  ! category %in% c("Total", "Persian Gulf", "OPEC", "Non-OPEC")]
xval = dt[, year]
yval = dt[, value]
fillval = dt[, category]
tlab = "1973 - 2014 Petroleum Imports to U.S. by Country"
sublab = "Data: U.S. Energy Information Administration"
gval = "Y"
xlab = NULL
ylab = "Thousand Barrels per Day"
leglab = ""
leg.ord = levels(factor(fillval))
plot.cols = ran.cols

area_world_petro_imports = f.areaplot(dt, xval, yval, fillval, tlab, sublab, xlab,
                                      ylab, leglab, gval, leg.ord, plot.cols) +
  scale_x_continuous(breaks = seq(1972, 2015, 3), expand = c(0, 0)) +
  scale_y_comma(breaks = seq(0, 14000, 2000), expand = c(0, 0))

ggsave(area_world_petro_imports,
       filename = "World_Petroleum Imports to US_All Countries_1973-2014_ATS.png",
       width = 11.1, height = 6.25, dpi = 400)

# ALL COUNTRIES LINE PLOT -------------
dt = dt_fin[! category %in% c("Total", "Persian Gulf", "OPEC", "Non-OPEC")]
xval = dt[, year]
yval = dt[, value]
fillval = dt[, category]
tlab = "1973 - 2014 Petroleum Imports to U.S. by Country"
sublab = "Data: U.S. Energy Information Administration"
gval = "Y"
xlab = NULL
ylab = "Thousand Barrels per Day"
leglab = ""
leg.ord = levels(with(dt[year == "2014"], reorder(category, -value)))
plot.cols = ran.cols

line_world_petro_imports = f.lineplot(dt, xval, yval, fillval, tlab, sublab, xlab,
                                      ylab, leglab, gval, leg.ord, plot.cols) +
  scale_x_continuous(breaks = seq(1972, 2015, 3), expand = c(0, 0)) +
  scale_y_comma(breaks = seq(0, 4000, 500), expand = c(0, 0), limits = c(0, 4000))

ggsave(line_world_petro_imports,
       filename = "World_Petroleum Imports to US_All Countries_1973-2014_LTS.png",
       width = 11.1, height = 6.25, dpi = 400)

# OPEC VS NON-OPEC LINE PLOT ------

# SEGMENT PLOT -------------
dt = dt_fin[! category %in% c("Total", "Persian Gulf", "OPEC", "Non-OPEC")]
xval = dt[, year]
yval = dt[, value]
fillval = dt[, category]
wsize = 2
csize = 1
tlab = "1973 - 2014 Petroleum Imports to U.S. by Country"
sublab = "Data: U.S. Energy Information Administration"
gval = "Y"
xlab = NULL
ylab = "Thousand Barrels per Day"
leglab = ""
leg.ord = levels(with(dt[year == "2014"], reorder(category, -value)))
plot.cols = ran.cols

seg_world_petro_imports = f.segplot(dt, xval, yval, fillval, wsize, csize, tlab,
                                    sublab, xlab, ylab, leglab, gval, leg.ord, plot.cols) +
  scale_x_continuous(breaks = seq(1972, 2015, 3), expand = c(0, 0)) +
  scale_y_comma(breaks = seq(0, 4000, 500), expand = c(0, 0), limits = c(0, 4000))

ggsave(seg_world_petro_imports,
       filename = "World_Petroleum Imports to US_All Countries_1973-2014_STS.png",
       width = 11.1, height = 6.25, dpi = 400)
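The wide-to-long reshape and country-ranking steps above can be sketched on a toy table (hypothetical values; only `data.table` is needed — the `f.*` plot helpers come from `plotfunctions.R` and are not shown):

```r
library(data.table)

# toy wide table: one row per year, one column per country (hypothetical data)
toy <- data.table(year   = 2012:2014,
                  Canada = c(2900, 3100, 3400),
                  Mexico = c(970, 850, 780),
                  Total  = c(3870, 3950, 4180))

# melt from wide to long, as in the script
toy_long <- melt(toy, measure.vars = c("Canada", "Mexico", "Total"),
                 variable.name = "country", value.name = "value")

# rank countries by their most recent value, excluding the aggregate column,
# mirroring the country.levs / country.select step
levs <- levels(with(toy_long[year == 2014 & country != "Total"],
                    reorder(country, -value)))
print(levs)  # Canada ranks first: its 2014 value is the largest
```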
#'
#' @name GOCluster
#' @title Circular dendrogram.
#' @description GOCluster generates a circular dendrogram of the \code{data}
#' clustering, using by default euclidean distance and average linkage. The
#' inner ring displays the color-coded logFC while the outside one encodes the
#' assigned terms to each gene.
#' @param data A data frame which should be the result of
#' \code{\link{circle_dat}} in case the data contains only one logFC column.
#' Otherwise \code{data} is a data frame whereas the first column contains the
#' genes, the second the term and the following columns the logFCs of the
#' different contrasts.
#' @param process A character vector of selected processes (ID or term
#' description)
#' @param metric A character vector specifying the distance measure to be used
#' (default = 'euclidean'), see \code{dist}
#' @param clust A character vector specifying the agglomeration method to be
#' used (default = 'average'), see \code{hclust}
#' @param clust.by A character vector specifying if the clustering should be
#' done for gene expression pattern or functional categories. By default the
#' clustering is done based on the functional categories.
#' @param nlfc If TRUE \code{data} contains multiple logFC columns (default =
#' FALSE)
#' @param lfc.col Character vector to define the color scale for the logFC of
#' the form c(high, midpoint, low)
#' @param lfc.min Specifies the minimum value of the logFC scale (default = -3)
#' @param lfc.max Specifies the maximum value of the logFC scale (default = 3)
#' @param lfc.space The space between the leaves of the dendrogram and the ring
#' for the logFC
#' @param lfc.width The width of the logFC ring
#' @param term.col A character vector specifying the colors of the term bands
#' @param term.space The space between the logFC ring and the term ring
#' @param term.width The width of the term ring
#' @details The inner ring can be split into smaller rings to display multiple
#' logFC values resulting from various comparisons.
#' @import ggplot2
#' @import ggdendro
#' @import RColorBrewer
#' @import stats
#' @examples
#' \dontrun{
#' # Load the included dataset
#' data(EC)
#'
#' # Generating the circ object
#' circ <- circle_dat(EC$david, EC$genelist)
#'
#' # Creating the cluster plot
#' GOCluster(circ, EC$process)
#'
#' # Cluster the data according to gene expression and assign a different color scale for the logFC
#' GOCluster(circ, EC$process, clust.by = 'logFC', lfc.col = c('darkgoldenrod1', 'black', 'cyan1'))
#' }
#' @export
#'
GOCluster <- function(chord, process, metric, clust, clust.by, nlfc, lfc.col,
                      lfc.min, lfc.max, lfc.space, lfc.width, term.col,
                      term.space, term.width){
  x <- y <- xend <- yend <- width <- space <- logFC <- NULL
  if (missing(metric)) metric <- 'euclidean'
  if (missing(clust)) clust <- 'average'
  if (missing(clust.by)) clust.by <- 'term'
  if (missing(nlfc)) nlfc <- 0
  if (missing(lfc.col)) lfc.col <- c('firebrick1', 'white', 'dodgerblue')
  if (missing(lfc.min)) lfc.min <- -3
  if (missing(lfc.max)) lfc.max <- 3
  if (missing(lfc.space)) lfc.space <- (-0.5) else lfc.space <- lfc.space * (-1)
  if (missing(lfc.width)) lfc.width <- (-1.6) else lfc.width <- lfc.space - lfc.width - 0.1
  if (missing(term.col)) term.col <- brewer.pal(length(process), 'Set3')
  if (missing(term.space)) term.space <- lfc.space + lfc.width else term.space <- term.space * (-1)
  if (missing(term.width)) term.width <- lfc.width + term.space else term.width <- term.width * (-1) + term.space
  if (clust.by == 'logFC') distance <- stats::dist(chord[, dim(chord)[2]], method = metric)
  if (clust.by == 'term') distance <- stats::dist(chord, method = metric)
  cluster <- stats::hclust(distance, method = clust)
  dendr <- dendro_data(cluster)
  y_range <- range(dendr$segments$y)
  x_pos <- data.frame(x = dendr$label$x, label = as.character(dendr$label$label))
  chord <- as.data.frame(chord)
  chord$label <- as.character(rownames(chord))
  all <- merge(x_pos, chord, by = 'label')
  all$label <- as.character(all$label)
  if (nlfc != 0){
    num <- ncol(all) - 2 - nlfc
    tmp <- seq(lfc.space, lfc.width, length = num)
    lfc <- data.frame(x = numeric(), width = numeric(), space = numeric(), logFC = numeric())
    for (l in 1:nlfc){
      tmp_df <- data.frame(x = all$x, width = tmp[l + 1], space = tmp[l],
                           logFC = all[, ncol(all) - nlfc + l])
      lfc <- rbind(lfc, tmp_df)
    }
  }else{
    lfc <- all[, c(2, dim(all)[2])]
    lfc$space <- lfc.space
    lfc$width <- lfc.width
  }
  term <- all[, c(2:(length(process) + 2))]
  color <- NULL; termx <- NULL; tspace <- NULL; twidth <- NULL
  for (row in 1:dim(term)[1]){
    idx <- which(term[row, -1] != 0)
    if (length(idx) != 0){
      termx <- c(termx, rep(term[row, 1], length(idx)))
      color <- c(color, term.col[idx])
      tmp <- seq(term.space, term.width, length = length(idx) + 1)
      tspace <- c(tspace, tmp[1:(length(tmp) - 1)])
      twidth <- c(twidth, tmp[2:length(tmp)])
    }
  }
  tmp <- sapply(lfc$logFC, function(x) ifelse(x > lfc.max, lfc.max, x))
  logFC <- sapply(tmp, function(x) ifelse(x < lfc.min, lfc.min, x))
  lfc$logFC <- logFC
  term_rect <- data.frame(x = termx, width = twidth, space = tspace, col = color)
  legend <- data.frame(x = 1:length(process), label = process)
  ggplot() +
    geom_segment(data = segment(dendr), aes(x = x, y = y, xend = xend, yend = yend)) +
    geom_rect(data = lfc, aes(xmin = x - 0.5, xmax = x + 0.5, ymin = width, ymax = space, fill = logFC)) +
    scale_fill_gradient2('logFC', space = 'Lab',
                         low = lfc.col[3], mid = lfc.col[2], high = lfc.col[1],
                         guide = guide_colorbar(title.position = 'top', title.hjust = 0.5),
                         breaks = c(min(lfc$logFC), max(lfc$logFC)),
                         labels = c(round(min(lfc$logFC)), round(max(lfc$logFC)))) +
    geom_rect(data = term_rect, aes(xmin = x - 0.5, xmax = x + 0.5, ymin = width, ymax = space),
              fill = term_rect$col) +
    geom_point(data = legend, aes(x = x, y = 0.1, size = factor(label, levels = label), shape = NA)) +
    guides(size = guide_legend("GO Terms", ncol = 4, byrow = T,
                               override.aes = list(shape = 22, fill = term.col, size = 8))) +
    coord_polar() +
    scale_y_reverse() +
    theme(legend.position = 'bottom', legend.background = element_rect(fill = 'transparent'),
          legend.box = 'horizontal', legend.direction = 'horizontal') +
    theme_blank
}

#'
#' @name GOChord
#' @title Displays the relationship between genes and terms.
#' @description The GOChord function generates a circularly composited overview
#' of selected/specific genes and their assigned processes or terms. More
#' generally, it joins genes and processes via ribbons in an intersection-like
#' graph. The input can be generated with the \code{\link{chord_dat}}
#' function.
#' @param data The matrix represents the binary relation (1 = is related to,
#' 0 = is not related to) between a set of genes (rows) and processes
#' (columns); a column for the logFC of the genes is optional
#' @param title The title (on top) of the plot
#' @param space The space between the chord segments of the plot
#' @param gene.order A character vector defining the order of the displayed gene
#' labels
#' @param gene.size The size of the gene labels
#' @param gene.space The space between the gene labels and the segment of the
#' logFC
#' @param nlfc Defines the number of logFC columns (default = 1)
#' @param lfc.col The fill color for the logFC specified in the following form:
#' c(color for low values, color for the mid point, color for the high values)
#' @param lfc.min Specifies the minimum value of the logFC scale (default = -3)
#' @param lfc.max Specifies the maximum value of the logFC scale (default = 3)
#' @param ribbon.col The background color of the ribbons
#' @param border.size Defines the size of the ribbon borders
#' @param process.label The size of the legend entries
#' @param limit A vector with two cutoff values (default = c(0, 0)). The first
#' value defines the minimum number of terms a gene has to be assigned to. The
#' second the minimum number of genes assigned to a selected term.
#' @param scale.title The title of the logFC color scale (default = 'logFC')
#' @details The \code{gene.order} argument has three possible options: "logFC",
#' "alphabetical", "none", which are quite self-explanatory.
#'
#' Maybe the most important argument of the function is \code{nlfc}. If your
#' \code{data} does not contain a column of logFC values you have to set
#' \code{nlfc = 0}. Differential expression analysis can be performed for
#' multiple conditions and/or batches. Therefore, the data frame might contain
#' more than one logFC value per gene. To adjust to this situation the
#' \code{nlfc} argument is used as well. It is a numeric value and it defines
#' the number of logFC columns of your \code{data}. The default is "1",
#' assuming that most of the time only one contrast is considered.
#'
#' To represent the data more usefully it might be necessary to reduce the
#' dimension of \code{data}. This can be achieved with \code{limit}. The first
#' value of the vector defines the threshold for the minimum number of terms a
#' gene has to be assigned to in order to be represented in the plot. Most of
#' the time it is more meaningful to represent genes with various functions. A
#' value of 3 excludes all genes with less than three term assignments.
#' Whereas the second value of the parameter restricts the number of terms
#' according to the number of assigned genes. All terms with a count smaller
#' than or equal to the threshold are excluded.
#' @seealso \code{\link{chord_dat}}
#' @import ggplot2
#' @import grDevices
#' @examples
#' \dontrun{
#' # Load the included dataset
#' data(EC)
#'
#' # Generating the binary matrix
#' chord <- chord_dat(circ, EC$genes, EC$process)
#'
#' # Creating the chord plot
#' GOChord(chord)
#'
#' # Excluding processes with less than 5 assigned genes
#' GOChord(chord, limit = c(0, 5))
#'
#' # Creating the chord plot with genes ordered by logFC and a different logFC color scale
#' GOChord(chord, space = 0.02, gene.order = 'logFC', lfc.col = c('red', 'black', 'cyan'))
#' }
#' @export
GOChord <- function(data, title, space, gene.order, gene.size, gene.space,
                    nlfc = 1, lfc.col, lfc.min, lfc.max, ribbon.col,
                    border.size, process.label, limit, scale.title){
  y <- id <- xpro <- ypro <- xgen <- ygen <- lx <- ly <- ID <- logFC <- NULL
  Ncol <- dim(data)[2]
  if (missing(title)) title <- ''
  if (missing(scale.title)) scale.title <- 'logFC'
  if (missing(space)) space = 0
  if (missing(gene.order)) gene.order <- 'none'
  if (missing(gene.size)) gene.size <- 3
  if (missing(gene.space)) gene.space <- 0.2
  if (missing(lfc.col)) lfc.col <- c('brown1', 'azure', 'cornflowerblue')
  if (missing(lfc.min)) lfc.min <- -3
  if (missing(lfc.max)) lfc.max <- 3
  if (missing(border.size)) border.size <- 0.5
  if (missing(process.label)) process.label <- 11
  if (missing(limit)) limit <- c(0, 0)
  if (gene.order == 'logFC') data <- data[order(data[, Ncol], decreasing = T), ]
  if (gene.order == 'alphabetical') data <- data[order(rownames(data)), ]
  if (sum(!is.na(match(colnames(data), 'logFC'))) > 0){
    if (nlfc == 1){
      cdata <- check_chord(data[, 1:(Ncol - 1)], limit)
      lfc <- sapply(rownames(cdata), function(x) data[match(x, rownames(data)), Ncol])
    }else{
      cdata <- check_chord(data[, 1:(Ncol - nlfc)], limit)
      lfc <- sapply(rownames(cdata), function(x) data[, (Ncol - nlfc + 1)])
    }
  }else{
    cdata <- check_chord(data, limit)
    lfc <- 0
  }
  if (missing(ribbon.col)) colRib <- grDevices::rainbow(dim(cdata)[2]) else colRib <- ribbon.col
  nrib <- colSums(cdata)
  ngen <- rowSums(cdata)
  Ncol <- dim(cdata)[2]
  Nrow <- dim(cdata)[1]
  colRibb <- c()
  for (b in 1:length(nrib)) colRibb <- c(colRibb, rep(colRib[b], 202 * nrib[b]))
  r1 <- 1; r2 <- r1 + 0.1
  xmax <- c(); x <- 0
  for (r in 1:length(nrib)){
    perc <- nrib[r] / sum(nrib)
    xmax <- c(xmax, (pi * perc) - space)
    if (length(x) <= Ncol - 1) x <- c(x, x[r] + pi * perc)
  }
  xp <- c(); yp <- c()
  l <- 50
  for (s in 1:Ncol){
    xh <- seq(x[s], x[s] + xmax[s], length = l)
    xp <- c(xp, r1 * sin(x[s]), r1 * sin(xh), r1 * sin(x[s] + xmax[s]),
            r2 * sin(x[s] + xmax[s]), r2 * sin(rev(xh)), r2 * sin(x[s]))
    yp <- c(yp, r1 * cos(x[s]), r1 * cos(xh), r1 * cos(x[s] + xmax[s]),
            r2 * cos(x[s] + xmax[s]), r2 * cos(rev(xh)), r2 * cos(x[s]))
  }
  df_process <- data.frame(x = xp, y = yp, id = rep(c(1:Ncol), each = 4 + 2 * l))
  xp <- c(); yp <- c(); logs <- NULL
  x2 <- seq(0 - space, -pi - (-pi / Nrow) - space, length = Nrow)
  xmax2 <- rep(-pi / Nrow + space, length = Nrow)
  for (s in 1:Nrow){
    xh <- seq(x2[s], x2[s] + xmax2[s], length = l)
    if (nlfc <= 1){
      xp <- c(xp, (r1 + 0.05) * sin(x2[s]), (r1 + 0.05) * sin(xh),
              (r1 + 0.05) * sin(x2[s] + xmax2[s]), r2 * sin(x2[s] + xmax2[s]),
              r2 * sin(rev(xh)), r2 * sin(x2[s]))
      yp <- c(yp, (r1 + 0.05) * cos(x2[s]), (r1 + 0.05) * cos(xh),
              (r1 + 0.05) * cos(x2[s] + xmax2[s]), r2 * cos(x2[s] + xmax2[s]),
              r2 * cos(rev(xh)), r2 * cos(x2[s]))
    }else{
      tmp <- seq(r1, r2, length = nlfc + 1)
      for (t in 1:nlfc){
        logs <- c(logs, data[s, (dim(data)[2] + 1 - t)])
        xp <- c(xp, (tmp[t]) * sin(x2[s]), (tmp[t]) * sin(xh),
                (tmp[t]) * sin(x2[s] + xmax2[s]), tmp[t + 1] * sin(x2[s] + xmax2[s]),
                tmp[t + 1] * sin(rev(xh)), tmp[t + 1] * sin(x2[s]))
        yp <- c(yp, (tmp[t]) * cos(x2[s]), (tmp[t]) * cos(xh),
                (tmp[t]) * cos(x2[s] + xmax2[s]), tmp[t + 1] * cos(x2[s] + xmax2[s]),
                tmp[t + 1] * cos(rev(xh)), tmp[t + 1] * cos(x2[s]))
      }
    }
  }
  if (lfc[1] != 0){
    if (nlfc == 1){
      df_genes <- data.frame(x = xp, y = yp, id = rep(c(1:Nrow), each = 4 + 2 * l),
                             logFC = rep(lfc, each = 4 + 2 * l))
    }else{
      df_genes <- data.frame(x = xp, y = yp, id = rep(c(1:(nlfc * Nrow)), each = 4 + 2 * l),
                             logFC = rep(logs, each = 4 + 2 * l))
    }
  }else{
    df_genes <- data.frame(x = xp, y = yp, id = rep(c(1:Nrow), each = 4 + 2 * l))
  }
  aseq <- seq(0, 180, length = length(x2)); angle <- c()
  for (o in aseq) if ((o + 270) <= 360) angle <- c(angle, o + 270) else angle <- c(angle, o - 90)
  df_texg <- data.frame(xgen = (r1 + gene.space) * sin(x2 + xmax2 / 2),
                        ygen = (r1 + gene.space) * cos(x2 + xmax2 / 2),
                        labels = rownames(cdata), angle = angle)
  df_texp <- data.frame(xpro = (r1 + 0.15) * sin(x + xmax / 2),
                        ypro = (r1 + 0.15) * cos(x + xmax / 2),
                        labels = colnames(cdata), stringsAsFactors = FALSE)
  cols <- rep(colRib, each = 4 + 2 * l)
  x.end <- c(); y.end <- c(); processID <- c()
  for (gs in 1:length(x2)){
    val <- seq(x2[gs], x2[gs] + xmax2[gs], length = ngen[gs] + 1)
    pros <- which((cdata[gs, ] != 0) == T)
    for (v in 1:(length(val) - 1)){
      x.end <- c(x.end, sin(val[v]), sin(val[v + 1]))
      y.end <- c(y.end, cos(val[v]), cos(val[v + 1]))
      processID <- c(processID, rep(pros[v], 2))
    }
  }
  df_bezier <- data.frame(x.end = x.end, y.end = y.end, processID = processID)
  df_bezier <- df_bezier[order(df_bezier$processID, -df_bezier$y.end), ]
  x.start <- c(); y.start <- c()
  for (rs in 1:length(x)){
    val <- seq(x[rs], x[rs] + xmax[rs], length = nrib[rs] + 1)
    for (v in 1:(length(val) - 1)){
      x.start <- c(x.start, sin(val[v]), sin(val[v + 1]))
      y.start <- c(y.start, cos(val[v]), cos(val[v + 1]))
    }
  }
  df_bezier$x.start <- x.start
  df_bezier$y.start <- y.start
  df_path <- bezier(df_bezier, colRib)
  if (length(df_genes$logFC) != 0){
    tmp <- sapply(df_genes$logFC, function(x) ifelse(x > lfc.max, lfc.max, x))
    logFC <- sapply(tmp, function(x) ifelse(x < lfc.min, lfc.min, x))
    df_genes$logFC <- logFC
  }
  g <- ggplot() +
    geom_polygon(data = df_process, aes(x, y, group = id), fill = 'gray70',
                 inherit.aes = F, color = 'black') +
    geom_polygon(data = df_process, aes(x, y, group = id), fill = cols,
                 inherit.aes = F, alpha = 0.6, color = 'black') +
    geom_point(aes(x = xpro, y = ypro, size = factor(labels, levels = labels), shape = NA),
               data = df_texp) +
    guides(size = guide_legend("GO Terms", ncol = 4, byrow = T,
                               override.aes = list(shape = 22, fill = unique(cols), size = 8))) +
    theme(legend.text = element_text(size = process.label)) +
    geom_text(aes(xgen, ygen, label = labels, angle = angle), data = df_texg, size = gene.size) +
    geom_polygon(aes(x = lx, y = ly, group = ID), data = df_path, fill = colRibb,
                 color = 'black', size = border.size, inherit.aes = F) +
    labs(title = title) +
    theme_blank
  if (nlfc >= 1){
    g + geom_polygon(data = df_genes, aes(x, y, group = id, fill = logFC),
                     inherit.aes = F, color = 'black') +
      scale_fill_gradient2(scale.title, space = 'Lab',
                           low = lfc.col[3], mid = lfc.col[2], high = lfc.col[1],
                           guide = guide_colorbar(title.position = "top", title.hjust = 0.5),
                           breaks = c(min(df_genes$logFC), max(df_genes$logFC)),
                           labels = c(round(min(df_genes$logFC)), round(max(df_genes$logFC)))) +
      theme(legend.position = 'bottom', legend.background = element_rect(fill = 'transparent'),
            legend.box = 'horizontal', legend.direction = 'horizontal')
  }else{
    g + geom_polygon(data = df_genes, aes(x, y, group = id), fill = 'gray50',
                     inherit.aes = F, color = 'black') +
      theme(legend.position = 'bottom', legend.background = element_rect(fill = 'transparent'),
            legend.box = 'horizontal', legend.direction = 'horizontal')
  }
}
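Both GOCluster and GOChord clamp logFC values into the [lfc.min, lfc.max] window before mapping them onto the diverging color scale. On a plain vector the `sapply`/`ifelse` idiom used above is equivalent to a vectorized `pmax(pmin(...))` (a minimal sketch with made-up values):

```r
lfc.min <- -3; lfc.max <- 3
v <- c(-5.2, -1.0, 0.4, 4.7)  # hypothetical logFC values

# the same two-pass clamping as in GOCluster/GOChord
tmp    <- sapply(v, function(x) ifelse(x > lfc.max, lfc.max, x))
capped <- sapply(tmp, function(x) ifelse(x < lfc.min, lfc.min, x))
print(capped)  # -3.0 -1.0 0.4 3.0
```

The vectorized form `pmax(pmin(v, lfc.max), lfc.min)` produces the same result in one pass.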
/R/GOCluster.R
no_license
Gzwang1125/wencke.github.io
R
false
false
17,456
r
#' #' @name GOCluster #' @title Circular dendrogram. #' @description GOCluster generates a circular dendrogram of the \code{data} #' clustering using by default euclidean distance and average linkage.The #' inner ring displays the color coded logFC while the outside one encodes the #' assigned terms to each gene. #' @param data A data frame which should be the result of #' \code{\link{circle_dat}} in case the data contains only one logFC column. #' Otherwise \code{data} is a data frame whereas the first column contains the #' genes, the second the term and the following columns the logFCs of the #' different contrasts. #' @param process A character vector of selected processes (ID or term #' description) #' @param metric A character vector specifying the distance measure to be used #' (default='euclidean'), see \code{dist} #' @param clust A character vector specifying the agglomeration method to be #' used (default='average'), see \code{hclust} #' @param clust.by A character vector specifying if the clustering should be #' done for gene expression pattern or functional categories. By default the #' clustering is done based on the functional categories. 
#' @param nlfc If TRUE \code{data} contains multiple logFC columns (default= #' FALSE) #' @param lfc.col Character vector to define the color scale for the logFC of #' the form c(high, midpoint,low) #' @param lfc.min Specifies the minimium value of the logFC scale (default = -3) #' @param lfc.max Specifies the maximum value of the logFC scale (default = 3) #' @param lfc.space The space between the leafs of the dendrogram and the ring #' for the logFC #' @param lfc.width The width of the logFC ring #' @param term.col A character vector specifying the colors of the term bands #' @param term.space The space between the logFC ring and the term ring #' @param term.width The width of the term ring #' @details The inner ring can be split into smaller rings to display multiply #' logFC values resulting from various comparisons. #' @import ggplot2 #' @import ggdendro #' @import RColorBrewer #' @import stats #' @examples #' \dontrun{ #' #Load the included dataset #' data(EC) #' #' #Generating the circ object #' circ<-circular_dat(EC$david, EC$genelist) #' #' #Creating the cluster plot #' GOCluster(circ, EC$process) #' #' #Cluster the data according to gene expression and assigning a different color scale for the logFC #' GOCluster(circ,EC$process,clust.by='logFC',lfc.col=c('darkgoldenrod1','black','cyan1')) #' } #' @export #' GOCluster<-function(chord, process, metric, clust, clust.by, nlfc, lfc.col, lfc.min, lfc.max, lfc.space, lfc.width, term.col, term.space, term.width){ x <- y <- xend <- yend <- width <- space <- logFC <- NULL if (missing(metric)) metric<-'euclidean' if (missing(clust)) clust<-'average' if (missing(clust.by)) clust.by<-'term' if (missing(nlfc)) nlfc <- 0 if (missing(lfc.col)) lfc.col<-c('firebrick1','white','dodgerblue') if (missing(lfc.min)) lfc.min <- -3 if (missing(lfc.max)) lfc.max <- 3 if (missing(lfc.space)) lfc.space <- (-0.5) else lfc.space <- lfc.space*(-1) if (missing(lfc.width)) lfc.width <- (-1.6) else lfc.width <- lfc.space-lfc.width-0.1 if 
(missing(term.col)) term.col <- brewer.pal(length(process), 'Set3') if (missing(term.space)) term.space <- lfc.space + lfc.width else term.space <- term.space*(-1) if (missing(term.width)) term.width <- lfc.width + term.space else term.width <- term.width*(-1)+term.space if (clust.by == 'logFC') distance <- stats::dist(chord[,dim(chord)[2]], method=metric) if (clust.by == 'term') distance <- stats::dist(chord, method=metric) cluster <- stats::hclust(distance ,method=clust) dendr <- dendro_data(cluster) y_range <- range(dendr$segments$y) x_pos <- data.frame(x=dendr$label$x, label=as.character(dendr$label$label)) chord <- as.data.frame(chord) chord$label <- as.character(rownames(chord)) all <- merge(x_pos, chord, by='label') all$label <- as.character(all$label) if (nlfc != 0){ num <- ncol(all)-2-nlfc tmp <- seq(lfc.space, lfc.width, length = num) lfc <- data.frame(x=numeric(),width=numeric(),space=numeric(),logFC=numeric()) for (l in 1:nlfc){ tmp_df <- data.frame(x = all$x, width=tmp[l+1], space = tmp[l], logFC = all[,ncol(all)-nlfc+l]) lfc<-rbind(lfc,tmp_df) } }else{ lfc <- all[,c(2, dim(all)[2])] lfc$space <- lfc.space lfc$width <- lfc.width } term <- all[,c(2:(length(process)+2))] color<-NULL;termx<-NULL;tspace<-NULL;twidth<-NULL for (row in 1:dim(term)[1]){ idx <- which(term[row,-1] != 0) if(length(idx) != 0){ termx<-c(termx,rep(term[row,1],length(idx))) color<-c(color,term.col[idx]) tmp<-seq(term.space,term.width,length=length(idx)+1) tspace<-c(tspace,tmp[1:(length(tmp)-1)]) twidth<-c(twidth,tmp[2:length(tmp)]) } } tmp <- sapply(lfc$logFC, function(x) ifelse(x > lfc.max, lfc.max, x)) logFC <- sapply(tmp, function(x) ifelse(x < lfc.min, lfc.min, x)) lfc$logFC <- logFC term_rect <- data.frame(x = termx, width = twidth, space = tspace, col = color) legend <- data.frame(x = 1:length(process),label = process) ggplot()+ geom_segment(data=segment(dendr), aes(x=x, y=y, xend=xend, yend=yend))+ 
geom_rect(data=lfc,aes(xmin=x-0.5,xmax=x+0.5,ymin=width,ymax=space,fill=logFC))+ scale_fill_gradient2('logFC', space = 'Lab', low=lfc.col[3],mid=lfc.col[2],high=lfc.col[1],guide=guide_colorbar(title.position='top',title.hjust=0.5),breaks=c(min(lfc$logFC),max(lfc$logFC)),labels=c(round(min(lfc$logFC)),round(max(lfc$logFC))))+ geom_rect(data=term_rect,aes(xmin=x-0.5,xmax=x+0.5,ymin=width,ymax=space),fill=term_rect$col)+ geom_point(data=legend,aes(x=x,y=0.1,size=factor(label,levels=label),shape=NA))+ guides(size=guide_legend("GO Terms",ncol=4,byrow=T,override.aes=list(shape=22,fill=term.col,size = 8)))+ coord_polar()+ scale_y_reverse()+ theme(legend.position='bottom',legend.background = element_rect(fill='transparent'),legend.box='horizontal',legend.direction='horizontal')+ theme_blank } #' #' @name GOChord #' @title Displays the relationship between genes and terms. #' @description The GOChord function generates a circularly composited overview #' of selected/specific genes and their assigned processes or terms. More #' generally, it joins genes and processes via ribbons in an intersection-like #' graph. The input can be generated with the \code{\link{chord_dat}} #' function. 
#' @param data The matrix represents the binary relation (1 = is related to,
#' 0 = is not related to) between a set of genes (rows) and processes
#' (columns); a column for the logFC of the genes is optional
#' @param title The title (on top) of the plot
#' @param space The space between the chord segments of the plot
#' @param gene.order A character vector defining the order of the displayed gene
#' labels
#' @param gene.size The size of the gene labels
#' @param gene.space The space between the gene labels and the segment of the
#' logFC
#' @param nlfc Defines the number of logFC columns (default = 1)
#' @param lfc.col The fill color for the logFC specified in the following form:
#' c(color for low values, color for the mid point, color for the high values)
#' @param lfc.min Specifies the minimum value of the logFC scale (default = -3)
#' @param lfc.max Specifies the maximum value of the logFC scale (default = 3)
#' @param ribbon.col The background color of the ribbons
#' @param border.size Defines the size of the ribbon borders
#' @param process.label The size of the legend entries
#' @param limit A vector with two cutoff values (default = c(0, 0)). The first
#' value defines the minimum number of terms a gene has to be assigned to. The
#' second the minimum number of genes assigned to a selected term.
#' @param scale.title The title of the logFC color scale (default = 'logFC')
#' @details The \code{gene.order} argument has three possible options: "logFC",
#' "alphabetical", "none", which are quite self-explanatory.
#'
#' Maybe the most important argument of the function is \code{nlfc}. If your
#' \code{data} does not contain a column of logFC values you have to set
#' \code{nlfc = 0}. Differential expression analysis can be performed for
#' multiple conditions and/or batches. Therefore, the data frame might contain
#' more than one logFC value per gene. To adjust to this situation the
#' \code{nlfc} argument is used as well. It is a numeric value and it defines
#' the number of logFC columns of your \code{data}. The default is "1",
#' assuming that most of the time only one contrast is considered.
#'
#' To represent the data more usefully it might be necessary to reduce the
#' dimension of \code{data}. This can be achieved with \code{limit}. The first
#' value of the vector defines the threshold for the minimum number of terms a
#' gene has to be assigned to in order to be represented in the plot. Most of
#' the time it is more meaningful to represent genes with various functions. A
#' value of 3 excludes all genes with less than three term assignments,
#' whereas the second value of the parameter restricts the number of terms
#' according to the number of assigned genes. All terms with a count smaller
#' or equal to the threshold are excluded.
#' @seealso \code{\link{chord_dat}}
#' @import ggplot2
#' @import grDevices
#' @examples
#' \dontrun{
#' # Load the included dataset
#' data(EC)
#'
#' # Generating the binary matrix
#' chord <- chord_dat(circ, EC$genes, EC$process)
#'
#' # Creating the chord plot
#' GOChord(chord)
#'
#' # Excluding process with less than 5 assigned genes
#' GOChord(chord, limit = c(0, 5))
#'
#' # Creating the chord plot genes ordered by logFC and a different logFC color scale
#' GOChord(chord, space = 0.02, gene.order = 'logFC', lfc.col = c('red', 'black', 'cyan'))
#' }
#' @export
GOChord <- function(data, title, space, gene.order, gene.size, gene.space,
                    nlfc = 1, lfc.col, lfc.min, lfc.max, ribbon.col,
                    border.size, process.label, limit, scale.title){
  y <- id <- xpro <- ypro <- xgen <- ygen <- lx <- ly <- ID <- logFC <- NULL
  Ncol <- dim(data)[2]
  if (missing(title)) title <- ''
  if (missing(scale.title)) scale.title <- 'logFC'
  if (missing(space)) space = 0
  if (missing(gene.order)) gene.order <- 'none'
  if (missing(gene.size)) gene.size <- 3
  if (missing(gene.space)) gene.space <- 0.2
  if (missing(lfc.col)) lfc.col <- c('brown1', 'azure', 'cornflowerblue')
  if (missing(lfc.min)) lfc.min <- -3
  if (missing(lfc.max)) lfc.max <- 3
  if (missing(border.size)) border.size <- 0.5
  if
(missing(process.label)) process.label <- 11
  if (missing(limit)) limit <- c(0, 0)
  if (gene.order == 'logFC') data <- data[order(data[, Ncol], decreasing = T), ]
  if (gene.order == 'alphabetical') data <- data[order(rownames(data)), ]
  if (sum(!is.na(match(colnames(data), 'logFC'))) > 0){
    if (nlfc == 1){
      cdata <- check_chord(data[, 1:(Ncol - 1)], limit)
      lfc <- sapply(rownames(cdata), function(x) data[match(x, rownames(data)), Ncol])
    }else{
      cdata <- check_chord(data[, 1:(Ncol - nlfc)], limit)
      lfc <- sapply(rownames(cdata), function(x) data[, (Ncol - nlfc + 1)])
    }
  }else{
    cdata <- check_chord(data, limit)
    lfc <- 0
  }
  if (missing(ribbon.col)) colRib <- grDevices::rainbow(dim(cdata)[2]) else colRib <- ribbon.col
  nrib <- colSums(cdata)
  ngen <- rowSums(cdata)
  Ncol <- dim(cdata)[2]
  Nrow <- dim(cdata)[1]
  colRibb <- c()
  for (b in 1:length(nrib)) colRibb <- c(colRibb, rep(colRib[b], 202 * nrib[b]))
  r1 <- 1; r2 <- r1 + 0.1
  xmax <- c(); x <- 0
  for (r in 1:length(nrib)){
    perc <- nrib[r] / sum(nrib)
    xmax <- c(xmax, (pi * perc) - space)
    if (length(x) <= Ncol - 1) x <- c(x, x[r] + pi * perc)
  }
  xp <- c(); yp <- c()
  l <- 50
  for (s in 1:Ncol){
    xh <- seq(x[s], x[s] + xmax[s], length = l)
    xp <- c(xp, r1 * sin(x[s]), r1 * sin(xh), r1 * sin(x[s] + xmax[s]),
            r2 * sin(x[s] + xmax[s]), r2 * sin(rev(xh)), r2 * sin(x[s]))
    yp <- c(yp, r1 * cos(x[s]), r1 * cos(xh), r1 * cos(x[s] + xmax[s]),
            r2 * cos(x[s] + xmax[s]), r2 * cos(rev(xh)), r2 * cos(x[s]))
  }
  df_process <- data.frame(x = xp, y = yp, id = rep(c(1:Ncol), each = 4 + 2 * l))
  xp <- c(); yp <- c(); logs <- NULL
  x2 <- seq(0 - space, -pi - (-pi / Nrow) - space, length = Nrow)
  xmax2 <- rep(-pi / Nrow + space, length = Nrow)
  for (s in 1:Nrow){
    xh <- seq(x2[s], x2[s] + xmax2[s], length = l)
    if (nlfc <= 1){
      xp <- c(xp, (r1 + 0.05) * sin(x2[s]), (r1 + 0.05) * sin(xh), (r1 + 0.05) * sin(x2[s] + xmax2[s]),
              r2 * sin(x2[s] + xmax2[s]), r2 * sin(rev(xh)), r2 * sin(x2[s]))
      yp <- c(yp, (r1 + 0.05) * cos(x2[s]), (r1 + 0.05) * cos(xh), (r1 + 0.05) * cos(x2[s] + xmax2[s]),
              r2 * cos(x2[s] + xmax2[s]), r2 * cos(rev(xh)), r2 * cos(x2[s]))
    }else{
      tmp <- seq(r1, r2, length = nlfc + 1)
      for (t in 1:nlfc){
        logs <- c(logs, data[s, (dim(data)[2] + 1 - t)])
        xp <- c(xp, (tmp[t]) * sin(x2[s]), (tmp[t]) * sin(xh), (tmp[t]) * sin(x2[s] + xmax2[s]),
                tmp[t + 1] * sin(x2[s] + xmax2[s]), tmp[t + 1] * sin(rev(xh)), tmp[t + 1] * sin(x2[s]))
        yp <- c(yp, (tmp[t]) * cos(x2[s]), (tmp[t]) * cos(xh), (tmp[t]) * cos(x2[s] + xmax2[s]),
                tmp[t + 1] * cos(x2[s] + xmax2[s]), tmp[t + 1] * cos(rev(xh)), tmp[t + 1] * cos(x2[s]))
      }
    }
  }
  if (lfc[1] != 0){
    if (nlfc == 1){
      df_genes <- data.frame(x = xp, y = yp, id = rep(c(1:Nrow), each = 4 + 2 * l),
                             logFC = rep(lfc, each = 4 + 2 * l))
    }else{
      df_genes <- data.frame(x = xp, y = yp, id = rep(c(1:(nlfc * Nrow)), each = 4 + 2 * l),
                             logFC = rep(logs, each = 4 + 2 * l))
    }
  }else{
    df_genes <- data.frame(x = xp, y = yp, id = rep(c(1:Nrow), each = 4 + 2 * l))
  }
  aseq <- seq(0, 180, length = length(x2)); angle <- c()
  for (o in aseq) if ((o + 270) <= 360) angle <- c(angle, o + 270) else angle <- c(angle, o - 90)
  df_texg <- data.frame(xgen = (r1 + gene.space) * sin(x2 + xmax2 / 2),
                        ygen = (r1 + gene.space) * cos(x2 + xmax2 / 2),
                        labels = rownames(cdata), angle = angle)
  df_texp <- data.frame(xpro = (r1 + 0.15) * sin(x + xmax / 2),
                        ypro = (r1 + 0.15) * cos(x + xmax / 2),
                        labels = colnames(cdata), stringsAsFactors = FALSE)
  cols <- rep(colRib, each = 4 + 2 * l)
  x.end <- c(); y.end <- c(); processID <- c()
  for (gs in 1:length(x2)){
    val <- seq(x2[gs], x2[gs] + xmax2[gs], length = ngen[gs] + 1)
    pros <- which((cdata[gs, ] != 0) == T)
    for (v in 1:(length(val) - 1)){
      x.end <- c(x.end, sin(val[v]), sin(val[v + 1]))
      y.end <- c(y.end, cos(val[v]), cos(val[v + 1]))
      processID <- c(processID, rep(pros[v], 2))
    }
  }
  df_bezier <- data.frame(x.end = x.end, y.end = y.end, processID = processID)
  df_bezier <- df_bezier[order(df_bezier$processID, -df_bezier$y.end), ]
  x.start <- c(); y.start <- c()
  for (rs in 1:length(x)){
    val <- seq(x[rs], x[rs] + xmax[rs], length = nrib[rs] + 1)
    for (v in 1:(length(val) - 1)){
      x.start <- c(x.start, sin(val[v]), sin(val[v + 1]))
      y.start <- c(y.start, cos(val[v]), cos(val[v + 1]))
    }
  }
  df_bezier$x.start <- x.start
  df_bezier$y.start <- y.start
  df_path <- bezier(df_bezier, colRib)
  if (length(df_genes$logFC) != 0){
    tmp <- sapply(df_genes$logFC, function(x) ifelse(x > lfc.max, lfc.max, x))
    logFC <- sapply(tmp, function(x) ifelse(x < lfc.min, lfc.min, x))
    df_genes$logFC <- logFC
  }
  g <- ggplot() +
    geom_polygon(data = df_process, aes(x, y, group = id), fill = 'gray70', inherit.aes = F, color = 'black') +
    geom_polygon(data = df_process, aes(x, y, group = id), fill = cols, inherit.aes = F, alpha = 0.6, color = 'black') +
    geom_point(aes(x = xpro, y = ypro, size = factor(labels, levels = labels), shape = NA), data = df_texp) +
    guides(size = guide_legend("GO Terms", ncol = 4, byrow = T,
                               override.aes = list(shape = 22, fill = unique(cols), size = 8))) +
    theme(legend.text = element_text(size = process.label)) +
    geom_text(aes(xgen, ygen, label = labels, angle = angle), data = df_texg, size = gene.size) +
    geom_polygon(aes(x = lx, y = ly, group = ID), data = df_path, fill = colRibb, color = 'black',
                 size = border.size, inherit.aes = F) +
    labs(title = title) +
    theme_blank
  if (nlfc >= 1){
    g + geom_polygon(data = df_genes, aes(x, y, group = id, fill = logFC), inherit.aes = F, color = 'black') +
      scale_fill_gradient2(scale.title, space = 'Lab', low = lfc.col[3], mid = lfc.col[2], high = lfc.col[1],
                           guide = guide_colorbar(title.position = "top", title.hjust = 0.5),
                           breaks = c(min(df_genes$logFC), max(df_genes$logFC)),
                           labels = c(round(min(df_genes$logFC)), round(max(df_genes$logFC)))) +
      theme(legend.position = 'bottom', legend.background = element_rect(fill = 'transparent'),
            legend.box = 'horizontal', legend.direction = 'horizontal')
  }else{
    g + geom_polygon(data = df_genes, aes(x, y, group = id), fill = 'gray50', inherit.aes = F, color = 'black') +
      theme(legend.position = 'bottom', legend.background = element_rect(fill = 'transparent'),
            legend.box = 'horizontal', legend.direction = 'horizontal')
  }
}
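Both functions above clamp logFC values into [lfc.min, lfc.max] with two `sapply`/`ifelse` passes. Base R's vectorised `pmin()`/`pmax()` express the same clamp in one line; a minimal sketch (`clamp_lfc` is an illustrative helper name, not part of GOplot):

```r
# Clamp a numeric vector into [lo, hi]; equivalent to the two
# sapply/ifelse passes used in the plotting functions above.
clamp_lfc <- function(x, lo = -3, hi = 3) pmax(lo, pmin(hi, x))

clamp_lfc(c(-5, -1, 0, 2.5, 7))
# -3.0 -1.0  0.0  2.5  3.0
```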
################
# Create crisis/political institutions distributions graphs
# Christopher Gandrud
# 20 February 2014
################

library(foreign)
library(ggplot2)
library(gridExtra)
library(reshape2)

Main <- read.dta('/git_repositories/CrisisDataIssues/data/KeeferExtended.dta')

LV <- subset(Main, LV2012_Fiscal >= 0)
Keefer <- subset(Main, Keefer2007_Fiscal >= 0)

#### Distribution among crisis countries ####
Plot1 <- ggplot(LV, aes(year, polity2)) + geom_jitter() +
  geom_smooth(se = FALSE) + xlab('') + ylab('Polity IV\n') + theme_bw()

Plot2 <- ggplot(LV, aes(year, checks)) + geom_jitter() +
  geom_smooth(se = FALSE) + xlab('') + ylab('Checks\n') + theme_bw()

Plot3 <- ggplot(LV, aes(year, eiec)) + geom_jitter() +
  geom_smooth(se = FALSE) + xlab('') + ylab('Electoral Competitiveness\n') + theme_bw()

Plot4 <- ggplot(LV, aes(year, stabnsLag3)) + geom_jitter() +
  geom_smooth(se = FALSE) + xlab('') +
  ylab('Political Instability\n(3 year lagged average)\n') + theme_bw()

Plot5 <- ggplot(LV, aes(year, W)) + geom_jitter() +
  geom_smooth(se = FALSE) + xlab('') + ylab('Winset\n') + theme_bw()

Plot6 <- ggplot(LV, aes(year, log(GDPperCapita))) + geom_jitter() +
  geom_smooth(se = FALSE) +
  scale_y_continuous(limits = c(0, 12), breaks = c(0, 5, 10)) +
  xlab('') + ylab('Log GDP per Capita\n') + theme_bw()

# Combine Plots
pdf('~/Dropbox/AMCProject/CrisisDataIssuesPaper/figures/PolCris.pdf')
grid.arrange(Plot1, Plot2, Plot3, Plot4, Plot5, Plot6, ncol = 2)
dev.off()

#### Plot of revisions ####
Main$Diff <- Main$LV2012_Fiscal - Main$Honohan2003_Fiscal

cor.test(Main$LV2012_Fiscal, Main$Honohan2003_Fiscal)

Main$HKOngoing[Main$HonohanCrisisOngoing == 0] <- 'Crisis Complete'
Main$HKOngoing[Main$HonohanCrisisOngoing == 1] <- 'Crisis Ongoing'

PlotDiff <- ggplot(Main, aes(year, Diff, colour = HKOngoing, label = iso2c)) +
  geom_jitter(position = position_jitter(width = .5, height = 0), size = 3) +
  geom_text(angle = 30, vjust = -1) +
  scale_x_continuous(limits = c(1975, 2000)) +
  scale_colour_manual(values = c('black', 'grey'), name = '') +
  geom_hline(aes(yintercept = 0), linetype = 'dotted') +
  xlab('') + ylab('Laeven & Valencia - Honohan & Klingebiel\n') +
  theme_bw(base_size = 15)

pdf('~/Dropbox/AMCProject/CrisisDataIssuesPaper/figures/FiscalDifference.pdf', width = 10)
PlotDiff
dev.off()

#### Directly plot LV vs. HL
#Sub <- Main[, c('year', 'Honohan2003_Fiscal', 'LV2012_Fiscal')]
#SubMolten <- melt(Sub, id.vars = 'year')
#ggplot(SubMolten, aes(year, value)) + geom_jitter() + facet_grid(.~variable) + theme_bw()

#### Density/Instability ####
Sub <- Main[, c('stabnsLag3', 'LV2012_Fiscal', 'Honohan2003_Fiscal', 'Keefer2007_Fiscal')]
SubMolten <- melt(Sub, id.vars = c('stabnsLag3'))
SubMolten$variable <- as.character(SubMolten$variable)
SubMolten$variable[SubMolten$variable == 'Honohan2003_Fiscal'] <- 'Honohan/Kling.'
SubMolten$variable[SubMolten$variable == 'Keefer2007_Fiscal'] <- 'Keefer'
SubMolten$variable[SubMolten$variable == 'LV2012_Fiscal'] <- 'Laeven/Valencia'

# Density compare
PlotDensity <- ggplot(SubMolten, aes(value, colour = as.factor(variable))) +
  geom_density(aes(linetype = as.factor(variable))) +
  xlab('\nFiscal Costs of Crisis (% GDP)') + ylab('Density\n') +
  scale_color_manual(values = c('#e41a1c', '#377eb8', '#4daf4a'), name = 'Data Set') +
  scale_linetype(name = 'Data Set') +
  theme_bw()

PlotInstCosts <- ggplot(SubMolten, aes(value, stabnsLag3, colour = as.factor(variable))) +
  scale_color_manual(values = c('#e41a1c', '#377eb8', '#4daf4a'), name = '') +
  geom_point(size = 5, alpha = 0.5) +
  xlab('\nFiscal Costs of Crisis (% GDP)') +
  ylab('Political Instability\n(3 year lagged average)\n') +
  theme_bw()

pdf(file = '~/Dropbox/AMCProject/CrisisDataIssuesPaper/figures/InstCosts.pdf')
grid.arrange(PlotDensity, PlotInstCosts)
dev.off()
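The script recodes the 0/1 indicator `HonohanCrisisOngoing` into labels with two indexed assignments; `factor()` with `levels`/`labels` does the same in one step. A small sketch on toy data (not the actual KeeferExtended.dta column):

```r
# Toy stand-in for Main$HonohanCrisisOngoing (illustrative values only)
ongoing <- c(0, 1, 1, 0, NA)

# One-step recode; NA stays NA, matching the two indexed assignments
hk <- factor(ongoing, levels = c(0, 1),
             labels = c('Crisis Complete', 'Crisis Ongoing'))
as.character(hk)
```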
/source/ExplorerStage/CrisisDistributionsGraphs.R
no_license
christophergandrud/CrisisDataIssues
R
false
false
4,089
r
#Normal Reference Distribution Plot for Weight of Chickens Example:
x = c(156,162,168,182,186,190,190,196,202,210,214,220,226,
      230,230,236,236,242,246,270)
n = length(x)
x = sort(x)
i = seq(1:n)
u = (i - .5)/n
z = qnorm(u)
#postscript("refdist,chicken.ps",height=8,horizontal=F)
plot(z, x, datax=F, plot=T, xlab="Normal Quantile", ylab="Empirical Quantile",
     lab=c(7,8,7),
     main="Normal Reference Distribution Plot\n Chicken Weight Example",
     cex=.95)
abline(lm(x~z))
#graphics.off()
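The plot builds normal quantiles by hand with `u = (i - 0.5)/n` and `qnorm(u)`. For n > 10 this is exactly the `(i - 1/2)/n` plotting-position rule that base R's `ppoints()` uses (and that `qqnorm()` calls internally), so the manual construction agrees with base R. A quick check:

```r
n <- 20
u <- ((1:n) - 0.5) / n
# For n > 10, ppoints(n) uses the same (i - 1/2)/n rule
stopifnot(all.equal(u, ppoints(n)))
```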
/tamu/basic_stat/R/normalrefplot.R
no_license
sadepu1915/data-science
R
false
false
510
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RILStEp.R
\name{check_epistasis2}
\alias{check_epistasis2}
\title{Calculate BayesFactor for Comparison between Model1 (without Epistasis) vs Model2 (with Epistasis)}
\usage{
check_epistasis2(
  pair_index,
  genotype_phenotype_dataset,
  phenotype_name,
  peak_qtls,
  pairs
)
}
\arguments{
\item{pair_index}{indices of SNP pair to check epistasis}

\item{genotype_phenotype_dataset}{dataset of genotype and phenotype}

\item{phenotype_name}{trait's name}

\item{peak_qtls}{QTL-like SNP index list from extract_peak_qtls() or user specify}

\item{pairs}{SNP pairs}
}
\description{
Calculate BayesFactor for Comparison between Model1 (without Epistasis) vs Model2 (with Epistasis)
}
/man/check_epistasis2.Rd
permissive
slt666666/RILStEp
R
false
true
760
rd
#' Are you sure?
#'
#' This function asks you interactively for permission to continue or not. You
#' can specify a custom message before the question and also different messages
#' for both a positive and a negative answer.
#'
#' If you run this function in non-interactive mode, you should pass an
#' automatic answer to \code{default_answer}: \code{'yes'} or \code{'no'}.
#'
#' @param before_question String with message to be printed before question.
#' @param after_saying_no String with message to be printed after answering
#' \code{'no'}.
#' @param after_saying_yes String with message to be printed after answering
#' \code{'yes'}.
#' @param default_answer String with answer to question, if run in
#' non-interactive mode.
#'
#' @return A logical indicating if answer was \code{'yes'}/\code{'y'}
#' (\code{TRUE}) or otherwise (\code{FALSE}).
#' @keywords internal
sure <- function(before_question = NULL,
                 after_saying_no = NULL,
                 after_saying_yes = NULL,
                 default_answer = NULL) {

  # If default_answer is set then assume that we are running in non-interactive
  # mode. Return TRUE if default_answer == 'y' or FALSE otherwise.
  if (!rlang::is_null(default_answer)) {
    ans <- tolower(default_answer)
    return(identical(ans, "y") || identical(ans, "yes"))
  }

  if (!rlang::is_null(before_question)) {
    message(before_question)
    utils::flush.console()
  }

  answer <- tolower(readline("Do you still want to proceed (y/n)? "))

  if (identical(answer, "yes") || identical(answer, "y")) {
    if (!rlang::is_null(after_saying_yes)) {
      message(after_saying_yes)
      utils::flush.console()
    }
    return(invisible(TRUE))
  } else {
    if (!rlang::is_null(after_saying_no)) {
      message(after_saying_no)
      utils::flush.console()
    }
    return(invisible(FALSE))
  }
}
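When `default_answer` is supplied, `sure()` reduces to a case-insensitive match against "y"/"yes". A base-R sketch of just that branch (using `is.null` in place of `rlang::is_null`; `answer_is_yes` is an illustrative name, not part of the package):

```r
# Mirrors sure()'s non-interactive branch: TRUE for "y"/"yes"
# (any case), FALSE otherwise, NA when no answer is supplied.
answer_is_yes <- function(default_answer = NULL) {
  if (is.null(default_answer)) return(NA)  # sure() would prompt via readline()
  ans <- tolower(default_answer)
  identical(ans, "y") || identical(ans, "yes")
}

answer_is_yes("Yes")  # TRUE
answer_is_yes("no")   # FALSE
```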
/R/sure.R
permissive
ramiromagno/gwasrapidd
R
false
false
1,857
r
# By default set time_scale_template_options to time_scale_template()
.onLoad = function(libname, pkgname) {
    options(
        time_scale_template = time_scale_template()
    )
}

.onAttach <- function(libname, pkgname) {

    bsu_rule_color <- "#2c3e50"
    bsu_main_color <- "#1f78b4"

    # Check Theme: If Dark, Update Colors
    if (rstudioapi::isAvailable()) {
        theme <- rstudioapi::getThemeInfo()

        if (theme$dark) {
            bsu_rule_color <- "#7FD2FF"
            bsu_main_color <- "#18bc9c"
        }
    }

    bsu_main <- crayon::make_style(bsu_main_color)

    msg <- paste0(
        cli::rule(left = "Use anomalize to improve your Forecasts by 50%!",
                  col = bsu_rule_color, line = 2),
        bsu_main('\nBusiness Science offers a 1-hour course - Lab #18: Time Series Anomaly Detection!\n'),
        bsu_main('</> Learn more at: https://university.business-science.io/p/learning-labs-pro </>')
    )

    packageStartupMessage(msg)
}
/R/zzz.R
no_license
iMarcello/anomalize
R
false
false
968
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/inference.R
\name{jackknife_se_multi}
\alias{jackknife_se_multi}
\title{Compute standard errors using the jackknife}
\usage{
jackknife_se_multi(multisynth, relative = NULL)
}
\arguments{
\item{multisynth}{fitted multisynth object}

\item{relative}{Whether to compute effects according to relative time}
}
\description{
Compute standard errors using the jackknife
}
/man/jackknife_se_multi.Rd
permissive
clarkest/augsynth
R
false
true
443
rd
setwd("~/r-workspace/comp-stats/hw2/")

data <- read.delim("finanzas.dat")
colnames(data) <- c("Compania", "Ganancia", "Libros", "Precio")

library(ggplot2)
ggplot(data, aes(x = Libros, y = Ganancia)) + geom_point() + geom_smooth()

## =====================================================
## == 2. Company stock data ==
## -----------------------------------------------------
# a. Contingency tables
# For book value
with(data, table(Compania, cut(data$Libros, breaks = 4)))
# For price per share
with(data, table(Compania, cut(data$Precio, breaks = 4)))

# a.1 Absolute frequencies by book value
factor.Ganancia <- factor(cut(data$Libros, breaks = 4))
table.Ganancia <- as.data.frame(table(factor.Ganancia))
table.Ganancia <- transform(table.Ganancia, cumFreq = cumsum(Freq), relative = prop.table(Freq))
table.Ganancia
# The greatest dispersion occurs when earnings are between
# (3.08, 6.58]. No units given...

# a.2 Absolute frequencies by share price
factor.Precio <- factor(cut(data$Precio, breaks = 4))
table.Precio <- as.data.frame(table(factor.Precio))
table.Precio <- transform(table.Precio, cumFreq = cumsum(Freq), relative = prop.table(Freq))
table.Precio

## -----------------------------------------------------
# b. Scatter plots
pairs(~Ganancia + Libros + Precio, data)
### ====================================================

## =====================================================
## == 3. Sunspot data ==
## -----------------------------------------------------
## a. Mean, standard deviation and quartiles
ssn <- read.delim("spot_num.txt", sep = ",")
head(ssn, 20)
colnames(ssn)
subset(ssn, YEAR == 1749)

ssn.stats.1 <- aggregate(SSN ~ YEAR, data = ssn,
                         FUN = function(x) c(SUM = sum(x), MEAN = mean(x), SD = sd(x),
                                             QUANT = quantile(x, probs = c(0.25, 0.50, 0.75))))

ssn.stats <- data.frame(year = ssn.stats.1$YEAR)
# Fill in the missing values in the data.frame
ssn.stats$sum <- ssn.stats.1$SSN[, 1]
ssn.stats$mean <- ssn.stats.1$SSN[, 2]
ssn.stats$sd <- ssn.stats.1$SSN[, 3]
ssn.stats$quant.25 <- ssn.stats.1$SSN[, 4]
ssn.stats$quant.50 <- ssn.stats.1$SSN[, 5]
ssn.stats$quant.75 <- ssn.stats.1$SSN[, 6]
head(ssn.stats)

## b. Cycle variable
ssn.stats$cycle <- c(6:11, rep(1:11, 23), 1:6)
# Cycle id, for the plot labels
c.ids <- c()
for (i in 2:24){
    c.ids <- c(c.ids, paste("Ciclo", rep(i, 11), sep = "."))
}
ssn.stats$cycle.id <- c(paste("Ciclo", rep(1, 6), sep = "."), c.ids,
                        paste("Ciclo", rep(25, 6), sep = "."))
head(ssn.stats, 40)

## c. Count the number of sunspots
ssn.cycles.1 <- aggregate(sum ~ cycle.id, data = ssn.stats, FUN = function(x) c(sum = sum(x)))
str(ssn.cycles.1)
# c.1: The cycle with the fewest sunspots
ssn.cycles.1[which.min(ssn.cycles.1$sum), ]
# A: Cycle 25
# c.2: The cycle with the most sunspots
ssn.cycles.1[which.max(ssn.cycles.1$sum), ]
# A: Cycle 20

## d. Plot of the number of sunspots per cycle
df.ssn.1 <- data.frame(x = ssn.stats$cycle, val = ssn.stats$sum, ciclo = ssn.stats$cycle.id)
ssn.plot.1 <- ggplot(data = df.ssn.1, aes(x = x, y = val)) +
    geom_line(aes(colour = ciclo)) +
    xlab("Longitud del ciclo") + ylab("Numero de manchas solares") +
    ggtitle("Numero de manchas solares a por ciclo")
ssn.plot.1

## e. Plot of the number of sunspots per year of the cycle
df.ssn.2 <- data.frame(x = ssn.stats$cycle, val = ssn.stats$sum,
                       anio = paste("y", ssn.stats$cycle, sep = "."))
ssn.plot.2 <- ggplot(data = df.ssn.2, aes(x = x, y = val)) +
    geom_line(aes(colour = anio)) +
    xlab("Año por cada ciclo") + ylab("Numero de manchas solares") +
    ggtitle("Numero de manchas solares por año del ciclo")
ssn.plot.2

## the par function is not compatible with ggplot;
## grid.arrange from the gridExtra library is used instead.
## uncomment the next line to install the library
#install.packages("gridExtra")
library(gridExtra)
grid.arrange(ssn.plot.1, ssn.plot.2, ncol = 2)

## =====================================================
## == 5. Grades and judges data ==
## -----------------------------------------------------
## A new header was added to calificaciones.csv for each
## judge's ordinal grades.
## -----------------------------------------------------
calificaciones <- read.delim("calificaciones.csv", sep = ",")
dim(calificaciones)

# Sort alphabetically by judge name; first get the judges'
# names without the "Equipo" column
header <- names(calificaciones)[2:length(names(calificaciones))]
# Then sort them
header <- sort(header)
# Then re-attach the Equipo column
header <- c("Equipo", header)
# Then copy the grades from the original data frame into a
# data frame with the sorted names
calif <- data.frame(temp = 1:5)  # temporary column to build the data.frame
for (name in header){
    calif[[name]] <- calificaciones[[name]]
}
# Remove the temporary column
calif <- subset(calif, select = -c(temp))

# Rename the columns as JuezX. Since each judge gives 2 grades,
# each gets the suffix "Ord" or "Cont" depending on the grade type
col.n <- c()
for (n in 1:22){
    if (n %% 2 == 1) {
        col.n <- c(col.n, paste(ceiling(n/2), "Cont", sep = "."))
    } else {
        col.n <- c(col.n, paste(ceiling(n/2), "Ord", sep = "."))
    }
}
# Build the new names
new.colnames <- paste("Juez", col.n, sep = "")
# Rename the data frame columns
colnames(calif) <- c("Equipo", new.colnames)
names(calif)

# Sort and rename the teams
calif <- calif[order(calif$Equipo), ]
rownames(calif) <- c("AAA", "BBB", "CCC", "DDD", "EEE")
calif["AAA", ]

# a.1: Which team is best rated on the continuous grades?
# Pick the column names that hold a continuous grade
cont <- subset(colnames(calif), grepl("Cont", colnames(calif)))
# Get the team with the highest continuous grades
which.max(rowMeans(calif[, colnames(calif) %in% cont]))
# A: Team CCC
# a.2: The worst?
which.min(rowMeans(calif[, colnames(calif) %in% cont]))
# A: Team AAA

# The team in the middle = the one closest to the overall mean
means <- rowMeans(calif[, colnames(calif) %in% cont])
which.min(abs(means - mean(means)))
# A: BBB

# b. The same but with the ordinals
ord <- subset(colnames(calif), grepl("Ord", colnames(calif)))
# Get the team with the lowest ordinal grades (1 = best rank)
which.min(rowMeans(calif[, colnames(calif) %in% ord]))
# A: Team CCC
# b.2: The worst?
which.max(rowMeans(calif[, colnames(calif) %in% ord]))
# A: Team AAA
# The team in the middle
means <- rowMeans(calif[, colnames(calif) %in% ord])
which.min(abs(means - mean(means)))
# A: BBB

# c.1: Team with the most disagreement = largest standard deviation
which.max(apply(calif[, colnames(calif) %in% cont], 1, sd))
# A: EEE
# c.2: Most consistent team = smallest standard deviation
which.min(apply(calif[, colnames(calif) %in% cont], 1, sd))
# A: CCC

# d.1: Which judge would you say was the harshest?
which.min(apply(calif[, colnames(calif) %in% cont], 2, mean))
# A: Judge 6
# d.2: Which judge would you say was the most lenient?
which.max(apply(calif[, colnames(calif) %in% cont], 2, mean))
# A: Judge 4

# e: Which judge showed the most variation?
which.max(apply(calif[, colnames(calif) %in% cont], 2, sd))
# A: Judge 5
/hw2/2.R
no_license
alfonsokim/comp-stat-class
R
false
false
7,534
r
setwd("~/r-workspace/comp-stats/hw2/")
data <- read.delim("finanzas.dat")
colnames(data) <- c("Compania", "Ganancia", "Libros", "Precio")
library(ggplot2)
ggplot(data, aes(x=Libros, y=Ganancia)) + geom_point() + geom_smooth()

## =====================================================
## == 2. Company stock data                           ==
## -----------------------------------------------------
# a. Contingency tables
# For book value
with(data, table(Compania, cut(data$Libros, breaks=4)))
# For share price
with(data, table(Compania, cut(data$Precio, breaks=4)))

# a.1 Absolute frequencies by book value
factor.Ganancia <- factor(cut(data$Libros, breaks=4))
table.Ganancia <- as.data.frame(table(factor.Ganancia))
table.Ganancia <- transform(table.Ganancia, cumFreq=cumsum(Freq),
                            relative=prop.table(Freq))
table.Ganancia
# Dispersion is greatest when earnings are in
# (3.08, 6.58]. No units are given...

# a.1 Absolute frequencies by share price
factor.Precio <- factor(cut(data$Precio, breaks=4))
table.Precio <- as.data.frame(table(factor.Precio))
table.Precio <- transform(table.Precio, cumFreq=cumsum(Freq),
                          relative=prop.table(Freq))
table.Precio

## -----------------------------------------------------
# b. Scatter plots
pairs(~Ganancia+Libros+Precio, data)

## =====================================================
## == 2. Sunspot data                                 ==
## -----------------------------------------------------
## a. Mean, standard deviation and quartiles
ssn <- read.delim("spot_num.txt", sep=",")
head(ssn, 20)
colnames(ssn)
subset(ssn, YEAR==1749)
ssn.stats.1 <- aggregate(SSN ~ YEAR, data=ssn,
                         FUN=function(x) c(SUM=sum(x), MEAN=mean(x), SD=sd(x),
                                           QUANT=quantile(x, probs=c(0.25, 0.50, 0.75))))
ssn.stats <- data.frame(year=ssn.stats.1$YEAR)
# Fill in the remaining columns of the data.frame
ssn.stats$sum <- ssn.stats.1$SSN[,1]
ssn.stats$mean <- ssn.stats.1$SSN[,2]
ssn.stats$sd <- ssn.stats.1$SSN[,3]
ssn.stats$quant.25 <- ssn.stats.1$SSN[,4]
ssn.stats$quant.50 <- ssn.stats.1$SSN[,5]
ssn.stats$quant.75 <- ssn.stats.1$SSN[,6]
head(ssn.stats)

## b. Cycle variable
ssn.stats$cycle <- c(6:11, rep(1:11, 23), 1:6)
# Cycle id, for the plot labels
c.ids <- c()
for(i in 2:24){
  c.ids <- c(c.ids, paste("Ciclo", rep(i, 11), sep="."))
}
ssn.stats$cycle.id <- c(paste("Ciclo", rep(1, 6), sep="."), c.ids,
                        paste("Ciclo", rep(25, 6), sep="."))
head(ssn.stats, 40)

## c. Count the number of sunspots
ssn.cycles.1 <- aggregate(sum ~ cycle.id, data=ssn.stats,
                          FUN=function(x) c(sum=sum(x)))
str(ssn.cycles.1)
# c.1: The cycle with the fewest sunspots
ssn.cycles.1[which.min(ssn.cycles.1$sum), ]
# A: cycle 25
# c.2: The cycle with the most sunspots
ssn.cycles.1[which.max(ssn.cycles.1$sum), ]
# A: cycle 20

## d. Plot of the number of sunspots per cycle
df.ssn.1 <- data.frame(x=ssn.stats$cycle, val=ssn.stats$sum,
                       ciclo=ssn.stats$cycle.id)
ssn.plot.1 <- ggplot(data=df.ssn.1, aes(x=x, y=val)) +
  geom_line(aes(colour=ciclo)) +
  xlab("Cycle length") + ylab("Number of sunspots") +
  ggtitle("Number of sunspots per cycle")
ssn.plot.1

## e. Plot of the number of sunspots per year of the cycle
df.ssn.2 <- data.frame(x=ssn.stats$cycle, val=ssn.stats$sum,
                       anio=paste("y", ssn.stats$cycle, sep="."))
ssn.plot.2 <- ggplot(data=df.ssn.2, aes(x=x, y=val)) +
  geom_line(aes(colour=anio)) +
  xlab("Year within each cycle") + ylab("Number of sunspots") +
  ggtitle("Number of sunspots per year of the cycle")
ssn.plot.2

## the par function is not compatible with ggplot;
## use grid.arrange from the gridExtra package instead
## uncomment the next line to install the package
#install.packages("gridExtra")
library(gridExtra)
grid.arrange(ssn.plot.1, ssn.plot.2, ncol=2)

## =====================================================
## == 5. Scores and judges data                       ==
## -----------------------------------------------------
## A new header for each judge's ordinal scores was
## added to the calificaciones.csv file.
## -----------------------------------------------------
calificaciones <- read.delim("calificaciones.csv", sep=",")
dim(calificaciones)
# Sort alphabetically by judge name: first get the judge
# names without the "Equipo" column
header <- names(calificaciones)[2:length(names(calificaciones))]
# Then sort them
header <- sort(header)
# Then put the Equipo column back in front
header <- c("Equipo", header)
# Then rebuild the scores from the original dataframe into a
# dataframe with the sorted column names
calif <- data.frame(temp=1:5) # temporary column to seed the data.frame
for(name in header){
  calif[[name]] <- calificaciones[[name]]
}
# Drop the temporary column
calif <- subset(calif, select = -c(temp))
# Rename the columns as JuezX. Since each judge gives 2 scores,
# each gets the suffix "Ord" or "Cont" depending on the score type
col.n <- c()
for(n in 1:22){
  if (n %% 2 == 1) {
    col.n <- c(col.n, paste(ceiling(n/2), "Cont", sep="."))
  } else {
    col.n <- c(col.n, paste(ceiling(n/2), "Ord", sep="."))
  }
}
# Build the new names
new.colnames <- paste("Juez", col.n, sep="")
# Rename the dataframe
colnames(calif) <- c("Equipo", new.colnames)
names(calif)
# Sort and rename the teams
calif <- calif[order(calif$Equipo), ]
rownames(calif) <- c("AAA", "BBB", "CCC", "DDD", "EEE")
calif["AAA",]

# a.1: which team is rated best on the continuous score?
# Pick the column names that hold continuous scores
cont <- subset(colnames(calif), grepl("Cont", colnames(calif)))
# Find the team with the highest continuous scores
which.max(rowMeans(calif[, colnames(calif) %in% cont]))
# A: team CCC
# a.2: the worst?
which.min(rowMeans(calif[, colnames(calif) %in% cont]))
# A: team AAA
# The team in the middle
means <- rowMeans(calif[, colnames(calif) %in% cont])
which.min(abs(means - mean(means)))
# A: BBB

# b. Same as above, but with the ordinal scores
ord <- subset(colnames(calif), grepl("Ord", colnames(calif)))
# Find the team with the lowest ordinal scores
which.min(rowMeans(calif[, colnames(calif) %in% ord]))
# A: team CCC
# b.2: the worst?
which.max(rowMeans(calif[, colnames(calif) %in% ord]))
# A: team AAA
# The team in the middle
means <- rowMeans(calif[, colnames(calif) %in% ord])
which.min(abs(means - mean(means)))
# A: BBB

# c.1: Team with the most disagreement = largest standard deviation
which.max(apply(calif[, colnames(calif) %in% cont], 1, sd))
# A: EEE
# c.2: Most consistent team = smallest standard deviation
which.min(apply(calif[, colnames(calif) %in% cont], 1, sd))
# A: CCC

# d.1: Which judge would you say was the harshest?
which.min(apply(calif[, colnames(calif) %in% cont], 2, mean))
# A: judge 6
# d.2: Which judge would you say was the most lenient?
which.max(apply(calif[, colnames(calif) %in% cont], 2, mean))
# A: judge 4
# e: Which judge showed the most variation?
which.max(apply(calif[, colnames(calif) %in% cont], 2, sd))
# A: judge 5
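The best/worst/most-consistent questions above all reduce to a choice of `apply()` margin; a minimal sketch (toy matrix, not the homework's `calif` table) of the convention: `MARGIN = 1` works across rows (teams), `MARGIN = 2` works down columns (judges).

```r
# Toy score matrix: 2 teams x 3 judges
scores <- matrix(c(8, 9, 7,
                   5, 5, 5), nrow = 2, byrow = TRUE,
                 dimnames = list(c("TeamA", "TeamB"), c("J1", "J2", "J3")))
apply(scores, 1, sd)    # per-team spread: TeamB is perfectly consistent (sd = 0)
apply(scores, 2, mean)  # per-judge mean: how harsh or lenient each judge is
which.min(apply(scores, 1, sd))    # most consistent team
which.max(apply(scores, 2, mean))  # most lenient judge
```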
setwd("~/Dropbox/Yeast_Time_Series/Revisions_1/SOM/")

# list of sigs
Full <- read.table("~/Dropbox/Yeast_Time_Series/Revisions_1/Regression/Pval_table_13tp_ModelA.txt",
                   header = TRUE)
Full$p.week <- -log10(Full$p.week)
sig_snps <- subset(Full, p.week >= 7)
sig_list <- data.frame(sig_snps$chr, sig_snps$pos)
colnames(sig_list) <- c("chr", "pos")

# create trans table
snps2018_v2 <- read.table("~/Dropbox/Yeast_Time_Series/Revisions_1/SNP_table.txt",
                          header = TRUE)
snps2018_v2 <- subset(snps2018_v2,
                      snps2018_v2$ancmaf_0_0 > 0.025 & snps2018_v2$ancmaf_0_0 < 0.975)
snps2018_v2 <- subset(snps2018_v2, chr != "chrmito")

# Filter out "bad" populations (1, 2, 4, 5 and 6)
X <- rep(c(3, 7:12), each = 16)
Y <- c(1:15, 18)
maf <- paste("maf_", X, "_", Y, sep="")
cov <- paste("cov_", X, "_", Y, sep="")
good <- c(rbind(maf, cov))
snps2018_v2 <- snps2018_v2[c("chr","pos","ref","alt","ancmaf_0_0","anccov_0_0", good)]
snps2018_v2 <- merge(snps2018_v2, sig_list)
maf <- snps2018_v2[, c(seq(7, ncol(snps2018_v2), by = 2))]

# find snps where the mean final freq is lower than starting
test <- cbind(snps2018_v2[, c(1:2, 5)], maf[, c(seq(16, ncol(maf), by = 16))])
mean_alt <- rowMeans(test[, 4:ncol(test)])
test2 <- cbind(test, mean_alt)
test3 <- subset(test2, mean_alt < ancmaf_0_0) # lower than starting
test4 <- subset(test2, mean_alt > ancmaf_0_0) # higher than starting
lower <- test3[, 1:2]
higher <- test4[, 1:2]

# subset snps ending lower than they start and transform them
snps2018_v3 <- merge(snps2018_v2, lower)
snps2018_v3[, c(seq(5, ncol(snps2018_v3), by = 2))] <-
  1 - snps2018_v3[, c(seq(5, ncol(snps2018_v3), by = 2))]
snps2018_v3[, 3:4] <- snps2018_v3[, 4:3]

# subset snps ending higher, combining with those ending lower
snps2018_v4 <- merge(snps2018_v2, higher)

# recombine
snps2018_v5 <- rbind(snps2018_v3, snps2018_v4)

# make new traj table
maf <- snps2018_v5[, c(seq(7, ncol(snps2018_v5), by = 2))]

# downsample to original
X <- rep(c(3, 7:12), each = 12)
Y <- c(1, 2, 3, 5, 7, 9, 10, 11, 12, 13, 15, 18)
maf <- paste("maf_", X, "_", Y, sep="")
cov <- paste("cov_", X, "_", Y, sep="")
the_rest <- c(rbind(maf, cov))
snps2018_v5 <- snps2018_v5[c("chr","pos","ref","alt","ancmaf_0_0","anccov_0_0", the_rest)]
snps2018_v5 <- merge(snps2018_v5, sig_list)
maf <- snps2018_v5[, c(seq(7, ncol(snps2018_v5), by = 2))]

Sig_Traj_table_uni_dir_ds13 <- data.frame(matrix(ncol = 1, nrow = 13))
for (i in 1:nrow(maf)) {
  chr <- snps2018_v5[i, 1]
  position <- snps2018_v5[i, 2]
  pop <- c(3, 7:12)
  col_names <- paste("pop", pop, chr, position, sep = "_")
  maf_snp <- as.numeric(maf[i, ])
  matrix_maf <- data.frame(matrix(maf_snp, nrow = 12, ncol = 7))
  # one ancestral frequency per population (one extra row on top of the 12 time points)
  ancestor_maf <- rep(snps2018_v5$ancmaf_0_0[i], 7)
  maf_final <- rbind(ancestor_maf, matrix_maf)
  colnames(maf_final) <- col_names
  Sig_Traj_table_uni_dir_ds13 <- cbind(Sig_Traj_table_uni_dir_ds13, maf_final)
  print(i)
}
Sig_Traj_table_uni_dir_ds13 <- Sig_Traj_table_uni_dir_ds13[, 2:ncol(Sig_Traj_table_uni_dir_ds13)]
Trans_traj_sig_uni_dir_ds13 <- t(Sig_Traj_table_uni_dir_ds13)
save(Trans_traj_sig_uni_dir_ds13, file = "SOM_table_sig_13tp.rda")
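The trajectory loop above leans on `matrix()`'s column-major fill to turn one SNP's flattened frequency vector into a time-points-by-populations block before prepending the ancestral row. A toy-sized sketch (3 time points × 2 populations instead of the script's 12 × 7; the frequencies are made up):

```r
# Vector ordered population-by-population, as the maf columns are
maf_snp <- c(0.1, 0.2, 0.3,   # population 1 trajectory
             0.4, 0.5, 0.6)   # population 2 trajectory
# matrix() fills column-by-column, so each population lands in its own column
m <- matrix(maf_snp, nrow = 3, ncol = 2)
m[, 1]  # population 1 trajectory: 0.1 0.2 0.3
# prepend the shared ancestral frequency as an extra first row
traj <- rbind(ancestor = rep(0.05, 2), m)
traj
```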
/SOM/Sig_SOM_Table_13tp.R
no_license
mphillips67/DeepSeq_TimeSeries_Project
R
false
false
3,024
r
#' build a long data frame with useful information about the genotypes
#'
#' **Warning**: very memory intensive, only use on small datasets/subsets
#'
#' @keywords internal
bed_convert_genotypes_to_data_frame <- function(bo, subset = TRUE) {
  genos <- bed_genotypes(bo, subset)
  genos_str <- bed_genotypes_as_strings(bo, subset)
  bim <- bed_bim_df(bo, subset)
  fam <- bed_fam_df(bo, subset)
  sample_ids <- rownames(genos)

  .snp_to_df <- function(i) {
    gi <- genos[, i]
    data.frame(
      bim[i, ],
      SAMPLE_ID = sample_ids,
      GENO_INT = gi,
      GENO_STR = genos_str[, i],
      DOMINANT = recode_genotypes(gi, 'dominant'),
      RECESSIVE = recode_genotypes(gi, 'recessive'),
      stringsAsFactors = FALSE,
      row.names = NULL
    )
  }

  dfs <- lapply(1:ncol(genos), .snp_to_df)
  do.call(rbind.data.frame, dfs)
}
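A minimal, self-contained sketch of the per-SNP `lapply()` + `do.call(rbind.data.frame, ...)` wide-to-long pattern the function uses (a toy matrix stands in for the plinker genotype objects, which are not loaded here). It also shows why the function is memory intensive: every sample × SNP pair becomes one row.

```r
# Toy genotype matrix: 2 samples (rows) x 2 SNPs (columns)
genos <- matrix(0:3, nrow = 2,
                dimnames = list(c("s1", "s2"), c("snpA", "snpB")))
# One small data frame per SNP, then stack them into one long data frame
dfs <- lapply(seq_len(ncol(genos)), function(i) {
  data.frame(SNP = colnames(genos)[i],
             SAMPLE_ID = rownames(genos),
             GENO_INT = genos[, i],
             row.names = NULL)
})
long <- do.call(rbind.data.frame, dfs)
long  # one row per (SNP, sample) pair -- 4 rows here
```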
/plinker/R/convert.R
no_license
quartzbio/plinker_pkg
R
false
false
847
r
\name{summary.boral}
\alias{summary.boral}
\alias{print.summary.boral}
\title{Summary of fitted boral object}
\description{A summary of the fitted boral objects including the type of model
fitted, e.g., error distribution, number of latent variables, parameter
estimates, values of the information criteria (if applicable), and so on.}
\usage{
\method{summary}{boral}(object, est = "median", ...)

\method{print}{summary.boral}(x, ...)
}
\arguments{
  \item{object}{An object of class "boral".}

  \item{x}{An object of class "boral".}

  \item{est}{A choice of whether to print the posterior median
  (\code{est == "median"}) or posterior mean (\code{est == "mean"}) of the
  parameters.}

  \item{...}{Not used.}
}
\value{
Attributes of the model fitted, parameter estimates, and values of the
information criteria if \code{calc.ics = TRUE} in the boral object, and
posterior probabilities of including individual and/or grouped coefficients
in the model based on SSVS if appropriate.
}
\author{
Francis K.C. Hui \email{fhui28@gmail.com}
}
\seealso{
\code{\link{boral}} for the fitting function on which \code{summary} is
applied, \code{\link{get.measures}} for details regarding the information
criteria returned.
}
\examples{
\dontrun{
## NOTE: The values below MUST NOT be used in a real application;
## they are only used here to make the examples run quick!!!
example.mcmc.control <- list(n.burnin = 10, n.iteration = 100, n.thin = 1)

library(mvabund) ## Load a dataset from the mvabund package
data(spider)
y <- spider$abun

spider.fit.nb <- boral(y, family = "negative.binomial", num.lv = 2,
                       row.eff = "fixed", mcmc.control = example.mcmc.control)

summary(spider.fit.nb)
}
}
/man/summary.boral.Rd
no_license
mbedward/boral
R
false
false
1,702
rd
testlist <- list(
  A = structure(c(2.31584307392677e+77, 9.53818252170339e+295,
                  1.22810533503224e+146, 4.12396251261199e-221, 0, 0, 0),
                .Dim = c(1L, 7L)),
  B = structure(0, .Dim = c(1L, 1L)))
result <- do.call(multivariance:::match_rows, testlist)
str(result)
/multivariance/inst/testfiles/match_rows/AFL_match_rows/match_rows_valgrind_files/1613110050-test.R
no_license
akhikolla/updatedatatype-list3
R
false
false
257
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/adaps.R
\encoding{UTF-8}
\name{adaps}
\alias{adaps}
\alias{adapsBATCH}
\alias{adaps2}
\title{adaps, adaps2, and adapsBATCH}
\source{
\enumerate{
   \item r - How can I check if a file is empty? - Stack Overflow answered by
   Konrad Rudolph and edited by Geekuna Matata on Apr 23 2014. See
   \url{https://stackoverflow.com/questions/23254002/how-can-i-check-if-a-file-is-empty}.
   \item r - Better error message for stopifnot? - Stack Overflow answered by
   Andrie on Dec 1 2011. See
   \url{https://stackoverflow.com/questions/8343509/better-error-message-for-stopifnot}.
   \item RDocumentation: TclInterface {tcltk}. See
   \url{https://www.rdocumentation.org/packages/tcltk/versions/3.3.1}.
   \item James Wettenhall & Philippe Grosjean, File Open/Save dialogs in R
   tcltk, December 01, 2015. See
   \url{https://web.archive.org/web/20160521051207/http://www.sciviews.org/recipes/tcltk/TclTk-file-open-save-dialogs/}.
   Retrieved thanks to the Internet Archive: Wayback Machine
   \item r - read csv files and perform function, then bind together - Stack
   Overflow answered by bjoseph on Jan 8 2015. See
   \url{https://stackoverflow.com/questions/27846715/read-csv-files-and-perform-function-then-bind-together}.
   \item multiple output filenames in R - Stack Overflow asked and edited by
   Gabelins on Feb 1 2013. See
   \url{https://stackoverflow.com/questions/14651594/multiple-output-filenames-in-r}.
   \item r - Regex return file name, remove path and file extension - Stack
   Overflow answered and edited by Ananda Mahto on Feb 25 2013. See
   \url{https://stackoverflow.com/questions/15073753/regex-return-file-name-remove-path-and-file-extension/15073919}.
   \item R help - How to change the default Date format for write.csv function?
   answered by William Dunlap on Dec 28, 2009. See
   \url{https://hypatia.math.ethz.ch/pipermail/r-help/2009-December/416010.html}.
   \item RDocumentation: strptime {base}. See
   \url{https://www.rdocumentation.org/packages/base/versions/3.3.1/topics/strptime}.
   \item convert date and time string to POSIX in R - Stack Overflow commented
   by cryo111 on Sep 18 2013. See
   \url{https://stackoverflow.com/questions/18874400/convert-date-and-time-string-to-posix-in-r/18874863}.
}
}
\usage{
adaps(
  file = tk_choose.files(default = "",
    caption = "Select file(s) to open & hold down Ctrl to choose more than 1 file",
    multi = TRUE,
    filters = matrix(c("ADAPS file", ".rdb", "ADAPS file", ".RDB"), 4, 2,
      byrow = TRUE)),
  interactive = TRUE,
  overwrite = TRUE
)

adapsBATCH(
  path = tk_choose.dir(caption = "Select directory with the ADAPS .rdb files"),
  overwrite = TRUE
)

adaps2(file, overwrite = TRUE)
}
\arguments{
\item{file}{Input ADAPS .rdb file(s) to be selected through a file dialog.}

\item{interactive}{If interactive is \code{TRUE}, then the user will select
the filenames(s) to use for saving with the file dialog. In order to select
more than one file, the user must hold down the Ctrl (Control) button while
mouse clicking the chosen files. If interactive is \code{FALSE}, then the
user will select the directory, via the directory dialog, to use for saving
and the original filenames will be used.}

\item{overwrite}{If \code{TRUE}, overwrite any existing spreadsheet.}

\item{path}{Directory path of ADAPS .rdb files to be selected through a
directory dialog. The user will be asked where to find the ADAPS .rdb files
& then the user will be asked where to save the ADAPS .xlsx files.}
}
\value{
ADAPS .xlsx file(s)
}
\description{
adaps, adaps2, and adapsBATCH process raw Automated Data Processing System
(ADAPS) .rdb files from the U.S. Geological Survey (USGS) National Water
Information System (NWIS). These functions are only for continuous ADAPS
data of the following parameters: discharge (00060), FNU turbidity (63680),
and NTRU turbidity (63676 from 63680).
}
\details{
adaps function opens single or multiple raw ADAPS .rdb file(s) to modify the
format and then exports the file(s) in .xlsx format. This is done for a
single file or multiple files that the user selects with a file dialog.

adaps2 function opens a single raw ADAPS .rdb file to modify the format and
then exports the file in .xlsx format. This is done for a single file that
the user selects without a file dialog.

adapsBATCH function opens raw ADAPS .rdb files, from a directory, to modify
the format and then exports the files in .xlsx format. This is done in a
BATCH mode (whole directory of ADAPS .rdb files) using a directory dialog.

adaps, adaps2, and adapsBATCH functions perform the same processes on the
raw ADAPS .rdb files: 1) Read in the file and remove the 1st 4 or 5 lines
depending on whether NTRU data are present or not, 2) create 4 or 5 columns
(depending on whether NTRU data are present or not) based on the 1st 4 or 5
lines, and 3) export the modified file in .xlsx format.

The following lines are representative of the .rdb format used in the files
that these functions can operate on. Note: ntru may not be present. If so,
then there will only be 3 cases of 16N in the last row. The last row will be
removed in the final spreadsheet.

\tabular{ccccc}{
DATETIME \tab ght \tab cfs \tab fnu \tab ntru\cr
19D \tab 16N \tab 16N \tab 16N \tab 16N
}
}
\examples{
\dontrun{
library("ie2misc")

# Example to check the input file format
# Copy and paste the following code into the R console if you
# wish to see the ADAPS .rdb input file format.
# Note the number of lines and the row headings.
file.show(system.file("extdata", "spring_creek_partial.rdb",
  package = "ie2misc"), title = paste("spring_creek_partial.rdb"))
# opens the .rdb file using the default text editor

# Examples to change (an) ADAPS .rdb file(s) interactively and
# non-interactively
adaps2(system.file("extdata", "spring_creek_partial.rdb",
  package = "ie2misc"))

adaps() # default where interactive = TRUE
# Follow the file dialog instructions

adaps(interactive = FALSE)
# Follow the file dialog instructions

# Example to change a directory of ADAPS .rdb files
adapsBATCH()
# Follow the file dialog instructions
}
}
/man/adaps.Rd
permissive
cran/ie2misc
R
false
true
6,165
rd
##' Robin Elahi
##' 16 April 2016
##' foreach
##'

library(foreach)

# the default output of foreach is a list
x <- foreach(i = 1:3) %do% sqrt(i)
x

x <- foreach(a = 1:3, b = rep(10, 3)) %do% (a + b)
x

x <- foreach(a = 1:1000, b = rep(10, 2)) %do% {
  a + b
}
x

##### Using .combine #####
# Combining numeric output into a vector
x <- foreach(i = 1:3, .combine = 'c') %do% exp(i)
x

# Combining vectors into a matrix
x <- foreach(i = 1:4, .combine = 'cbind') %do% rnorm(4)
x

# Can also use + or *
foreach(i = 1:4, .combine = '+') %do% rnorm(4)
foreach(i = 1:4, .combine = '*') %do% rnorm(4)

# Specify a user written function to combine results
cfun <- function(a, b) NULL
foreach(i = 1:4, .combine = 'cfun') %do% rnorm(4)

# If the function accepts many arguments, use .multicombine
cfun <- function(...) NULL
x <- foreach(i = 1:4, .combine = 'cfun', .multicombine = TRUE) %do% rnorm(4)
x
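All of the examples in this lesson use the sequential `%do%` operator; the same loops parallelize with `%dopar%` once a backend is registered. A minimal sketch, assuming the `doParallel` package is installed (it is not used elsewhere in this file):

```r
library(foreach)
library(doParallel)

# register a 2-worker backend; %dopar% then sends iterations to the workers
cl <- makeCluster(2)
registerDoParallel(cl)
x <- foreach(i = 1:4, .combine = 'c') %dopar% sqrt(i)
stopCluster(cl)
x
```

Without a registered backend, `%dopar%` falls back to sequential execution with a warning, so the result is the same either way.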
/foreach-lesson.R
permissive
elahi/coding
R
false
false
897
r
setGeneric("boxPlot", signature = c("Object", "covariate1", "covariate2"),
  function(Object, covariate1, covariate2, file = "boxplots.pdf")
    standardGeneric("boxPlot"))

setMethod("boxPlot", signature = c("Dataset", "character", "missing"),
  function(Object, covariate1, file = "boxplots.pdf") {
    pdf(file)
    for (i in seq(nrow(Object))) {
      fit <- aov(x ~ f, data = data.frame(
        "x" = exprs(Object)[i, ],
        "f" = factor(pData(Object)[, covariate1])))
      boxplot(exprs(Object)[i, ] ~ factor(pData(Object)[, covariate1]),
        main = paste("Name:", featureNames(Object)[i],
          "\nAnova P-value:",
          signif(summary(fit)[[1]]["f", "Pr(>F)"], 2)))
    }
    dev.off()
  })

setMethod("boxPlot", signature = c("Dataset", "character", "character"),
  function(Object, covariate1, covariate2, file = "boxplots.pdf") {
    pdf(file)
    previous <- par(mai = c(0.5, 0.5, 1, 0.2))
    for (i in seq(nrow(Object))) {
      fit <- aov(x ~ f1 * f2, data = data.frame(
        "x" = exprs(Object)[i, ],
        "f1" = factor(pData(Object)[, covariate1]),
        "f2" = factor(pData(Object)[, covariate2])))
      p1 <- signif(summary(fit)[[1]][1, "Pr(>F)"], 2)
      p2 <- signif(summary(fit)[[1]][2, "Pr(>F)"], 2)
      p3 <- signif(summary(fit)[[1]][3, "Pr(>F)"], 2)
      boxplot(exprs(Object)[i, ] ~ factor(paste(pData(Object)[, covariate1],
          pData(Object)[, covariate2])),
        main = paste("Name:", featureNames(Object)[i],
          "\n", covariate1, "p-value", p1,
          "\n", covariate2, "p-value", p2,
          "\n", covariate1, covariate2, "interaction p-value", p3))
    }
    par(previous)
    dev.off()
  })
/R/boxPlot.R
no_license
zzxxyui/metadarclean
R
false
false
1,624
r
library(evolqg)

### Name: MonteCarloRep
### Title: Parametric repeatabilities with covariance or correlation
###   matrices
### Aliases: MonteCarloRep
### Keywords: montecarlo parametricsampling repeatability

### ** Examples

cov.matrix <- RandomMatrix(5, 1, 1, 10)
MonteCarloRep(cov.matrix, sample.size = 30, RandomSkewers, iterations = 20)
MonteCarloRep(cov.matrix, sample.size = 30, RandomSkewers, num.vectors = 100,
              iterations = 20, correlation = TRUE)
MonteCarloRep(cov.matrix, sample.size = 30, MatrixCor, correlation = TRUE)
MonteCarloRep(cov.matrix, sample.size = 30, KrzCor, iterations = 20)
MonteCarloRep(cov.matrix, sample.size = 30, KrzCor, correlation = TRUE)

# Creating repeatability vector for a list of matrices
mat.list <- RandomMatrix(5, 3, 1, 10)
laply(mat.list, MonteCarloRep, 30, KrzCor, correlation = TRUE)

## Multiple threads can be used with doMC library
# library(doParallel)
## Windows:
# cl <- makeCluster(2)
# registerDoParallel(cl)
## Mac and Linux:
# registerDoParallel(cores = 2)
# MonteCarloRep(cov.matrix, 30, RandomSkewers, iterations = 100, parallel = TRUE)
/data/genthat_extracted_code/evolqg/examples/MonteCarloRep.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
1,108
r
library(dplyr)

##################################################################
#
# Processing Script for MOVING all CMT/CAPT Science data
# Created by Jenna Daly
# On 03/22/2017
#
##################################################################

# Setup environment
sub_folders <- list.files()
raw_from_github_location <- grep("raw$", sub_folders, value = T)
raw_from_github_path <- (paste0(getwd(), "/", raw_from_github_location))
top_level_dp_path <- (paste0(getwd(), "/"))

# lapply(filestocopy, function(x) file.copy(paste(origindir, x, sep = "/"),
#   paste(targetdir, x, sep = "/"), recursive = FALSE, copy.mode = TRUE))

all_dp_categories <- c("All-Students", "_ELL", "Gender", "High-Needs",
                       "2-level", "3-level", "Race", "Special-Education")

all_dp_folders <- c("CMT-CAPT-Science-by-All-Students",
                    "CMT-CAPT-Science-by-ELL",
                    "CMT-CAPT-Science-by-Gender",
                    "CMT-CAPT-Science-by-High-Needs",
                    "CMT-CAPT-Science-by-Meal-Eligibility-Lvl-2",
                    "CMT-CAPT-Science-by-Meal-Eligibility-Lvl-3",
                    "CMT-CAPT-Science-by-Race-Ethnicity",
                    "CMT-CAPT-Science-by-Special-Education-Status")

for (i in 1:length(all_dp_folders)) {
  current_dp_path <- (paste0(top_level_dp_path, all_dp_folders[i], "/", "raw")) ## destination
  current_raw_files <- dir(raw_from_github_path, pattern = all_dp_categories[i]) ## filetocopy
  copy_current <- lapply(current_raw_files,
    function(x) file.copy(paste(raw_from_github_path, x, sep = "/"),
                          paste(current_dp_path, x, sep = "/"),
                          recursive = FALSE, copy.mode = TRUE))
}
/move_raw_CMT.R
no_license
CT-Data-Collaborative/edsight-data-setup
R
false
false
1,843
r
library(tidyverse)
library(rvest)
library(lubridate)

get_year_page <- function(year) {
  "http://www.ecb.europa.eu/press/key/date/%s/html/index.en.html" %>%
    sprintf(year) %>%
    read_html()
}

get_date <- . %>%
  html_nodes("dt") %>%
  html_text() %>%
  dmy()

get_title <- . %>%
  html_nodes("dd > span.doc-title") %>%
  html_text()

get_subtitle <- . %>%
  html_nodes("div.doc-subtitle") %>%
  html_text()

get_url <- . %>%
  html_nodes("dd > span.doc-title > a") %>%
  html_attr("href") %>%
  {paste0("http://www.ecb.europa.eu", .)}

get_metadata <- function(page) {
  data_frame(
    date = get_date(page),
    title = get_title(page),
    subtitle = get_subtitle(page),
    url = get_url(page)
  ) %>%
    separate(title, c("speaker", "title"), ":", extra = "merge") %>%
    mutate_if(is.character, str_squish)
}

get_year <- . %>%
  get_year_page() %>%
  get_metadata()

get_text <- . %>%
  html_nodes("article > p") %>%
  html_text() %>%
  paste(collapse = "\n")

w_msg <- function(f) {
  function(...) {
    dots <- list(...)
    message("Processing: ", dots[[1]])
    f(...)
  }
}

w_delay <- function(f, delay = 0.5) {
  function(...) {
    Sys.sleep(delay)
    f(...)
  }
}

politely_get_year <- w_delay(w_msg(get_year))

## IO
ecb_speeches <- map_df(1999:year(Sys.Date()), politely_get_year)

ecb <- ecb_speeches %>%
  head() %>%
  mutate(page = map(url, read_html),
         text = map_chr(page, get_text))
/scripts/get_ecb_speeches.R
no_license
fangduan/scraping_workshop
R
false
false
1,436
r
library(caret)
library(ggplot2)
library(doParallel)

## Begin parallel processing
cl <- makePSOCKcluster(5)
registerDoParallel(cl)

set.seed(107)
train <- read.csv("lab4-train.csv", header = TRUE, sep = ",")
test <- read.csv("lab4-test.csv", header = TRUE, sep = ",")

## splitting the data into train and test x and y respectively
trainX = train[, 1:4]
trainY = train[, 5]
testX = test[, 1:4]
testY = test[, 5]

# Without this grid, training goes on for an extremely long time.
# Increase mfinal and maxdepth for longer train time and better performance.
fitGrid_2 <- expand.grid(mfinal = (1:3) * 3,
                         maxdepth = c(1, 3),
                         coeflearn = c("Breiman"))

# Without this, training goes on for an extremely long time.
fitControl_2 <- trainControl(method = "repeatedcv",
                             number = 5,
                             repeats = 3)

control <- trainControl(method = "repeatedcv", number = 10, repeats = 3) ## train control for RF
seed <- 7
metric <- "Accuracy"

set.seed(seed)
fit.adaboost <- train(trainX, trainY, method = "AdaBoost.M1", ## Train AdaBoost.M1
                      trControl = fitControl_2,
                      tuneGrid = fitGrid_2,
                      verbose = TRUE)
predict.adaboost <- predict(fit.adaboost, test) ## use trained AdaBoost model on the test data

set.seed(seed)
fit.rf <- train(trainX, trainY, method = "rf", metric = metric, trControl = control) ## train RF
predict.rf <- predict(fit.rf, test) ## use trained RF model on the test data

# summarize results
print(fit.adaboost)
print(fit.rf)

## confusion matrices
confusionMatrix(predict.rf, testY)
confusionMatrix(predict.adaboost, testY)

stopCluster(cl) ## finish parallel processing
/RF_and_AdaBoost_M1.R
no_license
JohnRachid/EnsembleLearning
R
false
false
1,771
r
% Generated by roxygen2 (4.0.0): do not edit by hand
\name{log.phi.nbp}
\alias{log.phi.nbp}
\title{(private) A NBP dispersion model}
\usage{
\method{log}{phi.nbp}(par, pi, pi.offset = 1e-04)
}
\arguments{
\item{par}{a vector of length 2, the intercept and the slope of the log
linear model (see Details).}

\item{pi}{a vector of length m, estimated mean relative frequencies}

\item{pi.offset}{a number}
}
\value{
a vector of length m, log dispersion.
}
\description{
Specify a NBP dispersion model. The parameters of the specified model are to
be estimated from the data using the function \code{optim.pcl}.
}
\details{
Under this NBP model, the log dispersion is modeled as a linear function of
the log mean relative frequencies (\code{pi}):
log(phi) = par[1] + par[2] * log(pi/pi.offset), where the default value of
\code{pi.offset} is 1e-4.
}
/man/log.phi.nbp.Rd
no_license
diystat/NBPSeq
R
false
false
859
rd
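The log-linear dispersion model in the \details section above can be evaluated directly. Below is a minimal sketch of that formula; the name `log_phi_nbp_sketch` is hypothetical and is not the NBPSeq package's actual `log.phi.nbp()` method.

```r
# Sketch of the NBP log-dispersion model described above:
#   log(phi) = par[1] + par[2] * log(pi / pi.offset)
# log_phi_nbp_sketch is a hypothetical helper, not NBPSeq's method.
log_phi_nbp_sketch <- function(par, pi, pi.offset = 1e-4) {
  par[1] + par[2] * log(pi / pi.offset)
}

# At pi == pi.offset the log term vanishes, so log(phi) equals the intercept
log_phi_nbp_sketch(c(1, -0.5), pi = c(1e-4, 1e-3))
# first element is exactly 1; second is 1 - 0.5 * log(10)
```

With a negative slope `par[2]`, the model captures the common pattern of dispersion decreasing with mean relative frequency.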
# This file was generated by Rcpp::compileAttributes
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393

.mult3sum <- function(x, y, z) {
    .Call('lifecontingencies_mult3sum', PACKAGE = 'lifecontingencies', x, y, z)
}

.mult2sum <- function(x, y) {
    .Call('lifecontingencies_mult2sum', PACKAGE = 'lifecontingencies', x, y)
}

.fExnCpp <- function(T, y, n, i) {
    .Call('lifecontingencies_fExnCpp', PACKAGE = 'lifecontingencies', T, y, n, i)
}

.fAxnCpp <- function(T, y, n, i, m, k = 1) {
    .Call('lifecontingencies_fAxnCpp', PACKAGE = 'lifecontingencies', T, y, n, i, m, k)
}

.fIAxnCpp <- function(T, y, n, i, m, k = 1) {
    .Call('lifecontingencies_fIAxnCpp', PACKAGE = 'lifecontingencies', T, y, n, i, m, k)
}

.fDAxnCpp <- function(T, y, n, i, m, k = 1) {
    .Call('lifecontingencies_fDAxnCpp', PACKAGE = 'lifecontingencies', T, y, n, i, m, k)
}

.fAExnCpp <- function(T, y, n, i, k = 1) {
    .Call('lifecontingencies_fAExnCpp', PACKAGE = 'lifecontingencies', T, y, n, i, k)
}
/R/RcppExports.R
no_license
MachariaK/lifecontingencies
R
false
false
1,000
r
# httr provides oauth_app(), sign_oauth1.0() and GET();
# jsonlite provides toJSON() and fromJSON()
library(httr)
library(jsonlite)

myapp = oauth_app("interestspark",
                  key = "v1e1v1vWFp2l1bDoGE6xDiqHQ",
                  secret = "sHv0OZioBa2Xi4I4jmwmCM1f9B4ubvMJ6tcCT7LZScnuaa8rXS")
sig = sign_oauth1.0(myapp,
                    token = "3183904713-9Ct7VXRJdAJvL1o5HILwRncG2RKSKGbpM6tWwbS",
                    token_secret = "iHYzXNfjWaZXzV47fgRI9XX8SUKlsZKRwZ8juAThD5ntt")

homeTL = GET("https://api.twitter.com/1.1/statuses/home_timeline.json", sig)
json1 = content(homeTL)
json2 = jsonlite::fromJSON(toJSON(json1))
json2[1, 1:4]

install.packages("rjson")
library(rjson)
/Coursera Data Science class codes/Getting and cleaning data/20150419 Reading APIs.R
no_license
rsankowski/usefulRcodes
R
false
false
536
r
# if (!requireNamespace("BiocManager", quietly = TRUE))
#     install.packages("BiocManager")
# BiocManager::install("monocle", version = "3.8")

trajectory_seurat <- SubsetData(Seuratset, ident.use = c(0:3))
trajectory_seurat@raw.data <- trajectory_seurat@raw.data[, rownames(trajectory_seurat@meta.data)]

gene_annotation <- as.data.frame(rownames(trajectory_seurat@raw.data))
colnames(gene_annotation) <- "ENSG_ID"
rownames(gene_annotation) <- gene_annotation$ENSG_ID

library(monocle)
set.seed(12345)
pd <- new("AnnotatedDataFrame", data = trajectory_seurat@meta.data)
fd <- new("AnnotatedDataFrame", data = gene_annotation)
cds <- newCellDataSet(trajectory_seurat@raw.data,
                      phenoData = pd,
                      featureData = fd,
                      expressionFamily = negbinomial.size())

cds <- estimateSizeFactors(cds)
cds <- estimateDispersions(cds)

cds <- detectGenes(cds, min_expr = 0.1)
print(head(fData(cds)))
expressed_genes <- row.names(subset(fData(cds), num_cells_expressed >= 5))

set.seed(123)
cds <- reduceDimension(cds, max_components = 2, num_dim = 15,
                       norm_method = 'log',
                       reduction_method = 'tSNE', verbose = T)
cds$clusters <- trajectory_seurat@ident

disp_table <- dispersionTable(cds)
trajectory_sceset <- Convert(from = trajectory_seurat, to = "sce")

### Select Highly variable genes (feature selection)
var.fit <- trendVar(trajectory_sceset, parametric = TRUE, use.spikes = FALSE) #, design = batchDesign)
var.out <- decomposeVar(trajectory_sceset, var.fit)
trajectory_hvg <- var.out[which(var.out$FDR < 0.05 & var.out$bio > .01), ] # var.out$bio > .01
dim(trajectory_hvg)

ordering_genes <- rownames(trajectory_hvg)
cds <- setOrderingFilter(cds, ordering_genes)
cds <- reduceDimension(cds, method = 'DDRTree')
cds <- orderCells(cds)

# Drawing trajectory plot
plot_cell_trajectory(cds, color_by = "clusters")

df_monocle2 <- as.data.frame(cbind(cds$Pseudotime, cds$clusters))
colnames(df_monocle2) <- c("pseudotime", "clusters")

library(ggbeeswarm)
ggplot(df_monocle2, aes(x = pseudotime, y = cds$clusters, colour = cds$clusters)) +
  geom_quasirandom(groupOnX = FALSE) +
  theme_bw() +
  theme(panel.grid.major = element_blank(),
        panel.grid.minor = element_blank(),
        axis.line = element_line(size = 1),
        axis.ticks = element_line(size = 1),
        legend.title = element_blank()
        # legend.key=element_blank(),
        ) +
  # scale_color_tableau() +
  # theme_classic() +
  xlab("Diffusion map pseudotime (first diffusion map component)") +
  ylab("clusters") +
  ggtitle("Cells ordered by diffusion map pseudotime")

identical(colnames(cds@reducedDimS), rownames(Seuratset@meta.data))
DimPlot(Seuratset, reduction.use = "umap", do.label = TRUE)

# Trajectory analysis - validation by marker gene expression
library(ggplot2)
library(dplyr)
TGgene = "ENSG00000112486"
TGgeneExpression = trajectory_seurat@data[TGgene, ]
tibble(x = cds@reducedDimS[1, ],
       y = cds@reducedDimS[2, ],
       TGgeneExpression = trajectory_seurat@data[TGgene, ]) %>%
  ggplot(aes(x = x, y = y, colour = TGgeneExpression)) +
  geom_point(size = 0.01) +
  ggtitle("CCR6") +
  # scale_colour_gradientn(colours = rev(brewer.pal(11, "RdBu"))) +
  # scale_colour_gradient(low = "grey", high = "red") +
  scale_colour_gradientn(colours = rev(c("#300000", "red", "#eeeeee")),
                         breaks = c(0, max(TGgeneExpression)),
                         labels = c(0, round(as.numeric(max(TGgeneExpression)), digits = 2))) +
  ylab("Component 2") +
  xlab("Component 1") +
  theme_bw() +
  theme(text = element_text(size = 20),
        panel.grid.major = element_blank(),
        panel.grid.minor = element_blank(),
        axis.line = element_line(size = 1),
        axis.ticks = element_line(size = 1),
        legend.text = element_text(size = 10),
        legend.title = element_blank(),
        # legend.key=element_blank(),
        axis.text.x = element_text(size = 10))

# Differential expression analysis by pseudotime
pseudotime_de <- differentialGeneTest(cds, fullModelFormulaStr = "~sm.ns(Pseudotime)", cores = 8)
sig_gene_names <- row.names(subset(pseudotime_de, qval < 0.1))
plot_pseudotime_heatmap(cds[sig_gene_names, ],
                        num_clusters = 4,
                        cores = 1,
                        show_rownames = F)
/source/Practice_set.2/pipeline_2_monocle2.R
no_license
mgood2/BIML-2019-SingleCellRNAseq-
R
false
false
4,381
r
bibentries = c( # nolint start
  breiman_2001 = bibentry("article",
    title = "Random Forests",
    author = "Breiman, Leo",
    year = "2001",
    journal = "Machine Learning",
    volume = "45",
    number = "1",
    pages = "5--32",
    doi = "10.1023/A:1010933404324",
    issn = "1573-0565"
  ),

  ishwaran_2008 = bibentry("article",
    doi = "10.1214/08-aoas169",
    url = "https://doi.org/10.1214/08-aoas169",
    year = "2008",
    month = "9",
    publisher = "Institute of Mathematical Statistics",
    volume = "2",
    number = "3",
    author = "Hemant Ishwaran and Udaya B. Kogalur and Eugene H. Blackstone and Michael S. Lauer",
    title = "Random survival forests",
    journal = "The Annals of Applied Statistics"
  ),

  hothorn_2015 = bibentry("article",
    author = "Torsten Hothorn and Achim Zeileis",
    title = "partykit: A Modular Toolkit for Recursive Partytioning in R",
    journal = "Journal of Machine Learning Research",
    year = "2015",
    volume = "16",
    number = "118",
    pages = "3905-3909",
    url = "http://jmlr.org/papers/v16/hothorn15a.html"
  ),

  hothorn_2006 = bibentry("article",
    doi = "10.1198/106186006x133933",
    url = "https://doi.org/10.1198/106186006x133933",
    year = "2006",
    month = "9",
    publisher = "Informa {UK} Limited",
    volume = "15",
    number = "3",
    pages = "651--674",
    author = "Torsten Hothorn and Kurt Hornik and Achim Zeileis",
    title = "Unbiased Recursive Partitioning: A Conditional Inference Framework",
    journal = "Journal of Computational and Graphical Statistics"
  ),

  jaeger_2019 = bibentry("article",
    doi = "10.1214/19-aoas1261",
    year = "2019",
    month = "9",
    publisher = "Institute of Mathematical Statistics",
    volume = "13",
    number = "3",
    author = "Byron C. Jaeger and D. Leann Long and Dustin M. Long and Mario Sims and Jeff M. Szychowski and Yuan-I Min and Leslie A. Mcclure and George Howard and Noah Simon",
    title = "Oblique random survival forests",
    journal = "The Annals of Applied Statistics"
  )
) # nolint end
/R/bibentries.R
no_license
A-Pai/mlr3extralearners
R
false
false
2,353
r
#' @docType package
#'
#' @name icesAdvice-package
#'
#' @aliases icesAdvice
#'
#' @title Functions Related to ICES Advice
#'
#' @description
#' A collection of functions that facilitate computational steps related to
#' advice for fisheries management, according to ICES guidelines. These include
#' methods for calculating reference points and model diagnostics.
#'
#' @details
#' \emph{Calculate ICES advice:}
#' \tabular{ll}{
#'   \code{\link{DLS3.2}}    \tab DLS method 3.2\cr
#'   \code{\link{icesRound}} \tab rounding method
#' }
#' \emph{Calculate PA reference points and sigma:}
#' \tabular{ll}{
#'   \code{\link{Bpa}}     \tab from Blim\cr
#'   \code{\link{Fpa}}     \tab from Flim\cr
#'   \code{\link{sigmaCI}} \tab from confidence interval\cr
#'   \code{\link{sigmaPA}} \tab from PA reference points
#' }
#' \emph{Other calculations:}
#' \tabular{ll}{
#'   \code{\link{agesFbar}} \tab suitable age range for Fbar\cr
#'   \code{\link{mohn}}     \tab Mohn's rho retrospective diagnosis
#' }
#' \emph{Read and write files:}
#' \tabular{ll}{
#'   \code{\link{read.dls}}  \tab read \code{DLS3.2} results from file\cr
#'   \code{\link{write.dls}} \tab write \code{DLS3.2} results to file
#' }
#' \emph{Example tables:}
#' \tabular{ll}{
#'   \code{\link{gss}}   \tab Greater silver smelt catch at age\cr
#'   \code{\link{shake}} \tab Southern hake retro
#' }
#'
#' @author Arni Magnusson, Colin Millar, and Anne Cooper.
#'
#' @references
#' ICES advice: \url{https://ices.dk/advice}

NA
/R/icesAdvice-package.R
cran/icesAdvice
\name{quantityIndex}
\alias{quantityIndex}
\title{Calculate Quantity Indices}

\description{
   Calculates a Laspeyres, Paasche or Fisher quantity index.
}

\usage{quantityIndex( prices, quantities, base, data, method = "Laspeyres",
   na.rm = FALSE, weights = FALSE, EKS = FALSE )}

\arguments{
   \item{prices}{Vector that contains the names of the prices.}
   \item{quantities}{Vector that contains the names of the quantities that
      belong to the \code{prices}.}
   \item{base}{The base period(s) used to calculate the indices (see details).}
   \item{data}{Data frame that contains the prices and quantities.}
   \item{method}{Which quantity index: "Laspeyres", "Paasche" or "Fisher".}
   \item{na.rm}{a logical value passed to \code{mean()} when calculating the
      \code{base}.}
   \item{weights}{logical. Should an attribute 'weights' that contains the
      relative weights of each quantity be added?}
   \item{EKS}{logical. TODO}
}

\details{
   The argument \code{base} can be either\cr
   (a) a single number: the row number of the base prices and quantities,\cr
   (b) a vector indicating several observations: the means of these
      observations are used as base prices and quantities, or\cr
   (c) a logical vector with the same length as the number of rows of
      \code{data}: the means of the observations indicated as \code{TRUE}
      are used as base prices and quantities.

   If any values used for calculating the quantity index (e.g. current
   quantities, base quantities, current prices or base prices) are not
   available (NA), they are ignored (only) if they are multiplied by zero.
}

\value{
   a vector containing the quantity indices.
}

\seealso{\code{\link{priceIndex}}.}

\author{Arne Henningsen}

\examples{
   data( Missong03E7.7, package = "micEcon" )
   # Laspeyres Quantity Indices
   quantityIndex( c( "p.beef", "p.veal", "p.pork" ),
      c( "q.beef", "q.veal", "q.pork" ), 1, Missong03E7.7 )
   # Paasche Quantity Indices
   quantityIndex( c( "p.beef", "p.veal", "p.pork" ),
      c( "q.beef", "q.veal", "q.pork" ), 1, Missong03E7.7, "Paasche" )

   data( Bleymueller79E25.1, package = "micEcon" )
   # Laspeyres Quantity Indices
   quantityIndex( c( "p.A", "p.B", "p.C", "p.D" ),
      c( "q.A", "q.B", "q.C", "q.D" ), 1, Bleymueller79E25.1 )
   # Paasche Quantity Indices
   quantityIndex( c( "p.A", "p.B", "p.C", "p.D" ),
      c( "q.A", "q.B", "q.C", "q.D" ), 1, Bleymueller79E25.1, "Paasche" )
}

\keyword{models}
/branches/micEconIndexEKS/man/quantityIndex.Rd
scfmolina/micecon
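The Laspeyres, Paasche and Fisher formulas that `quantityIndex` implements can be sketched in a few lines of base R. This is a minimal illustration of the underlying arithmetic, not the package code; the price/quantity numbers are made up:

```r
# Laspeyres quantity index: current quantities valued at *base* prices,
# relative to base quantities at base prices.
laspeyres_q <- function(p0, q0, qt) sum(p0 * qt) / sum(p0 * q0)

# Paasche quantity index: the same ratio, but valued at *current* prices.
paasche_q <- function(pt, q0, qt) sum(pt * qt) / sum(pt * q0)

# Fisher quantity index: geometric mean of the two.
fisher_q <- function(p0, pt, q0, qt) {
  sqrt(laspeyres_q(p0, q0, qt) * paasche_q(pt, q0, qt))
}

p0 <- c(2, 3); q0 <- c(10, 20)  # base-period prices and quantities
pt <- c(4, 3); qt <- c(12, 18)  # current-period prices and quantities
laspeyres_q(p0, q0, qt)  # (2*12 + 3*18) / (2*10 + 3*20) = 78/80 = 0.975
paasche_q(pt, q0, qt)    # (4*12 + 3*18) / (4*10 + 3*20) = 102/100 = 1.02
```

With `base` given as several observations (forms (b) and (c) in the details above), `p0` and `q0` would be column means over the selected rows instead of a single row.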
library(tensorflow)
library(keras)
library(Rcpp)

code = "
int evaluate_win_cpp(IntegerVector board){
  // returns -1 while the game is open, 0 for a draw on a full board,
  // otherwise the number of the winning player
  int winner = 0;
  int tmp0 = 0;
  int tmp1 = 0;
  int tmp2 = 0;
  int zero = -1;
  for(int i = 0; i < 9; i++) {
    if(board(i) == 0) {
      winner = zero;
      break;
    }
  }
  // rows
  for(int i = 0; i < 3; i++) {
    tmp0 = i*3;
    tmp1 = i*3 + 1;
    tmp2 = i*3 + 2;
    if(board(tmp0) > 0.5) {
      if(board(tmp0) == board(tmp1)) {
        if(board(tmp0) == board(tmp2)) {
          return board(tmp0);
        }
      }
    }
  }
  // columns
  for(int i = 0; i < 3; i++) {
    tmp0 = i;
    tmp1 = i + 3;
    tmp2 = i + 6;
    if(board(tmp0) > 0.5) {
      if(board(tmp0) == board(tmp1)) {
        if(board(tmp0) == board(tmp2)) {
          return board(tmp0);
        }
      }
    }
  }
  // diagonals
  if(board(0) > 0.5) {
    if(board(0) == board(4)) {
      if(board(0) == board(8)) return board(0);
    }
  }
  if(board(2) > 0.5) {
    if(board(2) == board(4)) {
      if(board(2) == board(6)) return board(2);
    }
  }
  return winner;
}
"
evaluate_win2 = Rcpp::cppFunction(code)

make_move <- function(board, player, move){
  if(board[move] == 0){
    board[move] <- player
  }else{
    print("illegal move")
  }
  return(board)
}

# Get all possible moves
possible_moves <- function(board){
  moves <- which(board == 0)
  return(moves)
}

# Decide on a random next move
random_move <- function(board){
  if(length(possible_moves(board)) > 1){
    random_move <- sample(possible_moves(board), 1)
  }else{
    random_move <- possible_moves(board)
  }
  return(random_move)
}

keras = tf$keras

create_agent = function() {
  m = keras$Sequential(list(
    keras$layers$InputLayer(input_shape = c(10L)),
    keras$layers$Dropout(rate = 0.2),
    keras$layers$Dense(units = 35L, activation = keras$activations$relu),
    keras$layers$Dropout(rate = 0.2),
    keras$layers$Dense(units = 35L, activation = keras$activations$relu),
    keras$layers$Dropout(rate = 0.2),
    keras$layers$Dense(units = 35L, activation = keras$activations$relu),
    keras$layers$Dropout(rate = 0.2),
    keras$layers$Dense(units = 12L)
  ))
  return(m)
}

loss_func = function(XTm, YTm, m) {
  with(tf$GradientTape() %as% tape, {
    pred = m(XTm)
    loss1 = tf$reduce_mean(keras$losses$categorical_crossentropy(
      YTm[, 1:9, drop = FALSE], tf$math$softmax(pred[, 1:9], axis = 0L)))
    loss2 = tf$reduce_mean(keras$losses$categorical_crossentropy(
      YTm[, 10:12, drop = FALSE], tf$math$softmax(pred[, 10:12], axis = 0L)))
    loss = loss1 + loss2
  })
  grads = tape$gradient(loss, m$weights)
  return(grads)
}
loss_func_tf = tf_function(loss_func)

train_agent = function(m, memory, opt, epoch = 20L) {
  # columns 11:12 of the memory hold the chosen move and the outcome label
  YT = memory[, 11:12, drop = FALSE] + 0.00001 - 1
  XT = memory[, 1:10, drop = FALSE]
  for(i in 1:epoch) {
    ind = sample.int(nrow(XT), size = ceiling(0.1 * nrow(XT)))
    # YT has only two columns, so index them as 1 and 2
    YTm1 = tf$reshape(tf$squeeze(k_one_hot(YT[ind, 1, drop = FALSE] - 1, 9)), list(-1L, 9L))
    YTm2 = tf$reshape(tf$squeeze(k_one_hot(YT[ind, 2, drop = FALSE] - 1, 3)), list(-1L, 3L))
    YTm = tf$concat(list(YTm1, YTm2), axis = 1L)
    XTm = XT[ind, , drop = FALSE]
    # with(tf$GradientTape() %as% tape, {
    #   pred = m(XTm)
    #   loss = tf$reduce_mean(keras$losses$categorical_crossentropy(YTm, pred))
    # })
    # grads = tape$gradient(loss, m$weights)
    grads = loss_func_tf(XTm, YTm, m)
    opt$apply_gradients(purrr::transpose(list(grads, m$weights)))
  }
}

simulate_game <- function(board, player_w){
  boards_history <- data.frame(matrix(NA, ncol = 9))
  #boards_history <- data.frame(X1 = numeric(), X2 = numeric())
  winner = -1
  player = player_w
  while(winner < 0){
    next_move <- random_move(board)
    board <- make_move(board, player, next_move)
    if(player == 1){player = 2}else{player = 1}
    winner <- evaluate_win2(board)
    #print(board)
  }
  return(winner)
}

game <- function(ai_on = T, ai_mode = "aggressive"){
  # new agent
  memory = matrix(NA, 400, 9 + 1 + 1 + 1 + 1)
  counter = 1
  for(games in 1:500){
    board <- c(0,0,0, 0,0,0, 0,0,0)
    winner = -1
    agent = create_agent()
    opt = keras$optimizers$RMSprop(learning_rate = 0.01)

    ### training ###
    # drop = FALSE keeps sub_memory a matrix even when only one row is complete
    sub_memory = memory[complete.cases(memory), , drop = FALSE]
    if(nrow(sub_memory) > 0) {
      train_agent(agent, sub_memory, opt)
    }

    while(winner < 0) {
      k = counter
      player = 1
      memory[k, 1:9] = board
      pred = agent(cbind(matrix(board, 1), player))$numpy()
      next_move_p = tf$math$softmax(pred[1, 1:9, drop = FALSE])$numpy()
      next_win = tf$math$softmax(pred[1, 10:12, drop = FALSE])$numpy()
      while(TRUE) {
        next_move = sample(1:9, 1, prob = scales::rescale(next_move_p) + 0.2)
        if(next_move %in% possible_moves(board)) break()
      }
      memory[k, 10] = player
      memory[k, 11] = which.max(next_move_p) #next_move

      # board update: score every legal move by 100 random playouts
      pm = possible_moves(board)
      old = board
      results = sapply(1:100, function(j) {
        sapply(pm, function(i) {
          board_test <- make_move(old, player, i)
          eval_test = evaluate_win2(board_test)
          if(eval_test > -0.1){
            if(eval_test == player) return(player)
            else return(0)
          }
          result_test = simulate_game(board_test, player)
          return(result_test)
        })
      })
      best_possible = which.max(apply(t(results), 2, sum))
      board <- make_move(board, player, next_move)
      memory[k, 12] = best_possible
      winner = evaluate_win2(board)
      if(winner > -0.1) break()
      counter = counter + 1

      k = counter
      player = 2
      #memory[k, 1:9] = board
      #next_move_p = agent(cbind(matrix(board, 1), player))$numpy()
      while(TRUE) {
        next_move = random_move(board)
        # next_move = sample(1:9, 1, prob = scales::rescale(next_move_p) + 0.2)
        if(next_move %in% possible_moves(board)) break()
      }
      #memory[k, 10] = player
      #memory[k, 11] = which.max(next_move_p)
      # board update
      #pm = possible_moves(board)
      # results =
      #   sapply(1:100, function(j) {
      #     sapply(pm, function(i) {
      #       board_test <- make_move(board, player, i)
      #       eval_test = evaluate_win2(board_test)
      #       if(eval_test > -0.1){
      #         if(eval_test == player) return(TRUE)
      #         else return(FALSE)
      #       }
      #       result_test = simulate_game(board_test, player)
      #       return(result_test)
      #     })
      #   })
      #best_possible = which.max(apply(t(results), 2, sum))
      board <- make_move(board, player, next_move)
      #memory[k, 12] = best_possible
      #winner = evaluate_win2(board)
      #counter = counter + 1
      if(counter > 10) train_agent(agent, memory[complete.cases(memory), , drop = FALSE],
                                   opt, epoch = max(counter, 50))
    }
    cat("Winner: ", winner, " in game: ", games, " Score: ",
        mean(memory[complete.cases(memory), 9] == memory[complete.cases(memory), 10]), " \n")
    if(counter > 100) counter = 1
  }
}

# A second, simpler simulate_game() that reports whether the starting player won;
# this redefinition shadows the version above
simulate_game <- function(board, player_w){
  winner = -1
  player = player_w
  while(winner < 0){
    next_move <- random_move(board)
    board <- make_move(board, player, next_move)
    if(player == 1){player = 2}else{player = 1}
    winner <- evaluate_win2(board)
    #print(board)
  }
  return(winner == player_w)
}

library(Rcpp)

# stub for a future C++ port of simulate_game()
code = "
int simulate_game(NumericVector board, int player_w) {
  int winner = -1;
  return player_w;
}"
simulate_game_cpp = Rcpp::cppFunction(code)
/tic_tic_toc/tic_tac_toc_ai.R
TheoreticalEcology/MethodenSeminar
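The win-detection logic in the C++ snippet above can be cross-checked with a plain-R version. This is an illustrative sketch mirroring the same board encoding (0 = empty, 1/2 = players, board cells numbered row by row), not part of the original script:

```r
# Plain-R mirror of evaluate_win_cpp: -1 = game open, 0 = draw,
# otherwise the winning player's number.
evaluate_win_r <- function(board) {
  lines <- list(1:3, 4:6, 7:9,                   # rows
                c(1, 4, 7), c(2, 5, 8), c(3, 6, 9),  # columns
                c(1, 5, 9), c(3, 5, 7))          # diagonals
  for (l in lines) {
    v <- board[l]
    if (v[1] > 0 && v[1] == v[2] && v[1] == v[3]) return(v[1])  # a player won
  }
  if (any(board == 0)) return(-1)  # game still open
  0                                # full board, no winner: draw
}

evaluate_win_r(c(1,1,1, 0,0,0, 0,0,0))  # 1: player 1 wins the top row
evaluate_win_r(c(2,0,0, 0,2,0, 0,0,2))  # 2: player 2 wins the main diagonal
evaluate_win_r(rep(0, 9))               # -1: game still open
```

Note that on the anti-diagonal (cells 3, 5, 7) this version returns `board[3]`, the mark that actually occupies the line, which is the behaviour the C++ check needs as well.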
## app.R ##

# I like to include commented out installation of packages to assist in reproducibility
# install.packages("shinydashboard")
# install.packages("factoextra")
# install.packages("here")
# install.packages("R.utils")
# install.packages("tidyverse")
# install.packages("janitor")
# install.packages("DT")
# install.packages("colorblindr")

library(shinydashboard)
library(factoextra)
library(here)
library(R.utils)
library(tidyverse)
library(janitor)
library(DT)
library(colorblindr)

# Read in pokemon data ---------------------------------------------------
pokemon_data <- rio::import(here::here("pokemon.csv"), setclass = "tibble") %>%
  janitor::clean_names() %>%
  select(-percentage_male, -type2) %>%
  filter(type1 %in% c("ghost", "fairy", "dragon")) %>% # Let's limit this to a few pokemon
  mutate(type1 = droplevels(as.factor(type1))) %>% # Few poke have data for these
  drop_na()

pokemon_type <- pokemon_data$type1

pokemon_data <- pokemon_data %>%
  select(starts_with("against"), hp) %>%
  scale() %>%
  as.data.frame()

# Load output from k-means clustering -------------------------------------
get_filenames <- function(folder) {
  files <- list.files(here(glue::glue("{folder}/")))
  map_chr(files, ~word(.x, sep = "\\."))
}

load_clustdata <- function(file) {
  temp <- loadObject(here(glue::glue("kmeans/{file}.Rda")))
  assign(file, temp, envir = .GlobalEnv)
}

map(get_filenames("kmeans"), ~load_clustdata(.x))

# Custom functions --------------------------------------------------------

# function to create data for silhouette table
get_summary_data <- function(clust){
  k <- clust$nbclust                                    # number of clusters
  clust_num <- map_chr(seq(1:k), ~paste("Cluster", .x)) # cluster number
  wss <- round(clust$withinss, 2)                       # ss-within
  bss <- round(rep(clust$betweenss, k), 2)              # ss-between
  nobs <- clust$size                                    # n observations per cluster
  neg_sil <- rep(0, k) # number of observations with negative sil value (misclassified)
  neg_sil_clust <- clust$silinfo$widths[, c("cluster", "sil_width")] %>%
    filter(sil_width < 0) %>%
    group_by(cluster) %>%
    summarize(neg_sil = n())
  neg_sil[neg_sil_clust$cluster] <- neg_sil_clust$neg_sil
  data.frame(clust_num, nobs, wss, bss, neg_sil) # bind elements to data frame
}

# function to create sil table
make_summary_table <- function(clust) {
  table <- get_summary_data(clust) %>%
    datatable(rownames = FALSE,
              colnames = c("Cluster", "N", "Within SS", "Between SS", "Neg. Silhouette"),
              caption = htmltools::tags$caption(
                style = 'caption-side: bottom; text-align: left;',
                htmltools::em('N = number of observations per cluster; SS = sum of squares')))
  return(table)
}
# ASH: I struggle with tables--I'm totally going to adopt these functions for my own work!!
# ASH: Product produced by this code looks great
# ASH: Nice use of commentary

# Scatterplot function
scatplot <- function(data){
  pca_data <- prcomp(pokemon_data)
  plot_data <- data.frame(pokemon_type, data$cluster, pca_data$x[, 1:3]) %>%
    rename(cluster = 2) %>%
    gather(starts_with("PC"), value = value, key = principal_component)
  # Plot clusters by pokemon type
  facet_labels <- c(dragon = "Dragon", fairy = "Fairy", ghost = "Ghost")
  plot_data %>%
    ggplot(aes(x = principal_component, y = value, color = factor(cluster))) +
    geom_point(position = position_jitter(width = 0.5), alpha = 0.6, size = 5) +
    coord_flip() +
    scale_color_OkabeIto() +
    labs(x = "Principal Component \n", y = "") +
    facet_wrap(~pokemon_type, labeller = labeller(pokemon_type = facet_labels)) +
    # ASH: I was totally unaware of the labeller argument :o Good to know
    theme_minimal(base_size = 17) +
    theme(panel.grid.minor = element_blank())
}
# The scatterplot is definitely my fav--looks great

# Dashboard header --------------------------------------------------------
header <- dashboardHeader(title = "K-means Clustering of Pokemon Data",
                          titleWidth = 450)

# Dashboard sidebar -------------------------------------------------------
sidebar <- dashboardSidebar(
  sidebarMenu(
    menuItem("Intro to clustering", tabName = "intro", icon = icon("info-circle")),
    menuItem("Cluster Plot", tabName = "clustplot", icon = icon("cookie")),
    menuItem("Silhouette Plot", tabName = "silplot", icon = icon("chart-area")),
    menuItem("Scatter Plot", tabName = "scatplot", icon = icon("braille")),
    selectInput(inputId = "clusters",
                label = "Number of centroids:",
                c("2" = "clust2", # turn this into a function!
                  "3" = "clust3",
                  "4" = "clust4",
                  "5" = "clust5",
                  "6" = "clust6"),
                selected = "clust2")
  ))
# ASH: thus far everything you have done is superb!!! :)
# ASH: I really don't have much feedback for the code itself--it's very advanced and efficient
# ASH: I merely modified coding style where it was hard to read/the line was too long

# Dashboard body ----------------------------------------------------------
# ASH: I LOVE the fact that you are using Pokemon data, so... (see next line)
# ASH: somewhere here I would include a section describing your data. I want to know more!
# ASH: at a more basic level, I'd explain why you are using cluster analysis/define centroid.
# ASH: When I first read "Number of centroids", I was confused
body <- dashboardBody(
  tabItems(
    # Intro tab content
    tabItem(tabName = "intro",
            box("This dashboard is the final project for an R functional programming
                class. We use the Kaggle Pokemon dataset (available here [placeholder])
                to demonstrate how different visualizations of k-means clustering can
                help determine how well a clustering solution fits the data.",
                width = 12)),
    #ASH: is there a way to wrap the text? It's a bit hard to read as is!

    # clustplot tab content
    tabItem(tabName = "clustplot",
            fluidRow(
              box(plotOutput("clustplot"), width = 6),
              box(DTOutput("summarytable1"), width = 6)),
            fluidRow(
              box("This plot shows the cluster results on the first two principal
                  components of the data that were used to create them.",
                  width = 12))
    ),

    # silplot tab content
    tabItem(tabName = "silplot",
            fluidRow(
              box(plotOutput("silplot"), width = 6),
              box(DTOutput("summarytable2"), width = 6)),
            fluidRow(
              box("A silhouette plot shows cluster distance, a combination of
                  within-cluster compactness and between-cluster separation. A
                  silhouette coefficient closer to 1 means that the data are well
                  classified, whereas a coefficient near 0 means observations fall
                  between clusters. A negative silhouette coefficient means
                  observations are likely misclassified and that the data do not
                  group well with any of the identified clusters. The height of
                  each cluster in this plot represents the number of observations
                  per cluster. Generally, we want clusters to be of roughly the
                  same size, which we can assess by examining the silhouette plot.",
                  width = 12))
    ),

    # scatplot tab content
    tabItem(tabName = "scatplot",
            fluidRow(
              box(plotOutput("scatplot"), width = 12)),
            fluidRow(
              box(width = 12,
                  "K-means clustering is a form of unsupervised learning, meaning
                  that it is intended to find grouping structure in unlabeled data.
                  However, we know that the pokemon in this dataset are already
                  'grouped' by type of pokemon. So, we might want to ask how well
                  the clusters we have identified in the data set map onto this
                  pre-existing grouping variable, pokemon type. Here we show a
                  scatterplot that is faceted by 3 popular types of pokemon:
                  dragon, fairy, and ghost. Rather than showing the raw data from
                  the original variables that were fed into the k-means clustering
                  algorithm, we used principal components analysis (PCA) to reduce
                  the data for simplicity of visualization. Here we show the first
                  3 principal components. You will notice that with a 3-cluster
                  solution, the clusters perfectly map onto the 3 types of pokemon.
                  This is apparent from the fact that each pokemon type only
                  contains a single color. However, with alternative clustering
                  solutions, we see a mix of colors within each pokemon type,
                  meaning that cluster membership does not completely correspond to
                  pokemon type. In general, greater mixing of colors across pokemon
                  types corresponds to a weaker relationship between the identified
                  clusters in the data and pokemon type.")))
  )
)

# User interface ----------------------------------------------------------
ui <- dashboardPage(header, sidebar, body)

# Server ------------------------------------------------------------------
server <- function(input, output) {
  # Silhouette plot
  output$silplot <- renderPlot({
    data <- get(input$clusters)
    data <- data$silinfo$widths
    ggplot(data, aes(x = seq_along(cluster), y = sil_width,
                     fill = cluster, color = cluster)) +
      geom_col() +
      coord_flip() +
      geom_hline(yintercept = mean(data$sil_width, na.rm = TRUE),
                 linetype = 2, size = .7) +
      scale_fill_OkabeIto() +
      scale_color_OkabeIto() +
      theme_minimal() +
      labs(title = paste0("Average Silhouette Width = ",
                          round(mean(data$sil_width, na.rm = TRUE), 2)),
           x = NULL, y = "Silhouette width")
  })

  # Cluster plot
  output$clustplot <- renderPlot({
    data <- get(input$clusters)
    data$clust_plot
  })

  # Scatterplot
  output$scatplot <- renderPlot({
    data <- get(input$clusters)
    scatplot(data)
  })

  output$summarytable1 <- renderDT({
    data <- get(input$clusters)
    make_summary_table(data)
  })

  output$summarytable2 <- renderDT({
    data <- get(input$clusters)
    make_summary_table(data)
  })
}

shinyApp(ui, server)

# ASH: The app looks awesome!! I cannot wait to see the final product!
# ASH: In terms of the project requirements, you currently meet the requirement for using at least 2 variants of map
# ASH: I'm unsure if this is still a requirement,
# ......but I don't think you use a function outside the basic map family (walk_*, reduce, modify_*)?
# ASH: you still need to incorporate at least one instance of parallel iteration (e.g., map2_*, pmap_*)
# ASH: you also need at least one case of purrr::nest %>% mutate()
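The parallel-iteration idioms the review asks for (purrr's `map2_*` and `pmap_*`) look like this; a generic sketch with made-up inputs, not code from the app:

```r
library(purrr)

# map2_*: iterate over two vectors in parallel, one pair of elements at a time
sums <- map2_dbl(1:3, 4:6, ~ .x + .y)  # c(5, 7, 9)

# pmap_*: iterate over any number of parallel inputs, supplied as a named list
args <- list(a = 1:2, b = 3:4, c = 5:6)
totals <- pmap_dbl(args, function(a, b, c) a + b + c)  # c(9, 12)
```

In this app, `map2_*` could pair each clustering solution with its label, and `pmap_*` could walk over several plot parameters at once.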
/app.R
AshLynnMiller/fpr_final_project
## app.R ## # I like to include commented out installation of packages to assist in reproducability # install.packages("shinydashboard") # install.packages("factoextra") # install.packages("here") # install.packages("R.utils") # install.packages("tidyverse") # install.packages("janitor") # install.packages("DT") # install.packages("colorblindr") library(shinydashboard) library(factoextra) library(here) library(R.utils) library(tidyverse) library(janitor) library(DT) library(colorblindr) # Read in pokemon data --------------------------------------------------- pokemon_data <- rio::import(here::here("pokemon.csv"), setclass = "tibble") %>% janitor::clean_names() %>% select(-percentage_male, -type2) %>% filter(type1 %in% c("ghost", "fairy", "dragon")) %>% # Let's limit this to a few pokemon mutate(type1 = droplevels(as.factor(type1))) %>% # Few poke have data for these drop_na() pokemon_type <- pokemon_data$type1 pokemon_data <- pokemon_data %>% select(starts_with("against"), hp) %>% scale() %>% as.data.frame() # Load output from k-means clustering ------------------------------------- get_filenames <- function(folder) { files <- list.files(here(glue::glue("{folder}/"))) map_chr(files, ~word(.x, sep = "\\.")) } load_clustdata <- function(file) { temp <- loadObject(here(glue::glue("kmeans/{file}.Rda"))) assign(file, temp, envir = .GlobalEnv) } map(get_filenames("kmeans"), ~load_clustdata(.x)) # Custom functions -------------------------------------------------------- # function to create data for silhouette table get_summary_data <- function(clust){ k <- clust$nbclust # number of clusters clust_num <- map_chr(seq(1:k), ~paste("Cluster", .x)) # cluster number wss <- round(clust$withinss, 2) # ss-within bss <- round(rep(clust$betweenss, k), 2) # ss-between nobs <- clust$size # n observations per cluster neg_sil <- rep(0, k) # number of observations with negative sil value (misclassified) neg_sil_clust <- clust$silinfo$widths[, c("cluster","sil_width")] %>% 
filter(sil_width < 0) %>% group_by(cluster) %>% summarize(neg_sil = n()) neg_sil[neg_sil_clust$cluster] <- neg_sil_clust$neg_sil data.frame(clust_num, nobs, wss, bss, neg_sil) #bind elements to data frame } # function to create sil table make_summary_table <- function(clust) { table <- get_summary_data(clust) %>% datatable(rownames = FALSE, colnames = c("Cluster", "N", "Within SS", "Between SS", "Neg. Silhouette"), caption = htmltools::tags$caption( style = 'caption-side: bottom; text-align: left;', htmltools::em('N = number of observations per cluster; SS = sum of squares'))) return(table) } # ASH: I struggle with tables--I'm totally going to adopt these functions for my own work!! # ASH: Product produced by this code looks great # ASH: Nice use of commentary # Scatterplot function scatplot <- function(data){ pca_data <- prcomp(pokemon_data) plot_data <- data.frame(pokemon_type, data$cluster, pca_data$x[, 1:3]) %>% rename(cluster = 2) %>% gather(starts_with("PC"), value = value, key = principal_component) # Plot clusters by pokemon type facet_labels <- c(dragon = "Dragon", fairy = "Fairy", ghost = "Ghost") plot_data %>% ggplot(aes(x = principal_component, y = value, color = factor(cluster))) + geom_point(position = position_jitter(width = 0.5), alpha = 0.6, size = 5) + coord_flip() + scale_color_OkabeIto() + labs(x = "Principal Component \n", y = "") + facet_wrap(~pokemon_type, labeller = labeller(pokemon_type = facet_labels)) + # ASH: I was totally unaware of the labeller argument :o Good to know theme_minimal(base_size = 17) + theme(panel.grid.minor = element_blank()) } # The scatterplot is definitely my fav--looks great # Dashboard header -------------------------------------------------------- header <- dashboardHeader(title = "K-means Clustering of Pokemon Data", titleWidth = 450) # Dashboard sidebar ------------------------------------------------------- sidebar <- dashboardSidebar( sidebarMenu( menuItem("Intro to clustering", tabName = "intro", icon = 
icon("info-circle")), menuItem("Cluster Plot", tabName = "clustplot", icon = icon("cookie")), menuItem("Silhuoette Plot", tabName = "silplot", icon = icon("chart-area")), menuItem("Scatter Plot", tabName = "scatplot", icon = icon("braille")), selectInput(inputId = "clusters", label = "Number of centroids:", c("2" = "clust2", # turn this into a function! "3" = "clust3", "4" = "clust4", "5" = "clust5", "6" = "clust6"), selected = "clust2") )) # ASH: thus far everything you have done is superb!!! :) # ASH: I really don't have much feedback for the code itself--it's very advanced and efficient # ASH: I merely modifed coding style where it was hard to read/the line was too long # Dashboard body ---------------------------------------------------------- # ASH: I LOVE the fact that you are using Pokemon data, so... (see next line) # ASH: somewhere here I would include a section describing your data. I want to know more! # ASH: at a more basic level, I'd explain why you are using cluster analysis/define centriod. # ASH: When I first read "Number of centroids", I was confused body <- dashboardBody( tabItems( # Intro tab content tabItem(tabName = "intro", box("This dashboard is the final project for an R functional programming class. We use the Kaggle Pokemon dataset (available here [placeholder]) to demonstrate how different visualization of k-means clustering can provide help determine how well a clustering solution fits the data.", width = 12)), #ASH: is there a way to wrap the text? It's a bit hard to read as is! 
    # clustplot tab content
    tabItem(tabName = "clustplot",
            fluidRow(
              box(plotOutput("clustplot"), width = 6),
              box(DTOutput("summarytable1"), width = 6)),
            fluidRow(
              box("This plot shows the cluster results on the first two
                  principal components of the data that were used to create
                  them.", width = 12))
            ),
    # silplot tab content
    tabItem(tabName = "silplot",
            fluidRow(
              box(plotOutput("silplot"), width = 6),
              box(DTOutput("summarytable2"), width = 6)),
            fluidRow(
              box("A silhouette plot shows cluster distance, a combination of
                  within-cluster compactness and between-cluster separation.
                  A silhouette coefficient closer to 1 means that the data are
                  well classified, whereas a coefficient near 0 means
                  observations fall between clusters. A negative silhouette
                  coefficient means observations are likely misclassified and
                  that the data do not group well with any of the identified
                  clusters. The height of each cluster in this plot represents
                  the number of observations per cluster. Generally, we want
                  clusters to be of roughly the same size, which we can assess
                  by examining the silhouette plot.", width = 12))
            ),
    # scatplot tab content
    tabItem(tabName = "scatplot",
            fluidRow(
              box(plotOutput("scatplot"), width = 12)),
            fluidRow(
              box(width = 12,
                  "K-means clustering is a form of unsupervised learning,
                  meaning that it is intended to find grouping structure in
                  unlabeled data. However, we know that the pokemon in this
                  dataset are already 'grouped' by type of pokemon. So, we
                  might want to ask how well the clusters we have identified
                  in the dataset map onto this pre-existing grouping variable,
                  pokemon type. Here we show a scatterplot that is faceted by
                  3 popular types of pokemon: dragon, fairy, and ghost. Rather
                  than showing the raw data from the original variables that
                  were fed into the k-means clustering algorithm, we used
                  principal components analysis (PCA) to reduce the data for
                  simplicity of visualization. Here we show the first 3
                  principal components.
                  You will notice that with a 3-cluster solution, the clusters
                  perfectly map onto the 3 types of pokemon. This is apparent
                  from the fact that each pokemon type only contains a single
                  color. However, with alternative clustering solutions, we
                  see a mix of colors within each pokemon type, meaning that
                  cluster membership does not completely correspond to pokemon
                  type. In general, greater mixing of colors across pokemon
                  types corresponds to a weaker relationship between the
                  identified clusters in the data and pokemon type.")))
  )
)

# User interface ----------------------------------------------------------

ui <- dashboardPage(header, sidebar, body)

# Server ------------------------------------------------------------------

server <- function(input, output) {

  # Silhouette plot
  output$silplot <- renderPlot({
    data <- get(input$clusters)
    data <- data$silinfo$widths
    ggplot(data, aes(x = seq_along(cluster), y = sil_width,
                     fill = cluster, color = cluster)) +
      geom_col() +
      coord_flip() +
      geom_hline(yintercept = mean(data$sil_width, na.rm = TRUE),
                 linetype = 2, size = .7) +
      scale_fill_OkabeIto() +
      scale_color_OkabeIto() +
      theme_minimal() +
      labs(title = paste0("Average Silhouette Width = ",
                          round(mean(data$sil_width, na.rm = TRUE), 2)),
           x = NULL, y = "Silhouette width")
  })

  # Cluster plot
  output$clustplot <- renderPlot({
    data <- get(input$clusters)
    data$clust_plot
  })

  # Scatterplot
  output$scatplot <- renderPlot({
    data <- get(input$clusters)
    scatplot(data)
  })

  output$summarytable1 <- renderDT({
    data <- get(input$clusters)
    make_summary_table(data)
  })

  output$summarytable2 <- renderDT({
    data <- get(input$clusters)
    make_summary_table(data)
  })
}

shinyApp(ui, server)

# ASH: The app looks awesome!! I cannot wait to see the final product!
# ASH: In terms of the project requirements, you currently meet the requirement for using at least 2 variants of map
# ASH: I'm unsure if this is still a requirement,
# ......but I don't think you use a function outside the basic map family (walk_*, reduce, modify_*)?
# ASH: you still need to incorporate at least one instance of parallel iteration (e.g., map2_*, pmap_*)
# ASH: you also need at least one case of purrr::nest %>% mutate()
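The ASH notes above ask for parallel iteration (`map2_*`/`pmap_*`) and a `nest() %>% mutate()` pattern. A minimal sketch of how both could be worked into this app — hypothetical, not part of the original code; it assumes `pokemon_data` (scaled data frame) and `pokemon_type` (factor) as created above:

```r
# Hedged sketch, not part of app.R: one way to meet the reviewer's requests.
library(dplyr)
library(tidyr)
library(purrr)

# Parallel iteration with map2_chr(): pair each fitted solution with its k
ks     <- 2:6
fits   <- map(ks, ~kmeans(pokemon_data, centers = .x))
labels <- map2_chr(fits, ks, ~paste0("clust", .y, " (k = ", .y, ")"))

# nest() %>% mutate(): fit one k-means solution per pokemon type
by_type <- data.frame(type = pokemon_type, pokemon_data) %>%
  group_by(type) %>%
  nest() %>%
  mutate(fit = map(data, ~kmeans(.x, centers = 2)))
```

This uses the older `group_by() %>% nest()` idiom to match the tidyverse vintage already used in the app (which still calls `gather()`).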
## Put comments here that give an overall description of what your
## functions do

## makeCacheMatrix creates a special "matrix" object that can cache
## its inverse
makeCacheMatrix <- function(x = matrix()) {
  inv <- NULL
  set <- function(y) {
    x <<- y
    inv <<- NULL
  }
  get <- function() x
  setInverse <- function(inverse) inv <<- inverse
  getInverse <- function() inv
  list(set = set, get = get,
       setInverse = setInverse, getInverse = getInverse)
}

## cacheSolve computes the inverse of the special "matrix" returned by
## makeCacheMatrix, retrieving the inverse from the cache if it has
## already been calculated
cacheSolve <- function(x, ...) {
  inv <- x$getInverse()
  if (!is.null(inv)) {
    message("getting cached data")
    return(inv)
  }
  mat <- x$get()
  inv <- solve(mat, ...)
  x$setInverse(inv)
  inv
}
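A short usage sketch (not part of the assignment file) showing the caching behaviour: the first `cacheSolve()` call computes the inverse, the second returns the cached copy.

```r
# Hypothetical usage of makeCacheMatrix()/cacheSolve() above:
cm <- makeCacheMatrix(matrix(c(2, 0, 0, 2), nrow = 2))
first  <- cacheSolve(cm)  # computes the inverse with solve() and caches it
second <- cacheSolve(cm)  # emits the cache message and returns the cache
identical(first, second)  # the two results are the same object
```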
/cachematrix.R
no_license
Chahana12/ProgrammingAssignment2
R
false
false
1,217
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fit.3g.R
\name{fit.3g}
\alias{fit.3g}
\title{fit.3g}
\usage{
fit.3g(Z, pars = c(0.8, 0.1, 2, 2, 3, 0.5), weights = rep(1, dim(Z)[1]),
  C = 1, fit_null = FALSE, maxit = 10000, tol = 1e-04, sgm = 0.8,
  one_way = FALSE, syscov = 0, accel = TRUE, verbose = TRUE, file = NULL,
  n_save = 20, incl_z = TRUE, em = TRUE, control = list(factr = 10))
}
\arguments{
\item{Z}{an n x 2 matrix; \eqn{Z[i,1]}, \eqn{Z[i,2]} are the \eqn{Z_d} and
\eqn{Z_a} scores respectively for the \eqn{i}th SNP}

\item{pars}{vector containing initial values of \code{pi0}, \code{pi1},
\code{tau}, \code{sigma1}, \code{sigma2}, \code{rho}.}

\item{weights}{SNP weights to adjust for LD; output from LDAK procedure}

\item{C}{a term \eqn{C log(}\code{pi0}*\code{pi1}*\code{pi2}\eqn{)} is
added to the likelihood so the model is specified.}

\item{fit_null}{set to TRUE to fit null model with forced \code{rho}=0,
\code{tau}=0}

\item{maxit}{maximum number of iterations before algorithm halts}

\item{tol}{how small a change in pseudo-likelihood halts the algorithm}

\item{sgm}{force \code{sigma1} \eqn{\ge sgm}, \code{sigma2} \eqn{\ge sgm},
\code{tau} \eqn{\ge sgm}. True marginal variances should never be less
than 1, but some variation should be allowed.}

\item{one_way}{if TRUE, fits a single Gaussian for category 3, rather than
the symmetric model. Requires signed Z scores.}

\item{syscov}{if subgroup proportions in the case group do not match those
in the population, \eqn{Z_d} and \eqn{Z_a} scores must be transformed.
This leads to a systematic correlation (see function \code{\link{syscor}}).
This parameter forces adjustment of the fitted model to allow for this
correlation.}

\item{accel}{attempts to accelerate the fitting process by taking larger
steps.}

\item{verbose}{prints current parameters with frequency defined by
\code{n_save}}

\item{incl_z}{set to TRUE to include input arguments \code{Z} and
\code{weights} in output. If FALSE these are set to null.}

\item{em}{set to TRUE to use E-M algorithm, FALSE to use R's
\code{\link{optim}} function.}

\item{control}{parameters passed to R's \code{\link{optim}} function; only
used if \code{em}=FALSE}

\item{file}{save history to a file with frequency defined by
\code{n_save}}

\item{n_save}{save or print current \code{pars} every \code{n_save}
iterations}
}
\value{
a list of seven objects (class \code{\link{3Gfit}}): \code{pars} is the
vector of fitted parameters, \code{history} is a matrix of fitted
parameters and pseudo-likelihood at each stage in the E-M algorithm,
\code{logl} is the joint pseudo-likelihood of \eqn{Z_a} and \eqn{Z_d},
\code{logl_a} is the pseudo-likelihood of \eqn{Z_a} alone (used for
adjusting PLR), \code{z_ad} is an n x 2 matrix of \eqn{Z_d} and \eqn{Z_a}
scores, \code{weights} is the vector of weights used to generate the
model, and \code{hypothesis} is 0 or 1 depending on the value of
\code{fit_null}.
}
\description{
Fit a specific Gaussian mixture distribution.
}
\details{
The mixture distribution simultaneously models two sets of GWAS summary
statistics arising from a control group and two case groups comprising
subgroups of a disease case group of interest. The values
\eqn{Z_{a}}{Z_a} correspond to Z-scores arising from comparing the control
group with the combined case group, and the values \eqn{Z_{d}}{Z_d} from
comparing one case subgroup with the other, independent of controls.
We expect that SNPs can be classified into three categories, corresponding
to the three two-dimensional Gaussians in the joint distribution of
\eqn{Z_{a}}{Z_a} and \eqn{Z_{d}}{Z_d}. These three categories are: SNPs
not associated with the phenotype and not differentiating subtypes; SNPs
associated with the phenotype but not differentiating subtypes; and SNPs
differentiating subtypes. Each of these three categories gives rise to a
mixture Gaussian with a different shape.

We are interested in whether the data support evidence that SNPs in the
third category additionally differentiate cases and controls.

Formally, we assume:

\deqn{
pdf(Z_{a},Z_{d}) = \pi_{0} G_{0} + \pi_{1} G_{1} + (1-\pi_{0}-\pi_{1}) G_{2}
}{
pdf(Z_a,Z_d) = pi0 G0 + pi1 G1 + (1-pi0 - pi1) G2
}

where \eqn{G_{0},G_{1}}{G0, G1} are bivariate Gaussians with mean
\eqn{(0,0)} and covariance matrices \eqn{(1,0;0,1)},
\eqn{(}\code{sigma1}^2\eqn{,0;0,1)} respectively, and \eqn{G_{2}}{G2} is
an equally-weighted mixture of two Gaussians with mean \eqn{(0,0)} and
covariance matrices
\eqn{(}\code{sigma2}^2,\code{rho};\code{rho},\code{tau}^2 \eqn{)} and
\eqn{(}\code{sigma2}^2,-\code{rho};-\code{rho},\code{tau}^2 \eqn{)}.

The model is thus characterised by the vector
\code{pars}=(\code{pi0},\code{pi1},\code{tau},\code{sigma1},\code{sigma2},\code{rho}).

Under the null hypothesis that SNPs which differentiate subtypes are not
in general associated with the phenotype, we have \code{sigma2}=1,
\code{rho}=0.

This function finds the maximum pseudo-likelihood estimators for the
parameters of these three Gaussians, and the mixing parameters
representing the proportion of SNPs in each category.
}
\examples{
nn=100000
Z=abs(rbind(rmnorm(0.8*nn,varcov=diag(2)),
            rmnorm(0.15*nn,varcov=rbind(c(1,0),c(0,2^2))),
            rmnorm(0.05*nn,varcov=rbind(c(3^2,2),c(2,4^2)))));
weights=runif(nn)

yy=fit.3g(Z,pars=c(0.7,0.2,2.5,1.5,3,1),weights=weights,incl_z=TRUE)
yy$pars
plot(yy,rlim=2)
}
\author{
Chris Wallace and James Liley
}
/man/fit.3g.Rd
no_license
jamesliley/subtest
R
false
true
5,507
rd
source("/home/mr984/diversity_metrics/scripts/checkplot_initials.R") source("/home/mr984/diversity_metrics/scripts/checkplot_inf.R") reps<-50 outerreps<-1000 size<-rev(round(10^seq(2, 5, 0.25)))[ 4 ] nc<-12 plan(strategy=multisession, workers=nc) map(rev(1:outerreps), function(x){ start<-Sys.time() out<-checkplot_inf(flatten(flatten(SADs_list))[[11]], l=1, inds=size, reps=reps) write.csv(out, paste("/scratch/mr984/SAD11","l",1,"inds", size, "outernew", x, ".csv", sep="_"), row.names=F) rm(out) print(Sys.time()-start) })
/scripts/checkplots_for_parallel_amarel/asy_284.R
no_license
dushoff/diversity_metrics
R
false
false
535
r
source("/home/mr984/diversity_metrics/scripts/checkplot_initials.R") source("/home/mr984/diversity_metrics/scripts/checkplot_inf.R") reps<-50 outerreps<-1000 size<-rev(round(10^seq(2, 5, 0.25)))[ 4 ] nc<-12 plan(strategy=multisession, workers=nc) map(rev(1:outerreps), function(x){ start<-Sys.time() out<-checkplot_inf(flatten(flatten(SADs_list))[[11]], l=1, inds=size, reps=reps) write.csv(out, paste("/scratch/mr984/SAD11","l",1,"inds", size, "outernew", x, ".csv", sep="_"), row.names=F) rm(out) print(Sys.time()-start) })
\name{date_trans}
\alias{date_trans}
\title{Transformation for dates (class Date).}
\description{
Transformation for dates (class Date).
}
\examples{
years <- seq(as.Date("1910/1/1"), as.Date("1999/1/1"), "years")
t <- date_trans()
t$trans(years)
t$inv(t$trans(years))
t$format(t$breaks(range(years)))
}
/man/date_trans.Rd
no_license
baptiste/scales
R
false
false
307
rd
\name{loglik2}
\alias{loglik2}
\title{Computes the survival log-likelihood}
\description{
Computes the survival log-likelihood
}
\usage{
loglik2(theta, interimData)
}
\arguments{
\item{theta}{the three-element vector of \eqn{(\alpha, \beta, \gamma)}}
\item{interimData}{The interim data}
}
\details{
Computes the survival log-likelihood
}
\value{
the survival log-likelihood
}
\references{
Lai, Tze Leung, Lavori, Philip W., and Shih, Mei-Chiung. Sequential Design
of Phase II-III Cancer Trials. Statistics in Medicine, Volume 31, issue
18, p. 1944-1960, 2012.
}
\author{
Mei-Chiung Shih, Balasubramanian Narasimhan, Pei He
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{design}
/man/loglik2.Rd
no_license
cran/sp23design
R
false
false
767
rd
library(MuViCP)

### Name: get.NN
### Title: Function to find the nearest neighbours
### Aliases: get.NN

### ** Examples

require(MASS)
mu <- c(3,4)
Sigma <- rbind(c(1,0.2),c(0.2,1))
Y <- mvrnorm(20, mu = mu, Sigma = Sigma)
test <- 1:4
train <- 5:20
nn1a <- get.NN(Y, k = 3, test = 1:4, train = 5:20,
               dist.type = 'euclidean', nn.type = 'which')
nn1b <- get.NN(Y, k = 3, test = 1:4, train = 5:20,
               dist.type = 'euclidean', nn.type = 'dist')
nn1c <- get.NN(Y, k = 3, test = 1:4, train = 5:20,
               dist.type = 'euclidean', nn.type = 'max')
nn2 <- get.NN(Y, p = 0.3, test = 1:4, train = 5:20,
              dist.type = 'euclidean', nn.type = 'which')
/data/genthat_extracted_code/MuViCP/examples/get.NN.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
633
r
#' @title Unweighted Helmert Contrast Matrices
#'
#' @description
#' Returns a matrix of Helmert contrasts, scaled so that the resulting contrast
#' estimates (in an ANOVA or regression model) correspond to the difference
#' between the levels (categories) being compared. The contrasts may be computed
#' either based on a numerical number of levels or a vector of data.
#'
#' @details
#' Helmert contrasts compare the second level with the first, the third with the
#' average of the first two, and so on. As with other contrasts, they are
#' orthogonal to each other and to the intercept.
#'
#' When the levels differ in frequency, \emph{unweighted} coding is appropriate
#' if the differences in frequency in the sample are merely \emph{incidental}
#' (e.g., both conditions were intended to be presented equally frequently, but
#' by chance there are more observations from one condition than another). (If
#' the differences in frequency instead represent genuine differences in the
#' population, weighted coding may be more appropriate.)
#'
#' If all of the factor levels are equally common, there is no difference
#' between unweighted and weighted coding.
#' @param x a factor variable (or variable that can be coerced to a factor) for
#' which contrasts should be calculated.
#' @param reference.levels vector specifying, in order, the category treated as
#' the reference level (i.e., assigned the next negative value) in each
#' successive contrast.
#' @param n a vector of levels for a factor, or the number of levels, which can
#' be provided instead of \code{x}.
#' @return A matrix with \code{n} rows and \code{k} columns, where \code{n} is
#' the number of levels and \code{k=n-1}.
#' @references
#' Cohen, J., Cohen, P., West, S.G., & Aiken, L.S. (2002). Categorical or
#' nominal independent variables. In \emph{Applied multiple regression/
#' correlation analysis for the behavioral sciences} (3rd ed., pp. 302-353).
#' Mahwah, NJ: Lawrence Erlbaum Associates.
#' @seealso \code{\link{contr.helmert.weighted}} for \emph{weighted} Helmert
#' contrasts, and \code{\link{contrasts}} and \code{\link{contr.helmert}}.
#' @examples
#' contr.helmert.unweighted(n=4)
#'
#' contr.helmert.unweighted(n=c('Active','Passive'), reference.levels=1)
#'
#' cuedata <- as.factor(c('ValidCue', 'ValidCue',
#'   'InvalidCue', 'NoCue', 'InvalidCue', 'NoCue'))
#' contr.helmert.unweighted(x=cuedata)
#' contr.helmert.unweighted(x=cuedata,
#'   reference.levels=c('ValidCue','InvalidCue'))
#' @export
#' @importFrom stats contr.helmert
contr.helmert.unweighted <- function(x,
                                     reference.levels=levels[-length(levels)],
                                     n=NULL) {
  # Check to make sure that both a factor and a number of levels have not
  # BOTH been specified:
  if (!missing(x) & !missing(n)) {
    stop('Provide vector of data OR number of levels, not both')
  }
  # Work with a vector of data:
  if (missing(n)) {
    # Coerce to a factor if needed:
    if (is.factor(x) == FALSE) {
      warning(paste0('Coerced to a factor from ', class(x)))
      x <- as.factor(x)
    }
    # Get levels:
    n <- levels(x)
  }
  # Get levels:
  if (is.numeric(n)) {
    levels <- 1:n
  } else {
    levels <- levels(factor(n))
  }
  # Get number of levels:
  numlevels <- length(levels)
  # Get number of contrasts:
  k <- numlevels - 1
  # Check to make sure the right number of reference levels were specified
  if (length(reference.levels) != k) {
    stop(paste('Wrong number of reference levels!', k,
               'contrast(s) needed, but', length(reference.levels),
               'reference level(s) specified', sep = ' '))
  }
  # Make sure none of the reference levels is out of bounds
  if (is.numeric(reference.levels)) {
    if (any(reference.levels < 0) | any(reference.levels > numlevels) |
        any((reference.levels - round(reference.levels)) > .0001)) {
      stop(paste0('Level indices must be integers 1 <= x <= ', numlevels))
    }
  }
  # Convert the reference levels from characters to numbers, if needed:
  if (is.character(reference.levels)) {
    newreflevels <- sapply(reference.levels, function(y) which(levels==y)[1])
    if (any(is.na(newreflevels))) {
      # User named a level that's not actually a level of the factor
      badlevels <- newreflevels[is.na(newreflevels)]
      stop(paste0('Factor does not have a level named "',
                  names(badlevels[1]), '"'))
    }
    reference.levels <- newreflevels
  }
  # Get default helmert contrasts:
  orig.matrix <- contr.helmert(numlevels)
  # Reorder the matrix according to the desired contrasts:
  # Step 1 - get the correct ordering of contrasts
  contrast.order <- c(reference.levels,
                      which(c(1:numlevels) %in% reference.levels == FALSE)[1])
  # Step 2 - name the contrast matrix accordingly
  rownames(orig.matrix) <- levels[contrast.order]
  # Step 3 - reorder matrix to match the original ordering of factor levels
  sorted.matrix <- orig.matrix[levels,,drop=FALSE]
  # Rescale and return:
  apply(sorted.matrix, 2, function(x) x/length(x[x != 0]))
}
/R/contr.helmert.unweighted.R
no_license
anc211/psycholing
R
false
false
4,968
r
#' @title Unweighted Helmert Contrast Matrices #' #' @description #' Returns a matrix of Helmert contrasts, scaled so that the resulting contrast #' estimates (in an ANOVA or regression model) correspond to the difference #' between the levels (categories) being compared. The contrasts may be computed #' either based on a numerical number of levels or a vector of data. #' #' @details #' Helmert contrasts compare the second level with the first, the third with the #' average of the first two, and so on. As with other contrasts, they are #' orthogonal to each other and to the intercept. #' #' When the levels differ in frequency, \emph{unweighted} coding is appropriate #' if the differences in frequency in the sample are merely \emph{incidental} #' (e.g., both conditions were intended to be presented equally frequently, but #' by chance there are more observations from one condition than another). (If #' the differences in frequency instead represent genuine differences in the #' population, weighted coding may be more appropriate.) #' #' If all of the factor levels are equally common, there is no difference #' between unweighted and weighted coding. #' @param x a factor variable (or variable that can be coerced to a factor) for #' which contrasts should be calculated. #' @param reference.levels vector specifying, in order, the category treated as #' the reference level (i.e., assigned the next negative value) in each #' successive contrast. #' @param n a vector of levels for a factor, or the number of levels, which can #' be provided instead of \code{x}. #' @return A matrix with \code{n} rows and \code{k} columns, where \code{n} is #' the number of levels and \code{k=n-1}. #' @references #' Cohen, J., Cohen, P., West, S.G., & Aiken, L.S. (2002). Categorical or #' nominal independent variables. In \emph{Applied multiple regression/ #' correlation analysis for the behavioral sciences} (3rd ed., pp. 302-353). #' Mahwah, NJ: Lawrence Erlbaum Associates. 
#' @seealso \code{\link{contr.helmert.weighted}} for \emph{weighted} Helmert
#'   contrasts, and \code{\link{contrasts}} and \code{\link{contr.helmert}}.
#' @examples
#' contr.helmert.unweighted(n=4)
#'
#' contr.helmert.unweighted(n=c('Active','Passive'), reference.levels=1)
#'
#' cuedata <- as.factor(c('ValidCue', 'ValidCue',
#'                        'InvalidCue', 'NoCue', 'InvalidCue', 'NoCue'))
#' contr.helmert.unweighted(x=cuedata)
#' contr.helmert.unweighted(x=cuedata,
#'                          reference.levels=c('ValidCue','InvalidCue'))
#' @export
#' @importFrom stats contr.helmert
contr.helmert.unweighted <- function(x,
                                     reference.levels = levels[-length(levels)],
                                     n = NULL) {

  # Check to make sure that a factor and a number of levels have not
  # BOTH been specified:
  if (!missing(x) & !missing(n)) {
    stop('Provide vector of data OR number of levels, not both')
  }

  # Work with a vector of data:
  if (missing(n)) {
    # Coerce to a factor if needed:
    if (is.factor(x) == FALSE) {
      warning(paste0('Coerced to a factor from ', class(x)))
      x <- as.factor(x)
    }
    # Get levels:
    n <- levels(x)
  }

  # Get levels:
  if (is.numeric(n)) {
    levels <- 1:n
  } else {
    levels <- levels(factor(n))
  }

  # Get number of levels:
  numlevels <- length(levels)

  # Get number of contrasts:
  k <- numlevels - 1

  # Check to make sure the right number of reference levels were specified:
  if (length(reference.levels) != k) {
    stop(paste('Wrong number of reference levels!', k,
               'contrast(s) needed, but', length(reference.levels),
               'reference level(s) specified', sep = ' '))
  }

  # Make sure none of the reference levels is out of bounds:
  if (is.numeric(reference.levels)) {
    if (any(reference.levels < 0) |
        any(reference.levels > numlevels) |
        any((reference.levels - round(reference.levels)) > .0001)) {
      stop(paste0('Level indices must be integers 1 <= x <= ', numlevels))
    }
  }

  # Convert the reference levels from characters to numbers, if needed:
  if (is.character(reference.levels)) {
    newreflevels <- sapply(reference.levels, function(y) which(levels == y)[1])
    if (any(is.na(newreflevels))) {
      # User named a level that's not actually a level of the factor:
      badlevels <- newreflevels[is.na(newreflevels)]
      stop(paste0('Factor does not have a level named "',
                  names(badlevels[1]), '"'))
    }
    reference.levels <- newreflevels
  }

  # Get default Helmert contrasts:
  orig.matrix <- contr.helmert(numlevels)

  # Reorder the matrix according to the desired contrasts:
  # Step 1 - get the correct ordering of contrasts
  contrast.order <- c(reference.levels,
                      which(c(1:numlevels) %in% reference.levels == FALSE)[1])
  # Step 2 - name the contrast matrix accordingly
  rownames(orig.matrix) <- levels[contrast.order]
  # Step 3 - reorder matrix to match the original ordering of factor levels
  sorted.matrix <- orig.matrix[levels, , drop = FALSE]

  # Rescale and return:
  apply(sorted.matrix, 2, function(x) x / length(x[x != 0]))
}
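To make the final rescaling step concrete, here is a minimal standalone sketch (using only `stats::contr.helmert`, not the function above) of how dividing each column by its count of nonzero entries turns default Helmert codes into unweighted comparisons:

```r
# Default Helmert contrasts for 3 levels:
m <- stats::contr.helmert(3)
# m is:
#   [,1] [,2]
# 1   -1   -1
# 2    1   -1
# 3    0    2

# Rescale each column by its number of nonzero entries, as in the
# final apply() step of contr.helmert.unweighted():
scaled <- apply(m, 2, function(col) col / sum(col != 0))
# Column 1 compares level 2 vs. level 1: c(-0.5, 0.5, 0)
# Column 2 compares level 3 vs. the mean of levels 1 and 2: c(-1/3, -1/3, 2/3)
```

After rescaling, each contrast estimates a difference of (means of) cell means directly, which is what makes the coefficients interpretable on the response scale.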
# Chapter 9, 9.1.2
# Biological Computing Boot Camp
# RStudio Version 1.1.383, Ubuntu 16.04 LTS 64-bit
# Author: Petra Guy, 26th October 2017

rm(list = ls())

Mydf = read.csv("../Data/EcolArchives-E089-51-D1.csv")

library(lattice)

# Plot graphs and output to pdfs
pdf("../Results/Prey_Lattice.pdf", 11.7, 8.3)  # Open blank pdf page
densityplot(~log(Prey.mass) | Type.of.feeding.interaction, data = Mydf)
dev.off()

pdf("../Results/Predator_Lattice.pdf", 11.7, 8.3)  # Open blank pdf page
densityplot(~log(Predator.mass) | Type.of.feeding.interaction, data = Mydf)
dev.off()

pdf("../Results/SizeRatio_Lattice.pdf", 11.7, 8.3)  # Open blank pdf page
densityplot(~log(Prey.mass/Predator.mass) | Type.of.feeding.interaction, data = Mydf)
dev.off()

# Are all masses in the same units?
masses_not_g = subset(Mydf, Prey.mass.unit != "g")
length(unique(masses_not_g$Type.of.feeding.interaction))
# = 3, so three feeding groups contain masses in mg, which makes the plots above wrong.

# If prey mass unit is mg, multiply prey mass by 1e-3 to convert to g
l = length(Mydf$Prey.mass)
for (i in 1:l) {
  if (Mydf$Prey.mass.unit[i] == "mg") {
    Mydf$Prey.mass[i] = Mydf$Prey.mass[i] * 1e-3
  }
}

# tapply to get means and medians across feeding interactions
# NB: log() is the natural log
groups = as.factor(Mydf$Type.of.feeding.interaction)

Prey.mass.means = tapply(log(Mydf$Prey.mass), groups, mean)
Predator.mass.means = tapply(log(Mydf$Predator.mass), groups, mean)
PredPrey.mass.means = tapply(log(Mydf$Predator.mass/Mydf$Prey.mass), groups, mean)

Prey.mass.median = tapply(log(Mydf$Prey.mass), groups, median)
Predator.mass.median = tapply(log(Mydf$Predator.mass), groups, median)
PredPrey.mass.median = tapply(log(Mydf$Predator.mass/Mydf$Prey.mass), groups, median)

stats = rbind(Prey.mass.means, Predator.mass.means, PredPrey.mass.means,
              Prey.mass.median, Predator.mass.median, PredPrey.mass.median)

# write.csv(stats, "../Results/PP_Results.csv")

# Another output with headers, adapted from StackOverflow:
write.table_with_header <- function(x, file, header, ...) {
  cat(header, '\n', file = file)
  write.table(x, file, append = TRUE, ...)
}
# Note that append is ignored in a write.csv call, so write.table is called directly
header = "All values are natural log"
write.table_with_header(stats, "../Results/PP_Results.csv", header, sep = ',')
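The element-wise unit-conversion loop above can also be written vectorized, which is both faster and more idiomatic in R. A minimal sketch on a toy data frame standing in for `Mydf` (toy values, not the real dataset):

```r
toy <- data.frame(Prey.mass = c(500, 2, 0.8),
                  Prey.mass.unit = c("mg", "g", "g"),
                  stringsAsFactors = FALSE)

# Convert mg rows to g in one vectorized step instead of a for-loop:
toy$Prey.mass <- ifelse(toy$Prey.mass.unit == "mg",
                        toy$Prey.mass * 1e-3,
                        toy$Prey.mass)
# Record that the converted rows are now in grams:
toy$Prey.mass.unit[toy$Prey.mass.unit == "mg"] <- "g"
```

Updating the unit column alongside the mass avoids silently double-converting if the script is re-run on the same data frame.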
# Source: PetraGuy/CMEECourseWork, Week3/Code/PP_Lattice.R (no license specified)
library(RMySQL) con <- dbConnect(RMySQL::MySQL(), host = "localhost", user = "root", password = "root", dbname="prod_football") rs <- dbSendQuery(con, "select id, Date, a.GameID, a.play_id, Drive, cast(qtr as int) as qtr, TimeSecs, posteam, DefensiveTeam, if(down = 0,null,down) as down, case when play_type in ('End of Quarter', 'End of Half', 'End of Game') then 0 when play_type in ('Extra Point') then 3 when play_type in ('Kickoff') then 0 else ydstogo end as ydstogo, case when yrdline100 = 0 then null else yrdline100 end as yrdline100, play_type, net_yards, if(ydstogo = yrdline100,1,0) as GoalToGo, FirstDown, sp, if(Touchdown = 1,1,0) as Touchdown, if(ExPointResult = '',null,ExPointResult) as ExPointResult, if(TwoPointConv = '', null,TwoPointConv) as TwoPointConv, Fumble, RecFumbTeam, PosTeamScore, DefTeamScore, PosTeamScore-DefTeamScore as ScoreDiff, abs(PosTeamScore-DefTeamScore) as AbsScoreDiff, If(Safety = 1, 1,0) as Safety, FieldGoalResult, case when trim(ReturnResult) like '%touchdown%' then 'Touchdown' else null end as ReturnResult, if(Result = 'Interception',1,0) as InterceptionThrown, HomeTeam, AwayTeam from prod_football.lv_pbp a left outer join (select a.GameID, play_id, a.half, case when a.half = 1 then (rn/cnt*1440)+1440 when a.half = 2 then (rn/cnt*1440) end as TimeSecs from ( select GameID, play_id, case when qtr in (1,2) then 1 when qtr in (3,4) then 2 else 3 end as half , rank() over (partition by GameID, half order by play_id desc) as rn from lv_pbp where (down in (1,2,3,4) or play_type in ('Kickoff')) and play_type <> 'No Play' group by 1,2,3) a inner join (select GameID, case when qtr in (1,2) then 1 when qtr in (3,4) then 2 else 3 end as half, count(distinct play_id) as cnt from lv_pbp where (down in (1,2,3,4) or play_type in ('Kickoff')) and play_type <> 'No Play' group by 1,2) b on a.GameID = b.GameID and a.half = b.half) b on a.GameID = b.GameID and a.play_id = b.play_id where a.play_type <> 'No Play'") data1 <- dbFetch(rs, n = -1) lv_pbp 
<- data1

lv_pbp$log_ydstogo <- log(lv_pbp$ydstogo)
lv_pbp$down <- as.factor(lv_pbp$down)

# Last-observation-carried-forward fill for a vector with NAs:
repeat_last = function(x, forward = TRUE, maxgap = Inf, na.rm = FALSE) {
  if (!forward) x = rev(x)           # reverse x twice if carrying backward
  ind = which(!is.na(x))             # get positions of nonmissing values
  if (is.na(x[1]) && !na.rm)         # if it begins with NA
    ind = c(1, ind)                  # add first pos
  rep_times = diff(                  # diffing the indices + length yields how often
    c(ind, length(x) + 1))           # they need to be repeated
  if (maxgap < Inf) {
    exceed = rep_times - 1 > maxgap  # exceeding maxgap
    if (any(exceed)) {               # any exceed?
      ind = sort(c(ind[exceed] + 1, ind))       # add NA in gaps
      rep_times = diff(c(ind, length(x) + 1))   # diff again
    }
  }
  x = rep(x[ind], times = rep_times) # repeat the values at these indices
  if (!forward) x = rev(x)           # second reversion
  x
}

lv_pbp$TimeSecs <- repeat_last(lv_pbp$TimeSecs)
lv_pbp$TimeSecs_Remaining <- ifelse(lv_pbp$qtr %in% c(1,2),
                                    lv_pbp$TimeSecs - 1440, lv_pbp$TimeSecs)
lv_pbp$yrdline100 <- repeat_last(lv_pbp$yrdline100)

library(tidyverse)
library(nflWAR)
library(sqldf)
options(sqldf.driver = "SQLite")

pbp_data <- lv_pbp
nrow(pbp_data)

find_game_next_score_half <- function(pbp_dataset) {
  # Which rows are the scoring plays:
  score_plays <- which(pbp_dataset$sp == 1 & pbp_dataset$play_type != "No Play")
  # Define a helper function that takes in the current play index,
  # a vector of the scoring play indices, play-by-play data,
  # and returns the score type and drive number for the next score:
  find_next_score <- function(play_i, score_plays_i, pbp_df) {
    # Find the next score index for the current play
    # based on being the first next score index:
    next_score_i <- score_plays_i[which(score_plays_i >= play_i)[1]]
    # If next_score_i is NA (no more scores after current play)
    # or if the next score is in another half,
    # then return No_Score and the current drive number
    if (is.na(next_score_i) |
        (pbp_df$qtr[play_i] %in% c(1, 2) & pbp_df$qtr[next_score_i] %in% c(3, 4, 5)) |
        (pbp_df$qtr[play_i]
%in% c(3, 4) & pbp_df$qtr[next_score_i] == 5)) { score_type <- "No_Score" # Make it the current play index score_drive <- pbp_df$Drive[play_i] # Else return the observed next score type and drive number: } else { # Store the score_drive number score_drive <- pbp_df$Drive[next_score_i] # Then check the play types to decide what to return # based on several types of cases for the next score: # 1: Return TD if (identical(pbp_df$ReturnResult[next_score_i], "Touchdown")) { # For return touchdowns the current posteam would not have # possession at the time of return, so it's flipped: if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Opp_Touchdown" } else { score_type <- "Touchdown" } } else if (identical(pbp_df$FieldGoalResult[next_score_i], "Good")) { # 2: Field Goal # Current posteam made FG if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Field_Goal" # Opponent made FG } else { score_type <- "Opp_Field_Goal" } # 3: Touchdown (returns already counted for) } else if (pbp_df$Touchdown[next_score_i] == 1) { # Current posteam TD if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Touchdown" # Opponent TD } else { score_type <- "Opp_Touchdown" } # 4: Safety (similar to returns) } else if (pbp_df$Safety[next_score_i] == 1) { if (identical(pbp_df$posteam[play_i],pbp_df$posteam[next_score_i])) { score_type <- "Opp_Safety" } else { score_type <- "Safety" } # 5: Extra Points } else if (identical(pbp_df$ExPointResult[next_score_i], "Made")) { # Current posteam Extra Point if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Extra_Point" # Opponent Extra Point } else { score_type <- "Opp_Extra_Point" } # 6: Two Point Conversions } else if (identical(pbp_df$TwoPointConv[next_score_i], "Success")) { # Current posteam Two Point Conversion if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Two_Point_Conversion" # 
Opponent Two Point Conversion } else { score_type <- "Opp_Two_Point_Conversion" } # 7: Defensive Two Point (like returns) } else if (identical(pbp_df$DefTwoPoint[next_score_i], "Success")) { if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Opp_Defensive_Two_Point" } else { score_type <- "Defensive_Two_Point" } # 8: Errors of some sort so return NA (but shouldn't take place) } else { score_type <- NA } } return(data.frame(Next_Score_Half = score_type, Drive_Score_Half = score_drive)) } # Using lapply and then bind_rows is much faster than # using map_dfr() here: lapply(c(1:nrow(pbp_dataset)), find_next_score, score_plays_i = score_plays, pbp_df = pbp_dataset) %>% bind_rows() %>% return } pbp_next_score_half <- map_dfr(unique(pbp_data$GameID), function(x) { pbp_data %>% filter(GameID == x) %>% find_game_next_score_half() }) # Join to the pbp_data: pbp_data_next_score <- bind_cols(pbp_data, pbp_next_score_half) # Create the EP model dataset that only includes plays with basic seven # types of next scoring events along with the following play types: # Field Goal, No Play, Pass, Punt, Run, Sack, Spike pbp_ep_model_data <- pbp_data_next_score %>% filter(Next_Score_Half %in% c("Opp_Field_Goal", "Opp_Safety", "Opp_Touchdown", "Field_Goal", "No_Score", "Safety", "Touchdown") & play_type %in% c("Field Goal", "No Play", "Pass", "Punt", "Run", "Sack", "Spike") & is.na(TwoPointConv) & is.na(ExPointResult) & !is.na(down) & !is.na(TimeSecs)) nrow(pbp_ep_model_data) # 304805 pbp_ep_model_data <- sqldf("select * from pbp_ep_model_data where TwoPointConv is null") # Now adjust and create the model variables: pbp_ep_model_data <- pbp_ep_model_data %>% # Reference level should be No_Score: mutate(Next_Score_Half = fct_relevel(factor(Next_Score_Half), "No_Score"), # Create a variable that is time remaining until end of half: # (only working with up to 2016 data so can ignore 2017 time change) TimeSecs_Remaining = as.numeric(ifelse(qtr %in% c(1,2), 
TimeSecs - 1440, TimeSecs)), # log transform of yards to go and indicator for two minute warning: log_ydstogo = log(ydstogo), Under_TwoMinute_Warning = ifelse(TimeSecs_Remaining < 120, 1, 0), # Changing down into a factor variable: down = factor(down), # Calculate the drive difference between the next score drive and the # current play drive: Drive_Score_Dist = Drive_Score_Half - Drive, # Create a weight column based on difference in drives between play and next score: Drive_Score_Dist_W = (max(Drive_Score_Dist) - Drive_Score_Dist) / (max(Drive_Score_Dist) - min(Drive_Score_Dist)), # Create a weight column based on score differential: ScoreDiff_W = (max(abs(ScoreDiff)) - abs(ScoreDiff)) / (max(abs(ScoreDiff)) - min(abs(ScoreDiff))), # Add these weights together and scale again: Total_W = Drive_Score_Dist_W + ScoreDiff_W, Total_W_Scaled = (Total_W - min(Total_W)) / (max(Total_W) - min(Total_W))) # Save dataset in data folder as pbp_ep_model_data.csv # (NOTE: this dataset is not pushed due to its size exceeding # the github limit but will be referenced in other files) # write_csv(pbp_ep_model_data, "data/pbp_ep_model_data.csv") # Fit the expected points model: # install.packages("nnet") ep_model <- nnet::multinom(Next_Score_Half ~ TimeSecs_Remaining + yrdline100 + down + log_ydstogo + GoalToGo + log_ydstogo*down + yrdline100*down + GoalToGo*log_ydstogo, data = pbp_ep_model_data, weights = Total_W_Scaled, maxit = 300, na.action = na.pass) fg_model_data <- pbp_data_next_score %>% filter(play_type %in% c("Field Goal","Extra Point") & (!is.na(ExPointResult) | !is.na(FieldGoalResult))) nrow(fg_model_data) # 16906 # Save dataset in data folder as fg_model_data.csv # (NOTE: this dataset is not pushed due to its size exceeding # the github limit but will be referenced in other files) # write_csv(fg_model_data, "data/fg_model_data.csv") # Fit the field goal model: # install.packages("mgcv") fg_model <- mgcv::bam(sp ~ s(yrdline100), data = fg_model_data, family = "binomial") 
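The `Drive_Score_Dist_W` and `ScoreDiff_W` columns above are min-max rescalings that give full weight to plays closest (in drives, or in score differential) to the next score and zero weight to the farthest. A minimal standalone sketch of that rescaling, on toy values rather than the real data:

```r
# Min-max "closeness" weight, as used for Drive_Score_Dist_W and ScoreDiff_W:
# 1 when x is at the minimum distance, 0 at the maximum.
closeness_weight <- function(x) (max(x) - x) / (max(x) - min(x))

drive_dist <- c(0, 1, 4, 9)   # toy: drives between play and the next score
w <- closeness_weight(drive_dist)
# w[1] == 1 (score on the same drive), w[4] == 0 (farthest from the score)
```

The script then sums the two weights and min-max rescales the total again (`Total_W_Scaled`), so the multinomial fit is dominated by plays whose next-score label is most informative.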
base_ep_preds <- as.data.frame(predict(ep_model, newdata = pbp_ep_model_data,
                                       type = "probs"))
colnames(base_ep_preds) <- c("No_Score", "Field_Goal", "Opp_Field_Goal",
                             "Opp_Safety", "Opp_Touchdown", "Safety", "Touchdown")
base_ep_preds <- dplyr::rename(base_ep_preds,
                               Field_Goal_Prob = Field_Goal,
                               Touchdown_Prob = Touchdown,
                               Opp_Field_Goal_Prob = Opp_Field_Goal,
                               Opp_Touchdown_Prob = Opp_Touchdown,
                               Safety_Prob = Safety,
                               Opp_Safety_Prob = Opp_Safety,
                               No_Score_Prob = No_Score)
pbp_ep_model_data <- cbind(pbp_ep_model_data, base_ep_preds)

lv_pbp2 <- sqldf("select a.id, a.Date, a.GameID, a.play_id, a.Drive, a.qtr,
                  a.TimeSecs, a.TimeSecs_Remaining, a.posteam, a.DefensiveTeam,
                  a.down, a.ydstogo, a.yrdline100, a.play_type, a.net_yards,
                  a.GoalToGo, a.FirstDown, a.sp, a.Touchdown, a.ExPointResult,
                  a.TwoPointConv, a.Fumble, a.RecFumbTeam, a.PosTeamScore,
                  a.DefTeamScore, a.ScoreDiff, a.AbsScoreDiff, a.Safety,
                  a.FieldGoalResult, a.ReturnResult, a.InterceptionThrown,
                  a.HomeTeam, a.AwayTeam,
                  Field_Goal_Prob, Touchdown_Prob, Opp_Field_Goal_Prob,
                  Opp_Touchdown_Prob, Safety_Prob, Opp_Safety_Prob, No_Score_Prob
                  from lv_pbp a
                  left outer join pbp_ep_model_data b on a.id = b.id")

# Calculate the EP for receiving a touchback (from the point of view of the
# receiving team) and update the columns for Kickoff plays:
kickoff_data <- lv_pbp2
# Place the receiving team at its own 40-yard line (yrdline100 = 60) after a
# touchback:
kickoff_data$yrdline100 <- 60
# Not GoalToGo:
kickoff_data$GoalToGo <- rep(0, nrow(lv_pbp2))
# Now first down:
kickoff_data$down <- rep("1", nrow(lv_pbp2))
# 10 ydstogo:
kickoff_data$ydstogo <- rep(10, nrow(lv_pbp2))
# Create log_ydstogo:
kickoff_data <- dplyr::mutate(kickoff_data, log_ydstogo = log(ydstogo))
# Get the new predicted probabilities:
if (nrow(kickoff_data) > 1) {
  kickoff_preds <- as.data.frame(predict(ep_model, newdata = kickoff_data,
                                         type = "probs"))
} else {
  kickoff_preds <- as.data.frame(matrix(predict(ep_model, newdata = kickoff_data, type =
"probs"), ncol = 7)) } colnames(kickoff_preds) <- c("No_Score","Opp_Field_Goal","Opp_Safety","Opp_Touchdown", "Field_Goal","Safety","Touchdown") # Find the kickoffs: kickoff_i <- which(lv_pbp2$play_type == "Kickoff") # Now update the probabilities: lv_pbp2[kickoff_i, "Field_Goal_Prob"] <- kickoff_preds[kickoff_i, "Field_Goal"] lv_pbp2[kickoff_i, "Touchdown_Prob"] <- kickoff_preds[kickoff_i, "Touchdown"] lv_pbp2[kickoff_i, "Opp_Field_Goal_Prob"] <- kickoff_preds[kickoff_i, "Opp_Field_Goal"] lv_pbp2[kickoff_i, "Opp_Touchdown_Prob"] <- kickoff_preds[kickoff_i, "Opp_Touchdown"] lv_pbp2[kickoff_i, "Safety_Prob"] <- kickoff_preds[kickoff_i, "Safety"] lv_pbp2[kickoff_i, "Opp_Safety_Prob"] <- kickoff_preds[kickoff_i, "Opp_Safety"] lv_pbp2[kickoff_i, "No_Score_Prob"] <- kickoff_preds[kickoff_i, "No_Score"] # ---------------------------------------------------------------------------------- # Insert probabilities of 0 for everything but No_Score for QB Kneels: # Find the QB Kneels: qb_kneels_i <- which(lv_pbp2$play_type == "QB Kneel") # Now update the probabilities: lv_pbp2[qb_kneels_i, "Field_Goal_Prob"] <- 0 lv_pbp2[qb_kneels_i, "Touchdown_Prob"] <- 0 lv_pbp2[qb_kneels_i, "Opp_Field_Goal_Prob"] <- 0 lv_pbp2[qb_kneels_i, "Opp_Touchdown_Prob"] <- 0 lv_pbp2[qb_kneels_i, "Safety_Prob"] <- 0 lv_pbp2[qb_kneels_i, "Opp_Safety_Prob"] <- 0 lv_pbp2[qb_kneels_i, "No_Score_Prob"] <- 1 # ---------------------------------------------------------------------------------- # Create two new columns, ExPoint_Prob and TwoPoint_Prob, for the PAT events: lv_pbp2$ExPoint_Prob <- 0 lv_pbp2$TwoPoint_Prob <- 0 # Find the indices for these types of plays: extrapoint_i <- which(!is.na(lv_pbp2$ExPointResult)) twopoint_i <- which(!is.na(lv_pbp2$TwoPointConv)) expt_df <- sqldf("select avg(case when ExPointResult = 'Made' then 1 else 0 end) as avg_expt from lv_pbp2 where ExPointResult is not null") twopt_df <- sqldf("select avg(case when TwoPointConv = 'Success' then 1 else 0 end) as avg_twopt from 
lv_pbp2 where TwoPointConv is not null") # Assign the make_fg_probs of the extra-point PATs: lv_pbp2$ExPoint_Prob[extrapoint_i] <- expt_df$avg_expt # Assign the TwoPoint_Prob with the historical success rate: lv_pbp2$TwoPoint_Prob[twopoint_i] <- twopt_df$avg_twopt # ---------------------------------------------------------------------------------- # Insert NAs for all other types of plays: missing_i <- which(lv_pbp2$play_type %in% c("End of Quarter", "End of Game", "End of Half")) # Now update the probabilities for missing and PATs: lv_pbp2$Field_Goal_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$Touchdown_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$Opp_Field_Goal_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$Opp_Touchdown_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$Safety_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$Opp_Safety_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$No_Score_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2 <- dplyr::mutate(lv_pbp2, ExpPts = (0*No_Score_Prob) + (-3 * Opp_Field_Goal_Prob) + (-2 * Opp_Safety_Prob) + (-7 * Opp_Touchdown_Prob) + (3 * Field_Goal_Prob) + (2 * Safety_Prob) + (7 * Touchdown_Prob) + (1 * ExPoint_Prob) + (2 * TwoPoint_Prob)) pbp_data_ep <- lv_pbp2 pbp_data_epa <- dplyr::group_by(pbp_data_ep,GameID) pbp_data_epa <- dplyr::mutate(pbp_data_epa, # Offense touchdown (including kickoff returns): EPA_off_td = 7 - ExpPts, # Offense fieldgoal: EPA_off_fg = 3 - ExpPts, # Offense extra-point conversion: EPA_off_ep = 1 - ExpPts, # Offense two-point conversion: EPA_off_tp = 2 - ExpPts, # Missing PAT: EPA_PAT_fail = 0 - ExpPts, # Opponent Safety: EPA_safety = -2 - ExpPts, # End of half/game or timeout or QB Kneel: EPA_endtime = 0, # Defense scoring touchdown (including punt returns): EPA_change_score = -7 - ExpPts, # Change of possession without defense scoring # and no timeout, two minute warning, or quarter end follows: EPA_change_no_score = 
-dplyr::lead(ExpPts) - ExpPts, # Change of possession without defense scoring # but either Timeout, Two Minute Warning, # Quarter End is the following row: EPA_change_no_score_nxt = -dplyr::lead(ExpPts,2) - ExpPts, # Team keeps possession but either Timeout, Two Minute Warning, # Quarter End is the following row: EPA_base_nxt = dplyr::lead(ExpPts,2) - ExpPts, # Team keeps possession (most general case): EPA_base = dplyr::lead(ExpPts) - ExpPts, # Now the same for PTDA: # Team keeps possession (most general case): PTDA_base = dplyr::lead(Touchdown_Prob) - Touchdown_Prob, # Team keeps possession but either Timeout, Two Minute Warning, # Quarter End is the following row: PTDA_base_nxt = dplyr::lead(Touchdown_Prob,2) - Touchdown_Prob, # Change of possession without defense scoring # and no timeout, two minute warning, or quarter end follows: PTDA_change_no_score = dplyr::lead(Opp_Touchdown_Prob) - Touchdown_Prob, # Change of possession without defense scoring # but either Timeout, Two Minute Warning, # Quarter End is the following row: PTDA_change_no_score_nxt = dplyr::lead(Opp_Touchdown_Prob,2) - Touchdown_Prob, # Change of possession with defense scoring touchdown: PTDA_change_score = 0 - Touchdown_Prob, # Offense touchdown: PTDA_off_td = 1 - Touchdown_Prob, # Offense fieldgoal: PTDA_off_fg = 0 - Touchdown_Prob, # Offense extra-point conversion: PTDA_off_ep = 0, # Offense two-point conversion: PTDA_off_tp = 0, # Offense PAT fail: PTDA_PAT_fail = 0, # Opponent Safety: PTDA_safety = 0 - Touchdown_Prob, # End of half/game or timeout or QB Kneel: PTDA_endtime = 0) # Define the scoring plays first: pbp_data_epa$EPA_off_td_ind <- with(pbp_data_epa, ifelse(sp == 1 & Touchdown == 1 & ((play_type %in% c("Pass","Run") & (InterceptionThrown != 1 & Fumble != 1)) | (play_type == "Kickoff" & ReturnResult == "Touchdown")), 1,0)) pbp_data_epa$EPA_off_fg_ind <- with(pbp_data_epa, ifelse(play_type %in% c("Field Goal","Run") & FieldGoalResult == "Good", 1, 0)) 
pbp_data_epa$EPA_off_ep_ind <- with(pbp_data_epa, ifelse(ExPointResult == "Made" & play_type != "No Play", 1, 0)) pbp_data_epa$EPA_off_tp_ind <- with(pbp_data_epa, ifelse(TwoPointConv == "Success" & play_type != "No Play", 1, 0)) pbp_data_epa$EPA_PAT_fail_ind <- with(pbp_data_epa, ifelse(play_type != "No Play" & (ExPointResult %in% c("Missed", "Aborted", "Blocked") | TwoPointConv == "Failure"), 1, 0)) pbp_data_epa$EPA_safety_ind <- with(pbp_data_epa, ifelse(play_type != "No Play" & Safety == 1, 1, 0)) pbp_data_epa$EPA_endtime_ind <- with(pbp_data_epa, ifelse(play_type %in% c("Half End","Quarter End", "End of Game","Timeout","QB Kneel") | (GameID == dplyr::lead(GameID) & sp != 1 & Touchdown != 1 & is.na(FieldGoalResult) & is.na(ExPointResult) & is.na(TwoPointConv) & Safety != 1 & ((dplyr::lead(play_type) %in% c("Half End","End of Game")) | (qtr == 2 & dplyr::lead(qtr)==3) | (qtr == 4 & dplyr::lead(qtr)==5))),1,0)) pbp_data_epa$EPA_change_score_ind <- with(pbp_data_epa, ifelse(play_type != "No Play" & sp == 1 & Touchdown == 1 & (InterceptionThrown == 1 | (Fumble == 1 & RecFumbTeam != posteam) | (play_type == "Punt" & ReturnResult == "Touchdown")), 1, 0)) pbp_data_epa$EPA_change_no_score_nxt_ind <- with(pbp_data_epa, ifelse(GameID == dplyr::lead(GameID) & GameID == dplyr::lead(GameID,2) & sp != 1 & dplyr::lead(play_type) %in% c("Quarter End", "Two Minute Warning", "Timeout") & (Drive != dplyr::lead(Drive,2)) & (posteam != dplyr::lead(posteam,2)), 1, 0)) pbp_data_epa$EPA_base_nxt_ind <- with(pbp_data_epa, ifelse(GameID == dplyr::lead(GameID) & GameID == dplyr::lead(GameID,2) & sp != 1 & dplyr::lead(play_type) %in% c("Quarter End", "Two Minute Warning", "Timeout") & (Drive == dplyr::lead(Drive,2)), 1, 0)) pbp_data_epa$EPA_change_no_score_ind <- with(pbp_data_epa, ifelse(GameID == dplyr::lead(GameID) & Drive != dplyr::lead(Drive) & posteam != dplyr::lead(posteam) & dplyr::lead(play_type) %in% c("Pass","Run","Punt","Sack", "Field Goal","No Play", "QB Kneel","Spike"), 1, 
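The `ExpPts` column constructed earlier is simply the expected value of the next-score distribution: each scoring outcome's point value weighted by its predicted probability. A minimal sketch with made-up probabilities (the real ones come from the multinomial model):

```r
# Point values for each next-score outcome, as used in the ExpPts mutate:
score_values <- c(No_Score = 0, Opp_Field_Goal = -3, Opp_Safety = -2,
                  Opp_Touchdown = -7, Field_Goal = 3, Safety = 2,
                  Touchdown = 7)
# Made-up predicted probabilities for one play (must sum to 1):
probs <- c(No_Score = 0.20, Opp_Field_Goal = 0.10, Opp_Safety = 0.01,
           Opp_Touchdown = 0.19, Field_Goal = 0.25, Safety = 0.01,
           Touchdown = 0.24)
# Expected points is the probability-weighted sum:
exp_pts <- sum(score_values * probs)
```

EPA for a play is then just the change in this quantity from one play to the next, with the sign flipped on changes of possession.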
0)) # Replace the missings with 0 due to how ifelse treats missings pbp_data_epa$EPA_PAT_fail_ind[is.na(pbp_data_epa$EPA_PAT_fail_ind)] <- 0 pbp_data_epa$EPA_base_nxt_ind[is.na(pbp_data_epa$EPA_base_nxt_ind)] <- 0 pbp_data_epa$EPA_change_no_score_nxt_ind[is.na(pbp_data_epa$EPA_change_no_score_nxt_ind)] <- 0 pbp_data_epa$EPA_change_no_score_ind[is.na(pbp_data_epa$EPA_change_no_score_ind)] <- 0 pbp_data_epa$EPA_change_score_ind[is.na(pbp_data_epa$EPA_change_score_ind)] <- 0 pbp_data_epa$EPA_off_td_ind[is.na(pbp_data_epa$EPA_off_td_ind)] <- 0 pbp_data_epa$EPA_off_fg_ind[is.na(pbp_data_epa$EPA_off_fg_ind)] <- 0 pbp_data_epa$EPA_off_ep_ind[is.na(pbp_data_epa$EPA_off_ep_ind)] <- 0 pbp_data_epa$EPA_off_tp_ind[is.na(pbp_data_epa$EPA_off_tp_ind)] <- 0 pbp_data_epa$EPA_safety_ind[is.na(pbp_data_epa$EPA_safety_ind)] <- 0 pbp_data_epa$EPA_endtime_ind[is.na(pbp_data_epa$EPA_endtime_ind)] <- 0 # Assign EPA using these indicator columns: pbp_data_epa$EPA <- with(pbp_data_epa, ifelse(EPA_off_td_ind == 1, EPA_off_td, ifelse(EPA_off_fg_ind == 1, EPA_off_fg, ifelse(EPA_off_ep_ind == 1, EPA_off_ep, ifelse(EPA_off_tp_ind == 1, EPA_off_tp, ifelse(EPA_PAT_fail_ind == 1, EPA_PAT_fail, ifelse(EPA_safety_ind == 1, EPA_safety, ifelse(EPA_endtime_ind == 1, EPA_endtime, ifelse(EPA_change_score_ind == 1, EPA_change_score, ifelse(EPA_change_no_score_nxt_ind == 1, EPA_change_no_score_nxt, ifelse(EPA_base_nxt_ind == 1, EPA_base_nxt, ifelse(EPA_change_no_score_ind == 1, EPA_change_no_score, EPA_base)))))))))))) # Assign PTDA using these indicator columns: pbp_data_epa$PTDA <- with(pbp_data_epa, ifelse(EPA_off_td_ind == 1, PTDA_off_td, ifelse(EPA_off_fg_ind == 1, PTDA_off_fg, ifelse(EPA_off_ep_ind == 1, PTDA_off_ep, ifelse(EPA_off_tp_ind == 1, PTDA_off_tp, ifelse(EPA_PAT_fail_ind == 1, PTDA_PAT_fail, ifelse(EPA_safety_ind == 1, PTDA_safety, ifelse(EPA_endtime_ind == 1, PTDA_endtime, ifelse(EPA_change_score_ind == 1, PTDA_change_score, ifelse(EPA_change_no_score_nxt_ind == 1, 
PTDA_change_no_score_nxt, ifelse(EPA_base_nxt_ind == 1, PTDA_base_nxt, ifelse(EPA_change_no_score_ind == 1, PTDA_change_no_score, PTDA_base)))))))))))) # Now drop the unnecessary columns pbp_data_epa_final <- dplyr::select(pbp_data_epa, -c(EPA_base,EPA_base_nxt, EPA_change_no_score,EPA_change_no_score_nxt, EPA_change_score,EPA_off_td,EPA_off_fg,EPA_off_ep, EPA_off_tp,EPA_safety,EPA_base_nxt_ind, EPA_change_no_score_ind,EPA_change_no_score_nxt_ind, EPA_change_score_ind,EPA_off_td_ind,EPA_off_fg_ind,EPA_off_ep_ind, EPA_off_tp_ind,EPA_safety_ind, EPA_endtime_ind, EPA_endtime, PTDA_base,PTDA_base_nxt,PTDA_PAT_fail, PTDA_change_no_score,PTDA_change_no_score_nxt, PTDA_change_score,PTDA_off_td,PTDA_off_fg,PTDA_off_ep, PTDA_off_tp,PTDA_safety,PTDA_endtime, EPA_PAT_fail, EPA_PAT_fail_ind)) # Ungroup the dataset pbp_data_epa_final <- dplyr::ungroup(pbp_data_epa_final) #take out games with win margin > 49 points pbp_data <- sqldf("select * from pbp_data_epa_final where gameID not in ('2016082601', '2016090200', '2017092200', '2017102000', '2017092300', '2016093001', '2016102900', '2017082500', '2017090100', '2017082400')"); pbp_data <- pbp_data %>% # Create an indicator for which half it is: mutate(Half_Ind = ifelse(qtr %in% c(1, 2), "Half1", ifelse(qtr %in% c(3, 4), "Half2", "Overtime")), down = factor(down)) win_data <- sqldf("select a.GameID, case when PosTeamScore > DefTeamScore then posteam else DefensiveTeam end as Winner from pbp_data a inner join (select GameID, max(play_id) as max_play_id from pbp_data group by 1) b on a.GameID = b.GameID and a.play_id = b.max_play_id group by 1,2") View(win_data) pbp_wp_model_data <- pbp_data %>% mutate(GameID = as.character(GameID)) %>% left_join(win_data, by = "GameID") %>% # Create an indicator column if the posteam wins the game: mutate(Win_Indicator = ifelse(posteam == Winner, 1, 0), # Calculate the Expected Score Differential by taking the sum # fo the Expected Points for a play and the score differential: ExpScoreDiff = 
library(RMySQL) con <- dbConnect(RMySQL::MySQL(), host = "localhost", user = "root", password = "root", dbname="prod_football") rs <- dbSendQuery(con, "select id, Date, a.GameID, a.play_id, Drive, cast(qtr as int) as qtr, TimeSecs, posteam, DefensiveTeam, if(down = 0,null,down) as down, case when play_type in ('End of Quarter', 'End of Half', 'End of Game') then 0 when play_type in ('Extra Point') then 3 when play_type in ('Kickoff') then 0 else ydstogo end as ydstogo, case when yrdline100 = 0 then null else yrdline100 end as yrdline100, play_type, net_yards, if(ydstogo = yrdline100,1,0) as GoalToGo, FirstDown, sp, if(Touchdown = 1,1,0) as Touchdown, if(ExPointResult = '',null,ExPointResult) as ExPointResult, if(TwoPointConv = '', null,TwoPointConv) as TwoPointConv, Fumble, RecFumbTeam, PosTeamScore, DefTeamScore, PosTeamScore-DefTeamScore as ScoreDiff, abs(PosTeamScore-DefTeamScore) as AbsScoreDiff, If(Safety = 1, 1,0) as Safety, FieldGoalResult, case when trim(ReturnResult) like '%touchdown%' then 'Touchdown' else null end as ReturnResult, if(Result = 'Interception',1,0) as InterceptionThrown, HomeTeam, AwayTeam from prod_football.lv_pbp a left outer join (select a.GameID, play_id, a.half, case when a.half = 1 then (rn/cnt*1440)+1440 when a.half = 2 then (rn/cnt*1440) end as TimeSecs from ( select GameID, play_id, case when qtr in (1,2) then 1 when qtr in (3,4) then 2 else 3 end as half , rank() over (partition by GameID, half order by play_id desc) as rn from lv_pbp where (down in (1,2,3,4) or play_type in ('Kickoff')) and play_type <> 'No Play' group by 1,2,3) a inner join (select GameID, case when qtr in (1,2) then 1 when qtr in (3,4) then 2 else 3 end as half, count(distinct play_id) as cnt from lv_pbp where (down in (1,2,3,4) or play_type in ('Kickoff')) and play_type <> 'No Play' group by 1,2) b on a.GameID = b.GameID and a.half = b.half) b on a.GameID = b.GameID and a.play_id = b.play_id where a.play_type <> 'No Play'") data1 <- dbFetch(rs, n = -1) lv_pbp 
<- data1 lv_pbp$log_ydstogo <- log(lv_pbp$ydstogo); lv_pbp$down <- as.factor(lv_pbp$down); repeat_last = function(x, forward = TRUE, maxgap = Inf, na.rm = FALSE) { if (!forward) x = rev(x) # reverse x twice if carrying backward ind = which(!is.na(x)) # get positions of nonmissing values if (is.na(x[1]) && !na.rm) # if it begins with NA ind = c(1,ind) # add first pos rep_times = diff( # diffing the indices + length yields how often c(ind, length(x) + 1) ) # they need to be repeated if (maxgap < Inf) { exceed = rep_times - 1 > maxgap # exceeding maxgap if (any(exceed)) { # any exceed? ind = sort(c(ind[exceed] + 1, ind)) # add NA in gaps rep_times = diff(c(ind, length(x) + 1) ) # diff again } } x = rep(x[ind], times = rep_times) # repeat the values at these indices if (!forward) x = rev(x) # second reversion x } lv_pbp$TimeSecs <- repeat_last(lv_pbp$TimeSecs) lv_pbp$TimeSecs_Remaining <- ifelse(lv_pbp$qtr %in% c(1,2), lv_pbp$TimeSecs - 1440, lv_pbp$TimeSecs) lv_pbp$yrdline100 <- repeat_last(lv_pbp$yrdline100) library(tidyverse) library(nflWAR) library(sqldf) options(sqldf.driver = "SQLite") pbp_data <- lv_pbp nrow(pbp_data) find_game_next_score_half <- function(pbp_dataset) { # Which rows are the scoring plays: score_plays <- which(pbp_dataset$sp == 1 & pbp_dataset$play_type != "No Play") # Define a helper function that takes in the current play index, # a vector of the scoring play indices, play-by-play data, # and returns the score type and drive number for the next score: find_next_score <- function(play_i, score_plays_i,pbp_df) { # Find the next score index for the current play # based on being the first next score index: next_score_i <- score_plays_i[which(score_plays_i >= play_i)[1]] # If next_score_i is NA (no more scores after current play) # or if the next score is in another half, # then return No_Score and the current drive number if (is.na(next_score_i) | (pbp_df$qtr[play_i] %in% c(1, 2) & pbp_df$qtr[next_score_i] %in% c(3, 4, 5)) | (pbp_df$qtr[play_i] 
%in% c(3, 4) & pbp_df$qtr[next_score_i] == 5)) { score_type <- "No_Score" # Make it the current play index score_drive <- pbp_df$Drive[play_i] # Else return the observed next score type and drive number: } else { # Store the score_drive number score_drive <- pbp_df$Drive[next_score_i] # Then check the play types to decide what to return # based on several types of cases for the next score: # 1: Return TD if (identical(pbp_df$ReturnResult[next_score_i], "Touchdown")) { # For return touchdowns the current posteam would not have # possession at the time of return, so it's flipped: if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Opp_Touchdown" } else { score_type <- "Touchdown" } } else if (identical(pbp_df$FieldGoalResult[next_score_i], "Good")) { # 2: Field Goal # Current posteam made FG if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Field_Goal" # Opponent made FG } else { score_type <- "Opp_Field_Goal" } # 3: Touchdown (returns already counted for) } else if (pbp_df$Touchdown[next_score_i] == 1) { # Current posteam TD if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Touchdown" # Opponent TD } else { score_type <- "Opp_Touchdown" } # 4: Safety (similar to returns) } else if (pbp_df$Safety[next_score_i] == 1) { if (identical(pbp_df$posteam[play_i],pbp_df$posteam[next_score_i])) { score_type <- "Opp_Safety" } else { score_type <- "Safety" } # 5: Extra Points } else if (identical(pbp_df$ExPointResult[next_score_i], "Made")) { # Current posteam Extra Point if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Extra_Point" # Opponent Extra Point } else { score_type <- "Opp_Extra_Point" } # 6: Two Point Conversions } else if (identical(pbp_df$TwoPointConv[next_score_i], "Success")) { # Current posteam Two Point Conversion if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Two_Point_Conversion" # 
Opponent Two Point Conversion } else { score_type <- "Opp_Two_Point_Conversion" } # 7: Defensive Two Point (like returns) } else if (identical(pbp_df$DefTwoPoint[next_score_i], "Success")) { if (identical(pbp_df$posteam[play_i], pbp_df$posteam[next_score_i])) { score_type <- "Opp_Defensive_Two_Point" } else { score_type <- "Defensive_Two_Point" } # 8: Errors of some sort so return NA (but shouldn't take place) } else { score_type <- NA } } return(data.frame(Next_Score_Half = score_type, Drive_Score_Half = score_drive)) } # Using lapply and then bind_rows is much faster than # using map_dfr() here: lapply(c(1:nrow(pbp_dataset)), find_next_score, score_plays_i = score_plays, pbp_df = pbp_dataset) %>% bind_rows() %>% return } pbp_next_score_half <- map_dfr(unique(pbp_data$GameID), function(x) { pbp_data %>% filter(GameID == x) %>% find_game_next_score_half() }) # Join to the pbp_data: pbp_data_next_score <- bind_cols(pbp_data, pbp_next_score_half) # Create the EP model dataset that only includes plays with basic seven # types of next scoring events along with the following play types: # Field Goal, No Play, Pass, Punt, Run, Sack, Spike pbp_ep_model_data <- pbp_data_next_score %>% filter(Next_Score_Half %in% c("Opp_Field_Goal", "Opp_Safety", "Opp_Touchdown", "Field_Goal", "No_Score", "Safety", "Touchdown") & play_type %in% c("Field Goal", "No Play", "Pass", "Punt", "Run", "Sack", "Spike") & is.na(TwoPointConv) & is.na(ExPointResult) & !is.na(down) & !is.na(TimeSecs)) nrow(pbp_ep_model_data) # 304805 pbp_ep_model_data <- sqldf("select * from pbp_ep_model_data where TwoPointConv is null") # Now adjust and create the model variables: pbp_ep_model_data <- pbp_ep_model_data %>% # Reference level should be No_Score: mutate(Next_Score_Half = fct_relevel(factor(Next_Score_Half), "No_Score"), # Create a variable that is time remaining until end of half: # (only working with up to 2016 data so can ignore 2017 time change) TimeSecs_Remaining = as.numeric(ifelse(qtr %in% c(1,2), 
TimeSecs - 1440, TimeSecs)), # log transform of yards to go and indicator for two minute warning: log_ydstogo = log(ydstogo), Under_TwoMinute_Warning = ifelse(TimeSecs_Remaining < 120, 1, 0), # Changing down into a factor variable: down = factor(down), # Calculate the drive difference between the next score drive and the # current play drive: Drive_Score_Dist = Drive_Score_Half - Drive, # Create a weight column based on difference in drives between play and next score: Drive_Score_Dist_W = (max(Drive_Score_Dist) - Drive_Score_Dist) / (max(Drive_Score_Dist) - min(Drive_Score_Dist)), # Create a weight column based on score differential: ScoreDiff_W = (max(abs(ScoreDiff)) - abs(ScoreDiff)) / (max(abs(ScoreDiff)) - min(abs(ScoreDiff))), # Add these weights together and scale again: Total_W = Drive_Score_Dist_W + ScoreDiff_W, Total_W_Scaled = (Total_W - min(Total_W)) / (max(Total_W) - min(Total_W))) # Save dataset in data folder as pbp_ep_model_data.csv # (NOTE: this dataset is not pushed due to its size exceeding # the github limit but will be referenced in other files) # write_csv(pbp_ep_model_data, "data/pbp_ep_model_data.csv") # Fit the expected points model: # install.packages("nnet") ep_model <- nnet::multinom(Next_Score_Half ~ TimeSecs_Remaining + yrdline100 + down + log_ydstogo + GoalToGo + log_ydstogo*down + yrdline100*down + GoalToGo*log_ydstogo, data = pbp_ep_model_data, weights = Total_W_Scaled, maxit = 300, na.action = na.pass) fg_model_data <- pbp_data_next_score %>% filter(play_type %in% c("Field Goal","Extra Point") & (!is.na(ExPointResult) | !is.na(FieldGoalResult))) nrow(fg_model_data) # 16906 # Save dataset in data folder as fg_model_data.csv # (NOTE: this dataset is not pushed due to its size exceeding # the github limit but will be referenced in other files) # write_csv(fg_model_data, "data/fg_model_data.csv") # Fit the field goal model: # install.packages("mgcv") fg_model <- mgcv::bam(sp ~ s(yrdline100), data = fg_model_data, family = "binomial") 
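# Illustrative sketch only (not part of the pipeline; the state below is made up):
# the fitted multinomial EP model can be sanity-checked on a hypothetical game
# state -- predict() returns one probability per next-score outcome, and the
# probabilities for a single state sum to 1.
#
#   example_state <- data.frame(TimeSecs_Remaining = 900, yrdline100 = 75,
#                               down = factor("1", levels = levels(pbp_ep_model_data$down)),
#                               log_ydstogo = log(10), GoalToGo = 0)
#   ep_probs <- predict(ep_model, newdata = example_state, type = "probs")
#   stopifnot(abs(sum(ep_probs) - 1) < 1e-8)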
base_ep_preds <- as.data.frame(predict(ep_model, newdata = pbp_ep_model_data, type = "probs"))
# Column order matches the factor levels of Next_Score_Half (No_Score first, then alphabetical):
colnames(base_ep_preds) <- c("No_Score","Field_Goal","Opp_Field_Goal",
                             "Opp_Safety","Opp_Touchdown","Safety","Touchdown")
base_ep_preds <- dplyr::rename(base_ep_preds, Field_Goal_Prob = Field_Goal, Touchdown_Prob = Touchdown,
                               Opp_Field_Goal_Prob = Opp_Field_Goal, Opp_Touchdown_Prob = Opp_Touchdown,
                               Safety_Prob = Safety, Opp_Safety_Prob = Opp_Safety, No_Score_Prob = No_Score)
pbp_ep_model_data <- cbind(pbp_ep_model_data, base_ep_preds)
# Note: the select list must spell Opp_Safety_Prob with a capital P -- sqldf preserves the
# case as written, and the code below references the column as Opp_Safety_Prob.
lv_pbp2 <- sqldf("select a.id, a.Date, a.GameID, a.play_id, a.Drive, a.qtr, a.TimeSecs,
                  a.TimeSecs_Remaining, a.posteam, a.DefensiveTeam, a.down, a.ydstogo, a.yrdline100,
                  a.play_type, a.net_yards, a.GoalToGo, a.FirstDown, a.sp, a.Touchdown, a.ExPointResult,
                  a.TwoPointConv, a.Fumble, a.RecFumbTeam, a.PosTeamScore, a.DefTeamScore, a.ScoreDiff,
                  a.AbsScoreDiff, a.Safety, a.FieldGoalResult, a.ReturnResult, a.InterceptionThrown,
                  a.HomeTeam, a.AwayTeam, Field_Goal_Prob, Touchdown_Prob, Opp_Field_Goal_Prob,
                  Opp_Touchdown_Prob, Safety_Prob, Opp_Safety_Prob, No_Score_Prob
                  from lv_pbp a left outer join pbp_ep_model_data b on a.id = b.id")
# Calculate the EP for receiving a touchback (from the point of view of the receiving team)
# and update the columns for Kickoff plays:
kickoff_data <- lv_pbp2
# Set the yard line to 60, the spot used here for the team receiving a kickoff:
kickoff_data$yrdline100 <- 60
# Not GoalToGo:
kickoff_data$GoalToGo <- rep(0, nrow(lv_pbp2))
# Now first down:
kickoff_data$down <- rep("1", nrow(lv_pbp2))
# 10 ydstogo:
kickoff_data$ydstogo <- rep(10, nrow(lv_pbp2))
# Create log_ydstogo:
kickoff_data <- dplyr::mutate(kickoff_data, log_ydstogo = log(ydstogo))
# Get the new predicted probabilities:
if (nrow(kickoff_data) > 1) {
  kickoff_preds <- as.data.frame(predict(ep_model, newdata = kickoff_data, type = "probs"))
} else {
  kickoff_preds <- as.data.frame(matrix(predict(ep_model, newdata = kickoff_data, type =
"probs"), ncol = 7)) } colnames(kickoff_preds) <- c("No_Score","Opp_Field_Goal","Opp_Safety","Opp_Touchdown", "Field_Goal","Safety","Touchdown") # Find the kickoffs: kickoff_i <- which(lv_pbp2$play_type == "Kickoff") # Now update the probabilities: lv_pbp2[kickoff_i, "Field_Goal_Prob"] <- kickoff_preds[kickoff_i, "Field_Goal"] lv_pbp2[kickoff_i, "Touchdown_Prob"] <- kickoff_preds[kickoff_i, "Touchdown"] lv_pbp2[kickoff_i, "Opp_Field_Goal_Prob"] <- kickoff_preds[kickoff_i, "Opp_Field_Goal"] lv_pbp2[kickoff_i, "Opp_Touchdown_Prob"] <- kickoff_preds[kickoff_i, "Opp_Touchdown"] lv_pbp2[kickoff_i, "Safety_Prob"] <- kickoff_preds[kickoff_i, "Safety"] lv_pbp2[kickoff_i, "Opp_Safety_Prob"] <- kickoff_preds[kickoff_i, "Opp_Safety"] lv_pbp2[kickoff_i, "No_Score_Prob"] <- kickoff_preds[kickoff_i, "No_Score"] # ---------------------------------------------------------------------------------- # Insert probabilities of 0 for everything but No_Score for QB Kneels: # Find the QB Kneels: qb_kneels_i <- which(lv_pbp2$play_type == "QB Kneel") # Now update the probabilities: lv_pbp2[qb_kneels_i, "Field_Goal_Prob"] <- 0 lv_pbp2[qb_kneels_i, "Touchdown_Prob"] <- 0 lv_pbp2[qb_kneels_i, "Opp_Field_Goal_Prob"] <- 0 lv_pbp2[qb_kneels_i, "Opp_Touchdown_Prob"] <- 0 lv_pbp2[qb_kneels_i, "Safety_Prob"] <- 0 lv_pbp2[qb_kneels_i, "Opp_Safety_Prob"] <- 0 lv_pbp2[qb_kneels_i, "No_Score_Prob"] <- 1 # ---------------------------------------------------------------------------------- # Create two new columns, ExPoint_Prob and TwoPoint_Prob, for the PAT events: lv_pbp2$ExPoint_Prob <- 0 lv_pbp2$TwoPoint_Prob <- 0 # Find the indices for these types of plays: extrapoint_i <- which(!is.na(lv_pbp2$ExPointResult)) twopoint_i <- which(!is.na(lv_pbp2$TwoPointConv)) expt_df <- sqldf("select avg(case when ExPointResult = 'Made' then 1 else 0 end) as avg_expt from lv_pbp2 where ExPointResult is not null") twopt_df <- sqldf("select avg(case when TwoPointConv = 'Success' then 1 else 0 end) as avg_twopt from 
lv_pbp2 where TwoPointConv is not null") # Assign the make_fg_probs of the extra-point PATs: lv_pbp2$ExPoint_Prob[extrapoint_i] <- expt_df$avg_expt # Assign the TwoPoint_Prob with the historical success rate: lv_pbp2$TwoPoint_Prob[twopoint_i] <- twopt_df$avg_twopt # ---------------------------------------------------------------------------------- # Insert NAs for all other types of plays: missing_i <- which(lv_pbp2$play_type %in% c("End of Quarter", "End of Game", "End of Half")) # Now update the probabilities for missing and PATs: lv_pbp2$Field_Goal_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$Touchdown_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$Opp_Field_Goal_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$Opp_Touchdown_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$Safety_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$Opp_Safety_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2$No_Score_Prob[c(missing_i,extrapoint_i,twopoint_i)] <- 0 lv_pbp2 <- dplyr::mutate(lv_pbp2, ExpPts = (0*No_Score_Prob) + (-3 * Opp_Field_Goal_Prob) + (-2 * Opp_Safety_Prob) + (-7 * Opp_Touchdown_Prob) + (3 * Field_Goal_Prob) + (2 * Safety_Prob) + (7 * Touchdown_Prob) + (1 * ExPoint_Prob) + (2 * TwoPoint_Prob)) pbp_data_ep <- lv_pbp2 pbp_data_epa <- dplyr::group_by(pbp_data_ep,GameID) pbp_data_epa <- dplyr::mutate(pbp_data_epa, # Offense touchdown (including kickoff returns): EPA_off_td = 7 - ExpPts, # Offense fieldgoal: EPA_off_fg = 3 - ExpPts, # Offense extra-point conversion: EPA_off_ep = 1 - ExpPts, # Offense two-point conversion: EPA_off_tp = 2 - ExpPts, # Missing PAT: EPA_PAT_fail = 0 - ExpPts, # Opponent Safety: EPA_safety = -2 - ExpPts, # End of half/game or timeout or QB Kneel: EPA_endtime = 0, # Defense scoring touchdown (including punt returns): EPA_change_score = -7 - ExpPts, # Change of possession without defense scoring # and no timeout, two minute warning, or quarter end follows: EPA_change_no_score = 
-dplyr::lead(ExpPts) - ExpPts, # Change of possession without defense scoring # but either Timeout, Two Minute Warning, # Quarter End is the following row: EPA_change_no_score_nxt = -dplyr::lead(ExpPts,2) - ExpPts, # Team keeps possession but either Timeout, Two Minute Warning, # Quarter End is the following row: EPA_base_nxt = dplyr::lead(ExpPts,2) - ExpPts, # Team keeps possession (most general case): EPA_base = dplyr::lead(ExpPts) - ExpPts, # Now the same for PTDA: # Team keeps possession (most general case): PTDA_base = dplyr::lead(Touchdown_Prob) - Touchdown_Prob, # Team keeps possession but either Timeout, Two Minute Warning, # Quarter End is the following row: PTDA_base_nxt = dplyr::lead(Touchdown_Prob,2) - Touchdown_Prob, # Change of possession without defense scoring # and no timeout, two minute warning, or quarter end follows: PTDA_change_no_score = dplyr::lead(Opp_Touchdown_Prob) - Touchdown_Prob, # Change of possession without defense scoring # but either Timeout, Two Minute Warning, # Quarter End is the following row: PTDA_change_no_score_nxt = dplyr::lead(Opp_Touchdown_Prob,2) - Touchdown_Prob, # Change of possession with defense scoring touchdown: PTDA_change_score = 0 - Touchdown_Prob, # Offense touchdown: PTDA_off_td = 1 - Touchdown_Prob, # Offense fieldgoal: PTDA_off_fg = 0 - Touchdown_Prob, # Offense extra-point conversion: PTDA_off_ep = 0, # Offense two-point conversion: PTDA_off_tp = 0, # Offense PAT fail: PTDA_PAT_fail = 0, # Opponent Safety: PTDA_safety = 0 - Touchdown_Prob, # End of half/game or timeout or QB Kneel: PTDA_endtime = 0) # Define the scoring plays first: pbp_data_epa$EPA_off_td_ind <- with(pbp_data_epa, ifelse(sp == 1 & Touchdown == 1 & ((play_type %in% c("Pass","Run") & (InterceptionThrown != 1 & Fumble != 1)) | (play_type == "Kickoff" & ReturnResult == "Touchdown")), 1,0)) pbp_data_epa$EPA_off_fg_ind <- with(pbp_data_epa, ifelse(play_type %in% c("Field Goal","Run") & FieldGoalResult == "Good", 1, 0)) 
pbp_data_epa$EPA_off_ep_ind <- with(pbp_data_epa, ifelse(ExPointResult == "Made" & play_type != "No Play", 1, 0)) pbp_data_epa$EPA_off_tp_ind <- with(pbp_data_epa, ifelse(TwoPointConv == "Success" & play_type != "No Play", 1, 0)) pbp_data_epa$EPA_PAT_fail_ind <- with(pbp_data_epa, ifelse(play_type != "No Play" & (ExPointResult %in% c("Missed", "Aborted", "Blocked") | TwoPointConv == "Failure"), 1, 0)) pbp_data_epa$EPA_safety_ind <- with(pbp_data_epa, ifelse(play_type != "No Play" & Safety == 1, 1, 0)) pbp_data_epa$EPA_endtime_ind <- with(pbp_data_epa, ifelse(play_type %in% c("Half End","Quarter End", "End of Game","Timeout","QB Kneel") | (GameID == dplyr::lead(GameID) & sp != 1 & Touchdown != 1 & is.na(FieldGoalResult) & is.na(ExPointResult) & is.na(TwoPointConv) & Safety != 1 & ((dplyr::lead(play_type) %in% c("Half End","End of Game")) | (qtr == 2 & dplyr::lead(qtr)==3) | (qtr == 4 & dplyr::lead(qtr)==5))),1,0)) pbp_data_epa$EPA_change_score_ind <- with(pbp_data_epa, ifelse(play_type != "No Play" & sp == 1 & Touchdown == 1 & (InterceptionThrown == 1 | (Fumble == 1 & RecFumbTeam != posteam) | (play_type == "Punt" & ReturnResult == "Touchdown")), 1, 0)) pbp_data_epa$EPA_change_no_score_nxt_ind <- with(pbp_data_epa, ifelse(GameID == dplyr::lead(GameID) & GameID == dplyr::lead(GameID,2) & sp != 1 & dplyr::lead(play_type) %in% c("Quarter End", "Two Minute Warning", "Timeout") & (Drive != dplyr::lead(Drive,2)) & (posteam != dplyr::lead(posteam,2)), 1, 0)) pbp_data_epa$EPA_base_nxt_ind <- with(pbp_data_epa, ifelse(GameID == dplyr::lead(GameID) & GameID == dplyr::lead(GameID,2) & sp != 1 & dplyr::lead(play_type) %in% c("Quarter End", "Two Minute Warning", "Timeout") & (Drive == dplyr::lead(Drive,2)), 1, 0)) pbp_data_epa$EPA_change_no_score_ind <- with(pbp_data_epa, ifelse(GameID == dplyr::lead(GameID) & Drive != dplyr::lead(Drive) & posteam != dplyr::lead(posteam) & dplyr::lead(play_type) %in% c("Pass","Run","Punt","Sack", "Field Goal","No Play", "QB Kneel","Spike"), 1, 
0)) # Replace the missings with 0 due to how ifelse treats missings pbp_data_epa$EPA_PAT_fail_ind[is.na(pbp_data_epa$EPA_PAT_fail_ind)] <- 0 pbp_data_epa$EPA_base_nxt_ind[is.na(pbp_data_epa$EPA_base_nxt_ind)] <- 0 pbp_data_epa$EPA_change_no_score_nxt_ind[is.na(pbp_data_epa$EPA_change_no_score_nxt_ind)] <- 0 pbp_data_epa$EPA_change_no_score_ind[is.na(pbp_data_epa$EPA_change_no_score_ind)] <- 0 pbp_data_epa$EPA_change_score_ind[is.na(pbp_data_epa$EPA_change_score_ind)] <- 0 pbp_data_epa$EPA_off_td_ind[is.na(pbp_data_epa$EPA_off_td_ind)] <- 0 pbp_data_epa$EPA_off_fg_ind[is.na(pbp_data_epa$EPA_off_fg_ind)] <- 0 pbp_data_epa$EPA_off_ep_ind[is.na(pbp_data_epa$EPA_off_ep_ind)] <- 0 pbp_data_epa$EPA_off_tp_ind[is.na(pbp_data_epa$EPA_off_tp_ind)] <- 0 pbp_data_epa$EPA_safety_ind[is.na(pbp_data_epa$EPA_safety_ind)] <- 0 pbp_data_epa$EPA_endtime_ind[is.na(pbp_data_epa$EPA_endtime_ind)] <- 0 # Assign EPA using these indicator columns: pbp_data_epa$EPA <- with(pbp_data_epa, ifelse(EPA_off_td_ind == 1, EPA_off_td, ifelse(EPA_off_fg_ind == 1, EPA_off_fg, ifelse(EPA_off_ep_ind == 1, EPA_off_ep, ifelse(EPA_off_tp_ind == 1, EPA_off_tp, ifelse(EPA_PAT_fail_ind == 1, EPA_PAT_fail, ifelse(EPA_safety_ind == 1, EPA_safety, ifelse(EPA_endtime_ind == 1, EPA_endtime, ifelse(EPA_change_score_ind == 1, EPA_change_score, ifelse(EPA_change_no_score_nxt_ind == 1, EPA_change_no_score_nxt, ifelse(EPA_base_nxt_ind == 1, EPA_base_nxt, ifelse(EPA_change_no_score_ind == 1, EPA_change_no_score, EPA_base)))))))))))) # Assign PTDA using these indicator columns: pbp_data_epa$PTDA <- with(pbp_data_epa, ifelse(EPA_off_td_ind == 1, PTDA_off_td, ifelse(EPA_off_fg_ind == 1, PTDA_off_fg, ifelse(EPA_off_ep_ind == 1, PTDA_off_ep, ifelse(EPA_off_tp_ind == 1, PTDA_off_tp, ifelse(EPA_PAT_fail_ind == 1, PTDA_PAT_fail, ifelse(EPA_safety_ind == 1, PTDA_safety, ifelse(EPA_endtime_ind == 1, PTDA_endtime, ifelse(EPA_change_score_ind == 1, PTDA_change_score, ifelse(EPA_change_no_score_nxt_ind == 1, 
PTDA_change_no_score_nxt,
                                  ifelse(EPA_base_nxt_ind == 1, PTDA_base_nxt,
                                  ifelse(EPA_change_no_score_ind == 1, PTDA_change_no_score,
                                         PTDA_base))))))))))))
# Now drop the unnecessary columns
pbp_data_epa_final <- dplyr::select(pbp_data_epa,
                                    -c(EPA_base, EPA_base_nxt,
                                       EPA_change_no_score, EPA_change_no_score_nxt,
                                       EPA_change_score, EPA_off_td, EPA_off_fg, EPA_off_ep,
                                       EPA_off_tp, EPA_safety, EPA_base_nxt_ind,
                                       EPA_change_no_score_ind, EPA_change_no_score_nxt_ind,
                                       EPA_change_score_ind, EPA_off_td_ind, EPA_off_fg_ind, EPA_off_ep_ind,
                                       EPA_off_tp_ind, EPA_safety_ind, EPA_endtime_ind, EPA_endtime,
                                       PTDA_base, PTDA_base_nxt, PTDA_PAT_fail,
                                       PTDA_change_no_score, PTDA_change_no_score_nxt,
                                       PTDA_change_score, PTDA_off_td, PTDA_off_fg, PTDA_off_ep,
                                       PTDA_off_tp, PTDA_safety, PTDA_endtime,
                                       EPA_PAT_fail, EPA_PAT_fail_ind))
# Ungroup the dataset
pbp_data_epa_final <- dplyr::ungroup(pbp_data_epa_final)
# Take out games with a win margin of more than 49 points:
pbp_data <- sqldf("select * from pbp_data_epa_final where GameID not in
                   ('2016082601', '2016090200', '2017092200', '2017102000', '2017092300',
                    '2016093001', '2016102900', '2017082500', '2017090100', '2017082400')")
pbp_data <- pbp_data %>%
  # Create an indicator for which half it is:
  mutate(Half_Ind = ifelse(qtr %in% c(1, 2), "Half1",
                           ifelse(qtr %in% c(3, 4), "Half2", "Overtime")),
         down = factor(down))
win_data <- sqldf("select a.GameID,
                   case when PosTeamScore > DefTeamScore then posteam else DefensiveTeam end as Winner
                   from pbp_data a
                   inner join (select GameID, max(play_id) as max_play_id
                               from pbp_data group by 1) b
                   on a.GameID = b.GameID and a.play_id = b.max_play_id
                   group by 1,2")
# View(win_data)  # interactive check only
pbp_wp_model_data <- pbp_data %>%
  mutate(GameID = as.character(GameID)) %>%
  left_join(win_data, by = "GameID") %>%
  # Create an indicator column if the posteam wins the game:
  mutate(Win_Indicator = ifelse(posteam == Winner, 1, 0),
         # Calculate the Expected Score Differential by taking the sum
         # of the Expected Points for a play and the score differential:
         ExpScoreDiff =
                  ExpPts + ScoreDiff,
         # Create a variable that is time remaining until end of half and game:
         TimeSecs_Remaining = ifelse(qtr %in% c(1, 2), TimeSecs - 1440,
                                     ifelse(qtr == 5, TimeSecs + 900, TimeSecs)),
         # Under two-minute warning indicator
         Under_TwoMinute_Warning = ifelse(TimeSecs_Remaining < 120, 1, 0),
         # Define a form of the TimeSecs_Adj that just takes the original TimeSecs but
         # resets the overtime back to 900:
         TimeSecs_Adj = ifelse(qtr == 5, TimeSecs + 900, TimeSecs)) %>%
  # Remove the rows that don't have any ExpScoreDiff and missing Win Indicator:
  filter(!is.na(ExpScoreDiff) & !is.na(Win_Indicator) &
           play_type != "End of Game") %>%
  # Turn win indicator into a factor:
  mutate(Win_Indicator = as.factor(Win_Indicator),
         # Define a new variable, ratio of Expected Score Differential to TimeSecs_Adj,
         # which is the same variable essentially as in Lock's random forest model
         # but now including the expected score differential instead:
         ExpScoreDiff_Time_Ratio = ExpScoreDiff / (TimeSecs_Adj + 1)) %>%
  # Remove overtime rows since overtime has special rules regarding the outcome:
  filter(qtr != 5) %>%
  # Turn the Half indicator into a factor:
  mutate(Half_Ind = as.factor(Half_Ind))

wp_model <- mgcv::bam(Win_Indicator ~ s(ExpScoreDiff) +
                        s(TimeSecs_Remaining, by = Half_Ind) +
                        s(ExpScoreDiff_Time_Ratio) +
                        Under_TwoMinute_Warning * Half_Ind,
                      data = pbp_wp_model_data, family = "binomial")

# Initialize the vector to store the predicted win probability
# with respect to the possession team:
OffWinProb <- rep(NA, nrow(pbp_data_epa_final))

# Changing down into a factor variable
pbp_data_epa_final$down <- factor(pbp_data_epa_final$down)

# Get the Season and Month for rule changes:
pbp_data_epa_final <- dplyr::mutate(pbp_data_epa_final,
                                    Season = as.numeric(substr(as.character(GameID), 1, 4)),
                                    Month = as.numeric(substr(as.character(GameID), 5, 6)))

# Create a variable that is time remaining until end of half and game:
pbp_data_epa_final$TimeSecs_Remaining <-
  ifelse(pbp_data_epa_final$qtr %in% c(1, 2),
         pbp_data_epa_final$TimeSecs - 1440,
         ifelse(pbp_data_epa_final$qtr == 5 &
                  (pbp_data_epa_final$Season == 2017 & pbp_data_epa_final$Month > 4),
                pbp_data_epa_final$TimeSecs + 600,
                ifelse(pbp_data_epa_final$qtr == 5 &
                         (pbp_data_epa_final$Season < 2017 |
                            (pbp_data_epa_final$Season == 2017 &
                               pbp_data_epa_final$Month <= 4)),
                       pbp_data_epa_final$TimeSecs + 900,
                       pbp_data_epa_final$TimeSecs)))

# Expected Score Differential
pbp_data_epa_final <- dplyr::mutate(pbp_data_epa_final,
                                    ExpScoreDiff = ExpPts + ScoreDiff)

# Ratio of time to yard line
pbp_data_epa_final <- dplyr::mutate(pbp_data_epa_final,
                                    Time_Yard_Ratio = (1 + TimeSecs_Remaining) / (1 + yrdline100))

# Under Two Minute Warning Flag
pbp_data_epa_final$Under_TwoMinute_Warning <-
  ifelse(pbp_data_epa_final$TimeSecs_Remaining < 120, 1, 0)

# Define a form of the TimeSecs_Adj that just takes the original TimeSecs but
# resets the overtime back to 900 or 600 depending on year:
pbp_data_epa_final$TimeSecs_Adj <-
  ifelse(pbp_data_epa_final$qtr == 5 &
           (pbp_data_epa_final$Season == 2017 & pbp_data_epa_final$Month > 4),
         pbp_data_epa_final$TimeSecs + 600,
         ifelse(pbp_data_epa_final$qtr == 5 &
                  (pbp_data_epa_final$Season < 2017 |
                     (pbp_data_epa_final$Season == 2017 &
                        pbp_data_epa_final$Month <= 4)),
                pbp_data_epa_final$TimeSecs + 900,
                pbp_data_epa_final$TimeSecs))

# Define a new variable, ratio of Expected Score Differential to TimeSecs_Adj:
pbp_data_epa_final <- dplyr::mutate(pbp_data_epa_final,
                                    ExpScoreDiff_Time_Ratio = ExpScoreDiff / (TimeSecs_Adj + 1))

pbp_data_epa_final$Half_Ind <- with(pbp_data_epa_final,
                                    ifelse(qtr %in% c(1, 2), "Half1", "Half2"))
pbp_data_epa_final$Half_Ind <- as.factor(pbp_data_epa_final$Half_Ind)

OffWinProb <- as.numeric(mgcv::predict.bam(wp_model,
                                           newdata = pbp_data_epa_final,
                                           type = "response"))
DefWinProb <- 1 - OffWinProb

# Possession team Win_Prob
pbp_data_epa_final$Win_Prob <- OffWinProb

# Home: Pre-play
pbp_data_epa_final$Home_WP_pre <- ifelse(pbp_data_epa_final$posteam ==
                                           pbp_data_epa_final$HomeTeam,
                                         OffWinProb, DefWinProb)

# Away: Pre-play
pbp_data_epa_final$Away_WP_pre <- ifelse(pbp_data_epa_final$posteam ==
                                           pbp_data_epa_final$AwayTeam,
                                         OffWinProb, DefWinProb)

# Create the possible WPA values
pbp_data_epa_final <- dplyr::mutate(pbp_data_epa_final,
                                    # Team keeps possession (most general case):
                                    WPA_base = dplyr::lead(Win_Prob) - Win_Prob,
                                    # Team keeps possession but either Timeout, Two Minute Warning,
                                    # Quarter End is the following row:
                                    WPA_base_nxt = dplyr::lead(Win_Prob, 2) - Win_Prob,
                                    # Change of possession and no timeout,
                                    # two minute warning, or quarter end follows:
                                    WPA_change = (1 - dplyr::lead(Win_Prob)) - Win_Prob,
                                    # Change of possession but either Timeout,
                                    # Two Minute Warning, or
                                    # Quarter End is the following row:
                                    WPA_change_nxt = (1 - dplyr::lead(Win_Prob, 2)) - Win_Prob,
                                    # End of quarter, half or end rows:
                                    WPA_halfend_to = 0)

# Create a WPA column for the last play of the game:
pbp_data_epa_final$WPA_final <-
  ifelse(dplyr::lead(pbp_data_epa_final$ScoreDiff) > 0 &
           dplyr::lead(pbp_data_epa_final$posteam) == pbp_data_epa_final$HomeTeam,
         1 - pbp_data_epa_final$Home_WP_pre,
         ifelse(dplyr::lead(pbp_data_epa_final$ScoreDiff) > 0 &
                  dplyr::lead(pbp_data_epa_final$posteam) == pbp_data_epa_final$AwayTeam,
                1 - pbp_data_epa_final$Away_WP_pre,
                ifelse(dplyr::lead(pbp_data_epa_final$ScoreDiff) < 0 &
                         dplyr::lead(pbp_data_epa_final$posteam) == pbp_data_epa_final$HomeTeam,
                       0 - pbp_data_epa_final$Home_WP_pre,
                       ifelse(dplyr::lead(pbp_data_epa_final$ScoreDiff) < 0 &
                                dplyr::lead(pbp_data_epa_final$posteam) == pbp_data_epa_final$AwayTeam,
                              0 - pbp_data_epa_final$Away_WP_pre,
                              0))))

pbp_data_epa_final$WPA_base_nxt_ind <-
  with(pbp_data_epa_final,
       ifelse(GameID == dplyr::lead(GameID) &
                posteam == dplyr::lead(posteam, 2) &
                Drive == dplyr::lead(Drive, 2) &
                dplyr::lead(play_type) %in% c("Quarter End", "Two Minute Warning", "Timeout"),
              1, 0))

pbp_data_epa_final$WPA_change_nxt_ind <-
  with(pbp_data_epa_final,
       ifelse(GameID == dplyr::lead(GameID) &
                Drive != dplyr::lead(Drive, 2) &
                posteam != dplyr::lead(posteam, 2) &
                dplyr::lead(play_type) %in% c("Quarter End", "Two Minute Warning", "Timeout"),
              1, 0))

pbp_data_epa_final$WPA_change_ind <-
  with(pbp_data_epa_final,
       ifelse(GameID == dplyr::lead(GameID) &
                Drive != dplyr::lead(Drive) &
                posteam != dplyr::lead(posteam) &
                dplyr::lead(play_type) %in% c("Pass", "Run", "Punt", "Sack",
                                              "Field Goal", "No Play", "QB Kneel",
                                              "Spike", "Kickoff"),
              1, 0))

pbp_data_epa_final$WPA_halfend_to_ind <-
  with(pbp_data_epa_final,
       ifelse(play_type %in% c("Half End", "Quarter End",
                               "End of Game", "Timeout") |
                (GameID == dplyr::lead(GameID) &
                   sp != 1 & Touchdown != 1 &
                   is.na(FieldGoalResult) & is.na(ExPointResult) &
                   is.na(TwoPointConv) & Safety != 1 &
                   ((dplyr::lead(play_type) %in% c("Half End")) |
                      (qtr == 2 & dplyr::lead(qtr) == 3) |
                      (qtr == 4 & dplyr::lead(qtr) == 5))),
              1, 0))

pbp_data_epa_final$WPA_final_ind <-
  with(pbp_data_epa_final,
       ifelse(dplyr::lead(play_type) %in% "End of Game", 1, 0))

# Replace the missings with 0 due to how ifelse treats missings
pbp_data_epa_final$WPA_base_nxt_ind[is.na(pbp_data_epa_final$WPA_base_nxt_ind)] <- 0
pbp_data_epa_final$WPA_change_nxt_ind[is.na(pbp_data_epa_final$WPA_change_nxt_ind)] <- 0
pbp_data_epa_final$WPA_change_ind[is.na(pbp_data_epa_final$WPA_change_ind)] <- 0
pbp_data_epa_final$WPA_halfend_to_ind[is.na(pbp_data_epa_final$WPA_halfend_to_ind)] <- 0
pbp_data_epa_final$WPA_final_ind[is.na(pbp_data_epa_final$WPA_final_ind)] <- 0

# Assign WPA using these indicator columns:
pbp_data_epa_final$WPA <-
  with(pbp_data_epa_final,
       ifelse(WPA_final_ind == 1, WPA_final,
              ifelse(WPA_halfend_to_ind == 1, WPA_halfend_to,
                     ifelse(WPA_change_nxt_ind == 1, WPA_change_nxt,
                            ifelse(WPA_base_nxt_ind == 1, WPA_base_nxt,
                                   ifelse(WPA_change_ind == 1, WPA_change,
                                          WPA_base))))))

# Home and Away post:
pbp_data_epa_final$Home_WP_post <-
  ifelse(pbp_data_epa_final$posteam == pbp_data_epa_final$HomeTeam,
         pbp_data_epa_final$Home_WP_pre + pbp_data_epa_final$WPA,
         pbp_data_epa_final$Home_WP_pre - pbp_data_epa_final$WPA)

pbp_data_epa_final$Away_WP_post <-
  ifelse(pbp_data_epa_final$posteam == pbp_data_epa_final$AwayTeam,
         pbp_data_epa_final$Away_WP_pre + pbp_data_epa_final$WPA,
         pbp_data_epa_final$Away_WP_pre - pbp_data_epa_final$WPA)

# For plays with play_type of End of Game, use the previous play's WP_post columns
# as the pre and post, since those are already set to be 1 and 0:
pbp_data_epa_final$Home_WP_pre <- with(pbp_data_epa_final,
                                       ifelse(play_type == "End of Game",
                                              dplyr::lag(Home_WP_post), Home_WP_pre))
pbp_data_epa_final$Home_WP_post <- with(pbp_data_epa_final,
                                        ifelse(play_type == "End of Game",
                                               dplyr::lag(Home_WP_post), Home_WP_post))
pbp_data_epa_final$Away_WP_pre <- with(pbp_data_epa_final,
                                       ifelse(play_type == "End of Game",
                                              dplyr::lag(Away_WP_post), Away_WP_pre))
pbp_data_epa_final$Away_WP_post <- with(pbp_data_epa_final,
                                        ifelse(play_type == "End of Game",
                                               dplyr::lag(Away_WP_post), Away_WP_post))

# Now drop the unnecessary columns
pbp_data_epa_final <- dplyr::select(pbp_data_epa_final,
                                    -c(WPA_base, WPA_base_nxt, WPA_change_nxt, WPA_change,
                                       WPA_halfend_to, WPA_final, WPA_base_nxt_ind,
                                       WPA_change_nxt_ind, WPA_change_ind,
                                       WPA_halfend_to_ind, WPA_final_ind, Half_Ind))

write.csv(pbp_data_epa_final, "pbp_data_epa_final.csv")
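The nested `ifelse()` chain above, which picks among the candidate WPA columns based on indicator variables, can be expressed more readably with `dplyr::case_when()`, which also evaluates conditions in order. A minimal sketch on a made-up data frame (the column names mirror the indicators above; the values are invented, not real play-by-play data):

```r
library(dplyr)

# Toy stand-in for the indicator and candidate-WPA columns (values invented):
toy <- tibble(
  WPA_final_ind      = c(0, 0, 1),
  WPA_halfend_to_ind = c(0, 1, 0),
  WPA_final          = c(NA, NA, 0.25),
  WPA_halfend_to     = c(0, 0, 0),
  WPA_base           = c(0.03, -0.02, NA)
)

# case_when() checks each condition in order, like the nested ifelse() chain:
toy <- toy %>%
  mutate(WPA = case_when(
    WPA_final_ind == 1      ~ WPA_final,
    WPA_halfend_to_ind == 1 ~ WPA_halfend_to,
    TRUE                    ~ WPA_base
  ))

toy$WPA
# 0.03 0.00 0.25
```

The first matching condition wins, so the priority order of the indicators is preserved exactly as in the `ifelse()` version.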
# yrfraction.R
# fraction of the year for a date, includes leap year
# type = 'monthly', 'weekly' or 'daily' (default)
# Jan 2014
yrfraction <- function(date, type = 'daily') {
  if (type == 'daily') {
    if (class(date) != "Date") {
      stop("Date variable for annual data must be in date format, see ?Dates")
    }
    year <- as.numeric(format(date, '%Y'))
    lastday <- ISOdate(year, 12, 31) # last day in December
    day <- as.numeric(format(date, '%j')) # Day of year as decimal number (001-366)
    yrlength <- as.numeric(format(lastday, '%j'))
    yrfrac <- (day - 1) / yrlength
  }
  if (type == 'weekly') {
    if (max(date) > 53 | min(date) < 1) {
      stop("Date variable for weekly data must be a week integer (1 to 53)")
    }
    yrfrac <- (date - 1) / (365.25 / 7)
  }
  if (type == 'monthly') {
    if (max(date) > 12 | min(date) < 1) {
      stop("Date variable for monthly data must be a month integer (1 to 12)")
    }
    yrfrac <- (date - 1) / 12
  }
  return(yrfrac)
}
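The daily branch of `yrfraction()` above leans on `format(date, '%j')` for the day of year and on December 31 for the year length, which is what makes leap years come out automatically. A self-contained check of that idiom (the helper names `doy` and `yrlen` are illustrative, not part of the original file):

```r
# Day-of-year / year-length idiom used by yrfraction(), checked in isolation:
doy <- function(d) as.numeric(format(d, '%j'))                            # day of year (1-366)
yrlen <- function(d) doy(ISOdate(as.numeric(format(d, '%Y')), 12, 31))    # 365 or 366

yrlen(as.Date('2014-06-01'))   # 365 (ordinary year)
yrlen(as.Date('2016-06-01'))   # 366 (leap year)
doy(as.Date('2014-01-01'))     # 1, so yrfraction() maps Jan 1 to 0
```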
/season/R/yrfraction.R
no_license
ingted/R-Examples
R
false
false
929
r
\name{pool.fv}
\alias{pool.fv}
\title{Pool Several Functions}
\description{
  Combine several summary functions into a single function.
}
\usage{
\method{pool}{fv}(..., weights=NULL)
}
\arguments{
  \item{\dots}{
    Objects of class \code{"fv"}.
  }
  \item{weights}{
    Optional numeric vector of weights for the functions.
  }
}
\details{
  The function \code{\link{pool}} is generic. This is the method for the
  class \code{"fv"} of summary functions. It is used to combine several
  estimates of the same function into a single function.

  Each of the arguments \code{\dots} must be an object of class
  \code{"fv"}. They must be compatible, in that they are estimates of
  the same function, and were computed using the same options.

  The sample mean and sample variance of the corresponding estimates
  will be computed.
}
\value{
  An object of class \code{"fv"}.
}
\seealso{
  \code{\link{pool}},
  \code{\link{pool.anylist}},
  \code{\link{pool.rat}}
}
\examples{
  K <- lapply(waterstriders, Kest, correction="iso")
  Kall <- pool(K[[1]], K[[2]], K[[3]])
  Kall <- pool(as.anylist(K))
  plot(Kall, cbind(pooliso, pooltheo) ~ r,
       shade=c("loiso", "hiiso"),
       main="Pooled K function of waterstriders")
}
\author{
  \adrian
  \rolf
  and \ege
}
\keyword{spatial}
\keyword{htest}
\keyword{hplot}
\keyword{iteration}
/man/pool.fv.Rd
no_license
jschedler/spatstat
R
false
false
1,358
rd
# Objective: compute total number of unrel genomes, unrel not neuro genomes
# also, by ethnicity
date()
Sys.info()[c("nodename", "user")]
commandArgs()
rm(list = ls())

R.version.string ## "R version 3.6.1 (2019-07-05)"

# Pipes, mutate/filter, and the joins below all come from dplyr
library(dplyr)

cc_100 = read.csv("~/Documents/STRs/ANALYSIS/population_research/PAPER/carriers/cc_pileup_100Kg/summary_cc_pileup_100Kg_30sept_VGD_KI.tsv",
                  stringsAsFactors = F, header = T, sep = "\t")
dim(cc_100)
# 1783 13

# Enrich with `is_unrel` data
l_unrel = read.table("~/Documents/STRs/ANALYSIS/population_research/MAIN_ANCESTRY/batch2/l_unrelated_55603_genomes_batch2.txt",
                     stringsAsFactors = F)
l_unrel = l_unrel$V1
length(l_unrel)
# 55603

cc_100 = cc_100 %>%
  mutate(is_unrel = ifelse(platekey %in% l_unrel, "Yes", "No"))

# Enrich with popu data
popu_info = read.csv("~/Documents/STRs/ANALYSIS/population_research/MAIN_ANCESTRY/popu_merged_batch1_batch2_79849_genomes.tsv",
                     stringsAsFactors = F, header = F, sep = "\t")
cc_100 = left_join(cc_100, popu_info, by = c("platekey" = "V1"))
colnames(cc_100)[15] = "popu"

batch2_genomes = popu_info %>% filter(V1 %in% l_unrel)

clin_data = read.csv("~/Documents/STRs/clinical_data/clinical_data/Main_RE_V10_and_Pilot_programmes.tsv",
                     stringsAsFactors = F, header = T, sep = "\t")

# Disease_group info (and all other clinical characteristics) we've got for probands
# Let's take the familyIDs that have been recruited as Neuro in `disease_group`
l_fam_neuro = clin_data %>%
  filter(grepl("[Nn][Ee][Uu][Rr][Oo]", diseasegroup_list)) %>%
  select(rare_diseases_family_id) %>%
  unique() %>%
  pull()
length(l_fam_neuro)
# 14402

clin_data = clin_data %>%
  select(platekey, rare_diseases_family_id, diseasegroup_list)
clin_data = unique(clin_data)
dim(clin_data)
# 109411 3

# Load platekey-pid-famID table we created to fish platekeys not included in further RE releases
clin_metadata = read.csv("~/Documents/STRs/clinical_data/clinical_data/merged_RE_releases_and_Pilot_PID_FID_platekey.tsv",
                         stringsAsFactors = F, sep = "\t", header = T)
dim(clin_metadata)
# 621704 4

# Include or enrich `clin_data` with extra platekeys, to associate platekey <-> famID
clin_data = full_join(clin_data,
                      clin_metadata %>% select(platekey, participant_id, rare_diseases_family_id),
                      by = "platekey")
clin_data = unique(clin_data)
dim(clin_data)
# 149776 5

# First let's unite `rare_diseases_family_id` columns into 1
clin_data = clin_data %>%
  group_by(rare_diseases_family_id.x) %>%
  mutate(famID = ifelse(is.na(rare_diseases_family_id.x),
                        rare_diseases_family_id.y,
                        rare_diseases_family_id.x)) %>%
  ungroup() %>%
  as.data.frame()

# Now we've got complete famID, let's define whether each platekey is neuro or not
clin_data = clin_data %>%
  group_by(famID) %>%
  mutate(is_neuro = ifelse(famID %in% l_fam_neuro, "Neuro", "NotNeuro")) %>%
  ungroup() %>%
  as.data.frame()

cc_100 = left_join(cc_100,
                   clin_data %>% select(platekey, diseasegroup_list, is_neuro),
                   by = "platekey")

write.table(cc_100, "~/Documents/STRs/ANALYSIS/population_research/PAPER/carriers/cc_pileup_100Kg/summary_cc_pileup_100Kg_30sept_VGD_KI_unrel.tsv",
            quote = F, col.names = T, row.names = F, sep = "\t")

# How many unrel genomes NOT NEURO do we have?
clin_data %>%
  filter(platekey %in% l_unrel, is_neuro %in% "NotNeuro") %>%
  select(platekey) %>% unique() %>% pull() %>% length()
# 37888

clin_data %>%
  filter(platekey %in% l_unrel, is_neuro %in% "Neuro") %>%
  select(platekey) %>% unique() %>% pull() %>% length()
# 17715

# Create an unrel (with 55603 genomes) clin_data table
unrel_disease_group = clin_data %>%
  filter(platekey %in% l_unrel) %>%
  select(platekey, famID, diseasegroup_list, is_neuro)
unrel_disease_group = unique(unrel_disease_group)
dim(unrel_disease_group)
# 55603 4

# Enrich with popu
unrel_disease_group = left_join(unrel_disease_group, batch2_genomes, by = c("platekey" = "V1"))
colnames(unrel_disease_group)[5] = "popu"
length(unique(unrel_disease_group$platekey))
# 55603

write.table(unrel_disease_group, "~/Documents/STRs/ANALYSIS/population_research/PAPER/carriers/table_55603_unrel_genomes_enriched_popu_diseasegroup.tsv",
            sep = "\t", quote = F, row.names = F, col.names = F)

unrel_disease_group %>% filter(is_neuro %in% "NotNeuro", popu %in% "AFR") %>%
  select(platekey) %>% unique() %>% pull() %>% length()
# 1295
unrel_disease_group %>% filter(is_neuro %in% "NotNeuro", popu %in% "AMR") %>%
  select(platekey) %>% unique() %>% pull() %>% length()
# 704
unrel_disease_group %>% filter(is_neuro %in% "NotNeuro", popu %in% "EUR") %>%
  select(platekey) %>% unique() %>% pull() %>% length()
# 32098
unrel_disease_group %>% filter(is_neuro %in% "NotNeuro", popu %in% "EAS") %>%
  select(platekey) %>% unique() %>% pull() %>% length()
# 314
unrel_disease_group %>% filter(is_neuro %in% "NotNeuro", popu %in% "SAS") %>%
  select(platekey) %>% unique() %>% pull() %>% length()
# 2947

# Check unrel 29 genomes expanded in HTT
# Ask Ari in which table was done this
l_htt = read.table("~/Documents/STRs/ANALYSIS/population_research/100K/carrier_freq/list_29_expanded_after_QC_unrelated.tsv",
                   stringsAsFactors = F)
l_htt = l_htt$V1
length(l_htt)
# 29

# Enrich the list of HTT unrel genomes in 100kGP with popu and clinical info
unrel_htt = clin_data %>% filter(platekey %in% l_htt)
unrel_htt = unique(unrel_htt)
dim(unrel_htt)
# 29 3

unrel_htt = left_join(unrel_htt, popu_info, by = c("platekey" = "V1"))

# Write into a table
write.table(unrel_htt, "~/Documents/STRs/ANALYSIS/population_research/100K/carrier_freq/table_29_unrel_expanded_HTT_enriched_popu_diseasegroup.tsv",
            quote = F, row.names = F, col.names = T, sep = "\t")
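The per-population counts above repeat the same `filter()`/`select()`/`unique()`/`pull()`/`length()` pattern once per ancestry group; `dplyr::count()` collapses all of them into a single call. A sketch on an invented data frame (the column names mirror `unrel_disease_group`, but the rows are made up):

```r
library(dplyr)

# Invented stand-in for unrel_disease_group (same column names, fake data):
toy <- tibble(
  platekey = c("LP1", "LP2", "LP3", "LP3", "LP4"),
  is_neuro = c("NotNeuro", "NotNeuro", "Neuro", "Neuro", "NotNeuro"),
  popu     = c("EUR", "AFR", "EUR", "EUR", "EUR")
)

# One call instead of one pipeline per population:
popu_counts <- toy %>%
  filter(is_neuro == "NotNeuro") %>%
  distinct(platekey, popu) %>%
  count(popu)

popu_counts
```

`distinct(platekey, popu)` plays the role of `unique()` on platekeys, and `count(popu)` returns one row per population, so adding a new ancestry group needs no new code.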
/population_research/reproducing_results_table1.R
no_license
kibanez/analysing_STRs
R
false
false
5,939
r
library(ape)
testtree <- read.tree("13143_0.txt")
unrooted_tr <- unroot(testtree)
write.tree(unrooted_tr, file = "13143_0_unrooted.txt")
/codeml_files/newick_trees_processed/13143_0/rinput.R
no_license
DaniBoo/cyanobacteria_project
R
false
false
137
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/search_fun.R
\name{plot_clim}
\alias{plot_clim}
\title{Plot lat/long points on a raster map.}
\usage{
plot_clim(ext_ob, clim, boundaries = "", file = "", col = "red",
  legend = TRUE, l.cex = 0.9)
}
\arguments{
\item{ext_ob}{A data.frame of climate values (i.e., extracted from the
climate raster object). MUST include columns named 'lon' and 'lat'.}

\item{clim}{A raster object of climate data (matching ext_ob)}

\item{boundaries}{A shapefile (e.g., GADM country or state outlines)}

\item{file}{If a file path is set this function will try to write the plot
as a png to that path}

\item{col}{Color of points to plot.}

\item{legend}{TRUE or FALSE to plot the legend on the map.}

\item{l.cex}{cex parameter to pass to legend function}
}
\description{
Plotting with fancy colors. Canned so you don't have to think too hard
about it.
}
\examples{
\dontrun{
data(abies);
ext.abies = extraction(abies, climondbioclim, schema='raw');
plot_clim(ext.abies, climondbioclim[[5]]);
}
}
/man/plot_clim.Rd
no_license
rsh249/vegdistmod
R
false
true
1,060
rd
#' Launch Shiny App For Package mixMVPLN
#'
#' A function that launches the shiny app for this package.
#' The shiny app permits clustering using mixtures
#' of matrix variate Poisson-log normal (MVPLN) distributions via
#' variational Gaussian approximations. Model selection
#' can be done using AIC, AIC3, BIC and ICL.
#'
#' @return No return value, but opens up a shiny page.
#'
#' @examples
#' \dontrun{
#' runmixMVPLN()
#' }
#'
#' @author {Anjali Silva, \email{anjali@alumni.uoguelph.ca}}
#'
#' @references
#' Aitchison, J. and C. H. Ho (1989). The multivariate Poisson-log normal distribution.
#' \emph{Biometrika} 76.
#'
#' Akaike, H. (1973). Information theory and an extension of the maximum likelihood
#' principle. In \emph{Second International Symposium on Information Theory}, New York, NY,
#' USA, pp. 267–281. Springer Verlag.
#'
#' Arlot, S., Brault, V., Baudry, J., Maugis, C., and Michel, B. (2016).
#' capushe: CAlibrating Penalities Using Slope HEuristics. R package version 1.1.1.
#'
#' Biernacki, C., G. Celeux, and G. Govaert (2000). Assessing a mixture model for
#' clustering with the integrated classification likelihood. \emph{IEEE Transactions
#' on Pattern Analysis and Machine Intelligence} 22.
#'
#' Bozdogan, H. (1994). Mixture-model cluster analysis using model selection criteria
#' and a new informational measure of complexity. In \emph{Proceedings of the First US/Japan
#' Conference on the Frontiers of Statistical Modeling: An Informational Approach:
#' Volume 2 Multivariate Statistical Modeling}, pp. 69–113. Dordrecht: Springer Netherlands.
#'
#' Robinson, M.D., and Oshlack, A. (2010). A scaling normalization method for differential
#' expression analysis of RNA-seq data. \emph{Genome Biology} 11, R25.
#'
#' Schwarz, G. (1978). Estimating the dimension of a model. \emph{The Annals of Statistics}
#' 6.
#'
#' Silva, A. et al. (2019). A multivariate Poisson-log normal mixture model
#' for clustering transcriptome sequencing data. \emph{BMC Bioinformatics} 20.
#' \href{https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2916-0}{Link}
#'
#' Silva, A. et al. (2018). Finite Mixtures of Matrix Variate Poisson-Log Normal Distributions
#' for Three-Way Count Data. \href{https://arxiv.org/abs/1807.08380}{arXiv preprint arXiv:1807.08380}.
#'
#' @export
#' @importFrom shiny runApp
runmixMVPLN <- function() {
  appDir <- system.file("shiny-scripts", package = "mixMVPLN")
  shiny::runApp(appDir, display.mode = "normal")
  return()
}
/R/runmixMVPLN.R
permissive
anjalisilva/mixMVPLN
R
false
false
2,522
r
% Generated by roxygen2 (4.0.1): do not edit by hand
\name{gstatusbar}
\alias{.gstatusbar}
\alias{gstatusbar}
\title{constructor to add a status bar to main window}
\usage{
gstatusbar(text = "", container = NULL, ..., toolkit = guiToolkit())

.gstatusbar(toolkit, text = "", container = NULL, ...)
}
\arguments{
\item{text}{initial status text}

\item{container}{A parent container. When a widget is created it can be
incorporated into the widget hierarchy by passing in a parent container
at construction time. (For some toolkits this is not optional, e.g.
\pkg{gWidgets2tcltk} or \pkg{gWidgets2WWW2}.)}

\item{...}{These values are passed to the \code{add} method of the parent
container, and occasionally have been used to sneak in hidden arguments
to toolkit implementations.}

\item{toolkit}{Each widget constructor is passed in the toolkit it will
use. This is typically done using the default, which will look up the
toolkit through \code{\link{guiToolkit}}.}
}
\description{
constructor to add a status bar to main window

generic for toolkit dispatch
}
\examples{
\dontrun{
w <- gwindow("Statusbar", visible=FALSE)
sb <- gstatusbar("status text", container=w)
g <- gvbox(container=w)
gbutton("update", container=g,
        handler=function(...) svalue(sb) <- "new value")
visible(w) <- TRUE
}
}
/man/gstatusbar.Rd
no_license
kghub/gWidgets2
R
false
false
1,295
rd