blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 2 327 | content_id stringlengths 40 40 | detected_licenses listlengths 0 91 | license_type stringclasses 2 values | repo_name stringlengths 5 134 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 46 values | visit_date timestamp[us]date 2016-08-02 22:44:29 2023-09-06 08:39:28 | revision_date timestamp[us]date 1977-08-08 00:00:00 2023-09-05 12:13:49 | committer_date timestamp[us]date 1977-08-08 00:00:00 2023-09-05 12:13:49 | github_id int64 19.4k 671M ⌀ | star_events_count int64 0 40k | fork_events_count int64 0 32.4k | gha_license_id stringclasses 14 values | gha_event_created_at timestamp[us]date 2012-06-21 16:39:19 2023-09-14 21:52:42 ⌀ | gha_created_at timestamp[us]date 2008-05-25 01:21:32 2023-06-28 13:19:12 ⌀ | gha_language stringclasses 60 values | src_encoding stringclasses 24 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 7 9.18M | extension stringclasses 20 values | filename stringlengths 1 141 | content stringlengths 7 9.18M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8180c0ea19d8e74071493b4281f6a0c001548bff | fbcd5cbbce3fa87402db91e51f11cb5be91c7616 | /R/alph.R | 9caf952fb5d8102265a33e092300258185df9299 | [] | no_license | bridgetmcgowan/SImulation_HW5 | 8b34cca6c2c2b5e5b96cba7da80f8414b489aac8 | 5a552310664bdef6a49eb53a85ddd66f9b31c824 | refs/heads/master | 2020-09-12T03:07:19.616198 | 2019-11-19T20:45:59 | 2019-11-19T20:45:59 | 222,281,996 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 43 | r | alph.R | alph <-
function(x){
  # note: the argument x is currently unused; one lowercase letter is
  # drawn uniformly at random regardless of the input
  sample(letters,1)
}
|
b94ee7e874d165060688ee6e0172dc4a74ce4694 | 68ab7ff66e331acb0a0721a37c0fa9da5b9b1e57 | /inst/Snow_detection_TS_PAR.R | 1b369f459cad7c3086f25b2bacd1d181f2684334 | [] | no_license | EURAC-Ecohydro/SnowSeasonAnalysis | 240bbd9c3616d4e959b51d88c5672c6ebc4717cd | 2a08475c2d79c183bcf76a13531dcd14ce2c4152 | refs/heads/master | 2021-01-02T08:16:28.539577 | 2020-11-30T22:39:07 | 2020-11-30T22:39:07 | 98,980,841 | 1 | 3 | null | null | null | null | ISO-8859-10 | R | false | false | 8,089 | r | Snow_detection_TS_PAR.R | #-------------------------------------------------------------------------------------------------------------------------------------------------------
# File Title: Snow_detection_TS_PAR.R
# TITLE: Detect snow presence combining Soil Temperature and PAR sensors
# Author: Christian Brida
# Institute for Alpine Environment
# Date: 12/04/2017
# Version: 1.0
#
#------------------------------------------------------------------------------------------------------------------------------------------------------
Sys.setenv(TZ='Etc/GMT-1')
require(zoo)
require(chron)
require(dygraphs)
#------------------------------------------------------------------------------------------------------------------------------------------------------
# Define your Git folder:
#------------------------------------------------------------------------------------------------------------------------------------------------------
# ==== INPUT 1 ====
#git_folder="C:/Users/CBrida/Desktop/Git/Upload/SnowSeasonAnalysis/"
git_folder=getwd()
# =================
# ~~~~~~ Section 1 ~~~~~~
#------------------------------------------------------------------------------------------------------------------------------------------------------
# Import data (zoo object)
#------------------------------------------------------------------------------------------------------------------------------------------------------
# Import functions to read data
source(paste(git_folder,"/R/sndet_read_data_metadata.R",sep = ""))
# Define path and file to import
path=paste(git_folder,"/data/Input_data/",sep="")
print(paste("File available:",dir(path,pattern = ".csv"))) # Show file available in folder
# ==== INPUT 2 ====
file = "B3_2000m_TOTAL.csv"
# =================
zoo_data=fun_read_data(PATH = path,FILE = file)
# ~~~~~~ Section 2 ~~~~~~
#------------------------------------------------------------------------------------------------------------------------------------------------------
# Check colnames(zoo_data) and assign the proper variables
#------------------------------------------------------------------------------------------------------------------------------------------------------
print(paste("Variable available:",colnames(zoo_data)))
# check in colnames(zoo_data) if there are:
# 1. Soil temperature (obligatory)
# 2. phar_up (obligatory)
# 3. phar_down (obligatory)
# 4. snow_height (optional)
# ==== INPUT 3-11 ====
soil_temperature="ST_CS_00" # <- obligatory
plot(zoo_data[,which(colnames(zoo_data)==soil_temperature)])
phar_up="PAR_Up" # <- obligatory
plot(zoo_data[,which(colnames(zoo_data)==phar_up)])
phar_down="PAR_Soil_LS" # <- obligatory
plot(zoo_data[,which(colnames(zoo_data)==phar_down)])
snow_height="Snow_Height" # <- optional
plot(zoo_data[,which(colnames(zoo_data)==snow_height)])
daily_mean_soil_tempeature_threshold = 3.5 # <- Default 3.5 deg C. Threshold on the daily mean of soil temperature that suggests snow presence.
daily_amplitude_soil_tempeature_threshold = 3 # <- Default 3 deg C. Threshold on the daily amplitude of soil temperature that suggests snow presence.
daily_max_ratio_parup_pardown = 0.1 # <- Default 0.1 (10%). Threshold on the ratio between the daily maximum of PAR at soil level and at 2 meters that suggests snow presence.
daily_max_pardown = 75 # <- Default 75 W/m2. Threshold on the daily maximum PAR at soil level that suggests snow presence.
SUMMER_MONTHS=c("05","06","07","08","09") # <- select summer months based on position of station (elevation) ["01"-> Jan, ... , "12"-> Dec]
# ====================
# ~~~~~~ Section 3 ~~~~~~
#------------------------------------------------------------------------------------------------------------------------------------------------------
# Extract soil temperature, par up and par down from zoo_data
#------------------------------------------------------------------------------------------------------------------------------------------------------
ST=zoo_data[,which(colnames(zoo_data)==soil_temperature)] # Soil temperature @ 0 cm (superficial)
PAR_DOWN=zoo_data[,which(colnames(zoo_data)==phar_down)] # Par_soil
PAR_UP=zoo_data[,which(colnames(zoo_data)==phar_up)] # Par_up
#------------------------------------------------------------------------------------------------------------------------------------------------------
# Snow detection using Soil temperature
#------------------------------------------------------------------------------------------------------------------------------------------------------
source(paste(git_folder,"/R/sndet_soil_temp_snow_detection.R",sep = ""))
# SOIL_TEMPERATURE = ST
# MEAN_ST_THRESHOLD = 3.5 # Suggested value: 3.5 . Units: deg C (daily mean)
# AMPLITUDE_ST_THRESHOLD = 3 # Suggested value. 3.0 . Units: deg C (daily amplitude)
snow_by_soil_temp=fun_soil_temp_snow_detection(SOIL_TEMPERATURE = ST,
MEAN_ST_THRESHOLD = daily_mean_soil_tempeature_threshold,
AMPLITUDE_ST_THRESHOLD = daily_amplitude_soil_tempeature_threshold)
#------------------------------------------------------------------------------------------------------------------------------------------------------
# Snow detection using Phar sensors (Up and soil)
#------------------------------------------------------------------------------------------------------------------------------------------------------
source(paste(git_folder,"/R/sndet_phar_snow_detection.R",sep = ""))
# PAR_UP = PAR_UP
# PAR_DOWN = PAR_DOWN
# RATIO_THRESHOLD = 0.1 # Suggested value: 0.1 . Units: abs
# PAR_SOIL_THRESHOLD = 75 # Suggested value: 75 . Units: umol/(m²s)
snow_by_phar=fun_phar_snow_detection(PAR_UP = PAR_UP,PAR_DOWN = PAR_DOWN,RATIO_THRESHOLD = daily_max_ratio_parup_pardown ,PAR_SOIL_THRESHOLD = daily_max_pardown)
#------------------------------------------------------------------------------------------------------------------------------------------------------
# Snow detection (Phar + Soil Temperarature)
#------------------------------------------------------------------------------------------------------------------------------------------------------
source(paste(git_folder,"/R/sndet_snow_detection.R",sep = ""))
snow_detect=fun_snow_detection(SOIL_TEMP_SNOW = snow_by_soil_temp, PHAR_SNOW = snow_by_phar)
# Exclude snow cover during summer
# SUMMER_MONTHS=c("05","06","07","08","09") # <- select summer length based on position of station (elevation)
snow_detect[substring(index(snow_detect),6,7) %in% SUMMER_MONTHS]=0
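# Illustrative check of the month filter above (assumption: the zoo index
# renders as "YYYY-MM-DD HH:MM" character timestamps, so characters 6-7 are
# the month field):
# substring("2017-08-15 10:00", 6, 7)   # "08" -> in SUMMER_MONTHS, forced to 0
# substring("2017-12-15 10:00", 6, 7)   # "12" -> detection kept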
# ~~~~~~ Section 4 ~~~~~~
#------------------------------------------------------------------------------------------------------------------------------------------------------
# Save output
#------------------------------------------------------------------------------------------------------------------------------------------------------
output_for_Visualize_Snow_detection_TS_PAR=list(snow_height,file,zoo_data,
                                                snow_detect,snow_by_soil_temp,snow_by_phar)
names(output_for_Visualize_Snow_detection_TS_PAR)=c("snow_height","file","zoo_data",
                                                    "Snow presence PAR + Soil Temp","Snow presence Soil Temp","Snow presence PAR")
save(output_for_Visualize_Snow_detection_TS_PAR,file = paste(git_folder,"/data/Output/Snow_Detection_RData/",substring(file,1,nchar(file)-4),".RData",sep = ""))
detection=merge(snow_detect, snow_by_phar,snow_by_soil_temp)
detection=as.data.frame(detection)
export=cbind(index(snow_detect),detection)
colnames(export)=c("TIMESTAMP","Snow presence PAR + Soil Temp", "Snow presence PAR","Snow presence Soil Temp")
write.csv(export, paste(git_folder,"/data/Output/Snow_Detection/Snow_presence_",file,sep=""),quote = F,row.names = F)
|
97dbc01ff90e3abc2a657f7aac5655ba5bf6eb8c | 35010b367173c596c680dba115c7700da4f7e252 | /plots.R | 640dce6fd7a0964e3b20c2198452c67cc6aa481c | [
"MIT"
] | permissive | stephan-t/product-categorization | e451ba6f32dc28612a139562c2b3c069cce69d79 | 5cd813d5d150f3f681ec938e83a49bf1e5a5e1bc | refs/heads/master | 2020-03-22T16:48:29.112606 | 2019-11-19T18:53:36 | 2019-11-19T18:53:36 | 140,352,740 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 600 | r | plots.R | #### Plots ####
# install.packages("ggplot2")
library(ggplot2)
# Plot confusion matrix
plot.cm <- function(cm, title = NULL){
p <-
ggplot(data = as.data.frame(cm), aes(x = Predicted, y = Actual)) +
geom_tile(aes(fill = log(Freq)), colour = "white") +
scale_fill_gradient(low = "white", high = "steelblue", na.value="white") +
geom_text(aes(x = Predicted, y = Actual, label = Freq), size = 3) +
theme(legend.position = "none") +
theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
theme(plot.title = element_text(hjust = 0.5)) +
ggtitle(title)
return(p)
}
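# Example usage (hypothetical data; `cm` is assumed to be a table/matrix whose
# dimnames are named "Actual" and "Predicted", so as.data.frame(cm) yields the
# Actual/Predicted/Freq columns that aes() above expects):
# cm <- table(Actual = c("a", "a", "b"), Predicted = c("a", "b", "b"))
# plot.cm(cm, title = "Demo confusion matrix")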
|
4d7b88e9a05e8c4cca4577fa96cd5d4d5dc6ddcc | ef1d6fa0df37fa552c4c4625e6e9cb974e8482f0 | /R/rescale.R | 0f8d0e815ac38f7b2cbd1baadd12180fdfb9c673 | [] | no_license | bhklab/genefu | 301dd37ef91867de8a759982eb9046d3057723af | 08aec9994d5ccb46383bedff0cbfde04267d9c9a | refs/heads/master | 2022-11-28T09:22:02.713737 | 2022-05-30T15:35:53 | 2022-05-30T15:35:53 | 1,321,876 | 17 | 15 | null | 2022-11-07T11:52:05 | 2011-02-02T21:06:25 | R | UTF-8 | R | false | false | 2,133 | r | rescale.R | #' @title Function to rescale values based on quantiles
#'
#' @description
#' This function rescales values x based on quantiles specified by the user
#' such that x' = (x - q1) / (q2 - q1) where q is the specified quantile,
#' q1 = q / 2, q2 = 1 - q/2) and x' are the new rescaled values.
#'
#' @usage
#' rescale(x, na.rm = FALSE, q = 0)
#'
#' @param x The `matrix` or `vector` to rescale.
#' @param na.rm TRUE if missing values should be removed, FALSE otherwise.
#' @param q Quantile (must lie in \[0,1\]).
#'
#' @details
#' In order to rescale gene expressions, q = 0.05 yielded comparable scales in
#' numerous breast cancer microarray datasets (data not shown).The rational
#' behind this is that, in general, 'extreme cases' (e.g. low and high
#' proliferation, high and low expression of ESR1, ...) are often present
#' in microarray datasets, making the estimation of 'extreme' quantiles
#' quite stable. This is specially true for genes exhibiting some
#' multi-modality like ESR1 or ERBB2.
#'
#' @return
#' A vector of rescaled values with two attributes, q1 and q2, containing
#' the values of the lower and the upper quantiles respectively.
#'
#' @seealso
#' [base::scale()]
#'
#' @examples
#' # load VDX dataset
#' data(vdxs)
#' # load NKI dataset
#' data(nkis)
#' # example of rescaling for ESR1 expression
#' par(mfrow=c(2,2))
#' hist(data.vdxs[ ,"205225_at"], xlab="205225_at", breaks=20,
#' main="ESR1 in VDX")
#' hist(data.nkis[ ,"NM_000125"], xlab="NM_000125", breaks=20,
#' main="ESR1 in NKI")
#' hist((rescale(x=data.vdxs[ ,"205225_at"], q=0.05) - 0.5) * 2,
#' xlab="205225_at", breaks=20, main="ESR1 in VDX\nrescaled")
#' hist((rescale(x=data.nkis[ ,"NM_000125"], q=0.05) - 0.5) * 2,
#' xlab="NM_000125", breaks=20, main="ESR1 in NKI\nrescaled")
#'
#' @md
#' @export
rescale <-
function(x, na.rm=FALSE, q=0) {
if(q == 0) {
ma <- max(x, na.rm=na.rm)
mi <- min(x, na.rm=na.rm)
} else {
ma <- quantile(x, probs=1-(q/2), na.rm=na.rm)
mi <- quantile(x, probs=q/2, na.rm=na.rm)
}
xx <- (x - mi) / (ma - mi)
attributes(xx) <- list("names"=names(x), "q1"=mi,"q2"=ma)
return(xx)
} |
ab6a51f41b45847bc03d9738debc7466c13ab5fc | fac145210987fe3d25dad06aebd19a33d86b9af7 | /run_analysis.R | ffcb350d6f7251850800b144b69dc8a728c956b2 | [] | no_license | JoshReedSchramm/cleaning_data_assignment_1 | fb39b866c1c81499466f9f9142d442bce11248d2 | 43f485022c4bcac4312aa45595f983d8c64b61a4 | refs/heads/master | 2021-01-10T00:53:09.031152 | 2014-04-27T03:48:38 | 2014-04-27T03:48:38 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,082 | r | run_analysis.R | getTidyXNames <- function() {
# the features file contains the filenames for the X file
x_names = read.table("./data/UCI HAR Dataset/features.txt")
x_names_list <- x_names$V2
# Clean up some of the strange formatting in that features file
x_names_list <- sub("\\()", "", x_names_list)
x_names_list <- sub("BodyBody", "Body", x_names_list)
x_names_list <- sub("fBody", "Body", x_names_list)
x_names_list
}
generateTestDataSet <- function() {
# read in each of the data files to their own data table
subject <- read.table("./data/UCI HAR Dataset/test/subject_test.txt")
x <- read.table("./data/UCI HAR Dataset/test/X_test.txt")
y <- read.table("./data/UCI HAR Dataset/test/y_test.txt")
# set names for each data table.
# the first is simply subject as specified in the data set documentation
names(subject) <- c("subject")
# Generate and assign column name list from the provided features file
names(x) <- getTidyXNames()
# again the y files only column is the activity per the documentation
names(y) <- c("activity")
# merge the independent data tables together
cbind(subject, x, y)
}
generateTrainDataSet <- function() {
# read in each of the data files to their own data table
subject <- read.table("./data/UCI HAR Dataset/train/subject_train.txt")
x <- read.table("./data/UCI HAR Dataset/train/X_train.txt")
y <- read.table("./data/UCI HAR Dataset/train/y_train.txt")
# set names for each data table.
# the first is simply subject as specified in the data set documentation
names(subject) <- c("subject")
# Generate and assign column name list from the provided features file
names(x) <- getTidyXNames()
# again the y files only column is the activity per the documentation
names(y) <- c("activity")
# merge the independent data tables together
cbind(subject, x, y)
}
appendDataSets<-function(...) {
# wrapping rbind for readibility. This makes it clear that the intent
# is the append the data sets that are passed in
rbind(...)
}
stripRelevantColumns <- function(data) {
  # get all of the columns that record a mean or standard deviation
  # (note: getTidyXNames() has already stripped the "()" from the names)
  col_name_regex = ".*-mean-X$|.*-std-X$"
  columns <- grep(col_name_regex, names(data))
  # Add back in the subject and activity type
  columns <- c(columns, c(1, length(names(data))))
  # return only those columns that are contained in the columns subset
  data[,columns]
}
generateAveragedData <- function(data) {
  # melt the data to get a break down of each metric by subject and activity
  # (requires the reshape2 package)
  meltedData <- reshape2::melt(data, id=c("subject", "activity"))
  # cast data back to a data frame by taking the average of each grouped variable
  reshape2::dcast(meltedData, subject+activity ~ variable, mean)
}
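# Minimal illustration of the melt/dcast step above on a toy frame
# (hypothetical column names, not part of the assignment data):
# toy <- data.frame(subject = c(1, 1, 2, 2),
#                   activity = rep("walking", 4),
#                   acc.mean.X = c(0.2, 0.4, 0.1, 0.3))
# melted <- reshape2::melt(toy, id = c("subject", "activity"))
# reshape2::dcast(melted, subject + activity ~ variable, mean)
# -> one row per subject/activity pair with acc.mean.X averaged (0.3 and 0.2)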
main <- function() {
test <- generateTestDataSet()
train <- generateTrainDataSet()
mergedData <- appendDataSets(test, train)
relevantData <- stripRelevantColumns(mergedData)
averagedData <- generateAveragedData(relevantData)
write.table(averagedData, file="output.txt")
}
main()
|
ee0fc6a528c12dcb04a2fb93b9ae8202cf1b4e35 | f2a4630b8fce228b534c6d240bfbd85ae65c5d3b | /Practice/oprational.R | fb9f0f07916f3a779ac6142400f22ae205780d01 | [] | no_license | Imagic33/Stats-Prog1-Markov-Text-Model | d54b0c386764a2402cca81e0fcf0ceccf94f38cf | 59be52fc5a23dbd1c0dcd2d36f0e98dee62e6741 | refs/heads/main | 2023-08-07T01:10:55.489268 | 2021-10-08T14:19:24 | 2021-10-08T14:19:24 | 409,910,905 | 1 | 0 | null | null | null | null | ISO-8859-7 | R | false | false | 2,067 | r | oprational.R | ## Group
##
## name:
##
## Id:
setwd("C:/Users/ΙςΆχΑΩ/Stats-Prog1-Markov-Text-Model/Work")
a <- scan("1581-0.txt", what="character", skip=156, encoding = "UTF-8")
n <- length(a)
a <- a[-((n-2909):n)] ## strip license
# a is the bible words
split_punct <- function(words){
  pri_punct_pos <- grep("[[:punct:]]", words, perl=TRUE)
  ## Find the words with punctuation.
  bible_ws <- rep("", length(words) + length(pri_punct_pos))
  ## Create an empty character list to be filled.
  punct_pos <- pri_punct_pos + 1:length(pri_punct_pos)
  ## Give the position for the punctuation.
  bible_ws[punct_pos] <- gsub("[[:alnum:]]", "", words[pri_punct_pos])
  ## Eliminate the other characters to leave punctuation alone.
  bible_ws[-punct_pos] <- gsub("[[:punct:]]", "", words)
  ## Eliminate the punctuation.
  return(bible_ws)
}
bible_ws <- split_punct(a)
## There is a character that influences tolower().
bible_ws_low <- tolower(bible_ws)
# Transform the words in bible into lower case.
bible_voc <- unique(bible_ws_low)
# Find all the vocabulary in bible.
word_pos <- match(bible_ws_low,bible_voc)
# Find the positions for each words.
freq <- tabulate(word_pos)
## Find the frequencies of each words.
main_voc_pos <- order(freq,decreasing=TRUE)[1:1000]
## Find the most frequent 1000 words.
main_voc <- bible_voc[main_voc_pos]
b <- main_voc
b_position <- match(bible_ws_low,b)
# Find the position in b for each word (NA if the word is not in the top 1000).
follow_pri <- b_position[-1]
follow <- append(follow_pri, b_position[1])
pair_pos <- cbind(b_position, follow)
## Drop pairs containing NA (words outside the top-1000 vocabulary).
pair_pos <- pair_pos[!is.na(rowSums(pair_pos)), ]
A <- matrix(0,1000,1000)
for (i in 1:nrow(pair_pos )) {
fir_pos <- pair_pos [i,1]
sec_pos <- pair_pos [i,2]
A[fir_pos,sec_pos] <- A[fir_pos,sec_pos] + 1
}
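# Equivalent vectorized count (illustrative alternative to the loop above):
# each pair is encoded as the single index (row - 1) * 1000 + col and
# tabulated once; the result matches the count matrix A built by the loop.
# A_alt <- matrix(tabulate((pair_pos[, 1] - 1) * 1000 + pair_pos[, 2],
#                          nbins = 1000 * 1000), 1000, 1000, byrow = TRUE)
# all(A_alt == A)  # TRUE before A is normalized below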
A <- A/rowSums(A)
Ran_sentence <- array("",c(1,100))
Ran_sentence[1] <- sample(b,1)
for (i in 1:99) { # fill positions 2..100 so the sentence stays 100 words long
Ran_sentence[i+1] <- sample(b,1,T,A[which(b==Ran_sentence[i]),])
}
cat(Ran_sentence)
|
7b60037ef5d7ffb5370dfafdb4a9db83d8755f8c | c4e2f1eaf9ae10bb5e8998c7d731606a45624e8c | /SCENIC_analyses/SCENIC_HCA_Blineage_analysis_visualization.R | 479bf209515f3ccee19d2cf55fffbc8de4037e8a | [] | no_license | systemsgenomics/ETV6-RUNX1_scRNAseq_Manuscript_2020_Analysis | c0e7483950990b9c5bf16f7f28f4b9c6acbfb31b | eb08df603f52779ffc781e71461c9175f83f8f24 | refs/heads/master | 2021-05-26T02:49:44.847600 | 2020-11-24T20:52:20 | 2020-11-24T20:52:20 | 254,022,211 | 0 | 2 | null | null | null | null | UTF-8 | R | false | false | 8,916 | r | SCENIC_HCA_Blineage_analysis_visualization.R | # Filtering and visualization of SCENIC HCA B lineage + ALL result
options(stringsAsFactors = F)
library(ComplexHeatmap)
library(dplyr)
library(ggplot2)
setwd("/research/groups/allseq/data/scRNAseq/results/juha_wrk/SCENIC/SCENIC_analysis/FINAL/HCA_only_FINAL/")
# Helper functions
## Mean across variables based label vector
regulonMeanScore <- function(data, regulons) {
df <- data.frame(t(data), regulons, check.names = F)
df <- df %>% group_by(regulons) %>% summarise_all(mean)
df <- as.data.frame(df)
rownames(df) <- df[, 1]
df[, -1]
}
## Linear model fitting through subsampling. Return
linearFitPermutation <- function(data, labels, n_cells = 600, iterations = 100) {
df <- data.frame(data, label = labels, check.names = F)
res <- do.call(cbind, lapply(1:iterations, function(i) {
set.seed(i)
df.sampled <- df %>% group_by(label) %>% sample_n(n_cells)
df.sampled <- as.data.frame(df.sampled)
labels.sampled <- df.sampled$label
df.sampled <- df.sampled[, colnames(df.sampled) != "label"]
Rsqrt <- apply(df.sampled, 2, function(x) {
dat.df <- data.frame(val = x, label = labels.sampled)
summary(lm(val~label, data = dat.df))$r.squared
})
return(Rsqrt)
}))
return(res)
}
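## Toy sketch of the R^2 criterion used above (illustrative data only):
## R-squared from lm(val ~ label) measures how much of a regulon's score
## variance the cell-type labels explain.
## toy <- data.frame(val = c(1.0, 1.1, 5.0, 5.2), label = c("a", "a", "b", "b"))
## summary(lm(val ~ label, data = toy))$r.squared   # close to 1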
## Filter regulons by score compared to TF expression level
filterRegulons <- function(aucell, gexp, celltypes, regulon_activity_threshold = 0.5, gene_expressed_threshold = 0.05, aucell.mean = F) {
gexp2 <- gexp[, gsub("[(].*", "", colnames(aucell))]
gexp2 <- (gexp2 > 0) * 1
gexp2 <- regulonMeanScore(t(gexp2), celltypes)
gexp.expressed <- gexp2 > gene_expressed_threshold
if (aucell.mean) {
aucell2 <- aucell
} else {
aucell2 <- regulonMeanScore(t(aucell), celltypes)
}
aucell2 <- apply(aucell2, 2, function(x) {
x <- x - min(x)
x <- x / max(x)
x
})
aucell.active <- aucell2 > regulon_activity_threshold
fail <- colSums(aucell.active & (! gexp.expressed)) > 0
return(aucell[, ! fail])
}
####################################################
####################################################
# Prepare data for analysis
## TF annotation from Garcia-Alonso et al. 2019 Genome Research https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6673718/
TF_classification <- readxl::read_xlsx("/research/groups/allseq/data/scRNAseq/results/juha_wrk/resources/GarciaAlonso_supplemental_table_S1.xlsx", skip = 1, na = "-")
TF_classification <- as.data.frame(TF_classification)
## TF annotation from Heinäniemi et al. 2013 Nature Methods https://www.nature.com/articles/nmeth.2445
TF_classification2 <- readxl::read_xls("/research/groups/allseq/data/scRNAseq/results/juha_wrk/resources/shmulevichTableS5.xls", skip = 2, na = "na")
TF_classification2 <- as.data.frame(TF_classification2)
mapping <- read.delim("/research/groups/biowhat_share/anno_data/uniq_homosapiens_entrez2refseq2symbol.txt", header = F)
mapping <- unique(mapping[, -2])
colnames(mapping) <- c("EntrezID", "Gene")
TF_classification2$TF <- mapping$Gene[match(TF_classification2$EntrezID, mapping$EntrezID)]
TF_classification2$Category[TF_classification2$Category == "NA"] <- NA
## SCENIC results from HCA only and HCA + ALL runs
HCA_jitter <- data.frame(data.table::fread(file = "/research/groups/allseq/data/scRNAseq/results/juha_wrk/SCENIC/HCA_jitter/aucell_final_regulons_means.csv.gz"), row.names = 1, check.names = F)
HCA_jitter.orig <- HCA_jitter
## Metadata
load("/research/groups/allseq/data/scRNAseq/results/ER_project/new_2019/data/HCA_ALL_combined.RData")
metadata$celltype[metadata$phase != "G1" & metadata$celltype == "leukemic"] <- "leukemic_cycling"
metadata$celltype2 <- metadata$celltype
metadata$celltype2[metadata$celltype == "leukemic"] <- paste0("leukemic_", metadata$batch[metadata$celltype == "leukemic"])
metadata$celltype3 <- metadata$celltype
metadata$celltype3[metadata$louvain == "HCA.4"] <- "6_preB-II_G1"
metadata$celltype4 <- metadata$celltype2
metadata$celltype4[metadata$louvain == "HCA.4"] <- "6_preB-II_G1"
metadata <- metadata[rownames(HCA_jitter), ]
metadata.orig <- metadata
## Remove I samples from data
HCA_jitter <- HCA_jitter[! metadata$batch %in% c("ALL10-d15", "ALL12-d15"), ]
X <- X[! metadata$batch %in% c("ALL10-d15", "ALL12-d15"), ]
metadata <- metadata[! metadata$batch %in% c("ALL10-d15", "ALL12-d15"), ]
## Calculate mean scores for cell types
HCA_jitter.means <- regulonMeanScore(t(HCA_jitter), metadata$celltype3)
HCA_jitter.means2 <- regulonMeanScore(t(HCA_jitter), metadata$celltype4)
## Remove immature B cells not used in scDD analysis (for linear fit)
HCA_jitter <- HCA_jitter[! metadata$louvain %in% c("HCA.1", "HCA.2", "HCA.8"), ]
X2 <- X[! metadata$louvain %in% c("HCA.1", "HCA.2", "HCA.8"), ]
metadata2 <- metadata[! metadata$louvain %in% c("HCA.1", "HCA.2", "HCA.8"), ]
## Prepare HCA only for linear fit
HCA_jitter <- HCA_jitter[grepl("MantonBM", metadata2$batch), ]
metadata.HCA <- metadata2[grepl("MantonBM", metadata2$batch), ]
HCA_jitter <- HCA_jitter[, colSums(HCA_jitter) > 0]
colnames(HCA_jitter) <- gsub("[.].*", "", colnames(HCA_jitter))
####################################################
####################################################
HCA.lm <- linearFitPermutation(HCA_jitter, metadata.HCA$celltype, iterations = 100)
HCA.TF_annotation <- TF_classification[match(gsub("[(].*", "", colnames(HCA_jitter)), TF_classification$TF), ]
HCA.TF_annotation$regulon <- colnames(HCA_jitter)
HCA.TF_annotation$regulon_type <- gsub(".*[(]|[)]", "", HCA.TF_annotation$regulon)
HCA.TF_annotation$regulon_type <- plyr::mapvalues(HCA.TF_annotation$regulon_type, c("+", "-"), c("activators", "repressors"))
HCA.TF_annotation$correct <- "False"
HCA.TF_annotation$correct[(HCA.TF_annotation$regulon_type == "activators" & HCA.TF_annotation$mode_of_regulation %in% c("activators", "dual"))
| (HCA.TF_annotation$regulon_type == "repressors" & HCA.TF_annotation$mode_of_regulation %in% c("repressors", "dual"))] <- "True"
HCA.TF_annotation$correct[is.na(HCA.TF_annotation$mode_of_regulation)] <- NA
HCA_jitter.filt <- filterRegulons(HCA_jitter.means, X, metadata$celltype3, 0.7, 0.04, aucell.mean = T)
HCA.TF_annotation$activity_filter <- plyr::mapvalues(colnames(HCA_jitter.means) %in% colnames(HCA_jitter.filt), c(T, F), c("pass", "fail"))
HCA.TF_annotation$TF_category <- TF_classification2$Category[match(gsub("[(].*", "", colnames(HCA_jitter)), TF_classification2$TF)]
HCA.TF_annotation$Rsquared_mean <- rowMeans(HCA.lm)
HCA_jitter.means.scaled <- t(scale(HCA_jitter.means))
hm_HCA.lm <- Heatmap(HCA_jitter.means.scaled[HCA.TF_annotation$Rsquared_mean > 0.5, ],
cluster_rows = T,
cluster_columns = F,
show_row_names = T,
show_column_names = T,
row_names_gp = gpar(fontsize = 4),
right_annotation = rowAnnotation("R^2" = anno_boxplot(HCA.lm[HCA.TF_annotation$Rsquared_mean > 0.5, ], outline = F, size = 1),
"regulon_type_SCENIC" = HCA.TF_annotation$regulon_type[HCA.TF_annotation$Rsquared_mean > 0.5],
"TF_type_annotated" = HCA.TF_annotation$mode_of_regulation[HCA.TF_annotation$Rsquared_mean > 0.5],
"correct_annotation" = HCA.TF_annotation$correct[HCA.TF_annotation$Rsquared_mean > 0.5],
"TF_category" = HCA.TF_annotation$TF_category[HCA.TF_annotation$Rsquared_mean > 0.5],
col = list("TF_type_annotated" = c("activators" = "#1B9E77", "repressors" = "#D95F02", "dual" = "#7570B3"),
"regulon_type_SCENIC" = c("activators" = "#1B9E77", "repressors" = "#D95F02"),
"correct_annotation" = c("True" = "#66A61E", "False" = "#E7298A"))),
show_heatmap_legend = T,
heatmap_legend_param = list(title = "row-scaled mean score"),
use_raster = F
)
filename <- "HCA_jitter_sampledLinearFit_Rsqrt_gt_05_activityFilter_with_preBII"
pdf(paste0(filename, ".pdf"), width = 6, height = 12)
draw(hm_HCA.lm)
graphics.off()
write.table(HCA.TF_annotation, paste0(filename, ".tsv"), sep = "\t", quote = F, row.names = F, col.names = T)
# Violin plots of regulons' scores
pdf(paste0(filename, "_violinPlots.pdf"), width = 7, height = 4)
# loop over the regulons shown in the heatmap (mean R^2 above 0.5)
for(i in rownames(HCA_jitter.means.scaled)[HCA.TF_annotation$Rsquared_mean > 0.5]) {
df.tmp <- data.frame(score = HCA_jitter.orig[, i], celltype = factor(metadata.orig$celltype4))
p <- ggplot(df.tmp, aes(x = celltype, y = score)) +
geom_violin() +
labs(title = i, x = NULL, y = NULL) +
theme_bw() +
theme(axis.text.x = element_text(angle = 90, hjust = 1))
plot(p)
}
graphics.off()
|
76abff2c6b323153a935f8d4c0e1d95760a46ea1 | 9b5eaa1d3af143db4f2459c9dd9f8a5fc5c2044d | /R/utils/ue02plots.R | 8a0c639b6cf15aadfdf041199191ae8a043f9b89 | [] | no_license | p-gw/stan-workshop-2017 | e93ff932b3e8798256831b7aa9a821c9d26c711b | 78a40a6aeab133bf29e220fd386351ea9d9b46c5 | refs/heads/master | 2021-08-17T06:23:38.148396 | 2017-11-20T21:06:20 | 2017-11-20T21:06:20 | 110,131,283 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 268 | r | ue02plots.R | source("examples/02_linreg.R")
png("../Präsentation/images/0204_samp.png", width = 4, height = 4, units = "in", res = 200)
print(p_sim)
dev.off()
png("../Präsentation/images/0204_quant.png", width = 4, height = 4, units = "in", res = 200)
print(p_quant)
dev.off()
|
1487680cc78327488619393354a2fca615840613 | 33a8f03c6db78807bcbb9536c2d19abaebcb36c0 | /lesson2/Lesson 2.1/016523-Hohbein.R | e1514877cc0069fb59ac708c21c914c1d02bd7e0 | [] | no_license | jamesonth/data_analysis_R | 049ddbee0fa9c989b1239c4b07a5faba9e228aa9 | f36b046d754c53b9f5f031b82e41c0e628320980 | refs/heads/master | 2022-11-17T05:53:41.604299 | 2020-07-17T10:53:27 | 2020-07-17T10:53:27 | 262,266,476 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,753 | r | 016523-Hohbein.R | ### Exercise 2.1
# Each problem is worth 1 point. No partial credit, answers
# must be complete and entirely correct. Write your solution
# below each problem. We only need your R code, not your
# results!
# Run the following lines to create vectors coins and tbbt:
coins <- c(20, 5, 50, 2, 2, 10, 10, 5, 1, 1, 5)
tbbt <- c("Leonard", "Penny", "Sheldon", "Amy",
"Raj", "Howard", "Bernadette")
# 1. Suppose you didn't know the content of coins. Create a
# vector coppers that contains all elements of coins whose
# value is below 10.
coppers <- coins[coins<10]
# 2. Write on line of code to determine how many coppers are
# fives.
sum(coppers==5)
# 3. Replace all elements of coppers whose value is below 2
# with the value zero.
coppers[coppers<2] = 0
# 4. From tbbt select the names of all characters who are
# physicists in the TV series The Big Bang Theory (hint:
# their initials are L, S, and R) and store their names in
# the vector physicists.
physicists <- tbbt[tbbt %in% c("Raj", "Leonard","Sheldon")]
# 5. Assign names to the elements of coppers. Use the vector
# tbbt as names.
names(coppers) = tbbt
# 6. Select from coppers the elements "Sheldon" and "Leonard".
# Select both values at once, not each one individually.
coppers[names(coppers) %in% c("Leonard","Sheldon")]
#or
coppers[c("Leonard","Sheldon")]
# 7. Below is an example of bad coding style. Use a different
# function to produce the same result in a shorter and more
# readable way.
c(tbbt, tbbt, tbbt, tbbt, tbbt, tbbt, tbbt)
rep(tbbt,7)
# How to submit:
# Save this file using the last six digits of your
# matriculation number and your family name as follows:
# sixdigits-familyname.R
|
7e5982f62ef9f3d687458b5767f83cb66b0ad24b | 599c2cf0ad1b158138c78b5c6c4c2804bbeb45d0 | /man/roundtohalf.Rd | c97a829703dfcf1c1718e657bceac700efdca60f | [] | no_license | tlarzg/rtemis | b12efae30483c52440cc2402383e58b66fdd9229 | ffd00189f6b703fe8ebbd161db209d8b0f9f6ab4 | refs/heads/master | 2023-07-07T20:53:43.066319 | 2021-08-27T03:42:19 | 2021-08-27T03:42:19 | 400,347,657 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 277 | rd | roundtohalf.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rtfn.R
\name{roundtohalf}
\alias{roundtohalf}
\title{Round to nearest .5}
\usage{
roundtohalf(x)
}
\arguments{
\item{x}{numeric vector}
}
\description{
Round to nearest .5
}
\author{
E.D. Gennatas
}
|
beedf856b17357f734d0c7772d52237f97ca8167 | e20dae6615dd906842fb6a7eeee3e41cb625cc51 | /R/display_profile.R | dad20c7aa1011dc8279cadb2316a5ab7902ea748 | [] | no_license | LenaCarel/nmfem | 1ba1f330569ced860425a149de5423ae14dee26a | 58b1b232277d597f0c24bfc415114a2aa30cf3f2 | refs/heads/master | 2020-05-02T22:08:32.791009 | 2019-12-01T21:39:48 | 2019-12-01T21:39:48 | 178,243,096 | 1 | 2 | null | 2019-04-01T14:02:31 | 2019-03-28T16:32:37 | R | UTF-8 | R | false | false | 2,697 | r | display_profile.R | #' Display 3D profiles
#'
#' This function display profiles of 3 dimensions (day, hour, number of observations). It has been created to display
#' profiles from the \code{nmfem} package data.
#'
#' @param profile a vector or a matrix row containing the profile to display. The day/hour data are contained in the column names.
#' @param numclient logical. Whether the first value of the row is an identifier.
#' @param color color of the display. Possibilities are the ones provided by \url{colorbrewer2.org}.
#' @param language in which language the day/hour names are written. For now, the possibilities are "en" for english and "fr" for french.
#' @param theme A theme to use. The only valid values are "" and "dark".
#' @return Creates a 3D-heatmap displayed in the Viewer tab.
#' @examples
#' display_profile(travelers[sample(nrow(travelers),1), ], numclient = TRUE)
#' @importFrom plyr mapvalues
#' @importFrom tidyr gather
#' @importFrom tidyr spread
#' @import dplyr
#' @importFrom d3heatmap d3heatmap
#' @export
#'
display_profile <- function(profile, numclient = FALSE, color = "Blues", language = "en", theme = "dark"){
heure <- NULL
numjour <- NULL
temps <- NULL
if(numclient){
profile <- profile[ ,-1] # Delete card ID
}
profile <- as.data.frame(profile)
profile <- tidyr::gather(
data = profile,
key = temps,
value = n
)
profile$jour <- substr(profile$temps, 1, nchar(profile$temps) - 2)
profile$heure <- substr(profile$temps, nchar(profile$temps) - 1, nchar(profile$temps))
profile <- profile[ ,-1]
profile <- tidyr::spread(
data = profile,
key = heure,
value = n
)
if(toupper(language) == "FR"){
profile$numjour = plyr::mapvalues(profile$jour, from = c("lundi", "mardi", "mercredi",
"jeudi", "vendredi", "samedi",
"dimanche"),
to = c(1:7)
) %>% as.numeric()
}else{
profile$numjour = plyr::mapvalues(profile$jour, from = c("Monday", "Tuesday", "Wednesday",
"Thursday", "Friday", "Saturday",
"Sunday"),
to = c(1:7)
) %>% as.numeric()
}
profile <- profile[order(profile$numjour), ]
profile <- subset(profile, select = -numjour)
profile[is.na(profile)] <- 0
rownames(profile) <- profile$jour
profile <- profile[ ,-1]
d3heatmap::d3heatmap(profile,
dendrogram = "none", colors = color,
show_grid = 3, theme = theme)
}
|
3ef9af6bf7ff56bb50631595f08ec4bc579a8be6 | 29585dff702209dd446c0ab52ceea046c58e384e | /shopifyr/R/private.R | 4b3944b969dcdd3a373c6e50b1d4ce1964957840 | [] | no_license | ingted/R-Examples | 825440ce468ce608c4d73e2af4c0a0213b81c0fe | d0917dbaf698cb8bc0789db0c3ab07453016eab9 | refs/heads/master | 2020-04-14T12:29:22.336088 | 2016-07-21T14:01:14 | 2016-07-21T14:01:14 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 8,869 | r | private.R | #
# shopifyr: An R Interface to the Shopify API
#
# Copyright (C) 2014 Charlie Friedemann cfriedem @ gmail.com
# Shopify API (c) 2006-2014 Shopify Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
########### ShopifyShop constructor ###########
#' @importFrom RCurl getCurlHandle
.initialize <- function(shopURL, password, quiet = FALSE) {
if (missing(shopURL)) stop("shopURL is required to create a ShopifyShop")
if (missing(password)) stop("password is required to create a ShopifyShop")
self$shopURL <- paste0(gsub(".myshopify.com", "", tolower(shopURL)), ".myshopify.com")
self$password <- password
# generate curl handle and header gatherer
private$.curlHandle <- getCurlHandle(cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl"),
headerfunction = .updateResponseHeaders,
writefunction = .updateResponseBody,
httpheader = c('Content-Type' = 'application/json',
'Accept' = 'application/json',
'X-Shopify-Access-Token' = password))
# fetch shop information
self$shopInfo <- try(getShop(), silent=TRUE)
    if (inherits(self$shopInfo, "try-error"))
        stop(paste("Error accessing Shopify:", attr(self$shopInfo, "condition")$message))
# show announcements if there are any
if (!isTRUE(quiet))
showAnnouncements()
}
########### ShopifyShop print method ###########
print.ShopifyShop <- function(...) {
cat("--", shopInfo$name, "Shopify API Client --\n")
cat("Site Domain:", shopInfo$domain, "\n")
cat("Shopify Domain:", shopInfo$myshopify_domain, "\n")
}
########### Private ShopifyShop member functions ###########
.params <- function(params) {
nms <- names(params)
ret <- NULL
for (i in 1:length(params)) {
if (is.null(params[[i]]))
next
if (!is.null(ret))
ret <- paste0(ret, "&")
prms <- sapply(as.character(params[[i]]), URLencode)
if (length(prms) > 1)
ret <- paste0(ret, URLencode(nms[i]), "=", paste0(prms, collapse=paste0(URLencode(nms[i]),"=")))
else
ret <- paste0(ret, URLencode(nms[i]), "=", prms)
}
ret
}
.url <- function(...) {
paste0(Filter(Negate(is.null), list(...)), collapse="/")
}
.baseUrl <- function() {
paste0("https://", shopURL, "/admin/")
}
.wrap <- function(data, name, check = "id") {
if ((length(data) != 1) || (names(data) != name)) {
ret <- list()
ret[[name]] <- data
} else ret <- data
if (is.character(check)) {
missingFields <- check[which(!check %in% names(ret[[name]]))]
if (length(missingFields) > 0)
            stop(paste(name, "missing mandatory field(s):", paste(missingFields, collapse=", ")))
}
ret
}
.fetchAll <- function(slug, name = NULL, limit = 250, ...) {
if (is.null(name)) name <- slug
fetched <- NULL
req <- 1
while (TRUE) {
result <- .request(slug, limit=limit, page=req, ...)
fetched <- c(fetched, result[[name]])
if (length(result[[name]]) < limit) break;
req <- req + 1
}
fetched
}
#' @importFrom RJSONIO toJSON
#' @importFrom RJSONIO isValidJSON
.encode <- function(data) {
if (is.list(data)) {
if (length(data) == 0)
data <- "{}" # use '{}' not '[]' which toJSON() would give for empty list
else
data <- toJSON(data, digits=20)
} else if (is.character(data)) {
if (!isValidJSON(data, asText=TRUE)) stop("data must be valid JSON")
} else {
stop("data must be of type list or character")
}
data
}
#' @importFrom RCurl postForm
#' @importFrom RCurl getURL
#' @importFrom RCurl curlPerform
.request <- function(slug, reqType = "GET",
data = NULL,
...,
parse. = TRUE,
type. = "json",
verbose = FALSE) {
# generate url and check request type
reqURL <- paste0(.baseUrl(), slug, ".", type.)
reqType <- match.arg(toupper(reqType), c("GET","POST","PUT","DELETE"))
# parse url parameters
params <- list(...)
if (!is.null(params) && length(params) > 0)
reqURL <- paste0(reqURL, "?", .params(params))
# clear response buffers
.clearResponseHeaders()
.clearResponseBody()
# send request
if (reqType %in% c("GET", "DELETE")) {
# GET or DELETE request
res <- try(curlPerform(url = reqURL,
curl = .curlHandle,
customrequest = reqType,
verbose = verbose), silent=TRUE)
} else if (reqType %in% c("POST","PUT")) {
# POST or PUT request
res <- try(curlPerform(url = reqURL,
curl = .curlHandle,
postfields = .encode(data),
post = ifelse(reqType=="POST",1L,0L),
customrequest = reqType,
verbose = verbose), silent=TRUE)
}
# check result for error
if (inherits(res, "try-error")) {
stop(paste("Curl error :", attr(res,"condition")$message))
}
# return response
.getResponseBody(parse.)
}
#' @importFrom RCurl parseHTTPHeader
.getResponseHeaders <- function(parse = TRUE) {
if (isTRUE(parse))
.parseResponseHeader(.responseHeaders)
else
.responseHeaders
}
.updateResponseHeaders <- function(str) {
private$.responseHeaders <- c(.responseHeaders, str)
nchar(str, "bytes")
}
.clearResponseHeaders <- function() {
private$.responseHeaders <- NULL
}
# the function below is a slightly modified version of RCurl::parseHttpHeader
.parseResponseHeader <- function(lines) {
if (length(lines) < 1)
return(NULL)
if (length(lines) == 1)
lines <- strsplit(lines, "\r\n")[[1]]
i <- grep("^HTTP[^_]", lines) # small fix to ensure no conflict with Shopify's HTTP_X_SHOPIFY header style
status <- lines[max(i)]
lines <- lines[seq(max(i), length(lines))]
st <- .getHeaderStatus(status)
if (st[["status"]] == 100) {
if ((length(lines) - length(grep("^[[:space:]]*$", lines))) == 1)
return(st)
}
lines <- lines[-1]
lines <- gsub("\r\n$", "", lines)
lines <- lines[lines != ""]
header <- structure(sub("[^:]+: (.*)", "\\1", lines), names = sub("([^:]+):.*", "\\1", lines))
header[["status"]] <- st[["status"]]
header[["statusMessage"]] <- st[["message"]]
header
}
# the function below is a slightly modified version of RCurl:::getStatus
.getHeaderStatus <- function(status) {
els <- strsplit(status, " ")[[1]]
list(status = as.integer(els[2]), message = paste(els[-c(1,2)], collapse = " "))
}
.getResponseBody <- function(parse = TRUE) {
if (isTRUE(parse))
.parseResponseBody(paste0(.responseBody, collapse=""))
else
paste0(.responseBody, collapse="")
}
.updateResponseBody <- function(str) {
private$.responseBody <- c(.responseBody, str)
nchar(str, "bytes")
}
.clearResponseBody <- function() {
private$.responseBody <- NULL
}
#' @importFrom RJSONIO fromJSON
.parseResponseBody <- function(response) {
if (missing(response) || is.null(response) || nchar(response) < 2)
return(NULL)
parsed <- fromJSON(response, simplify=FALSE)
if (!is.null(parsed$errors))
stop(paste(parsed$errors, collapse="; "), call.=FALSE)
parsed
}
.parseShopifyTimestamp <- function(str) {
# strings are in format like "2014-08-06T00:01:00-04:00"
# strip out last colon so %z works
as.POSIXct(gsub("^(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}-\\d{2}):(\\d{2})$", "\\1\\2", str), format="%FT%T%z")
} |
30f4819b278ef35e7814422c3e63b4a3ad8c6b6a | 8646bc1e79acfcd681cae81c66330830d6baa49e | /man/ms_2D_average_sh.Rd | df05ab53bae3daa26ed5ddd003c7f46544c2f51c | [] | no_license | Atvar2/mineCETSAapp | 2a46af5b1392dc861c1c53469dee29245bf66426 | 7f30019d46437bbb370c9172b0a0c061be719afc | refs/heads/main | 2023-06-12T20:01:25.580701 | 2021-07-07T12:01:29 | 2021-07-07T12:01:29 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 620 | rd | ms_2D_average_sh.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ms_2D_average_sh.R
\name{ms_2D_average_sh}
\alias{ms_2D_average_sh}
\title{ms_2D_average_sh}
\usage{
ms_2D_average_sh(data, savefile = TRUE)
}
\arguments{
\item{data}{dataset after calculating the relative protein abundance differences}
\item{savefile}{A logical to tell if you want to save the results or not}
}
\value{
A dataframe
}
\description{
Function to calculate the averaged signals from the IMPRTINTS-CETSA result
It is totally based on the function ms_2D_average from the mineCETSA package.
}
\seealso{
\code{\link{ms_2D_caldiff}}
}
|
65b90be4d2e4ec6038391a6d0767addd48436070 | e54da3c941297f45e1a467c9b30947c949bb35d2 | /Scripts/Fit_Model.R | bcb6e4899d9c66f89e80fab1d209d317be875004 | [] | no_license | jinhangjiang/BSAN480-FinalProject | 4f2388579547f34585a554f16d56e2bc9f432c48 | d25fc7b946ebdb7acaf887a68406b656341f3681 | refs/heads/master | 2022-07-12T09:56:04.415794 | 2020-05-14T00:11:51 | 2020-05-14T00:11:51 | 261,912,811 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 8,757 | r | Fit_Model.R | movie<-read.csv("cleandata.csv")
a<-movie
a[,c("TITLE","RELEASE_DATE_TXT","PRODUCTION_COMPANIES",
"DIRECTOR_NAME","DIRECTOR_GENDER","ACTOR1_NAME",
"ACTOR1_GENDER","ACTOR2_NAME","ACTOR2_GENDER"
)]<-list(NULL)
write.csv(a, file = "modeldata.csv")
### average rating prediction
a1<-a
a1[,c("BUDGET","REVENUE","VOTE_COUNT","VOTE_AVERAGE",
"ML_RCOUNT","ML_RATING","PERFORMANCE", "RELEVANCE_Mean","X")]<-list(NULL)
a1<-na.omit(a1)
set.seed(1)
index<-sample(nrow(a1), 0.8*nrow(a1))
a1.train<-a1[index,-1]
a1.test<-a1[-index,-1]
null<-lm(a1.train$AVERAGERATING~1, data = a1.train)
full<-lm(a1.train$AVERAGERATING~., data = a1.train)
step(null, list(lower = null, upper = full), direction = c("both"))
fit<-lm(formula = a1.train$AVERAGERATING ~ STARTYEAR +
documentary + drama + horror + RELEVANCE_Mean_1 + sci.fi +
action + RUNTIME + NO_GENRE + NUMVOTESlog + Actor1_Sex +
budget_log + animation + NUMVOTES + REVENUElog + western +
adventure + film.noir + children + mystery + fantasy + romance,
data = a1.train)
summary(fit)
library(MASS)
a1.train$stres=stdres(fit)
subset(a1.train, a1.train$stres>3)
subset(a1.train, a1.train$stres< -3)
a1.train.subset<-subset(a1.train, a1.train$stres<3 & a1.train$stres> -3)
fit<-lm(formula = AVERAGERATING ~ STARTYEAR +
documentary + drama + horror + RELEVANCE_Mean_1 + sci.fi +
action + RUNTIME + NO_GENRE + NUMVOTESlog + Actor1_Sex +
budget_log + animation + NUMVOTES + REVENUElog + western +
adventure + film.noir,
data = a1.train.subset)
summary(fit)
a1.train.subset$stres=stdres(fit)
subset(a1.train.subset, a1.train.subset$stres>3)
subset(a1.train.subset, a1.train.subset$stres< -3)
a1.train.subset2<-subset(a1.train.subset, a1.train.subset$stres<3 & a1.train.subset$stres> -3)
fit<-lm(formula = AVERAGERATING ~ STARTYEAR +
documentary + drama + horror + RELEVANCE_Mean_1 + sci.fi +
action + RUNTIME + NO_GENRE + NUMVOTESlog + Actor1_Sex +
budget_log + animation + NUMVOTES + REVENUElog + western +
adventure + film.noir,
data = a1.train.subset2)
summary(fit)
a1.train.subset2$stres=stdres(fit)
subset(a1.train.subset2, a1.train.subset2$stres>3)
subset(a1.train.subset2, a1.train.subset2$stres< -3)
a1.train.subset3<-subset(a1.train.subset2, a1.train.subset2$stres<3 & a1.train.subset2$stres> -3)
fit<-lm(formula = AVERAGERATING ~ STARTYEAR +
documentary + drama + horror + RELEVANCE_Mean_1 + sci.fi +
action + RUNTIME + NO_GENRE + NUMVOTESlog + Actor1_Sex +
budget_log + animation + NUMVOTES + REVENUElog + western +
adventure + film.noir,
data = a1.train.subset3)
summary(fit)
a1.train.subset3$stres=stdres(fit)
subset(a1.train.subset3, a1.train.subset3$stres>3)
subset(a1.train.subset3, a1.train.subset3$stres< -3)
a1.train.subset4<-subset(a1.train.subset3, a1.train.subset3$stres<3 & a1.train.subset3$stres> -3)
fit<-lm(formula = AVERAGERATING ~ STARTYEAR +
documentary + drama + horror + RELEVANCE_Mean_1 + sci.fi +
action + RUNTIME + NO_GENRE + NUMVOTESlog + Actor1_Sex +
budget_log + animation + NUMVOTES + REVENUElog + western +
adventure,
data = a1.train.subset4)
summary(fit)
a1.train.subset4$stres=stdres(fit)
subset(a1.train.subset4, a1.train.subset4$stres>3)
subset(a1.train.subset4, a1.train.subset4$stres< -3)
a1.train.subset5<-subset(a1.train.subset4, a1.train.subset4$stres<3 & a1.train.subset4$stres> -3)
fit<-lm(formula = AVERAGERATING ~ STARTYEAR +
documentary + drama + horror + RELEVANCE_Mean_1 + sci.fi +
action + RUNTIME + NO_GENRE + NUMVOTESlog + Actor1_Sex +
budget_log + animation + NUMVOTES + REVENUElog + western +
adventure,
data = a1.train.subset5)
summary(fit)
a1.train.subset5$stres=stdres(fit)
subset(a1.train.subset5, a1.train.subset5$stres>3)
subset(a1.train.subset5, a1.train.subset5$stres< -3)
a1.train.subset6<-subset(a1.train.subset5, a1.train.subset5$stres<3 & a1.train.subset5$stres> -3)
fit<-lm(formula = AVERAGERATING ~ STARTYEAR +
documentary + drama + horror + RELEVANCE_Mean_1 + sci.fi +
action + RUNTIME + NO_GENRE + NUMVOTESlog + Actor1_Sex +
budget_log + animation + NUMVOTES + REVENUElog + western +
adventure,
data = a1.train.subset6)
summary(fit)
plot(fit$res~fit$fitted)
hist(fit$res)
qqnorm(fit$res)
qqline(fit$res)
AIC(fit)
pred1<-predict(fit, newdata = a1.test)
MSPE1<-mean((a1.test$AVERAGERATING-pred1)^2)
MSPE1
#### use performance to predict
a2<-a
a2[,c("BUDGET","REVENUE","VOTE_COUNT","VOTE_AVERAGE",
"ML_RCOUNT","ML_RATING","AVERAGERATING", "RELEVANCE_MEAN")]<-list(NULL)
a2<-na.omit(a2)
set.seed(2)
index<-sample(nrow(a2), 0.8*nrow(a2))
a2.train<-a2[index,-1]
a2.test<-a2[-index,-1]
null<-glm(a2.train$PERFORMANCE~1, data = a2.train, family = binomial)
full<-glm(a2.train$PERFORMANCE~., data = a2.train, family = binomial)
step(null, list(lower = null, upper = full), direction = c("both"))
fit2<-glm(formula = a2.train$PERFORMANCE ~ RELEVANCE_Mean_1 + documentary +
ML_RCOUNTlog + STARTYEAR + drama + sci.fi + Actor1_Sex +
RUNTIME + action + horror + NUMVOTESlog + REVENUElog + NO_GENRE +
budget_log + film.noir + thriller +
animation + adventure + romance + crime, family = binomial, data = a2.train)
summary(fit2)
hist(fit2$res)
library(ROCR)
pred2<-prediction(predict(fit2, newdata = a2.test, type = "response"), a2.test$PERFORMANCE)
perf2<- performance(pred2, "tpr","fpr")
plot(perf2, colorize = TRUE, main = "ROC curve for model 2")
unlist(slot(performance(pred2, "auc"),"y.values"))
#get the MR
pcut<-mean(a2$PERFORMANCE)
pred2.test<-predict(fit2, newdata = a2.test, type = "response")
class1.test<-(pred2.test>pcut)*1
confmat<-table(a2.test$PERFORMANCE, class1.test, dnn = c("TRUE","PRED"))
#get mr
MR<-mean(class1.test!=a2.test$PERFORMANCE)
MR
# FNR
FNR<- prop.table(confmat)[2,1]
FNR
# FPR
FPR<- prop.table(confmat)[1,2]
FPR
## optimal cutoff
cost<- function(pcut, weightFP,weightFN, true01, pred.prob){
class<-(pred.prob>pcut)*1
FP<- sum(class==1 & true01==0)
FN<- sum(class==0 & true01==1)
totalcost<- weightFP*FP+weightFN*FN
return(totalcost)
}
pcut.seq<- seq(0.1, 0.99, by=0.1)
totcost <- NULL
for (i in 1:length(pcut.seq)) {
totcost[i]<-cost(pcut = pcut.seq[i], weightFP = 1, weightFN = 1, true01 = a2.test$PERFORMANCE,
pred.prob = class1.test)
}
pcut.seq[which.min(totcost)] # cutoff with the lowest total cost
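# Optional visual check: plot the total cost against each candidate
# cutoff to see where the minimum of the scan above lies.
plot(pcut.seq, totcost, type = "b", xlab = "cutoff", ylab = "total cost")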
pcut<-0.2
pred2.test<- predict(fit2, newdata = a2.test, type = "response")
class2.test<-(pred2.test>pcut)*1
confmat2<-table(a2.test$PERFORMANCE, class2.test, dnn = c("True","Pred"))
confmat2
#get mr
MR<-mean(class2.test!=a2.test$PERFORMANCE)
MR
# FNR
FNR<- prop.table(confmat2)[2,1]
FNR
# FPR
FPR<- prop.table(confmat2)[1,2]
FPR
a2[,c("RELEVANCE_Mean")]<-NULL
a2<-na.omit(a2)
library(boot)
auc<-function(obs, pred){
pred <- prediction(pred, obs)
unlist(slot(performance(pred, "auc"), "y.values"))
}
fit3<-glm(formula = PERFORMANCE ~ RELEVANCE_Mean_1 + documentary +
ML_RCOUNTlog + STARTYEAR + drama + sci.fi + Actor1_Sex +
RUNTIME + action + horror + NUMVOTESlog + REVENUElog + NO_GENRE +
budget_log + film.noir + thriller +
animation + adventure + romance + crime -1, family = binomial, data = a2)
cv1<-cv.glm(data = a2, glmfit = fit3, cost = auc, K=10)
cv1$delta[2]
### find ROI
movie<-read.csv("modeldata.csv")
movie[,c("X","RELEVANCE_Mean")]<-NULL
a<-movie[movie$STARTYEAR>=1990,]
a<-a[a$REVENUE!=0,]
a<-a[a$BUDGET!=0,]
a$ROI<-((a$REVENUE)-(a$BUDGET))/a$BUDGET
cor(a$ROI, a$REVENUE, method = "pearson")
set.seed(2000)
index<-sample(nrow(a), 0.8*nrow(a))
a3<-na.omit(a)
a3.train <-a3[index,-1]
a3.test <-a3[-index,-1]
null<-lm(ROI~1, data = a3.train)
full<-lm(ROI~., data = a3.train)
step(null, list(lower = null, upper = full), direction = c("both"))
fit4<-lm(formula = a3.train$ROI ~ documentary +
log(BUDGET) + Actor1_Sex + log(REVENUE), data = a3.train)
summary(fit4)
library(MASS)
a3.train$stres=stdres(fit4)
subset(a3.train, a3.train$stres>3)
subset(a3.train, a3.train$stres< -3)
a3.train.subset<-subset(a3.train, a3.train$stres<3 & a3.train$stres> -3)
fit4<-lm(formula = ROI ~ documentary + NUMVOTESlog +
BUDGET + REVENUE + musical + VOTE_COUNT , data = a3.train.subset)
summary(fit4)
plot(fit4$res~fit4$fitted)
hist(fit4$res)
AIC(fit4)
pred4<-predict(fit4, newdata = a3.test)
MSPE4<-mean((a3.test$ROI-pred4)^2)
MSPE4
|
13e2fe66460d76c7298d08f1797724bf5bcf1a98 | 834ad515de5f16fdf0759a4d4aab2d8a96b2fc2c | /run_analysis.R | 856dad647d11db33f83dd906a0976d9e2b4de984 | [] | no_license | adempewolff/GettingData-CourseProject | 087635f76b1df2b19d5bc4ecc589f089148c6dec | 0924a658e3340c57f2cacac8da3661aef1e32a12 | refs/heads/master | 2016-08-12T07:58:23.290296 | 2016-04-16T09:41:23 | 2016-04-16T09:41:23 | 55,300,314 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,549 | r | run_analysis.R | ## Script to download, import and tidy the "Human Activity Recognition using
## Smartphones" dataset. Creates a table which contains the averages for a
## number of variables by activity and subject. Please see the readme file
## and codebook for more information.
##
## Data Source:
## Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra and Jorge L. Reyes
## -Ortiz. Human Activity Recognition on Smartphones using a Multiclass
## Hardware-Friendly Support Vector Machine. International Workshop of Ambient
## Assisted Living (IWAAL 2012). Vitoria-Gasteiz, Spain. Dec 2012
##
## Script written by Austin Dempewolff (adempewolff@gmail.com) as part of the
## Coursera "Getting Data" course's final project.
## Load required packages and throw error if not installed.
required <- c("dplyr", "tidyr")
success <- sapply(required, require, character.only = TRUE)
if(!all(success)) {
stop(paste("Please ensure the following package is installed:",
required[!success], "\n", sep = " "))
}
rm(required, success)
## Download data if not found locally
path <- file.path('data', 'UCI_HAR_Dataset.zip')
fileurl <- 'https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip'
if(!dir.exists('data')) { dir.create('data') }
if(!file.exists(path)) {
print('Downloading data, this may take some time...')
download.file(fileurl, destfile = path)
print('Success')
} else { print('Zipped data found locally') }
rm(fileurl)
## Unzip to temporary directory if not already done
temp <- tempdir()
if(!dir.exists(file.path(temp, 'UCI HAR Dataset'))) {
print('Unzipping data...')
unzip(path, exdir = temp)
print('Success')
} else {
print('Using existing unzipped files')
}
rm(path)
## Create required filepaths and store in 'filepaths' (list)
print('Getting filepaths...')
source('filepaths.R')
rm(temp, filepath)
print('Success')
## Define function to read in and merge for test or train
readin <- function(x) {
files <- c('x', 'y', 'subject')
files <- paste(files, x, sep = '')
output <- lapply(files, function(x) {
print(paste('Reading in ', x, '...', sep = ''))
tmp <- read.table(filepaths[[x]])
print('Success')
tmp
})
print('Merging tables...')
output <- data.frame(output[[3]], output[[2]], output[[1]])
names(output)[1:2] <- c('subject', 'activity')
print('Success')
output
}
# Read in and label data
testdata <- readin('test')
traindata <- readin('train')
rm(readin)
## combine test & train datasets then wrap in tbl
print('Combining test and train datasets...')
fulldata <- rbind.data.frame(testdata, traindata)
rm(testdata, traindata)
fulldata <- tbl_df(fulldata)
print('Success')
## Change subject & activity to factors and label accordingly
print('Creating factors and labeling...')
activitylabels <- read.table(filepaths$activitylabels)
activitylabels <- tolower(activitylabels$V2)
activitylabels <- sub('_', ' ', activitylabels)
fulldata <- mutate(fulldata, subject = factor(subject),
activity = factor(activity, labels = activitylabels))
rm(activitylabels)
## Get variable names, fix duplicate variables (the bandsEnergy columns are all
## missing xyz axis-labels), and label dataset
varnames <- read.table(filepaths$features)
varnames <- tolower(varnames$V2)
xyzseq <- rep(c('-x','-y','-z'), each = 14, times = 3)
counter <- 1
for(i in seq_along(varnames)) {
if(grepl('bandsenergy()', varnames[i])) {
varnames[i] <- sub('$', xyzseq[counter], varnames[i])
counter <- counter + 1
}
}
varnames <- append(c('subject', 'activity'), varnames)
names(fulldata) <- varnames
rm(counter, i, xyzseq, varnames, filepaths)
print('Success')
## Select desired variables and save to new table
print('Selecting variables...')
meanstd <- select(fulldata, subject, activity, contains('-mean()'),
contains('-std()'))
print('Success')
## Group by activity and summarize
print('Grouping by activity/subject and summarizing...')
meanstd <- group_by(meanstd, subject, activity) %>% summarize_each(funs(mean))
print('Success')
## Make narrow - gather columns 3-68 into a column called "variable"
## and convert to factor variable
tidymeanstd <- gather(meanstd, "variable", "mean", 3:68) %>%
mutate(variable = factor(variable))
## Export to text file
print('Writing results to HAR_meanstd_avg.txt...')
write.table(tidymeanstd, file = 'HAR_meanstd_avg.txt', row.names = FALSE)
print('Success')
|
5e580916916c97a3dd62f90e4a2e487b7ce20e98 | 5d5cfc024e6e4ba5cfe5ee840f4c529f41f6f73f | /man/autism.Rd | 8b8c497f836db80a50c4ca38faf1601edd29930d | [] | no_license | cran/WWGbook | debe9b238ad5705c7f173dc13096c8a4892c4fdc | 10804f79a0a1f4d442595272fd707e073428a964 | refs/heads/master | 2022-03-04T21:39:10.847128 | 2022-03-01T23:10:29 | 2022-03-01T23:10:29 | 17,694,099 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,409 | rd | autism.Rd | \name{autism}
\alias{autism}
\docType{data}
\title{autism data in Chapter 6}
\description{
The data comes from researchers at the University of Michigan as part of a prospective longitudinal study of 214 children.
}
\usage{data(autism)}
\format{
A data frame with 612 observations on the following 4 variables.
\describe{
\item{age}{: Age in years (2, 3, 5, 9, 13); the time variable}
\item{vsae}{: Vineland Socialization Age Equivalent: parent-reported socialization, the dependent variable measured at each age}
\item{sicdegp}{: Sequenced Inventory of Communication Development Expressive Group: categorized expressive language score at age 2 years (1 = Low, 2 = Medium, 3 = High)}
\item{childid}{: Unique child identifier}
}
}
\references{
Oti, R., Anderson, D., Risi, S., Pickles, A. & Lord, C., Social Trajectories Among Individuals with Autism Spectrum Disorders,
Developmental Psychopathology (under review), 2006.
West, B., Welch, K. & Galecki, A, Linear Mixed Models: A Practical Guide Using Statistical Software,
Chapman Hall / CRC Press, first edition, 2006.
}
\examples{
attach(autism)
sicdegp.f <- factor(sicdegp)
age.f <- factor(age)
detach(autism)
# Add the new variables to a new data frame object.
autism.updated <- data.frame(autism, sicdegp.f, age.f)
dim(autism.updated)
names(autism.updated)
}
\keyword{datasets}
|
7c4404c9e82d92b652623075afbda95301d8ad5e | 999b24cc5fa218e80af339f4952e6a62d561f781 | /Roulette and CLT.R | f7d61f595b411456e473cb43850857a3d8f347bf | [] | no_license | ShivKumar95/Learning-R | 81d0fef098a724ef9be6c2ba0d75aa1cd02f3841 | ea36c1d9453e3a645470eaf50c794f042963e085 | refs/heads/master | 2022-07-17T02:18:59.493185 | 2020-05-21T06:00:23 | 2020-05-21T06:00:23 | 261,987,271 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,706 | r | Roulette and CLT.R | # sample modeling casio roulette table reds and blacks
# # monte carlo
# sample model 1: defines a lot and then sample
library(dplyr)
library(tidyverse)
color <- rep(c("Black","Red","Green"),c(18,18,2))
n <- 1000 # no. of reps
X <- sample(ifelse(color == "Red", -1,1),n, replace = TRUE)
# if the sample drawn is red we lose, if not we win
# sample model 2: defines a lot inside the sample function by noting probabilities
S <- sum(X) # winnings is the sum of draws
S
# using mote carlo simulation
n <- 1000 # no. of roulette plays
B <- 10000 # no of monte carlo trials
S <- replicate(B, {x <- sample(c(-1,1),n,replace = TRUE, prob = c(9/19,10/19))
sum(x)})
mean(S<0) # probability of casino losing money
# We can plot a histogram for normal density curve to observe values
s <- seq(min(S), max(S), length = 100) # sequence of 100 values across range of S
normal_density <- data.frame(s = s, f = dnorm(s, mean(S), sd(S))) # generate normal density for S
data.frame (S = S) %>% # make data frame of S for histogram
ggplot(aes(S, ..density..)) +
geom_histogram(color = "black", binwidth = 10) +
ylab("Probability") +
geom_line(data = normal_density, mapping = aes(s, f), color = "blue")
# we observe that the probability distribution os sum of these draws is normal
# we can use central limit theorem
# These equations apply to the case where there are only two outcomes, a and b with proportions p and 1???p respectively. The general principles
# above also apply to random variables with more than two outcomes.
## here in this case we have -1 or 1 as the possible outcomes with 1 having more chance
# can be defined as ap + b(1-p)
# expected value or mu (mean)
a <- -1
b <- 1
p <- 9/19
p
1-p
mu <- a*p + b*(1-p)
mu
# standard deviation by central limit theory
se <- abs(1--1) * sqrt(p*(1-p))
se
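# Illustrative helper: the two-outcome formulas above (E = ap + b(1-p),
# SE = |b-a|*sqrt(p(1-p))) wrapped in a function; the name is made up here.
two_outcome_stats <- function(a, b, p) {
  list(mean = a * p + b * (1 - p),
       se   = abs(b - a) * sqrt(p * (1 - p)))
}
two_outcome_stats(a = -1, b = 1, p = 9/19) # same mu and se as computed above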
# still this doesn't answer the expected range
# if 1000 people play the roulette the expected income
n <- 1000
mu*n
# about $52
# to find range
sqrt(n)*se
# about 32
# therefore the casino can expect to earn roughly $21 - $84 (mean +/- one SE)
# roulette game betting on green
# prize $17 when you win $-1 when you lose
green <- 2
red <- 18
black <- 18
p_green <- green / (green + red + black)
p_not_green <- 1 - p_green
# No. of games played
n <- 1000
# Sample outcomes
X <- sample(c(17,-1),n,replace = TRUE, prob = c(p_green,p_not_green))
# sum of all 1000 outcomes
S <- sum(X)
a <- 17
b <- -1
# expected mean
mu <- a*(p_green) + b*(p_not_green)
# expected money in 1000 rolls
n*mu
# standard error
se <- abs(17--1)*sqrt(p_green*p_not_green)
# range of errors
sqrt(n)*se
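# Hypothetical extension: use the CLT normal approximation to estimate the
# chance that total winnings over 1000 green bets come out negative.
# New names below avoid clobbering the script's n, mu, and se.
n_bets <- 1000
p_g <- 2 / 38
mu_g <- 17 * p_g + (-1) * (1 - p_g)                   # expected winnings per bet
se_g <- abs(17 - (-1)) * sqrt(p_g * (1 - p_g))        # SE per bet
pnorm(0, mean = n_bets * mu_g, sd = sqrt(n_bets) * se_g) # P(S < 0)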
|
5b7f23e27388a4fdc3c8224dc34bac8e8f642792 | db8d5421d2f4bdde84eff4f816a27d931dd27b1a | /dREG/man/genomic_data_model.Rd | 656777a24dd584ee02a62ca37f6c75997383c029 | [] | no_license | omsai/dREG | 2c6ac5552e99751634884ea86aa8a736c26b5de0 | ab6dc2f383772deb67f0c445c80e650cc054e762 | refs/heads/master | 2023-08-16T04:24:40.797588 | 2021-10-11T10:36:45 | 2021-10-11T10:36:45 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 938 | rd | genomic_data_model.Rd | \name{genomic_data_model}
\alias{genomic_data_model}
\title{
Builds a multiscale window object for feature extraction.
}
\description{
Builds a multiscale window object that defines the genomic windows used for
feature extraction around each observed position.
}
\usage{
genomic_data_model(window_sizes, half_nWindows)
}
\arguments{
\item{window_sizes}{vector of integer, indicating the genomic size (bp) for each window.}
  \item{half_nWindows}{vector of integer, specifying the window count for each window size above. Because the windows are extended on both sides of an observed position, this number gives the count for one side (left or right). }
}
\details{
The total number of features, including plus and minus strands, is sum(half_nWindows)*2 sides * 2 strands.\cr
The covered region is max(window_sizes)*half_nWindows[which.max(window_sizes)]*2 bps.
}
\value{
An S4 object including two attributes.
}
\examples{
gdm <- genomic_data_model(window_sizes = c(10, 25, 50, 500, 5000),
half_nWindows= c(10, 10, 30, 20, 20) );
}
|
360258c946becabd35efa01119878d4ddf170bda | e85c4f93c01f1c3a58469e73022a64fc6f053126 | /man/createPs.Rd | 3e642bcaa16c22be419a27530edea739b342806a | [
"Apache-2.0"
] | permissive | zuoyizhang/CohortMethod | 9a65958e7a252cfb3eaddad359964996c228281a | 1617ea573d3eb7482b47beac0ad701f222ca0c6d | refs/heads/master | 2020-12-03T09:17:46.295284 | 2015-03-27T04:53:12 | 2015-03-27T04:53:12 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,603 | rd | createPs.Rd | % Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/PsFunctions.R
\name{createPs}
\alias{createPs}
\title{Create propensity scores}
\usage{
createPs(cohortData, checkSorting = TRUE, outcomeConceptId = NULL,
excludeCovariateIds = NULL, prior = createPrior("laplace", exclude = c(0),
useCrossValidation = TRUE), control = createControl(noiseLevel = "silent",
cvType = "auto", startingVariance = 0.1))
}
\arguments{
\item{cohortData}{An object of type \code{cohortData} as generated using \code{getDbCohortData}.}
\item{checkSorting}{Checks if the covariate data is sorted by rowId (necessary for fitting the model).
Checking can be very time-consuming if the data is already sorted.}
\item{outcomeConceptId}{The concept ID of the outcome. Persons marked for removal for the outcome will be removed prior to
creating the propensity score model.}
\item{excludeCovariateIds}{Exclude these covariates from the propensity model.}
\item{prior}{The prior used to fit the model. See \code{\link[Cyclops]{createPrior}} for details.}
\item{control}{The control object used to control the cross-validation used to determine the
hyperparameters of the prior (if applicable). See \code{\link[Cyclops]{createControl}} for details.}
}
\description{
\code{createPs} creates propensity scores using a regularized logistic regression.
}
\details{
\code{createPs} creates propensity scores using a regularized logistic regression.
}
\examples{
data(cohortDataSimulationProfile)
cohortData <- simulateCohortData(cohortDataSimulationProfile, n=1000)
ps <- createPs(cohortData)
}
|
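Building on the example in the man page above, the `prior` argument can be customised; a hedged sketch (the parameter choices are illustrative assumptions, not recommendations, and assume the `Cyclops::createPrior` interface referenced above):

```r
library(CohortMethod)

data(cohortDataSimulationProfile)
cohortData <- simulateCohortData(cohortDataSimulationProfile, n = 1000)

# Fit the propensity model with a Laplace prior at a fixed variance,
# skipping the cross-validated hyperparameter search.
ps <- createPs(cohortData,
               prior = createPrior("laplace",
                                   variance = 1,
                                   useCrossValidation = FALSE))
```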
679a08976055c70f77fc7a096582bda7d555c367 | f32dbf645fa99d7348210951818da2275f9c3602 | /R/getjul.R | 2638287f6a378da83c7b29dec95f74189bedf33c | [] | no_license | cran/RSEIS | 68f9b760cde47cb5dc40f52c71f302cf43c56286 | 877a512c8d450ab381de51bbb405da4507e19227 | refs/heads/master | 2023-08-25T02:13:28.165769 | 2023-08-19T12:32:32 | 2023-08-19T14:30:39 | 17,713,884 | 2 | 4 | null | null | null | null | UTF-8 | R | false | false | 142 | r | getjul.R | `getjul` <-
function(year, month, day)
{
    # Julian day number of January 1st of the given year
    jstart = tojul(year, 1, 1);
    # day of year: offset of the given date from January 1st (1-based)
    jul = tojul(year, month, day)-jstart+1;
return(jul)
}
|
0518f09d57f16f01e237a522db09273aa0b413a2 | 7b74f00cd80694634e6925067aaeb6572b09aef8 | /2020/Assignment-2020/Individual/FE8828-Xiang ZiShun/Assignment 4/week-4-infection.R | 85fe8cc7d7e10b29f75aa00e1aee7d3bde5f6964 | [] | no_license | leafyoung/fe8828 | 64c3c52f1587a8e55ef404e8cedacbb28dd10f3f | ccd569c1caed8baae8680731d4ff89699405b0f9 | refs/heads/master | 2023-01-13T00:08:13.213027 | 2020-11-08T14:08:10 | 2020-11-08T14:08:10 | 107,782,106 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,150 | r | week-4-infection.R | library(shiny)
library(dplyr)
library(ggplot2)
ui <- fluidPage(
titlePanel("Infection Test Simulation"),
hr(),
sidebarLayout(
sidebarPanel(
sliderInput("infectionRate",
"Infection Rate (%):",
min = 0,
max = 100,
value = 25),
width = 3
),
mainPanel(
plotOutput("chart")
)
)
)
server <- function(input, output) {
output$chart <- renderPlot({
    # build a 500-dot grid (20 x 25) via a cartesian join on the constant 'color' key
    df_sensi <- full_join(
      tibble(x = 1:20, color = 'Actual Neg'),
      tibble(y = 1:25, color = 'Actual Neg'),
      by = 'color')
    # Worked example at a 5% infection rate (sensitivity = specificity = 95%):
    # Positive 500 * 5%  =  25 = Actual Pos (24 at 95%) + False Neg (1 at 5%)
    # Negative 500 * 95% = 475 = Actual Neg (451 at 95%) + False Pos (24 at 5%)
rate = input$infectionRate * 0.01
actPos = round(rate * 500 * 0.95,0)
falNeg = round(rate * 500 * 0.05,0)
actNeg = round((1-rate) * 500 * 0.95,0)
falPos = round((1-rate) * 500 * 0.05,0)
df_sensi['color'] <- c(rep('False Neg', falNeg),
rep('Actual Pos', actPos),
rep('False Pos', falPos),
rep('Actual Neg', 500 - falNeg - actPos - falPos))
ggplot(df_sensi) +
geom_point(aes(x, y,colour = color),
size = 5, shape="circle") +
theme_classic() +
theme(axis.title.x=element_blank(), axis.title.y=element_blank(),
axis.line=element_blank(),axis.text.x=element_blank(),
axis.text.y=element_blank(),axis.ticks=element_blank()) +
scale_color_manual(values = c("#6bcab6", "#ef5675", "#003f5c", "#ffa600"),
limits = c("False Neg", "Actual Pos", "False Pos", "Actual Neg")) +
coord_flip() +
scale_x_reverse()
})
}
shinyApp(ui = ui, server = server)
|
d94837d9482aae4402e4242838ee0dc7a993ffb6 | 045291797b6127d2dabb0030915cf756bd3018e0 | /man/goalbar.Rd | 2f68c69b6471a6db7669e1102b8511329e2cf830 | [] | no_license | Deerluluolivia/mapvizieR | e15b626931109847b38a76cbc41c6d7b01f52c6a | b71e1302d88cfec1d719b8ef3fef3233a009ac9d | refs/heads/master | 2020-03-22T05:51:51.504251 | 2018-05-10T18:13:02 | 2018-05-10T18:13:02 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 2,069 | rd | goalbar.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/goalbar.R
\name{goalbar}
\alias{goalbar}
\title{goal_bar}
\usage{
goalbar(mapvizieR_obj, studentids, measurementscale, start_fws,
start_academic_year, end_fws, end_academic_year,
goal_labels = c(accel_growth = "Made Accel Growth", typ_growth =
"Made Typ Growth", positive_but_not_typ = "Below Typ Growth", negative_growth
= "Negative Growth", no_start = sprintf("Untested: \%s \%s", start_fws,
start_academic_year), no_end = sprintf("Untested: \%s \%s", end_fws,
end_academic_year)), goal_colors = c("#CC00FFFF", "#0066FFFF", "#CCFF00FF",
"#FF0000FF", "#FFFFFF", "#F0FFFF"), ontrack_prorater = NA,
ontrack_fws = NA, ontrack_academic_year = NA,
ontrack_labels = c(ontrack_accel = "On Track for Accel Growth", ontrack_typ
= "On Track for Typ Growth", offtrack_typ = "Off Track for Typ Growth"),
ontrack_colors = c("#CC00FFFF", "#0066FFFF", "#CCFF00FF"),
complete_obsv = FALSE)
}
\arguments{
\item{mapvizieR_obj}{mapvizieR object}
\item{studentids}{target students}
\item{measurementscale}{target subject}
\item{start_fws}{starting season}
\item{start_academic_year}{starting academic year}
\item{end_fws}{ending season}
\item{end_academic_year}{ending academic year}
\item{goal_labels}{what labels to show for each goal category. must be in order from
highest to lowest.}
\item{goal_colors}{what colors to show for each goal category}
\item{ontrack_prorater}{default is NA. if set to a decimal value, what percent of the goal
is considered ontrack?}
\item{ontrack_fws}{season to use for determining ontrack status}
\item{ontrack_academic_year}{year to use for determining ontrack status}
\item{ontrack_labels}{what labels to use for the 3 ontrack statuses}
\item{ontrack_colors}{what colors to use for the 3 ontrack colors}
\item{complete_obsv}{if TRUE, limit only to students who have BOTH a start
and end score. default is FALSE.}
}
\description{
a simple bar chart that shows the percentage of a cohort at different goal
states (met / didn't meet)
}
|
01dc166e642d2e64f2f8ba3ffbf5ed12cf43013c | dfe75cad25358f75eac7d04f43362d7efe10334c | /man/gamma_t.Rd | ea6a64157353dfc255fe2c34f3f334de7fc1824d | [] | no_license | edinburgh-seismicity-hub/ETAS.inlabru | bba8dc65a6ad62245e8c767fdb2cb2884ffad68b | 9aed08199ad64dd1f53a13e07efc2490cb12866d | refs/heads/main | 2023-08-09T19:00:23.791431 | 2023-07-24T16:01:19 | 2023-07-24T16:01:27 | 564,802,885 | 0 | 2 | null | null | null | null | UTF-8 | R | false | true | 951 | rd | gamma_t.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/CopulaTransformations.R
\name{gamma_t}
\alias{gamma_t}
\title{Copula transformation from a standard Normal distribution to a Gamma distribution}
\usage{
gamma_t(x, a, b)
}
\arguments{
\item{x}{values from a standard Normal distribution, \code{vector}.}
\item{a}{shape parameter of the gamma distribution \code{scalar}.}
\item{b}{rate parameter of the gamma distribution \code{scalar}.}
}
\value{
values from a Gamma distribution with shape \code{a} and rate \code{b}, \code{vector} same length as \code{x}.
}
\description{
Copula transformation from a standard Normal distribution to a Gamma distribution
}
\seealso{
Other copula-transformations:
\code{\link{exp_t}()},
\code{\link{inv_exp_t}()},
\code{\link{inv_gamma_t}()},
\code{\link{inv_loggaus_t}()},
\code{\link{inv_unif_t}()},
\code{\link{loggaus_t}()},
\code{\link{unif_t}()}
}
\concept{copula-transformations}
|
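The transform documented above is the classic probability integral transform; a minimal sketch of how such a copula transformation is typically written (an assumption about the approach, not the package source):

```r
# Map standard-normal values to Gamma(shape = a, rate = b): pnorm() sends x
# to a uniform value, and the Gamma quantile function qgamma() sends that
# uniform value to the target distribution.
gamma_t_sketch <- function(x, a, b) qgamma(pnorm(x), shape = a, rate = b)

gamma_t_sketch(0, a = 2, b = 1)  # x = 0 maps to the median of Gamma(2, 1)
```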
3d7d4f88045f41653addd7d334d568e2d499a84c | 7217693dc00b148a48c6503f6fe4ec1d478f52e8 | /meta_flipfix/meta_imp_flip_check.R | 4ed290c3a769887e00c569b1d97a3446c52a5461 | [] | no_license | Eugene77777777/biomarkers | 8ac6250e1726c9233b43b393f42076b573a2e854 | 9e8dc2876f8e6785b509e0fce30f6e215421f45b | refs/heads/master | 2023-07-25T10:52:30.343209 | 2021-09-08T18:12:45 | 2021-09-08T18:12:45 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,682 | r | meta_imp_flip_check.R | fullargs <- commandArgs(trailingOnly=FALSE)
args <- commandArgs(trailingOnly=TRUE)
script.name <- normalizePath(sub("--file=", "", fullargs[grep("--file=", fullargs)]))
suppressPackageStartupMessages(require(tidyverse))
suppressPackageStartupMessages(require(data.table))
##############
meta_sumstats_f <- args[1]
gwas_f <- args[2]
meta_check_img <- args[3]
# pvar_f <- '@@@@@@/users/ytanigaw/repos/rivas-lab/public-resources/uk_biobank/biomarkers/meta_flipfix/imp_ref_alt_check/ukb_imp_v3.mac1.flipcheck.tsv.gz'
pvar_f <- '/scratch/users/ytanigaw/ukb_imp_v3.mac1.flipcheck.tsv.gz'
# meta_sumstats_f <- '@@@@@@/projects/biomarkers/meta/plink_imputed/filtered/GLOBAL_Alanine_aminotransferase.sumstats.tsv.gz '
# meta_flipfixed_f <- '@@@@@@/projects/biomarkers/meta/plink_imputed/filtered_flipfixed/GLOBAL_Alanine_aminotransferase.sumstats.tsv'
# meta_flipfixed_img <- '@@@@@@/projects/biomarkers/meta/plink_imputed/filtered_flipfixed/GLOBAL_Alanine_aminotransferase.check.png'
# gwas_f <- '@@@@@@/projects/biomarkers/sumstats_diverse/white_british/plink_imputed/filtered/INT_Alanine_aminotransferase_all.glm.linear.filtered.maf001.info03.tsv.gz'
# pvar_f <- '@@@@@@/users/ytanigaw/repos/rivas-lab/public-resources/uk_biobank/biomarkers/meta_flipfix/imp_ref_alt_check/ukb_imp_v3.mac1.flipcheck.tsv.gz'
##############
add_flip_annotation <- function(df, pvar_df){
pvar_df %>%
rename('MarkerName' = 'ID') %>%
right_join(df, by='MarkerName') %>%
mutate(
REF_is_FASTA_REF = (toupper(REF) == FASTA_REF),
REF_is_FASTA_ALT = (toupper(REF) == FASTA_ALT),
ALT_is_FASTA_REF = (toupper(ALT) == FASTA_REF),
ALT_is_FASTA_ALT = (toupper(ALT) == FASTA_ALT),
is_not_flipped = (REF_is_FASTA_REF & ALT_is_FASTA_ALT),
is_flipped = (REF_is_FASTA_ALT & ALT_is_FASTA_REF)
)
}
flip_check_plot <- function(df, gwas_df, titlelab, xlab){
df %>%
rename('P' = 'P-value') %>%
filter(P < 5e-8) %>%
select(MarkerName, Effect, is_flipped) %>%
inner_join(gwas_df %>% select(ID, BETA) %>% rename('MarkerName' = 'ID'), by='MarkerName') %>%
drop_na() %>%
ggplot(aes(x=Effect, y=BETA, color=is_flipped)) +
geom_point(alpha=0.05) +
labs(
title = titlelab,
x = xlab,
y = 'BETA from WB GWAS sumstats'
)+
guides(colour = guide_legend(override.aes = list(alpha = 1)))
}
read_ref_alt_def <- function(pvar_f){
fread(cmd=paste0('zcat ', pvar_f)) %>%
mutate(FASTA_ALT = if_else(toupper(REF) == toupper(FASTA_REF), ALT, REF)) %>%
select(-REF, -ALT) %>%
mutate(FASTA_REF = toupper(FASTA_REF), FASTA_ALT = toupper(FASTA_ALT)) %>%
rename('CHROM' = '#CHROM')
}
meta_imp_flipcheck <- function(meta_sumstats_f, meta_check_img, gwas_f, pvar_f){
# read files
pvar_df <- read_ref_alt_def(pvar_f)
meta_sumstats_df <- fread(cmd=paste0('zcat ', meta_sumstats_f))
gwas_df <- fread(cmd=paste0('zcat ', gwas_f))
# check flip
joined_annotated_df <- meta_sumstats_df %>% add_flip_annotation(pvar_df)
n_flips <- joined_annotated_df %>% select(is_flipped) %>%
drop_na() %>% pull() %>% sum()
print(paste0('The number of allele flips: ', n_flips))
# plot agains WB
p <- joined_annotated_df %>% flip_check_plot(
gwas_df,
titlelab = str_replace_all(
basename(meta_sumstats_f),
'^GLOBAL_|.sumstats.tsv$', ''
),
xlab = 'Effect column from META file'
)
ggsave(file=meta_check_img, width=6, height=6, p)
}
##############
meta_imp_flipcheck(meta_sumstats_f, meta_check_img, gwas_f, pvar_f)
|
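A tiny worked example of the flip logic inside `add_flip_annotation()`: an allele pair is flagged as flipped when its REF matches the FASTA ALT while its ALT matches the FASTA REF (the data values below are made up for illustration):

```r
library(dplyr)

toy <- tibble(
  REF = c("A", "G"), ALT = c("G", "A"),
  FASTA_REF = c("A", "A"), FASTA_ALT = c("G", "G")
)
toy %>% mutate(
  is_not_flipped = (REF == FASTA_REF & ALT == FASTA_ALT),
  is_flipped     = (REF == FASTA_ALT & ALT == FASTA_REF)
)
# row 1: not flipped; row 2: flipped
```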
3867c774aa3dd8a4bd5eb2aadf0b07e8fd2d5101 | 43a533fff871b1b8efe95397150cc31282f4c03e | /R/PlotInteractingZscore.R | c6c6470a664bb869e1c981967fa76706b8d4b8b3 | [] | no_license | polyak-lab/matchedDCISIDC | 9982c65b4355b399552930d834eab3b13d171a8a | 8ca3695420cd56ec703c3163905802f136dc867f | refs/heads/master | 2023-06-03T00:18:39.865124 | 2021-06-19T07:06:11 | 2021-06-19T07:06:11 | 254,945,118 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 785 | r | PlotInteractingZscore.R | PlotInteractingZscore=function(Fracfound, DistMat, title="test", cols=palcols){
  # stack the observed fractions (Fracfound) beneath the permutation null (DistMat);
  # row 1001 below assumes DistMat holds 1000 permutation rows
  Rx1=rbind(DistMat, Fracfound)
  Rx2=scale(Rx1)
  ## calculate a two-sided p value from the observed z-scores (the appended row):
  pvals=2*pnorm(-abs(Rx2[1001, ]))
px1=melt(Rx2[-1001, ])
px2=melt(Rx2[1001,])
px2$Var2=rownames(px2)
g1=ggplot(px1, aes(x=Var2, y=value, fill=Var2, col=Var2)) + geom_violin()+
stat_summary(geom="point", shape=23, color="black", size=2)+
annotate("text", label = round(pvals*100)/100, size = 3, x = px2$Var2, y = 4)+
stat_summary(data=px2, geom="point", shape=5, col="red", size=2)+labs(y="z-score", x="cell type")+
ggtitle(title)+
theme_bw()+theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())+
scale_fill_manual(values=cols)+scale_color_manual(values=cols)
g1
} |
becb95d82ca372841dcef8b40b4ffcd0baf507e4 | 553992ae66d19695a240b2c8df4357b09f99bb69 | /SAMR2014/PCA/3_GoingFurther_Discriminant.R | f4e8a107948b07683104da6c75da932b0fc50fb1 | [] | no_license | Alfiew/Workshops | 839ec14d5c4b95cd39474044e9bdb2946d2dece9 | 4ac40823e13ed285bcabc44eb4449d4d1be4cd05 | refs/heads/master | 2023-04-19T05:48:15.096172 | 2021-04-27T01:12:07 | 2021-04-27T01:12:07 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,702 | r | 3_GoingFurther_Discriminant.R | ### This file will be updated periodically between now (whenever you are seeing this)
### and April 2nd (the day before the workshop!)
#####
### If you download this file now, please be sure to check back for updates
### as the conference approaches.
### TO NOTE: Anything preceded by at # is a 'comment'
# that means it is something ignored by R
# which means comments are usually something like this --
# informative statements used for later reference, such as
# reminding yourself what you did and why!
##Going further: between group (discriminant) analyses.
library(TInPosition) ##we need the two-table versions of ExPosition and InPosition: TExPosition and TInPosition
rm(list=ls()) #clean the workspace
load('clean.ps.data.rda')
load('clean.ps.design.rda')
## The between-groups version of PCA: Barycentric Discriminant Analysis (BADA)
### This is the data we analyze:
bada.group.averages <- apply(ps.sim.design, 1, "/", colSums(ps.sim.design)) %*% as.matrix(ps.sim.data)
##just for reference... tepBADA does this for us!
## This is fixed effects, and therefore boring!
#bada.ps.res <- tepBADA(ps.sim.data,DESIGN=ps.sim.design,make_design_nominal=FALSE)
##Let's cut to chase.
bada.ps.res <- tepBADA.inference.battery(ps.sim.data,DESIGN=ps.sim.design,make_design_nominal=FALSE)
## The output is just like that with InPosition and ExPosition with a few critical additions:
bada.ps.res$Fixed.Data$TExPosition.Data$assign #but, this is available via the $Inference.Data, too:
## clasification accuracy
bada.ps.res$Inference.Data$loo.data
bada.ps.res$Inference.Data$loo.data$fixed.confuse
bada.ps.res$Inference.Data$loo.data$loo.confuse
## omnibus test
bada.ps.res$Inference.Data$omni$p.val
## r2 test:
bada.ps.res$Inference.Data$r2$r2
bada.ps.res$Inference.Data$r2$p.val
## The between-groups version of MCA: Discriminant Correspondence Analysis (DICA)
### This is the data we analyze:
dica.group.averages <- t(ps.sim.design) %*% makeNominalData(ps.sim.data)
##just for reference... tepDICA does this for us!
dica.ps.res <- tepDICA.inference.battery(ps.sim.data,make_data_nominal=TRUE,DESIGN=ps.sim.design,make_design_nominal=FALSE)
dica.ps.res$Fixed.Data$TExPosition.Data$assign #but, this is available via the $Inference.Data, too:
## clasification accuracy
dica.ps.res$Inference.Data$loo.data
dica.ps.res$Inference.Data$loo.data$fixed.confuse
dica.ps.res$Inference.Data$loo.data$loo.confuse
## omnibus test
dica.ps.res$Inference.Data$omni$p.val
## r2 test:
dica.ps.res$Inference.Data$r2$r2
dica.ps.res$Inference.Data$r2$p.val
##quick comparison:
bada.ps.res$Inference.Data$loo.data$loo.confuse
dica.ps.res$Inference.Data$loo.data$loo.confuse ##the winner! |
305dc2672fcc046eafb17ac7b5bcf7b82f32a0c7 | cd891e1b17592fb1a61083fea71eab07be384e7c | /man/pobierz_zrownywanie.Rd | 5c16cb0cdf6c53c95c0a9e0b594491bedb4a17fa | [
"MIT"
] | permissive | zozlak/PWEdane | 06de162ab2a6aed828f09058a6fb1b0f3d2c4374 | 8d8ded8c5f670965654e6bc5d3d1355122218627 | refs/heads/master | 2021-01-19T07:47:08.276634 | 2015-04-29T08:38:12 | 2015-04-29T08:38:12 | 32,443,387 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 953 | rd | pobierz_zrownywanie.Rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/pobierz_zrownywanie.R
\name{pobierz_zrownywanie}
\alias{pobierz_zrownywanie}
\title{Retrieves the results of equating tests from the indicated year together with contextual data}
\usage{
pobierz_zrownywanie(rodzajEgzaminu, rok, punktuj = TRUE,
idSkali = NA_integer_, skroc = TRUE)
}
\arguments{
\item{rodzajEgzaminu}{the type of exam whose results are to be retrieved}
\item{rok}{the year from which the data are to be retrieved}
\item{punktuj}{whether the data should be retrieved as distractors or as points}
\item{idSkali}{the identifier of the scale to be applied to the data}
\item{skroc}{whether to apply the scale shortenings described in the scale to the data}
}
\description{
If the \code{idSkali} parameter is not provided, it will be replaced with a
default value. If retrieving the default value fails, an appropriate
message will be displayed.
}
|
7fb8376eeebccbe5d7f3db2d88bbcc2be9370ddf | 9aafde089eb3d8bba05aec912e61fbd9fb84bd49 | /codeml_files/newick_trees_processed/12063_0/rinput.R | 82a4bf627362a452eaaf73cdd1d8c8291c9fe632 | [] | no_license | DaniBoo/cyanobacteria_project | 6a816bb0ccf285842b61bfd3612c176f5877a1fb | be08ff723284b0c38f9c758d3e250c664bbfbf3b | refs/heads/master | 2021-01-25T05:28:00.686474 | 2013-03-23T15:09:39 | 2013-03-23T15:09:39 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 137 | r | rinput.R | library(ape)
testtree <- read.tree("12063_0.txt")
unrooted_tr <- unroot(testtree)
write.tree(unrooted_tr, file="12063_0_unrooted.txt") |
01b2b0cea4654993d6d5410c487741d89af1dcf3 | 13d6506e07f15a4ce179d40b24b96d152fdc2414 | /Code/boston.R | 08068743823097e48944218665043202ef4b1673 | [] | no_license | adb8165/stat139_final_project | adeb2dbb4a9636f64351af75cce11cefda3d689e | 2dbccaab4282b14adbf650ff459d70005c033731 | refs/heads/master | 2021-05-31T09:28:15.695898 | 2016-05-04T20:39:54 | 2016-05-04T20:39:54 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,646 | r | boston.R | bostondata = read.csv("/Users/nick/Desktop/boston/Property_Assessment_2014.csv",header = T)
library(ggplot2)
library(nortest)
library(car)
library(boot)
library(rpart)
library(rpart.plot)
library(e1071)
library(parallel)
library(randomForest)
library(neuralnet)
############## cleanse data ###############################
## if one variable has as many NA values as more than 50%, then drop it
## drop the irrelevent and meaningless variables such as ID, address,ect..
## AV_TOTAL is the target variable and 7 input variables is left.
## AV_TOTAL>50000000 is regarded as outliers
attach(bostondata)
mydata=data.frame(AV_TOTAL,LU,OWN_OCC,GROSS_AREA,NUM_FLOORS,LAND_SF,YR_BUILT,LIVING_AREA)
detach(bostondata)
fit=lm(AV_TOTAL~GROSS_AREA+NUM_FLOORS+LAND_SF+YR_BUILT+LIVING_AREA,data=mydata)
summary(fit)
vif(fit)
##GROSS_AREA and LIVING_AREA are highly correlated, so drop GROSS_AREA or LIVING_AREA
summary(mydata)
mydata$AV_TOTAL[mydata$AV_TOTAL==0|mydata$AV_TOTAL> 50000000] = NA
mydata$LU = factor(mydata$LU,levels=c("R1","R2","R3","R4","RL","A","RC","CM","CD","CP","CC","AH",
"C","CL","I","E","EA"))
mydata$OWN_OCC = factor(mydata$OWN_OCC, levels = c("Y","N"))
mydata$GROSS_AREA[mydata$GROSS_AREA == 0] = NA
mydata$NUM_FLOORS[mydata$NUM_FLOORS > 100 | mydata$NUM_FLOORS < 1] = NA
mydata$LAND_SF[mydata$LAND_SF == 0| mydata$LAND_SF > 10000000] = NA
mydata$YR_BUILT[mydata$YR_BUILT < 1000 | mydata$YR_BUILT == 0 | mydata$YR_BUILT>2014] =NA
mydata_clean = na.omit(mydata)
mydata_clean$LU=factor(mydata_clean$LU)
###################### divide data #######################
##divide data into train set(70%) and test set(30%)
set.seed(123)
divide = sample(1:2, dim(mydata_clean)[1], replace = T, prob = c(0.7, 0.3))
trainset = mydata_clean[divide == 1,]
testset = mydata_clean[divide == 2,]
##################### cost function ######################
RMSE = function(a,b){
sqrt(mean((a-b)^2))
}
##########################################################
####################### modeling #########################
##########################################################
summary(trainset)
## histogram of the target
ggplot(trainset, aes(x=AV_TOTAL,y=..density..)) + geom_histogram(binwidth=10000) + xlim(0,2000000)
#################### linear model ######################
lm = lm(AV_TOTAL~.,data=trainset)
summary(lm)
## residuals diagnose
ggdatalm = data.frame(residuals=lm$residuals,fitted=lm$fitted.values)
ggplot(ggdatalm,aes(x=fitted,y=residuals))+geom_point(alpha=0.1,size=4)
# heteroskedasticity
ggplot(ggdatalm,aes(x=fitted,y=residuals))+geom_point(alpha=0.1,size=4)+
xlim(-1000000,10000000)+ylim(-10000000,10000000)
ad.test(lm$residuals)
leveragePlots(lm)
ncvTest(lm)
vif(lm)
prelm = predict(lm, testset[,2:7])
qplot(prelm,testset[,1])
RMSE(prelm,testset[,1])
################ generalized linear model ################
## Target variable do not has a normal but a fat-tailed distribution,
## Gamma regression is more suitable.
glm = glm(AV_TOTAL~.,data=trainset,family=Gamma(link="log"),maxit = 100)
summary(glm)
## residuals plot
ggdataglm = data.frame(residuals=glm$residuals,fitted=log(glm$fitted.values))  # log of fitted = linear predictor under the log link
ggplot(ggdataglm,aes(x=fitted,y=residuals))+geom_point(alpha=0.1,size=4)
## cross validation
cvglm = cv.glm(trainset,glm,cost=RMSE,k=10)
cvglm$delta
preglm = predict(glm, testset[,2:7], type = "response")  # response-scale predictions (the default type = "link" would return log-scale values)
RMSE(preglm,testset[,1])
####################### decision tree ###########################
rpart = rpart(AV_TOTAL~.,data=trainset)
rpart.plot(rpart)
printcp(rpart)
prerpart = predict(rpart,testset[,2:7])
RMSE(prerpart,testset[,1])
####################### SVM ##################################
svm = svm(AV_TOTAL~.,data=trainset, type="nu-regression")
# tunesvm=tune.svm(AV_TOTAL~.,data=trainset, cost = 2^(0:4)) too slow
# parallel computing
parasvm = function(cost){
library(e1071)
svm = svm(AV_TOTAL~.,data=trainset, type="nu-regression", cost=cost)
presvm = predict(svm,testset[,2:7])
RMSE(presvm,testset[,1])
}
cl = makeCluster(4) #set CPU cores cluster
clusterExport(cl,c("trainset","testset","RMSE"))
results = clusterApply(cl,2^(0:3), parasvm)
results
stopCluster(cl)  # release the CPU cores cluster
###################### random forest ###########################
rf = randomForest(AV_TOTAL~.,data=trainset)
prerf = predict(rf,testset[,2:7])
RMSE(prerf,testset[,1])
##################### neural network ###########################
n = names(trainset)
f = as.formula(paste("AV_TOTAL ~", paste(n[!n %in% "AV_TOTAL"], collapse = " + ")))
nnet = neuralnet(f,data = trainset, hidden = c(10,5), linear.output = T)
prenn = compute(nnet,testset[,2:7])$net.result
RMSE(prenn,testset[,1])
|
22658fc945a2f94b6d51e1fcc13118a44276b9fe | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/stats19/examples/police_boundaries.Rd.R | 577553d84ec6eb9d2e799bf9f8bfec5b6601df70 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 298 | r | police_boundaries.Rd.R | library(stats19)
### Name: police_boundaries
### Title: Police force boundaries in England (2016)
### Aliases: police_boundaries
### Keywords: datasets
### ** Examples
nrow(police_boundaries)
police_boundaries[police_boundaries$pfa16nm == "West Yorkshire", ]
sf:::plot.sf(police_boundaries)
|
3811bd54b771bc34167a6977416357b803441a9a | 4e1e7ec51f5501e301e8e5f0300ef0608545cfe1 | /man/clip_path.Rd | 9d39daf908295664dfa60dec0ce31b52920ff78e | [
"MIT"
] | permissive | jorritvm/jrutils | 8f3d4f97e6e536d51ce9fca171004cd36a9c3c02 | ac56a5dedca6de96ace2a15b16e0e7e7339cc14d | refs/heads/master | 2023-02-23T15:46:43.963572 | 2023-02-06T15:36:54 | 2023-02-06T15:36:54 | 245,815,281 | 3 | 0 | MIT | 2022-08-12T09:12:04 | 2020-03-08T13:02:11 | R | UTF-8 | R | false | true | 306 | rd | clip_path.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lib_io.R
\name{clip_path}
\alias{clip_path}
\title{replaces backslashes with forward slashes in a path on the clipboard}
\usage{
clip_path()
}
\description{
replaces backslashes with forward slashes in a path on the clipboard
}
|
a3e1b66ded2d91b5cc6ca06ce82f114d456115d8 | e102753a5ab7eedd58d04c26254a8e29859f9e98 | /scRNA-seq Result Reproduction/RunCDC.R | 77fc9c9c66a746506181f20a645b3fbaf338bf4d | [
"Apache-2.0"
] | permissive | ZPGuiGroupWhu/ClusteringDirectionCentrality | ec166456938602921574ad24d7445804c2226961 | 084a18fc40cc19dee2aa91c757009d4abb2e86f9 | refs/heads/master | 2023-04-13T00:07:42.437629 | 2023-03-11T11:48:27 | 2023-03-11T11:48:27 | 467,575,519 | 72 | 7 | null | 2022-10-26T12:10:05 | 2022-03-08T15:46:14 | MATLAB | UTF-8 | R | false | false | 5,486 | r | RunCDC.R | library(ggplot2)
library(gridExtra)
RunCDC <- function(dataname,mode){
if (mode=="All"){
if (dataname=="pbmc3k"){
CDC_k = seq(30,50,10)
CDC_ratio = seq(0.55, 0.65, 0.01)
}
else if(dataname=="SCINA"){
CDC_k = seq(30,50,10)
CDC_ratio = seq(0.70, 0.80, 0.01)
}
else if(dataname=="AMB"){
CDC_k = seq(30,50,10)
CDC_ratio = seq(0.95, 0.99, 0.005)
}
else{
CDC_k = seq(30,50,10)
CDC_ratio = seq(0.85, 0.99, 0.01)
}
}
else if (mode=="Best"){
if (dataname=="Baron-Human"){
CDC_k = 30
CDC_ratio = 0.88
}
else if(dataname=="Baron-Mouse"){
CDC_k = 50
CDC_ratio = 0.95
}
else if(dataname=="Muraro"){
CDC_k = 30
CDC_ratio = 0.94
}
else if(dataname=="Segerstolpe"){
CDC_k = 40
CDC_ratio = 0.90
}
else if(dataname=="Xin"){
CDC_k = 30
CDC_ratio = 0.97
}
else if(dataname=="AMB"){
CDC_k = 40
CDC_ratio = 0.99
}
else if(dataname=="ALM"){
CDC_k = 40
CDC_ratio = 0.96
}
else if(dataname=="VISp"){
CDC_k = 50
CDC_ratio = 0.93
}
else if(dataname=="TM"){
CDC_k = 30
CDC_ratio = 0.97
}
else if(dataname=="WT_R1"){
CDC_k = 50
CDC_ratio = 0.96
}
else if(dataname=="WT_R2"){
CDC_k = 40
CDC_ratio = 0.98
}
else if(dataname=="NdpKO_R1"){
CDC_k = 30
CDC_ratio = 0.85
}
else if(dataname=="NdpKO_R2"){
CDC_k = 50
CDC_ratio = 0.98
}
else if(dataname=="pbmc3k"){
CDC_k = 40
CDC_ratio = 0.62
}
else if(dataname=="SCINA"){
CDC_k = 50
CDC_ratio = 0.80
}
}
else {
cat("Please select one mode to reproduce our results !")
}
umap_mat <- read.csv(paste('Seurat_UMAP_Results/',dataname,'.csv',sep = ""))
dat_mat <- as.matrix(umap_mat[,3:ncol(umap_mat)-1])
ref <- umap_mat$labels
tsne_mat <- read.csv(paste('Seurat_UMAP_Results/tsne_plot/',dataname,'.csv',sep = ""))
## Cluster the cells using CDC algorithm
## --Arguments--
## k: k of KNN (Default: 30, Recommended: 30~50)
## ratio: percentile ratio of internal points (Default: 0.9, Recommended: 0.75~0.95, 0.55~0.65 for pbmc3k)
source('CDC.R')
UMAP_Dim <- 2
cdc_res <- data.frame()
t1 <- proc.time()
for(knn in CDC_k){
for(int_ratio in CDC_ratio){
res <- CDC(dat_mat, k = knn, ratio = int_ratio)
ARI <- mclust::adjustedRandIndex(res, ref)
cat(paste0('ARI = ', sprintf("%0.4f", ARI),' (n_components = ',UMAP_Dim,', k = ',knn,', ratio = ', sprintf("%0.3f", int_ratio),')'),'\n')
tmp_ari <- data.frame(Neighbors=knn, Ratio=int_ratio, ARI=ARI, Parameters=paste0(knn,',',int_ratio))
cdc_res <- rbind(cdc_res, tmp_ari)
}
}
t2 <- proc.time()
T1 <- t2-t1
max_id <- which(cdc_res[,3]==max(cdc_res[,3]))[1]
max_K <- cdc_res[max_id,1]
max_ratio <- cdc_res[max_id,2]
max_res <- CDC(dat_mat, k = max_K, ratio = max_ratio)
cat('-------------------------------------------------','\n')
cat(paste0('The number of times CDC ran: ', length(CDC_k)*length(CDC_ratio)),'\n')
cat(paste0('Overall runtime of CDC: ',sprintf("%0.3f",T1[3][[1]]),'s'),'\n')
cat(paste0('Average runtime of CDC: ',sprintf("%0.3f",T1[3][[1]]/(length(CDC_k)*length(CDC_ratio))),'s'),'\n')
cat('-------------------------------------------------','\n')
cat(paste0('Average ARI = ', sprintf("%0.4f",mean(cdc_res[,3])),' (n_components = ',UMAP_Dim,', k = ',min(CDC_k),'~',max(CDC_k),', ratio = ',min(CDC_ratio),'~',max(CDC_ratio),')'),'\n')
cat(paste0('Max ARI = ', sprintf("%0.4f",max(cdc_res[,3])),' (n_components = ',UMAP_Dim,', k = ',max_K,', ratio = ',max_ratio,')'),'\n')
g1<-ggplot(data=cdc_res, aes(x=Parameters, y=ARI, group=1))+
geom_line(color="#1E90FF",size=1.3)+
geom_point(color="red",size=3)+
theme(legend.position="none")+
ggtitle("Accuracy Curve")+
theme(plot.title=element_text(hjust=0.5, face='bold'),axis.text.x = element_text(angle = 45, hjust=1))
if(dataname=="pbmc3k"||dataname=="SCINA"||dataname=="WT_R1"||dataname=="WT_R2"||dataname=="NdpKO_R1"||dataname=="NdpKO_R2"){
g2<-ggplot(data=umap_mat, aes(x=UMAP_1, y=UMAP_2, color=ref))+
geom_point(size=1)+
theme(legend.position="none")+
ggtitle("Ground Truth")+
theme(plot.title=element_text(hjust=0.5, face='bold'))
g3<-ggplot(data=umap_mat, aes(x=UMAP_1, y=UMAP_2, color=as.character(max_res)))+
geom_point(size=1)+
theme(legend.position="none")+
ggtitle(paste0("Best CDC Result",' (ARI = ',sprintf("%0.4f",max(cdc_res[,3])),')'))+
theme(plot.title=element_text(hjust=0.5, face='bold'))
}else{
g2<-ggplot(data=tsne_mat, aes(x=tSNE_1, y=tSNE_2, color=ref))+
geom_point(size=1)+
theme(legend.position="none")+
ggtitle("Ground Truth")+
theme(plot.title=element_text(hjust=0.5, face='bold'))
g3<-ggplot(data=tsne_mat, aes(x=tSNE_1, y=tSNE_2, color=as.character(max_res)))+
geom_point(size=1)+
theme(legend.position="none")+
ggtitle(paste0("Best CDC Result",' (ARI = ',sprintf("%0.4f",max(cdc_res[,3])),')'))+
theme(plot.title=element_text(hjust=0.5, face='bold'))
}
grid.arrange(g1, arrangeGrob(g2, g3, ncol=2), nrow = 2)
} |
6ccd167ca47a3adcc52202685eec20ddea4b57a3 | 94717c01dba16dc0ecaa685545f7799ec93750ba | /man/column_expander.Rd | c0e71dd903da0937099a81d47d8a50aaf292de3a | [] | no_license | jkapila/MiscR | 026326defd60c29067b0ab7f35652ebaff320e59 | 54021cc0893c7187d3a7e6ed440c601d1d7fa794 | refs/heads/master | 2021-01-18T18:33:22.435525 | 2016-05-29T06:49:57 | 2016-05-29T06:49:57 | 59,930,425 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 668 | rd | column_expander.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ColumnExpander.r
\name{column_expander}
\alias{column_expander}
\title{Column Expander}
\usage{
column_expander(data, name)
}
\arguments{
\item{data}{Any Data Set}
\item{name}{Name of the Column Variable whose value needs to be expanded}
}
\value{
Data set with additional columns with vules 0 and 1
}
\description{
Expands Values to Columns
}
\details{
Gives Data Set with levels / values as column variable with vales
0 and 1. 1's are the value where that value occurs in the variable rest all
are zero
}
\examples{
data(iris)
column_expander(iris,"Species")
}
\author{
Jitin Kapila
}
|
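The behaviour described in this man page can be sketched in a few lines of base R (`expand_column` is a hypothetical helper written for illustration, not the package implementation):

```r
# Expand a column's values into 0/1 indicator columns: 1 where that value
# occurs in the variable, 0 everywhere else.
expand_column <- function(data, name) {
  lv  <- unique(as.character(data[[name]]))
  ind <- sapply(lv, function(v) as.integer(data[[name]] == v))
  colnames(ind) <- paste(name, lv, sep = "_")
  cbind(data, ind)
}

data(iris)
head(expand_column(iris, "Species"))  # adds Species_setosa, Species_versicolor, Species_virginica
```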
263d028843b3ed3e02aa6c2b526f40a3ea6f3588 | 903191c7cc98fdc553595f86259554c1c39b88a9 | /man/plot_genre.Rd | 41c2b5bc567fb9b05776244a313818628a2569a5 | [] | no_license | gmlang/imdb | 5f81f6ed2c6aa00828f9d3379861df26801a666a | dc5f69d736682cd8feead228c001d4189c25b638 | refs/heads/master | 2021-01-10T10:47:40.505890 | 2018-10-09T06:27:44 | 2018-10-09T06:27:44 | 36,202,144 | 1 | 0 | null | null | null | null | UTF-8 | R | false | true | 278 | rd | plot_genre.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/05-genres.R
\name{plot_genre}
\alias{plot_genre}
\title{Generate descriptive plots for genres}
\usage{
plot_genre()
}
\value{
3 ggplot2 objects
}
\description{
Generate descriptive plots for genres
}
|
377369beb3e8b9e41ef67f375033c0463fe1c6cd | bec519fa30cdd53fa986da8c2d00a2852a8bbda4 | /tests/strcmp.R | 55543dd7e43b672f1d71f878147bd25e9ce0bd6d | [] | no_license | cran/matlab | d39e7908a27ddd9b9e154de2f6932d98f56f8f11 | cf54fd06412a5ab3f928ee29e3b90d5050665933 | refs/heads/master | 2022-06-03T07:29:14.339119 | 2022-06-01T09:40:02 | 2022-06-01T09:40:02 | 17,697,307 | 0 | 3 | null | 2022-05-12T21:15:22 | 2014-03-13T05:16:27 | R | UTF-8 | R | false | false | 799 | r | strcmp.R | ###
### $Id: strcmp.R 22 2022-05-30 18:03:47Z proebuck $
###
##-----------------------------------------------------------------------------
test.strcmp <- function(input, expected) {
output <- do.call(getFromNamespace("strcmp", "matlab"), input)
identical(output, expected)
}
test.strcmp(list(S = "foo", T = "foo"), TRUE)
test.strcmp(list(S = "foo", T = "bar"), FALSE)
test.strcmp(list(S = c("foo", "bar"),
T = c("foo", "bar")), TRUE)
# Case matters...
test.strcmp(list(S = c("foo", "bar"),
T = c("FOO", "BAR")), FALSE)
# Number of elements of each must match...
test.strcmp(list(S = c("foo", "bar"),
T = c("foo", "bar", "baz")), FALSE)
test.strcmp(list(S = c("foo", "bar", "baz"),
T = c("xxx", "bar", "xxx")), FALSE)
|
9ac789aae959f6425554b6ee6d46398b32fb694c | c20376898f39199f8022fe8dc589d0610b949386 | /02_tidyTables.R | 868a8de843e795f4b10acd2c6280aa3a9156be5f | [] | no_license | kika1002/RCP | 72b76eb026a3042464a101f703838e92df57b039 | 8777779d4b6effdd5010e1af93157fb7a6fd3d69 | refs/heads/master | 2020-06-10T23:47:54.387462 | 2019-07-15T00:01:16 | 2019-07-15T00:01:16 | 193,794,422 | 3 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,417 | r | 02_tidyTables.R |
# Load libraries ----------------------------------------------------------
require(pacman)
pacman::p_load(raster, rgdal, rgeos, stringr, velox, sf, tidyverse)
# Initial setup -----------------------------------------------------------
g <- gc(reset = TRUE)
rm(list = ls())
options(scipen = 999)
# Functions to use --------------------------------------------------------
myRbind <- function(var){
# var <- vrs[1]
tbl <- grep(var, fls, value = TRUE) %>%
map(.x = ., .f = readRDS) %>%
bind_rows()
if(var == 'pr'){
tbl <- tbl %>%
mutate(value = round(value * 86400, digits = 1))
} else {
tbl <- tbl %>%
mutate(value = round(value - 273.15, digits = 1))
}
print('Done!')
return(tbl)
}
reviewDays <- function(tbl){
# tbl <- prec
x <- tbl %>%
group_by(gc, year) %>%
summarise(count = n()) %>%
ungroup() %>%
distinct(count)
y <- tbl %>%
group_by(gc, year) %>%
summarise(count = n()) %>%
ungroup()
z <- y %>%
filter(count %in% c(730, 732))
print('Done!')
return(z)
}
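# Why 730 / 732 in the filter above (an inference, not stated in the original
# script): with two records per day, a complete year holds
2 * 365  # 730 values in a normal year
2 * 366  # 732 values in a leap year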
# Load data ---------------------------------------------------------------
fls <- list.files('rds/clima/', full.names = TRUE, pattern = '.rds$')
vrs <- c('pr', 'tasmax', 'tasmin')
# Join tables -------------------------------------------------------------
prec <- myRbind(var = vrs[1]) %>%
rename(prec = value) %>%
  dplyr::select(-variable)
tmax <- myRbind(var = vrs[2]) %>%
rename(tmax = value) %>%
dplyr::select(-variable)
tmin <- myRbind(var = vrs[3]) %>%
rename(tmin = value) %>%
dplyr::select(-variable)
# Review tables -----------------------------------------------------------
prec_rvw <- reviewDays(prec)
tmax_rvw <- reviewDays(tmax)
tmin_rvw <- reviewDays(tmin)
tble <- inner_join(tmax, tmin, by = c('gc' = 'gc', 'year' = 'year', 'day' = 'day', 'x' = 'x', 'y' = 'y'))
tble <- inner_join(tble, prec, by = c('gc' = 'gc', 'year' = 'year', 'day' = 'day', 'x' = 'x', 'y' = 'y'))
# Quality control ---------------------------------------------------------
tble <- tble %>%
  mutate(comparison = tmax > tmin)
write.csv(tble, 'tbl/clim_future_RCP85.csv', row.names = FALSE)
# --------------------------------------------------------------------------
# END ----------------------------------------------------------------------
# --------------------------------------------------------------------------
|
5d713aa0d8cfcb4e262c0c81561fcd8a76fdfefb | 4d63d3da98c0a7aeaf837e093b287f7ee23c86ba | /R/all_generic.R | a5622587a6588c490fd7084500ea9c5462918f44 | [
"MIT"
] | permissive | bbuchsbaum/colorplane | 69f5155d879964944206fbc95ccf65321df28292 | 1b7a26fec61bc5c985aa9ed0438303566a8085ee | refs/heads/master | 2023-03-06T19:06:41.284373 | 2023-03-03T12:35:16 | 2023-03-03T12:35:16 | 147,871,527 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,752 | r | all_generic.R |
#' blend two color planes
#'
#' given two color planes, generate a new color plane by blending the colors using the supplied alpha multiplier.
#'
#' @param bottom the bottom color plane
#' @param top the top color plane
#' @param alpha the alpha overlay value.
#'
#'
#' @export
#' @rdname blend_colors-methods
#'
#' @details
#'
#' The functions in this package blend colors based on the "over" operator, where `top` is the foreground and `bottom` is the background.
#'
#'
#' @references
#'
#' https://en.wikipedia.org/wiki/Alpha_compositing
#'
#' @examples
#'
#' top <- IntensityColorPlane(1:5, cols=rainbow(5))
#' bottom <- IntensityColorPlane(1:5, cols=rev(rainbow(5)))
#'
#' top <- map_colors(top)
#' bottom <- map_colors(bottom)
#' bc <- blend_colors(bottom, top, .5)
setGeneric(name="blend_colors", def=function(bottom, top, alpha) standardGeneric("blend_colors"))
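# Illustrative note on the math (an assumption for intuition, not the package's
# actual method implementation): with a fully opaque bottom layer, the "over"
# operator reduces to a channel-wise linear blend,
#   blended <- alpha * top + (1 - alpha) * bottom
# e.g. red (255, 0, 0) over blue (0, 0, 255) at alpha = 0.5 gives (127.5, 0, 127.5).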
#' map data values to a set of colors
#'
#' instantiate a vector of colors from a ColorPlane specification.
#'
#' @param x the object to map over
#' @param ... extra args
#' @export
#' @rdname map_colors-methods
#'
#'
#' @examples
#'
#' cp <- IntensityColorPlane(seq(1,100), cols=rainbow(25))
#' cl <- map_colors(cp, irange=c(0,50))
#' stopifnot(cl@clr[50] == rainbow(25)[25])
#'
#' @return a \code{HexColorPlane} instance containing the mapped colors
setGeneric(name="map_colors", def=function(x, ...) standardGeneric("map_colors"))
#' convert to rgb colors
#'
#' @param x the object to convert
#' @param ... extra args
#' @rdname as_rgb-methods
#' @export
#' @examples
#' cp <- IntensityColorPlane(seq(1,100), cols=rainbow(25))
#' cl <- map_colors(cp, irange=c(0,50))
#' rgbcols <- as_rgb(cl)
setGeneric(name="as_rgb", def=function(x, ...) standardGeneric("as_rgb"))
#' convert to hex colors
#'
#' @param x the object to convert
#' @param ... extra args
#' @rdname as_hexcol-methods
#' @export
#'
#' @return a character vector of hex colors
#' @seealso \link{rgb}
setGeneric(name="as_hexcol", def=function(x, ...) standardGeneric("as_hexcol"))
#' alpha_channel
#'
#' extract the alpha channel
#'
#' @param x the object to extract alpha channel from
#' @param ... extra args
#'
#' @export
#' @rdname alpha_channel-methods
#' @examples
#' cp <- IntensityColorPlane(seq(1,5), cols=rainbow(25))
#' cl <- map_colors(cp, irange=c(0,50))
#' stopifnot(length(alpha_channel(cl)) == 5)
setGeneric(name="alpha_channel", def=function(x, ...) standardGeneric("alpha_channel"))
#' get_color
#'
#' get the color associated with one or more values
#'
#' @param x the color lookup table
#' @param v the intensity value(s)
#' @param ... extra args
#' @export
#' @rdname get_color-methods
#' @return a color value
setGeneric(name="get_color", def=function(x, v, ...) standardGeneric("get_color"))
|
69a86b8c7dbc76b2099c275387225794f7a5f052 | 4476526ab2485378636b8c774bbc68ac9b3654fa | /Exercises2/saratoga_lm.R | b4c2b4ef72f7f989f104fe3e116c5cb41149cec3 | [] | no_license | rykoh/SDS323 | 426c2ffce0c5ecc1b9783de15c582b5ffb28e209 | 27915e753c2f661c5f088385a96928001265847d | refs/heads/master | 2022-07-04T05:33:57.606182 | 2020-05-11T14:59:50 | 2020-05-11T14:59:50 | 247,365,673 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 5,401 | r | saratoga_lm.R | library(tidyverse)
library(mosaic)
data(SaratogaHouses)
summary(SaratogaHouses)
rmse = function(y, yhat) {
sqrt( mean( (y - yhat)^2 ) )
}
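# Quick sanity check of the helper (illustrative values, not from the data):
# rmse(c(1, 2), c(1, 4)) is sqrt(mean(c(0, 4))) = sqrt(2), roughly 1.414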
lm_small = lm(price ~ bedrooms + bathrooms + lotSize, data=SaratogaHouses)
lm_medium = lm(price ~ lotSize + age + livingArea + pctCollege + bedrooms +
fireplaces + bathrooms + rooms + heating + fuel + centralAir, data=SaratogaHouses)
lm_medium2 = lm(price ~ . - sewer - waterfront - landValue - newConstruction, data=SaratogaHouses)
coef(lm_medium)
coef(lm_medium2)
lm_big = lm(price ~ (. - sewer - waterfront - landValue - newConstruction)^2, data=SaratogaHouses)
coef(lm_big)
##############WORK SPACE################
plot(price ~ lotSize, data=SaratogaHouses) #mid O
plot(price ~ age, data=SaratogaHouses) #mid-strong O
plot(price ~ landValue, data=SaratogaHouses) #strong O
plot(price ~ livingArea, data=SaratogaHouses) #strong O
plot(price ~ pctCollege, data=SaratogaHouses) #mid-strong O
plot(price ~ bedrooms, data=SaratogaHouses) #weak-mid
plot(price ~ fireplaces, data=SaratogaHouses) #weak-mid
plot(price ~ bathrooms, data=SaratogaHouses) #mid O
plot(price ~ rooms, data=SaratogaHouses) #mid O
plot(price ~ heating, data=SaratogaHouses) #weak-mid
plot(price ~ fuel, data=SaratogaHouses) #weak
plot(price ~ sewer, data=SaratogaHouses) #weak
plot(price ~ waterfront, data=SaratogaHouses) #strong O
plot(price ~ newConstruction, data=SaratogaHouses)#strong O
plot(price ~ centralAir, data=SaratogaHouses) #weak
houseVal_prop = SaratogaHouses[, c(1:10)]
propertyVal_prop = SaratogaHouses[, c(11:16)]
head(houseVal_prop)
head(propertyVal_prop)
houseVal_corMat = cor(houseVal_prop)
houseVal_corMat
lm_howard = lm(price ~ livingArea + bathrooms + waterfront + newConstruction + livingArea*bedrooms + livingArea*bathrooms
+ livingArea*rooms + bedrooms*rooms + bathrooms*rooms, data=SaratogaHouses)
coef(lm_howard)
##############WORK SPACE################
n = nrow(SaratogaHouses)
n_train = round(0.8*n) # round to nearest integer
n_test = n - n_train
train_cases = sample.int(n, n_train, replace=FALSE)
test_cases = setdiff(1:n, train_cases)
saratoga_train = SaratogaHouses[train_cases,]
saratoga_test = SaratogaHouses[test_cases,]
lm1 = lm(price ~ lotSize + bedrooms + bathrooms, data=saratoga_train)
lm2 = lm(price ~ . - sewer - waterfront - landValue - newConstruction, data=saratoga_train)
lm3 = lm(price ~ (. - sewer - waterfront - landValue - newConstruction)^2, data=saratoga_train)
# Predictions out of sample
yhat_test1 = predict(lm1, saratoga_test)
yhat_test2 = predict(lm2, saratoga_test)
yhat_test3 = predict(lm3, saratoga_test)
yhat_test_howard = predict(lm_howard, saratoga_test)
rmse_vals = do(100)*{
# re-split into train and test cases with the same sample sizes
train_cases = sample.int(n, n_train, replace=FALSE)
test_cases = setdiff(1:n, train_cases)
saratoga_train = SaratogaHouses[train_cases,]
saratoga_test = SaratogaHouses[test_cases,]
# Fit to the training data
lm1 = lm(price ~ lotSize + bedrooms + bathrooms, data=saratoga_train)
lm2 = lm(price ~ . - sewer - waterfront - landValue - newConstruction, data=saratoga_train)
lm3 = lm(price ~ (. - sewer - waterfront - landValue - newConstruction)^2, data=saratoga_train)
lm_howard = lm(price ~ livingArea + bathrooms + rooms + waterfront + newConstruction + pctCollege +
centralAir + livingArea*bedrooms + livingArea*bathrooms
+ livingArea*rooms + bedrooms*rooms + fireplaces*fuel + lotSize*sewer, data=SaratogaHouses)
# Predictions out of sample
yhat_test1 = predict(lm1, saratoga_test)
yhat_test2 = predict(lm2, saratoga_test)
yhat_test3 = predict(lm3, saratoga_test)
yhat_test_howard = predict(lm_howard, saratoga_test)
c(rmse(saratoga_test$price, yhat_test1),
rmse(saratoga_test$price, yhat_test2),
rmse(saratoga_test$price, yhat_test3),
rmse(saratoga_test$price, yhat_test_howard))
}
rmse_vals
colMeans(rmse_vals)
boxplot(rmse_vals)
k_grid = exp(seq(log(1), log(100), length=100)) %>% round %>% unique
err_grid = foreach(k = k_grid, .combine='c') %do% {
out = do(100)*{
n = nrow(SaratogaHouses)
n_train = round(0.8*n) # round to nearest integer
n_test = n - n_train
train_cases = sample.int(n, n_train, replace=FALSE)
test_cases = setdiff(1:n, train_cases)
train = SaratogaHouses[train_cases,]
test = SaratogaHouses[test_cases,]
X_train = model.matrix(price ~ livingArea + bathrooms + rooms + waterfront + newConstruction + pctCollege +
centralAir - 1, data=train)
X_test = model.matrix(price ~ livingArea + bathrooms + rooms + waterfront + newConstruction + pctCollege +
centralAir - 1, data=test)
Y_train = train$price
Y_test = test$price
scale_factors = apply(X_train, 2, sd)
X_train_sc = scale(X_train, scale=scale_factors)
X_test_sc = scale(X_test, scale=scale_factors)
knn_model = knn.reg(train=X_train_sc, test=X_test_sc, Y_train, k=k)
rmse(Y_test, knn_model$pred)
}
mean(out$result)
}
plot(k_grid, err_grid)
optimal_k = k_grid[match(min(err_grid), err_grid)]
knn_model_optimal = knn.reg(train=X_train_sc, test=X_test_sc, Y_train, k=optimal_k)
min(err_grid)
|
5f389b2707736417e6f579d56b35295bf3e74cd3 | ab7d15d06ed92cd51cc383dc9e98ae2a8fa41eaa | /tests/testthat/test-get_select_last_nodes_edges_created.R | 6dca71302be4fd3a10f44e1686c0bce45ac56fe2 | [
"MIT"
] | permissive | rich-iannone/DiagrammeR | 14c46eb994eb8de90c50166a5d2d7e0668d3f7c5 | 218705d52d445c5d158a04abf8107b425ea40ce1 | refs/heads/main | 2023-08-18T10:32:30.784039 | 2023-05-19T16:33:47 | 2023-05-19T16:33:47 | 28,556,914 | 1,750 | 293 | NOASSERTION | 2023-07-10T20:46:28 | 2014-12-28T08:01:15 | R | UTF-8 | R | false | false | 3,621 | r | test-get_select_last_nodes_edges_created.R | context("Getting or selection of last-created nodes or edges in a graph")
test_that("getting or selecting the last edges created is possible", {
# Create a graph, add a cycle, and then
# add a tree
graph <-
create_graph() %>%
add_cycle(
n = 3,
type = "cycle",
rel = "a") %>%
add_balanced_tree(
k = 2,
h = 2,
type = "tree",
rel = "b")
# Select the last edges created (all edges
# from the tree)
graph_e <-
graph %>%
select_last_edges_created()
# Expect that the selection available in
# the graph is an edge selection
expect_true(
nrow(graph_e$edge_selection) > 0)
expect_true(
nrow(graph_e$node_selection) == 0)
# Expect that the edges selected are
# those that have `rel == 'b'` as an
# edge attribute (since that attribute
# belongs to the balanced tree structure,
# which was created last
expect_equal(
get_edge_df(graph_e)[which(get_edge_df(graph_e)$rel == "b"), 1],
get_selection(graph_e))
# Get the last edges created directly
# with `get_last_edges_created()`
last_edges_created <-
graph %>%
get_last_edges_created()
# Expect the same edge ID values as
# those that have `rel == 'b'` as an
# edge attribute
expect_identical(
get_edge_df(graph_e)[which(get_edge_df(graph_e)$rel == "b"), 1],
last_edges_created)
# Delete an edge from the graph
graph_edge_deleted <-
graph %>%
delete_edge(id = 5)
# Expect an error when attempting to
# get the last edges created after
# having just deleted an edge
expect_error(
graph_edge_deleted %>%
get_last_edges_created())
# Expect an error when attempting to
# select the last edges created after
# having just deleted an edge
expect_error(
graph_edge_deleted %>%
select_last_edges_created())
})
test_that("getting or selecting the last nodes created is possible", {
# Create a graph, add a cycle, and then
# add a tree
graph <-
create_graph() %>%
add_cycle(
n = 3,
type = "cycle",
rel = "a") %>%
add_balanced_tree(
k = 2,
h = 2,
type = "tree",
rel = "b")
# Select the last nodes created (all nodes
# from the tree)
graph_n <-
graph %>%
select_last_nodes_created()
# Expect that the selection available in
# the graph is a node selection
expect_true(
nrow(graph_n$node_selection) > 0)
expect_true(
nrow(graph_n$edge_selection) == 0)
# Expect that the nodes selected are
# those that have `type == 'tree'` as a
# node attribute (since that attribute
# belongs to the balanced tree structure,
# which was created last
expect_identical(
get_node_df(graph_n)[which(get_node_df(graph_n)$type == "tree"), 1],
get_selection(graph_n))
# Get the last nodes created directly
# with `get_last_nodes_created()`
last_nodes_created <-
graph %>%
get_last_nodes_created()
# Expect the same node ID values as
# those that have `type == 'tree'` as a
# node attribute
expect_identical(
get_node_df(graph_n)[which(get_node_df(graph_n)$type == "tree"), 1],
last_nodes_created)
# Delete a node from the graph
graph_node_deleted <-
graph %>%
delete_node(node = 10)
# Expect an error when attempting to
# get the last nodes created after
# having just deleted a node
expect_error(
graph_node_deleted %>%
get_last_nodes_created())
# Expect an error when attempting to
# select the last nodes created after
# having just deleted a node
expect_error(
graph_node_deleted %>%
select_last_nodes_created())
})
|
1820212c0eced321be48e5b1db06e65f309d4eae | ce68a85c4a6c5d474a6a574c612df3a8eb6685f7 | /src/book/R with application to financial quantitive analysis/CH-06/CH-06-02-02.R | 18cbdadbe157e3e1d647efe3bc584ddc07bae937 | [] | no_license | xenron/sandbox-da-r | c325b63114a1bf17d8849f076bfba22b6bdb34a3 | c217fdddc26ed523b3860e2000afc699afac55a2 | refs/heads/master | 2020-04-06T06:58:17.049181 | 2016-08-24T06:16:32 | 2016-08-24T06:16:32 | 60,466,314 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 1,482 | r | CH-06-02-02.R | ########################################################
# Description:
# 1.for Book 'R with applications to financial quantitive analysis'
# 2.Chapter: CH-06-02-02
# 3.Section: 6.2
# 4.Purpose: SV modeling through stochvol package
# 5.Author: Qifa Xu
# 6.Founded: Dec 09, 2013.
# 7.Revised: Aug 06, 2014.
########################################################
# Contents:
# 1. load package
# 2. numerical simulation for univariate SV process
#########################################################
# 0. initializing
setwd('F:/programe/book/R with application to financial quantitive analysis/CH-06')
rm(list=ls())
# 1. load package
library(stochvol)
# 2. numerical simulation for univariate SV process
# (1) simulate a highly persistent SV process
set.seed(12345)
sim <- svsim(500, mu = -5, phi = 0.95, sigma = 0.25)
# (2) obtain 5000 draws from the sampler
draws <- svsample(sim$y, draws=5000, burnin=500, priormu=c(-8, 1), priorphi=c(10, 1.2), priorsigma=0.2)
# (3) show estimats
print(sim)
summary(draws)
# (4) predict 20 days ahead
fore <- predict(draws, 20)
plot(draws, forecast = fore)
# plot(draws, pages=1, all.terms=TRUE, forecast = fore)
volplot(draws, forecast = 10)
# (5) re-plot with different quantiles
newquants <- c(0.01, 0.05, 0.25, 0.5, 0.75, 0.95, 0.99)
draws <- updatesummary(draws, quantiles = newquants)
plot(draws, forecast = 20, showobs = FALSE, col = seq(along = newquants),
forecastlty = 3, showprior = FALSE)
volplot(draws, forecast = 10)
|
3389e5b52576ff490c944d0d841841f9ffed46ca | b044aed183ac468a69f8a0a6e9c456380b7b0e47 | /archive/influential2.R | 13229916cc4b482de415e5f97428a25104f0565d | [] | no_license | peiyaow/alpha | 5f0db3509e1fe935bd5aa1a09c01a764a96ebf5d | 7c1cbd360b24bc81fbae4926fcb83a06afe46c70 | refs/heads/master | 2021-06-24T10:44:55.173732 | 2020-11-09T00:55:02 | 2020-11-09T00:55:02 | 165,456,731 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,145 | r | influential2.R | setwd("~/Documents/GitHub/alpha/")
source('functions.R')
load("data/ADNI2.RData")
X.mean = apply(X, 2, mean)
X.sd = apply(X, 2, sd)
X = sweep(sweep(X, 2, X.mean), 2, X.sd, "/")
plotPC = function(X1){
X1.mean = apply(X1, 2, mean)
X1 = sweep(X1, 2, X1.mean)
return(X2U(X1, plot = T))
}
n = length(Y)
ix = 1:n
ix1 = ix[label==0]
X1 = X[label==0,]
res1 = plotPC(X1)
ix1.remove = ix1[res1$F2[,1]< -1 & res1$F2[,2]>1]
X2 = X[label==1,]
ix2 = ix[label==1]
res2 = plotPC(X2)
ix2.remove = ix2[res2$F2[,2]< -2]
X3 = X[label==2,]
ix3 = ix[label==2]
res3 = plotPC(X3)
ix3.remove = ix3[abs(res3$F2[,1]) > 2]
X4 = X[label==3,]
ix4 = ix[label==3]
res4 = plotPC(X4)
ix4.remove = ix4[res4$F2[,1] < -1.5]
X5 = X[label==4,]
ix5 = ix[label == 4]
res5 = plotPC(X5)
ix5.remove = ix5[res5$F1< -.7]
ix.remove = c(ix1.remove, ix2.remove, ix3.remove, ix4.remove, ix5.remove)
X = X[-ix.remove,]
Y = Y[-ix.remove]
label = label[-ix.remove]
boxplot(Y~label)$stats
ix = 1:length(Y)
ix.drop = c(ix[label==0 & Y>11],ix[label==2 & Y>16],ix[label==3 & Y > 22])
Y = Y[-ix.drop]
label = label[-ix.drop]
X = X[-ix.drop,]
save(X, Y, label, file="ADNI2_clean2.RData")
|
6398905903f53e536c439a0b9c947e9700e5cbf5 | 445b8e2a2cedcdf2b6da180e6f26fbe7db7038ad | /Voetbal.R | 947f4a013aa8fe277dba430db903199610b353a9 | [] | no_license | ManOnWire/Voetbal | f389869e6eb566f70f2b91298716b228e3afd7e3 | 8f36fba0872a060ffc9fca5ce6c51c856c67ec1f | refs/heads/master | 2020-03-22T21:34:29.819551 | 2017-10-29T14:04:55 | 2017-10-29T14:04:55 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,896 | r | Voetbal.R | # Import data about Eredivisie Seizoen 2014/2015. Source: http://www.football-data.co.uk
# First I downloaded the Excel-file manually, but it would be better to work with code inside
# a script. Of course, it would also be nice to add a date for the download.
# Add a download date...
download_date <- date()
# Set working directory...
setwd("~/Dropbox/Sannie's Stuff/Voetbal")
# If Data directory does not exist; create it.
if (!file.exists("data")) {
dir.create("data")
}
# Put URL pointing towards the dataset into a variable.
url_to_dataset <- "http://www.football-data.co.uk/mmz4281/1415/N1.csv"
# Download the dataset to the Data directory inside the working directory.
download.file(url_to_dataset, destfile = "./data/N1.csv")
# Put contents of .csv file into R object:
EreDivisie_2014_15 <- read.csv("./data/N1.csv")
# What have we got?
dim(EreDivisie_2014_15)
head(EreDivisie_2014_15)
names(EreDivisie_2014_15)
# There seem to be 306 rows (match results, games) and 52 variables.
# The first 10 variables are most relevant for me at this time, since
# this is data about the actual matches. The rest of the variables
# are betting odds. We'll do away with those for now to make it more
# manageble:
EreDivisie_2014_15 <- EreDivisie_2014_15[,1:10]
# What kind of variables are available exactly?
str(EreDivisie_2014_15)
# It looks like the "Date" variable isn't actually a date, but a factor
# with 92 levels. "HomeTeam" and "AwayTeam" are factors aswell. They both
# have 18 levels. So there were 18 teams in the league for that season.
# That should result in 18 * 18 = 324 minus 18 = 306 matches. This is
# consistent with the number of rows in the dataset.
# Total number of home and away matches for Ajax:
nrow(EreDivisie_2014_15[EreDivisie_2014_15$HomeTeam == "Ajax" | EreDivisie_2014_15$AwayTeam == "Ajax",])
# Result: 34 matches...
# Which equals 17 home games and 17 away games against each every other
# team in the league.
# QUESTION #1A: How often is the half-time result equal to the full-time result?
nrow(EreDivisie_2014_15[EreDivisie_2014_15$HTR == EreDivisie_2014_15$FTR,]) / 306
# Result: 0.5816993. Almost 6 out of 10 times...
# In this season, at least.
# This matches the intuition: when one team is ahead with only 45 minutes
# left to play the opportunities to change the score are being reduced.
# QUESTION #1B: How often does a draw at half-time remain a draw full-time?
No_HalfTime_Ds <- nrow(EreDivisie_2014_15[EreDivisie_2014_15$HTR == "D",])
No_Ds_Remaining_FullTime <- nrow(EreDivisie_2014_15[EreDivisie_2014_15$HTR == "D" & EreDivisie_2014_15$FTR == "D",])
No_Ds_Remaining_FullTime / No_HalfTime_Ds
# Result: 0.3464567. Only about 1 in 3 half-time draws remain a draw.
# That's considerably less than the figure for the total half-time results.
# Consequently the fraction of 'won / lost' matches half-time that remain
# that way should be even higher than 6 out of 10...
# QUESTION #1C: How often does a half-time win / loss remain so full-time?
No_HalfTime_Hs <- nrow(EreDivisie_2014_15[EreDivisie_2014_15$HTR == "H",])
No_Hs_Remaining_FullTime <- nrow(EreDivisie_2014_15[EreDivisie_2014_15$HTR == "H" & EreDivisie_2014_15$FTR == "H",])
No_Hs_Remaining_FullTime / No_HalfTime_Hs
# Result: 0.7708333. Almost 8 out of 10 times the home team wins the match
# when it was ahead half-time...
No_HalfTime_As <- nrow(EreDivisie_2014_15[EreDivisie_2014_15$HTR == "A",])
No_As_Remaining_FullTime <- nrow(EreDivisie_2014_15[EreDivisie_2014_15$HTR == "A" & EreDivisie_2014_15$FTR == "A",])
No_As_Remaining_FullTime / No_HalfTime_As
# Result: 0.7228916. More than 7 out of 10 times a visiting team that is ahead
# at half-time wins the complete match (at full-time).
# So, when a difference has been made half-time it's quite likely that this
# will be reflected in the final score (a chance of 70 - 80%). Half-time draws
# usually don't remain drawn. Two out of three half-time draws become wins / losses.
# QUESTION #2: Is there a home team advantage?
nrow(EreDivisie_2014_15[EreDivisie_2014_15$FTR == "H",]) / 306
# Result: 0.4509804.
nrow(EreDivisie_2014_15[EreDivisie_2014_15$FTR == "D",]) / 306
# Result: 0.2385621.
nrow(EreDivisie_2014_15[EreDivisie_2014_15$FTR == "A",]) / 306
# Result: 0.3104575.
# Home wins seem way more likely than away wins. The ratio seems to be around
# 1,5. That strikes me as a big enough difference to conclude this is probably
# a real effect and not a statistical fluke. Home teams have a greater chance
# of winning. Or, put differently, home teams only have a 30% chance of losing.
# Winning or drawing are more likely.
#
# INTERESTING FOLLOW UP QUESTION: Has the rule change (3 points for a win
# instead of 2) had any effect? Are the percentages much different? Is it
# statistically meaningful?
#
# To get a quick impression of the half-time / full-time dynamics including
# the favorable probabilities for the home team is seems handy to create a table
# of the half-time and full-time results:
table(EreDivisie_2014_15$HTR, EreDivisie_2014_15$FTR)
# This produces a 3 x 3 table, where it is not immediately clear which axis
# represents the half-time results and which the full-time results. The following
# code provides us with a number that enables us to deduce which is wich:
nrow(EreDivisie_2014_15[(EreDivisie_2014_15$FTR == "H" & EreDivisie_2014_15$HTR == "D"),])
# This figure indicates that the half-time scores are on the Y-axis, while
# the full-time scores are on the X-axis. How to add axis-names to this table
# is unclear at the moment, we still have to find that out.
# The absolute values in the table don't communicate clearly enough what might
# be going on. Converting the numbers to percentages of the total number of games
# helps:
round(table(EreDivisie_2014_15$HTR, EreDivisie_2014_15$FTR) / nrow(EreDivisie_2014_15) * 100)
# Result:
#
# A D H
# A 20 4 3
# D 9 14 18
# H 2 5 24
#
# An analyses of the matrix seems to indicate that the half-time results have
# significant predictive value. Games 'won' half-time and (actually) won full-time
# by the home team show the highest percentage (24), followed by games 'won' / won
# (half-time / full-time) by the away team (20). These results seem to be 'sticky'
# aswell, since both home and away teams lose their lead (to a draw or a loss) not
# very often. Chances are that a win half-time will be a win full-time. When a team
# loses their lead ending up with a draw is more likely than finishing a loser.
# This seems logical.
# With half-time draws the story seems to be different. Here the highest percentage
# represents games that were drawn half-time and ended up a win for the home team
# in the end. The likelyhood of this happening seems to be twice as great when compared
# to a half-time draw being converted to a win for the away team full-time.
# This seems to confirm the home team advantage. |
818996dbbc77188e15a7b87ff50380d89a7d503f | 3bef70f4b3d6283f2b2bfb44ccdfbf9b28c6429d | /R/prepare_well_data.R | d23d1abb5b4652a87d3a7effe1c518e50f8daf9d | [
"MIT"
] | permissive | KWB-R/dwc.wells | 4c1594ea66b1792c6c955b98418982edf80675c1 | 45e8670647c4771fe70d59db0f7cfd1e80242361 | refs/heads/main | 2023-04-10T01:24:40.973815 | 2022-07-12T13:42:20 | 2022-07-12T13:42:20 | 351,021,733 | 0 | 0 | MIT | 2022-10-16T09:17:19 | 2021-03-24T09:35:15 | R | UTF-8 | R | false | false | 3,382 | r | prepare_well_data.R | # prepare well data ------------------------------------------------------------
prepare_well_data <- function(path, renamings) {
# read data from csv and filter Vertikalfilterbrunnen
df_wells <- readr::read_csv(path, skip = 9) %>%
select_rename_cols(renamings$main, "old_name", "new_name_en") %>%
dplyr::filter(grepl("V$", .data$well_name))
# check for duplicates
if (FALSE) {
sum(duplicated(df_wells$site_id)) # duplicates: rehabilitated wells get site id 11
sum(duplicated(df_wells$well_id)) # no duplicates
sum(duplicated(df_wells$drilling_id)) # no duplicates
}
# assign data type "date"
date_cols <- c("construction_date", "operational_start.date",
"operational_state.date", "inliner.date")
df_wells <- df_wells %>%
dplyr::mutate(dplyr::across(dplyr::all_of(date_cols), .fns = as.Date, format = "%Y-%m-%d"),
monitoring.date = as.Date(.data$monitoring.date, format = "%d.%m.%Y"))
# remove outliers in dates and numerical data
df_wells <- df_wells %>%
dplyr::mutate(
# false dates
dplyr::across(dplyr::all_of(date_cols), .fns = dplyr::na_if, "1899-12-30"),
# specific capacity at operational start
operational_start.Qs = dplyr::na_if(.data$operational_start.Qs, 469),
admissible_discharge = dplyr::na_if(.data$admissible_discharge, 0),
n_screens = dplyr::na_if(.data$n_screens, 0)
)
# replace missing construction dates with operational_start.date
cd_missing <- is.na(df_wells$construction_date)
df_wells[cd_missing, "construction_date"] <- df_wells[cd_missing, "operational_start.date"]
# calculate filter length and absolute well depth
df_wells$filter_length <- df_wells$screen_bottom_level - df_wells$screen_top_level
df_wells$well_depth <- df_wells$well_top_level - df_wells$well_depth_level
# recalcuate Qs, as there are 97 wells with no Qs but with Q, W_dynamic, W_static
df_wells <- df_wells %>%
dplyr::mutate(
operational_start.Qs = .data$operational_start.Q /
(.data$operational_start.W_dynamic - .data$operational_start.W_static))
# calculate years from date
df_wells <- df_wells %>%
dplyr::mutate(
construction_year = lubridate::year(.data$construction_date),
operational_start.year = lubridate::year(.data$operational_start.date)
)
# clean categorical variables
df_wells <- df_wells %>%
dplyr::mutate(
dplyr::across(tidyr::starts_with("aquifer"), ~dplyr::na_if(., "nicht bekannt"))
)
# group categorical variables according to lookup table
vars <- c("screen_material", "casing_material", "well_function", "waterworks")
for (var in vars) {
df_wells[[var]] <- rename_values(df_wells[[var]], renamings[[var]])
}
# create new categorical variables
df_wells <- df_wells %>%
dplyr::mutate(
inliner = factor(ifelse(!is.na(.data$inliner.date), "Yes", "No"),
levels = c("Yes", "No")),
well_gallery = substr(.data$well_name, 1, 7)
)
# assign data type "factor"
factor_cols <- c("well_function", "operational_state", "aquifer_confinement",
"aquifer_coverage", "casing_material", "screen_material",
"waterworks", "well_gallery")
df_wells <- df_wells %>%
dplyr::mutate(dplyr::across(dplyr::all_of(factor_cols), .fns = tidy_factor))
df_wells
}
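# Example usage sketch (hypothetical: the actual path and the structure of the
# 'renamings' lookup are project-specific -- it must supply a 'main' table with
# 'old_name'/'new_name_en' columns plus one renaming vector per grouped variable):
# renamings <- readRDS("data/renamings.rds")
# df_wells <- prepare_well_data("data/wells_export.csv", renamings)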
|
c9ba33326c2fe17be3cbe0d15724d5629ab719f6 | 0cc3ac7cd489525298cd1e84cd4901e1605c9696 | /R/otpr_get_isochrone.R | b5b4df36f1e7aae41ec8179418a2c54685b36913 | [] | no_license | mikem77/otpr | 59f15577f45351739eb345b9bb8358cb6495ef66 | cf57a625e4e2fe8cdff7cb9559b716a7201d22a1 | refs/heads/master | 2020-06-22T15:51:31.136949 | 2019-04-24T18:41:51 | 2019-04-24T18:41:51 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 7,499 | r | otpr_get_isochrone.R | #' Returns one or more travel time isochrones
#'
#' Returns one or more travel time isochrone in either GeoJSON format or as an
#' \strong{sf} object. Only works correctly for walk and/or transit modes - a limitation
#' of OTP. Isochrones can be generated either \emph{from} a location or \emph{to}
#' a location.
#' @param otpcon An OTP connection object produced by \code{\link{otp_connect}}.
#' @param location Numeric vector, Latitude/Longitude pair, e.g. `c(53.48805, -2.24258)`
#' @param fromLocation Logical. If TRUE (default) the isochrone
#' will be generated \emph{from} the \code{location}. If FALSE the isochrone will
#' be generated \emph{to} the \code{location}.
#' @param format Character, required format of returned isochrone(s). Either JSON
#' (returns GeoJSON) or SF (returns simple feature collection). Default is JSON.
#' @param mode Character, mode of travel. Valid values are: WALK, TRANSIT, BUS,
#' or RAIL.
#' Note that WALK mode is automatically included for TRANSIT, BUS and RAIL.
#' TRANSIT will use all available transit modes. Default is TRANSIT.
#' @param date Character, must be in the format mm-dd-yyyy. This is the desired
#' date of travel. Only relevant if \code{mode} includes public transport.
#' Default is current system date.
#' @param time Character, must be in the format hh:mm:ss. If \code{arriveBy} is
#' FALSE (the default) this is the desired departure time, otherwise the desired
#' arrival time. Default is current system time.
#' @param cutoffs Numeric vector, containing the cutoff times in seconds, for
#' example: `c(900, 1800, 2700)`
#' would request 15, 30 and 45 minute isochrones. Can be a single value.
#' @param batch Logical. If TRUE, goal direction is turned off and a full path tree is built. Default is TRUE.
#' @param arriveBy Logical. Whether the specified date and time is for
#' departure (FALSE) or arrival (TRUE). Default is FALSE.
#' @param maxWalkDistance Numeric. The maximum distance (in meters) the user is
#' willing to walk. Default = 800.
#' @param walkReluctance Integer. A multiplier for how bad walking is, compared
#' to being in transit for equal lengths of time. Default = 2.
#' @param transferPenalty Integer. An additional penalty added to boardings after
#' the first. The value is in OTP's internal weight units, which are roughly equivalent
#' to seconds. Set this to a high value to discourage transfers. Default is 0.
#' @param minTransferTime Integer. The minimum time, in seconds, between successive
#' trips on different vehicles. This is designed to allow for imperfect schedule
#' adherence. This is a minimum; transfers over longer distances might use a longer time.
#' Default is 0.
#' @return Returns a list. First element in the list is \code{errorId}. This is "OK" if
#' OTP successfully returned the isochrone(s), otherwise it is "ERROR". The second
#' element of list varies:
#' \itemize{
#' \item If \code{errorId} is "ERROR" then \code{response} contains the OTP error message.
#' \item If \code{errorId} is "OK" then \code{response} contains the isochrone(s) in
#' either GeoJSON format or as an \strong{sf} object, depending on the value of the
#' \code{format} argument.
#' }
#' @examples \dontrun{
#' otp_get_isochrone(otpcon, location = c(53.48805, -2.24258), cutoffs = c(900, 1800, 2700))
#'
#' otp_get_isochrone(otpcon, location = c(53.48805, -2.24258), fromLocation = FALSE,
#' cutoffs = c(900, 1800, 2700), mode = "BUS")
#'}
#' @export
otp_get_isochrone <-
function(otpcon,
location,
fromLocation = TRUE,
format = "JSON",
mode = "TRANSIT",
date,
time,
cutoffs,
batch = TRUE,
arriveBy = FALSE,
maxWalkDistance = 800,
walkReluctance = 2,
transferPenalty = 0,
minTransferTime = 0
)
{
if(missing(date)){
date <- format(Sys.Date(), "%m-%d-%Y")
}
if(missing(time)) {
time <- format(Sys.time(), "%H:%M:%S")
}
# allow lowercase
format <- toupper(format)
mode <- toupper(mode)
#argument checks
coll <- checkmate::makeAssertCollection()
checkmate::assert_logical(fromLocation, add = coll)
checkmate::assert_integerish(cutoffs, lower = 0, add = coll)
checkmate::assert_logical(batch, add = coll)
checkmate::assert_class(otpcon, "otpconnect", add = coll)
checkmate::assert_numeric(
location,
lower = -180,
upper = 180,
len = 2,
add = coll
)
checkmate::assert_int(maxWalkDistance, lower = 0, add = coll)
checkmate::assert_int(walkReluctance, lower = 0, add = coll)
checkmate::assert_int(transferPenalty, lower = 0, add = coll)
checkmate::assert_int(minTransferTime, lower = 0, add = coll)
checkmate::assert_logical(arriveBy, add = coll)
checkmate::assert_choice(
mode,
choices = c("WALK", "BUS", "RAIL", "TRANSIT"),
    null.ok = FALSE,
add = coll
)
checkmate::assert_choice(
format,
choices = c("JSON", "SF"),
null.ok = FALSE,
add = coll
)
checkmate::reportAssertions(coll)
# add WALK to relevant modes
if (identical(mode, "TRANSIT") |
identical(mode, "BUS") |
identical(mode, "RAIL")) {
mode <- append(mode, "WALK")
}
mode <- paste(mode, collapse = ",")
# check date and time are valid
if (otp_is_date(date) == FALSE) {
stop("date must be in the format mm-dd-yyyy")
}
if (otp_is_time(time) == FALSE) {
stop("time must be in the format hh:mm:ss")
}
# Construct URL
routerUrl <- paste0(make_url(otpcon), "/isochrone")
# make cutoffs into list
cutoffs <- as.list(cutoffs)
names(cutoffs) <- rep("cutoffSec", length(cutoffs))
if (isTRUE(fromLocation)) {
req <- httr::GET(
routerUrl,
query =
append(list(
fromPlace = paste(location, collapse = ","),
mode = mode,
batch = batch,
date = date,
time = time,
maxWalkDistance = maxWalkDistance,
walkReluctance = walkReluctance,
arriveBy = arriveBy,
transferPenalty = transferPenalty,
minTransferTime = minTransferTime
), cutoffs)
)
} else {
# due to OTP bug when we require an isochrone to the location we must provide the
# location in toPlace, but also provide fromPlace (which is ignored). Here we
# make fromPlace the same as toPlace.
req <- httr::GET(
routerUrl,
query =
append(list(
toPlace = paste(location, collapse = ","),
fromPlace = paste(location, collapse = ","), # due to OTP bug
mode = mode,
batch = batch,
date = date,
time = time,
maxWalkDistance = maxWalkDistance,
walkReluctance = walkReluctance,
arriveBy = arriveBy,
transferPenalty = transferPenalty,
minTransferTime = minTransferTime
), cutoffs)
)
}
# convert response content into text
req <- httr::content(req, as = "text", encoding = "UTF-8")
# Check that GeoJSON is returned
if (grepl("\"type\":\"FeatureCollection\"", req)) {
errorId <- "OK"
# convert to SF if requested
if (format == "SF"){
req <- geojsonsf::geojson_sf(req)
}
} else {
errorId <- "ERROR"
}
response <-
list("errorId" = errorId,
"response" = req)
  return(response)
}
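The `cutoffs` handling inside `otp_get_isochrone` leans on the fact that R lists may carry duplicate names, which `httr::GET` then serialises as repeated query parameters. A standalone sketch of just that step (example cutoff values; no OTP server involved):

```r
# Duplicate-named list elements are sent as repeated query parameters,
# i.e. cutoffSec=900&cutoffSec=1800&cutoffSec=2700
cutoffs <- as.list(c(900, 1800, 2700))
names(cutoffs) <- rep("cutoffSec", length(cutoffs))
names(cutoffs)
# "cutoffSec" "cutoffSec" "cutoffSec"
```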
|
bc0b0226da284e6a1ac89b445b63b4a17a392dd0 | 100b985e02823843e9999570ced26e13640ceb5f | /man/resume_navbar.Rd | 0e081ce50495decd8dcf5e179ba276f574d5277b | [
"MIT"
] | permissive | ColinFay/resume | 0b8030ebb2c5deea309e981b39bf444a4c79ad8b | d9ec83d4df4440f24fa35d100d7b4e02e54c4b29 | refs/heads/master | 2020-06-03T22:18:25.990745 | 2019-07-03T09:40:59 | 2019-07-03T09:40:59 | 191,754,203 | 36 | 2 | null | null | null | null | UTF-8 | R | false | true | 479 | rd | resume_navbar.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/nav.R
\name{resume_navbar}
\alias{resume_navbar}
\title{A Resume Navbar}
\usage{
resume_navbar(refs = c(a = "A"), image = "img/profile.jpg",
color = "#bd5d38")
}
\arguments{
\item{refs}{a named list with names being the ID and Elements being the text displayed.}
\item{image}{Image to put in the nav.}
\item{color}{Color of the sidebar}
}
\value{
An HTML taglist
}
\description{
A Resume Navbar
}
|
778864cf9423371587eac074e71ff908e40ae536 | 5e6636f824327482c44c0f175387c39801fd1e02 | /Week 3/Yelp analysis.r | aa0fc79cea16eaf7094e9e9080cf48ec0cecde86 | [] | no_license | funshoelias/social_media_data_analytics | 05f2ca0b27fa93a896544e8b62e2651d6b2ee37f | 972a6f65aa85e49ad3f1f80a1ee44eca28d3e9ec | refs/heads/master | 2021-02-16T18:09:53.229809 | 2018-08-10T20:19:32 | 2018-08-10T20:19:32 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,547 | r | Yelp analysis.r | #Library import for visualizations
library(ggplot2)
#Reads the file with the data
business_data = read.csv(file.choose())
#Plots a bar chart with state frequency
ggplot(business_data) + geom_bar(aes(x=state), fill='gray')
#Plots a bar chart with stars given frequency
ggplot(business_data) + geom_bar(aes(x=stars), fill='gray')
#Plots a pie chart with stars given
ggplot(data=business_data, aes(x=factor(1), fill=factor(stars))) + geom_bar(width=1) + coord_polar(theta="y")
#* SECOND PART *#
#Reads the data
user_data = read.csv(file.choose())
#Creates a new dataser from the general dataset, containing only vote info
user_votes = user_data[,c("cool_votes", "funny_votes", "useful_votes")]
#Is there correlation between the number of fans and the funny votes? Kinda yes.
cor(user_data$funny_votes,user_data$fans)
#A linear model to see how useful_votes is related to fans and revie_count in the user_data dataset.
my.lm = lm(useful_votes ~ review_count + fans,data = user_data)
#Calculatets the coefficients for the created linear model and shows them.
coeffs = coefficients(my.lm)
coeffs
#useful_votes = 1.41 * review_count + 22.68 * fans - 18.2596
#Plots review_count
ggplot(user_data) + geom_bar(aes(x=review_count), fill='gray')
#Cluster columns 3 and 11 of user_data dataset using the k-means technique. k=3
userCluster = kmeans(user_data[,c(3,11)], 3)
#Plots the cluster. It requires a lot of memory.
ggplot(user_data, aes(review_count, fans, color=factor(userCluster$cluster))) + geom_point()
905fbb4194e22f6322dea8d6cc5663074b5a0fbf | 958dbd91e440179034554cdd57f7a7b0cb58dc23 | /R/create_tidytuesday_dictionary.R | eeed207ab65a4bc117a508b0b8546c3348b1e281 | [
"MIT"
] | permissive | jthomasmock/tidytuesdaymeta | 4cc39ab7d75288bb0191bfe767be53a40f64ce07 | fd0e3434065e12764ba8c8c9837a053a5a16589a | refs/heads/master | 2022-12-11T05:49:08.949944 | 2022-12-06T00:59:57 | 2022-12-06T00:59:57 | 228,060,431 | 2 | 3 | null | 2020-02-01T17:32:59 | 2019-12-14T17:20:05 | R | UTF-8 | R | false | false | 402 | r | create_tidytuesday_dictionary.R | #' Create the TidyTuesday data dictionary
#' @param x an in memory data.frame or tibble.
#' @importFrom dplyr mutate
#' @importFrom tibble tibble
#' @importFrom knitr kable
#' @importFrom purrr map
#' @importFrom magrittr %>%
#' @export
create_tt_dict <- function(x) {
tibble::tibble(variable = names(x)) %>%
dplyr::mutate(
class = purrr::map(x, typeof),
description = variable
) %>%
knitr::kable()
}
|
2b746ae5b8eb25ea051a40caafa4da1089fb40d1 | 44a71491f4ebc032aaabf236f9740d149c84cafd | /Chapter_3/Chp_3_Example_12.R | 7837653b482b7353d22b213c0c897941349350ae | [] | no_license | artofstat/RCode | 82ae8f7b3319888d3a5774fe2bcafbae3ed17419 | 8e8d55d1ac4bc111e5d14798f59033d755959ae5 | refs/heads/main | 2023-03-22T03:17:45.284671 | 2022-08-15T18:09:08 | 2022-08-15T18:09:08 | 503,484,584 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,037 | r | Chp_3_Example_12.R | #############################################################
## R code to reproduce statistical analysis in the textbook:
## Agresti, Franklin, Klingenberg
## Statistics: The Art & Science of Learning from Data
## 5th Edition, Pearson 2021
## Web: ArtofStat.com
## Copyright: Bernhard Klingenberg
############################################################
####################
### Chapter 3 ###
### Example 12 ###
####################
#############################################
## Exploring Multivariate Relationships ##
#############################################
# Reading in the data:
heights <- read.csv(file='https://raw.githubusercontent.com/artofstat/data/master/Chapter3/high_jump.csv')
attach(heights) # so we can refer to variable names
# Basic scatterplot
plot(x = Year, y = Winning.Height..m., pch = 16,
col = factor(Gender),
main = 'Winning Heights for the \n Olympic High Jump Event',
xlab = 'Year', ylab = 'Height (m)')
legend("topleft",
legend = levels(factor(Gender)),
pch = 16,
col = factor(levels(factor(Gender))))
# Separating observations for men and women
menObservations <- subset(heights, Gender == 'Men')
womenObservations <- subset(heights, Gender == 'Women')
# Fitting a regression model for observations for men
lmMen <- lm(Winning.Height..m. ~ Year, data = menObservations)
# Fitting a regression model for observations for women
lmWomen <- lm(Winning.Height..m. ~ Year, data = womenObservations)
# Adding the regression equations to the plot
abline(lmMen, col = 'black')
abline(lmWomen, col = 'red')
# Scatterplot using ggplot2
library(ggplot2)
ggplot(heights,
aes(x = Year, y = Winning.Height..m.)) +
geom_point(aes(shape = Gender, color = Gender)) +
geom_smooth(method=lm, se=FALSE, fullrange= TRUE,
aes(color=Gender)) +
labs(title = 'Winning Heights for the Olympic High Jump Event',
x = 'Year', y = 'Height (m)') +
theme_bw() +
scale_x_continuous(limits = c(1890,2030), breaks = seq(1900,2020,20))
|
465478f03423df13effe099eee7c34f09a1f88b6 | 2de7dd08c59fbef410beb8a04363da73653800ac | /results/3_speaker_prosody_neutral_question/rscripts/analysis.R | 89c5eea1d64d4e28a2bba7d6754e343d9e110055 | [] | no_license | thegricean/speaker_prosody | ae7d0c3bc845486ef0adecd875e5d7309b29dacc | f60e843ce8c0713b29c20ebc2bed8aedfc5813d9 | refs/heads/master | 2021-09-24T03:07:44.457920 | 2018-10-02T09:15:51 | 2018-10-02T09:15:51 | 110,048,033 | 0 | 1 | null | 2018-01-29T02:57:41 | 2017-11-09T00:49:38 | JavaScript | UTF-8 | R | false | false | 5,648 | r | analysis.R | # load helper functions
source('helpers.R')
# load required packages
library(tidyverse)
library(forcats)
library(lme4) # for the lmer() models below
theme_set(theme_bw())
# load raw data
d = read.csv("../data/speaker_prosody.csv")
d$trial = d$slide_number_in_experiment - 2
nrow(d)
length(unique(d$workerid))
# check standard things:
# experiment completion times
mean(d$Answer.time_in_minutes)
median(d$Answer.time_in_minutes)
ggplot(d, aes(x=Answer.time_in_minutes)) +
geom_histogram()
# look at participants' comments
unique(d$comments)
# did participants feel they understoody the task?
table(d$asses)
# did participants enjoy the hit? 0 - 2
table(d$enjoyment)
# native language
table(d$language)
# gender
table(d$gender)
# age
table(d$age)
ggplot(d, aes(x=age)) +
geom_histogram()
d = d %>%
select(workerid,weak_adjective,strong_adjective,condition_knowledge,condition_prosody,response,not_paid_attention,responsequestion,exchange,question,pre_check_response,context,num_plays,age,language,asses,gender,comments,Answer.time_in_minutes,trial)
nrow(d)
# look at overall distribution of responses
ggplot(d, aes(x=response)) +
geom_histogram()
ggplot(d, aes(x=response,fill=condition_prosody)) +
geom_histogram()
# exclude participants with means > .35 on controls or < .65 on fillers (very generous)
no_attention = d %>%
group_by(workerid,condition_knowledge,condition_prosody) %>%
summarize(Mean = mean(response)) %>%
filter(Mean > .35 & condition_prosody == "control" | Mean < .65 & condition_prosody == "filler")
bad_subjects = unique(no_attention$workerid)
bad_subjects
print(paste("participants excluded with means > .35 on controls or <.65 on fillers, in %: ", round(length(bad_subjects)*100 / length(unique(d$workerid)),2),sep=" "))
d = d %>%
filter(! workerid %in% bad_subjects)
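Note the operator precedence in the exclusion filter above: `&` binds tighter than `|`, so the expression groups as (mean > .35 on controls) OR (mean < .65 on fillers) without extra parentheses. A toy check of that grouping:

```r
# & evaluates before |, so this groups as
# (Mean > .35 & control) | (Mean < .65 & filler)
Mean <- c(0.40, 0.20, 0.50, 0.90)
cond <- c("control", "control", "filler", "filler")
Mean > .35 & cond == "control" | Mean < .65 & cond == "filler"
# TRUE FALSE TRUE FALSE
```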
## plot means ##
# get condition means
agr = d %>%
group_by(condition_knowledge,condition_prosody) %>%
summarize(Mean = mean(response), CILow = ci.low(response), CIHigh = ci.high(response)) %>%
mutate(YMin = Mean - CILow, YMax = Mean + CIHigh, condition_prosody = fct_relevel(condition_prosody, c("control","RFR","neutral","filler")))
# get condition & subject means
agr_subj = d %>%
group_by(workerid,condition_knowledge,condition_prosody) %>%
summarize(Mean = mean(response), CILow = ci.low(response), CIHigh = ci.high(response)) %>%
mutate(YMin = Mean - CILow, YMax = Mean + CIHigh, condition_prosody = fct_relevel(condition_prosody, c("control","RFR","neutral","filler"))) %>%
ungroup() %>%
mutate(workerid = fct_drop(as.factor(workerid)))
dodge = position_dodge(.9)
ggplot(agr, aes(x=condition_knowledge, y=Mean)) +
geom_bar(stat="identity",fill="gray60",color="black") +
geom_errorbar(aes(ymin=YMin,ymax=YMax),width=.25) +
geom_line(data=agr_subj,aes(group=workerid,color=workerid),alpha=.5) +
xlab("Speaker knowledge") +
ylab("Mean rating (lower='no'=implicature)") +
facet_wrap(~condition_prosody,nrow=1)
ggsave("../graphs/means_subj.pdf",width=7.5)
# get condition & item means
agr_item = d %>%
group_by(weak_adjective,condition_knowledge,condition_prosody) %>%
summarize(Mean = mean(response), CILow = ci.low(response), CIHigh = ci.high(response)) %>%
mutate(YMin = Mean - CILow, YMax = Mean + CIHigh, condition_prosody = fct_relevel(condition_prosody, c("control","RFR","neutral","filler")))
dodge = position_dodge(.9)
ggplot(agr, aes(x=condition_knowledge, y=Mean)) +
geom_bar(stat="identity",fill="gray60",color="black") +
geom_errorbar(aes(ymin=YMin,ymax=YMax),width=.25) +
geom_line(data=agr_item,aes(group=weak_adjective,color=weak_adjective),alpha=.5) +
xlab("Speaker knowledge") +
ylab("Mean rating (lower='no'=implicature)") +
facet_wrap(~condition_prosody,nrow=1)
ggsave("../graphs/means_item.pdf",width=7.5)
# get condition means and plot by whether or not comprehension question was anwered correctly
agr = d %>%
group_by(condition_knowledge,condition_prosody,not_paid_attention) %>%
summarize(Mean = mean(response), CILow = ci.low(response), CIHigh = ci.high(response)) %>%
ungroup() %>%
mutate(YMin = Mean - CILow, YMax = Mean + CIHigh, condition_prosody = fct_relevel(condition_prosody, c("control","RFR","neutral","filler")))
dodge = position_dodge(.9)
ggplot(agr, aes(x=condition_prosody, y=Mean, fill=condition_knowledge)) +
geom_bar(stat="identity",color="black",position=dodge) +
geom_errorbar(aes(ymin=YMin,ymax=YMax),width=.25,position=dodge) +
xlab("Speaker knowledge") +
ylab("Mean rating (lower='no'=implicature)") +
facet_wrap(~not_paid_attention,nrow=1)
ggsave("../graphs/means_subj_byquestion.pdf")
## analysis ##
# exclude uninteresting filler/control conditions and center predictors
cd = d %>%
filter(condition_prosody %in% c("neutral","RFR")) %>%
mutate(condition_prosody = fct_drop(condition_prosody)) %>%
mutate(ccondition_knowledge = myCenter(condition_knowledge), ccondition_prosody = myCenter(condition_prosody),cnot_paid_attention = myCenter(not_paid_attention),ctrial = myCenter(trial)) %>%
droplevels()
# see contrasts so you know how to interpret coefficients:
contrasts(cd$condition_knowledge)
contrasts(cd$condition_prosody)
contrasts(cd$not_paid_attention)
m = lmer(response~ccondition_knowledge*ccondition_prosody+ctrial + (1+ccondition_prosody+ccondition_knowledge|workerid) + (1+ccondition_prosody|weak_adjective), data=cd)
summary(m)
m.att = lmer(response~ccondition_knowledge*ccondition_prosody+cnot_paid_attention+ctrial + (1+ccondition_prosody+ccondition_knowledge|workerid) + (1+ccondition_prosody|weak_adjective), data=cd)
summary(m.att)
table(cd$weak_adjective,cd$condition_knowledge,cd$condition_prosody)
ranef(m)
|
e8e8383f8837844980a6d060c043c04575ca7dc9 | 708858a0fb308cccd01caeeb8072be5da96f651a | /preprocessing.R | c7fdf746fab89c1b98ecb03356e119a2230ecc9b | [] | no_license | ja-thomas/Consulting | fbc85cd3459fe74b611188f3752f563ad33287b0 | 18f61c3bfc65087c653faa9c0b89a2302321f64d | refs/heads/master | 2021-01-18T15:23:11.127232 | 2015-08-21T11:39:51 | 2015-08-21T11:39:51 | 32,466,487 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,732 | r | preprocessing.R | options(scipen = 999)
library(MoCap)
library(data.table)
library(plyr)
setwd("~/Uni/Consulting/Code/")
load("../Data/data_processed.RData")
#remove cases with negative z-values
data <- data[!(person=="Proband 3" & course_Id %in% c("Vor zurueck","Laufen (2)","Drehen gegen UZS"))]
data[, timestamp:=as.character(timestamp)]
setkey(data, joint_Nr,course_Id, person, sensorId, timestamp)
#merge angle Values
load("../Data/angledata_new.Rdata")
angle_data$timestamp <- as.character(angle_data$timestamp)
angle_data$sensorId <- as.character(angle_data$sensorId)
angle_data$person <- as.character(angle_data$person)
angle_data$course_Id <- as.character(angle_data$course_Id)
angle_data <- data.table(angle_data)
setkey(angle_data, course_Id, person, sensorId, timestamp)
data_full <- merge(data, angle_data, by.x = c("joint_Nr","course_Id", "person", "sensorId", "timestamp"))
setkey(data_full, joint_Nr, course_Id, person, sensorId, timestamp)
#periphery
data_full[, azimut := acos(position_z/sqrt(position_x^2 + position_z^2))]
data_full[, elevation := acos(position_y/sqrt(position_y^2 + position_z^2))]
#bone length
load("../Data/Bone_error.RData")
Knochenvariable$timestamp <- as.character(Knochenvariable$timestamp)
Knochenvariable$sensorId <- as.character(Knochenvariable$sensorId)
Knochenvariable$person <- as.character(Knochenvariable$person)
Knochenvariable$course_Id <- as.character(Knochenvariable$course_Id)
Knochenvariable <- data.table(Knochenvariable)
setkey(Knochenvariable,joint_Nr, course_Id, person, sensorId, timestamp)
data_full <- merge(data_full, Knochenvariable,
by = c("joint_Nr","course_Id", "person", "sensorId", "timestamp"))
save(data_full, file = "../Data/data_full.RData")
|
1f466d93dafef4f5804f05e12ed229235b973809 | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/caffsim/examples/caffPkparam.Rd.R | 558f4be60881a6fb1abbc7fcc66ffccd9e37e7bd | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 222 | r | caffPkparam.Rd.R | library(caffsim)
### Name: caffPkparam
### Title: Create a dataset for simulation of single dose of caffeine
### Aliases: caffPkparam
### ** Examples
caffPkparam(Weight = 20, Dose = 200, N = 20)
caffPkparam(20,500)
|
b10570bf2676e65cdce7295669125420e1a71cbc | 24b869a14a81cc34735ae3e03c9dfc7e78df5749 | /convert_bear_snp_data_to_vcf.R | ee7a69e5fb4a198c5a63a0bd34cc3df010a65b1c | [
"MIT"
] | permissive | griffinp/polarbear | bcf1db914f26057520ddd6b556f78faec13fd4c5 | 35d51a852a0a2b415a5e1e614b9690bc532ab053 | refs/heads/master | 2020-09-21T20:35:28.037305 | 2016-08-25T01:45:23 | 2016-08-25T01:45:23 | 66,513,404 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,637 | r | convert_bear_snp_data_to_vcf.R | # This is a script to convert the custom SNP genotype format for the
# files from Liu et al. (2014) 'Population genomics reveal
# recent speciation and rapid evolutionary adaptation in
# polar bears" doi:10.1016/j.cell.2014.03.054
# to VCF format.
# The files were downloaded from http://gigadb.org/dataset/view/id/100008/
# files look like this:
# chromo position ref anc major minor #major #minor knownEM pK-EM EG01 WG01 EG02 EG03 EG04 WG02 EG05 WG03 WG04 WG05 EG06 WG06 WG07 WG08 WG09 WG10 WG11 WG12
# scaffold79 945 A A A G 6 30 0.16672GG GG AG GG AG GG GG GG GG GG AA GG GG GG AG GG GG AG
# cols 1, 2, 5, 6 give first 4 cols of vcf format
# count total columns
##### FUNCTIONS #####
convert_genotype <- function(genotype_string_column, maj_min){
two_alleles <- strsplit(genotype_string_column, split='')
first_base <- sapply(two_alleles, "[[", 1)
#print(first_base)
second_base <- sapply(two_alleles, "[[", 2)
first_call <- as.numeric(first_base==maj_min[,2])
second_call <- as.numeric(second_base==maj_min[,2])
full_call <- paste(first_call, "/", second_call, sep="")
return(full_call)
}
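A minimal check of the conversion logic on made-up genotypes (the core of `convert_genotype` is restated, without the unused matrix, so the snippet runs standalone):

```r
convert_genotype <- function(genotype_string_column, maj_min){
  two_alleles <- strsplit(genotype_string_column, split = '')
  first_base  <- sapply(two_alleles, "[[", 1)
  second_base <- sapply(two_alleles, "[[", 2)
  first_call  <- as.numeric(first_base  == maj_min[, 2])
  second_call <- as.numeric(second_base == maj_min[, 2])
  paste(first_call, "/", second_call, sep = "")
}
toy_geno    <- c("GG", "AG", "AA")                    # two alleles per site
toy_maj_min <- cbind(major = rep("A", 3), minor = rep("G", 3))
convert_genotype(toy_geno, toy_maj_min)
# "1/1" "0/1" "0/0"
```

Each 1 counts a copy of the minor allele (column 2 of `maj_min`), giving the usual homozygous-minor `1/1`, heterozygous `0/1`, homozygous-major `0/0` VCF genotype calls.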
#####################
setwd(dir = "~/Documents/Teaching/polar_bear_data")
desired_scaffold <- "scaffold37"
polar_data <- read.csv("polar_bear.pooled.snp.txt", sep="\t",
stringsAsFactors=FALSE, header=TRUE)
# SUBSET by scaffold immediately
polar_data <- subset(polar_data, polar_data[,1]==desired_scaffold)
polar_col_no <- ncol(polar_data)
polar_maj_min <- as.matrix(polar_data[,5:6])
polar_ind_data <- as.matrix(polar_data[,10:polar_col_no])
colnames(polar_ind_data)
# getting just 3 individuals per pop
polar_ind_data <- polar_ind_data[,c(1,3,4,2,6,8)]
### import and process brown bear data
brown_data <- read.csv("brown_bear.pooled.snp.txt", sep="\t",
stringsAsFactors=FALSE, header=TRUE)
# SUBSET by scaffold immediately
brown_data <- subset(brown_data, brown_data[,1]==desired_scaffold)
brown_col_no <- ncol(brown_data)
brown_maj_min <- as.matrix(brown_data[,5:6])
brown_ind_data <- as.matrix(brown_data[,10:brown_col_no])
colnames(brown_ind_data)
# getting just 3 individuals per pop
brown_ind_data <- brown_ind_data[,c(1,2,3,8,9,10)]
### MERGING THE POLAR BEAR AND BROWN BEAR GENOTYPE DATA
brown_to_keep <- brown_data[which(brown_data$position%in%polar_data$position),]
polar_to_keep <- polar_data[which(polar_data$position%in%brown_to_keep$position),]
brown_and_polar <- data.frame(brown_to_keep[,c(1,2,5,6,10:12,17:19)],polar_to_keep[,c(10,12,13,11,15,17)])
brown_and_polar_converted <- apply(brown_and_polar[,5:ncol(brown_and_polar)], MARGIN=2, FUN=convert_genotype,
maj_min=as.matrix(brown_and_polar[,3:4]))
colnames(brown_and_polar_converted) <- paste(rep(c("brown", "polar"), each=6), colnames(brown_and_polar_converted), sep="_")
brown_and_polar_output_vcf <- cbind(brown_and_polar[,c(1,2)], c("."), brown_and_polar[,c(3,4)], c("."),
FILTER="PASS", c("."), FORMAT="GT",
brown_and_polar_converted)
colnames(brown_and_polar_output_vcf) <- c("#CHROM", "POS", "ID", "REF", "ALT",
"QUAL", "FILTER", "INFO", "FORMAT", colnames(brown_and_polar_converted))
preheader <- '##fileformat=VCFv4.0\n##fileDate=20160406\n##source=Liu_et_al_2014_doi:10.1016/j.cell.2014.03.054\n##phasing=unphased\n##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype"'
write.table(preheader, file="brown_and_polar_bear_scaffold37_snps.vcf", quote=FALSE,
row.names=FALSE, col.names=FALSE)
write.table(brown_and_polar_output_vcf, file="brown_and_polar_bear_scaffold37_snps.vcf", append=TRUE, quote=FALSE,
row.names=FALSE, col.names=TRUE, sep="\t")
polar_converted <- apply(data.frame(polar_ind_data), MARGIN=2, FUN=convert_genotype, maj_min=polar_maj_min)
polar_output_vcf <- cbind(polar_data[,c(1,2)], c("."), polar_data[,c(5,6)], c("."),
FILTER="PASS", c("."), FORMAT="GT",
polar_converted)
colnames(polar_output_vcf) <- c("#CHROM", "POS", "ID", "REF", "ALT",
"QUAL", "FILTER", "INFO", "FORMAT", colnames(polar_converted))
preheader <- '##fileformat=VCFv4.0\n##fileDate=20160406\n##source=Liu_et_al_2014_doi:10.1016/j.cell.2014.03.054\n##phasing=unphased\n##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype"'
write.table(preheader, file="polar_bear_scaffold37_snps.vcf", quote=FALSE,
row.names=FALSE, col.names=FALSE)
write.table(polar_output_vcf, file="polar_bear_scaffold37_snps.vcf", append=TRUE, quote=FALSE,
row.names=FALSE, col.names=TRUE, sep="\t")
brown_converted <- apply(data.frame(brown_ind_data), MARGIN=2, FUN=convert_genotype, maj_min=brown_maj_min)
brown_output_vcf <- cbind(brown_data[,c(1,2)], c("."), brown_data[,c(5,6)], c("."),
FILTER="PASS", c("."), FORMAT="GT",
brown_converted)
colnames(brown_output_vcf) <- c("#CHROM", "POS", "ID", "REF", "ALT",
"QUAL", "FILTER", "INFO", "FORMAT", colnames(brown_converted))
preheader <- '##fileformat=VCFv4.0\n##fileDate=20160406\n##source=Liu_et_al_2014_doi:10.1016/j.cell.2014.03.054\n##phasing=unphased\n##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype"'
write.table(preheader, file="brown_bear_scaffold37_snps.vcf", quote=FALSE,
row.names=FALSE, col.names=FALSE)
write.table(brown_output_vcf, file="brown_bear_scaffold37_snps.vcf", append=TRUE, quote=FALSE,
row.names=FALSE, col.names=TRUE, sep="\t")
|
7becfcdacd66c456f642bedee0cb6d88ec14cb02 | ddc2b096e681398f576a95e40c7fd366b65f50a2 | /SDPSimulations/SurveySummaries.R | 3a0f7d0536e219659f2141de21268b57dbc2e565 | [] | no_license | sbellan61/SDPSimulations | f334d96743c90d657045a673fbff309106e45fce | cfc80b116beafabe3e3aed99429fb03d58dc85db | refs/heads/master | 2021-03-27T20:48:25.857117 | 2017-09-19T20:13:37 | 2017-09-19T20:13:37 | 21,144,447 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 22,566 | r | SurveySummaries.R | ######################################################################
# Create a data frame with one row per country (dframe) or per
## survey (dframe.s) with accompanying summary characteristics.
######################################################################
rm(list=ls()) # clear workspace
if(grepl('tacc', Sys.info()['nodename'])) setwd('/home1/02413/sbellan/DHSProject/SDPSimulations/')
#load("../DHSFitting/data files/ds.name.Rdata") # country names
load("../DHSFitting/data files/ds.nm.all.Rdata") # country names
load("../DHSFitting/data files/allDHSAIS.Rdata") # DHS data
#load("../DHSFitting/data files/alldhs.Rdata") # DHS data
load("../DHSFitting/data files/epic.Rdata") # Infectious HIV prevalence in men
head(dat,2)
outdir <- file.path('results','PrevFigs')
if(!file.exists(outdir)) dir.create(outdir)
hazs <- c("bmb","bfb","bme","bfe","bmp","bfp")
####################################################################################################
## initialize dframe: one row per country: name; serodiscordant proportion (SDP; psdc), male SDP, female SDP
dframe <- data.frame(matrix(NA,length(ds.nm), 15))
colnames(dframe) <- c('country','psdc','dhsprev','pmsdc','pfsdc','curprev','peak','peak.nart','tpeak','tpeak.nart',
'pms','pfs','rms','rfs', 'col')
dframe$col <- rainbow(nrow(dframe))
dframe$country <- levels(dat$group)
####################################################################################################
## plot epidemic curves for each country, get epidemic peaks (of prevalence & infectious
## prevalence--the latter assumes the proportion infected but on ART are not infectious). Also fill
## in dframe as we go.
pdf(file.path(outdir,'epics.pdf'))
plot(0,0,type='n',xlim = c(1975,2015), ylim = c(0,.5), bty='n', xlab='', ylab = 'female prevalence')
cols <- rainbow(10)
ord <- c(which(ds.nm=='WA'), which(ds.nm!='WA')) # order so WA first so plot looks best
for(cc in ord) {## for each country
temp <- dat[dat$group==dframe$country[cc],] ## select that country's data
dframe$psdc[cc] <- sum(temp$ser %in% 2:3) / sum(temp$ser %in% 1:3) ## SDP
dframe$dhsprev[cc] <- (sum(temp$ser %in% 2:3) + 2*sum(temp$ser==1)) / (2*sum(temp$ser %in% 1:4)) ## DHS prevalence
dframe$pmsdc[cc] <- sum(temp$ser %in% 2) / sum(temp$ser %in% 1:3) ## M SDP
dframe$pfsdc[cc] <- sum(temp$ser %in% 3) / sum(temp$ser %in% 1:3) ## F SDP
dframe$curprev[cc] <- mean(prev.inf[temp$tint,temp$epic.ind]) ## infectious prevalence at mean interview time
## add epidemic info (female curves)
if(dframe$country[cc] !='WA') { # non-WA countries
lines((1:nrow(prev.inf))/12+1900, prev.inf[,temp$epic.ind[1]], col = cols[cc], lty = 2) # infectious prevalence
lines((1:nrow(prev.inf))/12+1900, prev.all[,temp$epic.ind[1]], col = cols[cc], lty = 1) # all prevalence
## peak height
dframe$peak[cc] <- max(prev.inf[,temp$epic.ind[1]])
dframe$peak.nart[cc] <- max(prev.all[,temp$epic.ind[1]])
## months between epidemic peak & mean(survey time)
dframe$tpeak[cc] <- mean(temp$tint) - which.max(prev.inf[,temp$epic.ind[1]]) #infectious
dframe$tpeak.nart[cc] <- mean(temp$tint) - which.max(prev.all[,temp$epic.ind[1]]) #all
points(which.max(prev.inf[,temp$epic.ind[1]])/12 + 1900, dframe$peak[cc], col = cols[cc], pch = 21) # add points
points(which.max(prev.all[,temp$epic.ind[1]])/12 + 1900, dframe$peak.nart[cc], col = cols[cc], pch = 19)
}else{ ## get weighted average of epidemic peak for W Africa
maxes <- apply(prev.inf[,unique(temp$epic.ind)], 2, max) # max infectious prevalence for countries
tpeaks <- apply(prev.inf[,unique(temp$epic.ind)], 2, which.max) # peak infectious prevalence timing for all
maxes.nart <- apply(prev.all[,unique(temp$epic.ind)], 2, max) # ditto for all prevalence
tpeaks.nart <- apply(prev.all[,unique(temp$epic.ind)], 2, which.max)
weights <- table(temp$epic.nm)/nrow(temp) # weight these by # of couples from each country
dframe$peak[cc] <- sum(maxes*weights)
dframe$tpeak[cc] <- mean(temp$tint) - sum(tpeaks*weights)
dframe$peak.nart[cc] <- sum(maxes.nart*weights)
dframe$tpeak.nart[cc] <- mean(temp$tint) - sum(tpeaks.nart*weights)
for(cc.wa in unique(temp$epic.ind)) { ## lines for each country
lines((1:nrow(prev.inf))/12+1900, prev.inf[,cc.wa], col = cols[cc])
lines((1:nrow(prev.inf))/12+1900, prev.all[,cc.wa], col = cols[cc], lty = 2)
}
points(sum(tpeaks*weights)/12+1900, dframe$peak[cc], col = cols[cc], pch = 19) # points for all countries
    points(sum(tpeaks.nart*weights)/12+1900, dframe$peak.nart[cc], col = cols[cc], pch = 21)
}
## now add demography summaries to the data
dframe$pms[cc] <- mean(temp$tmar - temp$tms)/12 ## premarital duration of sexual activity
dframe$pfs[cc] <- mean(temp$tmar - temp$tfs)/12
dframe$rms[cc] <- mean(temp$tint - temp$tmar)/12 ## marital duration of sexual activity
dframe$rfs[cc] <- mean(temp$tint - temp$tmar)/12
}
dframe$tpeak <- dframe$tpeak/12 ## transform to years
dframe$tpeak.nart <- dframe$tpeak.nart/12
dframe$psdc.m <- dframe$pmsdc / (dframe$pmsdc + dframe$pfsdc) # proportion of SDC couples that are M+
legend('topleft', leg = dframe$country, col = cols, bty = 'n', pch = 19)
dev.off()
## log-ratio of time spent in couple to time spent sexually active before
dframe$logmrelpre <- log(dframe$rms / dframe$pms)
dframe$logfrelpre <- log(dframe$rfs / dframe$pfs)
save(dframe, file=file.path("../DHSFitting/data files/dframe.Rdata"))
####################################################################################################
## initialize dframe.s: one row per survey
dframe.s <- data.frame(matrix(NA,length(unique(dat$ds)), 20))
colnames(dframe.s) <- c('ds', 'country','group','yr',
'psdc','dhsprev','pmsdc','pfsdc',
'psdc.f','pmsdc.f','pfsdc.f',
'curprev','peak','peak.nart','tpeak','tpeak.nart',
'pms','pfs','rms','rfs')
dframe.s$ds <- unique(dat$ds)
dframe.s <- dframe.s[order(dframe.s$ds),]
for(cc in 1:nrow(dframe.s)) {
temp <- dat[dat$ds==dframe.s$ds[cc],]
dframe.s$group[cc] <- temp$group[1]
bfmar <- temp$m.fun == 1 & temp$f.fun == 1 ## both partners in first union?
dframe.s$psdc[cc] <- sum(temp$ser %in% 2:3) / sum(temp$ser %in% 1:3) # SDP
dframe.s$dhsprev[cc] <- (sum(temp$ser %in% 2:3) + 2*sum(temp$ser==1)) / (2*sum(temp$ser %in% 1:4)) ## DHS prevalence
dframe.s$pmsdc[cc] <- sum(temp$ser %in% 2) / sum(temp$ser %in% 1:3) # M SDP
dframe.s$pfsdc[cc] <- sum(temp$ser %in% 3) / sum(temp$ser %in% 1:3) # F SDP
## ditto above but first marriage only
dframe.s$psdc.f[cc] <- sum(temp$ser[bfmar] %in% 2:3) / sum(temp$ser[bfmar] %in% 1:3) # SDP
dframe.s$pmsdc.f[cc] <- sum(temp$ser[bfmar] %in% 2) / sum(temp$ser[bfmar] %in% 1:3) # M SDP
dframe.s$pfsdc.f[cc] <- sum(temp$ser[bfmar] %in% 3) / sum(temp$ser[bfmar] %in% 1:3) # F SDP
dframe.s$yr[cc] <- mean(temp$tint)/12+1900
dframe.s$country[cc] <- temp$epic.nm[1]
## add hiv prevalence at mean survey date
dframe.s$prev[cc] <- prev.inf[mean(temp$tint),temp$epic.ind[1]]
}
dframe.s
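## Toy illustration of the serostatus coding used throughout (ser: 1 = M+ F+,
## 2 = M+ F-, 3 = M- F+, 4 = M- F-; see the legend labels in the presentation
## plots below). SDP is the proportion of couples with at least one infected
## partner that are serodiscordant:
ser.toy <- c(1, 2, 2, 3, 4, 4)
sum(ser.toy %in% 2:3) / sum(ser.toy %in% 1:3) ## 3 discordant / 4 affected = 0.75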
####################################################################################################
## Plot SDP by survey year for all couples & for those only with both in their first union (to see
## if there are SDP changes over time).
cols <- rainbow(length(unique(dframe.s$country)))
dframe.s$col <- NA
pdf(file.path(outdir, 'sdp by survey (after filter).pdf'), w = 8, h = 8)
plot(0,0, type = 'n', xlim = c(2000,2013), ylim = c(0,1), xlab = 'year', ylab = 'SDP')
for(cc in 1:length(unique(dframe.s$country))) {
tcount <- unique(dframe.s$country)[cc] # country name
dframe.s$col[dframe.s$country==tcount] <- cols[cc] # color
points(dframe.s$yr[dframe.s$country == tcount], dframe.s$psdc[dframe.s$country == tcount], # all
type = 'b', pch = 21, col = dframe.s$col[dframe.s$country == tcount])
points(dframe.s$yr[dframe.s$country == tcount], dframe.s$psdc.f[dframe.s$country == tcount], # both in first union
type = 'b', pch = 19, col = dframe.s$col[dframe.s$country == tcount])
}
legend('topleft', unique(dframe.s$country), col = cols, pch = 19, bty = 'n')
legend('bottomright', c('all','first marriage'), pch = c(21,19), bty = 'n')
dev.off()
save(dframe.s, file=file.path("../DHSFitting/data files","dframe.s.Rdata"))
######################################################################
####################################################################################################
## Now make same types of summary plots but with all the data before pre-processing to remove
## couples with missing/inconsistent data for model fitting.
load("../DHSFitting/data files/allRawDHSAIS.Rdata") # raw data
##################################################
## First need to fix Ethiopian survey dates due to their calendar shift.
## Ethiopia 2005: "The survey was fielded from April 27 to August 30, 2005." (p. 11, Ethiopia DHS
## 2005 report) difference in dates is then
diff2005 <- 105*12 + 4 - min(allraw[allraw$ds=="Ethiopia 2005","tint"])
allraw[allraw$ds=="Ethiopia 2005",c("tms","tfs","tmar","tint")] <-
allraw[allraw$ds=="Ethiopia 2005",c("tms","tfs","tmar","tint")] + diff2005
## Ethiopia 2011: "All data collection took place over a five-month period from 27 December 2010 to 3
## June 2011." (p. 10, Ethiopia DHS 2011 report)
diff2011 <- 110*12 + 12 - min(allraw[allraw$ds=="Ethiopia 2011","tint"])
allraw[allraw$ds=="Ethiopia 2011",c("tms","tfs","tmar","tint")] <-
allraw[allraw$ds=="Ethiopia 2011",c("tms","tfs","tmar","tint")] + diff2011
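## Optional sanity check: after the shift, interview dates should fall within
## the fieldwork windows quoted above (tint is in months since 1900):
range(allraw[allraw$ds=="Ethiopia 2005","tint"]) / 12 + 1900
range(allraw[allraw$ds=="Ethiopia 2011","tint"]) / 12 + 1900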
##################################################
## get discordance rates by countries for raw data (draw)
draw <- data.frame(matrix(NA,length(ds.nm), 14))
colnames(draw) <- c('country','psdc','dhsprev','pmsdc','pfsdc','curprev','peak','peak.nart','tpeak','tpeak.nart',
'pms','pfs','rms','rfs')
draw$country <- unique(dat$group)
draw <- draw[order(draw$country),]
for(cc in 1:nrow(draw)) {
temp <- allraw[allraw$group==draw$country[cc],]
draw$psdc[cc] <- sum(temp$ser %in% 2:3, na.rm=T) / sum(temp$ser %in% 1:3, na.rm=T) # SDP
draw$dhsprev[cc] <- (sum(temp$ser %in% 2:3, na.rm=T) + 2*sum(temp$ser==1, na.rm=T)) / (2*sum(temp$ser %in% 1:4, na.rm=T)) ## DHS prevalence
draw$pmsdc[cc] <- sum(temp$ser %in% 2, na.rm=T) / sum(temp$ser %in% 1:3, na.rm=T) # M SDP
draw$pfsdc[cc] <- sum(temp$ser %in% 3, na.rm=T) / sum(temp$ser %in% 1:3, na.rm=T) # F SDP
## premarital duration of sexual activity
draw$pms[cc] <- mean(temp$tmar - temp$tms, na.rm=T)/12
draw$pfs[cc] <- mean(temp$tmar - temp$tfs, na.rm=T)/12
## marital duration of sexual activity
draw$rms[cc] <- mean(temp$tint - temp$tmar, na.rm=T)/12
draw$rfs[cc] <- mean(temp$tint - temp$tmar, na.rm=T)/12
}
draw$psdc.m <- draw$pmsdc / (draw$pmsdc + draw$pfsdc)
draw <- draw[order(draw$country),]
head(draw,3)
save(draw, file="../DHSFitting/data files/draw.Rdata")
####################################################################################################
## get discordance rates by surveys for raw data (draw.s), do for first mar only too.
draw.s <- data.frame(matrix(NA,length(unique(allraw$ds)), 18))
colnames(draw.s) <- c('country','ds', 'tint.yr',
'psdc','dhsprev','pmsdc','pfsdc',
'psdc.f','pmsdc.f','pfsdc.f',
'pms','pfs','rms','rfs',
'pms.f','pfs.f','rms.f','rfs.f')
draw.s$ds <- unique(allraw$ds)
draw.s <- draw.s[order(draw.s$ds),]
show <- c('Mnumber.of.unions', 'Fnumber.of.unions')
for(cc in 1:nrow(draw.s)) {
temp <- allraw[allraw$ds==draw.s$ds[cc],]
draw.s$country[cc] <- temp$epic.nm[1]
draw.s$psdc[cc] <- sum(temp$ser %in% 2:3, na.rm=T) / sum(temp$ser %in% 1:3, na.rm=T) # SDP
draw.s$dhsprev[cc] <- (sum(temp$ser %in% 2:3, na.rm=T) + 2*sum(temp$ser==1, na.rm=T)) / (2*sum(temp$ser %in% 1:4, na.rm=T)) ## DHS prevalence
draw.s$pmsdc[cc] <- sum(temp$ser %in% 2, na.rm=T) / sum(temp$ser %in% 1:3, na.rm=T) # M SDP
draw.s$pfsdc[cc] <- sum(temp$ser %in% 3, na.rm=T) / sum(temp$ser %in% 1:3, na.rm=T) # F SDP
## for both partners in first marriage only
bfmar <- temp$Mnumber.of.unions == 'Once' & temp$Fnumber.of.unions == 'Once'
draw.s$psdc.f[cc] <- sum(temp$ser[bfmar] %in% 2:3, na.rm=T) / sum(temp$ser[bfmar] %in% 1:3, na.rm=T) # SDP
draw.s$pmsdc.f[cc] <- sum(temp$ser[bfmar] %in% 2, na.rm=T) / sum(temp$ser[bfmar] %in% 1:3, na.rm=T) # M SDP
draw.s$pfsdc.f[cc] <- sum(temp$ser[bfmar] %in% 3, na.rm=T) / sum(temp$ser[bfmar] %in% 1:3, na.rm=T) # F SDP
## premarital duration of sexual activity
draw.s$pms[cc] <- mean(temp$tmar - temp$tms, na.rm=T)/12
draw.s$pfs[cc] <- mean(temp$tmar - temp$tfs, na.rm=T)/12
## marital duration of sexual activity
draw.s$rms[cc] <- mean(temp$tint - temp$tmar, na.rm=T)/12
draw.s$rfs[cc] <- mean(temp$tint - temp$tmar, na.rm=T)/12
## premarital duration of sexual activity (first union)
draw.s$pms.f[cc] <- mean(temp$tmar[bfmar] - temp$tms[bfmar], na.rm=T)/12
draw.s$pfs.f[cc] <- mean(temp$tmar[bfmar] - temp$tfs[bfmar], na.rm=T)/12
## marital duration of sexual activity (first union)
draw.s$rms.f[cc] <- mean(temp$tint[bfmar] - temp$tmar[bfmar], na.rm=T)/12
draw.s$rfs.f[cc] <- mean(temp$tint[bfmar] - temp$tmar[bfmar], na.rm=T)/12
## mean interview date
draw.s$tint.yr[cc] <- mean(temp$tint, na.rm = T) / 12 + 1900
## add hiv prevalence at mean survey date
draw.s$prev[cc] <- prev.all[mean(temp$tint),temp$epic.ind[1]]
}
draw.s$psdc.m <- draw.s$pmsdc / (draw.s$pmsdc + draw.s$pfsdc)
## add country colors to data frame
cols <- rainbow(length(unique(draw.s$country)))
draw.s$col <- NA
for(cc in 1:length(unique(draw.s$country))) {
tcount <- unique(draw.s$country)[cc] # country name
draw.s$col[draw.s$country==tcount] <- cols[cc] # color
}
head(draw.s,5)
save(draw.s, file="../DHSFitting/data files/draw.s.Rdata")
####################################################################################################
## Plot SDP by survey year for all couples & for those only with both in their first union (to see
## if there are SDP changes over time). *RAW DATA*
cols <- rainbow(length(unique(draw.s$country)))
draw.s$col <- NA
pdf(file.path(outdir,'sdp by survey (unfiltered data).pdf'), w = 8, h = 8)
plot(0,0, type = 'n', xlim = c(2000,2013), ylim = c(0,1), xlab = 'year', ylab = 'SDP')
for(cc in 1:length(unique(draw.s$country))) {
tcount <- unique(draw.s$country)[cc] # temp country
draw.s$col[draw.s$country==tcount] <- cols[cc] # assign color
points(draw.s$tint.yr[draw.s$country == tcount], draw.s$psdc[draw.s$country == tcount],
type = 'b', pch = 21, col = draw.s$col[draw.s$country == tcount])
points(draw.s$tint.yr[draw.s$country == tcount], draw.s$psdc.f[draw.s$country == tcount],
type = 'b', pch = 19, col = draw.s$col[draw.s$country == tcount])
}
legend('topleft', unique(draw.s$country), col = cols, pch = 19, bty = 'n')
legend('bottomright', c('all','first marriage'), pch = c(21,19), bty = 'n')
dev.off()
####################################################################################################
## Miscellaneous serostatus plots for presentations with raw data
####################################################################################################
for(col in c('black','white')) {
presdir <- file.path('results', 'PresentationFigures')
if(!file.exists(presdir)) dir.create(presdir)
######################################################################
pdf(file.path(presdir,paste0("serostatus breakdown ",col,".pdf")), width = 5.5, height = 4)
par(mar=c(5,6,.5,.5), fg = col, col.axis = col, col.main = col, col.lab = col)
tab1 <- xtabs(~group + ser, allraw)
tab1 <- tab1[nrow(tab1):1,]
total <- rowSums(tab1)
total <- as.matrix(total)[,rep(1,4)]
tab1 <- tab1/total
tab1
rownames(tab1)[rownames(tab1)=="WA"] <- "West Africa"
cols <- c("yellow","green","purple","dark gray")
barplot(t(tab1), beside = F, names.arg = rownames(tab1), horiz = T, las = 2,
col = cols, xlab = "proportion of couples in serogroup", border = NA)
dev.off()
######################################################################
pdf(file.path(presdir,paste0("serostatus breakdown leg ",col,".pdf")), width = 6, height = 2.5)
par(mar=rep(0,4), fg = col, col.axis = col, col.main = col, col.lab = col)
plot(0,0, type = "n", bty = "n", xlab = "", ylab = "", axes = F)
legend("top", c("M+ F+", "M+ F-", "M- F+", "M- F-"),
pch = 15, bty = "n", col = cols, cex = 1.5)
dev.off()
######################################################################
pdf(file.path(presdir,paste0("serostatus breakdown neg only ",col,".pdf")), width = 8, height = 5)
par(mar=c(5,6,.5,.5), fg = col, col.axis = col, col.main = col, col.lab = col)
tab1 <- xtabs(~group + ser, allraw)
tab1 <- tab1[nrow(tab1):1,]
total <- rowSums(tab1)
total <- as.matrix(total)[,rep(1,4)]
tab1 <- tab1/total
tab1
rownames(tab1)[rownames(tab1)=="WA"] <- "West Africa"
cols <- c(NA,"green","purple","dark gray")
barplot(t(tab1), beside = F, names.arg = rownames(tab1), horiz = T, las = 2,
col = cols, xlab = "proportion of couples in serogroup", border = NA)
dev.off()
######################################################################
######################################################################
pdf(file.path(presdir,paste0("serostatus breakdown pos only ",col,".pdf")), width = 5.5, height = 4)
par(mar=c(5,6,.5,.5), fg = col, col.axis = col, col.main = col, col.lab = col)
tab1 <- xtabs(~group + ser, allraw)
tab1 <- tab1[nrow(tab1):1,]
total <- rowSums(tab1)
total <- as.matrix(total)[,rep(1,4)]
tab1 <- tab1/total
tab1
rownames(tab1)[rownames(tab1)=="WA"] <- "West Africa"
cols <- c("yellow","green","purple",NA)
barplot(t(tab1), beside = F, names.arg = rownames(tab1), horiz = T, las = 2,
col = cols, xlab = "proportion of couples in serogroup", border = NA)
dev.off()
######################################################################
######################################################################
pdf(file.path(presdir,paste0("serostatus breakdown pos only perc ",col,".pdf")), width = 5.5, height = 4)
par(mar=c(5,6,.5,.5), fg = col, col.axis = col, col.main = col, col.lab = col)
tab1 <- xtabs(~group + ser, allraw)
tab1 <- tab1[nrow(tab1):1,]
total <- rowSums(tab1)
total <- as.matrix(total)[,rep(1,4)]
tab1 <- tab1/total
tab1
rownames(tab1)[rownames(tab1)=="WA"] <- "West Africa"
cols <- c("yellow","green","purple",NA)
bb <- barplot(t(tab1), beside = F, names.arg = rownames(tab1), horiz = T, las = 2,
col = cols, xlab = "proportion of couples in serogroup", border = NA)
text(rev(1-tab1[,4]),rev(bb), paste(signif(dframe$psdc*100,3),'%',sep=''), pos = 4)
dev.off()
######################################################################
######################################################################
pdf(file.path(presdir,paste0("serostatus proportion ",col,".pdf")), width = 5.5, height = 4)
par(mar=c(5,6,.5,.5), fg = col, col.axis = col, col.main = col, col.lab = col)
tab2 <- xtabs(~group + ser, allraw)
tab2 <- tab2[nrow(tab2):1,]
total <- rowSums(tab2[,-4])
total <- as.matrix(total)[,rep(1,3)]
tab2 <- tab2[,-4]/total
tab2 <- tab2[,3:1]
rownames(tab2)[rownames(tab2)=="WA"] <- "West Africa"
cols <- rev(c("yellow","green","purple"))
bb <- barplot(t(tab2), beside = F, names.arg = rownames(tab2), horiz = T, las = 2,
col = cols, xlab = "serodiscordance proportion (SDP)", border = NA)
dev.off()
######################################################################
######################################################################
pdf(file.path(presdir,paste0("serostatus proportion only sdc ",col,".pdf")), width = 5.5, height = 4)
par(mar=c(5,6,.5,.5), fg = col, col.axis = col, col.main = col, col.lab = col)
tab2 <- xtabs(~group + ser, allraw)
tab2 <- tab2[nrow(tab2):1,]
total <- rowSums(tab2[,-4])
total <- as.matrix(total)[,rep(1,3)]
tab2 <- tab2[,-4]/total
tab2 <- tab2[,3:1]
rownames(tab2)[rownames(tab2)=="WA"] <- "West Africa"
cols <- rev(c(NA,"green","purple"))
bb <- barplot(t(tab2), beside = F, names.arg = rownames(tab2), horiz = T, las = 2,
col = cols, xlab = "serodiscordance proportion (SDP)", border = NA)
dev.off()
######################################################################
## lognormal risk distributions
for(sd in c(.5, 1, 2, 3)) {
pdf(file.path(presdir,paste('log normal sd', sd, col, '.pdf')), w = 3, h = 3)
par(mar=c(5,.5,.5,.5), fg = col, col.axis = col, col.main = col, col.lab = col)
curve(dnorm(x, 0, sd), from = -8, 8, axes = F, ylab = '', xlab = expression(Z[i]), ylim = c(0,.8))
axis(1, at = log(10^c(-3:3)), label = 10^c(-3:3), las = 2)
dev.off()
}
}
####################################################################################################
## prevalence vs SDP
col <- 'white'
cols <- rainbow(length(ds.nm))
pdf(file.path(presdir,paste('prevalence vs SDP', col, '.pdf')), w = 7, h = 4.4)
layout(matrix(c(1,2),nr=1,nc=2), w = c(1,.38))
par(mar=c(5,4,.5,.5), fg = col, col.axis = col, col.main = col, col.lab = col)
plot(dframe$peak, dframe$psdc, pch = 19, col = cols, xlim = c(0,.3), ylim = c(0,1), xlab = 'peak HIV prevalence', ylab = 'serodiscordant proportion', main ='',
bty='n', axes = F)
mod <- lm(psdc~peak, dframe)
abline(mod)
fx <- function(x) { 2*x*(1-x) / ( 2*x*(1-x) + x^2 ) }
#curve(fx, from = 0, to = .3, col = 'yellow', add=T)
axis(1, at = seq(0,.3, b = .05))
axis(2, at = seq(0,1,b=.25), las = 2)
ord <- rev(order(dframe$psdc))
par(mar=c(.2,0,0,0))
plot.new()
legend('top', ds.nm[ord], col=cols[ord], title = 'top to bottom', cex = .8, inset = .1, pch = 19) #, bty = 'n')
graphics.off()
|
cbfd994937d8b501e09ced43b80885cfc539f1a8 | cf90e90591d94cb82e07ba983fe5792d82aa344b | /CLibraryMrEEngine/SanovTheoremMrE.R | 714d9542a69ed1ee71f6818257b4b52cb450925d | [] | no_license | ProvenSecureSolutionsLLC/CFT2-MrE | 959a2627e4aa46ed9a5797536d3bb0bfa2a45cc5 | 552ac25a43f78dd992cae944d965aa825f7d3203 | refs/heads/master | 2021-05-29T14:14:41.080368 | 2013-11-15T14:05:02 | 2013-11-15T14:05:02 | 12,085,574 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,767 | r | SanovTheoremMrE.R |
SanovTheoremDistribution <- function(x)
{
var1 = as.double(x[1])
var2 = as.double(x[2])
return(
(var1 ^ as.integer(11)) *
(var2 ^ as.integer(2)) *
((as.double(1) - var1 - var2) ^ as.integer(7))
)
}
SanovTheoremSamplingConstraint <- function(x)
{
# Definition of this constraint is:
# if it is negative, MrE Engine will refuse to accept freshly sampled sigma point,
# if positive or zero, the sigma point passes validation
var1 = as.double(x[1])
var2 = as.double(x[2])
return(as.double(1) - var1 - var2)
}
SanovTheoremConstraint <- function(x)
{
var1 = as.double(x[1])
var2 = as.double(x[2])
return(
(
(as.double(1) * var1) +
(as.double(2) * var2) +
(as.double(3) * (as.double(1) - var1 - var2)) -
as.double(2.3)
)
)
}
SanovTheoremConstraint (c(0.1911204835906995,0.542772234863242))
SanovTheoremDistribution (c(0.1911204835906995,0.542772234863242))
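# Quick check of the sampling constraint: it should be non-negative for any
# point inside the two-dimensional simplex and negative outside of it.
stopifnot(SanovTheoremSamplingConstraint(c(0.2, 0.3)) >= 0)
stopifnot(SanovTheoremSamplingConstraint(c(0.8, 0.8)) < 0)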
dyn.load("MrE_Engine.dll")
.Call("ConstructMrEEngine")
minValue = as.double(0)
maxValue = as.double(1)
.Call("AddVariableWithRange",minValue, maxValue)
.Call("AddVariableWithRange",minValue, maxValue)
#cat("Variables count is ", .Call("GetVariablesCount"),"\n")
.Call("AddNewConstraint",SanovTheoremConstraint ,new.env())
#cat("Constraints count is ", .Call("GetConstraintsCount"),"\n")
.Call("SetDistribution",SanovTheoremDistribution, new.env(), as.integer(50000), SanovTheoremSamplingConstraint)
.Call("GetLagrangeConstraintsSparsities")
warnings()
#.Call("GetDensitiesAtSigmaPoints",as.integer(1),as.integer(10))
#.Call("GetConstraintValueAtSigmaPoints",as.integer(1), as.integer(1),as.integer(10))
.Call("Execute")
.Call("GetLagrangeMultipliers")
#.Call("GetLagrangeConstraintsSparsities")
.Call("DisposeMrEEngine")
dyn.unload("MrE_Engine.dll")
|
2056ea29c8fee9dbd0da27bd5d29645519c8da08 | 52338354bc84c147cde8caf40aa71f1f77462973 | /man/export_table.Rd | 2ffe4e2683bf5f30ed5059a705a7ebeeacf1d92b | [] | no_license | intiluna/flyio | b420d43724ca5a18011f0f2b1de49c75caf904d6 | a226fedcdb59818b50445383ffc15378d9cb6205 | refs/heads/master | 2023-05-25T00:46:06.349036 | 2020-02-11T10:55:21 | 2020-02-11T10:55:21 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 1,276 | rd | export_table.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/export_table.R
\name{export_table}
\alias{export_table}
\title{Write csv, Excel files, txt}
\usage{
export_table(x, file, FUN = data.table::fwrite,
data_source = flyio_get_datasource(),
bucket = flyio_get_bucket(data_source), dir = flyio_get_dir(),
delete_file = TRUE, show_progress = FALSE, ...)
}
\arguments{
\item{x}{variable name}
\item{file}{path of the file to be written to}
\item{FUN}{the function using which the file is to write}
\item{data_source}{the name of the data source, if not set globally. s3, gcs or local}
\item{bucket}{the name of the bucket, if not set globally}
\item{dir}{the directory to store intermediate files}
\item{delete_file}{logical. to delete the file to be uploaded}
\item{show_progress}{logical. Shows progress of the upload operation.}
\item{...}{other parameters for the FUN function defined above}
}
\value{
No output
}
\description{
Write csv, Excel files, txt
}
\examples{
# for data on local
export_table(iris, paste0(tempdir(), "/iris.csv"), FUN = write.csv, data_source = "local")
\dontrun{
# for data on cloud
flyio_set_datasource("gcs")
flyio_set_bucket("your-bucket-name")
export_table(iris, "iris.csv", write.csv, dir = tempdir())
}
}
|
4d549c99795d97e6849c2bc699d397d2256ada9d | 623862ef7ffa7051fea79bf81ce820f1c4b7dea0 | /plot4.R | bbd63fe3153b978913b731f47f1c283cceecee5d | [] | no_license | ilakya-selvarajan/ExData_Plotting1 | aad39d43f1b8652a891878fca63a44fab6a812e4 | 5323a92e62e9b881aeb319492750c6b381793e53 | refs/heads/master | 2021-01-18T05:36:10.605715 | 2015-01-11T17:01:24 | 2015-01-11T17:01:24 | 28,930,677 | 0 | 0 | null | 2015-01-07T19:42:13 | 2015-01-07T19:42:13 | null | UTF-8 | R | false | false | 1,034 | r | plot4.R | png('plot4.png') #To save plot1 in PNG format
par(mfrow = c(2,2)) #To produce the plots in 2 rows and 2 columns
#To convert the Date and Time variables to Date/Time classes
my.data$day <- strptime(paste(my.data$Date, my.data$Time), "%d/%m/%Y %H:%M:%S")
#used multiple base plots
with(my.data, {
plot(day,Global_active_power, type='l',ylab="Global Active Power",xlab="") #plot1
plot(day,Voltage, type='l',ylab="Voltage",xlab="datetime") #plot2
plot(day, Sub_metering_1, type = "l", xlab = "", ylab = "Energy sub metering") #plot3
points(day, Sub_metering_2, type = "l", col = "red")
points(day, Sub_metering_3, type = "l", col = "blue")
legend("topright", lty=1, bty="n", col = c("black", "red", "blue"), legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"))
plot(day,Global_reactive_power, type="l", ylab="Global Reactive Power",xlab="datetime") #plot4
})
dev.off() #without this command, the file doesn't save properly
|
deff2083e19d95c67f1d2e827dcd8f6c91ec47f9 | 8f0c3b7855f87d048a22f78c46699b1e09c7b6dd | /Onestep normal Return_Final.R | 3d07f5afd1b48426024f7d3c8d8f0fb726b0487c | [] | no_license | yangh9596/Momentum | f57127472bcd03936cacc9eb70b130cdbfa93d64 | d2796e3f9ac63cf12336155cdd40009b079c82f8 | refs/heads/master | 2021-01-01T04:29:42.787963 | 2018-07-15T18:45:08 | 2018-07-15T18:45:08 | 57,429,491 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,198 | r | Onestep normal Return_Final.R | # ======== Part 0 Markdowns ============
setwd("/Users/yangh/Desktop/HKUST/FINA4414/Project") # please set working directory by yourself
# PACKAGES NEEDED
library("zoo")
library("backtest")
# ======== Part 1 INPUT DATA ============
dat <- read.csv("prices.csv",
sep = ",",
header = TRUE,
stringsAsFactors = FALSE)
dat <- dat[,-1]
# creat zoo objects
DateIndex <- as.Date(dat$Date)
zoo.date <- zoo(dat[,-1], order.by = DateIndex)
# ======= Part 2 GENERATE RETURN DATA ========
zoo.month <- zoo( dat[,-1], order.by = as.yearmon(DateIndex) )
monthly.return <- aggregate(dat$Close, by = list(dat$Ticker,index(zoo.month)),
FUN = function(df){sum(diff(df), na.rm = T)/df[[1]]})
monthly.return <- data.frame(Month = monthly.return[,2], Ticker = monthly.return[,1],
Return = monthly.return[,3], stringsAsFactors = F)
# ======== Part 3 DATA CLEAN =================
# 1. At least 24 months' data
# ticker <- unique(dat$Ticker)
# trading.day <- aggregate(zoo.date$Close, by = list(zoo.date$Ticker),
# FUN = function(x){sum(is.na(x))} )
# trading.day <- as.data.frame(trading.day)
# trading.day$Ticker <- row.names(trading.day)
# rm.ticker <- trading.day[which(trading.day$trading.day > 1687),2]
# for(element in rm.ticker){
# zoo.date <- zoo.date[ -which(zoo.date$Ticker == element),]
# }
# at last there are 2489 stocks in zoo.date
# 2. Exclude long tail data (for further development)
#
#
# ======== Part 4 GENERATE BACKTEST OBJECT ==========
# transform into "backtest" data form and select time period
monthly.bt <- data.frame("Month" = monthly.return$Month,
"Ticker" = monthly.return$Ticker,
stringsAsFactors = FALSE)
st <- which(monthly.bt$Month == "Jan 2007")[1]
ed <- which(monthly.bt$Month == "Nov 2013")[1] - 1
monthly.bt <- monthly.bt[seq(from = st, to = ed, by = 1),]
rownames(monthly.bt) <- NULL
# create column names
ret.names <- vector( length = 8)
for(i in c(3,6,9,12)){
ret.names[i/3] <- paste("ret",i,0, sep = ".")
ret.names[i/3 + 4] <- paste("ret", 0, i, sep = ".")
}
x <- split(monthly.return[,3], monthly.return$Month, drop = FALSE)
df <- as.data.frame(x)
df <- data.frame(Ticker = unique(monthly.return$Ticker),
df,
stringsAsFactors = F)
monthly.ret <- df
# store past return 3M, 6M, 9M, 12M
aVector <- vector(mode = "numeric", length = dim(monthly.bt)[1])
for(element in ret.names[1:4]){
k <- which(ret.names == element)*3
for(i in 1:82){
for(j in 1:dim(monthly.ret)[1]){
range <- seq(from = i + 13 - k, to = i + 12, by = 1)
# when comes to the last loop, this = i*j, or 82 month*2465 stocks = 202130 rows
idx <- (i-1)*dim(monthly.ret)[1] + j
aVector[idx] <- apply(monthly.ret[j,range], MARGIN = 1,
FUN = function(x){(cumprod(x+1)[k]- 1)/k} )
# notice: i*j = number of rows in monthly.bt
}
}
monthly.bt[[element]] <- aVector
}
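# Quick illustration of the formula used above: (cumprod(x + 1)[k] - 1) / k is
# the k-month cumulative return averaged per month. E.g. three months of 1%:
ex <- c(0.01, 0.01, 0.01)
(cumprod(ex + 1)[3] - 1) / 3 # ~0.0101, i.e. about 1.01% per month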
# store the most recent (1-month) past return
aVector <- vector(mode = "numeric", length = dim(monthly.bt)[1])
k <- 1
for(i in 1:82){
for(j in 1:dim(monthly.ret)[1]){
range <- i + 12
# when comes to the last loop, this = i*j, or 82 month*2465 stocks = 202130 rows
idx <- (i-1)*dim(monthly.ret)[1] + j
aVector[idx] <- monthly.ret[j,range]
# notice: i*j = number of rows in monthly.bt
}
}
monthly.bt$ret.1.0 <- aVector
# store future return, 3M,6M,9M,12M
aVector <- vector(mode = "numeric", length = dim(monthly.bt)[1])
for(element in ret.names[5:8]){
k <- which(ret.names[5:8] == element)*3 # k = 3,6,9,12
for(i in 1:82){
for(j in 1:dim(monthly.ret)[1]){
range <- seq(from = i + 12, to = i + k + 11, by = 1) # range of range = k
idx <- (i-1)*dim(monthly.ret)[1] + j
aVector[idx] <- apply(monthly.ret[j,range], MARGIN = 1,
FUN = function(x){(cumprod(x+1)[k]- 1)/k} )
# notice: i*j = number of rows in monthly.bt
}
}
monthly.bt[[element]] <- aVector
}
# Save as csv
write.csv(monthly.bt, "/Users/yangh/Desktop/HKUST/FINA4414/Project/monthlyBTOneStep.csv") |
73fff95e292e0a22c1b061bec53ee46f3ce2a18b | 804019e5175b5637b7cd16cd8ff5914671a8fe4e | /may20_scenario4.R | 15d6ee3d031cec06ef4f22e5e2b6b179740fe55f | [] | no_license | jejoenje/gmse_vary | dde8573adeff9d9c6736a7a834a0a8942c5fcc5d | 057a36896b3b77a0a3e56d1436abe7a45e38cc2d | refs/heads/master | 2022-11-28T11:31:54.294640 | 2020-08-03T10:12:42 | 2020-08-03T10:12:42 | 197,375,284 | 0 | 0 | null | 2019-12-10T21:08:45 | 2019-07-17T11:21:27 | HTML | UTF-8 | R | false | false | 3,429 | r | may20_scenario4.R | rm(list=ls())
library(GMSE)
source("helpers.R")
source("gmse_vary_helpers.R")
source("parasMay2020.R")
### Setup output folder:
# Output scenario name:
scenario_name = "may20_scenario4"
### Scenario-specific parameters
ov = 0.5
# Setup/check output folder:
out_folder = paste0("sims/",scenario_name,"/")
if(!dir.exists(out_folder)) {
dir.create(out_folder)
}
### START SIMULATION RUN
# Create an output name (timestamp) for this sim
out_name = gsub("\\.", "",format(Sys.time(),"%Y%m%d%H%M%OS6"))
### Start empty output list:
sims = list()
# Initialise simulation run with first time step (i.e. time = 0)
#print("Time step 1")
sims[[1]] = gmse_apply(get_res = "Full",
land_dim_1 = dim_1,
land_dim_2 = dim_2,
land_ownership = lo,
res_birth_type = rbt,
res_death_type = rdt,
lambda = l,
remove_pr = rpr,
res_birth_K = rbK,
res_death_K = rdK,
manage_target = mt,
RESOURCE_ini = r_ini,
stakeholders = s,
res_consume = r_cons,
agent_view = a_view,
times_observe = t_obs,
scaring = scare,
culling = cull,
tend_crops = tend_crop,
tend_crop_yld = tcy,
minimum_cost = mc,
ownership_var = ov
)
### Set user budgets according to yield:
start_budgets = set_budgets(cur = sims[[1]], yield_type = "linear", yv = yield_value)
sims[[1]]$AGENTS[2:(stakeholders+1),17] = start_budgets
# Set manager budget according to user budgets:
sims[[1]]$AGENTS[1,17] = set_man_budget(u_buds = start_budgets, type = "prop", p = man_bud_prop)
# Output:
#print_sim_status(1, sims[[1]])
# Loop through nunmber of years
for(i in 2:n_years) {
### Move resources according to yield
#sim_old = move_res(sim_old, gmse_paras)
### Try to run next time step
sim_new = try({gmse_apply(old_list = sims[[i-1]], get_res = "Full")},silent = T)
### Check output of next time step; if there are errors (extinctions or no obs), skip to the next sim.
### (The following function call should call break() if class(sim_new) == "try-error").
check_ext = check_gmse_extinction(sim_new, silent = T)
### So this shoudld only happen if "check_gmse_extinction(sim_new)" has not called break().
### So if NOT extinct, append output and reset time step:
if(check_ext == "ok") {
sims[[i]] = sim_new
### Re-set user budgets according to current yield
new_b = set_budgets(cur = sims[[i]],
yield_type = yield_type,
yv = yield_value)
new_b[new_b>10000] = 10000
new_b[new_b<minimum_cost] = minimum_cost
sims[[i]]$AGENTS[2:(stakeholders+1),17] = new_b
### Set next time step's manager's budget, according to new user budgets
sims[[i]]$AGENTS[1,17] = set_man_budget(u_buds = new_b, type = "prop", p = man_bud_prop)
# Output:
#print_sim_status(i, sims[[i]])
rm(sim_new)
} else {
break()
}
}
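### Optional post-run summary (assumes the GMSE "Full" output used above, in
### which each stored result carries a RESOURCES matrix with one row per
### individual): the true population size at each completed time step is
pop_traj = sapply(sims, function(s) nrow(s$RESOURCES))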
### Save simulation:
outfile = paste0(out_folder,out_name,".Rds")
saveRDS(sims, file = outfile)
print(sprintf("Done %d", length(list.files(out_folder))))
|
a618bd88fa9ae89dba7aa0be158ac518251cd2fe | ea8b2c95027f7ffb809c517fd2fc7206fd978ddb | /Postwork_1/Postwork_01.R | f37c1f1e42bf680006906c096f5aecab9ddb061f | [] | no_license | MontseMoreno/BEDU-7_R | 5628aff327188530c0134fef1f1661271d83414b | e99387b52b208fb5a48b70adb6b4adf5d1aa8723 | refs/heads/main | 2023-05-30T16:17:12.530486 | 2021-06-25T01:23:29 | 2021-06-25T01:23:29 | 380,118,936 | 0 | 0 | null | 2021-06-25T04:00:31 | 2021-06-25T04:00:30 | null | UTF-8 | R | false | false | 1,131 | r | Postwork_01.R | # 1.1 Importa los datos de soccer de la temporada 2019/2020 de la primera división de la liga española
df.liga <- read.csv("./data/SP1.csv")
# 1.2. Extrae las columnas que contienen los números de goles anotados por los equipos que jugaron en casa (FTHG)
# y los goles anotados por los equipos que jugaron como visitante (FTAG)
(golesAnotados <- df.liga[ ,c("FTHG","FTAG")])
# 1.3. Check how the table function works in R by running ?table in the console
(t.goles <- table("Local"= golesAnotados$FTHG,"Visitante"=golesAnotados$FTAG))
# 1.3.1 The (marginal) probability that the home team scores x goals (x = 0, 1, 2, ...)
(local <- rowSums(t.goles))
(local.prob <- round(local/sum(local),3))
# 1.3.2 The (marginal) probability that the away team scores y goals (y = 0, 1, 2, ...)
(visitante <- colSums(t.goles))
(visitante.prob <- round(visitante/sum(visitante),3))
# 1.3.3 The (joint) probability that the home team scores x goals and
# the away team scores y goals (x = 0, 1, 2, ..., y = 0, 1, 2, ...)
round(t.goles/sum(t.goles),3)
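# Optional sanity check: each set of marginal probabilities should sum to ~1
# (up to the rounding applied above), and the joint table sums to exactly 1
# before rounding:
sum(local.prob); sum(visitante.prob)
sum(t.goles / sum(t.goles))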
|
984abd053b549e0bd7bb49eb3b7326b358853783 | 0ea5ca6a823bc4f1da0732659309ffddf21bec4e | /man/preparePsPlot.Rd | 2de304d69d6a92ab8166013e7b86de8922882ef0 | [
"Apache-2.0"
] | permissive | louisahsmith/EvidenceSynthesis | 04b74449820c924fae46ac91298b0950fdeb5c5d | aee99c04c8d6869fb0bf4d46a3662ea94a768b8a | refs/heads/master | 2023-06-23T00:28:39.821302 | 2021-02-10T14:34:50 | 2021-02-10T14:34:50 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 1,950 | rd | preparePsPlot.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/CohortMethod.R
\name{preparePsPlot}
\alias{preparePsPlot}
\title{Prepare to plot the propensity score distribution}
\usage{
preparePsPlot(data, unfilteredData = NULL, scale = "preference")
}
\arguments{
\item{data}{A data frame with at least the two columns described below}
\item{unfilteredData}{To be used when computing preference scores on data from which subjects have
already been removed, e.g. through trimming and/or matching. This data frame
should have the same structure as \code{data}.}
\item{scale}{The scale of the graph. Two scales are supported: \code{scale = 'propensity'} or
\code{scale = 'preference'}. The preference score scale is defined by Walker et
al. (2013).}
}
\value{
A data frame describing the propensity score (or preference score) distribution at 100
equally-spaced points.
}
\description{
Prepare to plot the propensity (or preference) score distribution. It computes the distribution, so
the output does not contain person-level data.
}
\details{
The data frame should have a least the following two columns:
\itemize{
\item \strong{treatment} (integer): Column indicating whether the person is in the treated (1) or comparator
(0) group.
\item \strong{propensityScore} (numeric): Propensity score.
}
}
\examples{
# Simulate some data for this example:
treatment <- rep(0:1, each = 100)
propensityScore <- c(rnorm(100, mean = 0.4, sd = 0.25), rnorm(100, mean = 0.6, sd = 0.25))
data <- data.frame(treatment = treatment, propensityScore = propensityScore)
data <- data[data$propensityScore > 0 & data$propensityScore < 1, ]
preparedPlot <- preparePsPlot(data)
}
\references{
Walker AM, Patrick AR, Lauer MS, Hornbrook MC, Marin MG, Platt R, Roger VL, Stang P, and
Schneeweiss S. (2013) A tool for assessing the feasibility of comparative effectiveness research,
Comparative Effective Research, 3, 11-20
}
\seealso{
\link{plotPreparedPs}
}
|
a6cf2135c3c2842cde65f15df2699e1dea15812e | 495b69d415c88dd851fad7f169301070b4fadd38 | /SensusR/RProject/man/plot.RunningAppsDatum.Rd | 08de621488880acb350b588c29e66e6524bfbe6b | [
"Apache-2.0"
] | permissive | fandw06/sensus | f63321210bf7948d7c6eddb01ddebd1a5bce9a6a | 51f9588221d71a32bdab933a84fac487ba616809 | refs/heads/master | 2021-01-15T17:56:23.079171 | 2016-06-21T11:46:58 | 2016-06-21T11:46:58 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 473 | rd | plot.RunningAppsDatum.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/SensusR.R
\name{plot.RunningAppsDatum}
\alias{plot.RunningAppsDatum}
\title{Plot running apps data.}
\usage{
\method{plot}{RunningAppsDatum}(x, ...)
}
\arguments{
\item{x}{Apps data.}
\item{...}{Other plotting parameters.}
}
\description{
Plot running apps data.
}
\examples{
data = read.sensus.json(system.file("extdata", "example.data.txt", package="SensusR"))
plot(data$RunningAppsDatum)
}
|
379d8acb320bc0a74ff6dc4b2ddbe135f82a3a30 | 7218dd41cbe126a617485cd463f9d6a93dfc1eb9 | /man/ordinal_me.Rd | 0d0d72b72391806d21c273ffd5fe00a49729425a | [] | no_license | almartin82/MAP-visuals | 0c171b37978cefed94d46336457e0ef8a672c669 | 28102126dc4b40c6566a6f20248d4f871613253a | refs/heads/master | 2021-01-10T19:22:04.324129 | 2015-06-11T18:38:42 | 2015-06-11T18:38:42 | 10,979,358 | 1 | 1 | null | 2015-02-05T19:41:56 | 2013-06-26T21:16:19 | R | UTF-8 | R | false | false | 368 | rd | ordinal_me.Rd | % Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/ordinal_me.R
\name{ordinal_me}
\alias{ordinal_me}
\title{Ordinal me}
\usage{
ordinal_me(number)
}
\arguments{
\item{number}{a number}
}
\value{
returns a string
}
\description{
\code{ordinal_me} appends the appropriate ending ('st', 'nd' etc) to make an integer into an ordinal.
}
|
25b0d0e61e5790976fac6ce4b56d292ddcfd1c13 | 2a17c73ec61ca0c808f70f41e2e23bb4da53483d | /Plot2.R | 03842b6017faa9e4a541dce83a2b43f243448425 | [] | no_license | dberma15/ExData_Plotting1 | 633b39d3995be874e219eef439727b292ed6759e | d0add817416a8108ab0c7b8a01dcd91de787990d | refs/heads/master | 2020-12-25T04:36:43.634068 | 2015-02-08T16:11:28 | 2015-02-08T16:11:28 | 30,482,421 | 0 | 0 | null | 2015-02-08T05:52:23 | 2015-02-08T05:52:23 | null | UTF-8 | R | false | false | 860 | r | Plot2.R | setwd("C:\\Users\\daniel\\Documents\\R")
# read the full dataset; the header row comes in as the first data row
householdPowerConsumption <- read.table("household_power_consumption.txt", sep = ";")
# keep only the observations from 1 and 2 February 2007
isFirst <- householdPowerConsumption[, 1] == "1/2/2007"
isSecond <- householdPowerConsumption[, 1] == "2/2/2007"
isFirstOrSecond <- isFirst | isSecond
householdPowerConsumptionSubset <- householdPowerConsumption[isFirstOrSecond, ]
# take the column names from the header row that was read in as data
colnames(householdPowerConsumptionSubset) <- as.matrix(householdPowerConsumption[1, ])
householdPowerConsumptionSubset[, 1] <- as.Date(householdPowerConsumptionSubset[, 1], format = "%d/%m/%Y")
# combine Date and Time into a single POSIXlt timestamp
timeStamp <- strptime(paste(householdPowerConsumptionSubset[, 1], householdPowerConsumptionSubset[, 2]), "%Y-%m-%d %H:%M:%S")
# plot Global Active Power over time and save as PNG
png(filename = "plot2.png", width = 480, height = 480)
plot(timeStamp, as.numeric(as.matrix(householdPowerConsumptionSubset[, 3])), type = "l", pch = 20, lty = "solid", xlab = "", ylab = "Global Active Power (kilowatts)")
dev.off()
|
8f21516e50766352c919730f3e85a588db6d9681 | b7c5386994fe7139f04fa1b94fba85d827a1a22e | /scripts/covars/10_countries.R | 8249d419d5f5d7430c943be31beaed66bda92afa | [] | no_license | jejakobsen/msc | f8dcbb20105344f300adbceb8614f8fadc6e8407 | 24cb60603f0510d061eb67b1191b96a7a53187ea | refs/heads/main | 2023-05-05T08:25:08.315474 | 2021-05-28T21:09:22 | 2021-05-28T21:09:22 | 371,814,617 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 9,910 | r | 10_countries.R | setwd(dirname(rstudioapi::getSourceEditorContext()$path))
### Define transformation function for parameters ###
### and negative log-likelihood function ###
transform.est.params <- function(N.countries, p){
param_matrix <- matrix(NA, nrow=11, ncol=N.countries)
for (i in 1:N.countries){
param_matrix[1, i] <- exp(-p[i])
param_matrix[2, i] <- 1/(1+exp(-p[N.countries + i]))
param_matrix[3, i] <- (1-param_matrix[2, i])/(1+exp(-p[2*N.countries + i]))
param_matrix[4, i] <- exp(-p[3*N.countries + i])
param_matrix[5, i] <- 1/(1+exp(-p[4*N.countries + i]))
param_matrix[6, i] <- p[5*N.countries + i]
param_matrix[7, i] <- p[6*N.countries + 1]
param_matrix[8, i] <- p[6*N.countries + 2]
param_matrix[9, i] <- p[6*N.countries + 3]
param_matrix[10, i] <- p[6*N.countries + 4]
param_matrix[11, i] <- p[6*N.countries + 5]
}
return(param_matrix)
}
neg.loglik.nb.ingarch <- function(p, data){
Y <- data[[1]]
X <- data[[2]]
inds.start <- data[[3]]
inds.stop <- data[[4]]
N.countries <- length(inds.start)
gamma.1 <- p[N.countries*6+1]
gamma.2 <- p[N.countries*6+2]
gamma.3 <- p[N.countries*6+3]
gamma.4 <- p[N.countries*6+4]
gamma.5 <- p[N.countries*6+5]
nll <- 0
for (i in 1:N.countries){
Y.i <- Y[inds.start[i]:inds.stop[i]]
X.1.i <- X[inds.start[i]:inds.stop[i], 1]
X.2.i <- X[inds.start[i]:inds.stop[i], 2]
X.3.i <- X[inds.start[i]:inds.stop[i], 3]
X.4.i <- X[inds.start[i]:inds.stop[i], 4]
N.i <- length(Y.i)
beta.0.i <- exp(-p[i])
alpha.1.i <- 1/(1+exp(-p[N.countries + i]))
beta.1.i <- (1-alpha.1.i)/(1+exp(-p[N.countries*2 + i]))
m.1.i <- exp(-p[N.countries*3 + i])
pi.i <- 1/(1+exp(-p[N.countries*4 + i]))
gamma.0.i <- p[N.countries*5 + i]
XGamma.i.t <- gamma.0.i + gamma.1*X.1.i + gamma.2*(X.1.i)^2 + gamma.3*X.2.i + gamma.4*X.3.i + gamma.5*X.4.i
h.i <- exp(-XGamma.i.t)
M.i <- rep(NA, N.i)
M.i[1] <- m.1.i
for (t in 2:N.i){
M.i[t] <- beta.0.i + alpha.1.i * Y.i[t-1] + beta.1.i * M.i[t-1] + h.i[t]
r.i.t <- M.i[t]*(pi.i/(1-pi.i))
nll <- nll - lgamma(Y.i[t]+r.i.t) + lgamma(Y.i[t]+1) + lgamma(r.i.t) - Y.i[t]*log(1-pi.i) - r.i.t*log(pi.i)
}
}
return(nll)
}
##########################################################################
### Prepare data, parameters and sigmas ###
source("prep_data.R")
countries <- c("colombia", "congo", "ethiopia",
"iraq", "mali", "myanmar",
"nigeria", "pakistan", "sleone",
"uganda")
data <- prep_data(countries)
source("prep_params_alt.R")
params <- prep_params_alt(10, beta.0=0.01, alpha.1=0.05, beta.1=0.85,
m.1=0.5, pi=0.05, gamma.0=2,
gammas=c(-1, 1, 0.5, 1, -1))
sigmas <- prep_sigmas_alt(10, sig.beta.0=0.1, sig.alpha.1=0.001,
sig.beta.1=0.001, sig.m.1=0.1,
sig.pi=0.001, sig.gamma.0=0.1,
sig.gammas=c(0.1, 0.1, 0.1, 0.1, 0.1))
############################################################################
### Fit Model and extract parameters ###
source("fit_countries.R")
set.seed(219481241)
best.fit <- fit_countries(neg.loglik.nb.ingarch, data, params,
N.iter=10, sigma=sigmas, hess=TRUE)
# Negative log-likelihood of 10 iterations
#[1] 29039.40 28975.12 28957.34 28940.70 28931.87
#[6] 28930.78 28930.28 28929.82 28926.53 28922.61
est.params <- transform.est.params(10, best.fit$estimate)
#############################################################################
### Compute country-specific AIC ###
source("../exploration/aic_bic.R")
ll <- function(p, Y, X){
beta.0 <- p[1]
alpha.1 <- p[2]
beta.1 <- p[3]; m.1 <- p[4]
pi <- p[5]
g.0 <- p[6]; g.1 <- p[7]; g.2 <- p[8]; g.3 <- p[9]; g.4 <- p[10]; g.5 <- p[11]
x.1 <- X[, 1]
x.2 <- X[, 2]
x.3 <- X[, 3]
x.4 <- X[, 4]
h <- exp(-(g.0 + g.1*x.1 + g.2*x.1^2 + g.3*x.2 + g.4*x.3 + g.5*x.4))
N <- length(Y)
m <- rep(NA, N)
m[1] <- m.1
nll <- 0
for (i in 2:N){
m[i] <- beta.0 + alpha.1 * Y[i-1] + beta.1 * m[i-1] + h[i]
r.i <- m[i]*(pi/(1-pi))
nll <- nll - lgamma(Y[i]+r.i) + lgamma(Y[i]+1) + lgamma(r.i) - Y[i]*log(1-pi) - r.i*log(pi)
}
return(-nll)
}
AIC.normalized.matrix <- function(ll.func, param.matrix, data, N.excluded, country.names){
inds.start <- data[[3]]
inds.stop <- data[[4]]
N.countries <- length(param.matrix[1, ])
N.params <- length(param.matrix[, 1])
AIC.matrix <- matrix(NA, nrow=N.countries, ncol=1)
colnames(AIC.matrix) <- "AIC"
rownames(AIC.matrix) <- country.names
for (i in 1:N.countries){
Y <- data[[1]][inds.start[i]:inds.stop[i]]
X <- data[[2]][inds.start[i]:inds.stop[i], ]
params <- param.matrix[, i]
log.lik <- ll.func(params, Y, X)
AIC.matrix[i] <- round(AIC.normalized(log.lik, N.params, length(Y), N.excluded))
}
return(AIC.matrix)
}
aic.matrix <- AIC.normalized.matrix(ll, est.params, data, 1, countries)
aic.matrix
# AIC
#colombia 9173
#congo 5730
#ethiopia 6425
#iraq 9900
#mali 2702
#myanmar 5597
#nigeria 4661
#pakistan 7064
#sleone 2830
#uganda 4019
##############################################################################
### gamma.0i parameters ###
gamma.0.matrix <- matrix(est.params[6, ], nrow=10, ncol=1)
colnames(gamma.0.matrix) <- "gamma.0"
rownames(gamma.0.matrix) <- countries
gamma.0.matrix <- round(gamma.0.matrix, 2)
gamma.0.matrix
# gamma.0
# colombia -6.73
# congo 6.12
# ethiopia 4.74
# iraq -5.52
# mali 9.44
# myanmar 1.08
# nigeria -4.06
# pakistan -1.01
# sleone 23.90
# uganda 14.88
###############################################################################
### Compute 95% CI of global effects ###
gammas <- best.fit$estimate[61:65]
sigma.gammas <- sqrt(diag(solve(best.fit$hessian)))[61:65]
gammas.low <- gammas - qnorm(0.975)*sigma.gammas
gammas.high <- gammas + qnorm(0.975)*sigma.gammas
gammas.matrix <- matrix(c(gammas, gammas.low, gammas.high), nrow=3, ncol=5, byrow=TRUE)
colnames(gammas.matrix) <- c("g.1", "g.2", "g.3", "g.4", "g.5")
rownames(gammas.matrix) <- c("est", "low", "high")
gammas.matrix[, 1] <- round(gammas.matrix[, 1], 2)
gammas.matrix[, 2] <- round(gammas.matrix[, 2], 2)
gammas.matrix[, 3] <- round(gammas.matrix[, 3], 2)
gammas.matrix[, 4] <- round(gammas.matrix[, 4], 2)
gammas.matrix[, 5] <- round(gammas.matrix[, 5], 2)
gammas.matrix
# g.1 g.2 g.3 g.4 g.5
# est -1.38 1.04 6.22 4.02 -4.12
# low -1.87 0.70 5.25 -1.03 -5.69
# high -0.89 1.39 7.20 9.08 -2.55
###############################################################################
### Plots of time-specific intercept of Mali and Uganda ###
M.i <- function(p, Y, X){
beta.0 <- p[1]
alpha.1 <- p[2]
beta.1 <- p[3]; m.1 <- p[4]
pi <- p[5]
g.0 <- p[6]; g.1 <- p[7]; g.2 <- p[8]; g.3 <- p[9]; g.4 <- p[10]; g.5 <- p[11]
x.1 <- X[, 1]
x.2 <- X[, 2]
x.3 <- X[, 3]
x.4 <- X[, 4]
h <- exp(-(g.0 + g.1*x.1 + g.2*x.1^2 + g.3*x.2 + g.4*x.3 + g.5*x.4))
N <- length(Y)
m <- rep(NA, N)
m[1] <- m.1
nll <- 0
for (i in 2:N){
m[i] <- beta.0 + alpha.1 * Y[i-1] + beta.1 * m[i-1] + h[i]
r.i <- m[i]*(pi/(1-pi))
}
mat <- matrix(c(beta.0 + h, m), nrow=2, ncol=1617, byrow=TRUE)
return(mat)
}
plot.M.i <- function(param.matrix, data, country){
inds.start <- data[[3]]
inds.stop <- data[[4]]
i <- which(countries == country)
Y <- data[[1]][inds.start[i]:inds.stop[i]]
X <- data[[2]][inds.start[i]:inds.stop[i], ]
params <- param.matrix[, i]
mat <- M.i(params, Y, X)
par(mfrow=c(1,3))
plot(1:1617, Y, type='l', main=country)
plot(1:1617, mat[1, ], type='l', main="beta.0 + h")
plot(1:1617, mat[2, ], type='l', main="M.i")
return(mat)
}
extract.beta.0.h <- function(param.matrix, data, country){
inds.start <- data[[3]]
inds.stop <- data[[4]]
i <- which(countries == country)
Y <- data[[1]][inds.start[i]:inds.stop[i]]
X <- data[[2]][inds.start[i]:inds.stop[i], ]
params <- param.matrix[, i]
mat <- M.i(params, Y, X)
return(mat[1,])
}
beta.0.h.mli <- extract.beta.0.h(est.params, data, "mali")
beta.0.h.uga <- extract.beta.0.h(est.params, data, "uganda")
pdf("./figs/mli_uga_covars.pdf", width=10, height=6)
par(mfrow=c(1,2))
plot(1:1617, beta.0.h.mli, type='l', main="i=Mali", ylab="", xlab="t", cex.lab=1.4, cex.axis=1.2)
title(ylab=expression(beta["0,i"] + exp(-gamma[i]^T* Z["i,t"])), line=2.1, cex.lab=1.4)
plot(1:1617, beta.0.h.uga, type='l', main="i=Uganda", ylab="", xlab="t", cex.lab=1.4, cex.axis=1.2)
title(ylab=expression(beta["0,i"] + exp(-gamma[i]^T* Z["i,t"])), line=2.1, cex.lab=1.4)
dev.off()
#########################################################
### NB-INGARCH(1,1) specific parameters ###
est.matrix <- t(est.params[1:5, ])
rownames(est.matrix) <- countries
colnames(est.matrix) <- c("beta.0", "alpha.1", "beta.1", "m.1", "pi")
est.matrix[, 1] <- signif(est.matrix[, 1], 2)
est.matrix[, 2] <- round(est.matrix[, 2], 3)
est.matrix[, 3] <- round(est.matrix[, 3], 3)
est.matrix[, 4] <- round(est.matrix[, 4], 1)
est.matrix[, 5] <- round(est.matrix[, 5], 3)
est.matrix
###############################################
# beta.0 alpha.1 beta.1 m.1 pi
# colombia 5.0e-04 0.038 0.960 5.8 0.040
# congo 5.8e-02 0.050 0.872 0.1 0.007
# ethiopia 1.1e-03 0.000 0.987 261.4 0.001
# iraq 1.5e-01 0.111 0.871 0.0 0.013
# mali 2.2e-02 0.092 0.703 0.1 0.035
# myanmar 1.3e+00 0.126 0.496 0.3 0.023
# nigeria 1.0e-05 0.020 0.980 0.0 0.014
# pakistan 2.3e-02 0.102 0.892 0.0 0.028
# sleone 5.0e-03 0.101 0.899 0.1 0.009
# uganda 4.5e-05 0.041 0.959 1.6 0.018
###############################################
# Only for saving
save.image(file="10_countries_new.RData")
|
1a61c30f36e61a182d1614adaf8a5fbbbbeb6dbf | 79062e15adcb70300c0c857798956ba83093e839 | /man/simulateFactorModelNullsFromSingularValuesAndLoadings.Rd | 2da712f42fc5f782ce39cef30464c1da11e30e53 | [] | no_license | angeella/sanssouci | 0fa2b9d59e22d50fdc00fff844b17568becb4d36 | 9455dd754e21046f6424b2daac4fa9d9c07c4c20 | refs/heads/master | 2020-06-24T00:51:59.064339 | 2019-01-18T16:10:53 | 2019-01-18T16:10:53 | 198,800,090 | 0 | 0 | null | 2019-07-25T09:25:37 | 2019-07-25T09:25:36 | null | UTF-8 | R | false | true | 1,771 | rd | simulateFactorModelNullsFromSingularValuesAndLoadings.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/simulateFactorModelNullsFromSingularValuesAndLoadings.R
\name{simulateFactorModelNullsFromSingularValuesAndLoadings}
\alias{simulateFactorModelNullsFromSingularValuesAndLoadings}
\title{simulateFactorModelNullsFromSingularValuesAndLoadings}
\usage{
simulateFactorModelNullsFromSingularValuesAndLoadings(m, h = numeric(0),
P = Matrix(nrow = m, ncol = length(h)), rho = 0)
}
\arguments{
\item{m}{Number of tests}
\item{h}{A vector of \code{k} singular values associated to each factor}
\item{P}{A \code{m x k} matrix of factor loadings}
\item{rho}{\code{1-rho} is the standard deviation of the noise}
}
\value{
\item{Y}{a vector of \code{m} simulated observations} \item{W}{a
vector of \code{k} factors}
}
\description{
Simulate null hypotheses according to a factor model
}
\examples{
m <- 10
## independent
sim0 <- simulateFactorModelNullsFromSingularValuesAndLoadings(m)
str(sim0)
sum(is.na(simulateFactorModelNullsFromSingularValuesAndLoadings(m)))
S0 <- getFactorModelCovarianceMatrix(m)
image(S0)
## equi-correlated
rho <- 0.2
h <- 1
P <- matrix(1, m, length(h))
sim1 <- simulateFactorModelNullsFromSingularValuesAndLoadings(m, h, P, rho=rho)
str(sim1)
S1 <- getFactorModelCovarianceMatrix(m, h, P, rho=rho)
image(S1)
## 3-factor model
m <- 4*floor(m/4) ## make sure m/4 is an integer
rho <- 0.5
h <- c(0.5, 0.3, 0.2)*m
gamma1 <- rep(1,m)
gamma2 <- rep(c(-1, 1), each=m/2)
gamma3 <- rep(c(-1, 1), times=m/2)
P <- cbind(gamma1, gamma2, gamma3)/sqrt(m)
sim3 <- simulateFactorModelNullsFromSingularValuesAndLoadings(m, h, P, rho=rho)
str(sim3)
S3 <- getFactorModelCovarianceMatrix(m, h, P, rho=rho)
image(S3)
sim3
}
\author{
Gilles Blanchard, Pierre Neuvial and Etienne Roquain
}
|
522ecb19dc93148719bcd943c1b889abf44f9c10 | 0833bdcee54f0a83b086bcd3b15069966601d967 | /R/geom_half_violin.R | 7349d21ad0caa74142796395db91ca02cb60a36a | [] | no_license | oracle5th/gghalves | f83cacb4522817859f62dcc814bc58c40d693fed | 2e6e92c1371957948abbed63d642fc39b8ffab47 | refs/heads/master | 2023-09-05T18:34:24.729530 | 2021-03-06T12:13:58 | 2021-03-06T12:13:58 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,855 | r | geom_half_violin.R | #' Half Violin plot
#'
#' A violin plot is a compact display of a continuous distribution. It is a
#' blend of [geom_boxplot()] and [geom_density()]: a
#' violin plot is a mirrored density plot displayed in the same way as a
#' boxplot.
#'
#' @inheritParams ggplot2::geom_violin
#' @param side The side on which to draw the half violin plot. "l" for left, "r" for right, defaults to "l".
#' @param nudge Add space between the violinplot and the middle of the space allotted to a given factor on the x-axis.
#' @importFrom ggplot2 layer
#' @examples
#' ggplot(iris, aes(x = Species, y = Petal.Width, fill = Species)) +
#' geom_half_violin()
#'
#' ggplot(iris, aes(x = Species, y = Petal.Width, fill = Species)) +
#' geom_half_violin(side = "r")
#' @export
#' @references Hintze, J. L., Nelson, R. D. (1998) Violin Plots: A Box
#' Plot-Density Trace Synergism. The American Statistician 52, 181-184.
geom_half_violin <- function(
mapping = NULL, data = NULL,
stat = "half_ydensity", position = "dodge",
...,
side = "l",
nudge = 0,
draw_quantiles = NULL,
trim = TRUE,
scale = "area",
na.rm = FALSE,
show.legend = NA,
inherit.aes = TRUE) {
layer(
data = data,
mapping = mapping,
stat = stat,
geom = GeomHalfViolin,
position = position,
show.legend = show.legend,
inherit.aes = inherit.aes,
params = list(
side = side,
nudge = nudge,
trim = trim,
scale = scale,
draw_quantiles = draw_quantiles,
na.rm = na.rm,
...
)
)
}
#' @rdname gghalves-extensions
#' @format NULL
#' @usage NULL
#' @importFrom ggplot2 ggproto GeomViolin GeomBoxplot GeomPolygon
#' @export
GeomHalfViolin <- ggproto(
"GeomHalfViolin", GeomViolin,
setup_data = function(data, params) {
x_data <- GeomBoxplot$setup_data(data, NULL)
data$xmin <- x_data$xmin
data$xmax <- x_data$xmax
data
},
draw_group = function(self, data, side = "l", nudge = 0, ..., draw_quantiles = NULL) {
# Find the points for the line to go all the way around
if (side == "l") {
data <- transform(
data,
xminv = x + violinwidth * (xmin - x) - nudge,
xmaxv = x - nudge
)
} else {
data <- transform(
data,
xminv = x + nudge,
xmaxv = x + violinwidth * (xmax - x) + nudge
)
}
# Make sure it's sorted properly to draw the outline
newdata <- rbind(
transform(data, x = xminv)[order(data$y), ],
transform(data, x = xmaxv)[order(data$y, decreasing = TRUE), ]
)
# Close the polygon: set first and last point the same
# Needed for coord_polar and such
newdata <- rbind(newdata, newdata[1,])
# Draw quantiles if requested, so long as there is non-zero y range
if (length(draw_quantiles) > 0 & !scales::zero_range(range(data$y))) {
stopifnot(all(draw_quantiles >= 0), all(draw_quantiles <= 1))
# Compute the quantile segments and combine with existing aesthetics
quantiles <- ggplot2:::create_quantile_segment_frame(data, draw_quantiles)
aesthetics <- data[
rep(1, nrow(quantiles)),
setdiff(names(data), c("x", "y", "group")),
drop = FALSE
]
aesthetics$alpha <- rep(1, nrow(quantiles))
both <- cbind(quantiles, aesthetics)
both <- both[!is.na(both$group), , drop = FALSE]
quantile_grob <- if (nrow(both) == 0) {
zeroGrob()
} else {
GeomPath$draw_panel(both, ...)
}
ggplot2:::ggname("geom_half_violin", grobTree(
GeomPolygon$draw_panel(newdata, ...),
quantile_grob)
)
} else {
ggplot2:::ggname("geom_half_violin", GeomPolygon$draw_panel(newdata, ...))
}
}
) |
d886a1b4df9b3c6b246485344dace2e5f0b69e38 | 04f2d3b686574c02e73ad7e1113afa0c30ae69e2 | /6 - Create Combined Grid Sightings.R | 59c3a1ec6b62bf07aedb22448f6902428b514dc5 | [] | no_license | granterogers/DMAD_Encounter_Rate | b18f25ce2a351ffca7ed8ad90a36f672d3ef2020 | 0aedf547aa22881b482033d0d962d95bf09065fa | refs/heads/main | 2023-03-16T11:30:24.634521 | 2021-03-03T11:46:00 | 2021-03-03T11:46:00 | 344,105,997 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,846 | r | 6 - Create Combined Grid Sightings.R | ## R Script to Create Combined Land & Boat Survey Sightings Grid
## Broken into 7 Stages
##
## 1) Import Empty Hexagonal Grid
## 2) Import Boat Sightings
## 3) Count points in polygon to get Boat Sightings in Each Grid Cell
## 4) Import Land Sightings
## 5) Count points in polygon to get Land Sightings in Each Grid Cell
## 6) Combine Sightings Together to Form Combined Sightings Column
## 7) Output to Combined Sightings Shapefile Grid for Each Season
##
## Author - grant.e.rogers@gmail.com
#Install and Load Required Packages as needed
packages <- c("sf","GISTools")
if (length(setdiff(packages, rownames(installed.packages()))) > 0)
{ install.packages(setdiff(packages, rownames(installed.packages()))) }
invisible(lapply(packages, require, character.only = TRUE))
#Set Working Directory to location of Current R File
tryCatch( expr = {setwd(dirname(rstudioapi::getActiveDocumentContext()$path)) },
error = function(e){ cat("Working Directory Not Set!\n") } )
#Load Empty Survey Grid Shapefile (2x2 Hexagons covering Survey Area)
Survey_Grid <- st_read("DATA/Full_Survey_Area_Grid_With_Weights.gpkg",quiet = TRUE)
#Define Shapefile Output Directory & Create it if it does not already exist
output_directory_root = "OUT//"
output_directory = "OUT//Boat_Land_Sightings_Grid//"
if (!dir.exists(output_directory_root)){ dir.create(output_directory_root,showWarnings = FALSE) }
if (!dir.exists(output_directory)){ dir.create(output_directory,showWarnings = FALSE) }
#Define Season List for Looping
seasons_list <- c("ALL","WINTER","SPRING","SUMMER","AUTUMN")
for(season in seasons_list)
{
#Duplicate Grid for Boat Effort to Preserve Original
Boat_Land_Sightings_Grid <- Survey_Grid
#Load Boat Sightings
Boat_Sightings <- st_read(paste("OUT//Boat_Sightings//Boat_Sightings_",season,".gpkg",sep=""),quiet = TRUE)
#Count Boat Sightings in each grid cell with the points-in-polygon function
Boat_Land_Sightings_Grid$BoatSightings <- unname(poly.counts(as_Spatial(Boat_Sightings),as_Spatial(Boat_Land_Sightings_Grid)))
#Load Land Sightings Shapefile
Land_Sightings <- st_read(paste("OUT//Land_Sightings//Land_Sightings_",season,".gpkg",sep=""),quiet = TRUE)
#Convert Shapefiles to Dataframe
Boat_Sightings_CSV <- as.data.frame(st_coordinates(Boat_Sightings))
Land_Sightings_CSV <- as.data.frame(st_coordinates(Land_Sightings))
#Combine Sightings
Boat_Land_Sightings <- rbind(Boat_Sightings_CSV,Land_Sightings_CSV)
#Count Land Sightings in each grid cell with the points-in-polygon function
Boat_Land_Sightings_Grid$LandSightings <- unname(poly.counts(as_Spatial(Land_Sightings),as_Spatial(Boat_Land_Sightings_Grid)))
#Calculate Combined Sightings
Boat_Land_Sightings_Grid$CombinedSightings <- Boat_Land_Sightings_Grid$BoatSightings + Boat_Land_Sightings_Grid$LandSightings
#Filter only useful columns
Columns_To_Keep <- paste("id","Weight","BoatSightings","LandSightings","CombinedSightings","geom",sep="|")
Boat_Land_Sightings_Grid <- Boat_Land_Sightings_Grid[grepl(Columns_To_Keep,colnames(Boat_Land_Sightings_Grid))]
#Convert NA's to 0 (though there should not be any)
Boat_Land_Sightings_Grid[is.na(Boat_Land_Sightings_Grid)] <- 0
#Define Output Filename for the Sightings Grid
Boat_Land_Sightings_Grid_Filename <- paste(output_directory,"Boat_Land_Sightings_Grid_",season,".gpkg",sep="")
#Output to GPKG Format
if (file.exists(Boat_Land_Sightings_Grid_Filename))
file.remove(Boat_Land_Sightings_Grid_Filename)
st_write(Boat_Land_Sightings_Grid,Boat_Land_Sightings_Grid_Filename, driver = "gpkg",quiet=TRUE)
}
cat("\nCombined Boat & Land Sightings Created & Exported to ",output_directory,"\n") |
5b06d34724a3f5a72fab212b0d1c69a38b1a58f1 | 76371ca11e03f754f0613c953a539612c56aabcb | /model/LightGBM_01_BaseLine/LightGBM_01_BaseLine.R | acc36b7dabbc8aba35ff3276feeda81d320f25fa | [] | no_license | you1025/probspace_salary | cc26b152d152b098cd764f7cdbf5c8e4eb83b824 | e78b62397cce1585d81bcb039a9237145a1f94dd | refs/heads/master | 2022-04-01T00:44:33.099781 | 2019-12-20T04:15:58 | 2019-12-20T04:15:58 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,660 | r | LightGBM_01_BaseLine.R | # TODO
# - rough hyperparameter search (OK)
# - log transform (OK)
# - add more features
# train_mae: xxxxxxxx, test_mae: xxxxxxxx - xxx
# train_mae: 1188.048, test_mae: 1177.353 - default values
# train_mae: 60.50451, test_mae: 60.79072 - trying min_data = 2500
# train_mae: 226.4191, test_mae: 226.4209 - matched the settings to the so-called baseline orz
# do the default values differ between R and python???
# train_mae: 19.65834, test_mae: 26.11733 - LabelEncoding for all categories; finally sensible values
# train_mae: 19.23358, test_mae: 26.44987 - max_depth = 7
# train_mae: 20.38744, test_mae: 24.8537 - changed cv to 7 folds (had forgotten)
# train_mae: 20.38744, test_mae: 24.8537 - parameters can now be set dynamically - baseline
library(tidyverse)
library(tidymodels)
library(furrr)
library(lightgbm)
source("model/LightGBM_01_BaseLine/functions.R", encoding = "utf-8")
# Data Load ---------------------------------------------------------------
df.train_data <- load_train_data("data/input/train_data.csv")
df.cv <- create_cv(df.train_data, v = 7)
# Feature Engineering -----------------------------------------------------
recipe <- recipes::recipe(salary ~ ., data = df.train_data) %>%
# Label Encoding
recipes::step_mutate(
position = as.integer(position) - 1,
area = as.integer(area) - 1,
sex = as.integer(sex) - 1,
partner = as.integer(partner) - 1,
education = as.integer(education) - 1
) %>%
# log transform: salary
recipes::step_log(salary, offset = 1) %>%
recipes::step_rm(
id
)
recipes::prep(recipe) %>% recipes::juice()
# Hyper Parameter ---------------------------------------------------------
df.grid.params <- tibble(
max_depth = 7,
num_leaves = 31,
bagging_freq = 5,
bagging_fraction = 0.8,
feature_fraction = 0.9,
min_data_in_leaf = 20,
lambda_l1 = 0,
lambda_l2 = 0
)
df.grid.params
# Parameter Fitting -------------------------------------------------------
# parallel processing
future::plan(future::multisession(workers = 8))
system.time({
df.results <- purrr::pmap_dfr(df.grid.params, function(max_depth, num_leaves, bagging_freq, bagging_fraction, feature_fraction, min_data_in_leaf, lambda_l1, lambda_l2) {
hyper_params <- list(
max_depth = max_depth,
num_leaves = num_leaves,
bagging_freq = bagging_freq,
bagging_fraction = bagging_fraction,
feature_fraction = feature_fraction,
min_data_in_leaf = min_data_in_leaf,
lambda_l1 = lambda_l1,
lambda_l2 = lambda_l2
)
furrr::future_map_dfr(df.cv$splits, function(split, recipe, hyper_params) {
print(hyper_params)
# build the preprocessed datasets
lst.train_valid_test <- recipe %>%
{
recipe <- (.)
# train data
df.train <- recipes::prep(recipe) %>%
recipes::bake(rsample::training(split))
x.train <- df.train %>%
dplyr::select(-salary) %>%
as.matrix()
y.train <- df.train$salary
# for early_stopping
train_valid_split <- rsample::initial_split(df.train, prop = 4/5, strata = NULL)
x.train.train <- rsample::training(train_valid_split) %>%
dplyr::select(-salary) %>%
as.matrix()
y.train.train <- rsample::training(train_valid_split)$salary
x.train.valid <- rsample::testing(train_valid_split) %>%
dplyr::select(-salary) %>%
as.matrix()
y.train.valid <- rsample::testing(train_valid_split)$salary
# for LightGBM Dataset
dtrain <- lightgbm::lgb.Dataset(
data = x.train.train,
label = y.train.train
)
dvalid <- lightgbm::lgb.Dataset(
data = x.train.valid,
label = y.train.valid,
reference = dtrain
)
# test data
df.test <- recipes::prep(recipe) %>%
recipes::bake(rsample::testing(split))
x.test <- df.test %>%
dplyr::select(-salary) %>%
as.matrix()
y.test <- df.test$salary
list(
# train data
x.train = x.train,
y.train = y.train,
train.dtrain = dtrain,
train.dvalid = dvalid,
# test data
x.test = x.test,
y.test = y.test
)
}
# model training
model.fitted <- lightgbm::lgb.train(
# specify the training parameters
params = list(
boosting_type = "gbdt",
objective = "fair",
metric = "fair",
# user defined
max_depth = hyper_params$max_depth,
num_leaves = hyper_params$num_leaves,
bagging_freq = hyper_params$bagging_freq,
bagging_fraction = hyper_params$bagging_fraction,
feature_fraction = hyper_params$feature_fraction,
min_data_in_leaf = min_data_in_leaf,
lambda_l1 = hyper_params$lambda_l1,
lambda_l2 = hyper_params$lambda_l2,
seed = 1234
),
# training & validation data
data = lst.train_valid_test$train.dtrain,
valids = list(valid = lst.train_valid_test$train.dvalid),
# number of trees, etc.
learning_rate = 0.1,
nrounds = 20000,
early_stopping_rounds = 200,
verbose = 1,
# specify the categorical features
categorical_feature = c("position", "area", "sex", "partner", "education")
)
# compute MAE
train_mae <- tibble::tibble(
actual = lst.train_valid_test$y.train,
pred = predict(model.fitted, lst.train_valid_test$x.train)
) %>%
dplyr::mutate_all(function(x) { exp(x) - 1 }) %>%
yardstick::mae(truth = actual, estimate = pred) %>%
.$.estimate
test_mae <- tibble::tibble(
actual = lst.train_valid_test$y.test,
pred = predict(model.fitted, lst.train_valid_test$x.test)
) %>%
dplyr::mutate_all(function(x) { exp(x) - 1 }) %>%
yardstick::mae(truth = actual, estimate = pred) %>%
.$.estimate
tibble::tibble(
train_mae = train_mae,
test_mae = test_mae
)
}, recipe = recipe, hyper_params = hyper_params, .options = furrr::future_options(seed = 5963L)) %>%
# use the mean over all CV folds as the evaluation score
dplyr::summarise_all(mean)
}) %>%
# combine the evaluation results with the parameters
dplyr::bind_cols(df.grid.params, .) %>%
# sort by evaluation score (ascending)
dplyr::arrange(test_mae)
})
|
08f0ee1d6c033aef22e3698b78bb8a4d3e9406f7 | faa22e1df31862fbef2031c56505f2075714be3c | /R/solveneardot.R | c56f1bb70c60790b1f5b630cc53dae033461ccb5 | [] | no_license | psolymos/ResourceSelection | f10a67f07c82f91664f77faefd6e015f47829934 | cd971b69a9e55e31f7d6401b85fecf22d52e0765 | refs/heads/master | 2023-07-10T10:18:00.535286 | 2023-06-27T20:44:17 | 2023-06-27T20:44:17 | 25,499,267 | 7 | 6 | null | 2016-11-04T16:56:01 | 2014-10-21T03:07:55 | R | UTF-8 | R | false | false | 200 | r | solveneardot.R | ## robust matrix inversion
.solvenear <- function(x) {
xinv <- try(solve(x), silent = TRUE)
if (inherits(xinv, "try-error"))
xinv <- as.matrix(solve(Matrix::nearPD(x)$mat))
xinv
}
|
f559a54523ed1fc5e233fdfaa7a77c43bce2fae0 | ef2182021ae6275ae0cd8107e3d3513d0ae987da | /R/fct_plot_data.R | 94e88b3279e09bd241e3158e63f8de09d43f16d3 | [] | no_license | markclements/schedule | 258b74aa994b0a9e4921203a69051df4b11787df | 4a4d0e90e45719ff0f57a5b8c2ec3d0203c79381 | refs/heads/master | 2022-08-21T22:01:57.814091 | 2022-07-29T16:37:08 | 2022-07-29T16:37:08 | 197,422,061 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,816 | r | fct_plot_data.R | #' plot_data
#'
#' @description Plot a weekly schedule of courses.
#'
#' @return A ggplot object.
#'
#' @noRd
plot_schedule <- function(df, fill) {
times <- c(stringr::str_c(1:11, " AM"), "12 PM", stringr::str_c(1:11, " PM"))
days <- c("Mon", "Tue", "Wed", "Thur", "Fri", "Sat")
ggplot2::ggplot(df) +
ggplot2::geom_rect(
ggplot2::aes(
xmin = x1,
xmax = x2,
ymin = stime,
ymax = etime,
fill = .data[[fill]]
),
alpha = 0.6,
color = "black"
) +
ggplot2::geom_text(
ggplot2::aes(
x = x1,
y = (stime + abs(stime - etime) / 2),
label = course,
size = (x2 - x1) * 100
),
#hjust = "right",
#vjust = 0.5,
#nudge_x = 0.01
) +
ggplot2::scale_size_area(max_size = 4) +
ggplot2::scale_y_reverse(
breaks = 1:23,
labels = times
) +
ggplot2::scale_x_continuous(
breaks = 1:6 + 0.5,
labels = days,
expand = ggplot2::expansion(add = 0.01)
) +
ggplot2::theme(
legend.position = "right",
panel.grid.major.x = ggplot2::element_blank(),
panel.grid.minor.x = ggplot2::element_blank(),
axis.ticks.x = ggplot2::element_blank(),
axis.text.x = ggplot2::element_text(size = 10),
axis.title = ggplot2::element_blank()
) +
# ggplot2::xlab("") +
# ggplot2::ylab("") +
ggplot2::guides(fill = "none") +
ggplot2::geom_vline(xintercept = 1:7) +
ggplot2::facet_grid(campus ~ .) -> p
return(p)
}
|
04d7414c3d771f839f0ebc248a830fa660ebfcbe | 7cec7d1d0ab0c0ab189919ba41f6a311cf5387fc | /scripts/hulpfuncties_20220201.R | c1000d7c87d0b631b60a7f870272125d47414add | [] | no_license | gerardhros/WaternetAnalyse | d2c0cfea95124aa41dbbf5da8414850d6eb1bc2a | e2706db83347ebe1d24135644b7985b65f1f91fc | refs/heads/master | 2022-06-05T10:17:48.660377 | 2022-05-16T10:29:13 | 2022-05-16T10:29:13 | 221,250,355 | 1 | 2 | null | 2020-06-04T08:09:41 | 2019-11-12T15:29:21 | HTML | UTF-8 | R | false | false | 51,991 | r | hulpfuncties_20220201.R | #packages
require(dplyr);require(tidyverse); require(ggplot2);require(gtools);require(RColorBrewer)
require(devtools)
require(gvlma)
require(htmlwidgets)
require(mapview)
require(sf);require(rgdal)
require(raster)
require(data.table)
require(RColorBrewer)
require(leaflet)
require(rgeos)
require(maptools)
# temporary lookup tables
koppel_meetnet <- data.frame(meetnet=c("fychem",
"prioritaire stoffen",
"specifiek verontreinigende stoffen"),
Waardebepalingsmethode.code=c("other:Aquo-kit;OW-toetsing;KRW fysisch-chemisch 2018",
"other:Aquo-kit;OW-toetsing;KRW prioritaire stoffen SGBP 2022-2027 - zoet",
"other:Aquo-kit;OW-toetsing;KRW spec. verontr. stoffen SGBP 2022-2027 - zoet"))
koppel_ESF_scores <- data.frame(resultaat.code=c(0,1,2,3),resultaat=c("onbekend", "op orde", "at risk", "niet op orde"),resultaat.kleur=c("grey", "red", "yellow", "green"))
koppel_EKR_scores <- data.frame(oordeel=c("onbekend", "slecht", "ontoereikend", "matig", "goed"),oordeel.kleur=c("grey", "red", "orange", "yellow", "green"))
# functions
doSplit <- function(x,y=cols){
if(length(y)==1){
x_split <- split(x,list(x[,y]))
}else{
x_split <- split(x,as.list(x[,y]))
}
return(x_split)
}
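# Illustrative usage sketch of doSplit() (not part of the pipeline; df_toy and
# its columns are invented for this example):
# df_toy <- data.frame(EAG = c("a", "a", "b"), jaar = c(2019, 2020, 2020))
# doSplit(df_toy, "EAG")            # a list with one data.frame per EAG
# doSplit(df_toy, c("EAG", "jaar")) # one element per EAG x jaar combination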
add_doelen <- function(x=oordeel_overig,y=doelen){
y$GHPR[y$GHPR %in% "Overige waterflora-kwaliteit"] <- maatselectie
y_sel <- y%>%filter(GeoObject.code %in% "", GHPR %in% maatselectie)%>%
select(gebied, Doel_2022)%>%distinct()%>%rename(EAGIDENT=gebied, GEP_2022=Doel_2022)
  x_add <- x%>%filter(GHPR %in% maatselectie)%>%left_join(y_sel,by=c("EAGIDENT"))
return(data.frame(x_add))
}
get_laatste_jaar <- function(x=doSplit(db_overig,"EAGIDENT")[[1]]){
  # take the most recent year...
x_out <- x[x$jaar %in% max(x$jaar),]
x_out <- x_out%>%select(-jaar)
return(x_out)
}
get_oordeel_EKR <- function(x=doSplit(db_tot_gat, "id")[[1]]){
x$GEP_gebruikt <- x$GEP_2022
  x$GEP_gebruikt <- ifelse(x$GEP_gebruikt > 0.6,0.6,x$GEP_gebruikt) # cap at 0.6
gap <- x$GEP_2022/4
oordeel <- ifelse(x$EKR < gap,"slecht",
ifelse(x$EKR < (gap*2),"ontoereikend",
ifelse(x$EKR < (gap*3),"matig",
"goed")))
x$oordeel <- ifelse(is.na(oordeel),"onbekend",oordeel)
return(x)
}
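# Worked example of the classification above (hypothetical numbers): with
# GEP_2022 = 0.6 the class boundaries lie at 0.15 / 0.30 / 0.45, so an EKR of
# 0.20 is scored "ontoereikend", an EKR of 0.50 "goed", and a missing EKR "onbekend".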
add_empty_GHPR <- function(x=doSplit(db_tot_gat_oordeel, "gebied")[[1]]){
if(NA %in% x$EAGIDENT & nrow(x)<4){
if(!"Macrofauna" %in% x$GHPR){
x_add <- x[1,]
x_add <- x_add%>%mutate(GHPR="Macrofauna",EKR=NA, GEP_2022=NA, GEP_gebruikt=NA,doelgat=NA,oordeel="onbekend")
x <- rbind(x, x_add)
}
if(!"Fytoplankton" %in% x$GHPR){
x_add <- x[1,]
x_add <- x_add%>%mutate(GHPR="Fytoplankton",EKR=NA, GEP_2022=NA, GEP_gebruikt=NA,doelgat=NA,oordeel="onbekend")
x <- rbind(x, x_add)
}
if(!"Ov. waterflora" %in% x$GHPR){
x_add <- x[1,]
      x_add <- x_add%>%mutate(GHPR="Ov. waterflora",EKR=NA, GEP_2022=NA, GEP_gebruikt=NA,doelgat=NA,oordeel="onbekend")
x <- rbind(x, x_add)
}
if(!"Vis" %in% x$GHPR){
x_add <- x[1,]
x_add <- x_add%>%mutate(GHPR="Vis",EKR=NA, GEP_2022=NA, GEP_gebruikt=NA,doelgat=NA,oordeel="onbekend")
x <- rbind(x, x_add)
}
}
return(x)
}
get_EAG_orde <- function(x=db_tot_gat_oordeel){
x$EAG_orde <- NA
x <- x%>%mutate(EAG_orde=ifelse(!is.na(EAGIDENT) & substr(EAGIDENT,0,1) %in% 1,1000,
ifelse(!is.na(EAGIDENT) & substr(EAGIDENT,0,1) %in% 2,2000,
ifelse(!is.na(EAGIDENT) & substr(EAGIDENT,0,1) %in% 3,3000,
ifelse(!is.na(EAGIDENT) & substr(EAGIDENT,0,1) %in% 4,4000,
ifelse(!is.na(EAGIDENT) & substr(EAGIDENT,0,1) %in% 5,5000,
ifelse(!is.na(EAGIDENT) & substr(EAGIDENT,0,1) %in% 6,6000,
ifelse(!is.na(EAGIDENT) & substr(EAGIDENT,0,1) %in% 7,7000,
ifelse(!is.na(EAGIDENT) & substr(EAGIDENT,0,1) %in% 8,8000,
ifelse(!is.na(EAGIDENT) & substr(EAGIDENT,0,1) %in% 9,9000,NA))))))))))
return(x)
}
do_doelgatplot <- function(x=db_tot_gat){
#x%>%filter(gebiedtype %in% "KRW_waterlichaam")
  x <- x%>%filter(!watertype %in% "M1b")# drop stray observation
g <- ggplot(x, aes(GHPR, doelgat, fill=GHPR))
g <- g + theme_bw()+geom_boxplot()
g <- g + facet_grid(gebiedtype~watertype, scales="free_x")
ggsave(paste0("output/doelgatplot_", Sys.Date(), ".png"), g, device="png", height=6 ,width=10)
x_sel <- x%>%filter(gebiedtype %in% "Overig water")%>%group_by(watertype, GHPR)%>%summarise(doelgat_gem =mean(doelgat, na.rm=T))%>%
ungroup()%>%filter(!is.na(doelgat_gem))
x_sel$watertype <- factor(x_sel$watertype, levels=x_sel$watertype[order(x_sel$doelgat_gem)])
g <- ggplot(x_sel, aes(watertype, doelgat_gem))+theme_classic()
g <- g + geom_bar(stat="identity",fill="green")
g <- g + scale_y_continuous(limits=c(0,1),expand =c(0,0))
g <- g + ylab("gemiddelde afstand tot doel (delta EKR)")
ggsave(paste0("output/doelgatplot_ov_gem_", Sys.Date(), ".png"), g, device="png", height=8 ,width=8)
}
do_plot_EKR_table <- function(x=db_tot_gat_oordeel){
x_KRW <- x%>%filter(!is.na(id))
x_KRW$oordeel <- factor(x_KRW$oordeel , levels=c("onbekend", "slecht", "ontoereikend", "matig", "goed"))
g1 <- ggplot(x_KRW, aes(gebied,fill=oordeel))+theme_minimal()
g1 <- g1 + geom_bar(stat="count")
g1 <- g1 + scale_y_continuous(limits=c(0,1),expand=c(0,0))
g1 <- g1 + scale_fill_manual(values = x_KRW$oordeel.kleur, breaks = x_KRW$oordeel)
g1 <- g1 + facet_grid(.~GHPR)
g1 <- g1 + coord_flip()
g1 <- g1 + theme(axis.title = element_blank(), axis.ticks.x = element_blank(), axis.text.x = element_blank(),
legend.position = "bottom", legend.title = element_blank())
ggsave(paste0("output/EKR/EKR_overzicht_KRW_wl_", Sys.Date(), ".png"), g1,device="png", height=8, width=6)
#OVERIG WATER
x_ov <- x%>%filter(is.na(id))
x_ov$oordeel <- factor(x_ov$oordeel , levels=c("onbekend", "slecht", "ontoereikend", "matig", "goed"))
for(i in unique(x_ov$EAG_orde)){
x_ov_orde <- x_ov%>%filter(EAG_orde %in% i)
g2 <- ggplot(x_ov_orde, aes(gebied,fill=oordeel))+theme_minimal()
g2 <- g2 + geom_bar(stat="count")
g2 <- g2 + scale_y_continuous(limits=c(0,1),expand=c(0,0))
    g2 <- g2 + scale_fill_manual(values = x_ov_orde$oordeel.kleur, breaks = x_ov_orde$oordeel)
g2 <- g2 + facet_grid(.~GHPR)
g2 <- g2 + coord_flip()
g2 <- g2 + theme(axis.title = element_blank(), axis.ticks.x = element_blank(), axis.text.x = element_blank(),
legend.position = "bottom", legend.title = element_blank())
g2 <- g2 + ggtitle(paste0("EAG-orde: ",i))
ggsave(paste0("output/EKR/EKR_overzicht_overig_",i,"_",Sys.Date(), ".png"), g2,device="png", height=(5+nrow(x_ov_orde)*0.1), width=8)
}
}
plot_EKR_doel_per_gebied <- function(x=db_tot_long, type_sel="KRW_waterlichaam"){ #type is 'KRW_waterlichaam' of 'overig water'
if(type_sel=="KRW_waterlichaam"){
x_sel_oordeel <- x%>%filter(gebiedtype %in% type_sel & type %in% "EKR")
x_sel_doel <- x%>%filter(gebiedtype %in% type_sel & type %in% "GEP_2022")
g <- ggplot(x_sel_oordeel,aes(gebied,waarde, fill="green")) + theme_classic()
g <- g + geom_bar(stat="identity")
g <- g + geom_point(data=x_sel_doel,aes(gebied,waarde), pch=45, size=6)
g <- g + scale_y_continuous(limits=c(0,1), expand=c(0,0))
g <- g + facet_grid(GHPR~watertype,scales="free_x", space="free_x",switch = "y")
g <- g + theme(axis.text.x = element_text(angle=-90, vjust=0.1, hjust=-0.01))
#g <- g + facet_grid(watertype~GHPR,scales="free_y", space="free_y",switch = "y")
#g <- g + coord_flip()
ggsave(paste0("output/EKR_vs_doel_per_KRW_WL", Sys.Date(), ".png"), g, device="png", height=10 ,width=14)
}
if(type_sel=="Overig water"){
x_sel_oordeel <- x%>%filter(gebiedtype %in% type_sel& type %in% "EKR")
levels <- x_sel_oordeel$EAGIDENT[order(x_sel_oordeel$waarde)]
x_sel_doel <- x%>%filter(gebiedtype %in% type_sel & type %in% "GEP_2022")
x_sel_oordeel$EAGIDENT <- factor(x_sel_oordeel$EAGIDENT, levels=levels)
x_sel_doel$EAGIDENT <- factor(x_sel_doel$EAGIDENT, levels=levels)
g <- ggplot(x_sel_oordeel,aes(EAGIDENT,waarde, fill="green")) + theme_classic()
g <- g + geom_bar(stat="identity")
    g <- g + geom_point(data=x_sel_doel,aes(EAGIDENT,waarde), pch=73, size=4)
g <- g + scale_y_continuous(limits=c(0,1), expand=c(0,0))
g <- g + facet_grid(watertype~.,scales="free_y", space="free_y",switch = "y")
g <- g + theme(axis.text.x = element_text(angle=-90, vjust=0.1, hjust=-0.01))
g <- g + coord_flip()
ggsave(paste0("output/EKR_vs_doel_overig_", Sys.Date(), ".png"), g, device="png", height=25 ,width=10)
}
}
filter_gemeten_vanaf <- function(x=doSplit(db_stofDr,"parameternaam")[[10]],jaar=2020){
if(max(x$jaar)>=jaar){return(x)}
}
#ESF results
do_plot_esf_table <- function(x=ESFset_long){
#KRW_waterlichamen
koppel_ESF_names <- data.frame(ESF=c("ESF1", "ESF2","ESF3", "ESF4", "ESF5","ESF6","ESF7", "ESF8"), ESF_names=factor(c("nutriënten", "lichtklimaat", "waterbodem","habitat", "verspreiding", "verwijdering","organische bel.", "toxiciteit"),levels=c("nutriënten", "lichtklimaat", "waterbodem","habitat", "verspreiding", "verwijdering","organische bel.", "toxiciteit")))
x <- x%>%left_join(koppel_ESF_names,"ESF")
x_KRW <- x%>%filter(grepl("NL", OWL))
x_KRW$resultaat <- factor(x_KRW$resultaat , levels=c("onbekend", "niet op orde", "at risk", "op orde"))
g1 <- ggplot(x_KRW, aes(OWMNAAM_SGBP3,fill=resultaat))+theme_minimal()
g1 <- g1 + geom_bar(stat="count")
g1 <- g1 + scale_y_continuous(limits=c(0,1),expand=c(0,0))
g1 <- g1 + scale_fill_manual(values = x_KRW$resultaat.kleur, breaks = x_KRW$resultaat)
g1 <- g1 + facet_grid(.~ESF_names)
g1 <- g1 + coord_flip()
g1 <- g1 + theme(axis.title = element_blank(), axis.ticks.x = element_blank(), axis.text.x = element_blank(),
legend.position = "bottom", legend.title = element_blank())
ggsave(paste0("output/ESF/overzicht_KRW_wl_", Sys.Date(), ".png"), g1,device="png", height=8, width=10)
#OVERIG WATER
x_ov <- x%>%filter(!grepl("NL", OWL))
x_ov$resultaat <- factor(x_ov$resultaat , levels=c("onbekend", "niet op orde", "at risk", "op orde"))
g2 <- ggplot(x_ov, aes(OWL_SGBP3,fill=resultaat))+theme_minimal()
g2 <- g2 + geom_bar(stat="count")
g2 <- g2 + scale_y_continuous(limits=c(0,1),expand=c(0,0))
  g2 <- g2 + scale_fill_manual(values = x_ov$resultaat.kleur, breaks = x_ov$resultaat)
g2 <- g2 + facet_grid(.~ESF_names)
g2 <- g2 + coord_flip()
g2 <- g2 + theme(axis.title = element_blank(), axis.ticks.x = element_blank(), axis.text.x = element_blank(),
legend.position = "bottom", legend.title = element_blank())
ggsave(paste0("output/ESF/overzicht_overig_", Sys.Date(), ".png"), g2,device="png", height=20, width=10)
}
# substances
do_aantal_stoffen_en_punten <- function(x=db_stofOpp){
#x_agg_I <- x%>%select(locatie.EAG, locatie.bodemsoortlocatie,locatiecode,jaar,fewsparametercategorie,fewsparameter,fewsparameternaam)%>%distinct()
x_agg_I <- x%>%select(locatie.EAG,locatiecode,jaar, fewsparametercategorie)%>%distinct()%>%group_by(jaar, fewsparametercategorie)%>%summarize(aantal=n())%>%ungroup()
for(i in unique(x_agg_I$fewsparametercategorie)){
x_agg_i <- x_agg_I[x_agg_I$fewsparametercategorie %in% i,]
g <- ggplot(x_agg_i,aes(jaar,aantal))+theme_classic()
g <- g + geom_bar(stat="identity")
g <- g + scale_y_continuous(expand=c(0,0))
g <- g + ggtitle(paste0("parametercategorie = ",i))+ylab("aantal locaties")
ggsave(paste0("output/stoffen_agg/aantal_locaties_per_jaar_",i,"_",Sys.Date(),".png"),g,device="png", width=8, height =6)
}
x_agg_II <- x%>%select(locatie.EAG, jaar, fewsparameter,fewsparametercategorie)%>%distinct()%>%group_by(jaar, locatie.EAG,fewsparametercategorie)%>%summarize(aantal=n())%>%ungroup()
for(i in unique(x_agg_II$fewsparametercategorie)){
x_agg_i <- x_agg_II[x_agg_II$fewsparametercategorie %in% i,]
g <- ggplot(x_agg_i,aes(jaar,aantal))+theme_bw()
g <- g + geom_bar(stat="identity")
g <- g + scale_y_continuous(expand=c(0,0))
g <- g + ggtitle(paste0("parametercategorie = ",i))+ylab("aantal stoffen")
g <- g + facet_grid(locatie.EAG~.)+theme(strip.text.y = element_text(angle=-0.001))
ggsave(paste0("output/stoffen_agg/aantal_stoffen_per_EAG_jaar_",i,"_",Sys.Date(),".png"),g,device="png", width=8, height =15)
}
x_agg_III <- x%>%select(jaar, fewsparameter,fewsparametercategorie)%>%distinct()%>%group_by(jaar, fewsparametercategorie)%>%summarize(aantal=n())%>%ungroup()
for(i in unique(x_agg_III$fewsparametercategorie)){
x_agg_i <- x_agg_III[x_agg_III$fewsparametercategorie %in% i,]
g <- ggplot(x_agg_i,aes(jaar,aantal))+theme_bw()
g <- g + geom_bar(stat="identity")
g <- g + scale_y_continuous(expand=c(0,0))
g <- g + ggtitle(paste0("parametercategorie = ",i))+ylab("aantal stoffen")
ggsave(paste0("output/stoffen_agg/aantal_stoffen_per_jaar_",i,"_",Sys.Date(),".png"),g,device="png", width=8, height =6)
}
}
do_stofplot <- function(x=doSplit(db_stofOpp, "parameternaam")[[30]], type="drinkwater"){
c <- unique(x$fewsparametercategorie)
n <- unique(x$parameternaam)
eenh <- unique(x$eenheid)
min_jaar <- 2010
max_jaar <- 2021
g <- ggplot(x, aes(datum, meetwaarde, group=locatiecode, color=locatiecode))+theme_classic()
g <- g + geom_line()+geom_point()
g <- g + ggtitle(paste0(c,": ", n))
g <- g + ylab(paste0(n, "( ",eenh,")"))
g <- g + scale_x_date(limits = c(as.Date(paste0(min_jaar,"-01-01")),as.Date(paste0(max_jaar,"-12-31"))),
breaks = as.Date(paste0(min_jaar:max_jaar,"-01-01")),
labels=c(min_jaar:max_jaar), expand = c(0,0))
if(type=="drinkwater"){
    ggsave(paste0("output/stoffen_drinkwater/drinkwater_", c,"_",n,"_", Sys.Date(), ".png"), g, device = "png", height = 6, width = 10)
}
if(type=="oppervlaktewater"){
g <- g + facet_grid(locatie.EAG~.)
    ggsave(paste0("output/stoffen_oppervlaktewater/oppervlaktewater_", c,"_",n,"_", Sys.Date(), ".png"), g, device = "png", height = 6, width = 10)
}
}
do_plot_stoftoets <- function(x=doSplit(db_stof_toets, "meetnet")[[2]]){
m <- unique(x$meetnet)
x_agg <- x%>%group_by(gebied, jaar, watertype, Alfanumeriekewaarde)%>%summarize(waarde=n())%>%ungroup()
if(m=="fychem"){
x_agg$Alfanumeriekewaarde <- factor(x_agg$Alfanumeriekewaarde, levels=c("Slecht", "Ontoereikend", "Matig", "Goed", "Zeer goed"))
x_agg <- x_agg%>%mutate(oordeel_kleur = ifelse(Alfanumeriekewaarde == "Slecht", "red",
ifelse(Alfanumeriekewaarde == "Ontoereikend", "orange",
ifelse(Alfanumeriekewaarde == "Matig", "yellow",
ifelse(Alfanumeriekewaarde == "Goed", "green",
ifelse(Alfanumeriekewaarde == "Zeer goed", "blue","grey"))))))
}else{
x_agg$Alfanumeriekewaarde <- factor(x_agg$Alfanumeriekewaarde, levels=c("Voldoet niet","Niet toetsbaar", "Voldoet"))
x_agg <- x_agg%>%mutate(oordeel_kleur = ifelse(Alfanumeriekewaarde == "Niet toetsbaar", "grey",
ifelse(Alfanumeriekewaarde == "Voldoet niet", "red",
ifelse(Alfanumeriekewaarde == "Voldoet", "green","grey"))))
# x$oordeel_kleur <- factor(x$oordeel_kleur, levels=c("grey", "red", "green"))
}
x_agg <- x_agg%>%rename(oordeel=Alfanumeriekewaarde)
g <- ggplot(x_agg, aes(gebied,waarde, fill=forcats::fct_rev(oordeel)))+theme_bw()
g <- g + geom_bar(stat="identity", position="fill")
g <- g + facet_grid(jaar~watertype, space="free", scales="free",switch = "both")
g <- g + scale_y_continuous(expand=c(0,0))
g <- g + scale_fill_manual(values=unique(x_agg$oordeel_kleur[order(x_agg$oordeel)]), breaks = levels(x_agg$oordeel))
g <- g + theme(axis.text.x = element_text(angle=-90, hjust=-0.01, vjust=0.01),axis.text.y = element_blank(),axis.ticks.y = element_blank(), legend.title = element_blank() )
g <- g + ylab("")+xlab("")+ggtitle(unique(x$meetnet))
#g <- g + coord_flip()
ggsave(paste0("output/stoffen_toets/",m,"_2018_2019_2020.png"), g, device="png", height=8, width=12)
}
do_plot_stoftoets_2 <- function(x=doSplit(db_stof_toets, c("gebied", "meetnet"))[[2]]){
if(nrow(x)>3){
m <- unique(x$meetnet)
geb. <- unique(x$gebied)
if(m=="fychem"){
x$Alfanumeriekewaarde <- factor(x$Alfanumeriekewaarde, levels=c("Slecht", "Ontoereikend", "Matig", "Goed", "Zeer goed"))
x$Parameter.omschrijving[x$Grootheid.omschrijving %in% c("Temperatuur", "Zuurgraad", "Doorzicht")] <- x$Grootheid.omschrijving[x$Grootheid.omschrijving %in% c("Temperatuur", "Zuurgraad", "Doorzicht")]
}else{
x$Alfanumeriekewaarde <- factor(x$Alfanumeriekewaarde, levels=c("Voldoet niet","Niet toetsbaar", "Voldoet"))
x$Parameter.omschrijving <- paste0(x$Parameter.omschrijving, " (", x$Waardebewerkingmethode.code,")")
}
x <- x%>%rename(oordeel=Alfanumeriekewaarde,stof=Parameter.omschrijving)
g <- ggplot(x, aes(jaar,group=oordeel, fill=oordeel))+theme_bw()
g <- g + geom_bar(stat="count", position="fill")
g <- g + facet_grid(stof~., space="free", scales="free",switch = "both")
g <- g + scale_y_continuous(expand=c(0,0))
g <- g + scale_fill_brewer(palette = "Spectral")
g <- g + theme(strip.text.y.left = element_text(angle=0), axis.text.x = element_text(angle=-90, hjust=-0.01, vjust=0.01),axis.text.y = element_blank(),
axis.ticks.y = element_blank())
g <- g + ylab("")+xlab("")+ggtitle(paste0(geb., ": ", m))
#g <- g + coord_flip()
ggsave(paste0("output/stoffen_toets/",m, "_", geb.,"_2018_2019_2020.png"), g, device="png", height=12, width=10)
}
}
make_subset_macft_EKR <- function(x=readRDS("../data/trendekr.rds")){
maatlatten <- c("Overige waterflora-kwaliteit","Abundantie groeivormen macrofyten", "Soortensamenstelling macrofyten","Bedekking Grote drijfbladplanten","Bedekking Emerse planten","Bedekking Flab","Bedekking Kroos","Bedekking som submerse planten en draadalgen")
x_sel <- x%>%filter(GHPR %in% maatlatten)%>%select(id, GHPR, KRWwatertype.code, EAGIDENT, GEP_2022, POT_2022, waterlichaam, EKRref,EKR3jr,
`2006`, `2007`,`2008`,`2009`,`2010`,`2011`,`2012`,`2013`,`2014`,`2015`,`2016`,`2017`,`2018`,`2019`,`2020`)
write.table(x_sel,"output/EKR-tabelII_20220119.csv", sep=";", dec=".", row.names = F)
}
##### PART 3
do_opwerken_KNMI_data <- function(x=db_KNMI,vanaf=2012){
names(x)[2] <- "datum"
names(x)[12] <- "TG"
names(x)[23] <- "RH"
names(x)[41] <- "EV24"
x_sel <- x%>%dplyr::select(datum, RH, EV24,TG)%>%mutate(neerslag=RH/10, verdamping=EV24/10,temperatuur=TG/10)
x_sel <- x_sel%>%mutate(neerslagoverschot = neerslag - verdamping)
x_sel$datum <- as.Date(paste0(substr(x_sel$datum,1,4),"-",substr(x_sel$datum,5,6),"-",substr(x_sel$datum,7,8)),format="%Y-%m-%d")
x_sel$jaar <- as.numeric(format(x_sel$datum,"%Y"))
x_sel$maand <- as.numeric(format(x_sel$datum,"%m"))
x_out <- x_sel%>%filter(jaar >= vanaf)%>%dplyr::select(datum,jaar,maand,temperatuur,neerslag, verdamping, neerslagoverschot)
return(x_out)
}
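# Note on the conversion above: KNMI daily files report RH and EV24 in 0.1 mm
# and TG in 0.1 degrees Celsius, hence the division by 10. For example, a raw
# row with RH = 104, EV24 = 23, TG = 156 becomes neerslag = 10.4 mm,
# verdamping = 2.3 mm, temperatuur = 15.6 C and neerslagoverschot = 8.1 mm.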
do_opwerken_nut_data <- function(x=db_fychem,vanaf=2012){
x_sel <- x%>%dplyr::select(datum,locatiecode,locatie.EAG,fewsparametercode,meetwaarde,afronding)%>%distinct()
x_sel$datum <- as.Date(substr(x_sel$datum,1,10),format="%Y-%m-%d")
x_sel$jaar <- as.numeric(format(x_sel$datum,"%Y"))
x_sel$maand <- as.numeric(format(x_sel$datum,"%m"))
x_out <- x_sel%>%filter(jaar >= vanaf,fewsparametercode %in% c("PO4","Ptot","Ntot"))%>%dplyr::select(datum,locatie.EAG,locatiecode,fewsparametercode,meetwaarde)%>%distinct()
return(x_out)
}
do_remove_outliers <- function(x=db_fychem,perc=0.01){
  # per parameter, keep only the values below the (1 - perc) quantile
  x_percs <- x%>%group_by(fewsparametercode)%>%summarize(grens=quantile(meetwaarde,probs=1-perc))%>%ungroup()
  x_sel <- x%>%left_join(x_percs,by="fewsparametercode")%>%filter(meetwaarde < grens)%>%dplyr::select(-grens)
  return(x_sel)
}
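# Illustrative sketch (invented data) of do_remove_outliers(): with perc = 0.01
# each parameter keeps only the values below its own 99th percentile.
# df_toy <- data.frame(fewsparametercode = rep(c("Ntot", "PO4", "Ptot"), each = 100),
#                      meetwaarde = runif(300))
# nrow(do_remove_outliers(df_toy, perc = 0.01)) # 297 rows remain (99 per parameter)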
do_calc_periodic_means_KNMI <- function(x=db_KNMI, p=10){
x$neerslagoverschot_mean <- NULL
x$temperatuur_mean <- NULL
x$neerslag_cumu <- NULL
for(i in x$datum){
neerslagoverschot_mean <- x%>%filter(datum %in% (i-p):i)%>%mutate(out=mean(neerslagoverschot))%>%dplyr::select(out)%>%distinct()%>%as.numeric()
x$neerslagoverschot_mean[x$datum %in% i] <- neerslagoverschot_mean
temperatuur_mean <- x%>%filter(datum %in% (i-p):i)%>%mutate(out=mean(temperatuur))%>%dplyr::select(out)%>%distinct()%>%as.numeric()
x$temperatuur_mean[x$datum %in% i] <- temperatuur_mean
x$neerslag_cumu[x$datum %in% i] <- x%>%filter(datum %in% (i-p):i)%>%mutate(out=sum(neerslag))%>%dplyr::select(out)%>%distinct()%>%as.numeric()
x$n_dag <- p
}
return(x)
}
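# Sketch of what do_calc_periodic_means_KNMI() computes: for every day, the
# mean neerslagoverschot/temperatuur and the cumulative neerslag over a window
# of the preceding p days up to and including that day. E.g. with p = 2 and
# neerslag = c(1, 2, 4) on three consecutive days, neerslag_cumu on day 3 is
# 1 + 2 + 4 = 7.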
do_analyse_nuts_KNMI_II <- function(x=doSplit(db_nut_KNMI_means,"locatie.EAG")[["2550-EAG-1"]],iterator="locatie.EAG",data_min=10){
  if(nrow(x) > data_min){
    print(unique(x[,iterator]))
    winter <- c(12,1,2) # winter months
    voorjaar <- c(3,4,5)
    zomer <- c(6,7,8)
    najaar <- c(9,10,11)
    ## 15 areas -> mail Maarten
    ## add EAG names to the plot
    # TO ADJUST: cumulative precipitation (not the surplus!) of 5 mm
    zeer_droog_grens <- 0 # very dry: < -2 mm/day
    nat_grens <- 9 # dry: < 2 mm/day
    zeer_nat_grens <- 20 # wet: < 5 mm/day, very wet: >= 10 mm/day
x <- x%>%mutate(seizoen=ifelse(maand %in% winter,"winter",
ifelse(maand %in% voorjaar,"voorjaar",
ifelse(maand %in% zomer,"zomer",
ifelse(maand %in% najaar,"najaar","")))))
#x <- x%>%mutate(neerslagperiode=ifelse(neerslag_cumu <= zeer_droog_grens,"zeer droog",
# ifelse(neerslag_cumu < nat_grens,"droog",
# ifelse(neerslag_cumu < zeer_nat_grens,"nat","zeer nat"))))
x <- x%>%mutate(neerslagperiode=ifelse(neerslag_cumu < nat_grens,"droog","nat"))
    x <- x%>%mutate(jaar_alt=ifelse(maand==12,jaar+1,jaar)) # count December towards the following year!
x$cat <- paste0(x$seizoen)
lm_df_out <- NULL
for(i in unique(x$cat)){
for(j in unique(x$neerslagperiode)){
for(z in unique(x$fewsparametercode)){
x_sel <- x%>%filter(cat %in% i,neerslagperiode %in% j, fewsparametercode %in% z)
if(nrow(x_sel)==0){next}
db_assumptions <- do_test_assumptions(x_sel, type="normal")
db_assumptions_log <- do_test_assumptions(x_sel, type="log")
statistics_type_selector <- ifelse(is.null(db_assumptions),"geen",ifelse(db_assumptions$sum == 5,"normal",ifelse(db_assumptions_log$sum == 5,"log","geen")))
if(statistics_type_selector=="normal"){
r_sq <- round(summary(lm(meetwaarde~jaar_alt, data=x_sel))$r.squared,3)
p <- round(data.frame(summary(lm(meetwaarde~jaar_alt, data=x_sel))$coefficients)[2,4],3)
sl <- round(data.frame(summary(lm(meetwaarde~jaar_alt, data=x_sel))$coefficients)[2,1],3)
jrn <- length(unique(x_sel$jaar_alt))
jr_ltst <- max(x_sel$jaar_alt)
lm_df_out <-rbind(lm_df_out,data.frame(cat=i,neerslagperiode=j,fewsparametercode=z,p=p,r_squared=r_sq,slope=sl, jaren=jrn,laatste_jaar=jr_ltst,statistiek="normal"))
}
if(statistics_type_selector=="log"){
x_sel <- x_sel%>%mutate(meetwaarde = ifelse(meetwaarde==0,0.000001,meetwaarde))
r_sq <- round(summary(lm(log(meetwaarde)~jaar_alt, data=x_sel))$r.squared,3)
p <- round(data.frame(summary(lm(log(meetwaarde)~jaar_alt, data=x_sel))$coefficients)[2,4],3)
sl <- round(data.frame(summary(lm(log(meetwaarde)~jaar_alt, data=x_sel))$coefficients)[2,1],3)
jrn <- length(unique(x_sel$jaar_alt))
jr_ltst <- max(x_sel$jaar_alt)
lm_df_out <-rbind(lm_df_out,data.frame(cat=i,neerslagperiode=j,fewsparametercode=z,p=p,r_squared=r_sq,slope=sl, jaren=jrn,laatste_jaar=jr_ltst,statistiek="log"))
}
if(statistics_type_selector=="geen"){
jrn <- length(unique(x_sel$jaar_alt))
jr_ltst <- max(x_sel$jaar_alt)
lm_df_out <-rbind(lm_df_out,data.frame(cat=i,neerslagperiode=j,fewsparametercode=z,p="ontestbaar",r_squared="ontestbaar",slope="ontestbaar", jaren=jrn,laatste_jaar=jr_ltst,statistiek="geen"))
}
}
}
}
lm_df_out$p[!is.na(lm_df_out$p) &! NaN %in% lm_df_out$p & lm_df_out$p <=0.05] <- paste0(lm_df_out$p[!is.na(lm_df_out$p) &! NaN %in% lm_df_out$p & lm_df_out$p <=0.05],"*")
lm_df_out$p_text <- paste0("p: ",lm_df_out$p)
lm_df_out$x_pos <- min(x$jaar_alt)+1.5
lm_df_out$y_pos <- ifelse(lm_df_out$fewsparametercode %in% "Ptot",max(x$meetwaarde[x$fewsparametercode %in% "Ptot"])*1.1,
ifelse(lm_df_out$fewsparametercode %in% "PO4",max(x$meetwaarde[x$fewsparametercode %in% "PO4"])*1.1,
max(x$meetwaarde[x$fewsparametercode %in% "Ntot"])*1.1))
x_add <- x%>%left_join(lm_df_out,c("cat", "neerslagperiode","fewsparametercode"))
for(i in unique(x_add$fewsparametercode)){
x_add_sel <- x_add%>%filter(fewsparametercode %in% i)
x_add_sel$cat <- factor(x_add_sel$cat, levels=c("winter","voorjaar","zomer","najaar"))
#x_add_sel$neerslagperiode <- factor(x_add_sel$neerslagperiode,levels=c("zeer nat","nat","droog","zeer droog"))
x_add_sel$neerslagperiode <- factor(x_add_sel$neerslagperiode,levels=c("nat","droog"))
data_text <- lm_df_out[lm_df_out$fewsparametercode %in% i,]
data_text$cat <- factor(data_text$cat, levels=c("winter","voorjaar","zomer","najaar"))
#data_text$neerslagperiode <- factor(data_text$neerslagperiode,levels=c("zeer nat","nat","droog","zeer droog"))
data_text$neerslagperiode <- factor(data_text$neerslagperiode,levels=c("nat","droog"))
g <- ggplot(x_add_sel,aes(jaar_alt,meetwaarde, group=cat))+theme_bw()
g <- g + geom_point()
if(length(unique(x[,iterator]))>1){
g <- g + ggtitle(paste0(i," in alle EAG's samen"))
}else{ g <- g + ggtitle(paste0(i," in ", unique(x[,iterator])))
}
g <- g + facet_grid(neerslagperiode~cat)
g <- g + geom_smooth(formula=y~x,method="lm")
g <- g + scale_x_continuous(breaks=min(x$jaar):max(x$jaar))
      g <- g + geom_text(data=data_text,aes(x=x_pos,y=y_pos,label=paste0("p: ",p,"; r^2: ",r_squared)),color="blue",size=2)
if(length(unique(x[,iterator]))>1){
ggsave(paste0("output/nutrienten/nutrienten_per_cat_",i,"_",Sys.Date(),".png"),g,device="png",width=35,height=10)
}else{
ggsave(paste0("output/nutrienten/",iterator,"/nutrienten_per_cat_",unique(x[,iterator]),"_",i,"_",Sys.Date(),".png"),g,device="png",width=15,height=10)
}
}
x_out <- x_add%>%dplyr::select(-locatiecode,-meetwaarde)%>%distinct()
return(x_out)
}
}
# function within a function!
do_test_assumptions <- function(x_f=x_sel, type="normal"){# called from do_analyse_nuts_KNMI_II()
  # print(paste0(unique(x$habitattype), unique(x$parameter)))
  if(length(unique(x_f$meetwaarde))>1 & length(unique(x_f$jaar_alt))>=3){ # when there is zero variation, return an error message!
    x_f <- x_f%>%mutate(meetwaarde=ifelse(meetwaarde==0,0.0000001,meetwaarde)) # assign a very small number to zeros so the log transformation does not fail. This is allowed, see: https://www.researchgate.net/post/Log_transformation_of_values_that_include_0_zero_for_statistical_analyses2
if(type=="normal"){model <- lm(meetwaarde ~ jaar_alt, data=x_f)}
if(type=="log"){model <- lm(log(meetwaarde) ~ jaar_alt, data=x_f)}
assumptions_test <- gvlma(model)
Global_test <- if (assumptions_test$GlobalTest$GlobalStat4$pvalue > 0.05) {1} else {0} # Linearity. Is the relationship between X and Y linear
Skewness <- if (assumptions_test$GlobalTest$DirectionalStat1$pvalue > 0.05) {1} else {0} # Normality. Is the distribution symmetrical or skewed
Kurtosis <- if (assumptions_test$GlobalTest$DirectionalStat2$pvalue > 0.05) {1} else {0} # Normality. Does the data have a long tail or sharp peak
Link_function <- if (assumptions_test$GlobalTest$DirectionalStat3$pvalue >= 0.05) {1} else {0} # Normality. Is the data truly continuous or more categorical
Heteroscedasticity <- if (assumptions_test$GlobalTest$DirectionalStat4$pvalue >= 0.05) {1} else {0} # Homoscedasticity. Is the variance constant across X
assumptions <- data.frame(fewsparametercode=unique(x_f$fewsparametercode), neerslagperiode =unique(x_f$neerslagperiode), type=type, n_jaren=length(unique(x_f$jaar_alt)),
Linearity = Global_test, Normality_Skewness = Skewness, Normality_Kurtosis = Kurtosis, Normality_continuous = Link_function, Homoscedasticity = Heteroscedasticity,
sum=sum(Global_test,Skewness, Kurtosis, Link_function,Heteroscedasticity))
return(assumptions)
  }else{#print(paste0("warning for ", unique(x$habitattype)," ", unique(x$parameter), ": variation in values is 0, could not run the test."))
}
}
do_vergelijking_ecologie_nut_KNMI <- function(x=db_EKR,y=db_nut_KNMI_means_trend_EAG){
y_sel <- y%>%dplyr::select(locatie.EAG,fewsparametercode,neerslagperiode,p,slope,r_squared,p_text, jaren,laatste_jaar,cat,statistiek)%>%distinct()
  # assumption: r-squared must be at least 0.3 and there must be at least three measurement years before we speak of a relationship
  y_sel <- y_sel%>%mutate(nut_oordeel=ifelse(r_squared < 0.3 | statistiek == "geen","niet significant",
                                             ifelse(grepl("*",p,fixed=TRUE) & slope <= -0.25,"zeer wenselijk",
                                                    ifelse(grepl("*",p,fixed=TRUE) & slope < 0,"wenselijk",
                                                           ifelse(grepl("*",p,fixed=TRUE) & slope < 0.25,"onwenselijk",
                                                                  ifelse(grepl("*",p,fixed=TRUE) & slope >= 0.25,"zeer onwenselijk",
                                                                         "niet significant"))))))
y_sel_stat_zomer_nat <- y_sel%>%filter(cat %in%"zomer", neerslagperiode %in% "nat", fewsparametercode %in% "Ptot")%>%dplyr::select(locatie.EAG,statistiek)%>%rename(stat_zomer_nat_P=statistiek)
y_sel <- y_sel %>% left_join(y_sel_stat_zomer_nat,c("locatie.EAG"))
y_sel_wide <- y_sel%>%dplyr::select(-p,-slope,-r_squared,-p_text,-jaren,-laatste_jaar,-statistiek)%>%distinct()%>%pivot_wider(names_from = c("cat","neerslagperiode","fewsparametercode"),values_from = "nut_oordeel" )
x_sel <- x%>%dplyr::select(GAFIDENT,cat_verschil.ref_2019, cat_verschil.ref_2020, cat_verschil.2019_2020)
db_combi <- x_sel%>%left_join(y_sel_wide,c("GAFIDENT"= "locatie.EAG"))
db_combi$oordeel_EKR_zomer_nat_P <- ifelse(db_combi$cat_verschil.ref_2019 %in% c("licht positief","zeer positief") &
db_combi$zomer_nat_Ptot %in% c("wenselijk","zeer wenselijk"),"beide wenselijk",
ifelse(db_combi$cat_verschil.ref_2019 %in% c("licht negatief","zeer negatief") &
db_combi$zomer_nat_Ptot %in% c("onwenselijk","zeer onwenselijk"),"beide onwenselijk",
ifelse(db_combi$cat_verschil.ref_2019 %in% c("gelijk") &
db_combi$zomer_nat_Ptot %in% c("niet significant") ,"beide stabiel",
ifelse(db_combi$cat_verschil.ref_2019 %in% c("onbekend",NA) &
db_combi$zomer_nat_Ptot %in% c(NA),"onbekend","ongelijke trend"
))))
#db_combi$oordeel_EKR_zomer_nat_P <- ifelse(db_combi$cat_verschil.ref_2019 %in% c("licht positief","zeer positief") &
# (db_combi$zomer_nat_Ptot %in% c("nwenselijk","zeer wenselijk") |
# db_combi$`zomer_zeer nat_Ptot` %in% c("wenselijk","zeer wenselijk")),"beide wenselijk",
# ifelse(db_combi$cat_verschil.ref_2019 %in% c("licht negatief","zeer negatief") &
# (db_combi$zomer_nat_Ptot %in% c("onwenselijk","zeer onwenselijk")|
# db_combi$`zomer_zeer nat_Ptot` %in% c("onwenselijk","zeer onwenselijk")),"beide onwenselijk",
# ifelse(db_combi$cat_verschil.ref_2019 %in% c("gelijk") &
# (db_combi$zomer_nat_Ptot %in% c("niet significant") |
# db_combi$`zomer_zeer nat_Ptot` %in% c("niet significant")),"beide stabiel",
# ifelse(db_combi$cat_verschil.ref_2019 %in% c("onbekend",NA) &
# (db_combi$zomer_nat_Ptot %in% c(NA) &
# db_combi$`zomer_zeer nat_Ptot`%in% c(NA)),"onbekend","ongelijke trend"
# ))))
g <- ggplot(db_combi,aes(GAFIDENT,fill=oordeel_EKR_zomer_nat_P))+theme_bw()
g <- g + geom_bar(stat="count")
g <- g + facet_grid(oordeel_EKR_zomer_nat_P~.,scales="free")
g <- g + coord_flip()
return(db_combi)
}
do_EKR_trendplot <- function(x=db_EKR, y=eag_wl){
x_combi <- x%>%left_join(y,c("GAFIDENT"="GAFIDENT"))
x_combi_1 <- x_combi%>%filter(!LANDBOUWGEB %in% c("gtb","plas","sportpark"))
g <- ggplot(x_combi_1,aes(EKRref,verschil.ref_2019,color=LANDBOUWGEB))+theme_bw()
g <- g + geom_abline(slope = 0,intercept = 0,linetype="dashed", color="red")
g <- g + geom_point()
g <- g + xlab("EKR-score SGBP1")+ylab("Verschil EKR SGBP2 - SGBP1")
g <- g + geom_smooth(formula=y~x,method="lm")
g <- g + facet_wrap("LANDBOUWGEB",ncol=5)
g <- g + theme(legend.position="")
ggsave("output/EKR_trend/EKR_vs_delta_EKR_gebruikstype.png",g,device="png",width=10,height=6)
x_combi_2 <- x_combi%>%filter(!Deelgebied %in% c("Het Gooi","NZK"))
g <- ggplot(x_combi_2,aes(EKRref,verschil.ref_2019,color=Deelgebied))+theme_bw()
g <- g + geom_abline(slope = 0,intercept = 0,linetype="dashed", color="red")
g <- g + geom_point()
g <- g + xlab("EKR-score SGBP1")+ylab("Verschil EKR SGBP2 - SGBP1")
g <- g + geom_smooth(formula=y~x,method="lm")
g <- g + facet_wrap("Deelgebied",ncol=5)
g <- g + theme(legend.position="")
ggsave("output/EKR_trend/EKR_vs_delta_EKR_deelgebied.png",g,device="png",width=12,height=6)
x_combi_3 <- x_combi%>%filter(!watertype %in% c("M1b"))
g <- ggplot(x_combi_3,aes(EKRref,verschil.ref_2019,color=watertype))+theme_bw()
g <- g + geom_abline(slope = 0,intercept = 0,linetype="dashed", color="red")
g <- g + geom_point()
g <- g + xlab("EKR-score SGBP1")+ylab("Verschil EKR SGBP2 - SGBP1")
g <- g + geom_smooth(formula=y~x,method="lm")
g <- g + facet_wrap("watertype",ncol=5)
g <- g + theme(legend.position="")
ggsave("output/EKR_trend/EKR_vs_delta_EKR_watertype.png",g,device="png",width=12,height=6)
x_combi_4 <- x_combi
g <- ggplot(x_combi_4,aes(EKRref,verschil.ref_2019,color=type))+theme_bw()
g <- g + geom_abline(slope = 0,intercept = 0,linetype="dashed", color="red")
g <- g + geom_point()
g <- g + xlab("EKR-score SGBP1")+ylab("Verschil EKR SGBP2 - SGBP1")
g <- g + geom_smooth(formula=y~x,method="lm")
g <- g + facet_wrap("type",ncol=2)
g <- g + theme(legend.position="")
ggsave("output/EKR_trend/EKR_vs_delta_EKR_type_water.png",g,device="png",width=8,height=6)
x_combi_5 <- x_combi%>%filter(!OPMERKING %in% c("M1b"))
g <- ggplot(x_combi_5,aes(EKRref,verschil.ref_2019,color=OPMERKING))+theme_bw()
g <- g + geom_abline(slope = 0,intercept = 0,linetype="dashed", color="red")
g <- g + geom_point()
g <- g + xlab("EKR-score SGBP1")+ylab("Verschil EKR SGBP2 - SGBP1")
g <- g + geom_smooth(formula=y~x,method="lm")
#g <- g + facet_wrap("OPMERKING",ncol=2)
g <- g + theme(legend.position="")
ggsave("output/EKR_trend/EKR_vs_delta_EKR_KRW_OW.png",g,device="png",width=6,height=6)
delta_EKR_gem <- x_combi%>%group_by(OPMERKING)%>%summarize(gemiddeld=mean(verschil.ref_2019,na.rm=T))%>%ungroup()
winst_KRW <- delta_EKR_gem$gemiddeld[delta_EKR_gem$OPMERKING %in% "KRW Waterlichaam"] - delta_EKR_gem$gemiddeld[delta_EKR_gem$OPMERKING %in% "KRW Overig water"]
}
##output GIS
get_map_EKR_nut <- function(eag_wl=eag_wl,db_oordeel=oordeel_EKR_nut,gEAG=gEAG,gEAG_new=gEAG_new,gEAG_sf=gEAG_sf,maptype="vastgesteld"){
if(maptype=="vastgesteld"){
map <- sp::merge(gEAG, eag_wl, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
map <- sp::merge(map, db_oordeel, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
#map_shp <- sp::merge(gEAG_sf, eag_wl, by.x = 'GAFIDENT', by.y =
# 'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
#map_shp <- sp::merge(map_shp, db_EKR, by.x = 'GAFIDENT', by.y =
# 'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
}
if(maptype=="nieuw"){
map <- sp::merge(gEAG_new, eag_wl, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
map <- sp::merge(map, db_oordeel, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
#map_shp <- sp::merge(gEAG_sf, eag_wl, by.x = 'GAFIDENT', by.y =
# 'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
#map_shp <- sp::merge(map_shp, db_EKR, by.x = 'GAFIDENT', by.y =
# 'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
}
#map$param <- as.factor(map$oordeel_EKR_zomer_nat_P)
#map <- map[order(map$param) & !is.na(map$param) & !map$param == "",]
#map$param <- fct_drop(map$param)
#set colors
kleuren <- data.frame(cat=c("onbekend","beide onwenselijk", "beide stabiel", "beide wenselijk", "ongelijke trend"),
kleur=c("#bcbcbc","#fc2f2f","#e5e500","#0ed90e","#f3c99d"))
#col <- kleuren$kleur
#lab <- kleuren$cat
#pal <- colorFactor(palette = col, levels = lab)
#pal_shp <- colorFactor(palette = col, levels = lab)
#set colors stat
#map$param_stat <- as.factor(map$stat_zomer_nat_P)
#map <- map[order(map$param_stat) & !is.na(map$param_stat) & !map$param_stat == "",]
#map$param_stat <- fct_drop(map$param_stat)
kleuren_stat <- data.frame(cat=c("geen","log", "normal"), kleur_stat=c("#ffffff","#b45f06","#0000FF"))
map <- map%>%left_join(kleuren,c("oordeel_EKR_zomer_nat_P"="cat"))
map <- map%>%left_join(kleuren_stat,c("stat_zomer_nat_P"="cat"))
opac <- ifelse(is.na(map$oordeel_EKR_zomer_nat_P),0,1)
  # build the map
m <- leaflet() %>%
#flyToBounds(lat1 = 52.089900,lat2=52.443641,lng1 = 4.706125,lng2=5.284069)%>%
fitBounds(lat1 = 52.13,lat2=52.43,lng1 = 4.75,lng2=5.2)%>%
addPolygons(data = map, layerId = map$GAFIDENT, popup= paste("EAG naam", map$GAFNAAM.x, "<br>",
"EAGIDENT:", map$GAFIDENT, "<br>",
"Verandering:", map$oordeel_EKR_zomer_nat_P),
stroke = T, color=map$kleur_stat, opacity=opac, weight = 0.8, smoothFactor = 0.8,
                fill=T, fillColor = map$kleur, fillOpacity = opac) %>%
addLegend("bottomright", colors=kleuren$kleur, labels=kleuren$cat, opacity=1)%>%
addProviderTiles("Esri.WorldGrayCanvas")#addTiles()
m
## save html to png
saveWidget(m, "output/html/map_oordeel.html", selfcontained = FALSE)
mapshot(m, file = "output/html/map_oordeel.png")
}
get_map_verschil <- function(eag_wl=eag_wl,db_EKR=db_EKR,gEAG=gEAG,gEAG_new=gEAG_new,gEAG_sf=gEAG_sf,maptype="vastgesteld"){
if(maptype=="vastgesteld"){
map <- sp::merge(gEAG, eag_wl, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
map <- sp::merge(map, db_EKR, by.x = 'GAFIDENT', by.y =
'EAGIDENT', all.x = TRUE, duplicateGeoms = T)
map_shp <- sp::merge(gEAG_sf, eag_wl, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
map_shp <- sp::merge(map_shp, db_EKR, by.x = 'GAFIDENT', by.y =
'EAGIDENT', all.x = TRUE, duplicateGeoms = T)
}
if(maptype=="nieuw"){
map <- sp::merge(gEAG_new, eag_wl, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
map <- sp::merge(map, db_EKR, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
map_shp <- sp::merge(gEAG_sf, eag_wl, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
map_shp <- sp::merge(map_shp, db_EKR, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
}
map$param <- as.factor(map$cat_SGBP2_SGBP1)
#map$param <- as.factor(map$Deelgebied)
  map <- map[!is.na(map$param) & map$param != "", ]
map$param <- fct_drop(map$param)
map_shp$param <- as.factor(map_shp$cat_SGBP3_SGBP1)
  map_shp <- map_shp[!is.na(map_shp$param) & map_shp$param != "", ]
map_shp$param <- fct_drop(map_shp$param)
map_shp <- map_shp[!map_shp$cat_SGBP3_SGBP1 %in% "onbekend",]
#set colors
kleuren <- data.frame(cat=c("onbekend","sterke achteruitgang", "lichte achteruitgang","gelijk", "lichte vooruitgang", "sterke vooruitgang"), kleur=c("#ffffab","#ff6666","#ffd8d8","#fcfc81","#e5ffd8","#99ff66"))
#kleuren <- data.frame(cat=unique(map$Deelgebied),kleur=rainbow(n=length(unique(map$Deelgebied))))
col <- kleuren$kleur
lab <- kleuren$cat
#map_test <- map%>%left_join(kleuren,c("Deelgebied"="cat"))
map <- map%>%left_join(kleuren,c("cat_SGBP2_SGBP1" = "cat"))
map_shp <- map_shp%>%left_join(kleuren,c("cat_SGBP3_SGBP1" = "cat"))
pal <- colorFactor(palette = col, levels = lab)
pal_shp <- colorFactor(palette = col, levels = lab)
  # build the map
m <- leaflet() %>%
fitBounds(lat1 = 52.13,lat2=52.43,lng1 = 4.75,lng2=5.2)%>%
addPolygons(data = map, layerId = map$GAFIDENT, popup= paste("EAG naam", map$GAFNAAM.x, "<br>",
"EAGIDENT:", map$GAFIDENT, "<br>",
"Verandering:", map$cat_SGBP2_SGBP1),
stroke = T, color= 'grey', opacity=0.8, weight = 0.5, smoothFactor = 0.8,
fill=T, fillColor = ~pal(param), fillOpacity = 0.6) %>%
addCircleMarkers(data = map_shp,layerId=map_shp$GAFIDENT, label= paste("Verandering 2020:", map_shp$cat_SGBP3_SGBP1),
stroke = T, color= 'grey', opacity=1,
fill=T, fillColor = ~pal_shp(param), fillOpacity = 1)%>%
addLegend("bottomright", colors=col, labels=lab)%>%
addProviderTiles("Esri.WorldGrayCanvas")#addTiles()
## save html to png
saveWidget(m, "output/html/map_EKR_trend.html", selfcontained = FALSE)
mapshot(m, file = "output/html/map_EKR_trend.png")
}
get_map_EAGtype <- function(eag_wl=eag_wl,db_oordeel=oordeel_EKR_nut,gEAG=gEAG,gEAG_new=gEAG_new,gEAG_sf=gEAG_sf,maptype="vastgesteld"){
if(maptype=="vastgesteld"){
map <- sp::merge(gEAG, eag_wl, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
}
if(maptype=="nieuw"){
map <- sp::merge(gEAG_new, eag_wl, by.x = 'GAFIDENT', by.y =
'GAFIDENT', all.x = TRUE, duplicateGeoms = T)
}
#map$param <- as.factor(map$oordeel_EKR_zomer_nat_P)
#map <- map[order(map$param) & !is.na(map$param) & !map$param == "",]
#map$param <- fct_drop(map$param)
#set colors
kleuren <- data.frame(cat=c("onbekend","beide onwenselijk", "beide stabiel", "beide wenselijk", "ongelijke trend"), kleur=c("#f3c99d","#fc2f2f","#e5e500","#0ed90e","#bcbcbc"))
#col <- kleuren$kleur
#lab <- kleuren$cat
#pal <- colorFactor(palette = col, levels = lab)
#pal_shp <- colorFactor(palette = col, levels = lab)
#set colors stat
#map$param_stat <- as.factor(map$stat_zomer_nat_P)
#map <- map[order(map$param_stat) & !is.na(map$param_stat) & !map$param_stat == "",]
#map$param_stat <- fct_drop(map$param_stat)
kleuren_stat <- data.frame(cat=c("geen","log", "normal"), kleur_stat=c("#ffffff","#b45f06","#0000FF"))
map <- map%>%left_join(kleuren,c("oordeel_EKR_zomer_nat_P"="cat"))
map <- map%>%left_join(kleuren_stat,c("stat_zomer_nat_P"="cat"))
  # build the map
m <- leaflet() %>%
#flyToBounds(lat1 = 52.089900,lat2=52.443641,lng1 = 4.706125,lng2=5.284069)%>%
fitBounds(lat1 = 52.13,lat2=52.43,lng1 = 4.75,lng2=5.2)%>%
addPolygons(data = map, layerId = map$GAFIDENT, popup= paste("EAG naam", map$GAFNAAM.x, "<br>",
"EAGIDENT:", map$GAFIDENT, "<br>",
"Verandering:", map$oordeel_EKR_zomer_nat_P),
stroke = T, color=map$kleur_stat, opacity=0.6, weight = 0.8, smoothFactor = 0.8,
                fill=T, fillColor = map$kleur, fillOpacity = 0.9) %>%
    addLegend("bottomright", colors=kleuren$kleur, labels=kleuren$cat, opacity=1)%>%
addProviderTiles("Esri.WorldGrayCanvas")#addTiles()
m
## save html to png
saveWidget(m, "output/html/map_oordeel.html", selfcontained = FALSE)
mapshot(m, file = "output/html/map_oordeel.png")
}
### OLD ####
do_analyse_nuts_KNMI <- function(x=doSplit(db_nut_KNMI_means,"locatie.EAG")[[1]],iterator="locatie.EAG",data_min=10){
  if(nrow(x) > data_min){
print(unique(x[,iterator]))
    winter <- c(12,1,2) # winter months
voorjaar <- c(3,4,5)
zomer <- c(6,7,8)
najaar <- c(9,10,11)
    ## 15 areas -> mail Maarten
    ## add the EAG names to the figure
    zeer_droog_grens <- -2 # very dry: <= -2 mm/day
    nat_grens <- 2 # dry: < 2 mm/day
    zeer_nat_grens <- 5 # wet: < 5 mm/day, very wet: >= 5 mm/day
x <- x%>%mutate(seizoen=ifelse(maand %in% winter,"winter",
ifelse(maand %in% voorjaar,"voorjaar",
ifelse(maand %in% zomer,"zomer",
ifelse(maand %in% najaar,"najaar","")))))
x <- x%>%mutate(neerslagperiode=ifelse(neerslagoverschot_mean <= zeer_droog_grens,"zeer droog",
ifelse(neerslagoverschot_mean < nat_grens,"droog",
ifelse(neerslagoverschot_mean < zeer_nat_grens,"nat","zeer nat"))))
x$cat <- paste0(x$seizoen,"_",x$neerslagperiode)
x$cat <- factor(x$cat,levels=c("winter_zeer nat","winter_nat","winter_droog","winter_zeer droog",
"voorjaar_zeer nat","voorjaar_nat","voorjaar_droog","voorjaar_zeer droog",
"zomer_zeer nat","zomer_nat","zomer_droog","zomer_zeer droog",
"najaar_zeer nat","najaar_nat","najaar_droog","najaar_zeer droog"))
    x <- x%>%mutate(jaar_alt=ifelse(maand==12,jaar+1,jaar)) # count December as the start of the following year!
lm_df_out <- NULL
for(i in unique(x$cat)){
for(j in unique(x$fewsparametercode)){
x_sel <- x%>%filter(cat %in% i,fewsparametercode %in% j)
if(nrow(x_sel)==0){next}
r_sq <- round(summary(lm(meetwaarde~jaar_alt, data=x_sel))$r.squared,3)
p <- round(data.frame(summary(lm(meetwaarde~jaar_alt, data=x_sel))$coefficients)[2,4],3)
sl <- round(data.frame(summary(lm(meetwaarde~jaar_alt, data=x_sel))$coefficients)[2,1],3)
lm_df_out <-rbind(lm_df_out,data.frame(cat=i,fewsparametercode=j,p=p,r_squared=r_sq,slope=sl))
}
}
    lm_df_out$p[!is.na(lm_df_out$p) & !(NaN %in% lm_df_out$p) & lm_df_out$p <= 0.05] <- paste0(lm_df_out$p[!is.na(lm_df_out$p) & !(NaN %in% lm_df_out$p) & lm_df_out$p <= 0.05],"*")
lm_df_out$p_text <- paste0("p: ",lm_df_out$p)
lm_df_out$x_pos <- min(x$jaar_alt)+1.5
lm_df_out <- data.frame(fewsparametercode=c("Ptot","Ntot"),y_pos=c(max(x$meetwaarde[x$fewsparametercode %in% "Ptot"]),max(x$meetwaarde[x$fewsparametercode %in% "Ntot"])))%>%
right_join(lm_df_out,"fewsparametercode")
x_add <- x%>%left_join(lm_df_out,c("cat", "fewsparametercode"))
g <- ggplot(x_add,aes(jaar_alt,meetwaarde, group=cat))+theme_bw()
g <- g + geom_point()
if(length(unique(x[,iterator]))>1){
g <- g + ggtitle("alle EAG's samen")
}else{ g <- g + ggtitle(unique(x[,iterator]))
}
g <- g + facet_grid(fewsparametercode~cat,scales="free_y")
g <- g + geom_smooth(formula=y~x,method="lm")
g <- g + scale_x_continuous(breaks=min(x$jaar):max(x$jaar))
g <- g + geom_text(data=lm_df_out,aes(x=x_pos,y=y_pos,label=(paste0("p: ",p,"; r^2: ",r_squared)),color="blue",size=2),show.legend = F)
if(length(unique(x[,iterator]))>1){
ggsave(paste0("output/nutrienten/nutrienten_per_cat_",Sys.Date(),".png"),g,device="png",width=35,height=10)
}else{
ggsave(paste0("output/nutrienten/",iterator,"/nutrienten_per_cat_",unique(x[,iterator]),"_",Sys.Date(),".png"),g,device="png",width=15,height=10)
}
x_out <- x_add%>%dplyr::select(-locatiecode,-meetwaarde)%>%distinct()
return(x_out)
}
}
do_trendanalyse <- function(x=doSplit(EAG_scores, "EAGIDENT")[[110]]){
x_sel <- x%>%filter(GHPR %in% "Ov. waterflora")%>%select(EAGIDENT, jaar, Numeriekewaarde, GEP, GEP_2022,CODE)
x_sel$jaar_iter <- x_sel$jaar - min(x_sel$jaar) + 1
lm_x_sel <- lm(jaar_iter~Numeriekewaarde,x_sel)
lm_xlog_sel <- lm(jaar_iter~log(Numeriekewaarde),x_sel)
x_out <- data.frame(EAGIDENT=unique(x_sel$EAGIDENT), type=c("lm", "log_lm"), intercept= c(lm_x_sel$coefficients[1],lm_xlog_sel$coefficients[1]),slope=c(lm_x_sel$coefficients[2],lm_xlog_sel$coefficients[2]))
return(x_out)
}
 |
2edb61462fe98f511f45b00362587e65365ccba9 | f1d6cbb0d0e0518494be620235385d4840ee248a | /PhysicalRecognition.R | 4ab46c524b390ab262b35625830fc5181f63fd2f | [] | no_license | renefreichel/personality_prediction | 82c09d2593dacc4d09d1041730a1a2555f6964b5 | d9db65337bbe389a81bb93b831da316e5475801e | refs/heads/master | 2023-01-04T17:32:24.593287 | 2020-10-27T08:09:18 | 2020-10-27T08:09:18 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 27,199 | r | PhysicalRecognition.R |
# %% [markdown]
# # Team Skip, Cathelijne & René
# %% [markdown]
# In this notebook we implement various methods to find a good model. It's important to point out that for the first-round submission we only included the code of the final model, not all the different things we tried out. Eventually we ended up at rank 3 of the leaderboard with our LDA analysis in round 1.
#
# For a better overview, we removed the provided descriptions of some basic steps. Some code elements stem from the "quickstart", "feature extraction from signals" and "voice gender recognition" notebooks.
# %% [markdown]
# ## Importing data and organizing files
#
# First, we import the data, bring it into the right format and plot the data as a basic check.
# %% [code]
## Loading packages
library(tidyverse)
library(ggplot2)
# Important pre-processing steps
list.files(path = "../input")
# Make sure the data is available
if (length(list.files("../input", pattern = "recognition")) > 0) {
# Copy all files to the current directory
system("cp -r ../input/bda-2020-physical-activity-recognition/* ./")
} else {
# Download data for this competition
data_url = "https://phonesensordata.netlify.app/Archive.zip"
download.file(data_url, "Archive.zip")
# Unzip all files in the current directory
unzip("Archive.zip")
}
# list files in the current working directory
list.files()
# show the content of the labels file in a separate window
file.show("activity_labels.txt")
# 1. Reading the data
act_labels = read_delim("activity_labels.txt"," ",col_names=F,trim_ws=T)
act_labels = act_labels %>% select(X1,X2)
act_labels
labels <- read_delim("./RawData/Train/labels_train.txt", " ", col_names = F)
colnames(labels) <- c('trial', 'userid', 'activity', 'start', 'end')
labels <- labels %>% mutate(activity = act_labels$X2[activity])
print(labels)
# identify the file name and extract the 'username' (participant ID) and 'expname' (experimental run)
filename = "RawData/Train/gyro_exp01_user01.txt"
username = gsub(".+user(\\d+).+", "\\1", filename) %>% as.integer()
expname = gsub(".+exp(\\d+).+", "\\1", filename) %>% as.integer()
# import the data from the file
user01 <- read_delim(filename, " ", col_names = F)
#head(user01)
options(repr.plot.width=12)
plot.ts(user01, xlab="Sample number")
# 1.3 Merging Signals and Labels
print(labels[1:2,])
# Add the sequence start:end to each row in a list.
# The result is a nested table:
sample_labels_nested <-
labels %>%
rowwise() %>% # do next operation(s) rowwise
mutate(sampleid = list(start:end)) %>%
ungroup()
# Check the resulting table:
print(sample_labels_nested, n=6)
# Unnest the nested tabel.
sample_labels <-
sample_labels_nested %>%
# Rows are segments, we need to keep track of different segements
mutate(segment = row_number() ) %>%
# Expand the data frame to one sample per row
unnest() %>%
# Remove columns we don't need anymore
select(-start, -end)
# Check the result (first few rows are not interesting; rows 977-990 are)
print(sample_labels[977:990, ])
user_df <-
# Store signals user01 in a data frame, with 'userid' and 'trial'
data.frame(userid = username, trial = expname, user01) %>%
# Add 'sampleid' for matching the sampleid's in 'sample_labels'.
  # The first sample in the user01 signals always has sampleid=0; the last
  # therefore has sampleid nrow(user01)-1.
mutate(sampleid = 0:(nrow(user01)-1) ) %>%
# Add the labels stored in sample_labels
left_join(sample_labels)
# Check the result (first few rows are not interesting; the following are)
print(user_df[1227:1239, ])
spectrum(user_df$X3,plot=FALSE, span = 11, log = "n")$spec
spectrum(user_df$X3, span = 11, log = "n")
options(repr.plot.width=15) # change plot width for nicer output
user_df %>%
ggplot(aes(x = sampleid, y = X1, col = factor(activity), group=segment)) +
geom_line()
# %% [markdown]
# ## Feature Extraction
#
# Here we use the given code for some time domain features and include code for a function that extracts spectrum features and the mode.
# %% [code]
# 2.1 Time domain features
user_df %>%
# change 7986 to 8586 to see shifted walk cycle
dplyr::filter(activity == "WALKING", segment == 13,
7596 < sampleid & sampleid < 7986) %>%
ggplot(aes(x = sampleid %% 54, y = X1, group = sampleid %/% 54,
col = factor(sampleid %/% 54))) + geom_line()
user_df %>%
# change 7986 to 8586 to see shifted walk cycle
dplyr::filter(activity == "WALKING", segment == 13,
7596 < sampleid & sampleid < 8586) %>%
ggplot(aes(x = sampleid %% 54, y = X1, group = sampleid %/% 54,
col = factor(sampleid %/% 54))) + geom_line()
user_df %>%
ggplot(aes(X1)) +
geom_histogram(bins=40, fill=1, alpha=0.5) +
geom_histogram(aes(X2), bins=40, fill = 2, alpha=0.5) +
geom_histogram(aes(X3), bins=40, fill = 4, alpha=0.5) +
facet_wrap(~activity, scales = "free_y")
##check spectrum features
spectrum(user_df$X1, plot = FALSE)
### own function to get the mode
getmode <- function(x) {
uniqv <- unique(x)
uniqv[which.max(tabulate(match(x, uniqv)))]
}
### own function to get entropy
getentropy <- function(x) {
probs <- prop.table(table(x))
-sum(probs * log2(probs))
}
# from Group 8 to get the spectral peak
getpeak <- function(x) {
spec <- spectrum(x, log = "y",plot=FALSE)
peak <- spec$freq[which.max(spec$spec)]
return(peak)
}
# %% [markdown]
# ## Including new features
#
# Before selecting the relevant features, we made a list of the features we aimed to include. Statistical features:
#
# - Means
# - Medians
# - modes
# - minima
# - maxima
# - absolute median values
# - standard deviations
# - root means square
# - 25th and 75th quantiles
# - Interquartile range
# - skewness and kurtosis
# - entropy (see function definition above)
# - autocorrelations with lag 1 up to lag 4
# - autocorrelations with lag 1 from the other predictors
# - Correlations between X1, X2, X3
# - Zerocrossings
#
# Moreover, we included spectral features:
# - RMS frequency
# - Center frequency
# - Spectral minima, maxima, SDs
# - Energy
# - SRA
# - spectral peak
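# %% [markdown]
# As a quick sanity check, we can run the helper functions defined above on a small,
# hypothetical toy signal (not part of the competition data): a short discrete vector
# for getmode and getentropy, and a 4 Hz sine sampled at 50 Hz for getpeak.
# %% [code]
toy_discrete <- c(1, 2, 2, 3)
getmode(toy_discrete)     # 2: the most frequent value
getentropy(toy_discrete)  # 1.5 bits: -sum(p * log2(p)) with p = (1/4, 1/2, 1/4)
toy_sine <- sin(2 * pi * 4 * (0:255) / 50)
getpeak(toy_sine)         # frequency (cycles per sample) at the spectral peak, near 4/50 = 0.08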
# %% [code]
usertimedom <- user_df %>%
  # add an epoch ID variable (one epoch = 2.56 sec)
mutate(epoch = sampleid %/% 128) %>%
# extract statistical features from each epoch
group_by(epoch) %>%
summarise(
# The epoch's activity label is the mode of 'activity'
activity = names(which.max(table(c("-", activity)))),
# keep the starting sampleid of epoch as a time marker
sampleid = sampleid[1],
### Features
# Mean, median, mode, min and max
m1 = mean(X1),
m2 = mean(X2),
m3 = mean(X3),
med1 = median(X1),
med2 = median(X2),
med3 = median(X3),
mod1 = getmode(X1),
mod2 = getmode(X2),
mod3 = getmode(X3),
min1 = min(X1),
min2 = min(X2),
min3 = min(X3),
max1 = max(X1),
max2 = max(X2),
max3 = max(X3),
# Median absolute value
absmed1 = mad(X1),
absmed2 = mad(X2),
absmed3 = mad(X3),
# Standard deviations
sd1 = sd(X1),
sd2 = sd(X2),
sd3 = sd(X3),
# Root mean square
rms1 = sqrt(mean(X1^2)),
rms2 = sqrt(mean(X2^2)),
rms3 = sqrt(mean(X3^2)),
# Quantiles
q1_25 = quantile(X1, .25),
q2_25 = quantile(X2, .25),
q3_25 = quantile(X3, .25),
q1_75 = quantile(X1, .75),
q2_75 = quantile(X2, .75),
q3_75 = quantile(X3, .75),
# Interquartile range
IQR1 = IQR(X1),
IQR2 = IQR(X2),
IQR3 = IQR(X3),
# Skewness and kurtosis
skew1 = e1071::skewness(X1),
skew2 = e1071::skewness(X2),
skew3 = e1071::skewness(X3),
kurt1 = e1071::kurtosis(X1),
kurt2 = e1071::kurtosis(X2),
kurt3 = e1071::kurtosis(X3),
# Entropy
ent1 = getentropy(X1),
ent2 = getentropy(X2),
ent3 = getentropy(X3),
# Power (will be very similar to mean...)
Pow1 = mean(X1^2),
Pow2 = mean(X2^2),
Pow3 = mean(X3^3),
# peak spectrum
spec_peak1 = getpeak(X1),
spec_peak2 = getpeak(X2),
spec_peak3 = getpeak(X3),
# Autocorrelations with lag1 up to lag4
AR1.1 = cor(X1, lag(X1), use = "pairwise"),
AR1.2 = cor(X1, lag(X1, n = 2), use = "pairwise"),
AR1.3 = cor(X1, lag(X1, n = 3), use = "pairwise"),
AR1.4 = cor(X1, lag(X1, n = 4), use = "pairwise"),
AR2.1 = cor(X2, lag(X2), use = "pairwise"),
AR2.2 = cor(X2, lag(X2, n = 2), use = "pairwise"),
AR2.3 = cor(X2, lag(X2, n = 3), use = "pairwise"),
AR2.4 = cor(X2, lag(X2, n = 4), use = "pairwise"),
AR3.1 = cor(X3, lag(X3), use = "pairwise"),
AR3.2 = cor(X3, lag(X3, n = 2), use = "pairwise"),
AR3.3 = cor(X3, lag(X3, n = 3), use = "pairwise"),
AR3.4 = cor(X3, lag(X3, n = 4), use = "pairwise"),
# Autocorrelations with lag1 from other predictors
AR12 = cor(X1, lag(X2), use = "pairwise"),
AR13 = cor(X1, lag(X3), use = "pairwise"),
AR21 = cor(X2, lag(X1), use = "pairwise"),
AR23 = cor(X2, lag(X3), use = "pairwise"),
AR31 = cor(X3, lag(X1), use = "pairwise"),
AR32 = cor(X3, lag(X2), use = "pairwise"),
# Correlations between X1, X2, X3
C1_2 = cor(X1,X2, use = "pairwise"),
C1_3 = cor(X1,X3, use = "pairwise"),
C2_3 = cor(X2,X3, use = "pairwise"),
# Zerocrossings
      zcr1 = 0.5 * mean(abs(diff(sign(X1)))),
      zcr2 = 0.5 * mean(abs(diff(sign(X2)))),
      zcr3 = 0.5 * mean(abs(diff(sign(X3)))),
# keep track of signal lengths
n_samples = n()
)
head(usertimedom)
glimpse(usertimedom)
# obtain all files
dir("./RawData/Train/", pattern = "^acc")
extractTimeDomainFeatures <- function(filename, sample_labels) {
# extract user and experimental run ID's from file name
username = gsub(".+user(\\d+).+", "\\1", filename) %>% as.numeric()
expname = gsub( ".+exp(\\d+).+", "\\1", filename) %>% as.numeric()
# import the sensor signals from the file
user01 <- read_delim(filename, " ", col_names = F, progress = TRUE,
col_types = "ddd")
# merge signals with labels
user_df <-
data.frame(userid = username, trial = expname, user01) %>%
mutate(sampleid = 0:(nrow(user01)-1) ) %>%
left_join(sample_labels, by = c('userid','trial','sampleid'))
# split in epochs of 128 samples and compute features per epoch
usertimedom <- user_df %>%
    # add an epoch ID variable (one epoch = 2.56 sec)
mutate(epoch = sampleid %/% 128) %>%
# extract statistical features from each epoch
group_by(epoch) %>%
summarise(
# keep track of user and experiment information
user_id = username,
exp_id = expname,
# epoch's activity labels and start sample
activity = names(which.max(table(c("-", activity)))),
sampleid = sampleid[1],
### Features
# Mean, median, mode, min and max
#m1 = mean(X1),
#m2 = mean(X2),
#m3 = mean(X3),
#med1 = median(X1),
#med2 = median(X2),
#med3 = median(X3),
#mod1 = getmode(X1),
#mod2 = getmode(X2),
#mod3 = getmode(X3),
#min1 = min(X1),
#min2 = min(X2),
#min3 = min(X3),
#max1 = max(X1),
#max2 = max(X2),
#max3 = max(X3),
# Standard deviations
sd1 = sd(X1),
sd2 = sd(X2),
sd3 = sd(X3),
# Variance
#var1 = var(X1),
#var2 = var(X2),
#var3 = var(X3),
# Quantiles
#q1_25 = quantile(X1, .25),
#q2_25 = quantile(X2, .25),
#q3_25 = quantile(X3, .25),
#q1_75 = quantile(X1, .75),
#q2_75 = quantile(X2, .75),
#q3_75 = quantile(X3, .75),
# Interquartile range
#IQR1 = IQR(X1),
#IQR2 = IQR(X2),
#IQR3 = IQR(X3),
# median absolute deviation (called median absolute value in article Tom found)
#absmed1 = mad(X1),
#absmed2 = mad(X2),
#absmed3 = mad(X3),
# mean absolute value
absmean1 = mean(abs(X1)),
absmean2 = mean(abs(X2)),
absmean3 = mean(abs(X3)),
#RMS
rms1 = sqrt(mean(X1^2)),
rms2 = sqrt(mean(X2^2)),
rms3 = sqrt(mean(X3^2)),
      # Zero crossings (the average number of times the sign of the signal changes)
#zcr1 = 0.5 * mean(abs(sign(X1*(sampleid+1)) - sign(X1*(sampleid)))),
#zcr2 = 0.5 * mean(abs(sign(X2*(sampleid+1)) - sign(X2*(sampleid)))),
#zcr3 = 0.5 * mean(abs(sign(X3*(sampleid+1)) - sign(X3*(sampleid)))),
# Skewness and kurtosis
skew1 = e1071::skewness(X1),
skew2 = e1071::skewness(X2),
skew3 = e1071::skewness(X3),
kurt1 = e1071::kurtosis(X1),
kurt2 = e1071::kurtosis(X2),
kurt3 = e1071::kurtosis(X3),
#crest factor
CF1 = max(abs(X1))/sqrt(mean(X1^2)),
CF2 = max(abs(X2))/sqrt(mean(X2^2)),
CF3 = max(abs(X3))/sqrt(mean(X3^2)),
#spectral features
#RMS frequency
spec_RMS1 = sqrt(mean(spectrum(X1,plot=FALSE)$spec^2)),
spec_RMS2 = sqrt(mean(spectrum(X2,plot=FALSE)$spec^2)),
spec_RMS3 = sqrt(mean(spectrum(X3,plot=FALSE)$spec^2)),
#center frequency
cenF1 = sqrt(spectrum(X1,plot=FALSE)$freq[which.max(spectrum(X1,plot=FALSE)$spec)]
*spectrum(X1,plot=FALSE)$freq[which.min(spectrum(X1,plot=FALSE)$spec)]),
      cenF2 = sqrt(spectrum(X2,plot=FALSE)$freq[which.max(spectrum(X2,plot=FALSE)$spec)]
                   *spectrum(X2,plot=FALSE)$freq[which.min(spectrum(X2,plot=FALSE)$spec)]),
      cenF3 = sqrt(spectrum(X3,plot=FALSE)$freq[which.max(spectrum(X3,plot=FALSE)$spec)]
                   *spectrum(X3,plot=FALSE)$freq[which.min(spectrum(X3,plot=FALSE)$spec)]),
#spec_mean1 = mean(spectrum(X1,plot=FALSE)$spec),
#spec_mean2 = mean(spectrum(X2,plot=FALSE)$spec),
#spec_mean3 = mean(spectrum(X3,plot=FALSE)$spec),
#spec_median1 = median(spectrum(X1,plot=FALSE)$spec),
#spec_median2 = median(spectrum(X2,plot=FALSE)$spec),
#spec_median3 = median(spectrum(X3,plot=FALSE)$spec),
#spec_mode1 = getmode(spectrum(X1,plot=FALSE)$spec),
#spec_mode2 = getmode(spectrum(X2,plot=FALSE)$spec),
#spec_mode3 = getmode(spectrum(X3,plot=FALSE)$spec),
#spec_sd1 = sd(spectrum(X1,plot=FALSE)$spec),
#spec_sd2 = sd(spectrum(X2,plot=FALSE)$spec),
#spec_sd3 = sd(spectrum(X3,plot=FALSE)$spec),
#spec_min1 = min(spectrum(X1,plot=FALSE)$spec, na.rm = TRUE),
#spec_min2 = min(spectrum(X2,plot=FALSE)$spec, na.rm = TRUE),
#spec_min3 = min(spectrum(X3,plot=FALSE)$spec, na.rm = TRUE),
#spec_max1 = max(spectrum(X1,plot=FALSE)$spec, na.rm = TRUE),
#spec_max2 = max(spectrum(X2,plot=FALSE)$spec, na.rm = TRUE),
#spec_max3 = max(spectrum(X3,plot=FALSE)$spec, na.rm = TRUE),
# Entropy
#ent1 = getentropy(X1),
#ent2 = getentropy(X2),
#ent3 = getentropy(X3),
# Power (will be very similar to mean...)
#Pow1 = mean(X1^2),
#Pow2 = mean(X2^2),
#Pow3 = mean(X3^3),
#Energy
Ene1 = sum(X1^2),
Ene2 = sum(X2^2),
Ene3 = sum(X3^2),
#SRA
SRA1 = mean(sqrt(abs(X1))),
SRA2 = mean(sqrt(abs(X2))),
SRA3 = mean(sqrt(abs(X3))),
# Autocorrelations with lag1 up to lag4
AR1.1 = cor(X1, lag(X1), use = "pairwise"),
#AR1.2 = cor(X1, lag(X1, n = 2), use = "pairwise"),
#AR1.3 = cor(X1, lag(X1, n = 3), use = "pairwise"),
#AR1.4 = cor(X1, lag(X1, n = 4), use = "pairwise"),
AR2.1 = cor(X2, lag(X2), use = "pairwise"),
#AR2.2 = cor(X2, lag(X2, n = 2), use = "pairwise"),
#AR2.3 = cor(X2, lag(X2, n = 3), use = "pairwise"),
#AR2.4 = cor(X2, lag(X2, n = 4), use = "pairwise"),
AR3.1 = cor(X3, lag(X3), use = "pairwise"),
#AR3.2 = cor(X3, lag(X3, n = 2), use = "pairwise"),
#AR3.3 = cor(X3, lag(X3, n = 3), use = "pairwise"),
#AR3.4 = cor(X3, lag(X3, n = 4), use = "pairwise"),
# Autocorrelations with lag1 from other predictors
#AR12 = cor(X1, lag(X2), use = "pairwise"),
#AR13 = cor(X1, lag(X3), use = "pairwise"),
#AR21 = cor(X2, lag(X1), use = "pairwise"),
#AR23 = cor(X2, lag(X3), use = "pairwise"),
#AR31 = cor(X3, lag(X1), use = "pairwise"),
#AR32 = cor(X3, lag(X2), use = "pairwise"),
# Correlations between X1, X2, X3
C1_2 = cor(X1,X2, use = "pairwise"),
C1_3 = cor(X1,X3, use = "pairwise"),
C2_3 = cor(X2,X3, use = "pairwise"),
n_samples = n()
)
usertimedom
}
# %% [code]
filename = "./RawData/Train/acc_exp01_user01.txt"
df = extractTimeDomainFeatures(filename, sample_labels)
print(df)
# run the feature extractor on all accelerometer files
filenames <- dir("./RawData/Train/", "^acc", full.names = TRUE)
# map_dfr runs `extractTimeDomainFeatures` on all elements in
# filenames and binds results row wise
myData = suppressMessages(map_dfr(filenames, extractTimeDomainFeatures, sample_labels))
# Check the result
#print(myData)
# run the feature extractor on all gyroscope files
filenames <- dir("./RawData/Train/", "^gyro", full.names = TRUE)
# map_dfr runs `extractTimeDomainFeatures` on all elements in
# filenames and binds results row wise
myData_gyro = suppressMessages(map_dfr(filenames, extractTimeDomainFeatures, sample_labels))
# Check the result
#print(myData_gyro)
all_data = merge(myData, myData_gyro, by=c("epoch", "user_id", "exp_id", "activity", "sampleid", "n_samples"), suffixes = c("_acc","_gyro"))
head(all_data)
colnames(all_data)
glimpse(all_data)
# %% [markdown]
# ## Model Fitting
#
# Here we fit various models and select relevant features.
#
# Importantly,
#
# 1. we tried predictions using the entire dataset, i.e. with n_samples != 128 and including the activity "-"; however, this led to worse performance. Thus, we decided to delete epochs with n_samples < 128 and all unlabelled data.
#
# 2. we examined near-zero variance for all features; this showed no relevant predictors to remove. If there are no predictors to remove, the nzv object is empty and running the removal step on the (non-existing) indices leads to an error. Thus, we removed that step from the code and added the explanation here.
#
# 3. we removed highly correlated predictors.
#
# 4. we used 5-fold cross-validation. We chose the number 5 based on a short literature review and the sample size of the data.
#
# 5. we experimented with LDA, QDA, KNN and multinomial models
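#
# As a small, hypothetical illustration of step 3 (not the competition data), consider a tiny
# feature matrix with two nearly collinear columns; caret::findCorrelation flags one of them
# for removal at the same 0.8 cutoff used below:
# %% [code]
set.seed(1)
toy_x <- rnorm(100)
toy_feat <- data.frame(a = toy_x,
                       b = toy_x + rnorm(100, sd = 0.01),  # near-duplicate of column a
                       c = rnorm(100))                     # independent feature
caret::findCorrelation(cor(toy_feat), cutoff = 0.8)        # index of the column to drop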
# %% [code]
#First, we will remove redundant features.
all_data = merge(myData, myData_gyro, by=c("epoch", "user_id", "exp_id", "activity", "sampleid", "n_samples"), suffixes = c("_acc","_gyro"))
##delete spec_gyro
#all_data <- all_data[,!grepl("spec.*gyro", colnames(all_data))]
## keep only complete epochs (n_samples == 128) and drop all unlabelled data
all_data <- all_data %>%
filter(n_samples == 128 & activity != '-')
# Near zero variance for all features, first 6 columns skipped
#nzv <- caret::nearZeroVar(all_data[,-c(1:6)]) + 6
all_data2 <- all_data#[,-nzv]
# Correlated features
R <- cor(all_data2[7:ncol(all_data2)])
cor_ind <- caret::findCorrelation(R, cutoff = 0.8)
dplyr::glimpse(cor_ind)
length(cor_ind)
### 96 features have very high correlations (which makes sense, as they are statistical properties with many linear dependencies); these will be removed from the predictor set
# account for that the first 6 rows must be left in the data
cor_ind2 <- cor_ind + 6
# removing the correlated features
all_data3 <- all_data2[,-cor_ind2]
glimpse(all_data3)
# Cross-validation
trcntr = caret::trainControl('cv', number = 5, p=0.85)
## LDA, QDA, multinomial
# LDA
fit_lda = caret::train(activity ~ ., data=all_data3[,-c(1:3,5,6)], method="lda", trControl = trcntr)
fit_lda
#QDA
#fit_qda = caret::train(activity ~ ., data=all_data3[,-c(1:3,5,6)], method="qda", trControl = trcntr)
#fit_qda
#this gives the error that some group is too small for a qda
# Multinomial regression
fit_multi = caret::train(activity ~ ., data=all_data3[,-c(1:3,5,6)], method="multinom", trControl = trcntr)
fit_multi
# KNNs
grid = expand.grid(k = c(6,7,8,9,10))
knns_fit = train(activity ~ ., data=all_data3[,-c(1:3,5,6)], method="knn", preProcess = c("scale"), trControl = trcntr, tuneGrid = grid)
knns_fit
#After the first round of model fitting, multinomial logistic regression (.88) and knn (.86) worked best, considering accuracy only. This was done without thinking about the potential of having way too many predictors. However, it is important to take this into account, because otherwise overfitting will be a problem. So in the next block we choose predictors through stepwise analysis of the models we already fitted.
## stepwise LDA, QDA, Multinomial regression
##LDA & QDA
# LDA
fit_lda_step = caret::train(activity ~ ., data=all_data3[,-c(1:3,5,6)], method="stepLDA", trControl = trcntr)
fit_lda_step
# for completeness' sake we keep the code in, but since some groups are too small this will not be followed up on
# fit_qda_step = caret::train(activity ~ ., data=all_data3[,-c(1:3,5,6)], method="stepQDA", trControl = trcntr)
# fit_qda_step
# Multinomial regression
fit_multi_step = caret::train(activity ~ ., data=all_data3[,-c(1:3,5,6)], method="multinom", direction="forward", trControl = trcntr)
fit_multi_step
## KNN
# Another option that was explained in the lectures is the unscaled k-nearest-neighbours approach.
# Earlier we only tried the scaled option, so we add the unscaled option here to cover the whole range of
# options and to be sure that we didn't leave out a potentially effective model.
#knn not scaled
knn_fit = train(activity ~ ., data=all_data3[,-c(1:3,5,6)], method="knn", trControl = trcntr, tuneGrid = grid)
knn_fit
## Plot
## Visualize model performance differences across all fitted models
# Put all fitted models in a named list (if necessary
# change the names to match the names you used to store
# the models in.)
models = list(knn = knn_fit, multi_s = fit_multi_step,lda_s = fit_lda_step,knns = knns_fit, multi = fit_multi,lda = fit_lda)
# Extract the cross-validated accuracies from each model
Acc = sapply(models, function(mdl) max(mdl$results$Accuracy))
# make a barplot with only the best performing model in red
color = 1 + (Acc >= max(Acc))
barplot(Acc, horiz=T, las=1, col = color, main = "accuracy per model",xlab = "accuracy", ylab = "model name", xlim = c(0:1))
best_model = fit_multi_step
#From this we conclude that the model with the red bar (multinomial regression) has the highest
# accuracy, so this is the model that we use to predict the unknown instances of the data.
# %% [markdown]
# ## Visualisation of Model performance after stepwise regression
# %% [code]
barplot(Acc, horiz=T, las=1, col = color, main = "accuracy per model",xlab = "accuracy", ylab = "model name", xlim = c(0:1))
# %% [markdown]
# ## Submissions
# %% [code]
# 5. Submissions
#The test data can be imported in the same way as the training data; you only have to change `Train` to `Test` in the directory path:
filenames_test = dir("./RawData/Test/", "^acc", full.names = TRUE)
# map_dfr runs `extractTimeDomainFeatures` on all elements in
# filenames and binds results row wise
test_acc = map_dfr(filenames_test, extractTimeDomainFeatures, sample_labels)
# repeat the same extraction for the gyroscope files
filenames_test <- dir("./RawData/Test/", "^gyro", full.names = TRUE)
# map_dfr runs `extractTimeDomainFeatures` on all elements in
# filenames and binds results row wise
test_gyro = map_dfr(filenames_test, extractTimeDomainFeatures, sample_labels)
test_data = merge(test_acc, test_gyro, by=c("epoch", "user_id", "exp_id", "activity", "sampleid", "n_samples"), suffixes = c("_acc","_gyro"))
glimpse(test_data)
##see whether there are NA's for features in the testdata
colSums(is.na(test_data))
sum(colSums(is.na(test_data)))
##The AR features contain NA's; as these features are used in the model, we fill these NA's with the column medians
for(i in 1:ncol(test_data)){
test_data[is.na(test_data[,i]), i] <- median(test_data[,i], na.rm = TRUE)
}
colSums(is.na(test_data))
sum(colSums(is.na(test_data)))
# Predictions based on the stepwise multinomial regression model
pred <- predict(best_model, new = test_data)
predictions <- test_data %>%
mutate(activity = pred)
## Formatting the submission file
#To help you turn your predictions into the right format, the following code can be used. Here it is executed on the training set data frame, but the same can be applied to the test set data frame.
predictions %>%
# prepend "user" and "exp" to user_id and exp_id
mutate(
##credits to group 7 for this piece of code
user_id = case_when(user_id < 10 ~ paste("user0", user_id, sep=""),TRUE ~ paste("user", user_id, sep="")),
exp_id = case_when(exp_id < 10 ~ paste("exp0", exp_id, sep=""), TRUE ~ paste("exp", exp_id, sep=""))
) %>%
# unite columns user_id, exp_id and sampleid into a string
# separated by "_" and store it in the new variable `Id`
unite(Id, user_id, exp_id, sampleid) %>%
# retain only the `Id` and predictions
select(Id, Predicted = activity) %>%
# write to file
write_csv("test_set_predictions.csv")
# Check the result
file.show("test_set_predictions.csv")
# %% [markdown]
# ## Final comments
#
# We spent a significant amount of time on the modelling part (running QDA, LDA, KNN and multinomial models). Regarding the features, we initially tried to use many statistical and spectral features. Our literature review on other possible features led us to more creative and sophisticated features, such as (i) the Mel frequency cepstral coefficients (MFCC) and (ii) Perceptive Linear Prediction (PLP) features. However, the extraction of those more advanced features requires external libraries in R - thus, we could not use them here.
#
# %% [markdown]
# ## Other notes.
#
# Here we will add notes from the notebook that may still be helpful.
# %% [code]
##features by previous research
##acc X1 6
#SD -> done
#correlation coefficient
#Kurtosis -> done
#RMS -> done
#SRA -> done
#Frequency Center
##acc X2 6
#SD -> done
#Skewness -> done
#Kurtosis -> done
#RMS -> done
#SRA -> done
#RMS frequency
##acc X3 7
#Mean Abolute value -> done
#SD -> done
#Correlation coefficient
#Skewness -> done
#Energy -> done
#SRA -> done
#Frequency center
##gyro X1 8
#SD -> done
#Median -> done
#Correlation coefficient
#Skewness -> done
#RMS -> done
#Energy -> done
#Frequency Center
#RMS Frequency
##gyro X2 7
#SD -> done
#Skewness -> done
#Kurtosis -> done
#Energy -> done
#SRA -> done
#Crest factor
#RMS frequency
##gyro X3 7
#SD -> done
#correlation coefficient
#Kurtosis -> done
#RMS -> done
#Energy -> done
#SRA -> done
#Frequency Center |
c3b6b84ca76aae4bed10c80ea1ae5f193fe8ec30 | b68ea561a0d58bdcb56edb14eb09b7caa62a807d | /run_analysis.R | e12556572c80fa4db06e0de733b475dcb35bc131 | [] | no_license | vigmanu06/Getting-and-Cleaning-Data-Course-Project---Feb-2016 | 16f7aac6618f318ce0f3e23a12f5611a7b8de4ae | 30d8d02435fb79ea4d636f74c8a5a591f927de15 | refs/heads/master | 2021-01-10T02:00:26.959026 | 2016-02-09T17:37:15 | 2016-02-09T17:37:15 | 51,384,965 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,460 | r | run_analysis.R | # 1) Merges the training and the test sets to create one data set.
subject_test <- read.table('D:/370140/R Programming/Workspace 1_3rd July 2014/getdata-projectfiles-UCI HAR Dataset/UCI HAR Dataset/test/subject_test.txt')
subject_train <- read.table('D:/370140/R Programming/Workspace 1_3rd July 2014/getdata-projectfiles-UCI HAR Dataset/UCI HAR Dataset/train/subject_train.txt')
subject_merged <- rbind(subject_train, subject_test)
names(subject_merged) <- "subject"
subject_merged
X_test <- read.table('./getdata-projectfiles-UCI HAR Dataset/UCI HAR Dataset/test/X_test.txt')
X_train <- read.table('./getdata-projectfiles-UCI HAR Dataset/UCI HAR Dataset/train/X_train.txt')
X_merged <- rbind(X_train, X_test)
y_test <- read.table('./getdata-projectfiles-UCI HAR Dataset/UCI HAR Dataset/test/y_test.txt')
y_train <- read.table('./getdata-projectfiles-UCI HAR Dataset/UCI HAR Dataset/train/y_train.txt')
y_merged <- rbind(y_train, y_test)
# 2) Extracts only the measurements on the mean and standard deviation for each measurement.
features <- read.table('./getdata-projectfiles-UCI HAR Dataset/UCI HAR Dataset/features.txt', header=FALSE, col.names=c('id', 'name'))
feature_selected_columns <- grep('mean\\(\\)|std\\(\\)', features$name)
filtered_dataset <- X_merged[, feature_selected_columns]
names(filtered_dataset) <- features[features$id %in% feature_selected_columns, 2]
filtered_dataset
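# Quick sanity check of the pattern on a few made-up feature names (illustrative only;
# the escaped parentheses deliberately exclude names like "meanFreq()"):
# grep('mean\\(\\)|std\\(\\)',
#      c("tBodyAcc-mean()-X", "tBodyAcc-meanFreq()-X", "tBodyAcc-std()-Y"),
#      value = TRUE)
# returns the first and third names but not the meanFreq() one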
# 3) Uses descriptive activity names to name the activities in the data set
activity_labels <- read.table('./getdata-projectfiles-UCI HAR Dataset/UCI HAR Dataset/activity_labels.txt', header=FALSE, col.names=c('id', 'name'))
# 4) Appropriately labels the data set with descriptive activity names.
y_merged[, 1] = activity_labels[y_merged[, 1], 2]
names(y_merged) <- "activity"
# 5.1) Creates an intermediate dataset with all required measurements.
whole_dataset <- cbind(subject_merged, y_merged, filtered_dataset)
write.csv(whole_dataset, "./whole_dataset_with_descriptive_activity_names.csv")
# 5.2) Creates the final, independent tidy data set with the average of each variable for each activity and each subject.
measurements <- whole_dataset[, 3:dim(whole_dataset)[2]]
tidy_dataset <- aggregate(measurements, list(whole_dataset$subject, whole_dataset$activity), mean)
names(tidy_dataset)[1:2] <- c('subject', 'activity')
write.csv(tidy_dataset, "./final_tidy_dataset.csv")
write.table(tidy_dataset, "./final_tidy_dataset.txt")
|
fce55ba5eb284d3c0d0fab2364fa4a236495b29f | 1872d7bda348a3e3efd64bd83be76205c6eb242f | /R/email_validate.R | e703568b8c5a674641452a88a663ac0a917fe46b | [] | no_license | azradey/email-validation | 850f2ee05bbdc80b11e04f04aaaf65b479f536ac | 6f4d74221dfecbf23ea97e74de1d7412a6edc07d | refs/heads/master | 2020-03-19T06:18:36.256295 | 2018-06-04T10:31:02 | 2018-06-04T10:31:02 | 136,009,213 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 272 | r | email_validate.R | #' Validate email address
#'
#' Check whether a string matches the general pattern of an email address.
#'
#' @param e character string to check.
#' @return \code{TRUE} if \code{e} looks like a valid email address, \code{FALSE} otherwise.
#' @export
email_validate <- function(e) {
grepl("\\<[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,}\\>", as.character(e), ignore.case=TRUE)
}
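# Illustrative checks (not part of the package documentation):
# email_validate("user@example.com")   # TRUE
# email_validate("not-an-email")       # FALSE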
|
6302d8198dbf86dec6f64b8cc3a85a82161e3a48 | f71db8d5c0f397bce11675a90e506f42d6d73c35 | /Data Manipulations.R | a9585e30f128199d830802c3af41f29fd5d8a25d | [] | no_license | DenzMaou/Data-manipulation | c028e316c550114741d9c25eb1c03fca37062e75 | a54a16dbd576f9ee55e388aa33a157d4c372253e | refs/heads/master | 2022-10-09T22:01:38.764262 | 2020-06-12T07:31:35 | 2020-06-12T07:31:35 | 271,736,658 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,165 | r | Data Manipulations.R | getwd()
setwd("C:/Users/USER/Desktop")
loadedNamespaces()
update.packages()
Fish_survey<- read.csv ("Fish_survey.csv", header = TRUE)
str(Fish_survey)
head(Fish_survey)
install.packages("tibble")
library(tibble)
install.packages("tidyr")
library(tidyr)
install.packages("Rcpp")
library(Rcpp)
install.packages("dplyr")
library(dplyr)
install.packages("reshape2")
library(reshape2)
#1. Use gather and spread, reshape package, melt and dcast
Fish_survey<- read.csv("Fish_survey.csv", header = TRUE)
Fish_survey_long <- gather(Fish_survey, Species, Abundance, 4:6)
str(Fish_survey_long)
Fish_survey_long
Fish_survey_wide <- spread(Fish_survey_long, Species, Abundance)
head(Fish_survey_wide)
Fish_survey_long <- melt(Fish_survey,
                         id.vars = c("Site","Month","Transect"),
                         measure.vars = c("Trout","Perch","Stickleback"),
                         variable.name = "Species", value.name = "Abundance")
str(Fish_survey_long)
head(Fish_survey_long)
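# The same reshape with the newer tidyr verbs (requires tidyr >= 1.0; shown for reference only):
# Fish_survey_long2 <- pivot_longer(Fish_survey, cols = 4:6,
#                                   names_to = "Species", values_to = "Abundance")
# Fish_survey_wide2 <- pivot_wider(Fish_survey_long2,
#                                  names_from = Species, values_from = Abundance)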
#2. Use inner join
Water_data<- read.csv("Water_data.csv", header = TRUE,
stringsAsFactors=FALSE)
GPS_location<- read.csv("GPS_data.csv", header = TRUE,
                        stringsAsFactors=FALSE)
Fish_survey_long<- read.csv("Fish_survey_long.csv", header = TRUE,
                            stringsAsFactors=FALSE)
Fish_and_Water<-inner_join(Fish_survey_long, Water_data, by= c("Site", "Month"))
Fish_and_Water
str(Fish_and_Water)
head(Fish_and_Water)
Fish_survey_combined<- inner_join(Fish_and_Water, GPS_location, by = c("Site", "Transect"))
#3. Subset
Bird_Behaviour<- read.csv("Bird_Behaviour.csv", header = TRUE, stringsAsFactors=FALSE)
str(Bird_Behaviour)
Bird_Behaviour$log_FID<- log(Bird_Behaviour$FID)
Bird_Behaviour<- separate(Bird_Behaviour, Species, c("Genus", "Species"), sep="_", remove= TRUE)
str(Bird_Behaviour)
Bird_Behaviour[ ,1:4]
Bird_Behaviour[Bird_Behaviour$Sex == "male", ]
subset(Bird_Behaviour, FID < 10)
subset(Bird_Behaviour, FID < 10 & Sex == "male")
subset(Bird_Behaviour, FID < 10 | FID > 15, select = c(Ind, Sex, Year))
|
efcfa2b0bf7161b15260ef08ca08b7a6158b9b51 | af142fbc7a5c36557015e59f32c6fc1369e85b1d | /R/truncgridr.R | eed0978c9fe349974d194437163acdab6e65e18a | [] | no_license | TheoMichelot/localGibbs | fdbd5dbab06c98aa02b2979d4a06ee8fc6db8359 | e1dd16b41e2e6c03ac7a1aedc92cba47b923d2a0 | refs/heads/master | 2022-05-08T05:10:22.329007 | 2022-03-09T15:41:47 | 2022-03-09T15:41:47 | 135,612,759 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,335 | r | truncgridr.R |
#' Truncate the sample of avilability radii
#'
#' @param rdist Distribution of availability radius r. Can be
#' "exp", "gamma", or "weibull".
#' @param shape Shape parameter of r distribution (NA if r~exp)
#' @param rate Rate parameter of r distribution
#' @param ID Vector of track IDs
#' @param xy Matrix of observed locations
#' @param gridr Grid for Monte Carlo integration
#'
#' @importFrom truncdist qtrunc
truncgridr <- function(rdist = c("exp","gamma","weibull"), shape=NULL, rate, ID, xy, gridr)
{
# step lengths
steps <- sqrt(rowSums((xy[-1,] - xy[-nrow(xy),])^2))
# no contribution if first obs in track, or if missing data
allt <- which(ID[-length(ID)]==ID[-1] & !is.na(steps))
truncr <- matrix(NA,nrow=nrow(xy)-1,ncol=length(gridr))
arglist <- list(p = gridr,
spec = rdist,
b = Inf)
if(rdist=="gamma" | rdist=="weibull")
arglist$shape <- shape
if(rdist=="exp" | rdist=="gamma")
arglist$rate <- rate
if(rdist=="weibull")
arglist$scale <- rate
for(t in allt) {
arglist$a <- steps[t]/2
tryCatch(
truncr[t,] <- do.call(qtrunc, arglist),
error = function(e) {
print(arglist)
stop(e)
}
)
}
return(truncr)
}
|
f09fc4869d03be9e9f5e16ab5a9793bc0b7a1181 | 6873c70aba0cfe0b297481a8f0e604ba4fc38eec | /man/betaResults.Rd | a98b94d0def5473f61d7bc8a1d060680f37da5f4 | [] | no_license | KatjaHebestreit/BiSeq | 71c23711395b4c688a1375df4370a68f4306da66 | f33f5f1c17882d645a73cb9e087a6fb1556107be | refs/heads/master | 2021-04-18T15:40:02.218531 | 2020-03-28T00:27:12 | 2020-03-28T00:27:12 | 249,558,625 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 846 | rd | betaResults.Rd | \name{betaResults}
\alias{betaResults}
\docType{data}
\title{
The output of \code{betaRegression}
}
\description{
Please see the package vignette for description.
}
\usage{data(betaResults)}
\format{
A data frame with 4276 observations on the following 10 variables:
\describe{
\item{\code{chr}}{a factor with levels \code{chr1} \code{chr2}}
\item{\code{pos}}{a numeric vector}
\item{\code{p.val}}{a numeric vector}
\item{\code{meth.group1}}{a numeric vector}
\item{\code{meth.group2}}{a numeric vector}
\item{\code{meth.diff}}{a numeric vector}
\item{\code{estimate}}{a numeric vector}
\item{\code{std.error}}{a numeric vector}
\item{\code{pseudo.R.sqrt}}{a numeric vector}
\item{\code{cluster.id}}{a character vector}
}
}
\examples{
data(betaResults)
head(betaResults)
}
\keyword{datasets}
|
c683a0a8dd9f7d48cf517f159d82477a21377eb7 | a10b69e74bf440015bcc90d6b9fb62dbbdd819d3 | /cachematrix.R | a29366befa511682823915370188e368808a5af3 | [] | no_license | vrann/ProgrammingAssignment2 | 0c8b714cae7a9e48944bb9b83d0fc8c66b0ac22d | 1997fb5be2bdfcd6089095148337518fe130e3d2 | refs/heads/master | 2021-01-17T12:42:04.082300 | 2014-04-26T22:34:53 | 2014-04-26T22:34:53 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,442 | r | cachematrix.R | ## Set of Functions which allows to calculate inverse of the matrix.
## To optimize performance, the inverse is cached once it has been computed; subsequent calls return the cached copy.
## makeCacheMatrix creates a list of functions to store the original matrix and its inverse.
## cacheSolve calculates the inverse based on the list of functions provided by makeCacheMatrix.
## The matrix is expected to be invertible.
## Create a list of functions that allows to store matrix and its inverse
## set, get - stores and return the original matrix
## setsolve, getsolve - stores and returns inverse matrix
makeCacheMatrix <- function(x = matrix()) {
s <- NULL
set <- function(y) {
x <<- y
s <<- NULL
}
get <- function() x
setsolve <- function(solve) s <<- solve
getsolve <- function() s
list(set = set, get = get,
setsolve = setsolve,
getsolve = getsolve)
}
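## Example round trip (illustrative; run after sourcing this file):
## m <- makeCacheMatrix(matrix(c(2, 0, 0, 2), 2, 2))
## cacheSolve(m)   # computes, caches and returns the inverse, diag(0.5, 2)
## cacheSolve(m)   # second call returns the cached inverse without recomputing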
## Return a matrix that is the inverse of the matrix referenced by the list 'x'.
## The cached inverse matrix will be returned if a cache exists in the list
cacheSolve <- function(x, ...) {
## retrieve the cached value first
s <- x$getsolve()
if(!is.null(s)) {
        ## If the inverse matrix is in the cache, return it
return(s)
}
    ## get the original matrix from the list and invert it
data <- x$get()
s <- solve(data, ...)
## set inverse of the matrix to cache
x$setsolve(s)
s
} |
1fe7ac135120eca15caabf2bd72e073087c90afd | 6904f37c4bd3fc269a9f27d80474df5eef2ab1fd | /R/mCSEATest.R | ece4a8d49f4e11f017cd680f4a7a7b21e00d1a35 | [] | no_license | jordimartorell/mCSEA | cd8300650d1e96249440d8f924f727420cca67f4 | dd10e9abcdff124706e3cf4c0c234026a58eb04b | refs/heads/master | 2022-04-04T13:43:49.958826 | 2020-02-20T08:15:06 | 2020-02-20T08:15:06 | 110,973,043 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 8,297 | r | mCSEATest.R | #' mCSEA core analysis
#'
#' Perform a methylated CpG sites enrichment analysis in predefined genomic
#' regions
#'
#' @param rank A named numeric vector with the ranking statistic of each CpG
#' site
#' @param methData A data frame or a matrix containing Illumina's CpG probes in
#' rows and samples in columns. A SummarizedExperiment object can be used too
#' @param pheno A data frame or a matrix containing samples in rows and
#' covariates in columns. If NULL (default), pheno is extracted from the
#' SummarizedExperiment object
#' @param column The column name or number from pheno used to split the samples
#' into groups (first column is used by default)
#' @param regionsTypes A character or character vector indicating the predefined
#' regions to be analyzed. NULL to skip this step and use customAnnotation.
#' @param customAnnotation An optional list with the CpGs associated to each
#' feature (default = NULL)
#' @param minCpGs Minimum number of CpGs associated to a region. Regions below
#' this threshold are not tested
#' @param nproc Number of processors to use in GSEA step (default = 1)
#' @param nperm (deprecated) Number of permutations to do in GSEA step in the
#' previous implementation. Now, this parameter is ignored
#' @param platform Platform used to get the methylation data (450k or EPIC)
#'
#' @return A list with the results of each of the analyzed regions. For each
#' region type, a data frame with the results and a list with the probes
#' associated to each region are generated. In addition, this list also contains
#' the input methData, pheno and platform objects
#'
#' @author Jordi Martorell Marugán, \email{jordi.martorell@@genyo.es}
#'
#' @seealso \code{\link{rankProbes}}, \code{\link{mCSEAPlot}},
#' \code{\link{mCSEAPlotGSEA}}
#'
#' @references Subramanian, A. et al (2005). \emph{Gene set enrichment analysis:
#' A knowledge-based approach for interpreting genome-wide expression profiles}
#' . PNAS 102, 15545-15550.
#'
#' @examples
#' \dontrun{
#' library(mCSEAdata)
#' data(mcseadata)
#' myRank <- rankProbes(betaTest, phenoTest, refGroup = "Control")
#' set.seed(123)
#' myResults <- mCSEATest(myRank, betaTest, phenoTest,
#' regionsTypes = "promoters", platform = "EPIC")
#' }
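#' # Sketch of the customAnnotation format (probe IDs below are illustrative;
#' # each list element needs at least minCpGs probes to be tested):
#' \dontrun{
#' myAnnotation <- list(
#'     regionA = c("cg13869341", "cg14008030", "cg12045430", "cg20826792",
#'                 "cg00381604"),
#'     regionB = c("cg20253340", "cg21870274", "cg03130891", "cg24335620",
#'                 "cg16162899"))
#' customResults <- mCSEATest(myRank, betaTest, phenoTest, regionsTypes = NULL,
#'                     customAnnotation = myAnnotation, platform = "EPIC")
#' }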
#' data(precomputedmCSEA)
#' head(myResults[["promoters"]])
#' head(myResults[["promoters_association"]])
#' @export
mCSEATest <- function(rank, methData, pheno = NULL, column = 1,
regionsTypes = c("promoters", "genes", "CGI"),
customAnnotation = NULL, minCpGs = 5, nproc = 1,
nperm = NULL, platform = "450k")
{
output <- list()
# Check input objects
if (!any(class(rank) == "numeric" | class(rank) == "integer")){
stop("rank must be a numeric vector")
}
    if (is.null(names(rank))){
        stop("rank must be a named vector")
    }
if (!any(class(methData) == "data.frame" | class(methData) == "matrix" |
class(methData) == "SummarizedExperiment" |
class(methData) == "RangedSummarizedExperiment")){
stop("methData must be a data frame, a matrix or a SummarizedExperiment
object")
}
if (!any(class(pheno) == "data.frame" | class(pheno) == "matrix" |
is.null(pheno))){
stop("pheno must be a data frame, a matrix or NULL")
}
if (!identical(colnames(methData), rownames(pheno))) {
if (length(setdiff(colnames(methData), rownames(pheno))) == 0 &&
length(setdiff( rownames(pheno), colnames(methData))) == 0) {
pheno <- pheno[colnames(methData),]
}
else {
stop("Sample labels of methData and pheno must be the same")
}
}
    if (!(is.character(column) |
            is.numeric(column))){
        stop("column must be a character or numeric object")
    }
if (class(regionsTypes) != "character" & !is.null(regionsTypes)){
stop("regionsTypes must be a character")
}
if (is.null(regionsTypes) & is.null(customAnnotation)){
stop("Either regionsTypes or customAnnotations must be specified")
}
if (!any(class(customAnnotation) == "list" | is.null(customAnnotation))){
stop("customAnnotation must be a list or NULL")
}
if (class(platform) != "character"){
stop("platform must be a character object")
}
# Get data from SummarizedExperiment objects
if (any(class(methData) == "SummarizedExperiment" |
class(methData) == "RangedSummarizedExperiment")){
if (is.null(pheno)){
pheno <- SummarizedExperiment::colData(methData)
}
methData <- SummarizedExperiment::assay(methData)
}
else {
if (is.null(pheno)) {
stop("If methData is not a SummarizedExperiment, you must provide
pheno parameter")
}
}
platform <- match.arg(platform, c("450k", "EPIC"))
if (platform == "450k") {
assocPromoters <- mCSEAdata::assocPromoters450k
assocGenes <- mCSEAdata::assocGenes450k
assocCGI <- mCSEAdata::assocCGI450k
}
else {
assocPromoters <- mCSEAdata::assocPromotersEPIC
assocGenes <- mCSEAdata::assocGenesEPIC
assocCGI <- mCSEAdata::assocCGIEPIC
}
if (!is.null(regionsTypes)){
regionsTypes <- match.arg(regionsTypes,
choices=c("promoters","genes","CGI"),
several.ok=TRUE)
for (region in regionsTypes) {
if (region == "promoters") {
resGSEA <- .performGSEA(region, rank, platform, assocPromoters,
minCpGs, nproc)
output[["promoters"]] <- resGSEA[[1]]
output[["promoters_association"]] <- resGSEA[[2]]
}
else if (region == "genes") {
resGSEA <- .performGSEA(region, rank, platform, assocGenes,
minCpGs, nproc)
output[["genes"]] <- resGSEA[[1]]
output[["genes_association"]] <- resGSEA[[2]]
}
else if (region == "CGI") {
resGSEA <- .performGSEA(region, rank, platform, assocCGI,
minCpGs, nproc)
output[["CGI"]] <- resGSEA[[1]]
output[["CGI_association"]] <- resGSEA[[2]]
}
}
}
if (!is.null(customAnnotation)) {
resGSEA <- .performGSEA("custom", rank, platform, customAnnotation,
minCpGs, nproc)
output[["custom"]] <- resGSEA[[1]]
output[["custom_association"]] <- resGSEA[[2]]
}
output[["methData"]] <- methData
output[["pheno"]] <- data.frame(Group = factor(pheno[,column]),
row.names = rownames(pheno))
output[["platform"]] <- platform
return(output)
}
.performGSEA <- function(region, rank, platform, assoc, minCpGs, nproc) {
if (region != "custom"){
message(paste("Associating CpG sites to", region))
if (platform == "450k" & length(rank) > 242500 |
platform == "EPIC" & length(rank) > 433000){
dataDiff <- setdiff(unlist(assoc), names(rank))
genes <- lapply(assoc,
function(x) {x[!x %in% dataDiff]})
}
else {
genes <- lapply(assoc,
function(x) {x[x %in% names(rank)]})
}
}
else {
genes <- assoc
}
message(paste("Analysing", region))
fgseaRes <- fgsea::fgseaMultilevel(genes, rank, minSize=minCpGs,
nproc=nproc)
fgseaDataFrame <- as.data.frame(fgseaRes)
rownames(fgseaDataFrame) <- fgseaDataFrame[,1]
fgseaDataFrame <- fgseaDataFrame[,-1]
message(paste(sum(fgseaDataFrame[["padj"]] < 0.05),
"DMRs found (padj < 0.05)"))
fgseaSorted <- fgseaDataFrame[order(fgseaDataFrame[["NES"]]),]
fgseaSorted[,7] <- vapply(fgseaSorted[,7],
function(x) {paste(unlist(x),
collapse=", ")},
"")
return(list(fgseaSorted, genes))
}
|
deab61cdbb46ba563e553469a180b3ac501f5fb3 | 0124b6a02692905922a3a8992cb123bb8e039e6b | /evaluateTimings_safetosource.R | 912790c70b8a654b96ed89100226e355dc1888f9 | [
"MIT"
] | permissive | pedlefsen/hiv-founder-id | ebcadf9563549fb9932237e148a0c93609d856f3 | 50ba8d2757cb3be15357b4bdeaea3fe0d2680eea | refs/heads/master | 2020-12-24T05:58:03.939103 | 2019-07-07T21:36:09 | 2019-07-07T21:36:09 | 42,635,031 | 3 | 3 | MIT | 2019-02-08T12:36:25 | 2015-09-17T04:32:20 | R | UTF-8 | R | false | false | 146,473 | r | evaluateTimings_safetosource.R | ## First do all the stuff in README.postprocessing.txt.
library( "parallel" ); # for mclapply
library( "glmnet" ); # for cv.glmnet
library( "glmnetUtils" ); # for formula interface (cv.glmnet.formula): see https://github.com/Hong-Revo/glmnetUtils
source( "readIdentifyFounders_safetosource.R" );
source( "getDaysSinceInfection_safetosource.R" );
source( "getArtificialBounds_safetosource.R" );
source( "getResultsByRegionAndTime_safetosource.R" );
source( "writeResultsTables_safetosource.R" );
source( "summarizeCovariatesOnePerParticipant_safetosource.R" );
## These should be set externally.
# GOLD.STANDARD.DIR <- "/fh/fast/edlefsen_p/bakeoff/gold_standard/";
#
# RESULTS.DIR <- "/fast/bakeoff_merged_analysis_sequences_filteredPre2017/results/";
# RESULTS.DIRNAME <- "raw_fixed";
#
# HELPFUL.ADDITIONAL.COLS.WITH.INTERACTIONS <- c(); #c( "v3_not_nflg", "X6m.not.1m" );
#
# ## Run it in all four configurations of INCLUDE.INTERCEPT and HELPFUL.ADDITIONAL.COLS to match what was done for the paper:
#
# #HELPFUL.ADDITIONAL.COLS <- c( "lPVL" );
# HELPFUL.ADDITIONAL.COLS <- c();
#
# INCLUDE.INTERCEPT <- FALSE;
# #INCLUDE.INTERCEPT <- TRUE;
THE.RESULTS.DIR <- RESULTS.DIR; # to avoid "promise already under evaluation" errors
########################################################
## Other fns
evaluateTimings.compute.config.string <- function (
include.intercept = INCLUDE.INTERCEPT,
include.all.vars.in.lasso = TRUE,
helpful.additional.cols = c( "lPVL" ),
helpful.additional.cols.with.interactions = c( "v3_not_nflg", "X6m.not.1m" ),
use.gold.is.multiple = FALSE,
mutation.rate.calibration = FALSE
) {
config.string <- "";
if( include.intercept ) {
config.string <- "include.intercept";
stopifnot( !mutation.rate.calibration );
} else if( mutation.rate.calibration ) {
config.string <- "mutation.rate.calibration";
}
if( length( helpful.additional.cols ) > 0 ) {
.new.config.string.part <-
paste( "covs.", paste( helpful.additional.cols, collapse = "." ), sep = "" );
if( config.string == "" ) {
config.string <- .new.config.string.part;
} else {
config.string <- paste( config.string, .new.config.string.part, sep = "_" );
}
}
if( length( helpful.additional.cols.with.interactions ) > 0 ) {
.new.config.string.part <-
paste( "interactingCovs.", paste( helpful.additional.cols.with.interactions, collapse = "." ), sep = "" );
if( config.string == "" ) {
config.string <- .new.config.string.part;
} else {
config.string <- paste( config.string, .new.config.string.part, sep = "_" );
}
}
if( !include.all.vars.in.lasso ) {
if( config.string == "" ) {
config.string <- "lassoFromNonlasso";
} else {
config.string <- paste( config.string, "lassoFromNonlasso", sep = "_" );
}
}
if( use.gold.is.multiple ) {
if( config.string == "" ) {
config.string <- "with.gold.is.multiple";
} else {
config.string <- paste( config.string, "with.gold.is.multiple", sep = "_" );
}
}
return( config.string );
} # evaluateTimings.compute.config.string (..)
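## Illustrative call with the defaults above (lPVL as the extra covariate; not run):
## evaluateTimings.compute.config.string( include.intercept = TRUE )
## should yield "include.intercept_covs.lPVL_interactingCovs.v3_not_nflg.X6m.not.1m"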
########################################################
#' Evaluate timings estimates and produce results tables.
#'
#' This function runs the BakeOff results analysis for the timings results.
#'
#' The "center of bounds" (COB) approach is the way that we do it at the
#' VTN, and the way it was done in RV144, etc: use the midpoint
#' between the bounds on the actual infection time computed from the
#' dates and results of the HIV positivity tests (antibody or PCR).
#' The typical approach is to perform antibody testing every X days
#' (historically this is 6 months in most HIV vaccine trials, except
#' during the vaccination phase there are more frequent visits and on
#' every visit HIV testing is conducted). The (fake) bounds used here
#' are calculated in the createArtificialBoundsOnInfectionDate.R file.
#' The actual bounds would be too tight, since the participants were
#' detected HIV+ earlier in these people than what we expect to see in
#' a trial in which testing is conducted every X days. For the center
#' of bounds approach we load the bounds files in subdirs of the
#' "bounds" subdirectory eg at
#' /fh/fast/edlefsen_p/bakeoff/analysis_sequences/raw_edited_20160216/bounds/nflg/1m/.
#' This will also look for plasma viral load measurements in files
#' called
#' /fh/fast/edlefsen_p/bakeoff/gold_standard/caprisa_002/caprisa_002_viralloads.csv
#' and
#' /fh/fast/edlefsen_p/bakeoff/gold_standard/rv217/rv217_viralloads.csv. These
#' files have three important columns: ptid,viralload,timepoint. For
#' each ptid the viral loads at timepoints 1,2,3 correspond to the
#' gold-standard, 1m, and 6m time points. Viral loads are not logged
#' in the input file.
#'
#' These files have names beginning with "artificialBounds_" and
#' ending with ".tab".
#'
#' @param use.bounds compute results for the COB approach, and also return evaluations of bounded versions of the other results.
#' @param use.infer compute results for the PREAST approach.
#' @param use.anchre compute results for the anchre approach.
#' @param use.glm.validate evaluate predicted values from leave-one-out cross-validation, using a model with one predictor, maybe with helpful.additional.cols or with the bounds.
#' @param use.step.validate evaluate predicted values from leave-one-out cross-validation, using a model with one predictor and a step-selected subset of other predictors, maybe with the bounds.
#' @param use.lasso.validate evaluate predicted values from leave-one-out cross-validation, using a model with one predictor and a lasso-selected subset of other predictors, maybe with the bounds.
#' @param use.gold.is.multiple include gold.is.multiple and interactions between gold.is.multiple and helpful.additional.cols (but not with helpful.additional.cols.with.interactions).
#' @param include.intercept if TRUE, include an intercept term, and for time-pooled analyses also include a term to shift the intercept for 6m.not.1m samples. Note that this analysis is affected by the low variance in the true dates in the training data, which for 1m samples has SD around 5 and for 6m samples has an SD around 10, so even the "none" results do pretty well, and it is difficult to improve the estimators beyond this.
#' @param mutation.rate.calibration if TRUE, do not include anything that is not in interaction with (whatever the variable is), and no intercept (note this requires include.intercept == FALSE).
#' @param include.all.vars.in.lasso if FALSE, include only the "helpful.additional.cols" and "helpful.additional.cols.with.interactions" and the estimators and interactors among these vars that are tested via the non-lasso anaylsis (if use.glm.validate = TRUE). If include.all.vars.in.lasso = TRUE (the default), then as many additional covariates as possible will be included by parsing output from the identify-founders script (but note that apart from "lPVL" - log plasma viral load - these are mostly sequence statistics, eg. PFitter-computed maximum and mean Hamming distances among sequences.
#' @param helpful.additional.cols extra cols to be included in the glm and lasso: Note that interactions will be added only with members of the other set (helpful.additional.cols.with.interactions), not, with each other in this set.
#' @param helpful.additional.cols.with.interactions extra cols to be included in the glm both as-is and interacting with each other and with members of the helpful.additional.cols set.
#' @param results.dirname the subdirectory of RESULTS.DIR
#' @param force.recomputation if FALSE (default) and if there is a saved version called timings.results.by.region.and.time.Rda (under bakeoff_analysis_results/results.dirname), then that file will be loaded; otherwise the results will be recomputed and saved in that location.
#' @param partition.bootstrap.seed the random seed to use when bootstrapping samples by selecting one partition number per ptid, repeatedly; we do it this way because there are an unequal number of partitions, depending on sampling depth.
#' @param partition.bootstrap.samples the number of bootstrap replicates to conduct; the idea is to get an estimate of the variation in estimates and results (errors) across these samples.
#' @param partition.bootstrap.num.cores the number of cores on which to run the bootstrap replicates (defaults to all of the cores returned by parallel::detectCores()).
#' @param regions the genomic regions to evaluate, e.g. "nflg" and "v3".
#' @param times the sampling time points to evaluate, e.g. "1w", "1m", and "6m".
#' @return the filename of the Rda output. If you load( filename ), it will add "results.by.region.and.time" to your environment.
#' @export
evaluateTimings <- function (
use.bounds = TRUE,
use.infer = TRUE,
use.anchre = FALSE,
use.glm.validate = TRUE,
use.step.validate = FALSE,
use.lasso.validate = FALSE,
use.gold.is.multiple = FALSE,
include.intercept = INCLUDE.INTERCEPT,
mutation.rate.calibration = FALSE,
include.all.vars.in.lasso = TRUE,
helpful.additional.cols = HELPFUL.ADDITIONAL.COLS,
helpful.additional.cols.with.interactions = HELPFUL.ADDITIONAL.COLS.WITH.INTERACTIONS,
RESULTS.DIR = THE.RESULTS.DIR,
results.dirname = RESULTS.DIRNAME,
force.recomputation = TRUE,
partition.bootstrap.seed = 98103,
partition.bootstrap.samples = 100,
partition.bootstrap.num.cores = detectCores(),
regions = c( "nflg", "v3" ),
times = c( "1w", "1m", "6m" )
#regions = c( "nflg", "v3", "rv217_v3" ),
#times = c( "1m", "6m", "1m6m" )
)
{
## For debugging: start from .results.for.region in evaluate.specific.timings.model.formula from ~/src/from-git/hiv-founder-id/getFilteredResultsTables_safetosource.R
# use.bounds = TRUE; use.infer = TRUE; use.anchre = FALSE; use.glm.validate = TRUE; use.step.validate = FALSE; use.lasso.validate = FALSE; use.gold.is.multiple = FALSE; include.intercept = INCLUDE.INTERCEPT; mutation.rate.calibration = FALSE; include.all.vars.in.lasso = TRUE; helpful.additional.cols = HELPFUL.ADDITIONAL.COLS; helpful.additional.cols.with.interactions = HELPFUL.ADDITIONAL.COLS.WITH.INTERACTIONS; results.dirname = RESULTS.DIRNAME; force.recomputation = TRUE; partition.bootstrap.seed = 98103; partition.bootstrap.samples = 100; partition.bootstrap.num.cores = detectCores(); regions = c( "nflg", "v3" ); times = c( "1m", "6m" )
# results.per.person = .results.for.region[[ the.time ]][["results.per.person" ]]; days.since.infection = .results.for.region[[ the.time ]][["days.since.infection" ]]; results.covars.per.person.with.extra.cols = .results.for.region[[ the.time ]][["results.covars.per.person.with.extra.cols" ]]; the.artificial.bounds = .results.for.region[[ the.time ]][["bounds" ]]
# use.bounds = TRUE; use.infer = TRUE; use.anchre = FALSE; use.glm.validate = TRUE; use.step.validate = FALSE; use.lasso.validate = FALSE; results.dirname = "raw_edited_20160216"; force.recomputation = FALSE; partition.bootstrap.seed = 98103; partition.bootstrap.samples = 100; partition.bootstrap.num.cores = detectCores();
if( mutation.rate.calibration ) {
stopifnot( !( include.intercept ) );
}
config.string <- evaluateTimings.compute.config.string(
include.intercept = include.intercept,
include.all.vars.in.lasso = include.all.vars.in.lasso,
helpful.additional.cols = helpful.additional.cols,
helpful.additional.cols.with.interactions = helpful.additional.cols.with.interactions,
use.gold.is.multiple = use.gold.is.multiple,
mutation.rate.calibration = mutation.rate.calibration
);
results.by.region.and.time.Rda.filename <-
paste( RESULTS.DIR, results.dirname, "/Timings.results.by.region.and.time.", config.string, ".Rda", sep = "" );
if( config.string == "" ) {
evaluateTimings.tab.file.suffix <- "_evaluateTimings.tab";
} else {
evaluateTimings.tab.file.suffix <- paste( "_evaluateTimings_", config.string, ".tab", sep = "" );
}
MINIMUM.DF <- 2; # how much more should nrow( .lasso.mat ) be than ncol( .lasso.mat ) at minimum?
if( include.intercept ) {
MINIMUM.DF <- MINIMUM.DF + 1;
}
MINIMUM.CORRELATION.WITH.OUTCOME <- 0.1;
rv217.gold.standard.infection.dates.in <- read.csv( paste( GOLD.STANDARD.DIR, "rv217/rv217_gold_standard_timings.csv", sep = "" ) );
rv217.gold.standard.infection.dates <- as.Date( as.character( rv217.gold.standard.infection.dates.in[,2] ), "%m/%d/%y" );
names( rv217.gold.standard.infection.dates ) <- as.character( rv217.gold.standard.infection.dates.in[,1] );
caprisa002.gold.standard.infection.dates.in <- read.csv( paste( GOLD.STANDARD.DIR, "caprisa_002/caprisa_002_gold_standard_timings.csv", sep = "" ) );
caprisa002.gold.standard.infection.dates <- as.Date( as.character( caprisa002.gold.standard.infection.dates.in[,2] ), "%Y/%m/%d" );
names( caprisa002.gold.standard.infection.dates ) <- as.character( caprisa002.gold.standard.infection.dates.in[,1] );
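## The two gold-standard files store dates in different formats, hence the
## different format strings above. A minimal illustration (the dates here are
## invented, not taken from the gold-standard files):

```r
rv217.example <- as.Date( "3/14/16", "%m/%d/%y" );      # RV217 style: month/day/two-digit year
caprisa.example <- as.Date( "2005/08/21", "%Y/%m/%d" ); # CAPRISA 002 style: year/month/day
format( rv217.example )   # "2016-03-14" (%y maps 00-68 to 20xx)
format( caprisa.example ) # "2005-08-21"
```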
if( use.gold.is.multiple ) {
rv217.gold.standards.in <- read.csv( paste( GOLD.STANDARD.DIR, "rv217/RV217_gold_standards.csv", sep = "" ) );
rv217.gold.is.multiple <- rv217.gold.standards.in[ , "gold.is.multiple" ];
names( rv217.gold.is.multiple ) <- rv217.gold.standards.in[ , "ptid" ];
caprisa002.gold.standards.in <- read.csv( paste( GOLD.STANDARD.DIR, "caprisa_002/caprisa_002_gold_standards.csv", sep = "" ) );
caprisa002.gold.is.multiple <- caprisa002.gold.standards.in[ , "gold.is.multiple" ];
names( caprisa002.gold.is.multiple ) <- caprisa002.gold.standards.in[ , "ptid" ];
}
rv217.pvl.in <- read.csv( paste( GOLD.STANDARD.DIR, "rv217/rv217_viralloads.csv", sep = "" ) );
caprisa002.pvl.in <- read.csv( paste( GOLD.STANDARD.DIR, "caprisa_002/caprisa_002_viralloads.csv", sep = "" ) );
rmse <- function( x, na.rm = FALSE ) {
if( na.rm ) {
x <- x[ !is.na( x ) ];
}
    return( sqrt( mean( x^2 ) ) );
}
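## rmse is the root-mean-square of its input; the sign of each element is
## irrelevant. A quick self-contained check with made-up residuals:

```r
rmse.demo <- function( x, na.rm = FALSE ) {
    if( na.rm ) {
        x <- x[ !is.na( x ) ];
    }
    return( sqrt( mean( x^2 ) ) );
}
rmse.demo( c( 3, -4 ) )                    # sqrt( ( 9 + 16 ) / 2 ) = 3.5355...
rmse.demo( c( 3, -4, NA ), na.rm = TRUE )  # same value; the NA is dropped first
```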
compute.results.per.person <- function ( results, weights ) {
apply( results, 2, function ( .column ) {
.rv <-
sapply( unique( rownames( results ) ), function( .ppt ) {
print( .ppt );
.ppt.cells <- as.numeric( as.character( .column[ rownames( results ) == .ppt ] ) );
if( all( is.na( .ppt.cells ) ) ) {
return( NA );
}
.ppt.weights <- apply( weights[ rownames( results ) == .ppt, , drop = FALSE ], 1, mean );
.ppt.weights <- .ppt.weights / sum( .ppt.weights, na.rm = TRUE );
sum( .ppt.cells * .ppt.weights, na.rm = TRUE );
} );
names( .rv ) <- unique( rownames( results ) );
return( .rv );
} );
} # compute.results.per.person
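## compute.results.per.person collapses the (possibly repeated) rows for each
## participant into a single weighted value per column: each row's weight is its
## mean across the weights matrix, renormalized within the participant. A toy
## sketch of that collapsing logic (one column, invented values, print() omitted):

```r
results <- matrix( c( 10, 20, 30 ), ncol = 1,
                   dimnames = list( c( "A", "A", "B" ), "est" ) );
weights <- matrix( c( 1, 3, 1 ), ncol = 1,
                   dimnames = list( c( "A", "A", "B" ), "w" ) );
per.person <- sapply( unique( rownames( results ) ), function( .ppt ) {
    .cells <- results[ rownames( results ) == .ppt, 1 ];
    .w <- apply( weights[ rownames( results ) == .ppt, , drop = FALSE ], 1, mean );
    sum( .cells * ( .w / sum( .w ) ) )
} );
per.person # A = 0.25*10 + 0.75*20 = 17.5; B = 30
```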
get.infer.results.columns <-
function ( the.region, the.time, the.ptids, partition.size = NA ) {
## Add to results: "infer" results.
if( is.na( partition.size ) ) {
        infer.results.directories <- dir( paste( RESULTS.DIR, results.dirname, "/", the.region, "/", the.time, sep = "" ), "founder-inference-bakeoff_", full.names = TRUE );
} else {
#infer.results.directories <- dir( paste( RESULTS.DIR, results.dirname, "/", the.region, "/", the.time, "/partitions", sep = "" ), "founder-inference-bakeoff_", full.name = TRUE );
stop( "TODO: When there are some Infer results run on partitions, evaluate the results here" );
}
# Special: for v3, separate out the caprisa seqs from the rv217 seqs
if( the.region == "v3" ) {
infer.results.directories <- grep( "_100\\d\\d\\d$", infer.results.directories, value = TRUE );
} else if( the.region == "rv217_v3" ) {
infer.results.directories <- grep( "_100\\d\\d\\d$", infer.results.directories, value = TRUE, invert = TRUE );
}
# For now only use the sampledwidth ones, so exclude the other artificialBounds results.
infer.results.directories.sampledwidth <-
grep( "sampledwidth", infer.results.directories, value = TRUE );
infer.results.directories.unbounded <-
grep( "artificialBounds", infer.results.directories, value = TRUE, invert = TRUE );
infer.results.directories <- c( infer.results.directories.unbounded, infer.results.directories.sampledwidth );
    infer.results.files <- sapply( infer.results.directories, dir, "toi.csv", full.names = TRUE );
if( length( infer.results.files ) == 0 ) {
return( NULL );
}
infer.results.list <-
lapply( unlist( infer.results.files ), function( .file ) {
.rv <- as.matrix( read.csv( .file, header = FALSE ), nrow = 1 );
stopifnot( ncol( .rv ) == 3 );
return( .rv );
} );
names( infer.results.list ) <- unlist( infer.results.files );
if( length( infer.results.list ) == 0 ) {
return( NULL );
} else {
## TODO: REMOVE
print( infer.results.list );
}
infer.results <- do.call( rbind, infer.results.list );
colnames( infer.results ) <- c( "Infer", "Infer.CI.high", "Infer.CI.low" );
## reorder them
infer.results <- infer.results[ , c( "Infer", "Infer.CI.low", "Infer.CI.high" ), drop = FALSE ];
infer.results.bounds.ptid <- gsub( "^.+_(\\d+)/.+$", "\\1", names( infer.results.list ) );
rownames( infer.results ) <- infer.results.bounds.ptid;
## Separate it into separate tables by bounds type.
infer.results.bounds.type <-
gsub( "^.+_artificialBounds_(.+)_\\d+/.+$", "\\1", names( infer.results.list ) );
infer.results.bounds.type[ grep( "csv$", infer.results.bounds.type ) ] <- NA;
infer.results.nobounds.table <- infer.results[ is.na( infer.results.bounds.type ), , drop = FALSE ];
infer.results.bounds.types <- setdiff( unique( infer.results.bounds.type ), NA );
## Exclude old/outdated bounds
infer.results.bounds.types <-
grep( "(one|six)month", infer.results.bounds.types, invert = TRUE, value = TRUE );
## TODO: Handle shifted results.
infer.results.bounds.types <-
grep( "shifted", infer.results.bounds.types, invert = TRUE, value = TRUE );
infer.results.bounds.tables <- lapply( infer.results.bounds.types, function ( .bounds.type ) {
return( infer.results[ !is.na( infer.results.bounds.type ) & ( infer.results.bounds.type == .bounds.type ), , drop = FALSE ] );
} );
names( infer.results.bounds.tables ) <- infer.results.bounds.types;
# Add just the estimates from infer (not the CIs) to the results table.
if( nrow( infer.results.nobounds.table ) > 0 ) {
new.results.columns <-
matrix( NA, nrow = length( the.ptids ), ncol = 1 + length( infer.results.bounds.tables ) );
rownames( new.results.columns ) <- the.ptids;
colnames( new.results.columns ) <- c( "Infer.time.est", infer.results.bounds.types );
.shared.ptids.nobounds <-
intersect( the.ptids, rownames( infer.results.nobounds.table ) );
.result.ignored <- sapply( .shared.ptids.nobounds, function( .ptid ) {
.infer.subtable <-
infer.results.nobounds.table[ rownames( infer.results.nobounds.table ) == .ptid, 1, drop = FALSE ];
#stopifnot( sum( rownames( infer.results.nobounds.table ) == .ptid ) == nrow( .infer.subtable ) );
if( sum( rownames( infer.results.nobounds.table ) == .ptid ) != nrow( .infer.subtable ) ) {
## TODO: REMOVE
cat( paste( "There are missing unbounded Infer results for ptid", .ptid, "in region", the.region, "at time", the.time ), fill = TRUE );
}
# But there might be fewer of these than there are rows in the results.table (if eg there are 3 input fasta files and infer results for only 2 of them).
#stopifnot( nrow( .infer.subtable ) <= sum( rownames( new.results.columns ) == .ptid ) );
if( nrow( .infer.subtable ) < sum( rownames( new.results.columns ) == .ptid ) ) {
.infer.subtable <- rbind( .infer.subtable, matrix( NA, nrow = ( sum( rownames( new.results.columns ) == .ptid ) - nrow( .infer.subtable ) ), ncol = ncol( .infer.subtable ) ) );
}
new.results.columns[ rownames( new.results.columns ) == .ptid, 1 ] <<-
.infer.subtable;
return( NULL );
} );
} else {
new.results.columns <-
matrix( NA, nrow = length( the.ptids ), ncol = length( infer.results.bounds.tables ) );
rownames( new.results.columns ) <- the.ptids;
colnames( new.results.columns ) <- infer.results.bounds.types;
.shared.ptids.nobounds <- c();
}
.result.ignored <- sapply( names( infer.results.bounds.tables ), function ( .bounds.type ) {
print( .bounds.type );
.shared.ptids <-
intersect( rownames( new.results.columns ), rownames( infer.results.bounds.tables[[ .bounds.type ]] ) );
..result.ignored <- sapply( .shared.ptids, function( .ptid ) {
print( .ptid );
.infer.subtable <-
infer.results.bounds.tables[[ .bounds.type ]][ rownames( infer.results.bounds.tables[[ .bounds.type ]] ) == .ptid, 1, drop = FALSE ];
#stopifnot( sum( rownames( infer.results.bounds.tables[[ .bounds.type ]] ) == .ptid ) == nrow( .infer.subtable ) );
if( sum( rownames( infer.results.bounds.tables[[ .bounds.type ]] ) == .ptid ) != nrow( .infer.subtable ) ) {
## TODO: REMOVE
cat( paste( "There are missing bounded Infer results for ptid", .ptid, "in region", the.region, "at time", the.time, "using bounds", .bounds.type ), fill = TRUE );
}
# But there might be fewer of these than there are rows in the results.table (if eg there are 3 input fasta files and infer results for only 2 of them).
stopifnot( nrow( .infer.subtable ) <= sum( rownames( new.results.columns ) == .ptid ) );
if( nrow( .infer.subtable ) < sum( rownames( new.results.columns ) == .ptid ) ) {
.infer.subtable <- rbind( .infer.subtable, matrix( NA, nrow = ( sum( rownames( new.results.columns ) == .ptid ) - nrow( .infer.subtable ) ), ncol = ncol( .infer.subtable ) ) );
}
new.results.columns[ rownames( new.results.columns ) == .ptid, .bounds.type ] <<-
.infer.subtable;
return( NULL );
} );
return( NULL );
} );
if( nrow( infer.results.nobounds.table ) > 0 ) {
colnames( new.results.columns ) <- c( "Infer.time.est", paste( "Infer", gsub( "_", ".", infer.results.bounds.types ), "time.est", sep = "." ) );
} else {
colnames( new.results.columns ) <- paste( "Infer", gsub( "_", ".", infer.results.bounds.types ), "time.est", sep = "." );
}
return( new.results.columns );
} # get.infer.results.columns (..)
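## The ptid and the bounds type are both recovered from the Infer output paths
## by the two gsub() patterns above; unmatched paths come back unchanged and are
## then flagged as NA via the "csv$" grep. The directory names below are
## invented for illustration:

```r
paths <- c(
  "res/founder-inference-bakeoff_100123/toi.csv",
  "res/founder-inference-bakeoff_artificialBounds_sampledwidth_uniform_mtn003_100123/toi.csv"
);
ptids <- gsub( "^.+_(\\d+)/.+$", "\\1", paths );
# both -> "100123" (greedy .+ stops at the last "_" before the digits)
bounds.type <- gsub( "^.+_artificialBounds_(.+)_\\d+/.+$", "\\1", paths );
# the unbounded path does not match, so it is returned unchanged (still ends in "csv")
bounds.type[ grep( "csv$", bounds.type ) ] <- NA;
bounds.type # NA, "sampledwidth_uniform_mtn003"
```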
get.anchre.results.columns <- function ( the.region, the.time, sample.dates.in, partition.size = NA ) {
        stopifnot( is.na( partition.size ) ); # TODO: Implement support for anchre on partitions.
        stopifnot( the.time == "1m6m" ); # There are only anchre results for longitudinal data.
## Add to results: "anchre" results. (only at 1m6m)
        anchre.results.directories <- dir( paste( RESULTS.DIR, results.dirname, "/", the.region, "/1m6m", sep = "" ), "anchre", full.names = TRUE );
if( length( anchre.results.directories ) == 0 ) {
return( NULL );
}
anchre.results.files <-
            sapply( anchre.results.directories, dir, "mrca.csv", full.names = TRUE );
anchre.results <- do.call( rbind,
lapply( unlist( anchre.results.files ), function( .file ) {
stopifnot( file.exists( .file ) );
.file.short <-
gsub( "^.*?\\/?([^\\/]+?)$", "\\1", .file, perl = TRUE );
.file.short.nosuffix <-
gsub( "^([^\\.]+)(\\..+)?$", "\\1", .file.short, perl = TRUE );
.file.converted <-
paste( RESULTS.DIR, results.dirname, "/", the.region, "/1m6m/", .file.short.nosuffix, ".anc2tsv.tab", sep = "" );
# convert it.
system( paste( "./anc2tsv.sh", .file, ">", .file.converted ) );
stopifnot( file.exists( .file.converted ) );
.rv <- as.matrix( read.delim( .file.converted, header = TRUE, sep = "\t" ), nrow = 1 );
## No negative dates! Just call it NA.
.rv <- apply( .rv, 1:2, function( .str ) { if( length( grep( "^-", .str ) ) > 0 ) { NA } else { .str } } );
stopifnot( ncol( .rv ) == 4 );
return( .rv );
} ) );
colnames( anchre.results ) <- c( "Anchre.r2t.est", "Anchre.est", "Anchre.CI.low", "Anchre.CI.high" );
rownames( anchre.results ) <-
gsub( "^.+_(\\d+)$", "\\1", names( unlist( anchre.results.files ) ) );
# Special: for v3, only use caprisa seqs (not rv217, for now).
if( the.region == "v3" ) {
anchre.results <-
anchre.results[ grep( "^100\\d\\d\\d", rownames( anchre.results ) ), , drop = FALSE ];
} else if( the.region == "rv217_v3" ) {
anchre.results <-
anchre.results[ grep( "^100\\d\\d\\d", rownames( anchre.results ), invert = TRUE ), , drop = FALSE ];
}
# Add just the estimate from anchre.
sample.dates <- as.Date( as.character( sample.dates.in[ , 2 ] ) );
names( sample.dates ) <- sample.dates.in[ , 1 ];
anchre.r2t.days.before.sample <- sapply( 1:nrow( anchre.results ), function( .i ) { 0 - as.numeric( as.Date( anchre.results[ .i, 1 ] ) - sample.dates[ rownames( anchre.results )[ .i ] ] ) } );
names( anchre.r2t.days.before.sample ) <- rownames( anchre.results );
anchre.days.before.sample <- sapply( 1:nrow( anchre.results ), function( .i ) { 0 - as.numeric( as.Date( anchre.results[ .i, 2 ] ) - sample.dates[ rownames( anchre.results )[ .i ] ] ) } );
names( anchre.days.before.sample ) <- rownames( anchre.results );
anchre.columns <- cbind( anchre.r2t.days.before.sample, anchre.days.before.sample );
colnames( anchre.columns ) <- c( "Anchre.r2t.time.est", "Anchre.bst.time.est" );
return( anchre.columns );
} # get.anchre.results.columns (..)
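## The conversion above turns an estimated infection date into "days before the
## sample date" by negating the Date difference. With invented dates:

```r
sample.date <- as.Date( "2016-01-31" );
est.date <- as.Date( "2016-01-01" );
days.before.sample <- 0 - as.numeric( est.date - sample.date );
days.before.sample # 30: the estimate puts infection 30 days before sampling
```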
compute.diffs.by.stat <- function ( results.per.person, days.since.infection ) {
diffs.by.stat <-
lapply( colnames( results.per.person ), function( .stat ) {
.rv <- ( as.numeric( results.per.person[ , .stat ] ) - as.numeric( days.since.infection[ rownames( results.per.person ) ] ) );
names( .rv ) <- rownames( results.per.person );
return( .rv );
} );
names( diffs.by.stat ) <- colnames( results.per.person );
return( diffs.by.stat );
} # compute.diffs.by.stat ( results.per.person, days.since.infection )
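## compute.diffs.by.stat returns, for each estimator column, the signed error
## (estimate minus true days since infection), matched by participant rowname.
## A toy example with invented numbers:

```r
results.per.person <- matrix( c( 35, 28 ), ncol = 1,
                              dimnames = list( c( "A", "B" ), "COB.time.est" ) );
days.since.infection <- c( A = 30, B = 30 );
diffs <- lapply( colnames( results.per.person ), function( .stat ) {
    as.numeric( results.per.person[ , .stat ] ) -
        as.numeric( days.since.infection[ rownames( results.per.person ) ] )
} );
names( diffs ) <- colnames( results.per.person );
diffs[[ "COB.time.est" ]] # 5 -2: overestimates A by 5 days, underestimates B by 2
```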
bound.and.evaluate.results.per.ppt <-
function ( results.per.person, days.since.infection, results.covars.per.person.with.extra.cols, the.time, the.artificial.bounds = NA, ppt.suffix.pattern = "\\..+", return.step.coefs = TRUE, return.lasso.coefs = TRUE, return.formulas = TRUE, return.results.per.person = TRUE ) {
    ## Special: the ppt names might have suffixes in results.per.person; if so, strip off the suffix for purposes of matching ppts to the covars, etc.
ppt.names <- rownames( results.per.person );
ppt.suffices <- NULL;
if( !is.na( ppt.suffix.pattern ) && ( length( grep( ppt.suffix.pattern, ppt.names ) ) > 0 ) ) {
# if one has it, they should all have it.
stopifnot( length( grep( ppt.suffix.pattern, ppt.names ) ) == length( ppt.names ) );
.ppt.names <- ppt.names;
ppt.names <- gsub( ppt.suffix.pattern, "", .ppt.names );
names( ppt.names ) <- .ppt.names;
ppt.suffices <- gsub( paste( "^.+?(", ppt.suffix.pattern, ")$", sep = "" ), "\\1", .ppt.names, perl = TRUE );
names( ppt.suffices ) <- .ppt.names;
}
all.ptids <- unique( ppt.names );
## Also include all of the date estimates, which is
## everything in "results.per.person" so
## far. (However we will exclude some bounded
## results with deterministic bounds from the
## optimization, see below). It's also redundant to
## use PFitter-based days estimates, since we are
## using the mutation rate coefs, which are a
## linear fn of the corresponding days ests (I have
## confirmed that the predictions are the same
## (within +- 0.6 days, due to the rounding that
## PFitter does to its days estimates). So
## basically unless we added non-PFitter results
## (PREAST/infer, anchre, or center-of-bounds), there won't be
## anything to do here.
days.est.cols <- colnames( results.per.person );
days.est.cols <- grep( "deterministic", days.est.cols, invert = TRUE, value = TRUE );
days.est.cols <- grep( "PFitter|(DS)?Star[Pp]hy(Test)?", days.est.cols, invert = TRUE, perl = TRUE, value = TRUE );
# Also exclude anything time-dependent with times we don't use anymore.
days.est.cols <- grep( "(one|six)month", days.est.cols, invert = TRUE, perl = TRUE, value = TRUE );
## TODO: handle shifted
days.est.cols <- grep( "shifted", days.est.cols, invert = TRUE, perl = TRUE, value = TRUE );
if( the.time == "1m.6m" ) {
# Then keep the 1mmtn003.6mhvtn502 results, but not each separately.
days.est.cols <- grep( "\\.mtn|\\.hvtn", days.est.cols, invert = TRUE, perl = TRUE, value = TRUE );
}
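## The successive grep( ..., invert = TRUE, value = TRUE ) passes above each
## drop the column names matching one exclusion pattern. A sketch with a
## hypothetical column set:

```r
cols <- c( "COB.time.est", "PFitter.time.est", "deterministic.lower.time.est", "Infer.time.est" );
cols <- grep( "deterministic", cols, invert = TRUE, value = TRUE );
cols <- grep( "PFitter|(DS)?Star[Pp]hy(Test)?", cols, invert = TRUE, perl = TRUE, value = TRUE );
cols # "COB.time.est" "Infer.time.est"
```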
if( use.glm.validate || use.step.validate || use.lasso.validate ) {
results.covars.per.person.with.extra.cols <-
cbind( results.per.person[ , days.est.cols, drop = FALSE ], results.covars.per.person.with.extra.cols );
.keep.cols <-
grep( "num.*\\.seqs|totalbases|upper|lower", colnames( results.covars.per.person.with.extra.cols ), value = TRUE, perl = TRUE, invert = TRUE );
# There are redundancies because the mut.rate.coef for DS is identical to PFitter's and for Bayesian it is very similar.
.keep.cols <-
grep( "(DS)?Star[pP]hy(Test)?\\.mut\\.rate\\.coef", .keep.cols, value = TRUE, invert = TRUE );
# Also exclude this strange test.
.keep.cols <-
grep( "DS\\.(DS)?Star[pP]hy(Test)?\\.is\\.starlike", .keep.cols, value = TRUE, invert = TRUE );
# Also exclude this, which is based on the strange test.
.keep.cols <-
grep( "StarPhy\\.is\\.one\\.founder", .keep.cols, value = TRUE, invert = TRUE );
.keep.cols <-
grep( "(DS)?Star[pP]hy(Test)?\\.fits", .keep.cols, value = TRUE, invert = TRUE );
.keep.cols <-
grep( "(DS)?Star[pP]hy(Test)?\\.founders", .keep.cols, value = TRUE, invert = TRUE );
# For COB and infer, use only the real-data sources (mtn003 or hvtn502). So exclude the "mtn003" and "sixmonths" ones.
.keep.cols <-
grep( "\\.(one|six)months?\\.", .keep.cols, value = TRUE, invert = TRUE );
## TODO: ADD OPTION TO KEEP and USE SHIFTED RESULTS
.keep.cols <-
grep( "shifted", .keep.cols, value = TRUE, invert = TRUE );
          .donotkeep.cols <- c( grep( "DS\\.Star[pP]hy\\.(fits|founders|is.starlike)", .keep.cols, value = TRUE ) );
.donotkeep.cols <- c( .donotkeep.cols, "StarPhy.founders", "InSites.founders", "multifounder.DS.Starphy.fits" );
.keep.cols <- setdiff( .keep.cols, .donotkeep.cols );
## Keep only the mut.rate.coef cols and priv.sites and multifounder.Synonymous.PFitter.is.poisson, and Infer and anchre cols.
Infer.cols <- grep( "Infer", .keep.cols, value = TRUE );
## Exclude Infer cols for obsolete bounds.
Infer.cols <- grep( "(one|six)month", Infer.cols, value = TRUE, invert = TRUE );
          ## TODO: handle shifted
Infer.cols <- grep( "shifted", Infer.cols, value = TRUE, invert = TRUE );
## Actually also exclude the ones that don't match the time
if( the.time == "1m.6m" ) {
Infer.cols <- grep( "\\.mtn|\\.hvtn", Infer.cols, value = TRUE, invert = TRUE );
}
anchre.cols <- grep( "anchre", .keep.cols, value = TRUE );
mut.rate.coef.cols <- grep( "mut\\.rate\\.coef", .keep.cols, value = TRUE );
COB.cols <- grep( "^COB", .keep.cols, value = TRUE );
## Exclude COB cols for obsolete bounds.
COB.cols <- grep( "(one|six)month", COB.cols, value = TRUE, invert = TRUE );
## TODO: Handle shifted
COB.cols <- grep( "shifted", COB.cols, value = TRUE, invert = TRUE );
## Actually also exclude the ones that don't match the time
if( the.time == "1m.6m" ) {
COB.cols <- grep( "\\.mtn|\\.hvtn", COB.cols, value = TRUE, invert = TRUE );
}
estimate.cols <- c( COB.cols, mut.rate.coef.cols, Infer.cols, anchre.cols );
all.additional.cols <- setdiff( .keep.cols, estimate.cols );
if( use.lasso.validate && include.all.vars.in.lasso ) {
keep.cols <- unique( c( helpful.additional.cols, helpful.additional.cols.with.interactions, all.additional.cols, estimate.cols ) );
} else {
keep.cols <- unique( c( helpful.additional.cols, helpful.additional.cols.with.interactions, estimate.cols ) );
}
if( use.gold.is.multiple ) {
keep.cols <- c( "gold.is.multiple", keep.cols );
}
# Don't evaluate estimators that have no variation at
# all. Note that technically we could/should do this after holding
# out each person, in case a value has no variation among the
# subset excluding that person. But we do it here only.
estimators.to.exclude <-
apply( results.covars.per.person.with.extra.cols[ , estimate.cols, drop = FALSE ], 2, function ( .col ) {
return( ( sum( !is.na( .col ) ) <= 1 ) || ( var( .col, na.rm = TRUE ) == 0 ) );
} );
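## The exclusion test above drops an estimator column if it has at most one
## non-NA value or zero variance; the || short-circuits, so var() is never
## called on an all-NA column. The same test on a toy matrix (hypothetical
## columns):

```r
m <- cbind( all.na = c( NA, NA, NA ),
            constant = c( 5, 5, 5 ),
            varying = c( 1, 2, 3 ) );
exclude <- apply( m, 2, function ( .col ) {
    ( sum( !is.na( .col ) ) <= 1 ) || ( var( .col, na.rm = TRUE ) == 0 )
} );
exclude # all.na TRUE, constant TRUE, varying FALSE
```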
estimate.cols <- setdiff( estimate.cols, names( which( estimators.to.exclude ) ) );
# Also always evaluate no-estimate: "none".
estimate.cols <- c( "none", estimate.cols );
# To make things consistent we actually change this _to_ X6m.not.1m.
colnames( results.covars.per.person.with.extra.cols )[ colnames( results.covars.per.person.with.extra.cols ) == "6m.not.1m" ] <- "X6m.not.1m";
# Some cols from "helpful.additional.cols" or "helpful.additional.cols.with.interactions" might be missing
keep.cols <-
intersect( keep.cols, colnames( results.covars.per.person.with.extra.cols ) );
results.covars.per.person <-
results.covars.per.person.with.extra.cols[ , keep.cols, drop = FALSE ];
results.covars.per.person.df <-
data.frame( results.covars.per.person );
## DO NOT Undo conversion of the colnames (X is added before "6m.not.1m"). We want it to be called "X6m.not.1m" so it can work in the regression formulas.
#colnames( results.covars.per.person.df ) <- colnames( results.covars.per.person.with.extra.cols );
regression.df <- cbind( data.frame( days.since.infection = days.since.infection[ rownames( results.covars.per.person.df ) ] ), results.covars.per.person.df, lapply( the.artificial.bounds[ grep( "shifted", grep( "(one|six)month", names( the.artificial.bounds ), invert = TRUE, value = TRUE ), invert = TRUE, value = TRUE ) ], function( .mat ) { .mat[ rownames( results.covars.per.person.df ), , drop = FALSE ] } ) );
## Ok build a regression model, including only the helpful.additional.cols and helpful.additional.cols.with.interactions (and interactions among them), and also the lower and upper bounds associated with either 5 weeks or 30 weeks, depending on the.time (if there's a 1m sample, uses "mtn003").
if( ( the.time == "6m" ) || ( the.time == "1m6m" ) ) {
.lower.bound.colname <- "sampledwidth_uniform_hvtn502.lower";
.upper.bound.colname <- "sampledwidth_uniform_hvtn502.upper";
} else if( the.time == "1m.6m" ) {
.lower.bound.colname <- "sampledwidth_uniform_1mmtn003_6mhvtn502.lower";
.upper.bound.colname <- "sampledwidth_uniform_1mmtn003_6mhvtn502.upper";
} else { # 1m or 1w
.lower.bound.colname <- "sampledwidth_uniform_mtn003.lower";
.upper.bound.colname <- "sampledwidth_uniform_mtn003.upper";
}
if( use.glm.validate ) {
#glm.fit.statistics <- matrix( 0, nrow = length( all.ptids ), nrow( results.covars.per.person.df ), ncol = 0 );
## Note that there might be multiple rows per ppt in the regression.df and in this prediction output matrix; the values will be filled in using leave-one-ptid-out xv, ie in each iteration there might be multiple rows filled in, since multiple rows correspond to one held-out ptid.
glm.validation.results.per.person <- matrix( NA, nrow = nrow( results.covars.per.person.df ), ncol = length( estimate.cols ) );
glm.withbounds.validation.results.per.person <- matrix( NA, nrow = nrow( results.covars.per.person.df ), ncol = length( estimate.cols ) );
if( return.formulas ) {
## glm:
glm.formulas.per.person <-
matrix( NA, nrow = length( all.ptids ), ncol = length( estimate.cols ) );
colnames( glm.formulas.per.person ) <-
estimate.cols;
rownames( glm.formulas.per.person ) <-
all.ptids;
## glm.withbounds:
glm.withbounds.formulas.per.person <-
matrix( NA, nrow = length( all.ptids ), ncol = length( estimate.cols ) );
colnames( glm.withbounds.formulas.per.person ) <-
estimate.cols;
rownames( glm.withbounds.formulas.per.person ) <-
all.ptids;
}
}
if( use.step.validate ) {
## These are again per-sample, see above Note.
step.validation.results.per.person <-
matrix( NA, nrow = nrow( results.covars.per.person.df ), ncol = length( estimate.cols ) );
step.withbounds.validation.results.per.person <- matrix( NA, nrow = nrow( results.covars.per.person.df ), ncol = length( estimate.cols ) );
if( return.formulas ) {
## step:
step.formulas.per.person <-
matrix( NA, nrow = length( all.ptids ), ncol = length( estimate.cols ) );
colnames( step.formulas.per.person ) <-
estimate.cols;
rownames( step.formulas.per.person ) <-
all.ptids;
## step.withbounds:
step.withbounds.formulas.per.person <-
matrix( NA, nrow = length( all.ptids ), ncol = length( estimate.cols ) );
colnames( step.withbounds.formulas.per.person ) <-
estimate.cols;
rownames( step.withbounds.formulas.per.person ) <-
all.ptids;
}
if( return.step.coefs ) {
## This is really a 3D array, but I'm just lazily representing it directly this way. Note this is by removed ppt, not by regression row (there might be multiple rows per ppt in the regression.df and the prediction output matrices).
# and these are per-ptid!
step.validation.results.per.person.coefs <-
as.list( rep( NA, length( all.ptids ) ) );
names( step.validation.results.per.person.coefs ) <-
all.ptids;
step.withbounds.validation.results.per.person.coefs <-
as.list( rep( NA, length( all.ptids ) ) );
names( step.withbounds.validation.results.per.person.coefs ) <-
all.ptids;
}
}
if( use.lasso.validate ) {
## These are again per-sample, see above Note.
lasso.validation.results.per.person <-
matrix( NA, nrow = nrow( results.covars.per.person.df ), ncol = length( estimate.cols ) );
lasso.withbounds.validation.results.per.person <- matrix( NA, nrow = nrow( results.covars.per.person.df ), ncol = length( estimate.cols ) );
if( return.formulas ) {
## lasso:
lasso.formulas.per.person <-
matrix( NA, nrow = length( all.ptids ), ncol = length( estimate.cols ) );
colnames( lasso.formulas.per.person ) <-
estimate.cols;
rownames( lasso.formulas.per.person ) <-
all.ptids;
## lasso.withbounds:
lasso.withbounds.formulas.per.person <-
matrix( NA, nrow = length( all.ptids ), ncol = length( estimate.cols ) );
colnames( lasso.withbounds.formulas.per.person ) <-
estimate.cols;
rownames( lasso.withbounds.formulas.per.person ) <-
all.ptids;
}
if( return.lasso.coefs ) {
## This is really a 3D array, but I'm just lazily representing it directly this way. Note this is by removed ppt, not by regression row (there might be multiple rows per ppt in the regression.df and the prediction output matrices).
# and these are per-ptid!
lasso.validation.results.per.person.coefs <-
as.list( rep( NA, length( all.ptids ) ) );
names( lasso.validation.results.per.person.coefs ) <-
all.ptids;
lasso.withbounds.validation.results.per.person.coefs <-
as.list( rep( NA, length( all.ptids ) ) );
names( lasso.withbounds.validation.results.per.person.coefs ) <-
all.ptids;
}
}
for( .ptid.i in 1:length( all.ptids ) ) {
the.ptid <- all.ptids[ .ptid.i ];
the.rows.for.ptid <- which( ppt.names == the.ptid );
names( the.rows.for.ptid ) <- rownames( results.per.person )[ the.rows.for.ptid ];
the.rows.excluding.ptid <- which( ppt.names != the.ptid );
## TODO: REMOVE
print( paste( "PTID", .ptid.i, "removed:", the.ptid, "rows:(", paste( names( the.rows.for.ptid ), collapse = ", " ), ")" ) );
if( use.step.validate && return.step.coefs ) {
.step.validation.results.per.person.coefs.row <-
as.list( rep( NA, length( estimate.cols ) ) );
names( .step.validation.results.per.person.coefs.row ) <-
estimate.cols;
.step.withbounds.validation.results.per.person.coefs.row <-
as.list( rep( NA, length( estimate.cols ) ) );
names( .step.withbounds.validation.results.per.person.coefs.row ) <-
estimate.cols;
}
if( use.lasso.validate && return.lasso.coefs ) {
.lasso.validation.results.per.person.coefs.row <-
as.list( rep( NA, length( estimate.cols ) ) );
names( .lasso.validation.results.per.person.coefs.row ) <-
estimate.cols;
.lasso.withbounds.validation.results.per.person.coefs.row <-
as.list( rep( NA, length( estimate.cols ) ) );
names( .lasso.withbounds.validation.results.per.person.coefs.row ) <-
estimate.cols;
}
regression.df.without.ptid.i <-
regression.df[ the.rows.excluding.ptid, , drop = FALSE ];
## But now also exclude columns for which ptid has only NA values, as these can't be used for predicting ptid's excluded days.since.infection.
ptid.has.nonmissing.data.by.col <-
apply( regression.df[ the.rows.for.ptid, , drop = FALSE ], 2, function( .col ) { !all( is.na( .col ) ) } );
regression.df.without.ptid.i <-
regression.df.without.ptid.i[ , ptid.has.nonmissing.data.by.col, drop = FALSE ];
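## The column filter above keeps a covariate for the held-out participant only
## if at least one of that participant's rows has a non-NA value for it. The
## test in isolation, on toy data:

```r
held.out.rows <- data.frame( a = c( 1, 2 ), b = c( NA, NA ), c = c( NA, 3 ) );
has.data <- apply( held.out.rows, 2, function( .col ) { !all( is.na( .col ) ) } );
has.data # a TRUE, b FALSE, c TRUE (one non-NA value is enough)
```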
for( .col.i in 1:length( estimate.cols ) ) {
.estimate.colname <- estimate.cols[ .col.i ];
if( ( .estimate.colname != "none" ) && ( ptid.has.nonmissing.data.by.col[ .estimate.colname ] == FALSE ) ) {
# This estimator is missing for this participant.
next;
}
## TODO: REMOVE
#print( .estimate.colname );
if( use.glm.validate || use.step.validate || ( use.lasso.validate && !include.all.vars.in.lasso ) ) {
# covariates for glm
.covariates.glm <-
c( helpful.additional.cols, helpful.additional.cols.with.interactions );
# covariates for glm.withbounds
.covariates.glm.withbounds <-
c( helpful.additional.cols, helpful.additional.cols.with.interactions, .upper.bound.colname );
if( use.gold.is.multiple ) {
.covariates.glm <- c( "gold.is.multiple", .covariates.glm );
.covariates.glm.withbounds <- c( "gold.is.multiple", .covariates.glm.withbounds );
}
.covars.to.exclude <- apply( regression.df.without.ptid.i, 2, function ( .col ) {
return( ( sum( !is.na( .col ) ) <= 1 ) || ( var( as.numeric( .col ), na.rm = TRUE ) == 0 ) );
} );
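        ## Illustration (not executed): a column is flagged for exclusion when it
        ## has at most one non-missing value or zero variance among the training
        ## rows -- either way lm() could not estimate a coefficient for it.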
        # Drop the flagged covariates.  (Correlation-based filtering happens later, in the lasso branch only.)
        .retained.covars <-
          setdiff( colnames( regression.df.without.ptid.i ), names( which( .covars.to.exclude ) ) );
if( ( .estimate.colname == "none" ) || ( .estimate.colname %in% .retained.covars ) ) {
.interactors <-
intersect( .retained.covars, helpful.additional.cols.with.interactions );
# Add interactions among the interactors.
if( length( .interactors ) > 1 ) {
## NOTE that for now it is up to the caller to ensure that the df is not too high in this case.
.interactions.among.interactors <- c();
for( .interactor.i in 1:( length( .interactors ) - 1 ) ) {
.interactor <- .interactors[ .interactor.i ];
.remaining.interactors <-
.interactors[ ( .interactor.i + 1 ):length( .interactors ) ];
.interactions.among.interactors <-
c( .interactions.among.interactors,
paste( .interactor, .remaining.interactors, collapse = "+", sep = ":" ) );
} # End foreach .interactor.i
.interactors <- c( .interactors, .interactions.among.interactors );
} # End if there's more than one "interactor"
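          ## Illustration (not executed): with .interactors == c( "a", "b", "c" ),
          ## the loop above appends the pairwise fragments "a:b+a:c" and "b:c",
          ## so every unordered pair of interactors enters the formula exactly once.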
if( use.gold.is.multiple ) {
.interactors <- c( "gold.is.multiple", .interactors );
}
if( .estimate.colname == "none" ) {
.cv.glm <- intersect( .retained.covars, .covariates.glm );
if( length( .cv.glm ) == 0 ) {
.cv.glm <- 1;
}
## SEE NOTE BELOW ABOUT USE OF AN INTERCEPT. WHEN we're using "none", then we always do use an intercept (only for the non-bounded result).
if( include.intercept ) {
.formula <- paste( "days.since.infection ~ 1 + ", paste( .cv.glm, collapse = "+" ) );
.formula.withbounds <- paste( "days.since.infection ~ 1 + ", paste( intersect( .retained.covars, .covariates.glm.withbounds ), collapse = "+" ) );
} else {
.cv.glm.nointercept <- intersect( .retained.covars, .covariates.glm );
if( length( .cv.glm.nointercept ) == 0 ) {
.formula.nointercept <- "days.since.infection ~ 1"; # _only_ intercept!
} else {
.formula.nointercept <- paste( "days.since.infection ~ 0 + ", paste( .cv.glm.nointercept, collapse = "+" ) );
}
.covariates.glm.withbounds.nointercept <- .covariates.glm.withbounds;
.formula.withbounds.nointercept <- paste( "days.since.infection ~ 0 + ", paste( intersect( .retained.covars, .covariates.glm.withbounds.nointercept ), collapse = "+" ) );
.formula <- .formula.nointercept;
.formula.withbounds <- .formula.withbounds.nointercept;
}
} else {
          ## NOTE: Be careful about including an intercept here: with one, the fit can "cheat" by centering every prediction at the tight true bounds on days.since.infection in the training data (when looking at one time point at a time; with 6m.not.1m this now holds for both time points as well).
if( include.intercept ) {
.formula <- paste( "days.since.infection ~ 1 + ", paste( intersect( .retained.covars, c( .covariates.glm, .estimate.colname ) ), collapse = "+" ) );
.formula.withbounds <- paste( "days.since.infection ~ 1 + ", paste( intersect( .retained.covars, c( .covariates.glm.withbounds, .estimate.colname ) ), collapse = "+" ) );
} else {
.covariates.glm.nointercept <- .covariates.glm;
.formula.nointercept <- paste( "days.since.infection ~ 0 + ", paste( intersect( .retained.covars, c( .covariates.glm.nointercept, .estimate.colname ) ), collapse = "+" ) );
.formula <- .formula.nointercept;
.covariates.glm.withbounds.nointercept <- .covariates.glm.withbounds;
.formula.withbounds.nointercept <-
paste( "days.since.infection ~ 0 + ", paste( intersect( .retained.covars, c( .covariates.glm.withbounds.nointercept, .estimate.colname ) ), collapse = "+" ) );
.formula.withbounds <- .formula.withbounds.nointercept;
}
}
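          ## Example (not executed; "age" and "est" are hypothetical column
          ## names): with include.intercept == TRUE and .estimate.colname ==
          ## "est", the strings built above look like
          ##   "days.since.infection ~ 1 + age + est"
          ##   "days.since.infection ~ 1 + age + est + <the upper-bound column>"
          ## and remain plain character strings until passed to lm() below.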
.interactees <-
setdiff( intersect( .retained.covars, .covariates.glm ), .interactors );
if( ( length( .interactors ) > 0 ) && ( length( .interactees ) > 0 ) ) {
## NOTE that for now it is up to the caller to ensure that the df is not too high in this case.
.interactions.formula.part.components <- c();
for( .interactor.i in 1:length( .interactors ) ) {
.interactor <- .interactors[ .interactor.i ];
.interactions.formula.part.components <-
c( .interactions.formula.part.components,
paste( .interactor, .interactees, collapse = "+", sep = ":" ) );
} # End foreach .interactor.i
.interactions.formula.part <-
paste( .interactions.formula.part.components, collapse =" + " );
.formula <-
paste( .formula, .interactions.formula.part, sep = " + " );
} # End if there's any "interactees"
## Also add interactions with the estimate colname and everything included so far.
if( .estimate.colname != "none" ) {
.everything.included.so.far.in.formula <-
strsplit( .formula, split = "\\s*\\+\\s*" )[[1]];
# Add interactions.
.new.interactions.with.estimator.in.formula <-
sapply( setdiff( .everything.included.so.far.in.formula[ -1 ], .estimate.colname ), function ( .existing.part ) {
paste( .existing.part, .estimate.colname, sep = ":" )
} );
.formula <- paste( c( .everything.included.so.far.in.formula, .new.interactions.with.estimator.in.formula ), collapse = " + " );
} # End if .estimate.colname != "none"
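          ## Illustration (not executed, hypothetical names): if .formula were
          ## "days.since.infection ~ 1 + age + est" with .estimate.colname "est",
          ## the sapply above appends "age:est", giving
          ## "days.since.infection ~ 1 + age + est + age:est".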
## withbounds
.interactees.withbounds <-
setdiff( intersect( .retained.covars, .covariates.glm.withbounds ), .interactors );
if( ( length( .interactors ) > 0 ) && ( length( .interactees.withbounds ) > 0 ) ) {
## NOTE that for now it is up to the caller to ensure that the df is not too high in this case.
.interactions.withbounds.formula.part.components <- c();
for( .interactor.i in 1:length( .interactors ) ) {
.interactor <- .interactors[ .interactor.i ];
.interactions.withbounds.formula.part.components <-
c( .interactions.withbounds.formula.part.components,
paste( .interactor, .interactees.withbounds, collapse = "+", sep = ":" ) );
} # End foreach .interactor.i
.interactions.withbounds.formula.part <-
paste( .interactions.withbounds.formula.part.components, collapse =" + " );
.formula.withbounds <-
paste( .formula.withbounds, .interactions.withbounds.formula.part, sep = " + " );
} # End if there's any "interactees.withbounds"
## Also add interactions with the estimate colname and everything included so far.
if( .estimate.colname != "none" ) {
.everything.included.so.far.in.formula.withbounds <-
strsplit( .formula.withbounds, split = "\\s*\\+\\s*" )[[1]];
# Add interactions with the bound.
.new.interactions.with.estimator.in.formula.withbounds <-
sapply( setdiff( .everything.included.so.far.in.formula.withbounds[ -1 ], .estimate.colname ), function ( .existing.part ) {
paste( .existing.part, .estimate.colname, sep = ":" )
} );
.formula.withbounds <- paste( c( .everything.included.so.far.in.formula.withbounds, .new.interactions.with.estimator.in.formula.withbounds ), collapse = " + " );
} # End if .estimate.colname != "none"
## If the intercept isn't included, then also don't include any combo of the pseudo-intercepts denoting the time or region.
if( !include.intercept ) {
          ## TODO: move this up and out of the branch -- pseudo.intercepts is also referenced later, in the lasso branch.  These hard-coded ("magic") column names denote the time and region pseudo-intercepts.
          pseudo.intercepts <- c( "X6m.not.1m", "v3_not_nflg", "X6m.not.1m:v3_not_nflg", "v3_not_nflg:X6m.not.1m" );
.everything.included.so.far.in.formula <-
strsplit( .formula, split = "\\s*\\+\\s*" )[[1]];
# Remove the pseudointercepts.
.everything.that.should.be.in.formula <-
setdiff(
.everything.included.so.far.in.formula[ -1 ],
pseudo.intercepts
);
.formula <- paste( c( .everything.included.so.far.in.formula[ 1 ], .everything.that.should.be.in.formula ), collapse = " + " );
.everything.included.so.far.in.formula.withbounds <-
strsplit( .formula.withbounds, split = "\\s*\\+\\s*" )[[1]];
# Remove the pseudointercepts.
.everything.that.should.be.in.formula.withbounds <-
setdiff(
.everything.included.so.far.in.formula.withbounds[ -1 ],
pseudo.intercepts
);
.formula.withbounds <- paste( c( .everything.included.so.far.in.formula.withbounds[ 1 ], .everything.that.should.be.in.formula.withbounds ), collapse = " + " );
} # End if !include.intercept
        ## If calibrating against the mutation rate, keep only the terms that involve the estimate column itself.
if( mutation.rate.calibration && ( .estimate.colname != "none" ) ) {
.everything.included.so.far.in.formula <-
strsplit( .formula, split = "\\s*\\+\\s*" )[[1]];
# Remove anything not including the estimate.colname
.everything.that.should.be.in.formula <-
grep( .estimate.colname, .everything.included.so.far.in.formula[ -1 ], value = TRUE );
.formula <- paste( c( .everything.included.so.far.in.formula[ 1 ], .everything.that.should.be.in.formula ), collapse = " + " );
.everything.included.so.far.in.formula.withbounds <-
strsplit( .formula.withbounds, split = "\\s*\\+\\s*" )[[1]];
# Remove anything not including the estimate.colname
.everything.that.should.be.in.formula.withbounds <-
grep( .estimate.colname, .everything.included.so.far.in.formula.withbounds[ -1 ], value = TRUE );
.formula.withbounds <- paste( c( .everything.included.so.far.in.formula.withbounds[ 1 ], .everything.that.should.be.in.formula.withbounds ), collapse = " + " );
} # End if mutation.rate.calibration && ( estimate.colname != "none" )
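        ## Illustration (not executed): with .estimate.colname == "est", the
        ## grep() calls above keep only terms whose text contains "est" (e.g.
        ## "est", "age:est"), so the calibration fit uses the estimator and its
        ## interactions alone.  Note this is a plain substring match, so any
        ## other covariate whose name happens to contain the estimator's name
        ## would also be kept.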
# To this point these are still strings:
#.formula <- as.formula( .formula );
#.formula.withbounds <- as.formula( .formula.withbounds );
# The condition below is required now that we also compute the above for the cases with use.step.validate and ( use.lasso.validate && !include.all.vars.in.lasso )
if( use.glm.validate ) {
.glm.result <- lm( .formula, data = regression.df.without.ptid.i );
if( return.formulas ) {
glm.formulas.per.person[ .ptid.i, .col.i ] <-
.formula;
}
for( .row.i in the.rows.for.ptid ) {
.pred.value.glm <- predict( .glm.result, regression.df[ .row.i, , drop = FALSE ] );
glm.validation.results.per.person[ .row.i, .col.i ] <-
.pred.value.glm;
}
# glm.withbounds:
.glm.withbounds.result <-
lm( .formula.withbounds, data = regression.df.without.ptid.i );
if( return.formulas ) {
glm.withbounds.formulas.per.person[ .ptid.i, .col.i ] <-
.formula.withbounds;
}
for( .row.i in the.rows.for.ptid ) {
.pred.value.glm.withbounds <-
predict( .glm.withbounds.result, regression.df[ .row.i, , drop = FALSE ] );
glm.withbounds.validation.results.per.person[ .row.i, .col.i ] <-
.pred.value.glm.withbounds;
}
} # End if use.glm.validate
} # End if the estimate can be used
} # End if use.glm.validate || ( use.lasso.validate && !include.all.vars.in.lasso )
if( use.step.validate ) {
## step (not withbounds)
## Use .formula already defined.
## NOTE: in the below, the .retained.covars and .glm.result and .glm.withbounds.result are defined above, with the computation of the non-withbounds ("glm") results.
if( ( .estimate.colname == "none" ) || ( .estimate.colname %in% .retained.covars ) ) {
if( .estimate.colname == "none" ) {
.step.result <- suppressWarnings( step( .glm.result, trace = FALSE ) );
} else {
# This forces keeping the .estimate.colname
            tryCatch( {
              .glm.result.na.omit <-
                lm( .formula, data = na.omit( regression.df.without.ptid.i ) );
              .step.result <-
                suppressWarnings( step( .glm.result.na.omit, trace = FALSE, scope = c( lower = paste( "~", .estimate.colname ), upper = ( .glm.result.na.omit$terms ) ) ) );
            }, error = function( e ) {
              # Sometimes for unknown reasons it fails with a bounded scope but not without one.
              # NOTE: <<- is required here; a plain <- would only assign inside this handler's environment.
              .step.result <<-
                suppressWarnings( step( .glm.result.na.omit, trace = FALSE ) );
            } );
}
if( return.formulas ) {
step.formulas.per.person[ .ptid.i, .col.i ] <-
as.character( ( ( .step.result )$call )[ 2 ] );
}
          ## Predict each of the held-out ptid's rows (mirrors the glm loop above).
          for( .row.i in the.rows.for.ptid ) {
            .pred.value.step <-
              predict( .step.result, regression.df[ .row.i, , drop = FALSE ] );
            step.validation.results.per.person[ .row.i, .col.i ] <-
              .pred.value.step;
          }
if( return.step.coefs ) {
.step.validation.results.per.person.coefs.cell <-
coef( .step.result );
.step.validation.results.per.person.coefs.row[[ .col.i ]] <-
.step.validation.results.per.person.coefs.cell;
}
## step.withbounds:
if( .estimate.colname == "none" ) {
.step.withbounds.result <-
suppressWarnings( step( .glm.withbounds.result, trace = FALSE ) );
} else {
# This forces keeping the .estimate.colname and .upper.bound.colname
            tryCatch( {
              .glm.withbounds.result.na.omit <-
                lm( .formula.withbounds, data = na.omit( regression.df.without.ptid.i ) );
              .step.withbounds.result <-
                suppressWarnings( step( .glm.withbounds.result.na.omit, trace = FALSE, scope = c( lower = paste( "~", paste( .estimate.colname, .upper.bound.colname, sep = "+" ) ), upper = ( .glm.withbounds.result.na.omit$terms ) ) ) );
            }, error = function( e ) {
              # Sometimes for unknown reasons it fails with a bounded scope but not without one.
              # NOTE: <<- is required here; a plain <- would only assign inside this handler's environment.
              .step.withbounds.result <<-
                suppressWarnings( step( .glm.withbounds.result.na.omit, trace = FALSE ) );
            } );
}
if( return.formulas ) {
step.withbounds.formulas.per.person[ .ptid.i, .col.i ] <-
as.character( ( ( .step.withbounds.result )$call )[ 2 ] );
}
          ## Predict each of the held-out ptid's rows (mirrors the glm loop above).
          for( .row.i in the.rows.for.ptid ) {
            .pred.value.step.withbounds <-
              predict( .step.withbounds.result, regression.df[ .row.i, , drop = FALSE ] );
            step.withbounds.validation.results.per.person[ .row.i, .col.i ] <-
              .pred.value.step.withbounds;
          }
if( return.step.coefs ) {
.step.withbounds.validation.results.per.person.coefs.cell <-
coef( .step.withbounds.result );
.step.withbounds.validation.results.per.person.coefs.row[[ .col.i ]] <-
.step.withbounds.validation.results.per.person.coefs.cell;
}
} else { stop( paste( "unusable estimator:", .estimate.colname ) ) } # End if the step estimate variable is usable
} # End if use.step.validate
if( use.lasso.validate ) {
## lasso (not withbounds)
if( include.all.vars.in.lasso ) {
# covariates for lasso
.covariates.lasso <-
intersect( colnames( regression.df.without.ptid.i ), unique( c( helpful.additional.cols, all.additional.cols ) ) );
# covariates for lasso.withbounds
.covariates.lasso.withbounds <-
intersect( colnames( regression.df.without.ptid.i ), unique( c( helpful.additional.cols, .lower.bound.colname, .upper.bound.colname, all.additional.cols ) ) );
# lasso:
if( .estimate.colname == "none" ) {
.lasso.mat <-
as.matrix( regression.df.without.ptid.i[ , .covariates.lasso, drop = FALSE ] );
} else {
.lasso.mat <-
as.matrix( regression.df.without.ptid.i[ , c( .covariates.lasso, .estimate.colname ), drop = FALSE ] );
}
mode( .lasso.mat ) <- "numeric";
# exclude any rows with any NAs.
.retained.rows <-
which( apply( .lasso.mat, 1, function( .row ) { !any( is.na( .row ) ) } ) );
.lasso.mat <-
.lasso.mat[ .retained.rows, , drop = FALSE ];
.out <- regression.df.without.ptid.i[[ "days.since.infection" ]][ .retained.rows ];
.covars.to.exclude <- apply( .lasso.mat, 2, function ( .col ) {
return( ( sum( !is.na( .col ) ) <= 1 ) || ( var( as.numeric( .col ), na.rm = TRUE ) == 0 ) );
} );
.covars.to.exclude <- names( which( .covars.to.exclude ) );
# At least one DF is needed for the variance estimate (aka the error term), and one for leave-one-out xvalidation.
cors.with.the.outcome <-
sapply( setdiff( colnames( .lasso.mat ), c( .covars.to.exclude, .estimate.colname ) ), function( .covar.colname ) {
.cor <-
cor( .out, .lasso.mat[ , .covar.colname ], use = "pairwise" );
return( .cor );
} );
sorted.cors.with.the.outcome <-
cors.with.the.outcome[ order( abs( cors.with.the.outcome ) ) ];
# Sort the columns of .lasso.mat by their correlation with the outcome. This is to ensure that more-relevant columns get selected when removing columns due to pairwise correlation among them. We want to keep the best one.
if( .estimate.colname == "none" ) {
.lasso.mat <- .lasso.mat[ , rev( names( sorted.cors.with.the.outcome ) ), drop = FALSE ];
} else {
.lasso.mat <- .lasso.mat[ , c( .estimate.colname, rev( names( sorted.cors.with.the.outcome ) ) ), drop = FALSE ];
}
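          ## Illustration (not executed): if cors.with.the.outcome were
          ## c( a = 0.2, b = -0.9, c = 0.5 ), order( abs( . ) ) sorts ascending
          ## to a, c, b and rev() puts the strongest first, so the retained
          ## columns end up ordered b, c, a (with the estimator column, when
          ## present, pinned in front).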
# Exclude covars that are not sufficiently correlated with the outcome.
.covars.to.exclude <- c( .covars.to.exclude,
names( which( sapply( setdiff( colnames( .lasso.mat ), c( .covars.to.exclude, .estimate.colname ) ), function( .covar.colname ) {
#print( .covar.colname );
.cor <-
cor( .out, .lasso.mat[ , .covar.colname ], use = "pairwise" );
#print( .cor );
if( is.na( .cor ) || ( abs( .cor ) <= MINIMUM.CORRELATION.WITH.OUTCOME ) ) {
TRUE
} else {
FALSE
}
} ) ) ) );
        # Pairwise-correlation cutoff used by all of the covariate filtering below.
        COR.THRESHOLD <- 0.8;
# Exclude covars that are too highly correlated with the estimate.
if( .estimate.colname != "none" ) {
.covars.to.exclude <- c( .covars.to.exclude,
names( which( sapply( setdiff( colnames( .lasso.mat ), c( .covars.to.exclude, .estimate.colname ) ), function( .covar.colname ) {
#print( .covar.colname );
.cor <-
cor( .lasso.mat[ , .estimate.colname ], .lasso.mat[ , .covar.colname ], use = "pairwise" );
#print( .cor );
if( is.na( .cor ) || ( abs( .cor ) >= COR.THRESHOLD ) ) {
TRUE
} else {
FALSE
}
} ) ) ) );
}
# Exclude covars that are too highly correlated with each other.
.covars.to.consider <-
setdiff( colnames( .lasso.mat ), c( .covars.to.exclude, .estimate.colname, .lower.bound.colname, .upper.bound.colname ) );
## Process these in reverse order to ensure that we prioritize keeping those towards the top.
.covars.to.consider <- rev( .covars.to.consider );
.new.covars.to.exclude <- rep( FALSE, length( .covars.to.consider ) );
names( .new.covars.to.exclude ) <- .covars.to.consider;
for( .c.i in 1:length( .covars.to.consider ) ) {
.covar.colname <- .covars.to.consider[ .c.i ];
#print( .covar.colname );
# Only consider those not already excluded.
.cor <-
cor( .lasso.mat[ , .covar.colname ], .lasso.mat[ , names( which( !.new.covars.to.exclude ) ) ], use = "pairwise" );
#print( .cor );
if( ( length( .cor ) > 0 ) && any( .cor[ !is.na( .cor ) ] < 1 & .cor[ !is.na( .cor ) ] >= COR.THRESHOLD ) ) {
.new.covars.to.exclude[ .c.i ] <- TRUE;
} else {
.new.covars.to.exclude[ .c.i ] <- FALSE;
}
} # End foreach of the .covars.to.consider
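        ## Illustration (not executed): candidates are processed weakest-first,
        ## so if x_weak and x_strong (hypothetical) correlate at 0.95, x_weak is
        ## tested first -- against a set that still contains x_strong -- and is
        ## excluded; by the time x_strong is tested its offending partner is
        ## already gone, so the stronger predictor of each correlated pair
        ## survives.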
.covars.to.exclude <- c( .covars.to.exclude, names( which( .new.covars.to.exclude ) ) );
.retained.covars <- setdiff( colnames( .lasso.mat ), .covars.to.exclude );
.covariates.lasso <- intersect( .retained.covars, .covariates.lasso );
if( .estimate.colname == "none" ) {
.cv.lasso <- intersect( .retained.covars, .covariates.lasso );
if( length( .cv.lasso ) == 0 ) {
.cv.lasso <- 1;
}
## SEE NOTE BELOW ABOUT USE OF AN INTERCEPT. WHEN we're using "none", then we always do use an intercept (only for the non-bounded result).
if( include.intercept ) {
.formula <- paste( "days.since.infection ~ 1 + ", paste( .cv.lasso, collapse = "+" ) );
} else {
.cv.lasso.nointercept <- intersect( .retained.covars, .covariates.lasso );
if( length( .cv.lasso.nointercept ) == 0 ) {
.formula.nointercept <- "days.since.infection ~ 1"; # _only_ intercept!
} else {
.formula.nointercept <- paste( "days.since.infection ~ 0 + ", paste( .cv.lasso.nointercept, collapse = "+" ) );
}
.formula <- .formula.nointercept;
}
} else {
            ## NOTE: Be careful about including an intercept here: with one, the fit can "cheat" by centering every prediction at the tight true bounds on days.since.infection in the training data (when looking at one time point at a time; with 6m.not.1m this now holds for both time points as well).
if( include.intercept ) {
.formula <- paste( "days.since.infection ~ 1 + ", paste( intersect( .retained.covars, c( .covariates.lasso, .estimate.colname ) ), collapse = "+" ) );
} else {
.covariates.lasso.nointercept <- .covariates.lasso;
.formula.nointercept <- paste( "days.since.infection ~ 0 + ", paste( intersect( .retained.covars, c( .covariates.lasso.nointercept, .estimate.colname ) ), collapse = "+" ) );
.formula <- .formula.nointercept;
}
}
## NOTE: We now always include these helpful.additional.cols.with.interactions, even if they would otherwise be excluded.
.interactors <-
intersect( colnames( .lasso.mat ), helpful.additional.cols.with.interactions );
#intersect( .retained.covars, helpful.additional.cols.with.interactions );
# Add interactions among the interactors.
if( length( .interactors ) > 1 ) {
## NOTE that for now it is up to the caller to ensure that the df is not too high in this case.
.interactions.among.interactors <- c();
for( .interactor.i in 1:( length( .interactors ) - 1 ) ) {
.interactor <- .interactors[ .interactor.i ];
.remaining.interactors <-
.interactors[ ( .interactor.i + 1 ):length( .interactors ) ];
.interactions.among.interactors <-
c( .interactions.among.interactors,
paste( .interactor, .remaining.interactors, collapse = "+", sep = ":" ) );
} # End foreach .interactor.i
.interactors <- c( .interactors, .interactions.among.interactors );
} # End if there's more than one "interactor"
if( use.gold.is.multiple ) {
.interactors <- c( "gold.is.multiple", .interactors );
}
.interactees <-
setdiff( .retained.covars, .interactors );
if( ( length( .interactors ) > 0 ) && ( length( .interactees ) > 0 ) ) {
## NOTE that for now it is up to the caller to ensure that the df is not too high in this case.
.interactions.formula.part.components <- c();
for( .interactor.i in 1:length( .interactors ) ) {
.interactor <- .interactors[ .interactor.i ];
.interactions.formula.part.components <-
c( .interactions.formula.part.components,
paste( .interactor, .interactees, collapse = "+", sep = ":" ) );
} # End foreach .interactor.i
.interactions.formula.part <-
paste( .interactions.formula.part.components, collapse =" + " );
.formula <-
paste( .formula, .interactions.formula.part, sep = " + " );
} # End if there's any "interactees"
## Also add interactions with the estimate colname and everything included so far.
if( .estimate.colname != "none" ) {
.everything.included.so.far.in.formula <-
strsplit( .formula, split = "\\s*\\+\\s*" )[[1]];
# Add interactions.
.new.interactions.with.estimator.in.formula <-
sapply( .everything.included.so.far.in.formula[ 3:length( .everything.included.so.far.in.formula ) ], function ( .existing.part ) {
paste( .existing.part, .estimate.colname, sep = ":" )
} );
.formula <- paste( c( .everything.included.so.far.in.formula, .new.interactions.with.estimator.in.formula ), collapse = " + " );
} # End if .estimate.colname != "none"
} else {
## Great, do nothing, just use .formula already defined.
}
## NOTE: in the below, the .retained.covars for the !include.all.vars.in.lasso case are defined above, with the computation of the non-withbounds ("glm") results.
if( include.all.vars.in.lasso || ( ( .estimate.colname == "none" ) || ( .estimate.colname %in% .retained.covars ) ) ) {
.mf <- stats::model.frame( as.formula( .formula ), data = regression.df.without.ptid.i );
.out <- .mf[ , "days.since.infection" ];
.lasso.mat <- model.matrix(as.formula( .formula ), .mf);
.covars.to.exclude <- apply( .lasso.mat, 2, function ( .col ) {
return( ( sum( !is.na( .col ) ) <= 1 ) || ( var( .col, na.rm = TRUE ) == 0 ) );
} );
.covars.to.exclude <- names( which( .covars.to.exclude ) );
cors.with.the.outcome <-
sapply( setdiff( colnames( .lasso.mat ), c( .covars.to.exclude, .estimate.colname ) ), function( .covar.colname ) {
.cor <-
cor( .out, .lasso.mat[ , .covar.colname ], use = "pairwise" );
return( .cor );
} );
sorted.cors.with.the.outcome <-
cors.with.the.outcome[ order( abs( cors.with.the.outcome ) ) ];
# Sort the columns of .lasso.mat by their correlation with the outcome. This is to ensure that more-relevant columns get selected when removing columns due to pairwise correlation among them. We want to keep the best one.
if( .estimate.colname == "none" ) {
.lasso.mat <- .lasso.mat[ , rev( names( sorted.cors.with.the.outcome ) ), drop = FALSE ];
} else {
.lasso.mat <- .lasso.mat[ , c( .estimate.colname, rev( names( sorted.cors.with.the.outcome ) ) ), drop = FALSE ];
}
.retained.covars <- colnames( .lasso.mat );
.needed.df <-
( length( .retained.covars ) - ( nrow( .lasso.mat ) - MINIMUM.DF ) );
if( .needed.df > 0 ) {
# Then remove some covars until there is at least MINIMUM.DF degrees of freedom.
# They are in order, so just chop them off the end.
.retained.covars <-
.retained.covars[ 1:( length( .retained.covars ) - .needed.df ) ];
}
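          ## Illustration (not executed; MINIMUM.DF's value is defined
          ## elsewhere): with 40 training rows, MINIMUM.DF == 2 and 45 retained
          ## covariates, .needed.df == 45 - ( 40 - 2 ) == 7, so the 7 weakest
          ## (last-ordered) covariates are chopped off, leaving 38.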
} # End if( include.all.vars.in.lasso || ( ( .estimate.colname == "none" ) || ( .estimate.colname %in% .retained.covars ) ) )
## Ensure that the formula and the lasso mat are consistent.
.everything.included.so.far.in.formula <-
strsplit( .formula, split = "\\s*\\+\\s*" )[[1]];
# Remove the excluded columns from the formula.
if( include.intercept ) {
.formula <- paste( c( .everything.included.so.far.in.formula[ 1 ], .retained.covars ), collapse = " + " );
} else {
## If the intercept isn't included, then also don't include any combo of the pseudo-intercepts denoting the time or region.
# Remove the pseudointercepts.
.everything.that.should.be.in.formula <-
setdiff(
.retained.covars,
pseudo.intercepts
);
.formula <- paste( c( .everything.included.so.far.in.formula[ 1 ], .everything.that.should.be.in.formula ), collapse = " + " );
}
if( ( .estimate.colname == "none" ) || ( .estimate.colname %in% .retained.covars ) ) {
.mf <- stats::model.frame( as.formula( .formula ), data = regression.df.without.ptid.i );
.out <- .mf[ , "days.since.infection" ];
.lasso.mat <- model.matrix(as.formula( .formula ), .mf);
          # A penalty.factor of 0 forces cv.glmnet to retain the .estimate.colname variable.
          .penalty.factor <-
            as.numeric( colnames( .lasso.mat ) != .estimate.colname );
if( return.formulas ) {
lasso.formulas.per.person[ .ptid.i, .col.i ] <-
.formula;
}
if( ncol( .lasso.mat ) == 1 ) {
# Can't do lasso with only one variable.
# This is using basic lm, not lasso.
.lasso.result <- lm( .formula, data = regression.df.without.ptid.i );
for( .row.i in the.rows.for.ptid ) {
# Reminder: this is using basic lm, not lasso.
.pred.value.lasso <-
predict( .lasso.result, regression.df[ .row.i, , drop = FALSE ] );
lasso.validation.results.per.person[ .row.i, .col.i ] <-
.pred.value.lasso;
}
if( return.lasso.coefs ) {
# Reminder: not really lasso.
.lasso.validation.results.per.person.coefs.cell <-
coef( .lasso.result );
.lasso.validation.results.per.person.coefs.row[[ .col.i ]] <-
.lasso.validation.results.per.person.coefs.cell;
}
} else {
tryCatch(
{
              .cv.glmnet.fit <- cv.glmnet( .lasso.mat, .out, intercept = include.intercept,
                penalty.factor = .penalty.factor, grouped = FALSE, nfolds = length( .out ) ); # leave-one-out CV; NOTE the argument name is "nfolds".  grouped = FALSE to avoid the warning.
.lasso.validation.results.per.person.coefs.cell <-
coef( .cv.glmnet.fit, s = "lambda.min" );
if( return.lasso.coefs ) {
.lasso.validation.results.per.person.coefs.row[[ .col.i ]] <-
.lasso.validation.results.per.person.coefs.cell;
}
.mf.ptid <- stats::model.frame( as.formula( .formula ), data = regression.df );
# This reorders them, which fixes a bug in which columns get renamed automatically by model.matrix according to the order in which the columns appear.
.mf.ptid <- .mf.ptid[ , colnames( .mf ) ];
.the.actual.rows.for.ptid <-
the.rows.for.ptid[ names( the.rows.for.ptid ) %in% rownames( .mf.ptid ) ];
.lasso.mat.ptid <-
model.matrix(as.formula( .formula ), .mf.ptid[ names( .the.actual.rows.for.ptid ), , drop = FALSE ] );
for( .row.i.j in 1:length(.the.actual.rows.for.ptid) ) {
.row.i <- .the.actual.rows.for.ptid[ .row.i.j ];
.newx <-
c( 1, .lasso.mat.ptid[ .row.i.j, rownames( as.matrix( .lasso.validation.results.per.person.coefs.cell ) )[-1], drop = FALSE ] );
names( .newx ) <- c( "(Intercept)", rownames( as.matrix( .lasso.validation.results.per.person.coefs.cell ) )[-1] );
# Note that the intercept is always the first one, even when include.intercept == FALSE
.pred.value.lasso <-
sum( as.matrix( .lasso.validation.results.per.person.coefs.cell ) * .newx );
lasso.validation.results.per.person[ .row.i, .col.i ] <-
.pred.value.lasso;
}
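                ## Illustration (not executed): coef( .cv.glmnet.fit ) reports an
                ## "(Intercept)" row first even when include.intercept == FALSE
                ## (the intercept coefficient is then 0), so prepending 1 to the
                ## row's covariate values and summing the elementwise products
                ## reproduces the linear predictor by hand.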
},
error = function( e )
{
if( .col.i == 1 ) {
              warning( paste( "ptid", .ptid.i, "col", .col.i, "lasso failed with error", e, "\nReverting to simple regression with only an intercept." ) );
.formula <- as.formula( "days.since.infection ~ 1" );
} else {
warning( paste( "ptid", .ptid.i, "col", .col.i, "lasso failed with error", e, "\nReverting to simple regression vs", .estimate.colname ) );
if( include.intercept ) {
.formula <- as.formula( paste( "days.since.infection ~ 1 + ", .estimate.colname ) );
} else {
.formula <- as.formula( paste( "days.since.infection ~ 0 + ", .estimate.colname ) );
}
}
.lm <- lm( .formula, data = regression.df.without.ptid.i );
if( return.lasso.coefs ) {
## It's nothing, no coefs selected, so leave it as NA.
}
for( .row.i in the.rows.for.ptid ) {
.pred.value.lasso <-
predict( .lm, regression.df[ .row.i, , drop = FALSE ] );
lasso.validation.results.per.person[ .row.i, .col.i ] <<-
.pred.value.lasso;
}
},
finally = {}
);
} # End if there's at least two columns, enough to do lasso on.
} # End if the lasso estimate variable is usable
## lasso.withbounds:
if( include.all.vars.in.lasso ) {
if( .estimate.colname == "none" ) {
.lasso.withbounds.mat <-
as.matrix( regression.df.without.ptid.i[ , .covariates.lasso.withbounds ] );
} else {
.lasso.withbounds.mat <-
as.matrix( regression.df.without.ptid.i[ , c( .covariates.lasso.withbounds, .estimate.colname ) ] );
}
mode( .lasso.withbounds.mat ) <- "numeric";
# exclude any rows with any NAs.
.retained.rows <-
which( apply( .lasso.withbounds.mat, 1, function( .row ) { !any( is.na( .row ) ) } ) );
.lasso.withbounds.mat <-
.lasso.withbounds.mat[ .retained.rows, , drop = FALSE ];
.out <- regression.df.without.ptid.i[[ "days.since.infection" ]][ .retained.rows ];
.covars.to.exclude <- apply( .lasso.withbounds.mat, 2, function ( .col ) {
return( ( sum( !is.na( .col ) ) <= 1 ) || ( var( .col, na.rm = TRUE ) == 0 ) );
} );
.covars.to.exclude <- names( which( .covars.to.exclude ) );
cors.with.the.outcome <-
sapply( setdiff( colnames( .lasso.withbounds.mat ), c( .covars.to.exclude, .estimate.colname ) ), function( .covar.colname ) {
.cor <-
cor( .out, .lasso.withbounds.mat[ , .covar.colname ], use = "pairwise" );
return( .cor );
} );
sorted.cors.with.the.outcome <-
cors.with.the.outcome[ order( abs( cors.with.the.outcome ) ) ];
# Sort the columns of .lasso.withbounds.mat by their correlation with the outcome. This is to ensure that more-relevant columns get selected when removing columns due to pairwise correlation among them. We want to keep the best one.
if( .estimate.colname == "none" ) {
.lasso.withbounds.mat <- .lasso.withbounds.mat[ , rev( names( sorted.cors.with.the.outcome ) ), drop = FALSE ];
} else {
.lasso.withbounds.mat <- .lasso.withbounds.mat[ , c( .estimate.colname, rev( names( sorted.cors.with.the.outcome ) ) ), drop = FALSE ];
}
# Exclude covars that are not sufficiently correlated with the outcome.
.covars.to.exclude <- c( .covars.to.exclude,
names( which( sapply( setdiff( colnames( .lasso.withbounds.mat ), c( .covars.to.exclude, .lower.bound.colname, .upper.bound.colname, .estimate.colname ) ), function( .covar.colname ) {
#print( .covar.colname );
.cor <-
cor( .out, .lasso.withbounds.mat[ , .covar.colname ], use = "pairwise" );
#print( .cor );
if( is.na( .cor ) || ( abs( .cor ) <= MINIMUM.CORRELATION.WITH.OUTCOME ) ) {
TRUE
} else {
FALSE
}
} ) ) ) );
# Exclude covars that are too highly correlated with the estimate. Here we can exclude the upper or lower bound too if it is too highly correlated with the estimate.
if( .estimate.colname != "none" ) {
.covars.to.exclude <- c( .covars.to.exclude,
names( which( sapply( setdiff( colnames( .lasso.withbounds.mat ), c( .covars.to.exclude, .estimate.colname ) ), function( .covar.colname ) {
#print( .covar.colname );
.cor <-
cor( .lasso.withbounds.mat[ , .estimate.colname ], .lasso.withbounds.mat[ , .covar.colname ], use = "pairwise" );
#print( .cor );
if( is.na( .cor ) || ( abs( .cor ) >= COR.THRESHOLD ) ) {
TRUE
} else {
FALSE
}
} ) ) ) );
}
# Exclude covars that are too highly correlated with the upper bound.
if( !( .upper.bound.colname %in% .covars.to.exclude ) ) {
.covars.to.exclude <- c( .covars.to.exclude,
names( which( sapply( setdiff( colnames( .lasso.withbounds.mat ), c( .covars.to.exclude, .estimate.colname, .upper.bound.colname ) ), function( .covar.colname ) {
#print( .covar.colname );
.cor <-
cor( .lasso.withbounds.mat[ , .upper.bound.colname ], .lasso.withbounds.mat[ , .covar.colname ], use = "pairwise" );
#print( .cor );
if( is.na( .cor ) || ( abs( .cor ) >= COR.THRESHOLD ) ) {
TRUE
} else {
FALSE
}
} ) ) ) );
}
# Exclude covars that are too highly correlated with each other (not including bounds)
.covars.to.consider <-
setdiff( colnames( .lasso.withbounds.mat ), c( .covars.to.exclude, .estimate.colname, .lower.bound.colname, .upper.bound.colname ) );
## Process these in reverse order to ensure that we prioritize keeping those towards the top.
.covars.to.consider <- rev( .covars.to.consider );
.new.covars.to.exclude <- rep( FALSE, length( .covars.to.consider ) );
names( .new.covars.to.exclude ) <- .covars.to.consider;
for( .c.i in 1:length( .covars.to.consider ) ) {
.covar.colname <- .covars.to.consider[ .c.i ];
#print( .covar.colname );
# Only consider those not already excluded.
.cor <-
cor( .lasso.withbounds.mat[ , .covar.colname ], .lasso.withbounds.mat[ , names( which( !.new.covars.to.exclude ) ) ], use = "pairwise" );
#print( .cor );
if( ( length( .cor ) > 0 ) && any( .cor[ !is.na( .cor ) ] < 1 & .cor[ !is.na( .cor ) ] >= COR.THRESHOLD ) ) {
.new.covars.to.exclude[ .c.i ] <- TRUE;
} else {
.new.covars.to.exclude[ .c.i ] <- FALSE;
}
} # End foreach of the .covars.to.consider
.covars.to.exclude <- c( .covars.to.exclude, names( which( .new.covars.to.exclude ) ) );
.retained.covars <-
setdiff( colnames( .lasso.withbounds.mat ), .covars.to.exclude );
.covariates.lasso.withbounds <- intersect( .retained.covars, .covariates.lasso.withbounds );
if( .estimate.colname == "none" ) {
        ## See the note below about use of an intercept: when the estimator is "none", we always do use an intercept (only for the non-bounded result).
if( include.intercept ) {
.formula.withbounds <- paste( "days.since.infection ~ 1 + ", paste( intersect( .retained.covars, .covariates.lasso.withbounds ), collapse = "+" ) );
} else {
.covariates.lasso.withbounds.nointercept <- .covariates.lasso.withbounds;
.formula.withbounds.nointercept <- paste( "days.since.infection ~ 0 + ", paste( intersect( .retained.covars, .covariates.lasso.withbounds.nointercept ), collapse = "+" ) );
.formula.withbounds <- .formula.withbounds.nointercept;
}
} else {
        ## NOTE: we must be careful about using an intercept, because then we just cheat by predicting everything centered at the tight true bounds on days.since.infection in our training data (when looking at one time at a time; with 6m.not.1m this should now be true for both times as well).
if( include.intercept ) {
.formula.withbounds <- paste( "days.since.infection ~ 1 + ", paste( intersect( .retained.covars, c( .covariates.lasso.withbounds, .estimate.colname ) ), collapse = "+" ) );
} else {
.covariates.lasso.withbounds.nointercept <- .covariates.lasso.withbounds;
.formula.withbounds.nointercept <-
paste( "days.since.infection ~ 0 + ", paste( intersect( .retained.covars, c( .covariates.lasso.withbounds.nointercept, .estimate.colname ) ), collapse = "+" ) );
.formula.withbounds <- .formula.withbounds.nointercept;
}
}
        ## NOTE: We now always include these helpful.additional.cols.with.interactions, even if they would otherwise be excluded.
.interactors <-
intersect( colnames( .lasso.mat ), helpful.additional.cols.with.interactions );
#intersect( .retained.covars, helpful.additional.cols.with.interactions );
# Add interactions among the interactors.
if( length( .interactors ) > 1 ) {
## NOTE that for now it is up to the caller to ensure that the df is not too high in this case.
.interactions.among.interactors <- c();
for( .interactor.i in 1:( length( .interactors ) - 1 ) ) {
.interactor <- .interactors[ .interactor.i ];
.remaining.interactors <-
.interactors[ ( .interactor.i + 1 ):length( .interactors ) ];
.interactions.among.interactors <-
c( .interactions.among.interactors,
paste( .interactor, .remaining.interactors, collapse = "+", sep = ":" ) );
} # End foreach .interactor.i
.interactors <- c( .interactors, .interactions.among.interactors );
} # End if there's more than one "interactor"
if( use.gold.is.multiple ) {
.interactors <- c( "gold.is.multiple", .interactors );
}
.interactees <-
setdiff( .retained.covars, .interactors );
if( ( length( .interactors ) > 0 ) && ( length( .interactees ) > 0 ) ){
## NOTE that for now it is up to the caller to ensure that the df is not too high in this case.
.interactions.formula.part.components <- c();
for( .interactor.i in 1:length( .interactors ) ) {
.interactor <- .interactors[ .interactor.i ];
.interactions.formula.part.components <-
c( .interactions.formula.part.components,
paste( .interactor, .interactees, collapse = "+", sep = ":" ) );
} # End foreach .interactor.i
.interactions.formula.part <-
paste( .interactions.formula.part.components, collapse =" + " );
.formula.withbounds <-
paste( .formula.withbounds, .interactions.formula.part, sep = " + " );
} # End if there's any "interactees"
## Also add interactions with the estimate colname and everything included so far.
if( .estimate.colname != "none" ) {
.everything.included.so.far.in.formula.withbounds <-
strsplit( .formula.withbounds, split = "\\s*\\+\\s*" )[[1]];
# Add interactions with the estimator.
.new.interactions.with.estimator.in.formula.withbounds <- # exclude estimator, and "left-hand-side ~ 0"
sapply( .everything.included.so.far.in.formula.withbounds[ 3:length( .everything.included.so.far.in.formula.withbounds ) ], function ( .existing.part ) {
paste( .existing.part, .estimate.colname, sep = ":" )
} );
.formula.withbounds <- paste( c( .everything.included.so.far.in.formula.withbounds, .new.interactions.with.estimator.in.formula.withbounds ), collapse = " + " );
} # End if .estimate.colname != "none"
} else {
## Great, do nothing, just use .formula already defined.
}
## NOTE: in the below, the .retained.covars for the !include.all.vars.in.lasso case are defined above, with the computation of the non-withbounds ("glm") results.
if( include.all.vars.in.lasso || ( ( .estimate.colname == "none" ) || ( .estimate.colname %in% .retained.covars ) ) ) {
.mf.withbounds <- stats::model.frame( as.formula( .formula.withbounds ), data = regression.df.without.ptid.i );
.out <- .mf.withbounds[ , "days.since.infection" ];
.lasso.withbounds.mat <- model.matrix(as.formula( .formula.withbounds ), .mf.withbounds);
.covars.to.exclude <- apply( .lasso.withbounds.mat, 2, function ( .col ) {
return( ( sum( !is.na( .col ) ) <= 1 ) || ( var( .col, na.rm = TRUE ) == 0 ) );
} );
.covars.to.exclude <- names( which( .covars.to.exclude ) );
cors.with.the.outcome <-
sapply( setdiff( colnames( .lasso.withbounds.mat ), c( .covars.to.exclude, .estimate.colname ) ), function( .covar.colname ) {
.cor <-
cor( .out, .lasso.withbounds.mat[ , .covar.colname ], use = "pairwise" );
return( .cor );
} );
sorted.cors.with.the.outcome <-
cors.with.the.outcome[ order( abs( cors.with.the.outcome ) ) ];
# Sort the columns of .lasso.withbounds.mat by their correlation with the outcome. This is to ensure that more-relevant columns get selected when removing columns due to pairwise correlation among them. We want to keep the best one.
if( .estimate.colname == "none" ) {
.lasso.withbounds.mat <- .lasso.withbounds.mat[ , rev( names( sorted.cors.with.the.outcome ) ), drop = FALSE ];
} else {
.lasso.withbounds.mat <- .lasso.withbounds.mat[ , c( .estimate.colname, rev( names( sorted.cors.with.the.outcome ) ) ), drop = FALSE ];
}
.retained.covars <- colnames( .lasso.withbounds.mat );
.needed.df <-
( length( .retained.covars ) - ( nrow( .lasso.withbounds.mat ) - MINIMUM.DF ) );
if( .needed.df > 0 ) {
# Then remove some covars until there is at least MINIMUM.DF degrees of freedom.
# They are in order, so just chop them off the end.
.retained.covars <-
.retained.covars[ 1:( length( .retained.covars ) - .needed.df ) ];
}
} # End if include.all.vars.in.lasso || ( ( .estimate.colname == "none" ) || ( .estimate.colname %in% .retained.covars ) )
## Ensure that the formula and the lasso mat are consistent.
.everything.included.so.far.in.formula.withbounds <-
strsplit( .formula.withbounds, split = "\\s*\\+\\s*" )[[1]];
# Remove the excluded columns from the formula.
if( include.intercept ) {
.formula.withbounds <- paste( c( .everything.included.so.far.in.formula.withbounds[ 1 ], .retained.covars ), collapse = " + " );
} else {
## If the intercept isn't included, then also don't include any combo of the pseudo-intercepts denoting the time or region.
# Remove the pseudointercepts.
.everything.that.should.be.in.formula.withbounds <-
setdiff(
.retained.covars,
pseudo.intercepts
);
.formula.withbounds <- paste( c( .everything.included.so.far.in.formula.withbounds[ 1 ], .everything.that.should.be.in.formula.withbounds ), collapse = " + " );
}
.mf.withbounds <- stats::model.frame( as.formula( .formula.withbounds ), data = regression.df.without.ptid.i );
.out <- .mf.withbounds[ , "days.since.infection" ];
.lasso.withbounds.mat <- model.matrix(as.formula( .formula.withbounds ), .mf.withbounds);
if( ( .estimate.colname == "none" ) || ( .estimate.colname %in% .retained.covars ) ) {
if( return.formulas ) {
                          lasso.withbounds.formulas.per.person[ .ptid.i, .col.i ] <-
                              .formula.withbounds;
}
tryCatch(
{
# penalty.factor = 0 to force the .estimate.colname variable.
.cv.glmnet.fit.withbounds <-
cv.glmnet( .lasso.withbounds.mat, .out, intercept = include.intercept,
penalty.factor = as.numeric( colnames( .lasso.withbounds.mat ) != .estimate.colname ), grouped = FALSE, nfold = length( .out ) ); # grouped = FALSE to avoid the warning.
.lasso.withbounds.validation.results.per.person.coefs.cell <-
coef( .cv.glmnet.fit.withbounds, s = "lambda.min" );
if( return.lasso.coefs ) {
.lasso.withbounds.validation.results.per.person.coefs.row[[ .col.i ]] <-
.lasso.withbounds.validation.results.per.person.coefs.cell;
}
.mf.withbounds.ptid <- stats::model.frame( as.formula( .formula.withbounds ), data = regression.df );
.the.actual.rows.for.ptid <-
the.rows.for.ptid[ names( the.rows.for.ptid ) %in% rownames( .mf.withbounds.ptid ) ];
.lasso.withbounds.mat.ptid <- model.matrix(as.formula( .formula.withbounds ), .mf.withbounds.ptid[ rownames( .mf.withbounds.ptid ) %in% names( .the.actual.rows.for.ptid ), , drop = FALSE ] );
for( .row.i.j in 1:length(.the.actual.rows.for.ptid) ) {
.row.i <- .the.actual.rows.for.ptid[ .row.i.j ];
.newx <-
c( 1, .lasso.withbounds.mat.ptid[ .row.i.j, rownames( .lasso.withbounds.validation.results.per.person.coefs.cell )[ -1 ], drop = FALSE ] );
names( .newx ) <- c( "(Intercept)", rownames( as.matrix( .lasso.withbounds.validation.results.per.person.coefs.cell ) )[-1] );
.pred.value.lasso.withbounds <-
sum( as.matrix( .lasso.withbounds.validation.results.per.person.coefs.cell ) * .newx );
#predict( .cv.glmnet.fit.withbounds, newx = .newx, s = "lambda.min" );
lasso.withbounds.validation.results.per.person[ .row.i, .col.i ] <-
.pred.value.lasso.withbounds;
}
},
error = function( e )
{
warning( paste( "ptid", .ptid.i, "col", .col.i, "lasso withbounds failed with error", e, "\nReverting to simple regression vs", .estimate.colname ) );
if( include.intercept ) {
.formula <- as.formula( paste( "days.since.infection ~ 1 + ", .estimate.colname ) );
} else {
.formula <- as.formula( paste( "days.since.infection ~ 0 + ", .estimate.colname ) );
}
for( .row.i in .the.actual.rows.for.ptid ) {
.pred.value.lasso.withbounds <-
predict( lm( .formula, data = regression.df.without.ptid.i ), regression.df[ .row.i, , drop = FALSE ] );
lasso.withbounds.validation.results.per.person[ .row.i, .col.i ] <-
.pred.value.lasso.withbounds;
}
},
finally = {}
);
} else { stop( paste( "unusable estimator:", .estimate.colname ) ) } # End if the lasso.withbounds estimate variable is usable
} # End if use.lasso.validate
} # End foreach .col.i
if( use.step.validate && return.step.coefs ) {
step.validation.results.per.person.coefs[[ .ptid.i ]] <-
.step.validation.results.per.person.coefs.row;
step.withbounds.validation.results.per.person.coefs[[ .ptid.i ]] <-
.step.withbounds.validation.results.per.person.coefs.row;
} # End if use.step.validate && return.step.coefs
if( use.lasso.validate && return.lasso.coefs ) {
lasso.validation.results.per.person.coefs[[ .ptid.i ]] <-
.lasso.validation.results.per.person.coefs.row;
lasso.withbounds.validation.results.per.person.coefs[[ .ptid.i ]] <-
.lasso.withbounds.validation.results.per.person.coefs.row;
} # End if use.lasso.validate && return.lasso.coefs
} # End foreach .ptid.i
if( use.glm.validate ) {
## glm:
colnames( glm.validation.results.per.person ) <-
paste( "glm.validation.results", estimate.cols, sep = "." );
rownames( glm.validation.results.per.person ) <-
rownames( regression.df );
results.per.person <-
cbind( results.per.person,
glm.validation.results.per.person );
## glm.withbounds:
colnames( glm.withbounds.validation.results.per.person ) <-
paste( "glm.withbounds.validation.results", estimate.cols, sep = "." );
rownames( glm.withbounds.validation.results.per.person ) <-
rownames( regression.df );
results.per.person <-
cbind( results.per.person,
glm.withbounds.validation.results.per.person );
}
if( use.step.validate ) {
## step:
colnames( step.validation.results.per.person ) <-
paste( "step.validation.results", estimate.cols, sep = "." );
rownames( step.validation.results.per.person ) <-
rownames( regression.df );
results.per.person <-
cbind( results.per.person,
step.validation.results.per.person );
## step.withbounds:
colnames( step.withbounds.validation.results.per.person ) <-
paste( "step.withbounds.validation.results", estimate.cols, sep = "." );
rownames( step.withbounds.validation.results.per.person ) <-
rownames( regression.df );
results.per.person <-
cbind( results.per.person,
step.withbounds.validation.results.per.person );
}
if( use.lasso.validate ) {
## lasso:
colnames( lasso.validation.results.per.person ) <-
paste( "lasso.validation.results", estimate.cols, sep = "." );
rownames( lasso.validation.results.per.person ) <-
rownames( regression.df );
results.per.person <-
cbind( results.per.person,
lasso.validation.results.per.person );
## lasso.withbounds:
colnames( lasso.withbounds.validation.results.per.person ) <-
paste( "lasso.withbounds.validation.results", estimate.cols, sep = "." );
rownames( lasso.withbounds.validation.results.per.person ) <-
rownames( regression.df );
results.per.person <-
cbind( results.per.person,
lasso.withbounds.validation.results.per.person );
}
} # End if use.glm.validate || use.step.validate || use.lasso.validate
## For fairness in evaluating when some methods
## completely fail to give a result, (so it's NA
## presently), we change all of these estimates
## from NA to 0. When we put bounds on the
## results, below, they will be changed from 0 to
## a boundary endpoint if 0 is outside of the
## bounds.
results.per.person.zeroNAs <- apply( results.per.person, 1:2, function( .value ) {
if( is.na( .value ) ) {
0
} else {
.value
}
} );
## unbounded results:
diffs.by.stat.zeroNAs <- compute.diffs.by.stat( results.per.person.zeroNAs, days.since.infection );
diffs.by.stat <- compute.diffs.by.stat( results.per.person, days.since.infection );
# Note this accesses *.formulas.per.person and days.since.infection from enclosing environment.
get.results.list.for.bounds.type <- function ( results.per.person, results.per.person.zeroNAs, bounds.type ) {
diffs.by.stat <- compute.diffs.by.stat( results.per.person, days.since.infection );
diffs.by.stat.zeroNAs <- compute.diffs.by.stat( results.per.person.zeroNAs, days.since.infection );
.results <-
list( bias = lapply( diffs.by.stat, mean, na.rm = T ), se = lapply( diffs.by.stat, sd, na.rm = T ), rmse = lapply( diffs.by.stat, rmse, na.rm = T ), n = lapply( diffs.by.stat, function( .vec ) { sum( !is.na( .vec ) ) } ), bias.zeroNAs = lapply( diffs.by.stat.zeroNAs, mean, na.rm = T ), se.zeroNAs = lapply( diffs.by.stat.zeroNAs, sd, na.rm = T ), rmse.zeroNAs = lapply( diffs.by.stat.zeroNAs, rmse, na.rm = T ), n.zeroNAs = lapply( diffs.by.stat.zeroNAs, function( .vec ) { sum( !is.na( .vec ) ) } ) );
if( return.results.per.person ) {
.results <- c( .results, list( results.per.person = results.per.person, results.per.person.zeroNAs = results.per.person.zeroNAs ) );
}
if( return.formulas ) {
if( use.glm.validate ) {
.results <- c( .results, list( glm.formulas = list( glm = glm.formulas.per.person, glm.withbounds = glm.withbounds.formulas.per.person ) ) );
}
if( use.step.validate ) {
.results <- c( .results, list( step.formulas = list( step = step.formulas.per.person, step.withbounds = step.withbounds.formulas.per.person ) ) );
}
if( use.lasso.validate ) {
.results <- c( .results, list( lasso.formulas = list( lasso = lasso.formulas.per.person, lasso.withbounds = lasso.withbounds.formulas.per.person ) ) );
}
}
if( use.step.validate && return.step.coefs ) {
#.results <- c( .results, list( step.coefs = list( step = step.validation.results.per.person.coefs, step.withbounds = step.withbounds.validation.results.per.person.coefs, step.nointercept = step.nointercept.validation.results.per.person.coefs, step.nointercept.withbounds = step.nointercept.withbounds.validation.results.per.person.coefs ) ) );
.results <- c( .results, list( step.coefs = list( step = step.validation.results.per.person.coefs, step.withbounds = step.withbounds.validation.results.per.person.coefs ) ) );
}
if( use.lasso.validate && return.lasso.coefs ) {
#.results <- c( .results, list( lasso.coefs = list( lasso = lasso.validation.results.per.person.coefs, lasso.withbounds = lasso.withbounds.validation.results.per.person.coefs, lasso.nointercept = lasso.nointercept.validation.results.per.person.coefs, lasso.nointercept.withbounds = lasso.nointercept.withbounds.validation.results.per.person.coefs ) ) );
.results <- c( .results, list( lasso.coefs = list( lasso = lasso.validation.results.per.person.coefs, lasso.withbounds = lasso.withbounds.validation.results.per.person.coefs ) ) );
}
results.list <- list();
results.list[[ bounds.type ]] <- .results;
## If there are multi-timepoint and multi-region predictors, also include per-region, per-timepoint diffs.by.stat results.
if( !is.null( ppt.suffices ) ) {
unique.ppt.suffices <- unique( ppt.suffices );
.results.by.suffix <- lapply( unique.ppt.suffices, function ( .ppt.suffix ) {
.diffs.by.stat <-
lapply( diffs.by.stat, function( .diffs.for.stat ) { .diffs.for.stat[ ppt.suffices == .ppt.suffix ]; } );
.diffs.by.stat.zeroNAs <-
lapply( diffs.by.stat.zeroNAs, function( .diffs.for.stat ) { .diffs.for.stat[ ppt.suffices == .ppt.suffix ]; } );
list( bias = lapply( .diffs.by.stat, mean, na.rm = T ), se = lapply( .diffs.by.stat, sd, na.rm = T ), rmse = lapply( .diffs.by.stat, rmse, na.rm = T ), n = lapply( .diffs.by.stat, function( .vec ) { sum( !is.na( .vec ) ) } ), bias.zeroNAs = lapply( .diffs.by.stat.zeroNAs, mean, na.rm = T ), se.zeroNAs = lapply( .diffs.by.stat.zeroNAs, sd, na.rm = T ), rmse.zeroNAs = lapply( .diffs.by.stat.zeroNAs, rmse, na.rm = T ), n.zeroNAs = lapply( .diffs.by.stat.zeroNAs, function( .vec ) { sum( !is.na( .vec ) ) } ) );
} );
names( .results.by.suffix ) <- paste( bounds.type, unique.ppt.suffices, sep = "" );
results.list <- c( results.list, .results.by.suffix );
} # End if there are suffices, also include results by suffix.
return( results.list );
} # get.results.list.for.bounds.type ( results.per.person, results.per.person.zeroNAs, bounds.type );
results.list <- get.results.list.for.bounds.type( results.per.person, results.per.person.zeroNAs, "unbounded" );
#if( use.glm.validate ) {
# results.list <- c( results.list, list( glm.fit.statistics = glm.fit.statistics ) );
#}
## bounded results:
if( use.bounds ) {
                        ## Now we can also evaluate a bounded variant of
                        ## each method: if the estimated time is within the
                        ## bounds, that time is used; otherwise, the boundary
                        ## endpoint is used. Note we don't do this with the
                        ## deterministic bounds, and we only do it for the
                        ## time corresponding to the sample ( mtn003 for
                        ## "1m" and hvtn502 for "6m" ). [Each gets an
                        ## additional week for the difference between the 2
                        ## weeks added at the beginning and the 1 week
                        ## subtracted at the end, for the eclipse phase; this
                        ## also accounts for a bit (~10%) more variation in
                        ## the time between visits at 6 months than at 1-2
                        ## months -- a made-up number, as at this time I have
                        ## no idea what the right number is, but it seems
                        ## reasonable.]
.artificial.bounds.to.use <-
#grep( ifelse( the.time == "6m", ifelse( the.time == "1m.6m", "1mmtn003_6msixmonths", "sixmonths" ), "mtn003" ), grep( "deterministic", names( the.artificial.bounds ), invert = TRUE, value = TRUE ), value = TRUE );
                            grep( ifelse( the.time == "1m.6m", "1mmtn003_6mhvtn502", ifelse( the.time == "6m", "hvtn502", "mtn003" ) ), grep( "deterministic", names( the.artificial.bounds ), invert = TRUE, value = TRUE ), value = TRUE );
## Also note that NAs are bounded too, replaced by the lower bound.
results.per.person.bounded <-
lapply( .artificial.bounds.to.use, function ( .artificial.bounds.name ) {
.mat <-
apply( results.per.person, 2, function ( .results.column ) {
sapply( names( .results.column ), function ( .ppt ) {
.value <- .results.column[ .ppt ];
if( !( .ppt %in% rownames( the.artificial.bounds[[ .artificial.bounds.name ]] ) ) ) {
stop( paste( .ppt, "has no bounds!" ) );
}
if( is.na( .value ) ) {
return( the.artificial.bounds[[ .artificial.bounds.name ]][ .ppt, "lower" ] );
} else {
.value.is.below.lb <- ( .value < the.artificial.bounds[[ .artificial.bounds.name ]][ .ppt, "lower" ] );
if( .value.is.below.lb ) {
return( the.artificial.bounds[[ .artificial.bounds.name ]][ .ppt, "lower" ] );
}
.value.is.above.ub <- ( .value > the.artificial.bounds[[ .artificial.bounds.name ]][ .ppt, "upper" ] );
if( .value.is.above.ub ) {
return( the.artificial.bounds[[ .artificial.bounds.name ]][ .ppt, "upper" ] );
}
}
return( .value );
} );
} );
rownames( .mat ) <- rownames( results.per.person );
return( .mat );
} );
names( results.per.person.bounded ) <- .artificial.bounds.to.use;
bounded.results.by.bounds.type <- lapply( .artificial.bounds.to.use, function ( .bounds.type ) {
.results.per.person <- results.per.person.bounded[[ .bounds.type ]];
.results.per.person.zeroNAs <-
apply( .results.per.person, 1:2, function( .value ) {
if( is.na( .value ) ) {
0
} else {
.value
}
} );
.results.list <-
get.results.list.for.bounds.type( .results.per.person, .results.per.person.zeroNAs, .bounds.type );
return( .results.list );
} );
return( c( results.list, do.call( c, bounded.results.by.bounds.type ) ) );
} else {
return( results.list );
}
} # bound.and.evaluate.results.per.ppt (..)
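## Illustrative sketch only (not called by the pipeline): the per-estimate
## clamping performed above in bound.and.evaluate.results.per.ppt can be
## summarized by a tiny standalone helper. The function name and arguments
## here are hypothetical; the real logic is inlined in the apply() over
## results.per.person, and likewise replaces an NA estimate with the lower
## bound.
.example.clamp.estimate.to.bounds <- function ( .value, .lower, .upper ) {
    if( is.na( .value ) ) {
        # NA estimates are replaced by the lower bound.
        return( .lower );
    }
    # Otherwise clamp the estimate into [ .lower, .upper ].
    return( min( max( .value, .lower ), .upper ) );
}
## Usage, with hypothetical bounds: .example.clamp.estimate.to.bounds( 400, 30, 180 ) yields 180.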
get.timings.results.for.region.and.time <- function ( the.region, the.time, partition.size ) {
.days.since.infection.filename <- paste( RESULTS.DIR, results.dirname, "/", the.region, "/", the.time, "/sampleDates.tbl", sep = "" );
stopifnot( file.exists( .days.since.infection.filename ) );
if( the.region == "v3" ) {
days.since.infection <-
getDaysSinceInfection(
.days.since.infection.filename,
caprisa002.gold.standard.infection.dates
);
} else {
stopifnot( ( the.region == "nflg" ) || ( length( grep( "rv217", the.region ) ) > 0 ) );
days.since.infection <-
getDaysSinceInfection(
.days.since.infection.filename,
rv217.gold.standard.infection.dates
);
}
## identify-founders results; we always get and use these.
if( is.na( partition.size ) ) {
results.in <- readIdentifyFounders( paste( RESULTS.DIR, results.dirname, "/", the.region, "/", the.time, "/identify_founders.tab", sep = "" ) );
} else {
results.in <- readIdentifyFounders( paste( RESULTS.DIR, results.dirname, "/", the.region, "/", the.time, "/partitions/identify_founders.tab", sep = "" ), partition.size = partition.size );
}
# Note that we use the pvl at the earliest time ie for "1m6m" we use timepoint 2.
if( the.region == "nflg" || ( length( grep( "rv217", the.region ) ) > 0 ) ) {
pvl.at.the.time <- sapply( rownames( results.in ), function( .ptid ) { as.numeric( as.character( rv217.pvl.in[ ( rv217.pvl.in[ , "ptid" ] == .ptid ) & ( rv217.pvl.in[ , "timepoint" ] == ifelse( the.time == "6m", 3, ifelse( the.time == "1w", 1, 2 ) ) ), "viralload" ] ) ) } );
} else {
stopifnot( the.region == "v3" );
pvl.at.the.time <- sapply( rownames( results.in ), function( .ptid ) { as.numeric( as.character( caprisa002.pvl.in[ ( caprisa002.pvl.in[ , "ptid" ] == .ptid ) & ( caprisa002.pvl.in[ , "timepoint" ] == ifelse( the.time == "6m", 3, ifelse( the.time == "1w", 1, 2 ) ) ), "viralload" ] ) ) } );
}
## Add log plasma viral load (lPVL).
results.with.lPVL <- cbind( results.in, log( pvl.at.the.time ) );
colnames( results.with.lPVL )[ ncol( results.with.lPVL ) ] <- "lPVL";
results.covars.per.person.with.extra.cols <-
summarizeCovariatesOnePerParticipant( results.with.lPVL );
if( use.gold.is.multiple ) {
if( the.region == "nflg" || ( length( grep( "rv217", the.region ) ) > 0 ) ) {
gold.is.multiple <- rv217.gold.is.multiple[ rownames( results.covars.per.person.with.extra.cols ) ];
} else {
stopifnot( the.region == "v3" );
gold.is.multiple <- caprisa002.gold.is.multiple[ rownames( results.covars.per.person.with.extra.cols ) ];
}
## Add gold.is.multiple
results.covars.per.person.with.extra.cols <- cbind( results.covars.per.person.with.extra.cols, gold.is.multiple );
colnames( results.covars.per.person.with.extra.cols )[ ncol( results.covars.per.person.with.extra.cols ) ] <-
"gold.is.multiple";
} # End if use.gold.is.multiple
days.colnames <- c( grep( "time", colnames( results.in ), value = T ), grep( "days", colnames( results.in ), value = T ) );
days.est.colnames <- grep( "est$", days.colnames, value = TRUE );
days.est <- results.in[ , days.est.colnames, drop = FALSE ];
lambda.est.colnames <-
gsub( "PFitter\\.lambda\\.est", "PFitter.lambda", gsub( "(?:days|time|fits)", "lambda", days.est.colnames, perl = TRUE ) );
stopifnot( all( lambda.est.colnames %in% colnames( results.in ) ) );
days.est.colnames.nb <- gsub( "[^\\.]+\\.(DS)?Star[Pp]hy(Test)?", "PFitter", gsub( "(?:days|time|fits).*$", "nbases", days.est.colnames, perl = TRUE ) );
days.est.nb <- results.in[ , days.est.colnames.nb, drop = FALSE ];
days.est.colnames.nseq <- gsub( "[^\\.]+\\.(DS)?Star[Pp]hy(Test)?", "PFitter", gsub( "(?:days|time|fits).*$", "nseq", days.est.colnames, perl = TRUE ) );
days.est.nseq <- results.in[ , days.est.colnames.nseq, drop = FALSE ];
results <- results.with.lPVL[ , days.est.colnames, drop = FALSE ];
if( use.infer && is.na( partition.size ) ) { ## TODO: process infer results on partitions.
infer.results.columns <- get.infer.results.columns( the.region, the.time, rownames( results ), partition.size );
results <- cbind( results, infer.results.columns );
} # End if use.infer
if( use.anchre && ( the.time == "1m6m" ) && is.na( partition.size ) ) { ## TODO: Handle the anchre results for the partitions
if( the.region == "v3" ) {
sample.dates.in <-
getDaysSinceInfection(
.days.since.infection.filename,
caprisa002.gold.standard.infection.dates,
return.sample.dates.in = TRUE
);
} else {
stopifnot( ( the.region == "nflg" ) || ( length( grep( "rv217", the.region ) ) > 0 ) );
sample.dates.in <-
getDaysSinceInfection(
.days.since.infection.filename,
rv217.gold.standard.infection.dates,
return.sample.dates.in = TRUE
);
}
anchre.results.columns <-
get.anchre.results.columns( the.region, the.time, sample.dates.in, partition.size );
results <- cbind( results, anchre.results.columns );
} # End if use.anchre and the.time is 1m6m, add anchre results too.
if( is.na( partition.size ) ) {
        ## Now the issue is that there are multiple input files per ppt, e.g. for the NFLGs there are often "LH" and "RH" files. What to do? The number of sequences varies, so do a weighted average (weighted by nseq*nbases).
.weights <- days.est.nseq*days.est.nb;
results.per.person <-
compute.results.per.person( results, .weights );
if( use.bounds ) {
the.artificial.bounds <- getArtificialBounds( the.region, the.time, RESULTS.DIR, results.dirname );
# Only keep the "sampledwidth" bounds.
the.artificial.bounds <-
the.artificial.bounds[ grep( "sampledwidth", names( the.artificial.bounds ), value = TRUE ) ];
center.of.bounds.table <- sapply( names( the.artificial.bounds ), function ( .artificial.bounds.name ) {
round( apply( the.artificial.bounds[[ .artificial.bounds.name ]], 1, mean ) )
} );
if( is.null( dim( center.of.bounds.table ) ) ) {
if( is.list( center.of.bounds.table ) && all( names( the.artificial.bounds ) == names( center.of.bounds.table ) ) ) {
center.of.bounds.table.fixed <-
matrix( NA, nrow = length( center.of.bounds.table ), ncol = length( center.of.bounds.table[[ 1 ]] ) );
rownames( center.of.bounds.table.fixed ) <- names( center.of.bounds.table );
colnames( center.of.bounds.table.fixed ) <- names( center.of.bounds.table[[ 1 ]] );
for( .bounds.name in rownames( center.of.bounds.table.fixed ) ) {
center.of.bounds.table.fixed[ .bounds.name, names( center.of.bounds.table[[ .bounds.name ]] ) ] <- center.of.bounds.table[[ .bounds.name ]];
}
center.of.bounds.table <- t( center.of.bounds.table.fixed );
} else {
center.of.bounds.table.flat <- center.of.bounds.table;
center.of.bounds.table <-
matrix( nrow = 1, ncol = length( center.of.bounds.table.flat ) );
colnames( center.of.bounds.table ) <- names( center.of.bounds.table.flat );
rownames( center.of.bounds.table ) <- names( the.artificial.bounds );
center.of.bounds.table[ 1, ] <- center.of.bounds.table.flat;
}
}
colnames( center.of.bounds.table ) <-
paste( "COB", gsub( "_", ".", colnames( center.of.bounds.table ) ), "time.est", sep = "." );
results.per.person <-
cbind( results.per.person, center.of.bounds.table[ rownames( results.per.person ), , drop = FALSE ] );
## TODO: REMOVE. TEMPORARY HACK TO REMOVE OLD RESULTS. Remove "onemonth" and "sixmonths" results.
results.per.person <-
results.per.person[ , grep( "uniform\\.(one|six)month", colnames( results.per.person ), invert = TRUE ), drop = FALSE ];
## TODO: REMOVE. TEMPORARY HACK TO REMOVE OLD RESULTS. Remove "shifted" results.
results.per.person <-
results.per.person[ , grep( "shifted", colnames( results.per.person ), invert = TRUE ), drop = FALSE ];
return( list( results.per.person = results.per.person, days.since.infection = days.since.infection, results.covars.per.person.with.extra.cols = results.covars.per.person.with.extra.cols, bounds = the.artificial.bounds, evaluated.results = bound.and.evaluate.results.per.ppt( results.per.person, days.since.infection, results.covars.per.person.with.extra.cols, the.time, the.artificial.bounds ) ) );
} else {
return( list( results.per.person = results.per.person, days.since.infection = days.since.infection, results.covars.per.person.with.extra.cols = results.covars.per.person.with.extra.cols, evaluated.results = bound.and.evaluate.results.per.ppt( results.per.person, days.since.infection, results.covars.per.person.with.extra.cols, the.time ) ) );
}
} else { # else !is.na( partition.size )
## Here the multiple results per participant come from the partitions. We want to evaluate each one, and summarize them afterwards.
partition.id <- results.in[ , "partition.id" ];
diffs.per.ppt.by.id <- apply( results, 2, function ( .column ) {
.rv <-
lapply( unique( rownames( results ) ), function( .ppt ) {
.values <- .column[ rownames( results ) == .ppt ];
.partition.ids <- partition.id[ rownames( results ) == .ppt ];
sapply( unique( .partition.ids ), function( the.partition.id ) {
..values <- .values[ .partition.ids == the.partition.id ];
return( as.numeric( ..values ) - as.numeric( days.since.infection[ .ppt ] ) );
} );
} );
names( .rv ) <- unique( rownames( results ) );
return( .rv );
} );
mean.bias.per.ppt <- lapply( diffs.per.ppt.by.id, function( .lst ) { sapply( .lst, mean, na.rm = T ) } );
sd.bias.per.ppt <- lapply( diffs.per.ppt.by.id, function( .lst ) { sapply( .lst, sd, na.rm = T ) } );
median.mean.bias <- lapply( mean.bias.per.ppt, function( .lst ) { median( unlist( .lst ), na.rm = T ) } );
min.mean.bias <- lapply( mean.bias.per.ppt, function( .lst ) { min( unlist( .lst ), na.rm = T ) } );
max.mean.bias <- lapply( mean.bias.per.ppt, function( .lst ) { max( unlist( .lst ), na.rm = T ) } );
median.sd.bias <- lapply( sd.bias.per.ppt, function( .lst ) { median( unlist( .lst ), na.rm = T ) } );
min.sd.bias <- lapply( sd.bias.per.ppt, function( .lst ) { min( unlist( .lst ), na.rm = T ) } );
max.sd.bias <- lapply( sd.bias.per.ppt, function( .lst ) { max( unlist( .lst ), na.rm = T ) } );
num.partitions.per.ppt <- sapply( diffs.per.ppt.by.id[[1]], length );
the.summary <- list( median.mean.bias = median.mean.bias, min.mean.bias = min.mean.bias, max.mean.bias = max.mean.bias, median.sd.bias = median.sd.bias, min.sd.bias = min.sd.bias, max.sd.bias = max.sd.bias );
results.per.ppt.by.id <- apply( results, 2, function ( .column ) {
.rv <-
lapply( unique( rownames( results ) ), function( .ppt ) {
.values <- .column[ rownames( results ) == .ppt ];
.partition.ids <- partition.id[ rownames( results ) == .ppt ];
sapply( unique( .partition.ids ), function( the.partition.id ) {
return( .values[ .partition.ids == the.partition.id ] );
} );
} );
names( .rv ) <- unique( rownames( results ) );
return( .rv );
} );
.the.partition.ids.by.sample <-
sapply( 1:partition.bootstrap.samples, function( .sample.id ) {
sapply( num.partitions.per.ppt, sample, size = 1 );
} );
do.one.sample <- function ( .sample.id ) {
print( .sample.id );
.thissample.the.partition.ids <-
.the.partition.ids.by.sample[ , .sample.id ];
.thissample.results.per.person <-
sapply( colnames( results ), function( est.name ) {
sapply( names( .thissample.the.partition.ids ), function( .ppt ) {
unname( results.per.ppt.by.id[[ est.name ]][[ .ppt ]][ .thissample.the.partition.ids[ .ppt ] ] )
} ) } );
return( list( results.per.person = .thissample.results.per.person, evaluated.results = bound.and.evaluate.results.per.ppt( .thissample.results.per.person, days.since.infection, results.covars.per.person.with.extra.cols, the.time, the.artificial.bounds ) ) );
} # do.one.sample (..)
set.seed( partition.bootstrap.seed );
bootstrap.results <-
mclapply( 1:partition.bootstrap.samples, do.one.sample, mc.cores = partition.bootstrap.num.cores );
matrix.of.unbounded.results.rmses <- sapply( bootstrap.results, function( .results.for.bootstrap ) { .results.for.bootstrap[[ "evaluated.results" ]][[ "unbounded" ]]$rmse } );
mode( matrix.of.unbounded.results.rmses ) <- "numeric";
## This uses the second position, which is the first of the unbounded ones, and for now the only one. It's "mtn003" unless the.time is "6m" in which case it is "hvtn502".
matrix.of.bounded.results.rmses <- sapply( bootstrap.results, function( .results.for.bootstrap ) { .results.for.bootstrap[[ "evaluated.results" ]][[ 2 ]]$rmse } );
mode( matrix.of.bounded.results.rmses ) <- "numeric";
#hist( apply( matrix.of.unbounded.results.rmses, 1, diff ) )
return( list( summary = the.summary, mean.bias.per.ppt.by.est = mean.bias.per.ppt, sd.bias.per.ppt.by.est = sd.bias.per.ppt, num.partitions.per.ppt = num.partitions.per.ppt, days.since.infection = days.since.infection, results.covars.per.person.with.extra.cols = results.covars.per.person.with.extra.cols, bounds = the.artificial.bounds, bootstrap.results = bootstrap.results, bootstrap.unbounded.rmse = matrix.of.unbounded.results.rmses, bootstrap.bounded.rmse = matrix.of.bounded.results.rmses ) );
} # End if is.na( partition.size ) .. else ..
} # get.timings.results.for.region.and.time (..)
getTimingsResultsByRegionAndTime <- function ( partition.size = NA ) {
getResultsByRegionAndTime( gold.standard.varname = "days.since.infection", get.results.for.region.and.time.fn = get.timings.results.for.region.and.time, evaluate.results.per.person.fn = bound.and.evaluate.results.per.ppt, partition.size = partition.size, regions = regions, times = times )
} # getTimingsResultsByRegionAndTime (..)
if( force.recomputation || !file.exists( results.by.region.and.time.Rda.filename ) ) {
results.by.region.and.time <- getTimingsResultsByRegionAndTime();
save( results.by.region.and.time, file = results.by.region.and.time.Rda.filename );
writeResultsTables( results.by.region.and.time, evaluateTimings.tab.file.suffix, regions = regions, results.are.bounded = TRUE, RESULTS.DIR = RESULTS.DIR, results.dirname = results.dirname );
}
## TODO
## *) select a set of best predictions from isMultiple to use here (instead of gold.standard.varname) -- if there's any benefit to using gold.standard.varname.
## *) check out results of the partitions of evaluateTimings.
## *) (exclude lPVL) try looking at diversity again -- look when lPVL is not there.
## *) try gold.is.multiple again.
## *) check if cv.glmnet does automatic rescaling
## *) try running the code developed for the TB data.
loadTimingsData <- function () {
# loads results.by.region.and.time
load( results.by.region.and.time.Rda.filename, envir = parent.frame() );
}
if( FALSE ) {
## For partition size == 10
## NOTE that this part is presently broken. TODO: FIX IT.
timings.results.by.region.and.time.p10 <- getTimingsResultsByRegionAndTime( partition.size = 10 );
# Make a table out of it. (one per study).
results.table.by.region.and.time.p10 <-
lapply( names( timings.results.by.region.and.time.p10 ), function( the.region ) {
.rv <-
lapply( names( timings.results.by.region.and.time.p10[[ the.region ]] ), function( the.time ) {
sapply( timings.results.by.region.and.time.p10[[ the.region ]][[ the.time ]], function( results.list ) { results.list } ) } );
names( .rv ) <- times;
return( .rv );
} );
names( results.table.by.region.and.time.p10 ) <- "v3";
## Write these out.
.result.ignored <- sapply( "v3", function ( the.region ) {
..result.ignored <-
sapply( times, function ( the.time ) {
out.file <- paste( RESULTS.DIR, results.dirname, "/", the.region, "/", the.time, "/evaluateTimings_p10.tab", sep = "" );
.tbl <- apply( results.table.by.region.and.time.p10[[ the.region ]][[ the.time ]], 1:2, function( .x ) { sprintf( "%0.2f", .x ) } );
write.table( .tbl, quote = FALSE, file = out.file, sep = "\t" );
return( NULL );
} );
return( NULL );
} );
} # End if FALSE
return( results.by.region.and.time.Rda.filename );
} # evaluateTimings (..)
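The per-participant bias bookkeeping above (group the estimate-minus-truth differences by participant, then take per-group means) can be illustrated on toy data. Everything below is a hypothetical sketch with made-up numbers, not values or code from the study:

```r
# Toy sketch of the mean-bias-per-participant computation: per-participant
# mean of (estimated day - true day). All values are illustrative.
est.days  <- c(10, 12, 30, 29)          # hypothetical estimates
true.days <- c(11, 11, 31, 31)          # hypothetical gold standard
ppt.ids   <- c("p1", "p1", "p2", "p2")  # participant id per observation
mean.bias.per.ppt <- tapply(est.days - true.days, ppt.ids, mean)
stopifnot(mean.bias.per.ppt[["p1"]] == 0, mean.bias.per.ppt[["p2"]] == -1.5)
```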
|
a2dd0971e741af0be92ff644fc6ba424bea4b1a3 905274a91e57dcff9484aee8abaabfd3f7df4eb3 /R/bashDirections.R 0b6483b5d9358012af94447b007978ccaaf41afa ["MIT"] | permissive | girouxem/Colletotrichum | 10e97f786c3863653ee5c1e776d645a26b60f358 | ad99bc2ba6d64e7daf91453222b28234c82fa17a | refs/heads/master | 2023-03-09T06:59:14.528989 | 2023-02-22T22:20:29 | 2023-02-22T22:20:29 | 191,385,440 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 372 | r | bashDirections.R |
bashDirections <- paste("
'###############################################################################
##### ***** SUBMIT BASH FILE FROM HEAD NODE, AND WAIT FOR COMPLETION ****######
##### watch output from this command in the console ######
###############################################################################'")
cat(bashDirections)
|
77c62ae6ab99440d62329424fcf23cfd880316f1 8b4f8770b2d7cf171d0d5ebc371aa18a4e18663c /R/surrogate_nn.R c6a9ca425deead8912c60b4310562e35616f4785 [] | no_license | LinSelina/rlR | 048e5f329f41079d460aae4a54611bc84ac3f3f2 | e0571de32e3a69aafa85c19515899c201656dcb4 | refs/heads/master | 2020-03-20T05:09:50.305044 | 2018-06-06T19:21:50 | 2018-06-06T19:21:50 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,217 | r | surrogate_nn.R |
SurroNN = R6::R6Class("SurroNN",
inherit = Surrogate,
public = list(
lr = NULL,
arch.list = NULL,
initialize = function(actCnt, stateDim, arch.list) {
self$actCnt = actCnt
self$stateDim = stateDim
self$lr = arch.list$lr
self$arch.list = arch.list
self$model = self$makeModel()
},
makeModel = function() {
if (length(self$stateDim) > 1L) {
model = makeCnn(input_shape = self$stateDim, act_cnt = self$actCnt)
} else {
model = makeKerasModel(input_shape = self$stateDim, output_shape = self$actCnt, arch.list = self$arch.list)
}
return(model)
},
getWeights = function() {
keras::get_weights(self$model)
},
setWeights = function(weights) {
keras::set_weights(self$model, weights)
},
persist = function(file_path) {
keras::save_model_hdf5(object = self$model, file_path = file_path)
},
train = function(X_train, Y_train, epochs = 1L) {
#nr = nrow(X_train)
#keras::k_set_value(self$model$optimizer$lr, self$lr / 3)
#lr = keras::k_get_value(self$model$optimizer$lr)
#cat(sprintf("learning rate: %s", lr))
keras::fit(object = self$model, x = X_train, y = Y_train, epochs = epochs, verbose = 0)
},
pred = function(X) {
res = keras::predict_on_batch(self$model, X)
res # FIXME: prediction might be NA from Keras
},
afterEpisode = function() {
#nr = nrow(X_train)
# keras::k_set_value(self$model$optimizer$lr, self$lr / nr)
keras::k_set_value(self$model$optimizer$lr, self$lr)
lr = keras::k_get_value(self$model$optimizer$lr)
cat(sprintf("learning rate: %s \n", lr))
}
),
private = list(
deep_clone = function(name, value) {
# With x$clone(deep=TRUE) is called, the deep_clone gets invoked once for
# each field, with the name and value.
if (name == "model") {
# `a` is an environment, so use this quick way of copying
#list2env(as.list.environment(value, all.names = TRUE),
#parent = emptyenv())
weights = self$getWeights()
model = self$makeModel()
keras::set_weights(model, weights)
return(model)
#keras::clone_model(self$model)
} else {
# For all other fields, just return the value
value
}
}
),
active = list()
)
SurroNN4PG = R6::R6Class("SurroNN4PG",
inherit = SurroNN,
public = list(
lr = NULL,
initialize = function(actCnt, stateDim, arch.list) {
super$initialize(actCnt, stateDim, arch.list) # FIXME: arch.list could be None when PG surrogate is called as super prior to PGBaseline is called.
self$lr = arch.list[["lr"]]
},
train = function(X_train, Y_train, epochs = 1L) {
keras::fit(object = self$model, x = X_train, y = Y_train, epochs = epochs, verbose = 0)
},
afterEpisode = function() {
#nr = nrow(X_train)
# keras::k_set_value(self$model$optimizer$lr, self$lr / nr)
keras::k_set_value(self$model$optimizer$lr, self$lr)
lr = keras::k_get_value(self$model$optimizer$lr)
cat(sprintf("learning rate: %s \n", lr))
}
)
)
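A note on the `deep_clone` comment above: R6 objects, like environments, have reference semantics, so a plain assignment shares state and an independent copy must be built explicitly. A minimal base-R sketch of the same idea, using environments as a stand-in for R6 objects:

```r
# Reference semantics: `alias` points at the same environment as `e1`,
# while `deep` is an explicit copy (analogous to what deep_clone does
# for the `model` field above).
e1 <- new.env(); e1$w <- 1
alias <- e1
deep <- list2env(as.list(e1, all.names = TRUE), envir = new.env())
alias$w <- 99
stopifnot(e1$w == 99, deep$w == 1)
```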
|
bc794d59e7af155674d6a0a47c542f944a87b0f7 3c8b8c66dbf43af57cf3cfe660072a95f0f422d6 /man/process_MCMC_sample.Rd 95134e9cb7afafbc41c3141fe067091171d1d93e [] | no_license | rudeboybert/SpatialEpi | 08802d87299ce3e8a1accc73f80d7da2aca7354c | 08e5da8bc822ec620ed3aae56a5ac08650595695 | refs/heads/master | 2023-03-09T16:47:42.636282 | 2023-02-22T21:35:23 | 2023-02-22T21:35:23 | 20,269,160 | 22 | 8 | null | 2021-07-27T12:05:38 | 2014-05-28T19:10:05 | R | UTF-8 | R | false | true | 1,205 | rd | process_MCMC_sample.Rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/process_MCMC_sample.R
\name{process_MCMC_sample}
\alias{process_MCMC_sample}
\title{Process MCMC Sample}
\usage{
process_MCMC_sample(sample, param, RR.area, cluster.list, cutoffs)
}
\arguments{
\item{sample}{list objects of sampled configurations}
\item{param}{mean relative risk associated with each of the \code{n.zones} single zones considering the wide prior}
\item{RR.area}{mean relative risk associated with each of the \code{n} areas considering the narrow prior}
\item{cluster.list}{list of length \code{n.zones} listing, for each single zone, its component areas}
\item{cutoffs}{cutoffs used to declare highs (clusters) and lows (anti-clusters)}
}
\value{
\item{high.area}{Probability of cluster membership for each area}
\item{low.area}{Probability of anti-cluster membership for each area}
\item{RR.est.area}{Smoothed relative risk estimates for each area}
}
\description{
Take the output of sampled configurations from \code{MCMC_simulation} and produce area-by-area summaries
}
\references{
Wakefield J. and Kim A.Y. (2013) A Bayesian model for cluster detection. \emph{Biostatistics}, \strong{14}, 752--765.
}
|
112c015c0554a37abece672d38a855d49f9063dd 12265f972c749881abbfecfd7d318a466ab387af /man/get_cooper.Rd 896d7a91067a34957d2305012f1af5178457a964 [] | no_license | cran/HadamardR | ab10185cfc1d91a5a98ec5c3bf5289e009564ec5 | 0c455660b1f2d8fb0ebc2ae6e62180c4454b3932 | refs/heads/master | 2021-05-26T03:17:35.294417 | 2020-04-07T14:10:06 | 2020-04-07T14:10:06 | 254,030,652 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 923 | rd | get_cooper.Rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_cooper.R
\name{get_cooper}
\alias{get_cooper}
\title{get_cooper}
\usage{
get_cooper(x)
}
\arguments{
\item{x}{integer
Hadamard Matrix Order to Check}
}
\value{
m Tsequence order
n Williamson order
}
\description{
This function provides the Williamson Matrix order and T-Sequence length required to construct Hadamard matrix.
}
\details{
If m is the order of a T-sequence and n is the order of a Williamson sequence, and both exist,
Cooper and Wallis (1972) showed that a Hadamard matrix of order 4mn can be constructed. This function returns
m and n if they exist; otherwise NULL is returned.
}
\examples{
get_cooper(340)
#$m
#[1] 5
#$n
#[1] 17
get_cooper(256)
#NULL
}
\references{
Cooper, J., and Wallis, J. 1972. A construction for Hadamard arrays. Bull. Austral. Math. Soc., 07: 269-277.
}
|
68f9be325fbc6cc321d600a564ed97e7f16c68c7 ffdea92d4315e4363dd4ae673a1a6adf82a761b5 /data/genthat_extracted_code/pid/examples/manfacture.Rd.R 0c01d17e89e1358ce47e7efd009a4679527d11c0 [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 622 | r | manfacture.Rd.R |
library(pid)
### Name: manufacture
### Title: Simulation of a manufacturing facility's profit when varying two
### factors
### Aliases: manufacture
### ** Examples
# Producing at the default settings of price ($0.75)
# and throughput of 325 parts per hour:
manufacture()
# Let's try selling for a higher price, $1.05,
# and a slower throughput of 298 parts per hour:
manufacture(P=1.05, T=298)
# What happens if the product is sold too cheaply
# at high production rates?
manufacture(P=0.52, T=417)
# Can you find the optimum combination of settings to
# maximize the profit, but using only a few experiments?
|
a5a3ddffd084a8f4bb4d44f5ecea71ab9a4b08dc d86a1bb7037e535619e0b7d71284798f4c97b607 /Logistic Regression/Logistic Regression Phase 1/Phase1.R 833f50c735fed08511ae107ab5e706a07a1f989c [] | no_license | mkshephe/Fall_1_Team_Work | cd0792a109f7c74d744f42f7814eba534df27588 | 6fc494aa61e72a1d73b103b1047bfd9d0e8f47e9 | refs/heads/master | 2020-07-14T04:55:18.704886 | 2019-08-29T20:21:08 | 2019-08-29T20:21:08 | 205,243,543 | 0 | 0 | null | 2019-08-29T20:16:22 | 2019-08-29T20:16:21 | null | UTF-8 | R | false | false | 1,491 | r | Phase1.R |
###############################################################################
# Phase 1 for Logistic Regression Homework #
# Due September 6 #
# Team Blue 5 #
###############################################################################
#Katlyn's comment
library(haven) # for read_sas()
#load data
dataTrain <- read_sas('insurance_t.sas7bdat')
#modifies BRANCH and RES to factors for analysis
dataTrain$BRANCH <- as.factor(dataTrain$BRANCH)
dataTrain$RES <- as.factor(dataTrain$RES)
#Looks at the 5 number summary of the data
summary(dataTrain)
#binary? DDA, DIRDEP, NSF, SAV, ATM, CD, IRA, LOC, INV, ILS, MM, MTG, CC, SDB, HMOWN, MOVED, INAREA, (INS <- Target)
#NA's ACCTAGE, PHONE, POS, POSAMT, INV, INVBAL, CC, CCBAL, CCPURC, INCOME, HMOWN, LORES, HMVAL, AGE, CRSCORE
# Bullet point 1:
# Verify alpha
# Run simple logistic/test of association on all variables - pulling out the significant ones
# Bullet point 2:
# Find the binary variables
# Run the odds
# Create a table
# Talk about the largest one (include any additional interesting findings)
# Bullet point 3:
# Test the linearity assumptions for continuous variables (logit stuff?)
# QQ plots
# Y = log odds (logits)
# X = predictor variable (continuous)
# Bullet point 4:
# Visual representation of missing values
# Examine multicollinearity
# Write the report
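A hedged sketch of how the screening in Bullet points 1 and 2 might start (univariate logistic fits plus odds ratios). The toy data frame and values below are illustrative stand-ins, not the team's data or final code; in practice `toy_df` would be `dataTrain`:

```r
# Univariate screening: fit INS ~ predictor for each candidate, keep the
# p-value and odds ratio. Toy data only, for illustration.
set.seed(1)
toy_df <- data.frame(
  INS     = rbinom(200, 1, 0.4),   # stand-in binary target
  DDA     = rbinom(200, 1, 0.5),   # stand-in binary predictor
  ACCTAGE = rnorm(200, 10, 3)      # stand-in continuous predictor
)
screen <- sapply(c("DDA", "ACCTAGE"), function(v) {
  fit <- glm(reformulate(v, "INS"), data = toy_df, family = binomial)
  c(p.value    = summary(fit)$coefficients[v, "Pr(>|z|)"],
    odds.ratio = exp(coef(fit)[[v]]))
})
screen  # columns = predictors; keep those with p.value below the chosen alpha
```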
|
2a294a54495eac2140b3353df624341b71f9c76e ea524efd69aaa01a698112d4eb3ee4bf0db35988 /tests/testthat/test-snapshot-cleanup.R 51180c368c1030d556bb33d8e24e2c38dd995f5b ["MIT"] | permissive | r-lib/testthat | 92f317432e9e8097a5e5c21455f67563c923765f | 29018e067f87b07805e55178f387d2a04ff8311f | refs/heads/main | 2023-08-31T02:50:55.045661 | 2023-08-08T12:17:23 | 2023-08-08T12:17:23 | 295,311 | 452 | 217 | NOASSERTION | 2023-08-29T10:51:30 | 2009-09-02T12:51:44 | R | false | false | 1,948 | r | test-snapshot-cleanup.R |
test_that("snapshot cleanup makes nice message if needed", {
dir <- local_snap_dir(c("a.md", "b.md"))
expect_snapshot({
snapshot_cleanup(dir)
snapshot_cleanup(dir, c("a", "b"))
})
})
test_that("deletes empty dirs", {
dir <- local_snap_dir(character())
dir.create(file.path(dir, "a", "b", "c"), recursive = TRUE)
dir.create(file.path(dir, "b"), recursive = TRUE)
dir.create(file.path(dir, "c"), recursive = TRUE)
snapshot_cleanup(dir)
expect_equal(dir(dir), character())
})
test_that("detects outdated snapshots", {
dir <- local_snap_dir(c("a.md", "b.md", "b.new.md"))
expect_equal(snapshot_outdated(dir, c("a", "b")), character())
expect_equal(snapshot_outdated(dir, "a"), c("b.md", "b.new.md"))
expect_equal(snapshot_outdated(dir, "b"), "a.md")
expect_equal(snapshot_outdated(dir), c("a.md", "b.md", "b.new.md"))
})
test_that("preserves variants", {
dir <- local_snap_dir(c("a.md", "windows/a.md", "windows/b.md"))
expect_equal(snapshot_outdated(dir, "a"), "windows/b.md")
# Doesn't delete new files in variants
dir <- local_snap_dir(c("a.md", "windows/a.md", "windows/a.new.md"))
expect_equal(snapshot_outdated(dir, "a"), character())
})
test_that("detects outdated snapshot files", {
dir <- local_snap_dir(c("a/foo.txt", "b/foo.txt", "b/foo.new.txt"))
expect_equal(
snapshot_outdated(dir, character(), character()),
c("a/foo.txt", "b/foo.new.txt", "b/foo.txt")
)
expect_equal(
snapshot_outdated(dir, character(), "a/foo.txt"),
c("b/foo.new.txt", "b/foo.txt")
)
expect_equal(
snapshot_outdated(dir, character(), "b/foo.txt"),
"a/foo.txt"
)
expect_equal(
snapshot_outdated(dir, character(), c("a/foo.txt", "b/foo.txt")),
character()
)
})
test_that("detects individual snapshots files to remove", {
dir <- local_snap_dir(c("a/a1", "a/a2", "b/b1"))
expect_equal(
snapshot_outdated(dir, c("a", "b"), "a/a1"),
c("a/a2", "b/b1")
)
})
|
880495de44af11c6a2f93a04ff47f92e17d8bded 70f3e3227a7e60e92367d30b6dc2272648b77c04 /Chapters/mesh/r/latency-free.r 738df62f9a16a1698d600606e92367b8efbe4d99 [] | no_license | tenchd/thesis2 | 2f5cc1d8b8440629b5722ffc6b24d5ca444d5778 | ac9b74745cc53b0949e953a2ac4c8fafb20e4ba2 | refs/heads/master | 2020-12-03T22:02:14.375329 | 2020-07-18T17:01:59 | 2020-07-18T17:01:59 | 231,499,618 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,690 | r | latency-free.r |
library(ggplot2)
library(grid)
library(scales)
dat_mesh0n <- read.delim('../data/redis-latency/mesh0n.free.csv', header = TRUE, sep = ',')
dat_mesh0n$alloc <- 'Mesh 0n'
dat_mesh1n <- read.delim('../data/redis-latency/mesh1n.free.csv', header = TRUE, sep = ',')
dat_mesh1n$alloc <- 'Mesh 1n'
dat_mesh2n <- read.delim('../data/redis-latency/mesh2n.free.csv', header = TRUE, sep = ',')
dat_mesh2n$alloc <- 'Mesh 2n'
dat_mesh2y <- read.delim('../data/redis-latency/mesh2y.free.csv', header = TRUE, sep = ',')
dat_mesh2y$alloc <- 'Mesh 2y (new)'
## dat_hoard <- read.delim('../data/redis-latency/malloc.hoard.csv', header = TRUE, sep = ',')
## dat_hoard$alloc <- 'Hoard'
## dat_diehard <- read.delim('../data/git-latency/latency.diehard', header = TRUE, sep = ',')
## dat_diehard$alloc <- 'DieHard'
## dat_glibc <- read.delim('../data/git-latency/latency.glibc', header = TRUE, sep = ',')
## dat_glibc$alloc <- 'glibc'
dat_jemalloc <- read.delim('../data/redis-latency/jemalloc-external.free.csv', header = TRUE, sep = ',')
dat_jemalloc$alloc <- 'jemalloc'
dat_tcmalloc <- read.delim('../data/redis-latency/tcmalloc-external.free.csv', header = TRUE, sep = ',')
dat_tcmalloc$alloc <- 'tcmalloc'
dat_hoard <- read.delim('../data/redis-latency/hoard.free.csv', header = TRUE, sep = ',')
dat_hoard$alloc <- 'hoard'
p <- ggplot() + #dat_mesh, aes(time, color=alloc)) +
geom_line(data=dat_mesh0n, size=.3, aes(x=Value, y=Percentile, color=alloc, group=1)) +
## geom_line(data=dat_mesh1n, size=.3, aes(x=Value, y=Percentile, color=alloc, group=2)) +
## geom_line(data=dat_mesh2n, size=.3, aes(x=Value, y=Percentile, color=alloc, group=3)) +
geom_line(data=dat_mesh2y, size=.3, aes(x=Value, y=Percentile, color=alloc, group=4)) +
## geom_line(data=dat_mesh1n, size=.3, aes(x=Value, y=Percentile, color=alloc, group=6)) +
## geom_line(data=dat_mesh2n, size=.3, aes(x=Value, y=Percentile, color=alloc, group=8)) +
geom_line(data=dat_jemalloc, size=.3, aes(x=Value, y=Percentile, color=alloc, group=5)) +
## geom_line(data=dat_tcmalloc, size=.3, aes(x=Value, y=Percentile, color=alloc, group=6)) +
## geom_line(data=dat_hoard, size=.3, aes(x=Value, y=Percentile, color=alloc, group=7)) +
## stat_ecdf(geom = "step", data=dat_mesh, size=.3, aes(x=Value, color=alloc, group=1)) +
## stat_ecdf(geom = "step", data=dat_glibc, size=.3, aes(x=Value, color=alloc, group=2)) +
## stat_ecdf(geom = "step", data=dat_jemalloc, size=.3, aes(x=Value, color=alloc, group=4)) +
## stat_ecdf(geom = "step", data=dat_hoard, size=.3, aes(x=Value, color=alloc, group=5)) +
## stat_ecdf(geom = "step", data=dat_diehard, size=.3, aes(x=Value, color=alloc, group=8)) +
## stat_ecdf(geom = "step", data=dat_tcmalloc, size=.3, aes(x=Value, color=alloc, group=7)) +
coord_cartesian(xlim = c(0, 300)) +
scale_x_continuous('Time (nanoseconds)') +
scale_y_continuous('Cumulative fraction of requests') +
theme_bw(10, 'Times') +
guides(colour = guide_legend(nrow = 2)) +
theme(
plot.title = element_text(size=10, face='bold'),
strip.background = element_rect(color='dark gray', linetype=0.5),
plot.margin = unit(c(.1, .1, 0, 0), 'in'),
panel.border = element_rect(colour='gray'),
panel.margin = unit(c(0, 0, -0.5, 0), 'in'),
legend.position = 'bottom',
legend.key = element_rect(color=NA),
legend.key.size = unit(0.15, 'in'),
legend.margin = unit(0, 'in'),
legend.title=element_blank(),
axis.title.y = element_text(size=9),
axis.text.x = element_text(angle = 0, hjust = 1)
)
ggsave(p, filename='latency-free.pdf', width=3.4, height=2.5)
|
f4e8dbc827043ad45d298941582afe795c6174fc 32694467865205579b98f15bf738d88c19fb954d /tests/testthat/test_submission_validation_entity_inputs.R 0db2149783b8b4ae7f3c08754c78fee22011319a [] | no_license | vjcitn/terraClientR | ee0dc11c00b8d707d023d93776637b5622c189b6 | 85ab30d88da3b4c3da9e36a9b2f9dbc7ab5f237a | refs/heads/main | 2023-06-05T04:42:36.218619 | 2021-06-29T15:43:36 | 2021-06-29T15:43:36 | 381,414,881 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 687 | r | test_submission_validation_entity_inputs.R |
# Automatically generated by openapi-generator (https://openapi-generator.tech)
# Please update as you see appropriate
context("Test SubmissionValidationEntityInputs")
model.instance <- SubmissionValidationEntityInputs$new()
test_that("entityName", {
# tests for the property `entityName` (character)
# name of entity
# uncomment below to test the property
#expect_equal(model.instance$`entityName`, "EXPECTED_RESULT")
})
test_that("inputResolutions", {
# tests for the property `inputResolutions` (array[SubmissionValidationValue])
# input resolution
# uncomment below to test the property
#expect_equal(model.instance$`inputResolutions`, "EXPECTED_RESULT")
})
|
0c54982642f54b42768cfdfad4db669dd0576975 c3e25193aa5734c9abb676e95b14bd0ede10e007 /plot1.R e5419bce5fef7235cede15eb8a06ee924ed1b918 [] | no_license | mteremko84/EXPLORATORY-DATA-ANALYSIS-Assignment- | 1f7b74e4e1f4a6a5ad9b3583984cf46817bdcd16 | 54c124aa366ed8c1250ea7fcfebff850ae0577ee | refs/heads/master | 2021-01-10T06:49:36.819766 | 2016-02-28T07:40:20 | 2016-02-28T07:40:20 | 52,709,873 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 451 | r | plot1.R |
#Plot1.R code for assignment 2
setwd("C:/Users/JOHN/Desktop/PROJECT 2 exploratory data science")
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
aggregatedTotalYearly <- aggregate(Emissions ~ year, NEI, sum)
png('plot1.png')
barplot(aggregatedTotalYearly$Emissions, xlab="years", ylab = expression('total PM'[2.5]*' emission'), main = expression('Total PM'[2.5]*' emissions at various years'))
dev.off()
|
a630887a2edd3aa7e101d93bfeedb3599777ee3d 3686818552f53deb37c782880b6604848da89a4c /R-Tidyverse/R-tidyverse/codes/cleaning_data.R 73039f3dd5dca86052add5cb4bab4e67bf6880ab [] | no_license | mariaulf4h/learn-R | f73425689217f40c95899447de6fb2f4afb8626c | bb2250f67f89eee3fcbd0690241dc545533a91ce | refs/heads/master | 2021-06-16T17:39:06.959321 | 2021-01-26T23:57:34 | 2021-01-26T23:57:34 | 137,305,964 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 655 | r | cleaning_data.R |
library(readr)
library(tidyverse)
library(dplyr)
library(haven)
#import data
bike <- readRDS("data/input/r_cleaning/bike_share_rides_ch1_1.rds")
# check data type
glimpse(bike)
# is.numeric(variable_name) -- return "TRUE/FALSE"
# class() --- return "information type of data"
# <dbl> == numeric
# <chr> == character
#summary() statistics
summary(bike$user_birth_year)
#converting data type
class(bike$user_birth_year) # from numeric
bike$user_birth_year <- as.factor(bike$user_birth_year) # to factor (assign the result so the conversion persists)
#removing "character in string variable"
library(stringr)
# function to remove
# str_remove(df$variable, "what we want to remove?")
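A tiny runnable sketch of the cleanup pattern the comments describe; base R's gsub() is used here as a stand-in for stringr::str_remove(), and the values are made up:

```r
# Drop unit text from a string column, then convert the type.
duration <- c("12 minutes", "7 minutes")          # toy example values
duration_clean <- gsub(" minutes", "", duration)  # base-R analogue of str_remove()
duration_num <- as.numeric(duration_clean)
stopifnot(identical(duration_num, c(12, 7)))
```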
|
96a32d4a53e22e9004e7d52ad19e19c9931741ea d6683c4b7c090c60886c1cbe134810f53015bd79 /analysis_script_github.R 06bbdc87292221cf30aad27f43f5bd146511f25d [] | no_license | iamnielsjanssen/ArticulatoryDurations | dbbd49904167667f8f18f5dea592359f0feea5ce | 35a136cdc6dbb3b6b63fed03867c0a530f938a52 | refs/heads/master | 2021-01-01T04:08:49.700900 | 2018-01-08T10:05:54 | 2018-01-08T10:05:54 | 97,130,428 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 8,283 | r | analysis_script_github.R |
# Data analysis of Articulation Data
#
# Niels Janssen and Anna Siyanova-Chanturia 2017
#
# If you use this code for analysis that is published in an indexed journal or repository, please cite the following article:
# Siyanova-Chanturia & Janssen "Production of familiar phrases: Frequency effects in native speakers and second language learners"
#
# For questions email: njanssen@ull.es or anna.siyanova@vuw.ac.nz
#
# Update: Dec 2017, added analysis Length of stay, age Exposure, and speech rate
#-------------------------------------------------------------------
# LOAD DATA AND INITIAL CLEAN UP
#-------------------------------------------------------------------
# read data matrix
rm(list=ls())
setwd('/set/to/path') # CHANGE TO FOLDER WITH DATA MATRIX
dt = read.table("data_matrix_DEC2017.txt", header=TRUE, sep="\t")
# quick look at data, note this is the cleaned data set
head(dt)
str(dt)
#-------------------------------------------------------------------
# PRE-PROCESSING OF DATA
#-------------------------------------------------------------------
# subject is a factor
dt$subject = as.factor(dt$subject)
# log transform relevant vars
dt$logphrase_freq = log(dt$phrasal_freq + 1) # +1 to avoid log(0)
dt$logw1_freq = log(dt$w1_bnc_freq + 1)
dt$logw2_freq = log(dt$w2_bnc_freq + 1)
dt$logw1_and_freq = log(dt$w1_and_bnc_freq + 1)
dt$logand_w2_freq = log(dt$and_w2_bnc_freq + 1)
# also log transform the dependent variable Articulation Time
dt$logAT = log(dt$AT)
# center variables
dt$phrase_lenC = scale(dt$phrasal_length, scale=FALSE)
dt$trialC = scale(dt$trial, scale=FALSE)
dt$nativeC = scale(as.integer(dt$native), scale=FALSE)
dt$stim_typeC = scale(as.integer(dt$stim_type), scale=FALSE)
dt$logphrase_freqC = scale(dt$logphrase_freq, scale=FALSE)
dt$logw1_freqC = scale(dt$logw1_freq, scale=FALSE)
dt$logw2_freqC = scale(dt$logw2_freq, scale=FALSE)
dt$logw1_and_freqC = scale(dt$logw1_and_freq, scale = FALSE)
dt$logand_w2_freqC = scale(dt$logand_w2_freq, scale = FALSE)
# compute by-participant speech rate (UPDATE DEC-2017)
# speech rate defined as the mean (item phrase duration / item phrase length in phonemes) for each participant
dt$itemrate = dt$AT / dt$phrasal_length
dt$speechrate = ave(dt$itemrate, dt$subject)
dt$speechrateC = scale(log(dt$speechrate), scale=FALSE)
#-------------------------------------------------------------------
# DATA MODELING
# Following modeling strategy outlined in Bates, Kliegl, Vasishth & Baayen (2015)
#-------------------------------------------------------------------
# load lme4
require(lme4)
# 1. *maximal model fails to converge*
# note only native var for item random effect structure (other vars are between items)
# (this takes a while to finish...)
model1 = lmer(logAT ~ trialC + stim_typeC + phrase_lenC + nativeC*(logphrase_freqC + logw1_freqC + logw2_freqC) + (1+ trialC + stim_typeC + phrase_lenC + logphrase_freqC + logw1_freqC + logw2_freqC|subject) + (1+ nativeC|stim), data=dt, REML=FALSE)
summary(model1)
# attempt to find model that converges by deleting random-effect terms that produce near zero variance in non-converged model
# delete by-subject trialC
model1 = lmer(logAT ~ trialC + stim_typeC + phrase_lenC + nativeC*(logphrase_freqC + logw1_freqC + logw2_freqC) + (1+ stim_typeC + phrase_lenC + logphrase_freqC + logw1_freqC + logw2_freqC|subject) + (1+ nativeC|stim), data=dt, REML=FALSE)
summary(model1)
# Per Bates et al. we next run the rePCA script to see if the RE structure is too complex
# devtools::install_github("dmbates/RePsychLing")
require(RePsychLing)
E = rePCA(model1)
summary(E)
# PCA suggests that there may be three random effect components in subject structure that do not contribute variance
# next, model without correlation parameters (|| for subject and item RE)
model1 = lmer(logAT ~ trialC + stim_typeC + phrase_lenC + nativeC*(logphrase_freqC + logw1_freqC + logw2_freqC) + (1+ stim_typeC + phrase_lenC + logphrase_freqC + logw1_freqC + logw2_freqC||subject) + (1+ nativeC||stim), data=dt, REML=FALSE)
summary(model1)
# suggests zero variance for RE by-item nativeC, and by-subject logw2_freqC, logw1_freqC, logphrase_freqC.
model2 = lmer(logAT ~ trialC + stim_typeC + phrase_lenC + nativeC*(logphrase_freqC + logw1_freqC + logw2_freqC) + (1+ stim_typeC + phrase_lenC||subject) + (1|stim), data=dt, REML=FALSE)
anova(model1, model2) # not significant, prefer simpler model2
summary(model2)
# remove by-subject slope for phrase_lenC
model3 = lmer(logAT ~ trialC + stim_typeC + phrase_lenC + nativeC*(logphrase_freqC + logw1_freqC + logw2_freqC) + (1+ stim_typeC ||subject) + (1|stim), data=dt, REML=FALSE)
anova(model2, model3) # not significant, prefer simpler model3
summary(model3)
# remove by-subject slope for stim_typeC
model4 = lmer(logAT ~ trialC + stim_typeC + phrase_lenC + nativeC*(logphrase_freqC + logw1_freqC + logw2_freqC) + (1|subject) + (1|stim), data=dt, REML=FALSE)
anova(model3, model4) # not significant, prefer simpler model4
summary(model4)
# We end with model 4 as the best model.
anova(model1,model4) # not significant, keep model4 (no accumulation)
# note cannot put correlation back in because subject has random slope but not random intercept and so correlation impossible
# SAME FOR FIXED EFFECTS
# remove main effect logw2_freq
model5 = lmer(logAT ~ trialC + stim_typeC + phrase_lenC + nativeC:logw2_freqC + nativeC*(logphrase_freqC + logw1_freqC) + (1|subject) + (1|stim), data=dt, REML=FALSE)
anova(model4, model5) # not significant, prefer simpler model5
summary(model5)
# remove stim_typeC
model6 = lmer(logAT ~ trialC + phrase_lenC + nativeC:logw2_freqC + nativeC*(logphrase_freqC + logw1_freqC) + (1|subject) + (1|stim), data=dt, REML=FALSE)
anova(model5, model6) # not significant, prefer simpler model6
summary(model6)
# remove nativeC:logw1_freqC
model7 = lmer(logAT ~ trialC + phrase_lenC + nativeC:logw2_freqC + nativeC*logphrase_freqC + logw1_freqC + (1|subject) + (1|stim), data=dt, REML=FALSE)
anova(model6, model7) # not significant, prefer simpler model7
summary(model7)
# remove nativeC:logw2_freqC
model8 = lmer(logAT ~ trialC + phrase_lenC + nativeC*logphrase_freqC + logw1_freqC + (1|subject) + (1|stim), data=dt, REML=FALSE)
anova(model7, model8) # not significant, prefer simpler model8
summary(model8)
# remove logw1_freqC
model9 = lmer(logAT ~ trialC + phrase_lenC + nativeC*logphrase_freqC + (1|subject) + (1|stim), data=dt, REML=FALSE)
anova(model8, model9) # not significant, prefer simpler model9
summary(model9)
# remove nativeC:logphrase_freqC
model10 = lmer(logAT ~ trialC + phrase_lenC + nativeC+logphrase_freqC + (1|subject) + (1|stim), data=dt, REML=FALSE)
anova(model9, model10) # significant!, prefer model9
summary(model10)
# MODEL 9 IS FINAL MODEL
anova(model1,model9) # no accumulation
# check condition index
source("mer-utils.R") # from Jaeger website
kappa.mer(model9) # = 1.12
# obtain p-values
require(afex)
mixed(logAT ~ trialC + phrase_lenC + nativeC*logphrase_freqC + (1|subject) + (1|stim), data=dt, REML=FALSE, method="S")
# Further examination of the interaction between nativeC:log_phrase_freqC
dtNS = dt[dt$native=="Native speaker",]
dtNNS = dt[dt$native=="Non-native speaker",]
dtNNS$length_stayC = scale(dtNNS$length_stay, scale=FALSE)
dtNNS$age_exposeC = scale(dtNNS$age_expose, scale=FALSE)
modelNS = lmer(logAT ~ trialC + phrase_lenC + logphrase_freqC + (1|subject) + (1|stim), data=dtNS, REML=FALSE)
summary(modelNS)
modelNNS = lmer(logAT ~ trialC + phrase_lenC + logphrase_freqC + (1|subject) + (1|stim), data=dtNNS, REML=FALSE)
summary(modelNNS)
mixed(logAT ~ trialC + phrase_lenC + logphrase_freqC + (1|subject) + (1|stim), data=dtNS, REML=FALSE,method="S")
mixed(logAT ~ trialC + phrase_lenC + logphrase_freqC + (1|subject) + (1|stim), data=dtNNS, REML=FALSE,method="S")
# Looking at Length of Stay in ESC and Age of Exposure
# Length of stay in NNS and interaction with phrase_freq
modelNNS = lmer(logAT ~ trialC + phrase_lenC + length_stayC*logphrase_freqC + (1|subject) + (1|stim), data=dtNNS, REML=FALSE)
summary(modelNNS)
# Age of exposure in NNS and interaction with phrase_freq
modelNNS = lmer(logAT ~ trialC + phrase_lenC + age_exposeC*logphrase_freqC + (1|subject) + (1|stim), data=dtNNS, REML=FALSE)
summary(modelNNS)
4f7fc49fb82cfd973ecc154ade352072cbcf26a8 | 5e93884d2d5ef99f50c88fe7ef3750f2bea5795f | /frame.R | 4fbc7fa80f2723e10f32b9d4a9e19f1553c58692 | [] | no_license | fall2018-wallace/hw7_maps_manan | d10ada8dad5ebcb41af3a822b7235c03377b7322 | 149b85f1a911ae5c15343e99793ce03cb4d8f647 | refs/heads/master | 2020-04-01T19:08:44.294050 | 2018-10-18T14:07:29 | 2018-10-18T14:07:29 | 153,537,651 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 87 | r | frame.R |
census_arrests1 <- data.frame(census_arrests,state.area,state.center)
census_arrests1
|
3f848a5a520ca04c73298c18d3eda5d83e5e1302 | 0d7fd4f46398fa46f79963383758f26d057280ad | /plot4.R | 7da95723ef92842d2b6c8c22c26d21f4597b762a | [] | no_license | andreagolfari/ExData_Plotting1 | 2d08d71158fc407888ba6a56f8401c07a7f4b6be | 0b7284cb1501db91d00dfa19236a122b19ffeaf3 | refs/heads/master | 2020-06-08T17:39:34.208826 | 2019-06-23T16:49:36 | 2019-06-23T16:49:36 | 193,274,753 | 0 | 0 | null | 2019-06-22T20:03:00 | 2019-06-22T20:02:59 | null | UTF-8 | R | false | false | 1,591 | r | plot4.R | # P L O T 4
# Reading data from a Data folder within the directory
rawdata <- read.csv("Data/household_power_consumption.txt",
header = TRUE, sep=";", na.strings = "?", stringsAsFactors=FALSE)
# Subsetting on the required dates
data <- subset(rawdata, Date %in% c("1/2/2007", "2/2/2007"))
data$Date <- as.Date(data$Date, format = "%d/%m/%Y") # formatting date
data$DateTime <- as.POSIXct(paste(data$Date, data$Time)) # creating a variable for continuous date-time
# plotting to png file
png(file = "plot4.png", width = 480, height = 480)
# Setting parameters for multiple graphs
par(mfrow=c(2,2), mar=c(5,5,2,1), oma=c(0,0,1,0))
with(data, {
plot(Global_active_power ~ DateTime, type = "l", # topleft graph
ylab = "Global Active Power", xlab = "")
plot(Voltage ~ DateTime, type = "l", # topright graph
ylab = "Voltage", xlab = "datetime")
plot(Sub_metering_1 ~ DateTime, # bottomleft graph
type="l", ylab="Energy sub metering", xlab="")
lines(Sub_metering_2 ~ DateTime, col = "red")
lines(Sub_metering_3 ~ DateTime, col = "blue")
legend("topright", col = c("black", "red", "blue"), bty = "n", lty = 1, lwd = 2,
legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"))
plot(Global_reactive_power ~ DateTime, type = "l", # bottomright graph
ylab = "Global_reactive_power", xlab = "datetime")
})
dev.off() # closing the png device
e37d4c5bc91369882931b7ec4f8b9a9bc12027f8 | f36e545b20969996931a7c07c0a9e9d1e09dd72b | /man/makeSVD.Rd | 3307414f51b385b8bceedec90724995501689430 | [] | no_license | hcorrada/BatchQC | a98f5b15892b0793f240b7f28d3e62b95fc273d4 | 462e891ff433f2b154f37d06374af3ddd46f192b | refs/heads/master | 2021-01-18T07:06:09.365539 | 2015-05-19T14:13:11 | 2015-05-19T14:13:11 | 34,738,129 | 1 | 0 | null | 2015-04-28T15:14:37 | 2015-04-28T15:14:37 | null | UTF-8 | R | false | false | 372 | rd | makeSVD.Rd | % Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/pca.R
\name{makeSVD}
\alias{makeSVD}
\title{Compute singular value decomposition}
\usage{
makeSVD(x)
}
\arguments{
\item{x}{matrix of genes by sample (ie. the usual data matrix)}
}
\value{
returns a list of svd components v and d
}
\description{
Compute singular value decomposition
}
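% A hedged usage sketch added for illustration; it assumes only what the
% argument description above states (a numeric genes-by-samples matrix)
% and that the returned list carries components v and d as documented.
\examples{
x <- matrix(rnorm(200), nrow = 20, ncol = 10)  # 20 genes x 10 samples
res <- makeSVD(x)                              # list with components v and d
str(res$v)
str(res$d)
}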
79908320009945a9ed826ebf498058644dcfd8ec | 63dab8a00017248bc923e1321df33ab641cdba33 | /ui.R | 1a8653aa1878ad5cb1083b6c73007d6d8507034e | [] | no_license | LSB1990/Rshinyproject | 86ddd2b7a2264e8c3a86a7d40073e8aa0e5f9ec2 | 3b43932e6680114b5ad60e021f6278245ddc4dc7 | refs/heads/master | 2020-07-03T03:41:25.626119 | 2019-08-11T14:10:27 | 2019-08-11T14:10:27 | 201,772,598 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 287 | r | ui.R |
library(shiny)
shinyUI(fluidPage(
titlePanel("Coursera: Developping Data Product Project"),
sidebarLayout(
sidebarPanel(
numericInput("expinput", "experiment nr:", 1, min = 1, max = 5)
),
mainPanel(
plotOutput("distPlot")
)
)
)
)
|
eeb190888d890cb58834b6e05589e83ccbe1d7fb | 4976dace5f4a657fab1a7ab844dfeee196efd603 | /R/stmv_predict_globalmodel.R | 76493f014a2c665ec62e85a2df1b68139db30d72 | ["MIT"] | permissive | jae0/stmv | f47118126975c0855c3be9d3e9afae05c7496f39 | f0ec282030c924837317c08ff597141d011f4157 | refs/heads/master | 2023-08-07T18:43:38.607704 | 2023-08-01T13:13:16 | 2023-08-01T13:13:16 | 110,134,052 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,548 | r | stmv_predict_globalmodel.R | stmv_predict_globalmodel = function( ip=NULL, p, Yq_link=NULL, global_model=NULL ) {
if (0) {
# for debugging runs ..
p = parallel_run( p=p, runindex=list( pnt = 1:p$nt ) )
ip = 1:p$nruns # == 1:p$nt
}
if (exists( "libs", p)) suppressMessages( RLibrary( p$libs ) )
ooo = NULL
if (is.null(ip)) {
if( exists( "nruns", p ) ) {
ip = 1:p$nruns
ooo = p$runs[ip, "pnt"]
}
}
if (is.null(ooo)) ooo = 1:p$nt
P0 = stmv_attach( p$storage_backend, p$ptr$P0 ) # remember this is on link scale
P0sd = stmv_attach( p$storage_backend, p$ptr$P0sd ) # and this too
if (is.null(global_model)) global_model = stmv_global_model( p=p, DS="global_model")
if (is.null(global_model)) stop("Global model not found.")
# main loop over each output location in S (stats output locations)
for ( it in ooo ) {
# downscale and warp from p(0) -> p1
pa = NULL # construct prediction surface
if ( length(p$stmv_variables$COV) > 0 ) {
for (i in p$stmv_variables$COV ) {
pu = stmv_attach( p$storage_backend, p$ptr$Pcov[[i]] )
nc = ncol(pu)
        if ( nc == 1 ) {
pa = cbind( pa, pu[] ) # ie. a static variable (space)
} else if ( nc == p$nt & nc == p$ny) {
pa = cbind( pa, pu[,it] ) # ie. same time dimension as predictive data (space.annual.seasonal)
} else if ( nc == p$ny & p$nt > p$ny) {
iy = trunc( (it-1) / p$nw ) + 1L
pa = cbind( pa, pu[,iy] ) # ie., annual data (space.annual)
} else if ( nc == p$nt & p$nt > p$ny) {
pa = cbind( pa, pu[,it] ) # ie. same time dimension as predictive data (space.annual.seasonal)
} else {
print(i)
print(nc)
stop( "Erroneous data dimension ... the dataset for the above variable looks to be incomplete?")
}
}
pa = as.data.frame( pa )
names(pa) = p$stmv_variables$COV
} else {
pa = data.frame(intercept=rep(1, length(P0))) # just to get the size right when constant intercept model
}
if ( any( p$stmv_variables$LOCS %in% p$stmv_variables$global_cov ) ) {
Ploc = stmv_attach( p$storage_backend, p$ptr$Ploc )
pa = cbind(pa, Ploc[])
names(pa) = c( p$stmv_variables$COV, p$stmv_variables$LOCS )
}
if ( "yr" %in% p$stmv_variables$global_cov ) {
npa = names(pa)
pa = cbind(pa, p$yrs[it] )
names(pa) = c( npa, "yr" )
}
if ( "dyear" %in% p$stmv_variables$global_cov ) {
npa = names(pa)
pa = cbind(pa, p$prediction_dyear )
names(pa) = c( npa, "dyear" )
}
if (0) {
p$stmv_global_modelengine = "userdefined"
p$stmv_global_model$run = "
gam( formula=p$stmv_global_modelformula, data=B,
          optimizer= p$stmv_gam_optimizer, family=p$stmv_global_family, weights=wt )
"
# must be 'global_model', newdata=pa' in following
p$stmv_global_model$predict = "
predict( global_model, newdata=pa, type='link', se.fit=TRUE )
"
}
if ( p$stmv_global_modelengine == "userdefined" ) {
if (exists( "prediction_method", p$stmv_global_model )) {
Pbaseline = NULL
Pbaseline = try( eval(parse(text=p$stmv_global_model$predict )) )
if (inherits(Pbaseline, "try-error")) Pbaseline = NULL
} else {
        # try in case a default predict method exists
        warning( "p$stmv_global_model$predict() method was not found; trying a generic prediction ...")
Pbaseline = try( predict( global_model, newdata=pa, type='link', se.fit=TRUE ) ) # must be on link scale
if (inherits(Pbaseline, "try-error")) stop ("Prediction failed ... ")
}
pa = NULL
if ( ! is.null(Pbaseline) ) {
# extreme data can make convergence slow and erratic .. this will be used later to limit 99.9%CI
# do not permit extrapolation
lb = which( Pbaseline$fit < Yq_link[1])
ub = which( Pbaseline$fit > Yq_link[2])
if (length(lb) > 0) Pbaseline$fit[lb] = Yq_link[1]
if (length(ub) > 0) Pbaseline$fit[ub] = Yq_link[2]
P0[,it] = Pbaseline$fit
P0sd[,it] = Pbaseline$se.fit
Pbaseline = NULL
}
} else if (p$stmv_global_modelengine %in% c("glm", "bigglm", "gam") ) {
Pbaseline = try( predict( global_model, newdata=pa, type="link", se.fit=TRUE ) ) # must be on link scale
pa = NULL
if (!inherits(Pbaseline, "try-error")) {
# do not permit extrapolation
lb = which( Pbaseline$fit < Yq_link[1])
ub = which( Pbaseline$fit > Yq_link[2])
if (length(lb) > 0) Pbaseline$fit[lb] = Yq_link[1]
if (length(ub) > 0) Pbaseline$fit[ub] = Yq_link[2]
P0[,it] = Pbaseline$fit
P0sd[,it] = Pbaseline$se.fit
}
Pbaseline = NULL
} else if (p$stmv_global_modelengine =="bayesx") {
stop( "not yet tested" ) # TODO
# Pbaseline = try( predict( global_model, newdata=pa, type="link", se.fit=TRUE ) )
# pa = NULL
# if (!inherits(Pbaseline, "try-error")) {
# P0[,it] = Pbaseline$fit
# P0sd[,it] = Pbaseline$se.fit
# }
# Pbaseline = NULL
} else if (p$stmv_global_modelengine =="none") {
# nothing to do
} else {
stop ("This global model method requires a bit more work .. ")
}
if (exists("all.covars.static", p)) {
if (p$all.covars.static) {
# if this is true then this is a single cpu run and all predictions for each time slice is the same
# could probably catch this and keep storage small but that would make the update math a little more complex
# this keeps it simple with a quick copy
if (p$nt > 1 ) {
for (j in ooo[-1]){
P0[,j] = P0[,ooo[1]]
P0sd[,j] = P0sd[,ooo[1]]
}
}
break() # escape for loop
}
}
} # end each timeslice
return(NULL)
if (0) {
if ("all nt time slices in stored predictions P") {
Ploc = stmv_attach( p$storage_backend, p$ptr$Ploc )
# pa comes from stmv_interpolate ... not carried here
for (i in 1:p$nt) {
i = 1
print( lattice::levelplot( P0[,i] ~ Ploc[,1] + Ploc[ , 2], col.regions=heat.colors(100), scale=list(draw=FALSE) , aspect="iso" ) )
}
}
if ("no time slices in P") {
Ploc = stmv_attach( p$storage_backend, p$ptr$Ploc )
print( lattice::levelplot( P0[] ~ Ploc[,1] + Ploc[ , 2], col.regions=heat.colors(100), scale=list(draw=FALSE) , aspect="iso" ) )
}
}
}
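# Usage sketch (hypothetical: the object names lower_bound, upper_bound and
# fit are placeholders, and the field assignments follow the if (0) example
# embedded in the function body above):
# p$stmv_global_modelengine = "userdefined"
# p$stmv_global_model$predict = "predict( global_model, newdata=pa, type='link', se.fit=TRUE )"
# stmv_predict_globalmodel( p=p, Yq_link=c(lower_bound, upper_bound), global_model=fit )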
3d16c104e9ce4907a7e84e97c171a575025c3e93 | 3da183743d2023c3b4ce8478f2bbdc1999fdcccb | /R/collapse.singles.R | a680f9852fe419fada7cdfe0b213e8d586f1b7b8 | [] | no_license | tanja819/TreeSim | d0ca6ffc5b84bc278b6aa7d627c0e4bb41e46506 | 7138f7b466a75d8fbff96e884a07b43e87826056 | refs/heads/master | 2021-01-19T06:43:45.401354 | 2018-10-25T06:21:57 | 2018-10-25T06:21:57 | 63,273,940 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 959 | r | collapse.singles.R | collapse.singles <- function(tree) {
    ## Collapse "singleton" internal nodes (internal nodes with exactly one
    ## descendant) in a "phylo" tree, merging the two incident edges and
    ## summing their branch lengths.
    elen <- tree$edge.length
    xmat <- tree$edge
    node.lab <- tree$node.label
    nnode <- tree$Nnode
    ntip <- length(tree$tip.label)
    singles <- NA
    while (length(singles) > 0) {
        tx <- tabulate(xmat[, 1])
        singles <- which(tx == 1)  # internal nodes with a single child
        if (length(singles) > 0) {
            i <- singles[1]
            prev.node <- which(xmat[, 2] == i)  # edge leading into the singleton
            next.node <- which(xmat[, 1] == i)  # edge leaving the singleton
            xmat[prev.node, 2] <- xmat[next.node, 2]  # bypass the singleton
            xmat <- xmat[xmat[, 1] != i, ]
            xmat[xmat > i] <- xmat[xmat > i] - 1L  # renumber the remaining nodes
            elen[prev.node] <- elen[prev.node] + elen[next.node]  # merge branch lengths
            if (!is.null(node.lab)) {
                node.lab <- node.lab[-c(i - ntip)]
            }
            nnode <- nnode - 1L
            elen <- elen[-next.node]
        }
    }
    tree$edge <- xmat
    tree$edge.length <- elen
    tree$node.label <- node.lab
    tree$Nnode <- nnode
    tree
}
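# Usage sketch (assumes an ape-style "phylo" object; the Newick string is
# illustrative only -- its outer parentheses create a singleton root node):
# library(ape)
# tr <- read.tree(text = "((t1:1,t2:1):1);")
# tr2 <- collapse.singles(tr)  # singleton removed, branch lengths summed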
c4773e9dfbadecdb229bc5fa9f75dc9d783f72c1 | bd81690d3b2f634b367ddf467d808ced5918d5af | /man/is_inst.Rd | 5c73d86e1d580694c0a258cb5f9aa3d297c18bf7 | [] | no_license | dataeducation/stackoverflow | 24c3c3b67b41d4597ae6c64661b41a7f523a45b5 | 4598e85891171884c8757576658bf24b45d46e37 | refs/heads/master | 2022-11-27T23:21:37.119236 | 2020-08-01T15:23:12 | 2020-08-01T15:23:12 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 725 | rd | is_inst.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/is_inst.R
\name{is_inst}
\alias{is_inst}
\title{Check if package is available}
\usage{
is_inst(pkg)
}
\arguments{
\item{pkg}{a character string with the name of a single package. An error occurs if more than one package name is given.}
}
\value{
\code{TRUE} if a package is installed, and \code{FALSE} otherwise.
}
\description{
A predicate for whether a package is installed
}
\examples{
is_inst("grDevices")
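# A typical guard pattern, kept commented as a sketch (the package name
# "somepkg" is illustrative):
# if (!is_inst("somepkg")) install.packages("somepkg")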
}
\references{
\url{https://stackoverflow.com/questions/9341635/check-for-installed-packages-before-running-install-packages/38082613#38082613}
}
\author{
\href{https://stackoverflow.com/users/1863950/artem-klevtsov}{Artem Klevtsov}
}
2dddbaaeff5771648763adb512f0bd9f3a1c8876 | 36274ff152ee9dc37a25392acb9750c0ee356994 | /man/is_data.frame.Rd | 83c8c71f9f21175cafdc5c266f22d64d35f3ed1a | [] | no_license | cran/assertive.types | 205021d83c306ce0e79c9355a95e41f715cffd3d | 0b4f51f93659a34799357899f9c2ede9dcebd27e | refs/heads/master | 2021-05-04T11:23:00.883941 | 2016-12-30T18:35:46 | 2016-12-30T18:35:46 | 48,076,813 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 1,001 | rd | is_data.frame.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/assert-is-type-base.R, R/is-type-base.R
\name{assert_is_data.frame}
\alias{assert_is_data.frame}
\alias{is_data.frame}
\title{Is the input a data.frame?}
\usage{
assert_is_data.frame(x, severity = getOption("assertive.severity", "stop"))
is_data.frame(x, .xname = get_name_in_parent(x))
}
\arguments{
\item{x}{Input to check.}
\item{severity}{How severe should the consequences of the assertion be?
Either \code{"stop"}, \code{"warning"}, \code{"message"}, or \code{"none"}.}
\item{.xname}{Not intended to be used directly.}
}
\value{
\code{is_data.frame} wraps \code{is.data.frame},
providing more information on failure. \code{assert_is_data.frame}
returns nothing but throws an error if \code{is_data.frame}
returns \code{FALSE}.
}
\description{
Is the input a data.frame?
}
\examples{
assert_is_data.frame(data.frame())
assert_is_data.frame(datasets::CO2)
}
\seealso{
\code{\link[base]{is.data.frame}}.
}