Column schema: content (large_string, lengths 0 to 6.46M) | path (large_string, lengths 3 to 331) | license_type (large_string, 2 classes) | repo_name (large_string, lengths 5 to 125) | language (large_string, 1 class) | is_vendor (bool, 2 classes) | is_generated (bool, 2 classes) | length_bytes (int64, 4 to 6.46M) | extension (large_string, 75 classes) | text (string, lengths 0 to 6.46M)
#' Are column data less than a specified value?
#'
#' The `col_vals_lt()` validation function, the `expect_col_vals_lt()`
#' expectation function, and the `test_col_vals_lt()` test function all check
#' whether column values in a table are *less than* a specified `value` (the
#' exact comparison used in this function is `col_val < value`). The `value` can
#' be specified as a single, literal value or as a column name given in
#' `vars()`. The validation function can be used directly on a data table or
#' with an *agent* object (technically, a `ptblank_agent` object) whereas the
#' expectation and test functions can only be used with a data table. The types
#' of data tables that can be used include data frames, tibbles, and even
#' database tables of `tbl_dbi` class. Each validation step or expectation will
#' operate over the number of test units that is equal to the number of rows in
#' the table (after any `preconditions` have been applied).
#'
#' If providing multiple column names to `columns`, the result will be an
#' expansion of validation steps to that number of column names (e.g.,
#' `vars(col_a, col_b)` will result in the entry of two validation steps). Aside
#' from column names in quotes and in `vars()`, **tidyselect** helper functions
#' are available for specifying columns. They are: `starts_with()`,
#' `ends_with()`, `contains()`, `matches()`, and `everything()`.
#'
#' This validation function supports special handling of `NA` values. The
#' `na_pass` argument will determine whether an `NA` value appearing in a test
#' unit should be counted as a *pass* or a *fail*. The default of `na_pass =
#' FALSE` means that any `NA`s encountered will accumulate failing test units.
#'
#' Having table `preconditions` means **pointblank** will mutate the table just
#' before interrogation. Such a table mutation is isolated in scope to the
#' validation step(s) produced by the validation function call. Using
#' **dplyr** code is suggested here since the statements can be translated to
#' SQL if necessary. The code is most easily supplied as a one-sided **R**
#' formula (using a leading `~`). In the formula representation, the `.` serves
#' as the input data table to be transformed (e.g.,
#' `~ . %>% dplyr::mutate(col_a = col_b + 10)`). Alternatively, a function could
#' instead be supplied (e.g.,
#' `function(x) dplyr::mutate(x, col_a = col_b + 10)`).
#'
#' Often, we will want to specify `actions` for the validation. This argument,
#' present in every validation function, takes a specially-crafted list
#' object that is best produced by the [action_levels()] function. Read that
#' function's documentation for the lowdown on how to create reactions to
#' above-threshold failure levels in validation. The basic gist is that you'll
#' want at least a single threshold level (specified as either the fraction of
#' test units failed, or, an absolute value), often using the `warn_at`
#' argument. This is especially true when `x` is a table object because,
#' otherwise, nothing happens. For the `col_vals_*()`-type functions, using
#' `action_levels(warn_at = 0.25)` or `action_levels(stop_at = 0.25)` are good
#' choices depending on the situation (the first produces a warning when a
#' quarter of the total test units fails, the other `stop()`s at the same
#' threshold level).
#'
#' Want to describe this validation step in some detail? Keep in mind that this
#' is only useful if `x` is an *agent*. If that's the case, `brief` the agent
#' with some text that fits. Don't worry if you don't want to do it. The
#' *autobrief* protocol kicks in when `brief = NULL`, and a simple brief will
#' then be automatically generated.
#'
#' @inheritParams col_vals_gt
#' @param value A numeric value used for this test. Any column values `< value`
#' are considered passing.
#'
#' @return For the validation function, the return value is either a
#' `ptblank_agent` object or a table object (depending on whether an agent
#' object or a table was passed to `x`). The expectation function invisibly
#' returns its input but, in the context of testing data, the function is
#' called primarily for its potential side-effects (e.g., signaling failure).
#' The test function returns a logical value.
#'
#' @examples
#' # For all of the examples here, we'll
#' # use a simple table with three numeric
#' # columns (`a`, `b`, and `c`) and three
#' # character columns (`d`, `e`, and `f`)
#' tbl <-
#'   dplyr::tibble(
#'     a = c(5, 5, 5, 5, 5, 5),
#'     b = c(1, 1, 1, 2, 2, 2),
#'     c = c(1, 1, 1, 2, 3, 4),
#'     d = LETTERS[a],
#'     e = LETTERS[b],
#'     f = LETTERS[c]
#'   )
#'
#' tbl
#'
#' # A: Using an `agent` with validation
#' # functions and then `interrogate()`
#'
#' # Validate that values in column `c`
#' # are all less than the value of `5`
#' agent <-
#'   create_agent(tbl) %>%
#'   col_vals_lt(vars(c), 5) %>%
#'   interrogate()
#'
#' # Determine if this validation
#' # had no failing test units (there
#' # are 6 test units, one for each row)
#' all_passed(agent)
#'
#' # Calling `agent` in the console
#' # prints the agent's report; but we
#' # can get a `gt_tbl` object directly
#' # with `get_agent_report(agent)`
#'
#' # B: Using the validation function
#' # directly on the data (no `agent`)
#'
#' # This way of using validation functions
#' # acts as a data filter: data is passed
#' # through but should `stop()` if there
#' # is a single test unit failing; the
#' # behavior of side effects can be
#' # customized with the `actions` option
#' tbl %>%
#'   col_vals_lt(vars(c), 5) %>%
#'   dplyr::pull(c)
#'
#' # C: Using the expectation function
#'
#' # With the `expect_*()` form, we would
#' # typically perform one validation at a
#' # time; this is primarily used in
#' # testthat tests
#' expect_col_vals_lt(tbl, vars(c), 5)
#'
#' # D: Using the test function
#'
#' # With the `test_*()` form, we should
#' # get a single logical value returned
#' # to us
#' test_col_vals_lt(tbl, vars(c), 5)
#'
#' @family validation functions
#' @section Function ID:
#' 2-1
#'
#' @seealso The analogous function with a right-closed bound: [col_vals_lte()].
#'
#' @name col_vals_lt
NULL
#' @rdname col_vals_lt
#' @import rlang
#' @export
col_vals_lt <- function(x,
                        columns,
                        value,
                        na_pass = FALSE,
                        preconditions = NULL,
                        actions = NULL,
                        brief = NULL,
                        active = TRUE) {

  # Capture the `columns` expression
  columns <- rlang::enquo(columns)

  # Resolve the columns based on the expression
  columns <- resolve_columns(x = x, var_expr = columns, preconditions)

  if (is_a_table_object(x)) {

    secret_agent <- create_agent(x, name = "::QUIET::") %>%
      col_vals_lt(
        columns = columns,
        value = value,
        na_pass = na_pass,
        preconditions = preconditions,
        brief = brief,
        actions = prime_actions(actions),
        active = active
      ) %>% interrogate()

    return(x)
  }

  agent <- x

  if (is.null(brief)) {
    brief <- generate_autobriefs(agent, columns, preconditions, values = value, "col_vals_lt")
  }

  # Add one or more validation steps based on the
  # length of the `columns` variable
  for (i in seq(columns)) {
    agent <-
      create_validation_step(
        agent = agent,
        assertion_type = "col_vals_lt",
        column = columns[i],
        values = value,
        na_pass = na_pass,
        preconditions = preconditions,
        actions = covert_actions(actions, agent),
        brief = brief[i],
        active = active
      )
  }

  agent
}
#' @rdname col_vals_lt
#' @import rlang
#' @export
expect_col_vals_lt <- function(object,
                               columns,
                               value,
                               na_pass = FALSE,
                               preconditions = NULL,
                               threshold = 1) {

  fn_name <- "expect_col_vals_lt"

  vs <-
    create_agent(tbl = object, name = "::QUIET::") %>%
    col_vals_lt(
      columns = {{ columns }},
      value = {{ value }},
      na_pass = na_pass,
      preconditions = {{ preconditions }},
      actions = action_levels(notify_at = threshold)
    ) %>%
    interrogate() %>% .$validation_set

  x <- vs$notify %>% all()

  threshold_type <- get_threshold_type(threshold = threshold)

  if (threshold_type == "proportional") {
    failed_amount <- vs$f_failed
  } else {
    failed_amount <- vs$n_failed
  }

  if (inherits(vs$capture_stack[[1]]$warning, "simpleWarning")) {
    warning(conditionMessage(vs$capture_stack[[1]]$warning))
  }
  if (inherits(vs$capture_stack[[1]]$error, "simpleError")) {
    stop(conditionMessage(vs$capture_stack[[1]]$error))
  }

  act <- testthat::quasi_label(enquo(x), arg = "object")

  column_text <- prep_column_text(vs$column[[1]])
  operator <- "<"
  values_text <- prep_values_text(values = vs$values, limit = 3, lang = "en")

  testthat::expect(
    ok = identical(!as.vector(act$val), TRUE),
    failure_message = glue::glue(failure_message_gluestring(fn_name = fn_name, lang = "en"))
  )

  act$val <- object
  invisible(act$val)
}
#' @rdname col_vals_lt
#' @import rlang
#' @export
test_col_vals_lt <- function(object,
                             columns,
                             value,
                             na_pass = FALSE,
                             preconditions = NULL,
                             threshold = 1) {

  vs <-
    create_agent(tbl = object, name = "::QUIET::") %>%
    col_vals_lt(
      columns = {{ columns }},
      value = {{ value }},
      na_pass = na_pass,
      preconditions = {{ preconditions }},
      actions = action_levels(notify_at = threshold)
    ) %>%
    interrogate() %>% .$validation_set

  if (inherits(vs$capture_stack[[1]]$warning, "simpleWarning")) {
    warning(conditionMessage(vs$capture_stack[[1]]$warning))
  }
  if (inherits(vs$capture_stack[[1]]$error, "simpleError")) {
    stop(conditionMessage(vs$capture_stack[[1]]$error))
  }

  all(!vs$notify)
}
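The `actions` argument described in the documentation above is not exercised by the roxygen examples, so here is a minimal sketch assuming the pointblank API shown in this file (the table and threshold values are invented; report details vary by package version):

```r
library(pointblank)
library(dplyr)

small_tbl <- dplyr::tibble(c = c(1, 1, 2, 3, 4, 6))

agent <-
  create_agent(small_tbl) %>%
  col_vals_lt(
    vars(c), 5,
    # warn when more than 10% of test units fail, stop at 50%
    actions = action_levels(warn_at = 0.1, stop_at = 0.5)
  ) %>%
  interrogate()

all_passed(agent)  # one failing test unit (c == 6), so this is FALSE
```

With six rows there are six test units; the single failure (1/6 ≈ 17%) crosses the `warn_at` threshold but not `stop_at`, so the agent report flags a warning condition for this step.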
| /R/col_vals_lt.R | permissive | JayKimBravekjh/pointblank | R | is_vendor: false | is_generated: false | 10,211 bytes | .r |
#!/usr/bin/Rscript
# Run ABSOLUTE on one sample and extract the purity (alpha) estimate
# for the solution mode with rounded ploidy (tau) equal to 2.
args <- commandArgs(trailingOnly = TRUE)
input_loc <- args[1]   # directory containing the input files
out_loc <- args[2]     # output directory
input_name <- args[3]  # sample name, also used as the file prefix

cnv_input <- paste(input_loc, input_name, ".addNeutral.cnv.txt", sep = "")
snv_input <- paste(input_loc, input_name, ".maf.txt", sep = "")
out <- paste(out_loc, input_name, ".R.txt", sep = "")

library(ABSOLUTE)

# Positional arguments presumably follow RunAbsolute()'s documented signature:
# seg.dat.fn, sigma.p, max.sigma.h, min.ploidy, max.ploidy, primary.disease,
# platform, sample.name, results.dir, max.as.seg.count, max.non.clonal,
# max.neg.genome, copy_num_type, maf.fn, min.mut.af
RunAbsolute(cnv_input, 0, 0.015, 0.95, 10, NA, 'SNP_6.0', input_name, out_loc,
            1500, 0.05, 0.005, 'total', snv_input, 0)

load(paste(out_loc, input_name, ".ABSOLUTE.RData", sep = ""))
alpha <- seg.dat[["mode.res"]][["mode.tab"]][, "alpha"]
tau <- round(seg.dat[["mode.res"]][["mode.tab"]][, "tau"])

# Write "<sample name suffix>\t<alpha at tau == 2>" to the output file
sink(out)
cat(substr(input_name, 10, nchar(input_name)))
cat("\t")
cat(round(alpha[tau == 2][1], 2))
cat("\n")
sink()
| /simulation/script/ABSOLUTE_command.r | permissive | njalloul90/All-FIT | R | is_vendor: false | is_generated: false | 718 bytes | .r |
getwd()
setwd("/Users/rameshmitawa/Documents/R Programs")
rm(list = ls())  # clear the workspace

# Read a CSV file into a data frame and inspect it
data <- read.csv("CSV_FILE.csv")
data
is.data.frame(data)
nrow(data)
ncol(data)

# rio infers the format from the extension; `format = 'tsv'` overrides that,
# here presumably because the .csv file is actually tab-delimited
library("rio")
x <- import('1049_Notifications_By_Member_CSV.csv', format = 'tsv')
export(x, "file1.csv")
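The `format` override used above is worth spelling out: rio normally picks its reader from the file extension, and passing `format` forces a different parser. A minimal sketch (file names here are hypothetical):

```r
library(rio)

# rio picks the reader from the extension by default ...
d1 <- import("example.csv")                  # parsed as comma-separated
# ... but `format` overrides the extension, e.g. for a tab-delimited
# file that was saved with a misleading .csv name
d2 <- import("example.csv", format = "tsv")  # parsed as tab-separated
# export() likewise infers the writer from the target extension
export(d2, "example_fixed.csv")              # written as a true CSV
```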
| /Read CSV.R | permissive | rsmitawa/R-Practice | R | is_vendor: false | is_generated: false | 262 bytes | .r |
library(ggplot2)
library(ggpmisc)
library(RColorBrewer)
source("~/mm/summarySEwithin.R")
source("~/mm/normDataWithin.R")
source("~/mm/summarySE.R")
d1 <- read.csv("tummaddeler.csv")
d2 <- reshape2::melt(d1, id = c("Sure", "pH", "Doz", "Dezenfeksiyon"))
d3 <- summarySEwithin(d2, measurevar = "value",
withinvars = c("pH", "Sure", "variable", "Doz",
"Dezenfeksiyon"), idvar = "Sure")
d3[is.na(d3)] <- 0
d3$pH <- factor(d3$pH, levels=c("Orijinal", "10"))
# Use the summarised data if available, otherwise fall back to the raw
# melted data, then build one bar plot per measured variable
plot_data <- if (exists("d3")) d3 else d2
p1 <- plyr::dlply(plot_data, "variable", function(x) {
  ggplot(x, aes(x = Sure, y = value, fill = Dezenfeksiyon)) +
    geom_bar(position = position_dodge(), stat = "identity") +
    geom_errorbar(position = position_dodge(width = 0.9),
                  aes(ymin = value - sd, ymax = value + sd),
                  width = 0.3) +
    facet_grid(Doz ~ pH, labeller = label_both) +
    scale_fill_brewer(palette = "Set1", name = "") +
    ylab(expression(paste("C/", C[0], sep = ""))) +
    xlab("Süre (saat)") +  # Turkish: "Time (hours)"
    theme_bw(base_size = 16) +
    theme(axis.text.x = element_text(angle = 90, hjust = 1),
          legend.position = "bottom") +
    ggtitle(x$variable)
})
tit <- colnames(d1[5:length(d1)])
# Save each plot in the list to its own PNG file
n_plots <- length(p1)
for (i in seq_len(n_plots)) {
  png(file = paste(tit[[i]], "_dezenfeksiyon", ".png", sep = ""))
  print(p1[[i]])
  dev.off()
}
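As a side note, the `png()`/`print()`/`dev.off()` save loop above can also be written with `ggplot2::ggsave()`, which opens and closes the graphics device itself. A sketch assuming `p1` is the named list of plots produced by `dlply()` (the dimensions chosen here are arbitrary):

```r
library(ggplot2)

# dlply() names the list elements after the splitting variable, so the
# element name can double as the file name
for (nm in names(p1)) {
  ggsave(
    filename = paste0(nm, "_dezenfeksiyon.png"),
    plot = p1[[nm]],
    width = 18, height = 14, units = "cm", dpi = 150
  )
}
```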
| /klorlama-kloraminleme.R | no_license | egemenaydin/mm | R | is_vendor: false | is_generated: false | 1,585 bytes | .r |
library(data.table)
#' @noRd
mergeTable <- function(flatBiotic, table, columns, keys, prune = F) {
  if (any(columns %in% names(table))) {
    if (is.null(flatBiotic)) {
      return(table[, unique(c(keys, columns[columns %in% names(table)])), with = F])
    }
    else {
      return(merge(table[, unlist(unique(c(keys, columns[columns %in% names(table)]))), with = F], flatBiotic, by = keys, all.x = !prune))
    }
  }
  if (!is.null(flatBiotic)) {
    return(merge(table[, keys, with = F], flatBiotic, by = keys))
  }
  return(NULL)
}
#' Make tabular view of biotic data
#' @description
#' produces a single 2D table with biotic data.
#' @details
#' the columns that should be included in the view are provided in the argument 'columns'
#' (field names are unique across levels in biotic 3).
#' Key columns are automatically included. Do not specify these.
#' Only levels implied by these columns are included.
#'
#' Pruning: pruning results means excluding higher levels when lower levels are not present.
#' That is, if the individual level is included, pruning determines whether catchsamples
#' with no individuals sampled are kept in the data set.
#' relationalBiotic could be obtained by RstoxData::readXmlFile, or any other parser
#' that produces the following format:
#' relationalBiotic should be a named list containing data.tables.
#' each table should be named as corresponding elements in biotic 3 (namespace: http://www.imr.no/formats/nmdbiotic/v3)
#' all columns should be named as corresponding elements and attributes in biotic 3.
#'
#' age parameters are considered part of the individual level,
#' and views will be constrained to one age reading for each individual,
#' although biotic 3 supports several.
#'
#' views do not allow columns from different branches of the hierarchy in the same view
#' (apart from age reading, which is constrained to only one per individual).
#' E.g. one cannot include columns from both prey and tag, or from both preylengthfrequencytable and copepodedevstagefrequencytable.
#' @param relationalBiotic list() of tabular representation of the different levels in biotic
#' @param columns name of columns to keep (in addition to key columns)
#' @param prune logical, if TRUE, whether to prune results.
#' @return data.table with the requested columns, in addition to any key columns.
makeTabularView <- function(relationalBiotic, columns, prune = F) {

  allColumnNames <- c(names(relationalBiotic$agedetermination),
                      names(relationalBiotic$catchsample),
                      names(relationalBiotic$copepodedevstagefrequencytable),
                      names(relationalBiotic$fishstation),
                      names(relationalBiotic$individual),
                      names(relationalBiotic$mission),
                      names(relationalBiotic$prey),
                      names(relationalBiotic$preylengthfrequencytable),
                      names(relationalBiotic$tag))

  if (!all(columns %in% allColumnNames)) {
    stop("Some specified columns do not exist in format: ", paste(columns[!(columns %in% allColumnNames)], collapse = ","))
  }

  # biotic 3 key structure
  missionkeys <- c("missiontype", "startyear", "platform", "missionnumber")
  stationkeys <- c(missionkeys, "serialnumber")
  catchkeys <- c(stationkeys, "catchsampleid")
  individualkeys <- c(catchkeys, "specimenid")
  agedetkeys <- c(individualkeys, "agedeterminationid")
  tagkeys <- c(individualkeys, "tagid")
  preykeys <- c(individualkeys, "preysampleid")
  preylengthfrequencykeys <- c(preykeys, "preylengthid")
  copepodedevstagefrequencykeys <- c(preykeys, "copepodedevstage")

  # keys are automatically added where appropriate
  columns <- columns[!(columns %in% c(agedetkeys, tagkeys, preylengthfrequencykeys, copepodedevstagefrequencykeys))]

  if (any(columns %in% names(relationalBiotic$agedetermination))) {
    # set preferred age reading where missing
    ageids <- unique(relationalBiotic$agedetermination[, agedetkeys, with = F])
    ageids <- ageids[!duplicated(ageids[, individualkeys, with = F])]
    relationalBiotic$individual <- merge(relationalBiotic$individual, ageids, by = individualkeys, all.x = T)
    relationalBiotic$individual[is.na(relationalBiotic$individual$preferredagereading), "preferredagereading"] <- relationalBiotic$individual[is.na(relationalBiotic$individual$preferredagereading), ][["agedeterminationid"]]
    relationalBiotic$individual$agedeterminationid <- NULL
  }

  if (any(columns %in% names(relationalBiotic$tag))) {
    if (any(columns %in% names(relationalBiotic$copepodedevstagefrequencytable)) |
        any(columns %in% names(relationalBiotic$preylengthfrequencytable)) |
        any(columns %in% names(relationalBiotic$prey))) {
      stop("Cannot view several branches of the hierarchy in one view. Columns from tag may not be combined with prey-related columns.")
    }
  }

  if (any(columns %in% names(relationalBiotic$copepodedevstagefrequencytable)) &
      any(columns %in% names(relationalBiotic$preylengthfrequencytable))) {
    stop("Cannot view several branches of the hierarchy in one view. Columns from copepodedevstagefrequencytable may not be combined with columns from preylengthfrequencytable.")
  }

  # These can only occur as leaf tables
  flatbiotic <- NULL
  flatbiotic <- mergeTable(flatbiotic, relationalBiotic$tag, columns, tagkeys, prune)
  flatbiotic <- mergeTable(flatbiotic, relationalBiotic$copepodedevstagefrequencytable, columns, copepodedevstagefrequencykeys, prune)
  flatbiotic <- mergeTable(flatbiotic, relationalBiotic$preylengthfrequencytable, columns, preylengthfrequencykeys, prune)
  flatbiotic <- mergeTable(flatbiotic, relationalBiotic$prey, columns, preykeys, prune)

  if (!any(columns %in% names(relationalBiotic$agedetermination))) {
    flatbiotic <- mergeTable(flatbiotic, relationalBiotic$individual, columns, individualkeys, prune)
  }
  else {
    agedet <- mergeTable(NULL, relationalBiotic$agedetermination, columns, agedetkeys)
    ind <- relationalBiotic$individual
    ind$agedeterminationid <- ind$preferredagereading
    # merge the re-keyed individual table (not the original one) so that
    # only the preferred age reading is joined in
    ind <- merge(ind, agedet, all.x = T)
    flatbiotic <- mergeTable(flatbiotic, ind, columns, agedetkeys, prune)
  }

  flatbiotic <- mergeTable(flatbiotic, relationalBiotic$catchsample, columns, catchkeys, prune)
  flatbiotic <- mergeTable(flatbiotic, relationalBiotic$fishstation, columns, stationkeys, prune)
  flatbiotic <- mergeTable(flatbiotic, relationalBiotic$mission, columns, missionkeys, prune)
  return(flatbiotic)
}
#' Custom view of individuals, with common additions from catch level and age reading level
individualView <- function(relationalBiotic, additional=c("catchcategory", "catchpartnumber", "commonname", "scientificname", "age", "otolithtype")){
return(makeTabularView(relationalBiotic, c(names(relationalBiotic$individual), additional), prune=T))
}
#' Custom view with all columns from mission, fishstation, catchsample, individual and agedetermination
stdview <- function(relationalBiotic){
return(makeTabularView(relationalBiotic, c(names(relationalBiotic$agedetermination), names(relationalBiotic$individual), names(relationalBiotic$catchsample), names(relationalBiotic$fishstation), names(relationalBiotic$mission)), prune=F))
}
#' Custom view with all columns from mission, fishstation, and catchsample
catchview <- function(relationalBiotic){
return(makeTabularView(relationalBiotic, c(names(relationalBiotic$catchsample), names(relationalBiotic$fishstation), names(relationalBiotic$mission)), prune=F))
}
#
# data set examples
# some of these are really fisheries independent data, but serve as useful examples still.
#
#' data set for looking up mission level columns based on serialnumber and year
#' for merging with SPD-dependent scripts
#' See pull.R for getting data files
#' See RstoxData for parsing routines: https://github.com/StoXProject/RstoxData
makeCapelinMissionLookup <- function(datafiles = paste("biotic_year_", 1970:2018, ".xml", sep = "")) {
  require("RstoxData")
  result <- NULL
  if (!all(file.exists(datafiles))) {
    stop("Some specified files do not exist: ", paste(datafiles[!file.exists(datafiles)], collapse = ", "))
  }
  for (f in datafiles) {
    data <- RstoxData::readXmlFile(f)
    flatdata <- makeTabularView(data, c(names(data$mission), "serialnumber", "commonname", "scientificname", "catchcategory"))
    flatdata <- flatdata[!is.na(flatdata$catchcategory) & flatdata$catchcategory == "162035", ]
    if (is.null(result)) {
      result <- flatdata
    }
    else {
      result <- rbind(result, flatdata)
    }
  }
  return(result)
}
#' Extract catches from a cruise series
#' See pull.R for getting data files
#' See Rstox for routines for looking up cruise info: https://github.com/Sea2Data/Rstox
#' See RstoxData for parsing routines: https://github.com/StoXProject/RstoxData
makeCruiseSet <- function(cruise = "Barents Sea NOR-RUS demersal fish cruise in winter", years = 2013:2018, datafiles = paste("biotic_year_", years, ".xml", sep = "")) {
  require("Rstox")
  require("RstoxData")
  require("data.table")
  cruiselist <- getNMDinfo("cs")
  survey <- cruiselist[[cruise]][cruiselist[[cruise]]$Year %in% years, ]
  survey$Year <- as.integer(survey$Year)
  result <- NULL
  for (f in datafiles) {
    if (!file.exists(f)) {
      stop(paste("File", f, "does not exist."))
    }
    data <- RstoxData::readXmlFile(f, stream = T)
    flatdata <- catchview(data)
    # drop data that is not in any cruise series
    flatdata <- flatdata[!is.na(flatdata$cruise), ]
    # keep only the data from 'survey'
    flatdata <- merge(flatdata, as.data.table(survey[, c("Cruise", "Year")]), by.x = c("cruise", "startyear"), by.y = c("Cruise", "Year"))
    result <- rbind(flatdata, result)
  }
  return(result)
}
# Source: Sea2Data/FDAtools, path /dataAcess/makeBioticDataSet/tabularize.R (R, no_license, 9,872 bytes)
library(data.table)
#' @noRd
mergeTable <- function(flatBiotic, table, columns, keys, prune=FALSE){
if (any(columns %in% names(table))){
if (is.null(flatBiotic)){
return(table[,unique(c(keys, columns[columns %in% names(table)])), with=FALSE])
}
else{
return(merge(table[,unique(c(keys, columns[columns %in% names(table)])), with=FALSE], flatBiotic, by=keys, all.x=!prune))
}
}
if (!is.null(flatBiotic)){
return(merge(table[,keys, with=FALSE], flatBiotic, by=keys))
}
return(NULL)
}
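The `all.x=!prune` switch in `mergeTable` is what implements pruning: a left join keeps higher-level rows with no lower-level children, while an inner join drops them. A minimal base-R sketch of the two merge modes, using hypothetical `parents`/`children` tables standing in for two adjacent biotic levels:

```r
# Hypothetical two-level example: 'parents' is a higher level (e.g. fishstation),
# 'children' a lower level (e.g. catchsample), keyed on the same 'id'.
parents  <- data.frame(id = 1:3, info = c("a", "b", "c"))
children <- data.frame(id = c(1, 1, 2), obs = c("x", "y", "z"))

# prune = FALSE (all.x = TRUE): parent rows without children are kept, padded with NA
unpruned <- merge(parents, children, by = "id", all.x = TRUE)
nrow(unpruned)  # 4: id 3 survives with obs = NA

# prune = TRUE (inner join): parent rows without children are dropped
pruned <- merge(parents, children, by = "id")
nrow(pruned)    # 3: id 3 is gone
```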
#' Make tabular view of biotic data
#' @description
#' produces a single 2D table with biotic data.
#' @details
#' the columns that should be included in the view are provided in the argument 'columns'
#' (field names are unique across levels in biotic 3).
#' Key columns are automatically included. Do not specify these.
#' Only levels implied by these columns are included.
#'
#' Pruning: pruning results means excluding higher levels when lower levels are not present.
#' That is, if the individual level is included, pruning controls whether catchsamples with no individuals sampled are kept in the data set.
#'
#' relationalBiotic could be obtained by RstoxData::readXmlFile, or any other parser
#' that produces the following format:
#' relationalBiotic should be a named list containing data.tables.
#' each table should be named as corresponding elements in biotic 3 (namespace: http://www.imr.no/formats/nmdbiotic/v3)
#' all columns should be named as corresponding elements and attributes in biotic 3.
#'
#' age parameters are considered part of the individual level,
#' and views will be constrained to one age reading for each individual,
#' although biotic 3 supports several.
#'
#' views do not allow columns from different branches of the hierarchy in the same view
#' (apart from age reading, which is constrained to only one per individual).
#' E.g., one cannot include columns from both prey and tag, or from both preylengthfrequencytable and copepodedevstagefrequencytable.
#' @param relationalBiotic list() of tabular representation of the different levels in biotic
#' @param columns name of columns to keep (in addition to key columns)
#' @param prune logical; if TRUE, prune the results (see Details).
#' @return data.table with the requested columns, in addition to any key columns.
makeTabularView <- function(relationalBiotic, columns, prune=FALSE){
allColumnNames <- c(names(relationalBiotic$agedetermination),
names(relationalBiotic$catchsample),
names(relationalBiotic$copepodedevstagefrequencytable),
names(relationalBiotic$fishstation),
names(relationalBiotic$individual),
names(relationalBiotic$mission),
names(relationalBiotic$prey),
names(relationalBiotic$preylengthfrequencytable),
names(relationalBiotic$tag))
if (!all(columns %in% allColumnNames)){
stop("Some specified columns do not exist in the format: ", paste(columns[!(columns %in% allColumnNames)], collapse=","))
}
#biotic 3 key structure
missionkeys <- c("missiontype", "startyear", "platform", "missionnumber")
stationkeys <- c(missionkeys, "serialnumber")
catchkeys <- c(stationkeys, "catchsampleid")
individualkeys <- c(catchkeys, "specimenid")
agedetkeys <- c(individualkeys, "agedeterminationid")
tagkeys <- c(individualkeys, "tagid")
preykeys <- c(individualkeys, "preysampleid")
preylengthfrequencykeys <- c(preykeys, "preylengthid")
copepodedevstagefrequencykeys <- c(preykeys, "copepodedevstage")
#keys are automatically added where appropriate.
columns <- columns[!(columns %in% c(agedetkeys, tagkeys, preylengthfrequencykeys, copepodedevstagefrequencykeys))]
if (any(columns %in% names(relationalBiotic$agedetermination))){
#set preferred age reading where missing
ageids <- unique(relationalBiotic$agedetermination[,agedetkeys, with=F])
ageids <- ageids[!duplicated(ageids[,individualkeys, with=F])]
relationalBiotic$individual <- merge(relationalBiotic$individual, ageids, by=individualkeys, all.x=T)
relationalBiotic$individual[is.na(relationalBiotic$individual$preferredagereading),"preferredagereading"] <- relationalBiotic$individual[is.na(relationalBiotic$individual$preferredagereading),][["agedeterminationid"]]
relationalBiotic$individual$agedeterminationid <- NULL
}
if (any(columns %in% names(relationalBiotic$tag))){
if (any(columns %in% names(relationalBiotic$copepodedevstagefrequencytable)) |
any(columns %in% names(relationalBiotic$preylengthfrequencytable)) |
any(columns %in% names(relationalBiotic$prey))){
stop("Cannot view several branches of the hierarchy in one view. Columns from tag may not be combined with prey-related columns.")
}
}
if (any(columns %in% names(relationalBiotic$copepodedevstagefrequencytable)) &
any(columns %in% names(relationalBiotic$preylengthfrequencytable))){
stop("Cannot view several branches of the hierarchy in one view. Columns from copepodedevstagefrequencytable may not be combined with columns from preylengthfrequencytable.")
}
# These can only occur as leaf tables
flatbiotic <- NULL
flatbiotic <- mergeTable(flatbiotic, relationalBiotic$tag, columns, tagkeys, prune)
flatbiotic <- mergeTable(flatbiotic, relationalBiotic$copepodedevstagefrequencytable, columns, copepodedevstagefrequencykeys, prune)
flatbiotic <- mergeTable(flatbiotic, relationalBiotic$preylengthfrequencytable, columns, preylengthfrequencykeys, prune)
flatbiotic <- mergeTable(flatbiotic, relationalBiotic$prey, columns, preykeys, prune)
if (any(columns %in% names(relationalBiotic$agedetermination))){
agedet <- mergeTable(NULL, relationalBiotic$agedetermination, columns, agedetkeys)
ind <- relationalBiotic$individual
ind$agedeterminationid <- ind$preferredagereading
ind <- merge(ind, agedet, all.x=TRUE)
flatbiotic <- mergeTable(flatbiotic, ind, columns, agedetkeys, prune)
}
else{
flatbiotic <- mergeTable(flatbiotic, relationalBiotic$individual, columns, individualkeys, prune)
}
flatbiotic <- mergeTable(flatbiotic, relationalBiotic$catchsample, columns, catchkeys, prune)
flatbiotic <- mergeTable(flatbiotic, relationalBiotic$fishstation, columns, stationkeys, prune)
flatbiotic <- mergeTable(flatbiotic, relationalBiotic$mission, columns, missionkeys, prune)
return(flatbiotic)
}
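The merge chain in `makeTabularView` works leaf-to-root: start from the lowest requested level and repeatedly join upwards on the key columns shared with the parent level. A self-contained sketch of the same idea with hypothetical mini-tables (base-R merges standing in for the data.table merges used above):

```r
# Hypothetical three-level hierarchy keyed like biotic: mission > fishstation > catchsample
mission     <- data.frame(missionnumber = 1L, platformname = "RV Test")
fishstation <- data.frame(missionnumber = 1L, serialnumber = 1:2,
                          latitudestart = c(70.1, 70.2))
catchsample <- data.frame(missionnumber = 1L, serialnumber = c(1L, 1L, 2L),
                          catchsampleid = 1:3,
                          commonname = c("torsk", "hyse", "torsk"))

# Merge upwards on the key columns; all.x = TRUE on the higher
# level corresponds to prune = FALSE in makeTabularView.
flat <- merge(fishstation, catchsample,
              by = c("missionnumber", "serialnumber"), all.x = TRUE)
flat <- merge(mission, flat, by = "missionnumber", all.x = TRUE)
nrow(flat)  # 3: one row per catchsample, carrying station and mission columns
```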
#' Custom view of individuals, with common additions from catch level and age reading level
individualView <- function(relationalBiotic, additional=c("catchcategory", "catchpartnumber", "commonname", "scientificname", "age", "otolithtype")){
return(makeTabularView(relationalBiotic, c(names(relationalBiotic$individual), additional), prune=T))
}
#' Custom view with all columns from mission, fishstation, catchsample, individual and agedetermination
stdview <- function(relationalBiotic){
return(makeTabularView(relationalBiotic, c(names(relationalBiotic$agedetermination), names(relationalBiotic$individual), names(relationalBiotic$catchsample), names(relationalBiotic$fishstation), names(relationalBiotic$mission)), prune=FALSE))
}
#' Custom view with all columns from mission, fishstation, and catchsample
catchview <- function(relationalBiotic){
return(makeTabularView(relationalBiotic, c(names(relationalBiotic$catchsample), names(relationalBiotic$fishstation), names(relationalBiotic$mission)), prune=FALSE))
}
#
# data set examples
# some of these are really fisheries independent data, but serve as useful examples still.
#
#' data set for looking up mission level columns based on serialnumber and year
#' for merging with SPD-dependent scripts
#' See pull.R for getting data files
#' See RstoxData for parsing routines: https://github.com/StoXProject/RstoxData
makeCapelinMissionLookup <- function(datafiles = paste("biotic_year_", 1970:2018, ".xml", sep="")){
require("RstoxData")
result <- NULL
if (!all(file.exists(datafiles))){
stop("Some specified files do not exist: ", paste(datafiles[!file.exists(datafiles)], collapse = ", "))
}
for (f in datafiles){
data <- RstoxData::readXmlFile(f)
flatdata <- makeTabularView(data, c(names(data$mission), "serialnumber", "commonname", "scientificname", "catchcategory"))
flatdata <- flatdata[!is.na(flatdata$catchcategory) & flatdata$catchcategory == "162035",]
if (is.null(result)){
result <- flatdata
}
else{
result <- rbind(result, flatdata)
}
}
return(result)
}
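The loop in `makeCapelinMissionLookup` grows `result` by `rbind` on every iteration, which re-copies all accumulated rows each time. Collecting the per-file tables in a list and binding once at the end avoids the quadratic copying; a minimal sketch with a stand-in for each file's `flatdata` (data.table users can substitute `rbindlist` for `do.call(rbind, ...)`):

```r
# Accumulate per-file results in a list, then bind once at the end.
chunks <- list()
for (i in 1:3){
  chunks[[i]] <- data.frame(file = i, value = i * 10)  # stand-in for one file's flatdata
}
result <- do.call(rbind, chunks)
nrow(result)  # 3
```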
#' Extract catches from a cruise series
#' See pull.R for getting data files
#' See Rstox for routines for looking up cruise info: https://github.com/Sea2Data/Rstox
#' See RstoxData for parsing routines: https://github.com/StoXProject/RstoxData
makeCruiseSet <- function(cruise="Barents Sea NOR-RUS demersal fish cruise in winter", years=2013:2018, datafiles = paste("biotic_year_", years, ".xml", sep="")){
require("Rstox")
require("RstoxData")
require("data.table")
cruiselist <- getNMDinfo("cs")
survey <- cruiselist[[cruise]][cruiselist[[cruise]]$Year %in% years,]
survey$Year <- as.integer(survey$Year)
result <- NULL
for (f in datafiles){
if (!file.exists(f)){
stop(paste("File",f,"does not exist."))
}
data <- RstoxData::readXmlFile(f, stream = T)
flatdata <- catchview(data)
#drop data that is not in any cruise series
flatdata <- flatdata[!is.na(flatdata$cruise),]
#keep only the data from 'survey'
flatdata <- merge(flatdata, as.data.table(survey[,c("Cruise", "Year")]), by.x=c("cruise", "startyear"), by.y=c("Cruise", "Year"))
result <- rbind(flatdata, result)
}
return(result)
}
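In `makeCruiseSet`, the merge against `survey[,c("Cruise", "Year")]` acts as a filter (a semi-join): only catches whose (cruise, startyear) pair occurs in the cruise series survive. A minimal sketch of the same composite-key merge with hypothetical values:

```r
# Hypothetical catches and a one-cruise 'survey' lookup table
catches <- data.frame(cruise = c("C1", "C1", "C2"),
                      startyear = c(2013L, 2014L, 2013L),
                      weight = 1:3)
survey  <- data.frame(Cruise = "C1", Year = 2013L)

# Merging on the composite key keeps only rows matching some (Cruise, Year) pair
kept <- merge(catches, survey, by.x = c("cruise", "startyear"),
              by.y = c("Cruise", "Year"))
nrow(kept)  # 1: only the C1/2013 catch survives
```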
# install.packages("ggfortify")
# install.packages("ggplot2")
library(ggplot2)
library(ggfortify)
library(dplyr)  # for filter()
## Subset to one observation per week on Mondays at 8:00pm for 2007, 2008 and 2009
house070809weekly <- filter(
SUB_METERING_2006_2010, Weekday == 2 & Hour == 20 & Minute == 1
)
## Create TS object with SubMeter3
## Season = 1 year
tsSM3_070809weekly <- ts(
house070809weekly$Sub_metering_3,
frequency=52,
start=c(2007,1)
)
## Plot sub-meter 3 with autoplot (you may need to install these packages)
autoplot(tsSM3_070809weekly)
## Plot sub-meter 3 with autoplot - add labels, color
autoplot(
tsSM3_070809weekly,
color = 'red',
xlab = "Time",
ylab = "Watt Hours",
main = "Sub-meter 3"
)
## Season = 3 months
house_09_daily_1400 <- filter(
SUB_METERING_2006_2010,
Year == 2009 & Hour == 14 & Minute == 1
)
tsSM2_09_daily_1400 <- ts(
house_09_daily_1400$Sub_metering_2,
start = c(0,1),
frequency = 30 * 3
)
autoplot(
tsSM2_09_daily_1400,
color = 'green',
xlab = "Time",
ylab = "Watt Hours",
main = "Sub-meter 2 (Laundry) Daily, 2009"
)
## Season = 1 week
house_10_First12Weeks_30min <- filter(
SUB_METERING_2006_2010,
Year == 2010 & Week <= 12 & Minute %% 30 == 0
)
tsSM1_10_First12Weeks_30min <- ts(
house_10_First12Weeks_30min$Sub_metering_1,
start = c(1,1),
frequency = 2 * 24 * 7  # 336 half-hour observations per week
)
autoplot(
tsSM1_10_First12Weeks_30min,
color = 'blue',
xlab = "Time",
ylab = "Watt Hours",
main = "Sub-meter 1 (Kitchen) 30-Minute Intervals, First 12 Weeks 2010"
)
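The `frequency`/`start` pairs passed to `ts()` above control how the observation index maps to calendar time: `frequency` is the number of observations per season, and `start` is `c(season, cycle)`. A small self-contained check of the weekly case:

```r
# 104 weekly observations starting in week 1 of 2007: frequency = 52 means
# one 'season' is a year and the cycle index runs 1..52 within it.
x <- ts(seq_len(104), frequency = 52, start = c(2007, 1))
start(x)      # 2007, week 1
end(x)        # 2008, week 52
cycle(x)[53]  # 1: observation 53 opens the second year
```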
# Source: mvill025/Visualize-And-Analyze-Energy-Data, path /timeSeriesAnalysis.R (R, no_license, 1,521 bytes)
#
# This file is for plot 2 for Exploratory Data Analysis Project 1
#
###############################################################################
# Step 1 Set the file locations and read the file
# Note: This file assumes the data file exists in the "explore_p_1" subdirectory
# under the working directory
###############################################################################
readfile <- file.path(getwd(), "explore_p_1", "household_power_consumption.txt")
writefile <- file.path(getwd(), "explore_p_1", "plot2.png")
# Read the file
power.df <- as.data.frame(read.table(readfile, header=TRUE, sep=";",
na.strings="?"))
# Step 2 Clean the data set
###############################################################################
#subset the data frame to include only the dates of interest
power.df <- subset(power.df, Date == "1/2/2007" | Date == "2/2/2007")
# Merge Date & Time into one field and change the class
power.df$DateTime <- paste(power.df$Date, power.df$Time, sep=" ")
power.df$DateTime <- strptime(power.df$DateTime, format="%d/%m/%Y %H:%M:%S")
# Step 3 Generate the line graph
###############################################################################
png(filename = writefile, width = 480, height = 480, units = "px", bg = "white")
par(mar = c(6, 6, 5, 4))
plot(power.df$DateTime, power.df$Global_active_power, type="l",
xlab="", ylab="Global Active Power (kilowatts)")
dev.off()
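The `strptime` call above converts the pasted day/month/year strings into POSIXlt date-times so `plot()` can lay out a continuous time axis. A quick check of the format string on a hypothetical timestamp:

```r
# "%d/%m/%Y %H:%M:%S" parses day-first dates like those in the power data set
dt <- strptime("2/2/2007 21:30:00", format = "%d/%m/%Y %H:%M:%S", tz = "UTC")
format(dt, "%Y-%m-%d %H:%M")  # "2007-02-02 21:30"
```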
# Source: MGoodman10/ExData_Plotting1, path /plot2.R (R, no_license, 1,507 bytes)
#' Functional Principal Component Analysis
#'
#' FPCA for dense or sparse functional data.
#'
#' @param Ly A list of \emph{n} vectors containing the observed values for each individual. Missing values specified by \code{NA}s are supported for dense case (\code{dataType='dense'}).
#' @param Lt A list of \emph{n} vectors containing the observation time points for each individual corresponding to y. Each vector should be sorted in ascending order.
#' @param optns A list of options control parameters specified by \code{list(name=value)}. See `Details'.
#'
#' @details If the input is sparse data, make sure you check that the design plot is dense and the 2D domain is well covered, using \code{plot} or \code{CreateDesignPlot}. Some study designs, such as snippet data (where each subject is observed only on a sub-interval of the period of study), will have an ill-covered design plot, for which the covariance estimate will be unreliable.
#'
#' Available control options are
#' \describe{
#' \item{userBwCov}{The bandwidth value for the smoothed covariance function; positive numeric - default: determine automatically based on 'methodBwCov'}
#' \item{methodBwCov}{The bandwidth choice method for the smoothed covariance function; 'GMeanAndGCV' (the geometric mean of the GCV bandwidth and the minimum bandwidth),'CV','GCV' - default: 10\% of the support}
#' \item{userBwMu}{The bandwidth value for the smoothed mean function (using 'CV' or 'GCV'); positive numeric - default: determine automatically based on 'methodBwMu'}
#' \item{methodBwMu}{The bandwidth choice method for the mean function; 'GMeanAndGCV' (the geometric mean of the GCV bandwidth and the minimum bandwidth),'CV','GCV' - default: 5\% of the support}
#' \item{dataType}{The type of design we have (usually distinguishing between sparse or dense functional data); 'Sparse', 'Dense', 'DenseWithMV', 'p>>n' - default: determine automatically based on 'IsRegular'}
#' \item{diagnosticsPlot}{Deprecated. Same as the option 'plot'}
#' \item{plot}{Plot FPCA results (design plot, mean, scree plot and first K (<=3) eigenfunctions); logical - default: FALSE}
#' \item{error}{Assume measurement error in the dataset; logical - default: TRUE}
#' \item{fitEigenValues}{Whether also to obtain a regression fit of the eigenvalues - default: FALSE}
#' \item{FVEthreshold}{Fraction-of-Variance-Explained threshold used during the SVD of the fitted covar. function; numeric (0,1] - default: 0.9999}
#' \item{kernel}{Smoothing kernel choice, common for mu and covariance; "rect", "gauss", "epan", "gausvar", "quar" - default: "gauss"; dense data are assumed noise-less so no smoothing is performed. }
#' \item{kFoldMuCov}{The number of folds to be used for mean and covariance smoothing. Default: 10}
#' \item{lean}{If TRUE the 'inputData' field in the output list is empty. Default: FALSE}
#' \item{maxK}{The maximum number of principal components to consider - default: min(20, N-1), N:# of curves}
#' \item{methodXi}{The method to estimate the PC scores; 'CE' (Condit. Expectation), 'IN' (Numerical Integration) - default: 'CE' for sparse data, 'IN' for dense data.}
#' \item{methodMuCovEst}{The method to estimate the mean and covariance in the case of dense functional data; 'cross-sectional', 'smooth' - default: 'cross-sectional'}
#' \item{nRegGrid}{The number of support points in each direction of covariance surface; numeric - default: 51}
#' \item{numBins}{The number of bins to bin the data into; positive integer > 10, default: NULL}
#' \item{methodSelectK}{The method of choosing the number of principal components K; 'FVE','AIC','BIC', or a positive integer as specified number of components - default: 'FVE'}
#' \item{shrink}{Whether to use shrinkage method to estimate the scores in the dense case (see Yao et al 2003) - default FALSE}
#' \item{outPercent}{A 2-element vector in [0,1] indicating the outPercent data in the boundary - default (0,1)}
#' \item{rho}{The truncation threshold for the iterative residual. 'cv': choose rho by leave-one-observation out cross-validation; 'no': no regularization - default "cv" if error == TRUE, and "no" if error == FALSE.}
#' \item{rotationCut}{The 2-element vector in [0,1] indicating the percent of data truncated during sigma^2 estimation; default (0.25, 0.75)}
#' \item{useBinnedData}{Should the data be binned? 'FORCE' (Enforce the # of bins), 'AUTO' (Select the # of bins automatically), 'OFF' (Do not bin) - default: 'AUTO'}
#' \item{useBinnedCov}{Whether to use the binned raw covariance for smoothing; logical - default:TRUE}
#' \item{userCov}{The user-defined smoothed covariance function; list of two elements: numerical vector 't' and matrix 'cov', 't' must cover the support defined by 'Ly' - default: NULL}
#' \item{userMu}{The user-defined smoothed mean function; list of two numerical vector 't' and 'mu' of equal size, 't' must cover the support defined 'Ly' - default: NULL}
#' \item{userSigma2}{The user-defined measurement error variance. A positive scalar. If specified then no regularization is used (rho is set to 'no', unless specified otherwise). Default to `NULL`}
#' \item{verbose}{Display diagnostic messages; logical - default: FALSE}
#' }
#' @return A list containing the following fields:
#' \item{sigma2}{Variance for measurement error.}
#' \item{lambda}{A vector of length \emph{K} containing eigenvalues.}
#' \item{phi}{An nWorkGrid by \emph{K} matrix containing eigenfunctions, supported on workGrid.}
#' \item{xiEst}{A \emph{n} by \emph{K} matrix containing the FPC estimates.}
#' \item{xiVar}{A list of length \emph{n}, each entry containing the variance estimates for the FPC estimates.}
#' \item{obsGrid}{The (sorted) grid points where all observation points are pooled.}
#' \item{mu}{A vector of length nWorkGrid containing the mean function estimate.}
#' \item{workGrid}{A vector of length nWorkGrid. The internal regular grid on which the eigen analysis is carried on.}
#' \item{smoothedCov}{A nWorkGrid by nWorkGrid matrix of the smoothed covariance surface.}
#' \item{fittedCov}{A nWorkGrid by nWorkGrid matrix of the fitted covariance surface, which is guaranteed to be non-negative definite.}
#' \item{optns}{A list of actually used options.}
#' \item{bwMu}{The selected (or user specified) bandwidth for smoothing the mean function.}
#' \item{bwCov}{The selected (or user specified) bandwidth for smoothing the covariance function.}
#' \item{rho}{A regularizing scalar for the measurement error variance estimate.}
#' \item{cumFVE}{A vector with the percentages of the total variance explained by each FPC. Increases to almost 1.}
#' \item{FVE}{A percentage indicating the total variance explained by chosen FPCs with corresponding 'FVEthreshold'.}
#' \item{criterionValue}{A scalar specifying the criterion value obtained by the selected number of components with specific methodSelectK: FVE,AIC,BIC values or NULL for fixedK.}
#' \item{inputData}{A list containing the original 'Ly' and 'Lt' lists used as inputs to FPCA. NULL if 'lean' was specified to be TRUE.}
#' @examples
#' set.seed(1)
#' n <- 20
#' pts <- seq(0, 1, by=0.05)
#' sampWiener <- Wiener(n, pts)
#' sampWiener <- Sparsify(sampWiener, pts, 10)
#' res <- FPCA(sampWiener$Ly, sampWiener$Lt,
#' list(dataType='Sparse', error=FALSE, kernel='epan', verbose=TRUE))
#' plot(res) # The design plot covers [0, 1] * [0, 1] well.
#' CreateCovPlot(res, 'Fitted')
#' @references
#' \cite{Yao, F., Mueller, H.G., Clifford, A.J., Dueker, S.R., Follett, J., Lin, Y., Buchholz, B., Vogel, J.S. (2003). "Shrinkage estimation for functional principal component scores, with application to the population kinetics of plasma folate." Biometrics 59, 676-685. (Shrinkage estimates for dense data)}
#'
#' \cite{Yao, Fang, Hans-Georg Mueller, and Jane-Ling Wang. "Functional data analysis for sparse longitudinal data." Journal of the American Statistical Association 100, no. 470 (2005): 577-590. (Sparse data FPCA)}
#'
#' \cite{Liu, Bitao, and Hans-Georg Mueller. "Estimating derivatives for samples of sparsely observed functions, with application to online auction dynamics." Journal of the American Statistical Association 104, no. 486 (2009): 704-717. (Sparse data FPCA)}
#'
#' \cite{Castro, P. E., W. H. Lawton, and E. A. Sylvestre. "Principal modes of variation for processes with continuous sample curves." Technometrics 28, no. 4 (1986): 329-337. (Dense data FPCA)}
#' @export
FPCA = function(Ly, Lt, optns = list()){
# Check the data validity for further analysis
CheckData(Ly,Lt)
# Force the data to be list of numeric members and handle NA's
#Ly <- lapply(Ly, as.numeric)
#Lt <- lapply(Lt, as.numeric)
#Lt <- lapply(Lt, signif, 14)
#inputData <- list(Ly=Ly, Lt=Lt);
inputData <- HandleNumericsAndNAN(Ly,Lt);
Ly <- inputData$Ly;
Lt <- inputData$Lt;
# Set the options structure members that are still NULL
optns = SetOptions(Ly, Lt, optns);
# Check the options validity for the PCA function.
numOfCurves = length(Ly);
CheckOptions(Lt, optns,numOfCurves)
# Bin the data
if ( optns$useBinnedData != 'OFF'){
BinnedDataset <- GetBinnedDataset(Ly,Lt,optns)
Ly = BinnedDataset$newy;
Lt = BinnedDataset$newt;
optns[['nRegGrid']] <- min(optns[['nRegGrid']],
BinnedDataset[['numBins']])
inputData$Ly <- Ly
inputData$Lt <- Lt
}
# Generate basic grids:
# obsGrid: the unique sorted pooled time points of the sample and the new
# data
# regGrid: the grid of time points for which the smoothed covariance
# surface assumes values
# cutRegGrid: truncated grid specified by optns$outPercent for the cov
# functions
obsGrid = sort(unique( c(unlist(Lt))));
regGrid = seq(min(obsGrid), max(obsGrid),length.out = optns$nRegGrid);
outPercent <- optns$outPercent
buff <- .Machine$double.eps * max(abs(obsGrid)) * 10
rangeGrid <- range(regGrid)
minGrid <- rangeGrid[1]
maxGrid <- rangeGrid[2]
cutRegGrid <- regGrid[regGrid > minGrid + diff(rangeGrid) * outPercent[1] -
buff &
regGrid < minGrid + diff(rangeGrid) * outPercent[2] +
buff]
## Mean function
# If the user provided a mean function use it
userMu <- optns$userMu
if ( is.list(userMu) && (length(userMu$mu) == length(userMu$t))){
smcObj <- GetUserMeanCurve(optns, obsGrid, regGrid, buff)
smcObj$muDense = ConvertSupport(obsGrid, regGrid, mu = smcObj$mu)
} else if (optns$methodMuCovEst == 'smooth') { # smooth mean
smcObj = GetSmoothedMeanCurve(Ly, Lt, obsGrid, regGrid, optns)
} else if (optns$methodMuCovEst == 'cross-sectional') { # cross-sectional mean
ymat = List2Mat(Ly,Lt)
smcObj = GetMeanDense(ymat, obsGrid, optns)
}
# mu: the smoothed mean curve evaluated at times 'obsGrid'
mu <- smcObj$mu
## Covariance function and sigma2
if (!is.null(optns$userCov) && optns$methodMuCovEst != 'smooth') {
scsObj <- GetUserCov(optns, obsGrid, cutRegGrid, buff, ymat)
} else if (optns$methodMuCovEst == 'smooth') {
# smooth cov and/or sigma2
scsObj = GetSmoothedCovarSurface(Ly, Lt, mu, obsGrid, regGrid, optns,
optns$useBinnedCov)
} else if (optns$methodMuCovEst == 'cross-sectional') {
scsObj = GetCovDense(ymat, mu, optns)
if (length(obsGrid) != length(cutRegGrid) || !isTRUE(all.equal(obsGrid, cutRegGrid))) {
scsObj$smoothCov = ConvertSupport(obsGrid, cutRegGrid, Cov =
scsObj$smoothCov)
}
scsObj$outGrid <- cutRegGrid
}
sigma2 <- scsObj[['sigma2']]
# workGrid: possibly truncated version of the regGrid
workGrid <- scsObj$outGrid
# convert mu to truncated workGrid
muWork <- ConvertSupport(obsGrid, toGrid = workGrid, mu=smcObj$mu)
# Get the results for the eigen-analysis
eigObj = GetEigenAnalysisResults(smoothCov = scsObj$smoothCov, workGrid, optns, muWork = muWork)
# Truncated obsGrid, and observations. Empty observation due to truncation has length 0.
truncObsGrid <- obsGrid
if (!all(abs(optns$outPercent - c(0, 1)) < .Machine$double.eps * 2)) {
truncObsGrid <- truncObsGrid[truncObsGrid >= min(workGrid) - buff &
truncObsGrid <= max(workGrid) + buff]
tmp <- TruncateObs(Ly, Lt, truncObsGrid)
Ly <- tmp$Ly
Lt <- tmp$Lt
}
# convert phi and fittedCov to obsGrid.
muObs <- ConvertSupport(obsGrid, truncObsGrid, mu=mu)
phiObs <- ConvertSupport(workGrid, truncObsGrid, phi=eigObj$phi)
if (optns$methodXi == 'CE') {
CovObs <- ConvertSupport(workGrid, truncObsGrid, Cov=eigObj$fittedCov)
}
# Get scores
if (optns$methodXi == 'CE') {
if (optns$rho != 'no') {
if( length(Ly) > 2048 ){
randIndx <- sample( length(Ly), 2048)
rho <- GetRho(Ly[randIndx], Lt[randIndx], optns, muObs, truncObsGrid, CovObs, eigObj$lambda, phiObs, sigma2)
} else {
rho <- GetRho(Ly, Lt, optns, muObs, truncObsGrid, CovObs, eigObj$lambda, phiObs, sigma2)
}
sigma2 <- rho
}
scoresObj <- GetCEScores(Ly, Lt, optns, muObs, truncObsGrid, CovObs, eigObj$lambda, phiObs, sigma2)
} else if (optns$methodXi == 'IN') {
ymat = List2Mat(Ly,Lt)
scoresObj <- GetINScores(ymat, Lt, optns, muObs, eigObj$lambda, phiObs, sigma2)
}
if (optns$fitEigenValues) {
fitLambda <- FitEigenValues(scsObj$rcov, workGrid, eigObj$phi, optns$maxK)
} else {
fitLambda <- NULL
}
# Make the return object by MakeResultFPCA
ret <- MakeResultFPCA(optns, smcObj, muObs, scsObj, eigObj,
inputData = inputData,
scoresObj, truncObsGrid, workGrid,
rho = if (optns$methodXi == 'CE' && optns$rho == 'cv') rho else NULL,
fitLambda=fitLambda)
# select number of components based on specified criterion
if(ret$optns$lean == TRUE){
selectedK <- SelectK(fpcaObj = ret, criterion = optns$methodSelectK, FVEthreshold = optns$FVEthreshold,
Ly = Ly, Lt = Lt)
} else {
selectedK <- SelectK(fpcaObj = ret, criterion = optns$methodSelectK, FVEthreshold = optns$FVEthreshold)
}
ret <- append(ret, list(selectK = selectedK$K, criterionValue = selectedK$criterion))
class(ret) <- 'FPCA'
ret <- SubsetFPCA(fpcaObj = ret, K = ret$selectK)
# Plot the results
if(optns$plot){
plot.FPCA(ret)
}
return(ret);
}
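For the dense, error-free path ('cross-sectional' mean/covariance followed by the eigen-analysis), the core computation can be sketched in a few lines of base R. This is only an illustration of the principle, not the package's actual numerics (smoothing, grid conversion, and score estimation are all simplified away):

```r
set.seed(1)
pts <- seq(0, 1, length.out = 21)       # common dense grid
phi1 <- sqrt(2) * sin(pi * pts)         # two known modes of variation
phi2 <- sqrt(2) * sin(2 * pi * pts)
scores <- cbind(rnorm(50, sd = 2), rnorm(50, sd = 1))
X <- scores %*% rbind(phi1, phi2)       # 50 noiseless curves on the grid

C <- cov(X)                             # cross-sectional covariance surface
eig <- eigen(C, symmetric = TRUE)
lambda <- eig$values * (pts[2] - pts[1])  # Riemann-sum scaling of eigenvalues
FVE <- cumsum(lambda) / sum(lambda)       # fraction of variance explained
FVE[2]  # close to 1: two components recover essentially all variance
```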
# Source: HBGDki/tPACE, path /R/FPCA.R (R, no_license, 14,386 bytes)
#' Functional Principal Component Analysis
#'
#' FPCA for dense or sparse functional data.
#'
#' @param Ly A list of \emph{n} vectors containing the observed values for each individual. Missing values specified by \code{NA}s are supported for dense case (\code{dataType='dense'}).
#' @param Lt A list of \emph{n} vectors containing the observation time points for each individual corresponding to y. Each vector should be sorted in ascending order.
#' @param optns A list of options control parameters specified by \code{list(name=value)}. See `Details'.
#'
#' @details If the input is sparse data, make sure you check the design plot is dense and the 2D domain is well covered, using \code{plot} or \code{CreateDesignPlot}. Some study design such as snippet data (each subject is observed only on a sub-interval of the period of study) will have an ill-covered design plot, for which the covariance estimate will be unreliable.
#'
#' Available control options are
#' \describe{
#' \item{userBwCov}{The bandwidth value for the smoothed covariance function; positive numeric - default: determine automatically based on 'methodBwCov'}
#' \item{methodBwCov}{The bandwidth choice method for the smoothed covariance function; 'GMeanAndGCV' (the geometric mean of the GCV bandwidth and the minimum bandwidth),'CV','GCV' - default: 10\% of the support}
#' \item{userBwMu}{The bandwidth value for the smoothed mean function (using 'CV' or 'GCV'); positive numeric - default: determine automatically based on 'methodBwMu'}
#' \item{methodBwMu}{The bandwidth choice method for the mean function; 'GMeanAndGCV' (the geometric mean of the GCV bandwidth and the minimum bandwidth),'CV','GCV' - default: 5\% of the support}
#' \item{dataType}{The type of design we have (usually distinguishing between sparse or dense functional data); 'Sparse', 'Dense', 'DenseWithMV', 'p>>n' - default: determine automatically based on 'IsRegular'}
#' \item{diagnosticsPlot}{Deprecated. Same as the option 'plot'}
#' \item{plot}{Plot FPCA results (design plot, mean, scree plot and first K (<=3) eigenfunctions); logical - default: FALSE}
#' \item{error}{Assume measurement error in the dataset; logical - default: TRUE}
#' \item{fitEigenValues}{Whether also to obtain a regression fit of the eigenvalues - default: FALSE}
#' \item{FVEthreshold}{Fraction-of-Variance-Explained threshold used during the SVD of the fitted covariance function; numeric (0,1] - default: 0.9999}
#' \item{kernel}{Smoothing kernel choice, common for mu and covariance; "rect", "gauss", "epan", "gausvar", "quar" - default: "gauss"; dense data are assumed noise-less so no smoothing is performed. }
#' \item{kFoldMuCov}{The number of folds to be used for mean and covariance smoothing. Default: 10}
#' \item{lean}{If TRUE the 'inputData' field in the output list is empty. Default: FALSE}
#' \item{maxK}{The maximum number of principal components to consider - default: min(20, N-1), N:# of curves}
#' \item{methodXi}{The method to estimate the PC scores; 'CE' (Condit. Expectation), 'IN' (Numerical Integration) - default: 'CE' for sparse data, 'IN' for dense data.}
#' \item{methodMuCovEst}{The method to estimate the mean and covariance in the case of dense functional data; 'cross-sectional', 'smooth' - default: 'cross-sectional'}
#' \item{nRegGrid}{The number of support points in each direction of covariance surface; numeric - default: 51}
#' \item{numBins}{The number of bins to bin the data into; positive integer > 10, default: NULL}
#' \item{methodSelectK}{The method of choosing the number of principal components K; 'FVE','AIC','BIC', or a positive integer as the specified number of components - default: 'FVE'}
#' \item{shrink}{Whether to use shrinkage method to estimate the scores in the dense case (see Yao et al 2003) - default FALSE}
#' \item{outPercent}{A 2-element vector in [0,1] indicating the outPercent data in the boundary - default (0,1)}
#' \item{rho}{The truncation threshold for the iterative residual. 'cv': choose rho by leave-one-observation out cross-validation; 'no': no regularization - default "cv" if error == TRUE, and "no" if error == FALSE.}
#' \item{rotationCut}{The 2-element vector in [0,1] indicating the percent of data truncated during sigma^2 estimation; default: (0.25, 0.75)}
#' \item{useBinnedData}{Should the data be binned? 'FORCE' (Enforce the # of bins), 'AUTO' (Select the # of bins automatically), 'OFF' (Do not bin) - default: 'AUTO'}
#' \item{useBinnedCov}{Whether to use the binned raw covariance for smoothing; logical - default:TRUE}
#' \item{userCov}{The user-defined smoothed covariance function; list of two elements: numerical vector 't' and matrix 'cov', 't' must cover the support defined by 'Ly' - default: NULL}
#' \item{userMu}{The user-defined smoothed mean function; list of two numerical vectors 't' and 'mu' of equal size, 't' must cover the support defined by 'Ly' - default: NULL}
#' \item{userSigma2}{The user-defined measurement error variance. A positive scalar. If specified then no regularization is used (rho is set to 'no', unless specified otherwise). Default to `NULL`}
#' \item{verbose}{Display diagnostic messages; logical - default: FALSE}
#' }
#' @return A list containing the following fields:
#' \item{sigma2}{Variance of the measurement error.}
#' \item{lambda}{A vector of length \emph{K} containing eigenvalues.}
#' \item{phi}{An nWorkGrid by \emph{K} matrix containing eigenfunctions, supported on workGrid.}
#' \item{xiEst}{An \emph{n} by \emph{K} matrix containing the FPC estimates.}
#' \item{xiVar}{A list of length \emph{n}, each entry containing the variance estimates for the FPC estimates.}
#' \item{obsGrid}{The (sorted) grid points where all observation points are pooled.}
#' \item{mu}{A vector of length nWorkGrid containing the mean function estimate.}
#' \item{workGrid}{A vector of length nWorkGrid. The internal regular grid on which the eigen analysis is carried out.}
#' \item{smoothedCov}{A nWorkGrid by nWorkGrid matrix of the smoothed covariance surface.}
#' \item{fittedCov}{A nWorkGrid by nWorkGrid matrix of the fitted covariance surface, which is guaranteed to be non-negative definite.}
#' \item{optns}{A list of actually used options.}
#' \item{bwMu}{The selected (or user specified) bandwidth for smoothing the mean function.}
#' \item{bwCov}{The selected (or user specified) bandwidth for smoothing the covariance function.}
#' \item{rho}{A regularizing scalar for the measurement error variance estimate.}
#' \item{cumFVE}{A vector of the cumulative fractions of total variance explained by the leading FPCs; it increases toward 1.}
#' \item{FVE}{A percentage indicating the total variance explained by chosen FPCs with corresponding 'FVEthreshold'.}
#' \item{criterionValue}{A scalar giving the criterion value obtained for the selected number of components under the specified methodSelectK: the FVE, AIC, or BIC value, or NULL when a fixed K is given.}
#' \item{inputData}{A list containing the original 'Ly' and 'Lt' lists used as inputs to FPCA. NULL if 'lean' was specified to be TRUE.}
#' @examples
#' set.seed(1)
#' n <- 20
#' pts <- seq(0, 1, by=0.05)
#' sampWiener <- Wiener(n, pts)
#' sampWiener <- Sparsify(sampWiener, pts, 10)
#' res <- FPCA(sampWiener$Ly, sampWiener$Lt,
#' list(dataType='Sparse', error=FALSE, kernel='epan', verbose=TRUE))
#' plot(res) # The design plot covers [0, 1] * [0, 1] well.
#' CreateCovPlot(res, 'Fitted')
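#' # A hedged follow-up (illustrative, not part of the original example):
#' # inspect the variance decomposition on the returned object.
#' res$cumFVE  # cumulative fraction of variance explained per component
#' res$selectK # number of components chosen via 'methodSelectK'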
#' @references
#' \cite{Yao, F., Mueller, H.G., Clifford, A.J., Dueker, S.R., Follett, J., Lin, Y., Buchholz, B., Vogel, J.S. (2003). "Shrinkage estimation for functional principal component scores, with application to the population kinetics of plasma folate." Biometrics 59, 676-685. (Shrinkage estimates for dense data)}
#'
#' \cite{Yao, Fang, Hans-Georg Mueller, and Jane-Ling Wang. "Functional data analysis for sparse longitudinal data." Journal of the American Statistical Association 100, no. 470 (2005): 577-590. (Sparse data FPCA)}
#'
#' \cite{Liu, Bitao, and Hans-Georg Mueller. "Estimating derivatives for samples of sparsely observed functions, with application to online auction dynamics." Journal of the American Statistical Association 104, no. 486 (2009): 704-717. (Sparse data FPCA)}
#'
#' \cite{Castro, P. E., W. H. Lawton, and E. A. Sylvestre. "Principal modes of variation for processes with continuous sample curves." Technometrics 28, no. 4 (1986): 329-337. (Dense data FPCA)}
#' @export
FPCA = function(Ly, Lt, optns = list()){
# Check the data validity for further analysis
CheckData(Ly,Lt)
# Force the data to be list of numeric members and handle NA's
#Ly <- lapply(Ly, as.numeric)
#Lt <- lapply(Lt, as.numeric)
#Lt <- lapply(Lt, signif, 14)
#inputData <- list(Ly=Ly, Lt=Lt);
inputData <- HandleNumericsAndNAN(Ly,Lt);
Ly <- inputData$Ly;
Lt <- inputData$Lt;
# Set the options structure members that are still NULL
optns = SetOptions(Ly, Lt, optns);
# Check the options validity for the PCA function.
numOfCurves = length(Ly);
CheckOptions(Lt, optns,numOfCurves)
# Bin the data
if ( optns$useBinnedData != 'OFF'){
BinnedDataset <- GetBinnedDataset(Ly,Lt,optns)
Ly = BinnedDataset$newy;
Lt = BinnedDataset$newt;
optns[['nRegGrid']] <- min(optns[['nRegGrid']],
BinnedDataset[['numBins']])
inputData$Ly <- Ly
inputData$Lt <- Lt
}
# Generate basic grids:
# obsGrid: the unique sorted pooled time points of the sample and the new
# data
# regGrid: the grid of time points for which the smoothed covariance
# surface assumes values
# cutRegGrid: truncated grid specified by optns$outPercent for the cov
# functions
obsGrid = sort(unique( c(unlist(Lt))));
regGrid = seq(min(obsGrid), max(obsGrid),length.out = optns$nRegGrid);
outPercent <- optns$outPercent
buff <- .Machine$double.eps * max(abs(obsGrid)) * 10
rangeGrid <- range(regGrid)
minGrid <- rangeGrid[1]
maxGrid <- rangeGrid[2]
cutRegGrid <- regGrid[regGrid > minGrid + diff(rangeGrid) * outPercent[1] -
buff &
regGrid < minGrid + diff(rangeGrid) * outPercent[2] +
buff]
## Mean function
# If the user provided a mean function use it
userMu <- optns$userMu
if ( is.list(userMu) && (length(userMu$mu) == length(userMu$t))){
smcObj <- GetUserMeanCurve(optns, obsGrid, regGrid, buff)
smcObj$muDense = ConvertSupport(obsGrid, regGrid, mu = smcObj$mu)
} else if (optns$methodMuCovEst == 'smooth') { # smooth mean
smcObj = GetSmoothedMeanCurve(Ly, Lt, obsGrid, regGrid, optns)
} else if (optns$methodMuCovEst == 'cross-sectional') { # cross-sectional mean
ymat = List2Mat(Ly,Lt)
smcObj = GetMeanDense(ymat, obsGrid, optns)
}
# mu: the smoothed mean curve evaluated at times 'obsGrid'
mu <- smcObj$mu
## Covariance function and sigma2
if (!is.null(optns$userCov) && optns$methodMuCovEst != 'smooth') {
scsObj <- GetUserCov(optns, obsGrid, cutRegGrid, buff, ymat)
} else if (optns$methodMuCovEst == 'smooth') {
# smooth cov and/or sigma2
scsObj = GetSmoothedCovarSurface(Ly, Lt, mu, obsGrid, regGrid, optns,
optns$useBinnedCov)
} else if (optns$methodMuCovEst == 'cross-sectional') {
scsObj = GetCovDense(ymat, mu, optns)
if (length(obsGrid) != length(cutRegGrid) || !isTRUE(all.equal(obsGrid, cutRegGrid))) {
scsObj$smoothCov = ConvertSupport(obsGrid, cutRegGrid, Cov =
scsObj$smoothCov)
}
scsObj$outGrid <- cutRegGrid
}
sigma2 <- scsObj[['sigma2']]
# workGrid: possibly truncated version of the regGrid
workGrid <- scsObj$outGrid
# convert mu to truncated workGrid
muWork <- ConvertSupport(obsGrid, toGrid = workGrid, mu=smcObj$mu)
# Get the results for the eigen-analysis
eigObj = GetEigenAnalysisResults(smoothCov = scsObj$smoothCov, workGrid, optns, muWork = muWork)
# Truncated obsGrid, and observations. Empty observation due to truncation has length 0.
truncObsGrid <- obsGrid
if (!all(abs(optns$outPercent - c(0, 1)) < .Machine$double.eps * 2)) {
truncObsGrid <- truncObsGrid[truncObsGrid >= min(workGrid) - buff &
truncObsGrid <= max(workGrid) + buff]
tmp <- TruncateObs(Ly, Lt, truncObsGrid)
Ly <- tmp$Ly
Lt <- tmp$Lt
}
# convert mu, phi and fittedCov to the truncated obsGrid.
muObs <- ConvertSupport(obsGrid, truncObsGrid, mu=mu)
phiObs <- ConvertSupport(workGrid, truncObsGrid, phi=eigObj$phi)
if (optns$methodXi == 'CE') {
CovObs <- ConvertSupport(workGrid, truncObsGrid, Cov=eigObj$fittedCov)
}
# Get scores
if (optns$methodXi == 'CE') {
if (optns$rho != 'no') {
if( length(Ly) > 2048 ){
randIndx <- sample( length(Ly), 2048)
rho <- GetRho(Ly[randIndx], Lt[randIndx], optns, muObs, truncObsGrid, CovObs, eigObj$lambda, phiObs, sigma2)
} else {
rho <- GetRho(Ly, Lt, optns, muObs, truncObsGrid, CovObs, eigObj$lambda, phiObs, sigma2)
}
sigma2 <- rho
}
scoresObj <- GetCEScores(Ly, Lt, optns, muObs, truncObsGrid, CovObs, eigObj$lambda, phiObs, sigma2)
} else if (optns$methodXi == 'IN') {
ymat = List2Mat(Ly,Lt)
scoresObj <- GetINScores(ymat, Lt, optns, muObs, eigObj$lambda, phiObs, sigma2)
}
if (optns$fitEigenValues) {
fitLambda <- FitEigenValues(scsObj$rcov, workGrid, eigObj$phi, optns$maxK)
} else {
fitLambda <- NULL
}
# Make the return object by MakeResultFPCA
ret <- MakeResultFPCA(optns, smcObj, muObs, scsObj, eigObj,
inputData = inputData,
scoresObj, truncObsGrid, workGrid,
rho = if (optns$rho =='cv') rho else NULL,
fitLambda=fitLambda)
# select number of components based on specified criterion
if(ret$optns$lean == TRUE){
selectedK <- SelectK(fpcaObj = ret, criterion = optns$methodSelectK, FVEthreshold = optns$FVEthreshold,
Ly = Ly, Lt = Lt)
} else {
selectedK <- SelectK(fpcaObj = ret, criterion = optns$methodSelectK, FVEthreshold = optns$FVEthreshold)
}
ret <- append(ret, list(selectK = selectedK$K, criterionValue = selectedK$criterion))
class(ret) <- 'FPCA'
ret <- SubsetFPCA(fpcaObj = ret, K = ret$selectK)
# Plot the results
if(optns$plot){
plot.FPCA(ret)
}
return(ret);
}
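# A minimal standalone sketch (the helper name demoCutRegGrid is ours, not
# called by FPCA itself) of the outPercent truncation applied to regGrid
# inside FPCA above: keep only the grid points lying within the requested
# fraction of the support, using a small numerical buffer against
# floating-point edge effects.
demoCutRegGrid <- function(regGrid, outPercent = c(0, 1)) {
  buff <- .Machine$double.eps * max(abs(regGrid)) * 10
  rangeGrid <- range(regGrid)
  minGrid <- rangeGrid[1]
  regGrid[regGrid > minGrid + diff(rangeGrid) * outPercent[1] - buff &
            regGrid < minGrid + diff(rangeGrid) * outPercent[2] + buff]
}
# demoCutRegGrid(seq(0, 1, length.out = 5), c(0.25, 0.75)) keeps 0.25, 0.50, 0.75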
\name{plot.cv.compLasso}
\alias{plot.cv.compLasso}
\title{
Plot the component lasso cross-validation curve returned by cv.compLasso.
}
\description{
This function plots the cross-validation curve versus the lambda values, along with the upper/lower corresponding standard deviation curves.
}
\usage{
\method{plot}{cv.compLasso}(x, sign.lambda = 1, ...)
}
\arguments{
\item{x}{
Fitted "compLasso" object.
}
\item{sign.lambda}{
Plots the curve against the log(lambda) values or its negative (if sign.lambda=-1).}
\item{\dots}{
Other graphical parameters to plot.
}
}
\references{Hussami, N and Tibshirani, R (2013) A Component Lasso.}
\author{FORTRAN code by Jerry Friedman. R interface by Nadine Hussami and Robert Tibshirani}
\note{The FORTRAN code that this function links to was kindly written
and provided by Jerry Friedman. This function is taken from the glmnet package.
}
\seealso{ cv.compLasso}
\examples{
set.seed(1000)
n=200
p=50
k=5
clus=rep(1:k,p/k)
x=matrix(rnorm(n*p),ncol=p)
b0=rep(0,p)
b0[clus<5]=3
y=x\%*\%b0+.1*rnorm(n)
foldid=c(rep(2,40),rep(1,40),rep(4,40),rep(5,40),rep(3,40))
a=cv.compLasso(x, y, clus, parm=1, foldid=foldid, type.measure = "mse", nfolds = 5)
print(a$cvm)
plot(a)
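#Optionally flip the x-axis to plot against -log(lambda), using the
#documented sign.lambda argument:
plot(a, sign.lambda = -1)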
}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Electivity.R
\name{Electivity}
\alias{Electivity}
\title{Calculates a variety of electivity indices and foraging ratios}
\usage{
Electivity(
Diet,
Available,
Indices = c("ForageRatio", "Ivlev", "Strauss", "JacobsQ", "JacobsD", "Chesson",
"VanderploegScavia"),
LogQ = TRUE,
CalcAbundance = FALSE,
Depleting = FALSE
)
}
\arguments{
\item{Diet}{Data frame with data corresponding to consumed resources found in the diet. See details for formatting.}
\item{Available}{Data frame with data corresponding to the available resources. See details for formatting.}
\item{Indices}{Character vector containing the names of the desired indices to calculate. See description for information on available indices.}
\item{LogQ}{Logical. If true, should Jacobs' Q be logged? This is the recommendation of Jacobs (1974). Default is TRUE, following that recommendation.}
\item{CalcAbundance}{Logical. If true, are the data raw and would you like to calculate the relative abundance? Relative abundance will be calculated for both diet and available given the row sums of the supplied Diet and Available parameter inputs. Default is False, i.e. relative abundances are not calculated.}
\item{Depleting}{Logical. If true, calculates Chesson's Case 2, where food depletion occurs and the available food is not constant. Default is False.}
}
\value{
Object of class Electivity which is a list containing data frames for each electivity index selected.
}
\description{
This function calculates the forage ratio and a variety of electivity indices. Included indices include Ivlev's (1961),
Strauss' (1979), Jacob's Q and D (1974), Chesson's (1983)(Which is similar to Manly's Alpha (1974)), and Vanderploeg & Scavia (1979).
}
\details{
This function calculates one or multiple electivity indices for one or more diet records for which one or more records of prey availability exist. For example,
it is possible to calculate multiple indices for multiple diet records that may be from a number of sites with different prey
availability, all in one call of the function (see the example). Specifically, this function measures the following indices (and their input for the Indices argument):
Ivlev's (1961) Forage Ratio ("ForageRatio") and electivity ("Ivlev"), Strauss'(1979) ("Strauss"), Jacobs (1974) Q ("JacobsQ") and D ("JacobsD"), Chesson (1983)(Which is similar to Manly's Alpha (1974))("Chesson"),
and Vanderploeg & Scavia (1979) ("VanderploegScavia"). For those wishing to calculate Vanderploeg and Scavia's selectivity coefficient (W), please select "Chesson" as an argument for indices, which
will calculate Chesson's alpha, which is identical to Vanderploeg and Scavia's selectivity coefficient (W).
The function takes two data frames as input. The first argument, Diet, should be formatted as follows. Each row in the data frame should be a diet record.
The first column should contain the name of the record to be calculated. The second column should contain the name linking the consumed prey in Diet to that in Available (example, name of the different habitats, note in our example it is "Year"), which will be described below.
All remaining columns should contain the abundance or relative abundance of the prey in the diet. These columns should also be named so they can be matched to those in Available. The second data frame, Available should be formatted similar to Diet where each row describes a unique record for available prey.
The remaining columns should contain the abundance or relative abundance of the prey that are available to be consumed. These columns should also be named so they can be matched to those in Diet. These indices typically utilize relative abundances, with relative abundance represented by a decimal (i.e., the values sum to 1). Users can supply raw data that are not in relative abundances and use the
CalcAbundance option to calculate relative abundances if they wish to do so. By default, CalcAbundance is set to FALSE and assumes the supplied data are already in relative abundance, so users wishing to calculate relative abundance should use CalcAbundance = TRUE.
Indices are bounded by the following values. Ivlev's, Strauss', Jacobs' D, and Vanderploeg & Scavia's indices are bounded between -1 and 1, with values closer to -1 representing avoided items, 0 random feeding, and 1 preferred items.
Forage ratio values range between 1 and infinity for preferred items while values between 0 and 1 represent avoided prey. Similar to the forage ratio, Jacobs' Q ranges
between 1 and infinity for preferred items and 0 and 1 for avoided prey; however, log10(Q) is preferred as it provides
the advantage of equal ranges, from -infinity to +infinity for avoidance and preference respectively. This option can be selected in the function with the LogQ argument, which by default is set to TRUE.
Finally, Chesson's index ranges between 0 and 1 and preference is typically assessed using 1/n, where n is the number of prey types. The value of 1/n represents random feeding while values
above and below 1/n represent preference and avoidance respectively. For Chesson's index, users can also specify if the available resources are
depleting, in which case the equation from case 2 of Chesson, 1983 is calculated. Note, this takes the log of (p-r)/p and values of 0 or negatives will return NaN.
}
\examples{
#Load Electivity Data from Horn 1982
data(Horn1982)
#Run all electivity indices
my.indices <- Electivity(Diet = Horn1982$Consumed, Available = Horn1982$Available, Indices =
c("ForageRatio","Ivlev","Strauss","JacobsQ","JacobsD","Chesson","VanderploegScavia"),LogQ = TRUE,
CalcAbundance = FALSE, Depleting = FALSE)
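#A hedged by-hand check (values chosen purely for illustration): Ivlev's
#electivity is E = (r - p)/(r + p), with r the relative abundance of a prey
#type in the diet and p its relative abundance in the environment.
r <- 0.40; p <- 0.25
(r - p)/(r + p) #about 0.23, i.e. mild preference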
}
\references{
Chesson, J. 1983. The estimation and analysis of preference and its relationship to foraging models. Ecology 64:1297-1304.
Ivlev, U. 1961. Experimental ecology of the feeding of fish. Yale University Press, New Haven.
Jacobs, J. 1974. Quantitative measurement of food selection. Oecologia 14:413-417.
Manly, B. 1974. A model for certain types of selection experiments. Biometrics 30:281-294.
Strauss, R. E. 1979. Reliability Estimates for Ivlev's Electivity Index, the Forage Ratio, and a Proposed Linear Index of Food Selection. Transactions of the American Fisheries Society 108:344-352.
Vanderploeg, H., and D. Scavia. 1979. Two electivity indices for feeding with special reference to zooplankton grazing. Journal of the Fisheries Board of Canada 36:362-365.
}
\seealso{
\code{\link{PlotElectivity}}
}
\author{
Samuel Borstein
}
#Notes about what we want. For injuries by geography --- rate of injury? Can we find participation rates by sport?
#All over time, if possible
#bar chart of total injuries --
#line graph with injuries by quarter by sport.
#line graph with injuries by gender and sport -- specifically, baseball, basketball, soccer, lacrosse, cheer.
#concussions by sport over time / concussions as a percentage of all injuries
#overall cost/average cost for sports
#baseball arm injuries over time
#Concussions in hockey
#lower extremity injuries in Football -
#hockey, soccer, football, lax -- by gender over time
#cardiac arrest and death by sport over time
library(tidyverse)
library(zoo)
library(zipcode)
#here we create variables and read in each table Julia created, putting every column except the total charge column in a standard character format.
BASEBALL_IPALL_10 <- read_csv("BASEBALL_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
#we're also adding a column called sport for sorting/grouping purposes
BASEBALL_IPALL_10["sport"]<-"baseball"
#Forces the prindiag column to be character, even though it's all numbers, because sometimes the column has numbers and letters.
BASEBALL_IPALL_9 <- read_csv("BASEBALL_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASEBALL_IPALL_9["sport"]<-"baseball"
BASEBALL_OPALL_10 <- read_csv("BASEBALL_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASEBALL_OPALL_10["sport"]<-"baseball"
BASEBALL_OPALL_9 <- read_csv("BASEBALL_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASEBALL_OPALL_9["sport"]<-"baseball"
BASKETBALL_IPALL_10 <- read_csv("BASKETBALL_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASKETBALL_IPALL_10["sport"]<-"basketball"
BASKETBALL_IPALL_9 <- read_csv("BASKETBALL_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASKETBALL_IPALL_9["sport"]<-"basketball"
BASKETBALL_OPALL_10 <- read_csv("BASKETBALL_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASKETBALL_OPALL_10["sport"]<-"basketball"
BASKETBALL_OPALL_9 <- read_csv("BASKETBALL_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASKETBALL_OPALL_9["sport"]<-"basketball"
CHEER_OPALL_10 <- read_csv("CHEER_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
CHEER_OPALL_10["sport"]<-"cheer"
CHEER_OPALL_9 <- read_csv("CHEER_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
CHEER_OPALL_9["sport"]<-"cheer"
CHEER_IPALL_9 <- read_csv("CHEER_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
CHEER_IPALL_9["sport"]<-"cheer"
CHEER_IPALL_10<- read_csv("CHEERLEADING_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
CHEER_IPALL_10["sport"]<-"cheer"
FOOTBALL_IPALL_10 <- read_csv("FOOTBALL_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
FOOTBALL_IPALL_10["sport"]<-"football"
FOOTBALL_IPALL_9 <- read_csv("FOOTBALL_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
FOOTBALL_IPALL_9["sport"]<-"football"
FOOTBALL_OPALL_9 <- read_csv("FOOTBALL_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
FOOTBALL_OPALL_9["sport"]<-"football"
FOOTBALL_OPALL_10 <- read_csv("FOOTBALL_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
FOOTBALL_OPALL_10["sport"]<-"football"
GYMNASTICS_OPALL_10 <- read_csv("GYMNASTICS_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
GYMNASTICS_OPALL_10["sport"]<-"gymnastics"
GYMNASTICS_OPALL_9 <- read_csv("GYMNASTICS_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
GYMNASTICS_OPALL_9["sport"]<-"gymnastics"
GYMNASTICS_IPALL_9 <- read_csv("GYMNASTICS_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
GYMNASTICS_IPALL_9["sport"]<-"gymnastics"
ICEHOCKEY_IPALL_10 <- read_csv("ICEHOCKEY_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
ICEHOCKEY_IPALL_10["sport"]<-"hockey"
ICEHOCKEY_IPALL_9 <- read_csv("ICEHOCKEY_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
ICEHOCKEY_IPALL_9["sport"]<-"hockey"
ICEHOCKEY_OPALL_10 <- read_csv("ICEHOCKEY_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
ICEHOCKEY_OPALL_10["sport"]<-"hockey"
ICEHOCKEY_OPALL_9 <- read_csv("ICEHOCKEY_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
ICEHOCKEY_OPALL_9["sport"]<-"hockey"
LAXFH_IPALL_9 <- read_csv("LAXFH_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
LAXFH_IPALL_9["sport"]<-"lax/fieldhockey"
LAXFH_OPALL_10 <- read_csv("LAXFH_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
LAXFH_OPALL_10["sport"]<-"lax/fieldhockey"
LAXFH_OPALL_9 <- read_csv("LAXFH_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
LAXFH_OPALL_9["sport"]<-"lax/fieldhockey"
SOCCER_OPALL_10 <- read_csv("SOCCER_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
SOCCER_OPALL_10["sport"]<-"soccer"
SOCCER_OPALL_9 <- read_csv("SOCCER_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
SOCCER_OPALL_9["sport"]<-"soccer"
SOCCER_IPALL_10 <- read_csv("SOCCER_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
SOCCER_IPALL_10["sport"]<-"soccer"
SOCCER_IPALL_9 <- read_csv("SOCCER_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
SOCCER_IPALL_9["sport"]<-"soccer"
TRACK_IPALL_9 <- read_csv("TRACK_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
TRACK_IPALL_9["sport"]<-"track"
TRACK_OPALL_10 <- read_csv("TRACK_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
TRACK_OPALL_10["sport"]<-"track"
TRACK_OPALL_9 <- read_csv("TRACK_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
TRACK_OPALL_9["sport"]<-"track"
#now it's time to stack the per-sport tables into two combined tables.
#this is the template that the read_csv calls above are built on:
#x <- read_csv("file.csv", col_types = cols(
# A = col_double(),
# B = col_logical(),
# C = col_factor()))
ipall_all <- bind_rows(BASEBALL_IPALL_10, BASEBALL_IPALL_9, BASKETBALL_IPALL_10, BASKETBALL_IPALL_9, CHEER_IPALL_10, CHEER_IPALL_9, FOOTBALL_IPALL_10, FOOTBALL_IPALL_9, GYMNASTICS_IPALL_9, ICEHOCKEY_IPALL_10, ICEHOCKEY_IPALL_9, LAXFH_IPALL_9, SOCCER_IPALL_10, SOCCER_IPALL_9, TRACK_IPALL_9)
opall_all <- bind_rows( BASEBALL_OPALL_10, BASEBALL_OPALL_9, BASKETBALL_OPALL_10, BASKETBALL_OPALL_9, CHEER_OPALL_10, CHEER_OPALL_9, FOOTBALL_OPALL_9, FOOTBALL_OPALL_10, GYMNASTICS_OPALL_10, GYMNASTICS_OPALL_9, ICEHOCKEY_OPALL_10, ICEHOCKEY_OPALL_9, LAXFH_OPALL_10, LAXFH_OPALL_9, SOCCER_OPALL_10, SOCCER_OPALL_9, TRACK_OPALL_10, TRACK_OPALL_9)
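# bind_rows() stacks tables by column name rather than joining on a key,
# filling columns missing from one table with NA. A minimal sketch on toy
# tibbles (the values here are illustrative, not from the data):

```r
library(dplyr)

a <- tibble(TOT_CHG = 100, sport = "football")
b <- tibble(TOT_CHG = 250, sport = "soccer", DIAG1 = "8500")

stacked <- bind_rows(a, b)
# 2 rows; DIAG1 is NA for the football row because `a` has no DIAG1 column
```

# this is also why every file above was read with .default = "c":
# bind_rows() needs compatible column types across all the stacked tables.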
#Now we are just looking at all IP injuries and grouping them by city and sport.
ip_zip_sport <- ipall_all_with_zips %>%
group_by(city, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
ip_zip_sport[1:10,]
#now we're looking at all OP injuries and grouping them by city and sport
op_zip_sport <- opall_all_with_zips %>%
  group_by(city, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_zip_sport[1:10,]
#total chg ip for each sport by gender
ip_cost_sport_gender <- ipall_all_with_zips %>%
group_by(sport, SEX) %>%
  summarise(total_cost = sum(TOT_CHG, na.rm = TRUE), avg_cost = mean(TOT_CHG, na.rm = TRUE)) %>%
  arrange(desc(total_cost))
ip_cost_sport_gender[1:10,]
#total chg ip for each sport
ip_cost_sport <- ipall_all_with_zips %>%
group_by(sport) %>%
  summarise(total_cost = sum(TOT_CHG, na.rm = TRUE), avg_cost = mean(TOT_CHG, na.rm = TRUE)) %>%
  arrange(desc(total_cost))
ip_cost_sport[1:10,]
#total chg op for each sport by gender
op_cost_sport_gender <- opall_all_with_zips %>%
group_by(sport, SEX) %>%
  summarise(total_cost = sum(TOT_CHG, na.rm = TRUE), avg_cost = mean(TOT_CHG, na.rm = TRUE)) %>%
  arrange(desc(total_cost))
op_cost_sport_gender[1:10,]
#total chg op for each sport
op_cost_sport <- opall_all_with_zips %>%
group_by(sport) %>%
  summarise(total_cost = sum(TOT_CHG, na.rm = TRUE), avg_cost = mean(TOT_CHG, na.rm = TRUE)) %>%
  arrange(desc(total_cost))
op_cost_sport[1:10,]
# counts inpatient injuries by sex and sport
ip_all_gender <- ipall_all %>%
group_by(SEX, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_all_gender <- opall_all %>%
group_by(SEX, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_all_gender[1:10,]
#graph of OP injuries by sex and sport
ggplot(data=op_all_gender, aes(x=sport, y=count, fill=SEX)) +
  geom_bar(stat="identity", position="dodge") +
  xlab("Sport") +
  ylab("Number of injuries") +
  ggtitle("OP injuries by sex and sport")
#sorts the op table by sport and gender
op_all_gender <- opall_all %>%
group_by(SEX, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(sport))
#concussions, city, ankle sprains, count of injuries by sport, by sex, by race
#counts OP injuries by year and sport
op_year <- opall_all %>%
group_by(YEAR, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(YEAR))
op_age_check <- opall_all %>%
group_by(AGE_GROUP, YEAR, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(YEAR))
#creates a fancy line plot. It gives each sport a different color
ggplot(data=op_year, aes(x=YEAR, y=count, group=sport)) +
geom_line(aes(color=sport))+
geom_point(aes(color=sport))
ip_year <- ipall_all %>%
group_by(YEAR, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(YEAR))
#ok so year made a wonky graph. So now we're trying to zero in on quarter data.
#here we combine the year and quarter fields into one yearqtr value, using
#as.yearqtr() from library(zoo)...this should help us graph
ipall_all$NewDate <- as.yearqtr(paste(ipall_all$YEAR, ipall_all$QTR, sep = ' Q'))
#making sure it's a yearqtr field
class(ipall_all$NewDate)
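# a quick standalone check of the zoo conversion used above (the "2010"/"3"
# values here are hypothetical, not from the data):

```r
library(zoo)

# paste year and quarter into zoo's default "%Y Q%q" format, then parse
nd <- as.yearqtr(paste("2010", "3", sep = " Q"))
class(nd)   # "yearqtr"
format(nd)  # "2010 Q3"
```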
#and now we count and group before we graph.
ip_year_qtr<- ipall_all %>%
group_by(NewDate, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(NewDate))
#now we graph. Looks funny but looks right.
ggplot(data=ip_year_qtr, aes(x=NewDate, y=count, group=sport)) +
geom_line(aes(color=sport))+
geom_point(aes(color=sport)) +
scale_x_yearqtr(format="%YQ%q", n=5) +
xlab("Quarter") +
ylab("Number of injuries") +
ggtitle("IP Year by quarter")
#now let's do the same for the opall table
opall_all$NewDate <- as.yearqtr(paste(opall_all$YEAR, opall_all$QTR, sep = ' Q'))
#as.yearqtr() from library(zoo) makes them a yearqtr type...this should help us graph
#making sure it's a yearqtr field
class(opall_all$NewDate)
#and now we count and group before we graph.
op_year_qtr<- opall_all %>%
group_by(NewDate, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(NewDate))
#copy the combined op table into a plain data frame
opall_test <- data.frame(opall_all)
#To find out how many patients sustained concussions in each sport, we need to search a variety of columns for those injuries.
#So I devised a query that searches through all the diagnosis columns and returns a case where a concussion is sustained; the ICD-9 and ICD-10 codes for concussions are included.
#Then I summarized by gender and sport, and arranged by count.
concussions_op_all <- opall_all %>%
filter(grepl("S060X0A|8500|S060X9|8502|8505|8509", PRINDIAG) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG1) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG2) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG3) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG4) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG5) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG6) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG7) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG8)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG9)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG10)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG11)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG12)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG13)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG14)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG15)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG16)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG17)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG18)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG19)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG21)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG22)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG23)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG24)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG25)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG26)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG27)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG28)) %>%
group_by(sport, SEX) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_op_all[1:10,]
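# The 29-way grepl() chain above can be collapsed with dplyr's if_any(), which
# applies one predicate across many columns. A sketch on toy data (note:
# starts_with("DIAG") would also match DIAG20, which the chains above skip, so
# confirm that omission is intentional before switching):

```r
library(dplyr)

codes <- "S060X0A|8500|S060X9|8502|8505|8509"  # same concussion pattern as above

# toy stand-in for opall_all: rows 1 and 2 carry a concussion code, row 3 does not
toy <- tibble(
  PRINDIAG = c("8500", "V01", "E11"),
  DIAG1    = c(NA, "8505", NA),
  DIAG2    = c(NA, NA, "E88")
)

# one predicate applied across PRINDIAG and every DIAG column
hits <- toy %>%
  filter(if_any(c(PRINDIAG, starts_with("DIAG")), ~ grepl(codes, .x)))
```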
#we will also break concussions down by geography.
concussions_op_all_geography <- opall_all_with_zips %>%
filter(grepl("S060X0A|8500|S060X9|8502|8505|8509", PRINDIAG) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG1) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG2) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG3) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG4) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG5) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG6) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG7) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG8)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG9)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG10)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG11)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG12)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG13)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG14)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG15)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG16)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG17)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG18)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG19)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG21)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG22)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG23)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG24)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG25)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG26)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG27)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG28)) %>%
group_by(SAS_COUNTY) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_op_all_geography[1:10,]
#we will also break concussions down by geography.
concussions_ip_all_geography <- ipall_all_with_zips %>%
filter(grepl("S060X0A|8500|S060X9|8502|8505|8509", PRINDIAG) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG1) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG2) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG3) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG4) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG5) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG6) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG7) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG8)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG9)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG10)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG11)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG12)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG13)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG14)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG15)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG16)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG17)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG18)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG19)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG21)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG22)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG23)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG24)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG25)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG26)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG27)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG28)) %>%
group_by(SAS_COUNTY) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_ip_all_geography[1:10,]
#now let's look at concussions by race
concussions_op_all_race <- opall_all_with_zips %>%
filter(grepl("S060X0A|8500|S060X9|8502|8505|8509", PRINDIAG) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG1) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG2) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG3) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG4) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG5) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG6) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG7) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG8)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG9)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG10)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG11)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG12)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG13)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG14)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG15)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG16)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG17)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG18)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG19)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG21)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG22)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG23)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG24)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG25)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG26)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG27)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG28)) %>%
group_by(RACE, SAS_COUNTY) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_op_all_race[1:10,]
#we will also break concussions down by race
concussions_ip_all_race <- ipall_all_with_zips %>%
filter(grepl("S060X0A|8500|S060X9|8502|8505|8509", PRINDIAG) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG1) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG2) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG3) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG4) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG5) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG6) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG7) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG8)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG9)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG10)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG11)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG12)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG13)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG14)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG15)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG16)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG17)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG18)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG19)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG21)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG22)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG23)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG24)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG25)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG26)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG27)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG28)) %>%
group_by(RACE) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_ip_all_race[1:10,]
#we will also break concussions down by race
concussions_op_all_race <- opall_all_with_zips %>%
filter(grepl("S060X0A|8500|S060X9|8502|8505|8509", PRINDIAG) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG1) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG2) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG3) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG4) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG5) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG6) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG7) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG8)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG9)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG10)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG11)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG12)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG13)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG14)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG15)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG16)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG17)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG18)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG19)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG21)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG22)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG23)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG24)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG25)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG26)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG27)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG28)) %>%
group_by(RACE, sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_op_all_race[1:10,]
#summarizing all OP injuries by race
all_op_injuries_by_race <- opall_all_with_zips %>%
group_by(RACE) %>%
summarise(injuries_count = n()) %>%
arrange(desc(injuries_count))
all_op_injuries_by_race[1:10,]
#summarizing all IP injuries by race
all_ip_injuries_by_race <- ipall_all_with_zips %>%
group_by(RACE) %>%
summarise(injuries_count = n()) %>%
arrange(desc(injuries_count))
all_ip_injuries_by_race[1:10,]
#identifies all injuries by sport and sex.
all_op_injuries_by_sport_gender <- opall_all_with_zips %>%
group_by(sport, SEX) %>%
summarise(injuries_count = n()) %>%
arrange(desc(injuries_count))
write_csv(all_op_injuries_by_sport_gender, "all_op_injuries_by_sport_gender.csv")
#identifies all IP injuries by sport and sex.
all_ip_injuries_by_sport_gender <- ipall_all_with_zips %>%
group_by(sport, SEX) %>%
summarise(injuries_count = n()) %>%
arrange(desc(injuries_count))
write_csv(all_ip_injuries_by_sport_gender, "all_ip_injuries_by_sport_gender.csv")
ggplot(concussions_op_all, aes(x=sport, y=concussions_count, fill=SEX)) +
  geom_bar(position = "dodge", stat="identity")
#same thing going on here that I did above
concussions_ip_all <- ipall_all_with_zips %>%
filter(grepl("S060X0A|8500|S060X9|8502|8505|8509", PRINDIAG) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG1) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG2) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG3) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG4) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG5) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG6) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG7) | grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG8)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG9)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG10)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG11)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG12)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG13)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG14)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG15)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG16)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG17)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG18)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG19)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG21)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG22)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG23)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG24)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG25)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG26)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG27)| grepl("S060X0A|8500|S060X9|8502|8505|8509", DIAG28)) %>%
group_by(SEX) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_ip_all[1:10,]
#adds a new column that labels the injuries as ip
concussions_ip_all["ip_op"]<-"ip"
#adds a new column that labels the injuries as op
concussions_op_all["ip_op"]<-"op"
all_concussions <- bind_rows( concussions_ip_all, concussions_op_all)
#this counts all people treated for injuries and groups by diagnosis, sport and sex
sport_count <- opall_all %>%
group_by(PRINDIAG, sport, SEX) %>%
summarise(total_sports_injuries_count = n()) %>%
arrange(desc(total_sports_injuries_count))
#now that we've done concussions, we want to examine some additional injuries.
op_ankle_sprains <- opall_all %>%
filter(grepl("S934|8450", PRINDIAG) | grepl("S934|8450", DIAG1) | grepl("S934|8450", DIAG2) | grepl("S934|8450", DIAG3) | grepl("S934|8450", DIAG4) | grepl("S934|8450", DIAG5) | grepl("S934|8450", DIAG6) | grepl("S934|8450", DIAG7) | grepl("S934|8450", DIAG8)| grepl("S934|8450", DIAG9)| grepl("S934|8450", DIAG10)| grepl("S934|8450", DIAG11)| grepl("S934|8450", DIAG12)| grepl("S934|8450", DIAG13)| grepl("S934|8450", DIAG14)| grepl("S934|8450", DIAG15)| grepl("S934|8450", DIAG16)| grepl("S934|8450", DIAG17)| grepl("S934|8450", DIAG18)| grepl("S934|8450", DIAG19)| grepl("S934|8450", DIAG21)| grepl("S934|8450", DIAG22)| grepl("S934|8450", DIAG23)| grepl("S934|8450", DIAG24)| grepl("S934|8450", DIAG25)| grepl("S934|8450", DIAG26)| grepl("S934|8450", DIAG27)| grepl("S934|8450", DIAG28)) %>%
group_by(sport, SEX, YEAR) %>%
summarise(ankle_sprain_count = n()) %>%
arrange(desc(ankle_sprain_count))
ggplot(data=op_ankle_sprains, aes(x=sport, y=ankle_sprain_count)) +
geom_bar(stat="identity") +
ggtitle("ankle_sprain_count")
#ip ankle sprains
ip_ankle_sprains <- ipall_all %>%
  filter(grepl("S934|8450", PRINDIAG) | grepl("S934|8450", DIAG1) | grepl("S934|8450", DIAG2) | grepl("S934|8450", DIAG3) | grepl("S934|8450", DIAG4) | grepl("S934|8450", DIAG5) | grepl("S934|8450", DIAG6) | grepl("S934|8450", DIAG7) | grepl("S934|8450", DIAG8)| grepl("S934|8450", DIAG9)| grepl("S934|8450", DIAG10)| grepl("S934|8450", DIAG11)| grepl("S934|8450", DIAG12)| grepl("S934|8450", DIAG13)| grepl("S934|8450", DIAG14)| grepl("S934|8450", DIAG15)| grepl("S934|8450", DIAG16)| grepl("S934|8450", DIAG17)| grepl("S934|8450", DIAG18)| grepl("S934|8450", DIAG19)| grepl("S934|8450", DIAG21)| grepl("S934|8450", DIAG22)| grepl("S934|8450", DIAG23)| grepl("S934|8450", DIAG24)| grepl("S934|8450", DIAG25)| grepl("S934|8450", DIAG26)| grepl("S934|8450", DIAG27)| grepl("S934|8450", DIAG28)) %>%
  group_by(sport, SEX, YEAR) %>%
  summarise(ankle_sprain_count = n()) %>%
  arrange(desc(ankle_sprain_count))
ip_ankle_sprains[1:12,]
#add and name a column for ip and op
op_ankle_sprains["ip_op"]<-"op"
ip_ankle_sprains["ip_op"]<-"ip"
#thorax injuries
op_thorax_injury <- opall_all %>%
filter(grepl("9221|S20", PRINDIAG) | grepl("9221|S20", DIAG1) | grepl("9221|S20", DIAG2) | grepl("9221|S20", DIAG3) | grepl("9221|S20", DIAG4) | grepl("9221|S20", DIAG5) | grepl("9221|S20", DIAG6) | grepl("9221|S20", DIAG7) | grepl("9221|S20", DIAG8)| grepl("9221|S20", DIAG9)| grepl("9221|S20", DIAG10)| grepl("9221|S20", DIAG11)| grepl("9221|S20", DIAG12)| grepl("9221|S20", DIAG13)| grepl("9221|S20", DIAG14)| grepl("9221|S20", DIAG15)| grepl("9221|S20", DIAG16)| grepl("9221|S20", DIAG17)| grepl("9221|S20", DIAG18)| grepl("9221|S20", DIAG19)| grepl("9221|S20", DIAG21)| grepl("9221|S20", DIAG22)| grepl("9221|S20", DIAG23)| grepl("9221|S20", DIAG24)| grepl("9221|S20", DIAG25)| grepl("9221|S20", DIAG26)| grepl("9221|S20", DIAG27)| grepl("9221|S20", DIAG28)) %>%
group_by(sport, SEX) %>%
summarise(thorax_injury_count = n()) %>%
arrange(desc(thorax_injury_count))
ip_thorax_injury <- ipall_all %>%
filter(grepl("9221|S20", PRINDIAG) | grepl("9221|S20", DIAG1) | grepl("9221|S20", DIAG2) | grepl("9221|S20", DIAG3) | grepl("9221|S20", DIAG4) | grepl("9221|S20", DIAG5) | grepl("9221|S20", DIAG6) | grepl("9221|S20", DIAG7) | grepl("9221|S20", DIAG8)| grepl("9221|S20", DIAG9)| grepl("9221|S20", DIAG10)| grepl("9221|S20", DIAG11)| grepl("9221|S20", DIAG12)| grepl("9221|S20", DIAG13)| grepl("9221|S20", DIAG14)| grepl("9221|S20", DIAG15)| grepl("9221|S20", DIAG16)| grepl("9221|S20", DIAG17)| grepl("9221|S20", DIAG18)| grepl("9221|S20", DIAG19)| grepl("9221|S20", DIAG21)| grepl("9221|S20", DIAG22)| grepl("9221|S20", DIAG23)| grepl("9221|S20", DIAG24)| grepl("9221|S20", DIAG25)| grepl("9221|S20", DIAG26)| grepl("9221|S20", DIAG27)| grepl("9221|S20", DIAG28)) %>%
group_by(sport, SEX) %>%
summarise(thorax_injury_count = n()) %>%
arrange(desc(thorax_injury_count ))
op_thorax_injury[1:12,]
ip_thorax_injury[1:12,]
#knee and leg sprains with corresponding icd9 and 10 codes
ip_knee_leg_sprains <- ipall_all %>%
filter(grepl("S83|S86|8449", PRINDIAG) | grepl("S83|S86|8449", DIAG1) | grepl("S83|S86|8449", DIAG2) | grepl("S83|S86|8449", DIAG3) | grepl("S83|S86|8449", DIAG4) | grepl("S83|S86|8449", DIAG5) | grepl("S83|S86|8449", DIAG6) | grepl("S83|S86|8449", DIAG7) | grepl("S83|S86|8449", DIAG8)| grepl("S83|S86|8449", DIAG9)| grepl("S83|S86|8449", DIAG10)| grepl("S83|S86|8449", DIAG11)| grepl("S83|S86|8449", DIAG12)| grepl("S83|S86|8449", DIAG13)| grepl("S83|S86|8449", DIAG14)| grepl("S83|S86|8449", DIAG15)| grepl("S83|S86|8449", DIAG16)| grepl("S83|S86|8449", DIAG17)| grepl("S83|S86|8449", DIAG18)| grepl("S83|S86|8449", DIAG19)| grepl("S83|S86|8449", DIAG21)| grepl("S83|S86|8449", DIAG22)| grepl("S83|S86|8449", DIAG23)| grepl("S83|S86|8449", DIAG24)| grepl("S83|S86|8449", DIAG25)| grepl("S83|S86|8449", DIAG26)| grepl("S83|S86|8449", DIAG27)| grepl("S83|S86|8449", DIAG28)) %>%
group_by(sport, SEX) %>%
summarise(knee_leg_sprain_count = n()) %>%
arrange(desc(knee_leg_sprain_count))
op_knee_leg_sprains <- opall_all %>%
filter(grepl("S83|S86|8449", PRINDIAG) | grepl("S83|S86|8449", DIAG1) | grepl("S83|S86|8449", DIAG2) | grepl("S83|S86|8449", DIAG3) | grepl("S83|S86|8449", DIAG4) | grepl("S83|S86|8449", DIAG5) | grepl("S83|S86|8449", DIAG6) | grepl("S83|S86|8449", DIAG7) | grepl("S83|S86|8449", DIAG8)| grepl("S83|S86|8449", DIAG9)| grepl("S83|S86|8449", DIAG10)| grepl("S83|S86|8449", DIAG11)| grepl("S83|S86|8449", DIAG12)| grepl("S83|S86|8449", DIAG13)| grepl("S83|S86|8449", DIAG14)| grepl("S83|S86|8449", DIAG15)| grepl("S83|S86|8449", DIAG16)| grepl("S83|S86|8449", DIAG17)| grepl("S83|S86|8449", DIAG18)| grepl("S83|S86|8449", DIAG19)| grepl("S83|S86|8449", DIAG21)| grepl("S83|S86|8449", DIAG22)| grepl("S83|S86|8449", DIAG23)| grepl("S83|S86|8449", DIAG24)| grepl("S83|S86|8449", DIAG25)| grepl("S83|S86|8449", DIAG26)| grepl("S83|S86|8449", DIAG27)| grepl("S83|S86|8449", DIAG28)) %>%
group_by(sport, SEX) %>%
summarise(knee_leg_sprain_count = n()) %>%
arrange(desc(knee_leg_sprain_count))
ggplot(data=op_knee_leg_sprains, aes(x=sport, y=knee_leg_sprain_count)) +
geom_bar(stat="identity") +
ggtitle("op_knee_leg_sprains")
ip_knee_leg_sprains[1:12,]
op_knee_leg_sprains[1:12,]
#S5 is an ICD-10 code covering all injuries to the elbow and forearm; the corresponding ICD-9 code is 9593. 842 is the ICD-9 wrist-injury code and S6 is the ICD-10 wrist code.
ip_elbows_forearms_wrists <- ipall_all %>%
filter(grepl("S5|9593|842|S6", PRINDIAG) | grepl("S5|9593|842|S6", DIAG1) | grepl("S5|9593|842|S6", DIAG2) | grepl("S5|9593|842|S6", DIAG3) | grepl("S5|9593|842|S6", DIAG4) | grepl("S5|9593|842|S6", DIAG5) | grepl("S5|9593|842|S6", DIAG6) | grepl("S5|9593|842|S6", DIAG7) | grepl("S5|9593|842|S6", DIAG8)| grepl("S5|9593|842|S6", DIAG9)| grepl("S5|9593|842|S6", DIAG10)| grepl("S5|9593|842|S6", DIAG11)| grepl("S5|9593|842|S6", DIAG12)| grepl("S5|9593|842|S6", DIAG13)| grepl("S5|9593|842|S6", DIAG14)| grepl("S5|9593|842|S6", DIAG15)| grepl("S5|9593|842|S6", DIAG16)| grepl("S5|9593|842|S6", DIAG17)| grepl("S5|9593|842|S6", DIAG18)| grepl("S5|9593|842|S6", DIAG19)| grepl("S5|9593|842|S6", DIAG21)| grepl("S5|9593|842|S6", DIAG22)| grepl("S5|9593|842|S6", DIAG23)| grepl("S5|9593|842|S6", DIAG24)| grepl("S5|9593|842|S6", DIAG25)| grepl("S5|9593|842|S6", DIAG26)| grepl("S5|9593|842|S6", DIAG27)| grepl("S5|9593|842|S6", DIAG28)) %>%
group_by(sport, SEX, YEAR) %>%
summarise(count = n()) %>%
arrange(desc(count))
ggplot(data=ip_elbows_forearms_wrists, aes(x=sport, y=count)) +
geom_bar(stat="identity") +
ggtitle("IP Elbows Forearms Wrists injuries")
op_elbows_forearms_wrists <- opall_all %>%
filter(grepl("S5|9593|842|S6", PRINDIAG) | grepl("S5|9593|842|S6", DIAG1) | grepl("S5|9593|842|S6", DIAG2) | grepl("S5|9593|842|S6", DIAG3) | grepl("S5|9593|842|S6", DIAG4) | grepl("S5|9593|842|S6", DIAG5) | grepl("S5|9593|842|S6", DIAG6) | grepl("S5|9593|842|S6", DIAG7) | grepl("S5|9593|842|S6", DIAG8)| grepl("S5|9593|842|S6", DIAG9)| grepl("S5|9593|842|S6", DIAG10)| grepl("S5|9593|842|S6", DIAG11)| grepl("S5|9593|842|S6", DIAG12)| grepl("S5|9593|842|S6", DIAG13)| grepl("S5|9593|842|S6", DIAG14)| grepl("S5|9593|842|S6", DIAG15)| grepl("S5|9593|842|S6", DIAG16)| grepl("S5|9593|842|S6", DIAG17)| grepl("S5|9593|842|S6", DIAG18)| grepl("S5|9593|842|S6", DIAG19)| grepl("S5|9593|842|S6", DIAG21)| grepl("S5|9593|842|S6", DIAG22)| grepl("S5|9593|842|S6", DIAG23)| grepl("S5|9593|842|S6", DIAG24)| grepl("S5|9593|842|S6", DIAG25)| grepl("S5|9593|842|S6", DIAG26)| grepl("S5|9593|842|S6", DIAG27)| grepl("S5|9593|842|S6", DIAG28)) %>%
group_by(sport, SEX, YEAR) %>%
summarise(count = n()) %>%
arrange(desc(count))
ggplot(data=op_elbows_forearms_wrists, aes(x=sport, y=count)) +
geom_bar(stat="identity") +
ggtitle("OP Elbows Forearms Wrists injuries")
write_csv(op_ankle_sprains, "op_ankle_sprains.csv")
write_csv(ip_ankle_sprains, "ip_ankle_sprains.csv")
write_csv(op_knee_leg_sprains, "op_knee_leg_sprains.csv")
write_csv(ip_knee_leg_sprains, "ip_knee_leg_sprains.csv")
write_csv(concussions_op_all, "concussions_op_all.csv")
write_csv(concussions_ip_all, "concussions_ip_all.csv")
write_csv(sport_count, "sport_prindiag_count.csv")
#### so exploratory queries were a hoot. Now let's do what's needed.
#renames the gender fields to male female and unknown using a nested ifelse function
opall_all$SEX <- ifelse(opall_all$SEX == 1, "MALE", ifelse(opall_all$SEX == 2,"FEMALE", "UNKNOWN"))
ipall_all$SEX <- ifelse(ipall_all$SEX == 1, "MALE", ifelse(ipall_all$SEX == 2,"FEMALE", "UNKNOWN"))
#for race
opall_all$RACE <- ifelse(opall_all$RACE == 1, "WHITE", ifelse(opall_all$RACE == 2,"BLACK", ifelse(opall_all$RACE == 3,"ASIAN", ifelse(opall_all$RACE == 4,"AMERICAN INDIAN/ALASKA NATIVE", ifelse(opall_all$RACE == 5,"OTHER", ifelse(opall_all$RACE == 6,"TWO OR MORE RACES", ifelse(opall_all$RACE == 7 ,"NATIVE HAWAIIAN OR PACIFIC ISLANDER", ifelse(opall_all$RACE == 8,"DECLINED TO ANSWER", "UNKNOWN"))))))))
ipall_all$RACE <- ifelse(ipall_all$RACE == 1, "WHITE", ifelse(ipall_all$RACE == 2,"BLACK", ifelse(ipall_all$RACE == 3,"ASIAN", ifelse(ipall_all$RACE == 4,"AMERICAN INDIAN/ALASKA NATIVE", ifelse(ipall_all$RACE == 5,"OTHER", ifelse(ipall_all$RACE == 6,"TWO OR MORE RACES", ifelse(ipall_all$RACE == 7 ,"NATIVE HAWAIIAN OR PACIFIC ISLANDER", ifelse(ipall_all$RACE == 8,"DECLINED TO ANSWER", "UNKNOWN"))))))))
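# The nested ifelse() calls above work but are hard to audit. A named-vector
# lookup is an equivalent, easier-to-check alternative (a sketch; recode_race
# is a hypothetical helper that assumes codes 1-8, with everything else mapped
# to UNKNOWN, as above):

```r
race_labels <- c("1" = "WHITE", "2" = "BLACK", "3" = "ASIAN",
                 "4" = "AMERICAN INDIAN/ALASKA NATIVE", "5" = "OTHER",
                 "6" = "TWO OR MORE RACES",
                 "7" = "NATIVE HAWAIIAN OR PACIFIC ISLANDER",
                 "8" = "DECLINED TO ANSWER")

recode_race <- function(x) {
  out <- unname(race_labels[as.character(x)])
  out[is.na(out)] <- "UNKNOWN"  # codes outside 1-8 (or NA) become UNKNOWN
  out
}

recode_race(c(1, 5, 9))  # "WHITE" "OTHER" "UNKNOWN"
```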
#first let's merge a zip code table with our data so we can look at the city level, though we will also pull the zip codes too.
data(zipcode)
View(zipcode)
#creates a merged list of zip codes and the hogan information.
ipall_all_with_zips <- left_join(ipall_all, zipcode, by =c("ZIPCODE" = "zip"))
opall_all_with_zips <- left_join(opall_all, zipcode, by =c("ZIPCODE" = "zip"))
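# left_join() keeps every claim row and attaches geography where a zip matches;
# unmatched zips keep their row with NA city. The key columns must share a type,
# which is why reading ZIPCODE as character (.default = "c") matters. A toy
# sketch (the zips and cities here are illustrative):

```r
library(dplyr)

claims <- tibble(id = 1:3, ZIPCODE = c("21201", "21401", "99999"))
zips   <- tibble(zip  = c("21201", "21401"),
                 city = c("Baltimore", "Annapolis"))

joined <- left_join(claims, zips, by = c("ZIPCODE" = "zip"))
# 3 rows; the 99999 row has city = NA
```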
#All over time, if possible
#1 bar chart of total injuries --
#2 line graph with injuries by quarter by sport.
#3 line graph with injuries by gender and sport -- specifically, baseball, basketball, soccer, lacrosse cheer.
#4 concussions by sport over time / concussions as a percentage of all injuries
#5 overall cost/average cost for sports
#6 baseball arm injuries over time
#8 lower extremity injuries in Football -
#hockey, soccer, football, lax -- by gender over time
#cardiac arrest and death by sport over time
#1- we're doing total IP injuries and grouping them by sport.
ip_injuries_by_sport <- ipall_all_with_zips %>%
group_by(sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
ip_injuries_by_sport[1:10,]
#now we're looking at all OP injuries and grouping them by sport
op_injuries_by_sport <- opall_all_with_zips %>%
group_by(sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_injuries_by_sport[1:10,]
#identifies counts of 10 or fewer and replaces them with "Less than 10".
ip_injuries_by_sport$count[ip_injuries_by_sport$count<=10] <- "Less than 10"
op_injuries_by_sport$count[op_injuries_by_sport$count<=10] <- "Less than 10"
write_csv(ip_injuries_by_sport, "ip_injuries_by_sport.csv")
write_csv(op_injuries_by_sport, "op_injuries_by_sport.csv")
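#Note: assigning the string "Less than 10" into a numeric count column (as
#above) coerces the whole column to character, so do any math on the counts
#before suppressing. The same small-cell suppression, sketched on a toy vector:
counts_demo <- c(3L, 25L, 10L)
counts_demo <- ifelse(counts_demo <= 10, "Less than 10", as.character(counts_demo))
counts_demo
#returns "Less than 10" "25" "Less than 10"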
#2 - Now we are just looking at all IP injuries and grouping them by year and sport.
ip_by_year_sport<- ipall_all_with_zips %>%
group_by(YEAR, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
ip_by_year_sport[1:10,]
#now we're looking at all OP injuries and grouping them by year and sport
op_by_year_sport<- opall_all_with_zips %>%
group_by(YEAR, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_by_year_sport[1:10,]
ip_by_year_sport$count[ip_by_year_sport$count<=10] <- "Less than 10"
op_by_year_sport$count[op_by_year_sport$count<=10] <- "Less than 10"
write_csv(ip_by_year_sport, "ip_by_year_sport.csv")
write_csv(op_by_year_sport, "op_by_year_sport.csv")
#3 - Now we are just looking at all IP injuries and grouping them by sex, sport, and year.
ip_by_sport_year_gender <- ipall_all_with_zips %>%
group_by(SEX, sport, YEAR) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
ip_by_sport_year_gender[1:10,]
#now we're looking at all OP injuries and grouping them by sex, sport, and year
op_by_sport_year_gender <- opall_all_with_zips %>%
group_by(SEX, sport, YEAR) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_by_sport_year_gender[1:10,]
ip_by_sport_year_gender$count[ip_by_sport_year_gender$count<=10] <- "Less than 10"
op_by_sport_year_gender$count[op_by_sport_year_gender$count<=10] <- "Less than 10"
write_csv(ip_by_sport_year_gender, "ip_by_sport_year_gender.csv")
write_csv(op_by_sport_year_gender, "op_by_sport_year_gender.csv")
#4 concussions by sport over time
#this takes the ICD9 and ICD10 codes that correspond to concussions and filters on them
ip_concussions_year_sport <- ipall_all_with_zips %>%
filter_at(vars(PRINDIAG, starts_with("DIAG")), any_vars(grepl("S060X0A|8500|S060X9|8502|8505|8509", .))) %>%
group_by(YEAR, sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
ip_concussions_year_sport[1:10,]
#this takes the ICD9 and ICD10 codes that correspond to concussions
op_concussions_year_sport <- opall_all_with_zips %>%
filter_at(vars(PRINDIAG, starts_with("DIAG")), any_vars(grepl("S060X0A|8500|S060X9|8502|8505|8509", .))) %>%
group_by(YEAR, sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
op_concussions_year_sport [1:10,]
#identifies counts of 10 or fewer and replaces them with "Less than 10".
op_concussions_year_sport$concussions_count[op_concussions_year_sport$concussions_count<=10] <- "Less than 10"
ip_concussions_year_sport$concussions_count[ip_concussions_year_sport$concussions_count<=10] <- "Less than 10"
write_csv(op_concussions_year_sport, "op_concussions_year_sport.csv")
write_csv(ip_concussions_year_sport, "ip_concussions_year_sport.csv")
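#The long chains of grepl() calls here test each diagnosis column one at a
#time; dplyr's filter_at() with any_vars() expresses "keep the row if ANY
#diagnosis column matches" in one call. A sketch on made-up toy data:
diag_demo <- data.frame(PRINDIAG = c("8500", "V01"),
                        DIAG1 = c("XYZ", "ABC"), stringsAsFactors = FALSE)
diag_hits <- dplyr::filter_at(diag_demo,
                              dplyr::vars(PRINDIAG, dplyr::starts_with("DIAG")),
                              dplyr::any_vars(grepl("8500|8502", .)))
diag_hits
#keeps only the first row, whose PRINDIAG matches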
# 4b this is just concussions by sport
ip_concussions_sport <- ipall_all_with_zips %>%
filter_at(vars(PRINDIAG, starts_with("DIAG")), any_vars(grepl("S060X0A|8500|S060X9|8502|8505|8509", .))) %>%
group_by(sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
ip_concussions_sport[1:10,]
#this takes the ICD9 and ICD10 codes that correspond to concussions
op_concussions_sport <- opall_all_with_zips %>%
filter_at(vars(PRINDIAG, starts_with("DIAG")), any_vars(grepl("S060X0A|8500|S060X9|8502|8505|8509", .))) %>%
group_by(sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
op_concussions_sport [1:10,]
#identifies counts of 10 or fewer and replaces them with "Less than 10".
op_concussions_sport$concussions_count[op_concussions_sport$concussions_count<=10] <- "Less than 10"
ip_concussions_sport$concussions_count[ip_concussions_sport$concussions_count<=10] <- "Less than 10"
write_csv(op_concussions_sport, "op_concussions_sport.csv")
write_csv(ip_concussions_sport, "ip_concussions_sport.csv")
# 4c this is concussions by sport and sex
ip_concussions_sport_sex <- ipall_all_with_zips %>%
filter_at(vars(PRINDIAG, starts_with("DIAG")), any_vars(grepl("S060X0A|8500|S060X9|8502|8505|8509", .))) %>%
group_by(SEX, sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
ip_concussions_sport_sex[1:10,]
#this takes the ICD9 and ICD10 codes that correspond to concussions
op_concussions_sport_sex <- opall_all_with_zips %>%
filter_at(vars(PRINDIAG, starts_with("DIAG")), any_vars(grepl("S060X0A|8500|S060X9|8502|8505|8509", .))) %>%
group_by(SEX, sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
op_concussions_sport_sex [1:10,]
#identifies counts of 10 or fewer and replaces them with "Less than 10".
op_concussions_sport_sex$concussions_count[op_concussions_sport_sex$concussions_count<=10] <- "Less than 10"
ip_concussions_sport_sex$concussions_count[ip_concussions_sport_sex$concussions_count<=10] <- "Less than 10"
write_csv(op_concussions_sport_sex, "op_concussions_sport_sex.csv")
write_csv(ip_concussions_sport_sex, "ip_concussions_sport_sex.csv")
#5 overall cost/average cost for sports
#total chg op for each sport by gender
op_cost_sport_gender <- opall_all_with_zips %>%
group_by(sport, SEX) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost ))
op_cost_sport_gender [1:10,]
#total chg ip for each sport by gender
ip_cost_sport_gender <- ipall_all_with_zips %>%
group_by(sport, SEX) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost ))
ip_cost_sport_gender[1:10,]
#total chg op for each sport
op_cost_sport <- opall_all_with_zips %>%
group_by(sport) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost ))
op_cost_sport[1:10,]
#total chg ip for each sport
ip_cost_sport <- ipall_all_with_zips %>%
group_by(sport) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost ))
ip_cost_sport[1:10,]
write_csv(ip_cost_sport_gender, "ip_cost_sport_gender.csv")
write_csv(op_cost_sport_gender, "op_cost_sport_gender.csv")
write_csv(op_cost_sport, "op_cost_sport.csv")
write_csv(ip_cost_sport, "ip_cost_sport.csv")
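#Caution: sum() and mean() return NA if TOT_CHG contains any missing values;
#if that turns out to be the case here, add na.rm = TRUE. A sketch on a toy vector:
chg_demo <- c(100, NA, 50)
c(sum(chg_demo), sum(chg_demo, na.rm = TRUE))
#returns NA 150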
#6 baseball arm injuries over time
#S5 is an ICD10 code covering all injuries to the elbow and forearm; the ICD9 code 9593 covers elbow and forearm injuries, and 842 covers wrist injuries. S6 is the ICD10 wrist code.
ip_elbows_forearms_wrists <- ipall_all_with_zips %>%
filter_at(vars(PRINDIAG, starts_with("DIAG")), any_vars(grepl("S5|9593|842|S6", .))) %>%
group_by(sport, SEX) %>%
summarise(count = n()) %>%
arrange(desc(count))
op_elbows_forearms_wrists <- opall_all_with_zips %>%
filter_at(vars(PRINDIAG, starts_with("DIAG")), any_vars(grepl("S5|9593|842|S6", .))) %>%
group_by(sport, SEX) %>%
summarise(count = n()) %>%
arrange(desc(count))
ip_elbows_forearms_wrists$count[ip_elbows_forearms_wrists$count<=10] <- "Less than 10"
op_elbows_forearms_wrists$count[op_elbows_forearms_wrists$count<=10] <- "Less than 10"
write_csv(ip_elbows_forearms_wrists, "ip_elbows_forearms_wrists.csv")
write_csv(op_elbows_forearms_wrists, "op_elbows_forearms_wrists.csv")
#8 lower extremity injuries in Football -
ip_knee_leg_ankle_sprains <- ipall_all_with_zips %>%
filter_at(vars(PRINDIAG, starts_with("DIAG")), any_vars(grepl("S934|8450|S83|S86|8449", .))) %>%
group_by(sport, SEX) %>%
summarise(knee_leg_ankle_sprain_count = n()) %>%
arrange(desc(knee_leg_ankle_sprain_count))
op_knee_leg_ankle_sprains <- opall_all_with_zips %>%
filter_at(vars(PRINDIAG, starts_with("DIAG")), any_vars(grepl("S934|8450|S83|S86|8449", .))) %>%
group_by(sport, SEX) %>%
summarise(knee_leg_ankle_sprain_count = n()) %>%
arrange(desc(knee_leg_ankle_sprain_count))
op_knee_leg_ankle_sprains$knee_leg_ankle_sprain_count[ op_knee_leg_ankle_sprains$knee_leg_ankle_sprain_count<=10] <- "Less than 10"
ip_knee_leg_ankle_sprains$knee_leg_ankle_sprain_count[ ip_knee_leg_ankle_sprains$knee_leg_ankle_sprain_count<=10] <- "Less than 10"
write_csv(ip_knee_leg_ankle_sprains, "ip_knee_leg_ankle_sprains.csv")
write_csv(op_knee_leg_ankle_sprains , "op_knee_leg_ankle_sprains.csv")
#Notes about what we want. For injuries by geography --- rate of injury? Can we find participation rates by sport?
#All over time, if possible
#bar chart of total injuries --
#line graph with injuries by quarter by sport.
#line graph with injuries by gender and sport -- specifically, baseball, basketball, soccer, lacrosse, cheer.
#concussions by sport over time / concussions as a percentage of all injuries
#overall cost/average cost for sports
#baseball arm injuries over time
#Concussions in hockey
#lower extremity injuries in Football -
#hockey, soccer, football, lax -- by gender over time
#cardiac arrest and death by sport over time
library(tidyverse)
library(zoo)
library(zipcode)
#here we create variables and read in each table Julia created, forcing every column except the total charge column into a standard character format.
BASEBALL_IPALL_10 <- read_csv("BASEBALL_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
#we're also adding a column called sport for sorting/grouping purposes
BASEBALL_IPALL_10["sport"]<-"baseball"
#Forces the PRINDIAG column to be a character, even though it's all numbers, because sometimes the column has numbers and letters.
BASEBALL_IPALL_9 <- read_csv("BASEBALL_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASEBALL_IPALL_9["sport"]<-"baseball"
BASEBALL_OPALL_10 <- read_csv("BASEBALL_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASEBALL_OPALL_10["sport"]<-"baseball"
BASEBALL_OPALL_9 <- read_csv("BASEBALL_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASEBALL_OPALL_9["sport"]<-"baseball"
BASKETBALL_IPALL_10 <- read_csv("BASKETBALL_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASKETBALL_IPALL_10["sport"]<-"basketball"
BASKETBALL_IPALL_9 <- read_csv("BASKETBALL_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASKETBALL_IPALL_9["sport"]<-"basketball"
BASKETBALL_OPALL_10 <- read_csv("BASKETBALL_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASKETBALL_OPALL_10["sport"]<-"basketball"
BASKETBALL_OPALL_9 <- read_csv("BASKETBALL_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
BASKETBALL_OPALL_9["sport"]<-"basketball"
CHEER_OPALL_10 <- read_csv("CHEER_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
CHEER_OPALL_10["sport"]<-"cheer"
CHEER_OPALL_9 <- read_csv("CHEER_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
CHEER_OPALL_9["sport"]<-"cheer"
CHEER_IPALL_9 <- read_csv("CHEER_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
CHEER_IPALL_9["sport"]<-"cheer"
CHEER_IPALL_10<- read_csv("CHEERLEADING_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
CHEER_IPALL_10["sport"]<-"cheer"
FOOTBALL_IPALL_10 <- read_csv("FOOTBALL_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
FOOTBALL_IPALL_10["sport"]<-"football"
FOOTBALL_IPALL_9 <- read_csv("FOOTBALL_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
FOOTBALL_IPALL_9["sport"]<-"football"
FOOTBALL_OPALL_9 <- read_csv("FOOTBALL_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
FOOTBALL_OPALL_9["sport"]<-"football"
FOOTBALL_OPALL_10 <- read_csv("FOOTBALL_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
FOOTBALL_OPALL_10["sport"]<-"football"
GYMNASTICS_OPALL_10 <- read_csv("GYMNASTICS_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
GYMNASTICS_OPALL_10["sport"]<-"gymnastics"
GYMNASTICS_OPALL_9 <- read_csv("GYMNASTICS_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
GYMNASTICS_OPALL_9["sport"]<-"gymnastics"
GYMNASTICS_IPALL_9 <- read_csv("GYMNASTICS_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
GYMNASTICS_IPALL_9["sport"]<-"gymnastics"
ICEHOCKEY_IPALL_10 <- read_csv("ICEHOCKEY_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
ICEHOCKEY_IPALL_10["sport"]<-"hockey"
ICEHOCKEY_IPALL_9 <- read_csv("ICEHOCKEY_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
ICEHOCKEY_IPALL_9["sport"]<-"hockey"
ICEHOCKEY_OPALL_10 <- read_csv("ICEHOCKEY_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
ICEHOCKEY_OPALL_10["sport"]<-"hockey"
ICEHOCKEY_OPALL_9 <- read_csv("ICEHOCKEY_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
ICEHOCKEY_OPALL_9["sport"]<-"hockey"
LAXFH_IPALL_9 <- read_csv("LAXFH_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
LAXFH_IPALL_9["sport"]<-"lax/fieldhockey"
LAXFH_OPALL_10 <- read_csv("LAXFH_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
LAXFH_OPALL_10["sport"]<-"lax/fieldhockey"
LAXFH_OPALL_9 <- read_csv("LAXFH_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
LAXFH_OPALL_9["sport"]<-"lax/fieldhockey"
SOCCER_OPALL_10 <- read_csv("SOCCER_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
SOCCER_OPALL_10["sport"]<-"soccer"
SOCCER_OPALL_9 <- read_csv("SOCCER_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
SOCCER_OPALL_9["sport"]<-"soccer"
SOCCER_IPALL_10 <- read_csv("SOCCER_IPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
SOCCER_IPALL_10["sport"]<-"soccer"
SOCCER_IPALL_9 <- read_csv("SOCCER_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
SOCCER_IPALL_9["sport"]<-"soccer"
TRACK_IPALL_9 <- read_csv("TRACK_IPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
TRACK_IPALL_9["sport"]<-"track"
TRACK_OPALL_10 <- read_csv("TRACK_OPALL_10.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
TRACK_OPALL_10["sport"]<-"track"
TRACK_OPALL_9 <- read_csv("TRACK_OPALL_9.csv", col_types = cols(
TOT_CHG = col_number(),
.default = "c")
)
TRACK_OPALL_9["sport"]<-"track"
#now it's time to join the two tables.
#this is the sample that the rows above are built on.
#x <- read_csv("file.csv", col_types = cols(
# A = col_double(),
# B = col_logical(),
# C = col_factor()))
ipall_all <- bind_rows(BASEBALL_IPALL_10, BASEBALL_IPALL_9, BASKETBALL_IPALL_10, BASKETBALL_IPALL_9, CHEER_IPALL_10, CHEER_IPALL_9, FOOTBALL_IPALL_10, FOOTBALL_IPALL_9, GYMNASTICS_IPALL_9, ICEHOCKEY_IPALL_10, ICEHOCKEY_IPALL_9, LAXFH_IPALL_9, SOCCER_IPALL_10, SOCCER_IPALL_9, TRACK_IPALL_9)
opall_all <- bind_rows( BASEBALL_OPALL_10, BASEBALL_OPALL_9, BASKETBALL_OPALL_10, BASKETBALL_OPALL_9, CHEER_OPALL_10, CHEER_OPALL_9, FOOTBALL_OPALL_9, FOOTBALL_OPALL_10, GYMNASTICS_OPALL_10, GYMNASTICS_OPALL_9, ICEHOCKEY_OPALL_10, ICEHOCKEY_OPALL_9, LAXFH_OPALL_10, LAXFH_OPALL_9, SOCCER_OPALL_10, SOCCER_OPALL_9, TRACK_OPALL_10, TRACK_OPALL_9)
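#bind_rows() stacks the per-sport tables and fills any column that is missing
#from one table with NA. A sketch on made-up toy data frames:
a_demo <- data.frame(x = 1, sport = "baseball", stringsAsFactors = FALSE)
b_demo <- data.frame(x = 2, YEAR = "2015", stringsAsFactors = FALSE)
combined_demo <- dplyr::bind_rows(a_demo, b_demo)
combined_demo
#2 rows: sport is NA in row 2, YEAR is NA in row 1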
#Now we are just looking at all IP injuries and grouping them by city and sport.
ip_zip_sport <- ipall_all_with_zips %>%
group_by(city, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
ip_zip_sport[1:10,]
#now we're looking at all OP injuries and grouping them by city and sport
op_zip_sport <- opall_all_with_zips %>%
group_by(city, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_zip_sport[1:10,]
#total chg ip for each sport by gender
ip_cost_sport_gender <- ipall_all_with_zips %>%
group_by(sport, SEX) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost ))
ip_cost_sport_gender[1:10,]
#total chg ip for each sport
ip_cost_sport <- ipall_all_with_zips %>%
group_by(sport) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost ))
ip_cost_sport[1:10,]
#total chg op for each sport by gender
op_cost_sport_gender <- opall_all_with_zips %>%
group_by(sport, SEX) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost ))
op_cost_sport_gender[1:10,]
#total chg op for each sport
op_cost_sport <- opall_all_with_zips %>%
group_by(sport) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost ))
op_cost_sport[1:10,]
# sorts inpatient table by sex and sport
ip_all_gender <- ipall_all %>%
group_by(SEX, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_all_gender <- opall_all %>%
group_by(SEX, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_all_gender[1:10,]
#bar chart of injuries by sport, split by sex
ggplot(data=op_all_gender, aes(x=sport, y=count, fill=SEX)) +
geom_col(position="dodge") +
xlab("Sport") +
ylab("Number of injuries") +
ggtitle("OP injuries by sport and sex")
#sorts the op table by sports and gender
op_all_gender <- opall_all %>%
group_by(SEX, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(sport))
#to do: concussions, city, ankle sprains, count of injuries by sport, by sex, by race
#sorts and counts OP injuries by year and sport
op_year <- opall_all %>%
group_by(YEAR, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(YEAR))
op_age_check <- opall_all %>%
group_by(AGE_GROUP, YEAR, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(YEAR))
#creates a fancy line plot. It gives each sport a different color
ggplot(data=op_year, aes(x=YEAR, y=count, group=sport)) +
geom_line(aes(color=sport))+
geom_point(aes(color=sport))
ip_year <- ipall_all %>%
group_by(YEAR, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(YEAR))
#ok so year made a wonky graph. So now we're trying to zero in on quarter data.
#so, here we are converting the year and quarter field into one.
ipall_all$NewDate <- as.yearqtr(paste(ipall_all$YEAR, ipall_all$QTR, sep = ' Q'))
#as.yearqtr() from the zoo package already made NewDate a yearqtr type...this should help us graph
#making sure it's a yearqtr field
class(ipall_all$NewDate)
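#as.yearqtr() parses strings like "2015 Q3" into zoo's sortable quarter class,
#which is what lets scale_x_yearqtr() lay out the x axis. A quick sketch on a
#made-up pair of quarters:
q_demo <- zoo::as.yearqtr(c("2015 Q3", "2016 Q1"), format = "%Y Q%q")
format(q_demo, "%YQ%q")
#returns "2015Q3" "2016Q1"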
#and now we try and count and group before we graph.
ip_year_qtr<- ipall_all %>%
group_by(NewDate, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(NewDate))
#now we graph. Looks funny but looks right.
ggplot(data=ip_year_qtr, aes(x=NewDate, y=count, group=sport)) +
geom_line(aes(color=sport))+
geom_point(aes(color=sport)) +
scale_x_yearqtr(format="%YQ%q", n=5) +
xlab("Quarter") +
ylab("Number of injuries") +
ggtitle("IP Year by quarter")
#now let's do the same for the opall table
opall_all$NewDate <- as.yearqtr(paste(opall_all$YEAR, opall_all$QTR, sep = ' Q'))
#as.yearqtr() from the zoo package already made NewDate a yearqtr type...this should help us graph
#making sure it's a yearqtr field
class(opall_all$NewDate)
#and now we try and count and group before we graph.
op_year_qtr<- opall_all %>%
group_by(NewDate, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(NewDate))
#now we graph the outpatient data the same way.
ggplot(data=op_year_qtr, aes(x=NewDate, y=count, group=sport)) +
geom_line(aes(color=sport))+
geom_point(aes(color=sport)) +
scale_x_yearqtr(format="%YQ%q", n=5) +
xlab("Quarter") +
ylab("Number of injuries") +
ggtitle("OP Year by quarter")
opall_test <- data.frame(opall_all_test)
#To find out how many patients sustained concussions in each sport, we need to search a variety of columns to find those injuries.
#So I did the best I could to devise a query that searches all the diagnosis columns and returns a case where a concussion is sustained.
#The ICD-9 and ICD-10 codes for concussions are included. Then I summarized by gender and sport,
#and then summarized and arranged by count.
concussions_op_all <- opall_all %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(sport, SEX) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_op_all[1:10,]
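#aside: the long ORed grepl() above can be written compactly with dplyr's
#if_any() (dplyr >= 1.0.4); toy sketch below, with made-up codes and a
#throwaway two-column table
library(dplyr)
demo_dx <- data.frame(PRINDIAG = c("8500", "7999", "7999"),
                      DIAG1    = c(NA, "S060X9", "7999"))
demo_hits <- demo_dx %>%
  filter(if_any(c(PRINDIAG, DIAG1), ~ grepl("8500|S060X9", .x)))
#rows where any searched column matches survive; grepl() treats NA as no match
demo_hits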
#we will also break concussions down by geography.
concussions_op_all_geography <- opall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(SAS_COUNTY) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_op_all_geography[1:10,]
#we will also break concussions down by geography.
concussions_ip_all_geography <- ipall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(SAS_COUNTY) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_ip_all_geography[1:10,]
#now let's look at concussions by race
concussions_op_all_race <- opall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(RACE, SAS_COUNTY) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_op_all_race[1:10,]
#we will also break concussions down by race
concussions_ip_all_race <- ipall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(RACE) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_ip_all_race[1:10,]
#we will also break concussions down by race
concussions_op_all_race <- opall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(RACE, sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_op_all_race[1:10,]
#summarizing all OP injuries by race
all_op_injuries_by_race <- opall_all_with_zips %>%
group_by(RACE) %>%
summarise(injuries_count = n()) %>%
arrange(desc(injuries_count))
all_op_injuries_by_race[1:10,]
#summarizing all IP injuries by race
all_ip_injuries_by_race <- ipall_all_with_zips %>%
group_by(RACE) %>%
summarise(injuries_count = n()) %>%
arrange(desc(injuries_count))
all_ip_injuries_by_race[1:10,]
#identifies all injuries by sport and sex.
all_op_injuries_by_sport_gender <- opall_all_with_zips %>%
group_by(sport, SEX) %>%
summarise(injuries_count = n()) %>%
arrange(desc(injuries_count))
write_csv(all_op_injuries_by_sport_gender, "all_op_injuries_by_sport_gender.csv")
#identifies all IP injuries by sport and sex.
all_ip_injuries_by_sport_gender <- ipall_all_with_zips %>%
group_by(sport, SEX) %>%
summarise(injuries_count = n()) %>%
arrange(desc(injuries_count))
write_csv(all_ip_injuries_by_sport_gender, "all_ip_injuries_by_sport_gender.csv")
ggplot(concussions_op_all, aes(x=sport, y=concussions_count, fill=SEX)) +
geom_bar(position = "dodge", stat="identity")
#same thing going on here that I did above
concussions_ip_all <- ipall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(SEX) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
concussions_ip_all[1:10,]
#adds in column for ipall
concussions_ip_all["ip_op"]<-"ip"
#adds a new column that labels the injuries as op
concussions_op_all["ip_op"]<-"op"
all_concussions <- bind_rows( concussions_ip_all, concussions_op_all)
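#sketch of the label-then-stack pattern above, on throwaway one-row tables:
#the ip_op column added before the bind is what keeps the inpatient and
#outpatient rows distinguishable after bind_rows()
library(dplyr)
demo_ip <- data.frame(SEX = "FEMALE", concussions_count = 3,  ip_op = "ip")
demo_op <- data.frame(SEX = "FEMALE", concussions_count = 40, ip_op = "op")
demo_all <- bind_rows(demo_ip, demo_op)
demo_all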
#this counts all people treated for injuries and groups by principal diagnosis, sport and sex
sport_count <- opall_all %>%
group_by(PRINDIAG, sport, SEX) %>%
summarise(total_sports_injuries_count = n()) %>%
arrange(desc(total_sports_injuries_count))
#now that we've done concussions, we want to examine some additional injuries.
op_ankle_sprains <- opall_all %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S934|8450", .x))) %>%
group_by(sport, SEX, YEAR) %>%
summarise(ankle_sprain_count = n()) %>%
arrange(desc(ankle_sprain_count))
ggplot(data=op_ankle_sprains, aes(x=sport, y=ankle_sprain_count)) +
geom_bar(stat="identity") +
ggtitle("ankle_sprain_count")
op_ankle_sprains[1:12,]
#ip ankle sprains
ip_ankle_sprains <- ipall_all %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S934|8450", .x))) %>%
group_by(sport, SEX, YEAR) %>%
summarise(ankle_sprain_count = n()) %>%
arrange(desc(ankle_sprain_count))
#add and name a column for ip and op
op_ankle_sprains["ip_op"]<-"op"
ip_ankle_sprains["ip_op"]<-"ip"
#thorax injuries
op_thorax_injury <- opall_all %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("9221|S20", .x))) %>%
group_by(sport, SEX) %>%
summarise(thorax_injury_count = n()) %>%
arrange(desc(thorax_injury_count))
ip_thorax_injury <- ipall_all %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("9221|S20", .x))) %>%
group_by(sport, SEX) %>%
summarise(thorax_injury_count = n()) %>%
arrange(desc(thorax_injury_count))
op_thorax_injury[1:12,]
ip_thorax_injury[1:12,]
#knee and leg sprains with corresponding icd9 and 10 codes
ip_knee_leg_sprains <- ipall_all %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S83|S86|8449", .x))) %>%
group_by(sport, SEX) %>%
summarise(knee_leg_sprain_count = n()) %>%
arrange(desc(knee_leg_sprain_count))
op_knee_leg_sprains <- opall_all %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S83|S86|8449", .x))) %>%
group_by(sport, SEX) %>%
summarise(knee_leg_sprain_count = n()) %>%
arrange(desc(knee_leg_sprain_count))
ggplot(data=op_knee_leg_sprains, aes(x=sport, y=knee_leg_sprain_count)) +
geom_bar(stat="identity") +
ggtitle("op_knee_leg_sprains")
ip_knee_leg_sprains[1:12,]
op_knee_leg_sprains[1:12,]
#S5 is an ICD-10 code covering all injuries to the elbow and forearm; the ICD-9 code for elbow and forearm injuries is 9593, 842 is wrist injuries, and S6 is the ICD-10 wrist code
ip_elbows_forearms_wrists <- ipall_all %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S5|9593|842|S6", .x))) %>%
group_by(sport, SEX, YEAR) %>%
summarise(count = n()) %>%
arrange(desc(count))
ggplot(data=ip_elbows_forearms_wrists, aes(x=sport, y=count)) +
geom_bar(stat="identity") +
ggtitle("IP Elbows Forearms Wrists injuries")
op_elbows_forearms_wrists <- opall_all %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S5|9593|842|S6", .x))) %>%
group_by(sport, SEX, YEAR) %>%
summarise(count = n()) %>%
arrange(desc(count))
ggplot(data=op_elbows_forearms_wrists, aes(x=sport, y=count)) +
geom_bar(stat="identity") +
ggtitle("OP Elbows Forearms Wrists injuries")
#write_csv() needs a destination file; the file names below follow the object names, matching the other exports
write_csv(op_ankle_sprains, "op_ankle_sprains.csv")
write_csv(ip_ankle_sprains, "ip_ankle_sprains.csv")
write_csv(op_knee_leg_sprains, "op_knee_leg_sprains.csv")
write_csv(ip_knee_leg_sprains, "ip_knee_leg_sprains.csv")
write_csv(concussions_op_all, "concussions_op_all.csv")
write_csv(concussions_ip_all, "concussions_ip_all.csv")
write_csv(sport_count, "sport_prindiag_count.csv")
#### so exploratory queries were a hoot. Now let's do what's needed.
#renames the gender fields to male female and unknown using a nested ifelse function
opall_all$SEX <- ifelse(opall_all$SEX == 1, "MALE", ifelse(opall_all$SEX == 2,"FEMALE", "UNKNOWN"))
ipall_all$SEX <- ifelse(ipall_all$SEX == 1, "MALE", ifelse(ipall_all$SEX == 2,"FEMALE", "UNKNOWN"))
#for race
opall_all$RACE <- ifelse(opall_all$RACE == 1, "WHITE", ifelse(opall_all$RACE == 2,"BLACK", ifelse(opall_all$RACE == 3,"ASIAN", ifelse(opall_all$RACE == 4,"AMERICAN INDIAN/ALASKA NATIVE", ifelse(opall_all$RACE == 5,"OTHER", ifelse(opall_all$RACE == 6,"TWO OR MORE RACES", ifelse(opall_all$RACE == 7 ,"NATIVE HAWAIIAN OR PACIFIC ISLANDER", ifelse(opall_all$RACE == 8,"DECLINED TO ANSWER", "UNKNOWN"))))))))
ipall_all$RACE <- ifelse(ipall_all$RACE == 1, "WHITE", ifelse(ipall_all$RACE == 2,"BLACK", ifelse(ipall_all$RACE == 3,"ASIAN", ifelse(ipall_all$RACE == 4,"AMERICAN INDIAN/ALASKA NATIVE", ifelse(ipall_all$RACE == 5,"OTHER", ifelse(ipall_all$RACE == 6,"TWO OR MORE RACES", ifelse(ipall_all$RACE == 7 ,"NATIVE HAWAIIAN OR PACIFIC ISLANDER", ifelse(ipall_all$RACE == 8,"DECLINED TO ANSWER", "UNKNOWN"))))))))
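#hedged aside: the nested ifelse() calls above work, but for many-level recodes
#dplyr's case_when() is easier to audit; sketch on a throwaway vector, same labels
library(dplyr)
demo_sex <- c(1, 2, 9)
demo_labels <- case_when(
  demo_sex == 1 ~ "MALE",
  demo_sex == 2 ~ "FEMALE",
  TRUE          ~ "UNKNOWN"
)
demo_labels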
#first let's merge a zip code table with our data so we can look at the city level, though we will also pull the zip codes too.
data(zipcode)
View(zipcode)
#creates a merged list of zip codes and the hogan information.
ipall_all_with_zips <- left_join(ipall_all, zipcode, by =c("ZIPCODE" = "zip"))
opall_all_with_zips <- left_join(opall_all, zipcode, by =c("ZIPCODE" = "zip"))
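#sketch of how the left_join() keying above works, with made-up rows: every
#claim row is kept, and rows whose ZIPCODE has no match in the lookup get NA
library(dplyr)
demo_claims <- data.frame(ZIPCODE = c("21201", "00000"))
demo_zips   <- data.frame(zip = "21201", city = "Baltimore")
demo_joined <- left_join(demo_claims, demo_zips, by = c("ZIPCODE" = "zip"))
demo_joined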
#All over time, if possible
#1 bar chart of total injuries --
#2 line graph with injuries by quarter by sport.
#3 line graph with injuries by gender and sport -- specifically, baseball, basketball, soccer, lacrosse cheer.
#4 concussions by sport over time / concussions as a percentage of all injuries
#5 overall cost/average cost for sports
#6 baseball arm injuries over time
#8 lower extremity injuries in Football -
#hockey, soccer, football, lax -- by gender over time
#cardiac arrest and death by sport over time
#1- we're doing total IP injuries and grouping them by sport.
ip_injuries_by_sport <- ipall_all_with_zips %>%
group_by(sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
ip_injuries_by_sport[1:10,]
#now we're looking at all OP injuries and grouping them by sport
op_injuries_by_sport <- opall_all_with_zips %>%
group_by(sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_injuries_by_sport[1:10,]
#identifies counts of 10 or fewer and replaces them with "Less than 10".
ip_injuries_by_sport$count[ip_injuries_by_sport$count<=10] <- "Less than 10"
op_injuries_by_sport$count[op_injuries_by_sport$count<=10] <- "Less than 10"
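#caution worth noting about the suppression step above: writing the string
#"Less than 10" into a numeric count column coerces the whole column to
#character, so it can no longer be summed or plotted as numbers (toy check)
demo_counts <- c(3, 50)
demo_counts[demo_counts <= 10] <- "Less than 10"
demo_counts
class(demo_counts)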
write_csv(ip_injuries_by_sport, "ip_injuries_by_sport.csv")
write_csv(op_injuries_by_sport, "op_injuries_by_sport.csv")
#2 - Now we are just looking at all IP injuries and grouping them by year and sport.
ip_by_year_sport<- ipall_all_with_zips %>%
group_by(YEAR, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
ip_by_year_sport[1:10,]
#now we're looking at all OP injuries and grouping them by year and sport
op_by_year_sport<- opall_all_with_zips %>%
group_by(YEAR, sport) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_by_year_sport[1:10,]
ip_by_year_sport$count[ip_by_year_sport$count<=10] <- "Less than 10"
op_by_year_sport$count[op_by_year_sport$count<=10] <- "Less than 10"
write_csv(ip_by_year_sport, "ip_by_year_sport.csv")
write_csv(op_by_year_sport, "op_by_year_sport.csv")
#3 - Now we are just looking at all IP injuries and grouping them by sex, sport and year.
ip_by_sport_year_gender <- ipall_all_with_zips %>%
group_by(SEX, sport, YEAR) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
ip_by_sport_year_gender[1:10,]
#now we're looking at all OP injuries and grouping them by sex, sport and year
op_by_sport_year_gender <- opall_all_with_zips %>%
group_by(SEX, sport, YEAR) %>%
summarise(count = n()) %>%
#arrange the list in descending order
arrange(desc(count))
op_by_sport_year_gender[1:10,]
ip_by_sport_year_gender$count[ip_by_sport_year_gender$count<=10] <- "Less than 10"
op_by_sport_year_gender$count[op_by_sport_year_gender$count<=10] <- "Less than 10"
write_csv(ip_by_sport_year_gender, "ip_by_sport_year_gender.csv")
write_csv(op_by_sport_year_gender, "op_by_sport_year_gender.csv")
#4 concussions by sport over time
#this takes the ip codes that correspond to concussions, both for ICD9 and ICD10 and filters
ip_concussions_year_sport <- ipall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(YEAR, sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
ip_concussions_year_sport[1:10,]
#this takes the ip codes that correspond to concussions, both for ICD9 and ICD10
op_concussions_year_sport <- opall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(YEAR, sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
op_concussions_year_sport [1:10,]
#identifies counts of 10 or fewer and replaces them with "Less than 10".
op_concussions_year_sport$concussions_count[op_concussions_year_sport$concussions_count<=10] <- "Less than 10"
ip_concussions_year_sport$concussions_count[ip_concussions_year_sport$concussions_count<=10] <- "Less than 10"
write_csv(op_concussions_year_sport, "op_concussions_year_sport.csv")
write_csv(ip_concussions_year_sport, "ip_concussions_year_sport.csv")
# 4b this is just concussions by sport
ip_concussions_sport <- ipall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
ip_concussions_sport[1:10,]
# Same concussion codes (ICD-9 and ICD-10) applied to the outpatient (op) data.
op_concussions_sport <- opall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
op_concussions_sport[1:10,]
# Replaces counts of 10 or fewer with "Less than 10" (small-cell suppression).
op_concussions_sport$concussions_count[op_concussions_sport$concussions_count<=10] <- "Less than 10"
ip_concussions_sport$concussions_count[ip_concussions_sport$concussions_count<=10] <- "Less than 10"
write_csv(op_concussions_sport, "op_concussions_sport.csv")
write_csv(ip_concussions_sport, "ip_concussions_sport.csv")
# 4c Concussions by sport and sex
ip_concussions_sport_sex <- ipall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(SEX, sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
ip_concussions_sport_sex[1:10,]
# Same concussion codes (ICD-9 and ICD-10) applied to the outpatient (op) data.
op_concussions_sport_sex <- opall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S060X0A|8500|S060X9|8502|8505|8509", .x))) %>%
group_by(SEX, sport) %>%
summarise(concussions_count = n()) %>%
arrange(desc(concussions_count))
op_concussions_sport_sex[1:10,]
# Replaces counts of 10 or fewer with "Less than 10" (small-cell suppression).
op_concussions_sport_sex$concussions_count[op_concussions_sport_sex$concussions_count<=10] <- "Less than 10"
ip_concussions_sport_sex$concussions_count[ip_concussions_sport_sex$concussions_count<=10] <- "Less than 10"
write_csv(op_concussions_sport_sex, "op_concussions_sport_sex.csv")
write_csv(ip_concussions_sport_sex, "ip_concussions_sport_sex.csv")
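The small-cell suppression step above is repeated for every table; it could be factored into a helper (a sketch, not part of the original script; the function name is illustrative):

```r
# Hypothetical helper: replaces counts at or below a threshold with a label,
# matching the "Less than 10" suppression applied throughout this script.
suppress_small_counts <- function(df, col, threshold = 10, label = "Less than 10") {
  counts <- df[[col]]
  df[[col]] <- ifelse(counts <= threshold, label, as.character(counts))
  df
}
```

Usage would then be, e.g., `op_concussions_sport <- suppress_small_counts(op_concussions_sport, "concussions_count")`.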
# 5 Overall and average cost by sport
#total chg op for each sport by gender
op_cost_sport_gender <- opall_all_with_zips %>%
group_by(sport, SEX) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost))
op_cost_sport_gender[1:10,]
#total chg ip for each sport by gender
ip_cost_sport_gender <- ipall_all_with_zips %>%
group_by(sport, SEX) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost))
ip_cost_sport_gender[1:10,]
#total chg op for each sport
op_cost_sport <- opall_all_with_zips %>%
group_by(sport) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost))
op_cost_sport[1:10,]
#total chg ip for each sport
ip_cost_sport <- ipall_all_with_zips %>%
group_by(sport) %>%
summarise(total_cost = sum(TOT_CHG), avg_cost = mean(TOT_CHG)) %>%
arrange(desc(total_cost))
ip_cost_sport[1:10,]
write_csv(op_cost_sport_gender, "op_cost_sport_gender.csv")
write_csv(ip_cost_sport_gender, "ip_cost_sport_gender.csv")
write_csv(op_cost_sport, "op_cost_sport.csv")
write_csv(ip_cost_sport, "ip_cost_sport.csv")
# 6 Baseball arm injuries over time
# ICD-10: S5 covers all injuries to the elbow and forearm; S6 covers wrist injuries.
# ICD-9: 9593 is elbow and forearm injuries; 842 is wrist injuries.
ip_elbows_forearms_wrists <- ipall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S5|9593|842|S6", .x))) %>%
group_by(sport, SEX) %>%
summarise(count = n()) %>%
arrange(desc(count))
op_elbows_forearms_wrists <- opall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S5|9593|842|S6", .x))) %>%
group_by(sport, SEX) %>%
summarise(count = n()) %>%
arrange(desc(count))
ip_elbows_forearms_wrists$count[ip_elbows_forearms_wrists$count<=10] <- "Less than 10"
op_elbows_forearms_wrists$count[op_elbows_forearms_wrists$count<=10] <- "Less than 10"
write_csv(ip_elbows_forearms_wrists, "ip_elbows_forearms_wrists.csv")
write_csv(op_elbows_forearms_wrists, "op_elbows_forearms_wrists.csv")
# 8 Lower extremity injuries in football
ip_knee_leg_ankle_sprains <- ipall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S934|8450|S83|S86|8449", .x))) %>%
group_by(sport, SEX) %>%
summarise(knee_leg_ankle_sprain_count = n()) %>%
arrange(desc(knee_leg_ankle_sprain_count))
op_knee_leg_ankle_sprains <- opall_all_with_zips %>%
filter(if_any(all_of(c("PRINDIAG", paste0("DIAG", c(1:19, 21:28)))), ~ grepl("S934|8450|S83|S86|8449", .x))) %>%
group_by(sport, SEX) %>%
summarise(knee_leg_ankle_sprain_count = n()) %>%
arrange(desc(knee_leg_ankle_sprain_count))
op_knee_leg_ankle_sprains$knee_leg_ankle_sprain_count[op_knee_leg_ankle_sprains$knee_leg_ankle_sprain_count<=10] <- "Less than 10"
ip_knee_leg_ankle_sprains$knee_leg_ankle_sprain_count[ip_knee_leg_ankle_sprains$knee_leg_ankle_sprain_count<=10] <- "Less than 10"
write_csv(ip_knee_leg_ankle_sprains, "ip_knee_leg_ankle_sprains.csv")
write_csv(op_knee_leg_ankle_sprains, "op_knee_leg_ankle_sprains.csv")
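The diagnosis-code regexes above could be kept in one named vector so each analysis documents which ICD-9/ICD-10 codes it targets (a sketch; the names are illustrative groupings taken from the comments above, not from the original script):

```r
# Assumed groupings, taken from the patterns used in the filters above.
icd_patterns <- c(
  concussion          = "S060X0A|8500|S060X9|8502|8505|8509",
  elbow_forearm_wrist = "S5|9593|842|S6",
  knee_leg_ankle      = "S934|8450|S83|S86|8449"
)
```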
setwd("~/Development/wanderdata-scripts/fitbit")
require(ggplot2)
require(bbplot)
require(skimr)
require(parsedate)
require(reshape2)
require(lubridate)
data_dir <- 'data/'
plots_dir <- 'plots/'
args = commandArgs(trailingOnly=TRUE)
# args <- c('2019-06-16', '2019-06-28')
if (length(args) < 2) {
stop("Two arguments must be supplied (start date and end date).\n", call.=FALSE)
} else {
# start and end dates of the reporting window
print(args[1])
print(args[2])
}
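The script assumes both arguments are ISO dates; a small parser that fails fast could look like this (a sketch, not in the original; the function name is illustrative):

```r
# Hypothetical: validate a command-line argument as a YYYY-MM-DD date.
parse_date_arg <- function(x) {
  d <- as.Date(x, format = "%Y-%m-%d")
  if (is.na(d)) stop(sprintf("Not an ISO date: '%s'", x), call. = FALSE)
  d
}
```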
# this function produces the plots of the different activity levels
create_activity_level_plots <- function() {
# activity level per minutes plots
df <- data.frame(dateTime=character(),
value=numeric(),
level=character(),
stringsAsFactors=FALSE)
# read only these files
for (f in c('minutesSedentary', 'minutesLightlyActive','minutesFairlyActive', 'minutesVeryActive')) {
print(f)
activity.df <- read.csv(sprintf("~/Development/wanderdata-scripts/fitbit/data/%s.csv", f))
print(skim(activity.df))
print(sum(activity.df$value))
activity.df$level <- f
df <- rbind(df, activity.df)
}
df$dateTime <- as.Date(df$dateTime)
df <- df[df$dateTime >= args[1] & df$dateTime < args[2],]
first_date <- head(df, n=1)$dateTime
last_date <- tail(df, n=1)$dateTime
p <- ggplot(df, aes(x=level, y=value, fill=level)) +
geom_boxplot() +
theme(plot.margin = unit(c(1.0,1.0,1.0,0.5), "cm"),
plot.title = element_text(family = 'Helvetica', size = 28, face = "bold", color = "#222222"),
plot.subtitle = element_text(family = 'Helvetica', size = 22, margin = ggplot2::margin(9, 0, 9, 0)),
axis.text = element_text(family = 'Helvetica', size = 18, color = "#222222"),
axis.title.x = element_text(family = 'Helvetica', size = 18, color = "#222222"),
axis.title.y = element_text(family = 'Helvetica', size = 18, color = "#222222"),
legend.text=element_text(size=14),
legend.position = "top", legend.text.align = 0, legend.background = ggplot2::element_blank(),
legend.title = ggplot2::element_blank(), legend.key = ggplot2::element_blank()) +
labs(title="My Fitbit's activity values boxplot",
subtitle = sprintf("From %s until %s", first_date, last_date)) +
xlab("Level") + ylab("Minutes")
ggsave(sprintf("%s%s_%s_%s.jpg", plots_dir,'activity_levels_boxplot', first_date, last_date), plot = p,
width = 13, height = 7, units = 'in')
p <- ggplot() +
geom_line(data=subset(df,dateTime<=as.character(first_date)),aes(x=dateTime,y=value, color=level),
linetype=2) +
geom_point(data=subset(df,dateTime<=as.character(first_date)),aes(x=dateTime,y=value)) +
geom_line(data=subset(df,dateTime>=as.character(first_date)),aes(x=dateTime,y=value, color=level),
linetype=1) +
geom_point(data=subset(df,dateTime>=as.character(first_date)),aes(x=dateTime,y=value)) +
scale_x_date(date_labels = "%Y-%m-%d", date_breaks="1 day") +
labs(title="Minutes at activity level according to my Fitbit",
subtitle = sprintf("From %s until %s", first_date, last_date)) +
theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
bbc_style() +
xlab('Date') + ylab('Value') +
theme(plot.margin = unit(c(1.0,1.5,1.0,0.5), "cm"))
ggsave(sprintf("%s%s_%s_%s.jpg", plots_dir,'activity_levels_values', first_date, last_date), plot = p,
width = 11, height = 5, units = 'in')
}
# this function produces the plots of the different activities
create_activity_plots <- function() {
# produce activity plots
for (f in list.files(data_dir)) {
unit <- ""
activity.name <- gsub("\\..*","",f)
# ignore the files that are related to activity level per minutes
if (grepl('minutes',f)) {
next
}
if (activity.name == 'elevation') {
unit <- ' (m)'
} else if (activity.name == 'distance') {
unit <- ' (km)'
} else {
next
}
print(f)
df <- read.csv(sprintf("%s/%s", data_dir, f))
df$dateTime <- as.Date(df$dateTime)
df <- df[df$dateTime >= args[1] & df$dateTime < args[2],]
first_date <- head(df, n=1)$dateTime
last_date <- tail(df, n=1)$dateTime
print(skim(df))
p <- ggplot() +
geom_line(data=subset(df,dateTime<=as.character(first_date)),aes(x=dateTime,y=value),
linetype=2, color='#6d7d03') +
geom_point(data=subset(df,dateTime<=as.character(first_date)),aes(x=dateTime,y=value)) +
geom_line(data=subset(df,dateTime>=as.character(first_date)),aes(x=dateTime,y=value),
linetype=1, color='#6d7d03') +
geom_point(data=subset(df,dateTime>=as.character(first_date)),aes(x=dateTime,y=value)) +
scale_x_date(date_labels = "%Y-%m-%d", date_breaks="1 day") +
labs(title=sprintf("My Fitbit's \"%s\"%s values", activity.name, unit),
subtitle = sprintf("From %s until %s", first_date, last_date)) +
theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
xlab('Date') + ylab('Value') +
theme(plot.margin = unit(c(1.0,1.0,1.0,0.5), "cm")) +
bbc_style()
ggsave(sprintf("%s%s_%s_%s_%s.jpg", plots_dir, activity.name, 'plot', first_date, last_date), plot = p,
width = 10, height = 5, units = 'in')
}
}
create_distance_steps_plots <- function() {
print("create_distance_steps_plots")
# produce activity plots
distance.df <- read.csv("~/Development/wanderdata-scripts/fitbit/data/distance.csv")
steps.df <- read.csv("~/Development/wanderdata-scripts/fitbit/data/steps.csv")
print("Steps")
print(skim(steps.df))
distance.df$metric <- "distance"
steps.df$metric <- "steps"
df <- rbind(distance.df, steps.df)
df$dateTime <- as.Date(df$dateTime)
df <- df[df$dateTime >= args[1] & df$dateTime < args[2],]
first_date <- head(df, n=1)$dateTime
last_date <- tail(df, n=1)$dateTime
print(cor(distance.df$value, steps.df$value))
p <- ggplot() +
geom_line(data=subset(df,dateTime<=as.character(first_date)),aes(x=dateTime,y=value, color = metric)) +
geom_point(data=subset(df,dateTime<=as.character(first_date)),aes(x=dateTime,y=value)) +
geom_line(data=subset(df,dateTime>=as.character(first_date)),aes(x=dateTime,y=value, color = metric)) +
geom_point(data=subset(df,dateTime>=as.character(first_date)),aes(x=dateTime,y=value)) +
scale_y_continuous(sec.axis= sec_axis(~.*1, name="Steps"), trans = "log10") +
scale_x_date(date_labels = "%Y-%m-%d", date_breaks="1 day") +
bbc_style() +
theme(axis.title = element_text(size = 18),
plot.margin = unit(c(1.0,1.0,1.0,0.5), "cm"),
axis.text.x = element_text(angle = 45, hjust = 1)) +
labs(title="My Fitbit's distance (km) and steps values (in log scale)",
subtitle = sprintf("From %s until %s", first_date, last_date)) +
ylab("Distance") +
xlab("Date")
ggsave(sprintf("%s%s_%s_%s.jpg", plots_dir, "distance_steps_plot", first_date, last_date), plot = p,
width = 12, height = 6, units = 'in')
}
create_sleep_plots <- function() {
print("create_sleep_plots")
# produce activity plots
df <- read.csv("~/Development/wanderdata-scripts/fitbit/data/sleep.csv", stringsAsFactors = FALSE)
df$date <- date(df$date)
df <- df[df$date >= args[1] & df$date < args[2],]
df$start.time.posixct <-as.POSIXct(df$startTime, format="%Y-%m-%dT%H:%M:%OS")
df$end.time.posixct <-as.POSIXct(df$endTime, format="%Y-%m-%dT%H:%M:%OS")
df$decimal.start <- hour(df$start.time.posixct) + minute(df$start.time.posixct)/60
df$decimal.end <- hour(df$end.time.posixct) + minute(df$end.time.posixct)/60
print(skim(df))
first_date <- head(df, n=1)$date
last_date <- tail(df, n=1)$date
df.times <- data.frame(dateTime = df$date, startTime = df$decimal.start, endTime = df$decimal.end)
df.times <- melt(df.times, id.vars = c("dateTime"))
p <- ggplot() +
geom_line(data=df.times, aes(x = dateTime, y = value, color = variable)) +
geom_point(data=df.times ,aes(x=dateTime,y=value)) +
bbc_style() +
scale_x_date(date_labels = "%Y-%m-%d", date_breaks="1 day") +
scale_y_continuous(breaks = round(seq(min(df.times$value), max(df.times$value), by = 2))) +
theme(axis.title = element_text(size = 18),
plot.margin = unit(c(1.0,1.0,1.0,0.5), "cm"),
axis.text.x = element_text(angle = 45, hjust = 1)) +
labs(title="My sleeping times (start and end) according to Fitbit",
subtitle = sprintf("From %s until %s", first_date, last_date)) +
ylab("Hour") +
xlab("Date")
ggsave(sprintf("%s%s_%s_%s.jpg", plots_dir, "sleep_start_end_times", first_date, last_date), plot = p,
width = 12, height = 6, units = 'in')
p <- ggplot() +
geom_line(data=df, aes(x=date,y=minutesAsleep), color = "#6d7d03") +
geom_point(data=df ,aes(x=date,y=minutesAsleep)) +
bbc_style() +
scale_x_date(date_labels = "%Y-%m-%d", date_breaks="1 day") +
theme(axis.title = element_text(size = 18),
plot.margin = unit(c(1.0,1.0,1.0,0.5), "cm"),
axis.text.x = element_text(angle = 45, hjust = 1)) +
labs(title="My minutes asleep according to Fitbit",
subtitle = sprintf("From %s until %s", first_date, last_date)) +
ylab("Minutes") +
xlab("Date")
ggsave(sprintf("%s%s_%s_%s.jpg", plots_dir, "sleepy_minutes", first_date, last_date), plot = p,
width = 12, height = 6, units = 'in')
p <- ggplot(df, aes(x=date, y=minutesAsleep)) +
geom_boxplot(aes(group=1), fill = "#6d7d03") +
theme(plot.margin = unit(c(1.0,1.0,1.0,0.5), "cm"),
plot.title = element_text(family = 'Helvetica', size = 28, face = "bold", color = "#222222"),
plot.subtitle = element_text(family = 'Helvetica', size = 22, margin = ggplot2::margin(9, 0, 9, 0)),
axis.text = element_text(family = 'Helvetica', size = 18, color = "#222222"),
axis.title.x = element_text(family = 'Helvetica', size = 18, color = "#222222"),
axis.title.y = element_text(family = 'Helvetica', size = 18, color = "#222222"),
legend.text=element_text(size=14),
legend.position = "top", legend.text.align = 0, legend.background = ggplot2::element_blank(),
legend.title = ggplot2::element_blank(), legend.key = ggplot2::element_blank()) +
labs(title="My minutes asleep boxplot",
subtitle = sprintf("From %s until %s", first_date, last_date)) +
ylab("Minutes") +
xlab("Date")
ggsave(sprintf("%s%s_%s_%s.jpg", plots_dir,'sleepy_minutes_boxplot', first_date, last_date), plot = p,
width = 13, height = 7, units = 'in')
}
create_activity_level_plots()
create_activity_plots()
create_distance_steps_plots()
create_sleep_plots()
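create_sleep_plots() converts POSIXct timestamps to decimal hours (hour + minute/60) so start and end times share one numeric axis; in base R, the same conversion is roughly (a sketch; the function name is illustrative):

```r
# Hypothetical base-R equivalent of the lubridate hour()/minute() conversion.
to_decimal_hour <- function(t) {
  as.integer(format(t, "%H")) + as.integer(format(t, "%M")) / 60
}
```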
write.csv(data.frame(x=1:10),file="batch_test.csv",row.names = FALSE)
|
/tests/testthat/script_to_test_run_batch.R
|
no_license
|
shug0131/cctu
|
R
| false
| false
| 70
|
r
|
|
/Summarization_ v1.R
|
no_license
|
amart90-UI/Scripts
|
R
| false
| false
| 13,086
|
r
| ||
#Copyright © 2016 RTE Réseau de transport d’électricité
context("Function readInputThermal")
sapply(studyPathS, function(studyPath){
opts <- setSimulationPath(studyPath)
if(!isH5Opts(opts)){
test_that("Thermal availabilities importation works", {
input <- readInputThermal(clusters = "peak_must_run_partial", showProgress = FALSE)
expect_is(input, "antaresDataTable")
expect_gt(nrow(input), 0)
expect_equal(nrow(input) %% (24 * 7 * nweeks), 0)
})
test_that("Thermal modulation importation works", {
input <- readInputThermal(clusters = "peak_must_run_partial", thermalModulation = TRUE, showProgress = FALSE)
expect_is(input, "antaresDataList")
expect_is(input$thermalModulation, "antaresDataTable")
expect_gt(nrow(input$thermalModulation), 0)
expect_equal(nrow(input$thermalModulation) %% (24 * 7 * nweeks), 0)
})
test_that("Thermal data importation works", {
input <- readInputThermal(clusters = "peak_must_run_partial", thermalModulation = TRUE, showProgress = FALSE)
expect_is(input, "antaresDataList")
expect_is(input$thermalModulation, "antaresDataTable")
expect_gt(nrow(input$thermalModulation), 0)
expect_equal(nrow(input$thermalModulation) %% (24 * 7 * nweeks), 0)
})
}
})
|
/tests/testthat/test-readInputClusters.R
|
no_license
|
cran/antaresRead
|
R
| false
| false
| 1,367
|
r
|
|
# Political strongholds and organized violence in Kenya
# Francisco Villamil
# November 2014
# -----------------------------------------------------------
# ANALYSIS: NEGATIVE BINOMIAL MODELS & PREDICTED EFFECTS
# (R code file no. 1)
# NOTE (!): ASSIGN WORKING DIRECTORY TO THE REPLICATION FOLDER
library(MASS)
setwd(...)
# Read data file and limit sample to those constituencies with at least one reported event
data = read.csv("data.csv", header = TRUE)
data = subset(data, events.org == 1)
# Negative binomial models
# ------------------------
# Formulae
kik = formula(fat ~ kikuyu + I(kikuyu^2) + change.pov + odinga + ethnic.het + offset(log(population)))
kik2 = formula(fat ~ kikuyu * ethnic.het + change.pov + odinga + ethnic.het + offset(log(population)))
polco = formula(fat ~ pol.comp * ethnic.het + change.pov + offset(log(population)))
# Models & summaries
m.kik = glm.nb(formula = kik, data = data)
m.kik2 = glm.nb(formula = kik2, data = data)
m.polco = glm.nb(formula = polco, data = data)
summary(m.kik)
summary(m.kik2)
summary(m.polco)
# Variable effects in data frames
coefs.kik = data.frame(
variables = c("Kikuyu", "Kikuyu^2", "Change in\npoverty", "ODM share", "Ethnic\nheterog."),
coef = exp(summary(m.kik)$coefficients[2:6, 1]),
UL = exp(summary(m.kik)$coefficients[2:6, 1] + (1.96 * summary(m.kik)$coefficients[2:6, 2])),
LL = exp(summary(m.kik)$coefficients[2:6, 1] - (1.96 * summary(m.kik)$coefficients[2:6, 2]))
)
coefs.kik2 = data.frame(
variables = c("Kikuyu", "Ethnic\nheterog.", "Change in\npoverty", "ODM share", "Kik:Eth het"),
coef = exp(summary(m.kik2)$coefficients[2:6, 1]),
UL = exp(summary(m.kik2)$coefficients[2:6, 1] + (1.96 * summary(m.kik2)$coefficients[2:6, 2])),
LL = exp(summary(m.kik2)$coefficients[2:6, 1] - (1.96 * summary(m.kik2)$coefficients[2:6, 2]))
)
coefs.polco = data.frame(
variables = c("Political\ncompetition", "Ethnic\nheterog.", "Change in\npoverty", "Pol com:Eth het"),
coef = exp(summary(m.polco)$coefficients[2:5, 1]),
UL = exp(summary(m.polco)$coefficients[2:5, 1] + (1.96 * summary(m.polco)$coefficients[2:5, 2])),
LL = exp(summary(m.polco)$coefficients[2:5, 1] - (1.96 * summary(m.polco)$coefficients[2:5, 2]))
)
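The three `coefs.*` blocks above repeat the same pattern: exponentiate each estimate and its 95% Wald bounds to get incidence-rate ratios. A small helper (hypothetical name `irr_table`, not part of the original replication code) could factor this out; the sketch assumes a coefficient matrix shaped like `summary(model)$coefficients` (column 1 = estimate, column 2 = standard error).

```r
# Hypothetical helper factoring out the repeated IRR computation above.
irr_table <- function(coefs, rows, labels) {
  est <- coefs[rows, 1]
  se  <- coefs[rows, 2]
  data.frame(variables = labels,
             coef = exp(est),                 # incidence-rate ratio
             UL   = exp(est + 1.96 * se),     # upper 95% Wald bound
             LL   = exp(est - 1.96 * se))     # lower 95% Wald bound
}

# demo on a made-up coefficient matrix
cm <- matrix(c(0.50, 0.10,
              -0.20, 0.05), ncol = 2, byrow = TRUE,
             dimnames = list(c("x1", "x2"), c("Estimate", "Std. Error")))
out <- irr_table(cm, 1:2, c("x1", "x2"))
```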
# Political competition & ethnic heterogeneity interaction
# --------------------------------------------------------
# New data
newdf = data.frame(
population = rep(mean(data$population), 41*3),
change.pov = rep(mean(data$change.pov), 41*3),
ethnic.het = rep(seq(-1, 1, 0.05),3),
pol.comp = rep(c(-1, 0, 1), each = 41))
# Predicted effects
pred.pclow = predict(m.polco, newdata = subset(newdf, pol.comp == -1), type = "response", se.fit = TRUE)
pred.pcmean = predict(m.polco, newdata = subset(newdf, pol.comp == 0), type = "response", se.fit = TRUE)
pred.pchigh = predict(m.polco, newdata = subset(newdf, pol.comp == 1), type = "response", se.fit = TRUE)
# Effects and CIs
preds = data.frame(
ethnic.het = rep(seq(-1, 1, 0.05),3),
pol.comp = rep(c("Political competition = -1", "Political competition = 0", "Political competition = 1"), each = 41),
pred = c(pred.pclow$fit, pred.pcmean$fit, pred.pchigh$fit),
predUL = c(pred.pclow$fit + 1.96 * pred.pclow$se.fit,
pred.pcmean$fit + 1.96 * pred.pcmean$se.fit,
pred.pchigh$fit + 1.96 * pred.pchigh$se.fit),
predLL = c(pred.pclow$fit - 1.96 * pred.pclow$se.fit,
pred.pcmean$fit - 1.96 * pred.pcmean$se.fit,
pred.pchigh$fit - 1.96 * pred.pchigh$se.fit)
)
rm(pred.pclow, pred.pcmean, pred.pchigh, newdf)
|
/analysis.R
|
no_license
|
franvillamil/ov-kenya
|
R
| false
| false
| 3,479
|
r
|
|
# Loading Packages -----------
library(RODBC)
library(xts)
library(tseries)
# Read Market Return -------------
mkt.all.data <- read.table("D:/Stk Data/Index/TRD_Index.txt",header=TRUE)
head(mkt.all.data)
ret.mkt <- mkt.all.data[mkt.all.data$Indexcd == 902, c("Trddt","Retindex")]
ret.mkt.xts <- xts(ret.mkt$Retindex, order.by=as.Date(ret.mkt$Trddt))
names(ret.mkt.xts) <- "ret.mkt"
head(ret.mkt.xts)
# Set Risk Free Rate ---------------
rf <- (1.0325)^(1/360) - 1
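The line above converts an annual rate (3.25%, under a 360-day convention — both assumptions of this script) into a daily rate. Compounding the daily rate back over 360 days should recover the annual gross factor:

```r
# Sanity check on the daily risk-free rate conversion above
rf <- (1.0325)^(1/360) - 1
annual_gross <- (1 + rf)^360
annual_gross  # recovers 1.0325
```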
# Connect with Access File -------------
conn <- odbcConnectAccess2007(access.file="D:/Stk Data/Stock/Stock.accdb",
uid="test",pwd="test")
|
/src/dg/financial/courseware/CAPM source.R
|
no_license
|
xenron/sandbox-da-r
|
R
| false
| false
| 642
|
r
|
|
library(depmixS4)  # depmix(), fit() and posterior() come from depmixS4
data = read.csv("./subway_data.csv")
data$weather_sunny = 0
data$weather_sunny[data$weather=="sunny"] = 1
hmm <- depmix(data$people ~ 1 + data$weather_sunny, family = poisson(), nstates = 2, data=data,respstart=c(10,10,10,10))
hmmfit <- fit(hmm, verbose = TRUE)
post_probs <- posterior(hmmfit)
layout(1:2)
plot(data$people,type="l")
data$state_pred = post_probs$state
matplot(post_probs[,-1], type='l', main='Regime Posterior Probabilities', ylab='Probability')
legend(x='topright', c('Closed','Open'), fill=1:2, bty='n')  # two labels, so two fill colours
table(data$state,data$state_pred)
data = read.csv("./subway_data_extended.csv")
data$weather_sunny = 0
data$weather_sunny[data$weather=="sunny"] = 1
hmm <- depmix(data = data, people ~ 1 + weather_sunny,transition=~1 + machinery_road, family = poisson(), nstates = 2, respstart=c(10,10,10,10) )
hmmfit <- fit(hmm, verbose = TRUE)
post_probs <- posterior(hmmfit)
data$state_pred = post_probs$state
table(data$state,data$state_pred)
|
/Chapter10/10__5__subwaymodel.R
|
permissive
|
PacktPublishing/R-Statistics-Cookbook
|
R
| false
| false
| 978
|
r
|
|
# AUTO GENERATED FILE - DO NOT EDIT
htmlTextarea <- function(children=NULL, id=NULL, n_clicks=NULL, n_clicks_timestamp=NULL, key=NULL, role=NULL, autoComplete=NULL, autoFocus=NULL, cols=NULL, disabled=NULL, form=NULL, inputMode=NULL, maxLength=NULL, minLength=NULL, name=NULL, placeholder=NULL, readOnly=NULL, required=NULL, rows=NULL, wrap=NULL, accessKey=NULL, className=NULL, contentEditable=NULL, contextMenu=NULL, dir=NULL, draggable=NULL, hidden=NULL, lang=NULL, spellCheck=NULL, style=NULL, tabIndex=NULL, title=NULL, loading_state=NULL, ...) {
wildcard_names = names(dash_assert_valid_wildcards(attrib = list('data', 'aria'), ...))
props <- list(children=children, id=id, n_clicks=n_clicks, n_clicks_timestamp=n_clicks_timestamp, key=key, role=role, autoComplete=autoComplete, autoFocus=autoFocus, cols=cols, disabled=disabled, form=form, inputMode=inputMode, maxLength=maxLength, minLength=minLength, name=name, placeholder=placeholder, readOnly=readOnly, required=required, rows=rows, wrap=wrap, accessKey=accessKey, className=className, contentEditable=contentEditable, contextMenu=contextMenu, dir=dir, draggable=draggable, hidden=hidden, lang=lang, spellCheck=spellCheck, style=style, tabIndex=tabIndex, title=title, loading_state=loading_state, ...)
if (length(props) > 0) {
props <- props[!vapply(props, is.null, logical(1))]
}
component <- list(
props = props,
type = 'Textarea',
namespace = 'dash_html_components',
propNames = c('children', 'id', 'n_clicks', 'n_clicks_timestamp', 'key', 'role', 'autoComplete', 'autoFocus', 'cols', 'disabled', 'form', 'inputMode', 'maxLength', 'minLength', 'name', 'placeholder', 'readOnly', 'required', 'rows', 'wrap', 'accessKey', 'className', 'contentEditable', 'contextMenu', 'dir', 'draggable', 'hidden', 'lang', 'spellCheck', 'style', 'tabIndex', 'title', 'loading_state', wildcard_names),
package = 'dashHtmlComponents'
)
structure(component, class = c('dash_component', 'list'))
}
|
/R/htmlTextarea.R
|
permissive
|
noisycomputation/dash-html-components
|
R
| false
| false
| 2,029
|
r
|
|
# author_@Aditya_Sharma
require(seqinr)
require(psych)
require(ggplot2)
require(MASS)
require(car)
require(neuralnet)
require(grid)
require(e1071)
############################################
#loading the data set
trainingdata <- read.csv("training_data.csv")
summary(trainingdata)
rowtotal <- nrow(trainingdata)
#############################################
# separating 0's and 1's response Data
traininigdata0 <- trainingdata[which(trainingdata$Resp==0),]
traininigdata1 <- trainingdata[which(trainingdata$Resp==1),]
#############################################
# Getting the base composition of the sequence IN ACGT
#N means BASE CAN be either A,C,G,OR T
#Y means Pyrimidine i.e either C OR T
######## FEATURE EXTRACTION #################
#length of the RTrans
#############################################
for (i in 1:rowtotal) {
trainingdata$rtlength[i]<- nchar(as.character(trainingdata$RT.Seq[i]))
}
trainingdata$rtlength
#length of the PR sequence is the same at 247 for each entry, so we won't take it into account
############################################
################# AS PR SEQ DATA IS NOT AVAILABLE FROM 992 ###########################
#initializing new training data with only 920 entries.
trainingdatanew <- trainingdata[-c(921:1000),]
View(trainingdatanew)
#####################CORRECT TILL HERE##############################################################################################################
#K-MERS
####################################################
for (i in 1:920) {
hiv_PRseq <- as.character(trainingdatanew$PR.Seq[i])
#splitting the sequence into a vector
hivseq1 <- strsplit(hiv_PRseq,"") #strsplit returns a list
hivseqvectornotation <- unlist(hivseq1) # unlisting to vector notation so as to use GC function
#One of the most fundamental properties of a genome sequence is its GC content,
#the fraction of the sequence that consists of Gs and Cs, ie. the %(G+C).
GCPR <- GC(hivseqvectornotation) # GC VALUE IN FRACTIONS NOT PERCENTAGE
# hivseql is a vector of character
prtable <- table(hivseq1) #prtable gives us the A C G T count in the nucleotide sequence
prtable <- as.data.frame(prtable) # converting to DF so as to manipulate values by picking it specifically
trainingdatanew$pr_A[i] <- prtable$Freq[1]
trainingdatanew$pr_C[i] <- prtable$Freq[2]
trainingdatanew$pr_G[i] <- prtable$Freq[3]
trainingdatanew$pr_R[i] <- prtable$Freq[4]
trainingdatanew$pr_T[i] <- prtable$Freq[5]
trainingdatanew$pr_Y[i] <- prtable$Freq[6]
trainingdatanew$PR_GC[i] <- GCPR
}
trainingdatanew[is.na(trainingdatanew)] <- 0 # Removing N/A values in the data frame to 0
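The loop above relies on seqinr's `GC()` for the G+C fraction. A pure base-R equivalent (hypothetical helper `gc_fraction`) makes the idea concrete; note it counts only unambiguous A/C/G/T bases in the denominator, whereas seqinr applies its own rules to ambiguity codes like R and Y, so the two can differ slightly on this data.

```r
# Base-R sketch of the GC fraction computed by seqinr::GC above.
gc_fraction <- function(seq_string) {
  bases <- strsplit(toupper(seq_string), "")[[1]]
  # G+C count over the count of unambiguous bases (an assumption;
  # seqinr treats ambiguity codes with its own conventions)
  sum(bases %in% c("G", "C")) / sum(bases %in% c("A", "C", "G", "T"))
}

gc_fraction("ATGCGC")  # 4 of 6 unambiguous bases are G or C
</imports>
```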
####################################### CORRECT TILL HERE #############################################
#RT data extraction
#########################################################################################
for (i in 1:920) {
hiv_rtseq <- as.character(trainingdatanew$RT.Seq[i])
#splitting the sequence into a vector
hivseq1 <- strsplit(hiv_rtseq,"")
hivseqvectornotation <- unlist(hivseq1) # TO GET VECTOR NOTATION OF SPLITTED RT SEQ
GCRT <- GC(hivseqvectornotation)
# hivseql is a vector of character
prtable <- table(hivseq1) #prtable gives us the A C G T count in the nucleotide sequence
prtable <- as.data.frame(prtable) # converting to DF so as to manipulate values by picking it specifically
trainingdatanew$RT_A[i] <- prtable$Freq[1]
trainingdatanew$RT_C[i] <- prtable$Freq[2]
trainingdatanew$RT_G[i] <- prtable$Freq[3]
trainingdatanew$RT_R[i] <- prtable$Freq[4]
trainingdatanew$RT_T[i] <- prtable$Freq[5]
trainingdatanew$RT_Y[i] <- prtable$Freq[6]
trainingdatanew$RT_GC[i] <- GCRT
}
trainingdatanew[is.na(trainingdatanew)] <- 0 # Removing N/A values in the data frame to 0
trainingdatanew<-trainingdatanew[,-3]
trainingdatanew<-trainingdatanew[,-3]
trainingdatanew<-trainingdatanew[,-1] #this removal came late, as I progressed to variable importance
View(trainingdatanew)
#our Data set has been augmented enough now to start building a model
#############################################################################################
############### correct till here ###########################################################
#1
colnames(trainingdatanew)
nn <- neuralnet(formula = Resp~VL.t0+CD4.t0+rtlength+pr_A+pr_C+pr_G+pr_R+pr_T+pr_Y+PR_GC+RT_A+RT_C+RT_G+RT_R+RT_T+RT_Y+RT_GC,data = trainingdatanew,hidden = 16,err.fct = "ce",linear.output = FALSE,stepmax = 100000000,learningrate = 0.01,algorithm = "backprop")
plot(nn)
nn$net.result[1]
nn1<- as.data.frame(ifelse(nn$net.result[[1]]>0.5,1,0))
nn1$original <- trainingdata$Resp
View(nn1)
svmfit <- svm(Resp ~ VL.t0 + CD4.t0 + rtlength + pr_A,
              data = trainingdatanew, kernel = "linear", cost = 0.1)
svmfit
# `col` was undefined in the original plot call; plot.svm wants the data plus
# a formula naming two predictors, e.g. plot(svmfit, trainingdatanew, VL.t0 ~ CD4.t0)
####################
#RANDOM FOREST ALGO###################################################################################################################################################
####################
#DataPartition
require(randomForest)
set.seed(123)
independentsample <- sample(2,nrow(trainingdatanew),replace=TRUE,prob=c(0.7,0.3))
train <- trainingdatanew[independentsample==1,]
test <-trainingdatanew[independentsample==2,]
#random forest
#classification model as well as regression
#as the response variable is a factor, classification is used here
#will help in feature selection as well as avoiding overfitting
#default 500 trees
#majority vote between trees will give us the predicted values
train$Resp<-as.factor(train$Resp) #important to convert to a factor so classification mode is used rather than regression
View(trainingdatanew)
set.seed(222)
rf <- randomForest(Resp~.,data=train)
print(rf) # confusion matrix and error rate 16.38%
#out of bag error is 16.38%
# predicting class 1 (response = 0) with greater accuracy
# 58 % error in predicting class 2: response =1
attributes(rf)
rf$confusion
require(caret)
#######################
#prediction and confusion matrix -train data
p1 <- predict(rf,train)
head(p1)
confusionMatrix(p1,train$Resp)
# accuracy is coming at 100%
#99.43% confidence interval
# oob error was 16.38% but the training accuracy is coming at 100%
# this looks like a mismatch, but it is not:
#in random forest the out-of-bag error estimates performance on data each tree has not yet seen
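The comments above note why OOB error and training accuracy disagree: each tree's OOB rows were never in that tree's bootstrap sample. The mechanism is just resampling with replacement — a bootstrap sample of size n omits roughly 1/e ≈ 36.8% of the rows. A quick base-R illustration:

```r
# Each bootstrap sample of size n leaves out ~ exp(-1) ~ 36.8% of rows;
# those omitted rows form the tree's out-of-bag test set.
set.seed(1)
n <- 10000
bag <- sample(n, n, replace = TRUE)   # one tree's bootstrap sample
oob_frac <- 1 - length(unique(bag)) / n
oob_frac  # close to exp(-1), i.e. about 0.368
```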
############################
#prediction with TEST data[out of bag]
############################
p2 <- predict(rf,test)
confusionMatrix(p2, as.factor(test$Resp))  # test$Resp must be a factor too, like train$Resp
#accuracy has come down to 79.1208 %
#confidence interval at 73.811% to 83.78%
#############################################
#ERROR RATE
############################
plot(rf)
# the out-of-bag error decreases initially as the number of trees grows, but we are not able to improve the error after around 100 trees
#########################
#TUNING
#########################
View(train)
set.seed(222)
t<-tuneRF(train[,-2],train[,2],
stepFactor = 0.5,
plot = TRUE,
ntreeTry =50 ,
improve = 0.05)
#bottom at mtry = 2 becomes the lowest, where
#onwards we get an increasing error
#so let's change the rf model with ntreeTry at 350 to lower the error from 16.38%
# at ntreeTry = 50 we see that the error decreases until that mtry
#again we test with rftuned2
set.seed(222)
rftuned <- randomForest(Resp~.,data=train,
ntree=100,
mtry=2,
importance=TRUE,
proximity=TRUE)
print(rftuned)
#the out-of-bag error being 16.85%
#so we will accept just the rf original
set.seed(222)
rftuned2 <- randomForest(Resp~.,data=train,
ntree=50,
mtry=8,
importance=TRUE,
proximity=TRUE)
print(rftuned2)
#17.16% out of bag error
# thus we keep rf
#no of nodes for the trees
hist(treesize(rf),
main="no of nodes for the trees",
col ="green")
#overall distribution is between 50 and 90 nodes
#majority of the trees on average have 70 nodes
#Variable importance
#######################
varImpPlot(rf)
#patient id shall be removed as it seems to be making the most of the difference
#this point got out of my head before
#VL.T0 AND RT_A THAN RT_C THAN RT_GC ARE THE TOP FOUR VARIABLES THAT PROVIDE ACCURACY
varImpPlot(rf,
sort=T,
n.var=6,
main = "top 6 variable importance")
importance(rf)
varUsed(rf)#which predicter variables were actually used in the random forest model
#14th variable was used the least
#i.e RT_G
partialPlot(rf,
train,
VL.t0,
"1")
#tends to predict class 1 more strongly
#when VL.t0 is > 3.4
###############################################################################################################################
###############################################################################################################################
testfinal <- read.csv("test_data.csv")
View(testfinal)
rowtotaltest <- nrow(testfinal) # was nrow(trainingdata): the test-set row count must come from testfinal
#length of the RTrans
#############################################
for (i in 1:rowtotaltest) {
testfinal$rtlength[i]<- nchar(as.character(testfinal$RT.Seq[i]))
}
testfinal$rtlength
#length of the PR sequence is the same at 247 for each entry, so we won't take it into account
#####################CORRECT TILL HERE##############################################################################################################
#K-MERS
####################################################
for (i in 1:692) {
hiv_PRseq <- as.character(testfinal$PR.Seq[i])
#splitting the sequence into a vector
hivseq1 <- strsplit(hiv_PRseq,"") #strsplit returns a list
hivseqvectornotation <- unlist(hivseq1) # unlisting to vector notation so as to use GC function
#One of the most fundamental properties of a genome sequence is its GC content,
#the fraction of the sequence that consists of Gs and Cs, ie. the %(G+C).
GCPR <- GC(hivseqvectornotation) # GC VALUE IN FRACTIONS NOT PERCENTAGE
# hivseql is a vector of character
prtable <- table(hivseq1) #prtable gives us the A C G T count in the nucleotide sequence
prtable <- as.data.frame(prtable) # converting to DF so as to manipulate values by picking it specifically
testfinal$pr_A[i] <- prtable$Freq[1]
testfinal$pr_C[i] <- prtable$Freq[2]
testfinal$pr_G[i] <- prtable$Freq[3]
testfinal$pr_R[i] <- prtable$Freq[4]
testfinal$pr_T[i] <- prtable$Freq[5]
testfinal$pr_Y[i] <- prtable$Freq[6]
testfinal$PR_GC[i] <- GCPR
}
testfinal[is.na(testfinal)] <- 0 # Removing N/A values in the data frame to 0
####################################### CORRECT TILL HERE #############################################
#RT data extraction
#########################################################################################
for (i in 1:692) { # was 1:920 (the training-set size); the test set has 692 rows
hiv_rtseq <- as.character(testfinal$RT.Seq[i])
#splitting the sequence into a vector
hivseq1 <- strsplit(hiv_rtseq,"")
hivseqvectornotation <- unlist(hivseq1) # TO GET VECTOR NOTATION OF SPLITTED RT SEQ
GCRT <- GC(hivseqvectornotation)
# hivseql is a vector of character
prtable <- table(hivseq1) #prtable gives us the A C G T count in the nucleotide sequence
prtable <- as.data.frame(prtable) # converting to DF so as to manipulate values by picking it specifically
testfinal$RT_A[i] <- prtable$Freq[1]
testfinal$RT_C[i] <- prtable$Freq[2]
testfinal$RT_G[i] <- prtable$Freq[3]
testfinal$RT_R[i] <- prtable$Freq[4]
testfinal$RT_T[i] <- prtable$Freq[5]
testfinal$RT_Y[i] <- prtable$Freq[6]
testfinal$RT_GC[i] <- GCRT
}
testfinal[is.na(testfinal)] <- 0 # Removing N/A values in the data frame to 0
testfinal<- testfinal[,-1]
testfinal<- testfinal[,-2]
testfinal<- testfinal[,-2]
View(testfinal)
#######################################################
pfinal <- predict(rf,testfinal)
testfinal$Resp <- pfinal
View(testfinal) # MY PREDICTED VALUES FOR RESPONSE
# NOTE: testfinal$Resp was just assigned from pfinal above, so this confusion
# matrix compares the predictions with themselves and is trivially perfect;
# true test labels are not available here.
confusionMatrix(pfinal,testfinal$Resp)
#Reference
#Prediction 0 1
#0 569 0
#1 0 123
|
/hivfinal.r
|
no_license
|
shinchan121/-Predict-HIV-Progression-_Pucho-Round
|
R
| false
| false
| 12,717
|
r
|
# author_@Aditya_Sharma
require(seqinr)
require(psych)
require(ggplot2)
require(MASS)
require(car)
require(neuralnet)
require(grid)
require(e1071)
############################################
#loading the data set
trainingdata <- read.csv("training_data.csv")
summary(trainingdata)
rowtotal <- nrow(trainingdata)
#############################################
# separating 0's and 1's response Data
traininigdata0 <- trainingdata[which(trainingdata$Resp==0),]
traininigdata1 <- trainingdata[which(trainingdata$Resp==1),]
#############################################
# Getting the base composition of the sequence IN ACGT
#N means BASE CAN be either A,C,G,OR T
#Y means Pyrimidine i.e either C OR T
######## FEATURE EXTRACTION #################
#length of the RTrans
#############################################
for (i in 1:rowtotal) {
trainingdata$rtlength[i]<- nchar(as.character(trainingdata$RT.Seq[i]))
}
trainingdata$rtlength
#length of the PRlength is same at 247 for each entry so we wont take it into account
############################################
################# AS PR SEQ DATA IS NOT AVAILABLE FROM 992 ###########################
#initializing new training data with only 920 entries.
trainingdatanew <- trainingdata[-c(921:1000),]
View(trainingdatanew)
#####################CORRECT TILL HERE##############################################################################################################
#K-MERS
####################################################
for (i in 1:920) {
hiv_PRseq <- as.character(trainingdatanew$PR.Seq[i])
#splitting the sequence into a vector
hivseq1 <- strsplit(hiv_PRseq,"") #strsplit returns a list
hivseqvectornotation <- unlist(hivseq1) # unlisting to vector notation so as to use GC function
#One of the most fundamental properties of a genome sequence is its GC content,
#the fraction of the sequence that consists of Gs and Cs, ie. the %(G+C).
GCPR <- GC(hivseqvectornotation) # GC VALUE IN FRACTIONS NOT PERCENTAGE
# hivseql is a vector of character
prtable <- table(hivseq1) #prtable is gives us the A C G T count in a neuclotide
prtable <- as.data.frame(prtable) # converting to DF so as to manipulate values by picking it specifically
trainingdatanew$pr_A[i] <- prtable$Freq[1]
trainingdatanew$pr_C[i] <- prtable$Freq[2]
trainingdatanew$pr_G[i] <- prtable$Freq[3]
trainingdatanew$pr_R[i] <- prtable$Freq[4]
trainingdatanew$pr_T[i] <- prtable$Freq[5]
trainingdatanew$pr_Y[i] <- prtable$Freq[6]
trainingdatanew$PR_GC[i] <- GCPR
}
trainingdatanew[is.na(trainingdatanew)] <- 0 # Removing N/A values in the data frame to 0
####################################### CORRECT TILL HERE #############################################
#RT data extraction
#########################################################################################
for (i in 1:920) {
hiv_rtseq <- as.character(trainingdatanew$RT.Seq[i])
#splitting the sequence into a vector
hivseq1 <- strsplit(hiv_rtseq,"")
hivseqvectornotation <- unlist(hivseq1) # TO GET VECTOR NOTATION OF SPLITTED RT SEQ
GCRT <- GC(hivseqvectornotation)
# hivseql is a vector of character
prtable <- table(hivseq1) #prtable gives us the A, C, G, R, T, Y counts in the nucleotide sequence
prtable <- as.data.frame(prtable) # converting to a data frame so counts can be picked by position
trainingdatanew$RT_A[i] <- prtable$Freq[1]
trainingdatanew$RT_C[i] <- prtable$Freq[2]
trainingdatanew$RT_G[i] <- prtable$Freq[3]
trainingdatanew$RT_R[i] <- prtable$Freq[4]
trainingdatanew$RT_T[i] <- prtable$Freq[5]
trainingdatanew$RT_Y[i] <- prtable$Freq[6]
trainingdatanew$RT_GC[i] <- GCRT
}
trainingdatanew[is.na(trainingdatanew)] <- 0 # Replacing NA values in the data frame with 0
trainingdatanew<-trainingdatanew[,-3]
trainingdatanew<-trainingdatanew[,-3]
trainingdatanew<-trainingdatanew[,-1] #this removal came later, once I had progressed to variable importance
View(trainingdatanew)
#our Data set has been augmented enough now to start building a model
#############################################################################################
############### correct till here ###########################################################
#1
colnames(trainingdatanew)
nn <- neuralnet(formula = Resp ~ VL.t0 + CD4.t0 + rtlength + pr_A + pr_C + pr_G + pr_R +
                  pr_T + pr_Y + PR_GC + RT_A + RT_C + RT_G + RT_R + RT_T + RT_Y + RT_GC,
                data = trainingdatanew, hidden = 16, err.fct = "ce",
                linear.output = FALSE, stepmax = 100000000,
                learningrate = 0.01, algorithm = "backprop")
plot(nn)
nn$net.result[1]
nn1<- as.data.frame(ifelse(nn$net.result[[1]]>0.5,1,0))
nn1$original <- trainingdatanew$Resp # compare against the 920-row data the net was trained on
View(nn1)
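The long hand-typed formula passed to `neuralnet()` above can also be built programmatically from a vector of column names with base R's `reformulate()`, which avoids typos when the feature set changes; a sketch:

```r
# Build "Resp ~ <all features>" from a character vector of predictor names.
features <- c("VL.t0", "CD4.t0", "rtlength", "pr_A", "pr_C", "pr_G", "pr_R",
              "pr_T", "pr_Y", "PR_GC", "RT_A", "RT_C", "RT_G", "RT_R",
              "RT_T", "RT_Y", "RT_GC")
f <- reformulate(features, response = "Resp")
f # Resp ~ VL.t0 + CD4.t0 + ... + RT_GC
```

The resulting formula object can be passed straight to `neuralnet()`, `randomForest()`, or `svm()` in place of the literal one.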
svmfit <- svm(Resp ~ VL.t0 + CD4.t0 + rtlength + pr_A,
              data = trainingdatanew, kernel = "linear", cost = 0.1)
svmfit
plot(svmfit, trainingdatanew, VL.t0 ~ CD4.t0) # plot.svm needs the data and a formula selecting two dimensions
####################
#RANDOM FOREST ALGO###################################################################################################################################################
####################
#DataPartition
require(randomForest)
set.seed(123)
independentsample <- sample(2,nrow(trainingdatanew),replace=TRUE,prob=c(0.7,0.3))
train <- trainingdatanew[independentsample==1,]
test <-trainingdatanew[independentsample==2,]
#random forest
#classification model as well as regression
#as the response variable is a factor, this is classification
#will help in feature selection as well as avoiding overfitting
#default 500 trees
#majority vote between trees will give us the predicted values
train$Resp<-as.factor(train$Resp) #important to convert to a factor so classification mode is used rather than regression
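The majority vote between trees mentioned in the comments above can be sketched in isolation (a toy vote matrix, not real model output):

```r
# Each row is an observation, each column one tree's 0/1 class vote.
tree_votes <- matrix(c(1, 0, 1,
                       0, 0, 1), nrow = 2, byrow = TRUE)
majority <- apply(tree_votes, 1, function(v) as.integer(mean(v) > 0.5))
majority # 1 0 : row 1 gets two of three votes for class 1
```

randomForest does this vote internally across its (by default) 500 trees when `Resp` is a factor.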
View(trainingdatanew)
set.seed(222)
rf <- randomForest(Resp~.,data=train
)
print(rf) # confusion matrix and error rate 16.38%
#out-of-bag error is 16.38%
# predicting class 1 (response = 0) with greater accuracy
# 58% error in predicting class 2 (response = 1)
attributes(rf)
rf$confusion
require(caret)
#######################
#prediction and confusion matrix -train data
p1 <- predict(rf,train)
head(p1)
confusionMatrix(p1,train$Resp)
# accuracy is coming in at 100%
#99.43% confidence interval
# OOB error was 16.38% but the accuracy is coming in at 100%
# this looks like a mismatch, but it is not one:
#in random forest, the out-of-bag error estimates performance on data the model has not yet seen
############################
#prediction with TEST data[out of bag]
############################
p2 <- predict(rf,test)
confusionMatrix(p2,test$Resp)
#accuracy has come down to 79.1208 %
#confidence interval at 73.811% to 83.78%
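The accuracy figure quoted above comes straight off the confusion matrix: correct predictions (the diagonal) divided by all predictions. A sketch with illustrative counts (not the actual test results):

```r
# Accuracy from a 2x2 confusion matrix; the counts here are made up.
cm <- matrix(c(120, 14,
                24, 24), nrow = 2, byrow = TRUE)
accuracy <- sum(diag(cm)) / sum(cm)
round(accuracy, 4) # 144 correct out of 182
```

`caret::confusionMatrix` reports this same ratio along with its confidence interval.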
#############################################
#ERROR RATE
############################
plot(rf)
# the out-of-bag error decreases initially as the number of trees grows, but we are not able to improve the error after around 100 trees
#########################
#TUNING
#########################
View(train)
set.seed(222)
t<-tuneRF(train[,-2],train[,2],
stepFactor = 0.5,
plot = TRUE,
ntreeTry =50 ,
improve = 0.05)
#the bottom at mtry = 2 is the lowest point, after which
#the error starts increasing
#so let's change the rf model's ntree to try to lower the error from 16.38%
#at ntreeTry = 50 we see the error decrease up to that mtry
#we test again with rftuned2
set.seed(222)
rftuned <- randomForest(Resp~.,data=train,
ntree=100,
mtry=2,
importance=TRUE,
proximity=TRUE)
print(rftuned)
#the out-of-bag error being 16.85%
#so we will accept just the rf original
set.seed(222)
rftuned2 <- randomForest(Resp~.,data=train,
ntree=50,
mtry=8,
importance=TRUE,
proximity=TRUE)
print(rftuned2)
#17.16% out of bag error
# thus we keep rf
#no of nodes for the trees
hist(treesize(rf),
main="no of nodes for the trees",
col ="green")
#overall distribution is between 50 and 90 nodes
#the majority of trees have around 70 nodes on average
#Variable importance
#######################
varImpPlot(rf)
#patient id shall be removed as it seems to be making most of the difference
#this point had slipped my mind before
#VL.t0 and RT_A, then RT_C, then RT_GC are the top four variables that provide accuracy
varImpPlot(rf,
sort=T,
n.var=6,
           main = "top 6 variable importance"
)
importance(rf)
varUsed(rf) #which predictor variables were actually used in the random forest model
#14th variable was used the least
#i.e RT_G
partialPlot(rf,
train,
VL.t0,
"1")
#tends to predict class 1 more strongly
#when VL.t0 is > 3.4
###############################################################################################################################
###############################################################################################################################
testfinal <- read.csv("test_data.csv")
View(testfinal)
rowtotaltest <- nrow(testfinal)
#length of the RTrans
#############################################
for (i in 1:rowtotaltest) {
testfinal$rtlength[i]<- nchar(as.character(testfinal$RT.Seq[i]))
}
testfinal$rtlength
#length of PR.Seq is the same (247) for each entry, so we won't take it into account
#####################CORRECT TILL HERE##############################################################################################################
#K-MERS
####################################################
for (i in 1:692) {
hiv_PRseq <- as.character(testfinal$PR.Seq[i])
#splitting the sequence into a vector
hivseq1 <- strsplit(hiv_PRseq,"") #strsplit returns a list
hivseqvectornotation <- unlist(hivseq1) # unlisting to vector notation so as to use GC function
#One of the most fundamental properties of a genome sequence is its GC content,
#the fraction of the sequence that consists of Gs and Cs, ie. the %(G+C).
GCPR <- GC(hivseqvectornotation) # GC VALUE IN FRACTIONS NOT PERCENTAGE
# hivseql is a vector of character
prtable <- table(hivseq1) #prtable gives us the A, C, G, R, T, Y counts in the nucleotide sequence
prtable <- as.data.frame(prtable) # converting to a data frame so counts can be picked by position
testfinal$pr_A[i] <- prtable$Freq[1]
testfinal$pr_C[i] <- prtable$Freq[2]
testfinal$pr_G[i] <- prtable$Freq[3]
testfinal$pr_R[i] <- prtable$Freq[4]
testfinal$pr_T[i] <- prtable$Freq[5]
testfinal$pr_Y[i] <- prtable$Freq[6]
testfinal$PR_GC[i] <- GCPR
}
testfinal[is.na(testfinal)] <- 0 # Replacing NA values in the data frame with 0
####################################### CORRECT TILL HERE #############################################
#RT data extraction
#########################################################################################
for (i in 1:692) { # testfinal has 692 rows, matching the PR loop above
hiv_rtseq <- as.character(testfinal$RT.Seq[i])
#splitting the sequence into a vector
hivseq1 <- strsplit(hiv_rtseq,"")
hivseqvectornotation <- unlist(hivseq1) # TO GET VECTOR NOTATION OF SPLITTED RT SEQ
GCRT <- GC(hivseqvectornotation)
# hivseql is a vector of character
prtable <- table(hivseq1) #prtable gives us the A, C, G, R, T, Y counts in the nucleotide sequence
prtable <- as.data.frame(prtable) # converting to a data frame so counts can be picked by position
testfinal$RT_A[i] <- prtable$Freq[1]
testfinal$RT_C[i] <- prtable$Freq[2]
testfinal$RT_G[i] <- prtable$Freq[3]
testfinal$RT_R[i] <- prtable$Freq[4]
testfinal$RT_T[i] <- prtable$Freq[5]
testfinal$RT_Y[i] <- prtable$Freq[6]
testfinal$RT_GC[i] <- GCRT
}
testfinal[is.na(testfinal)] <- 0 # Replacing NA values in the data frame with 0
testfinal<- testfinal[,-1]
testfinal<- testfinal[,-2]
testfinal<- testfinal[,-2]
View(testfinal)
#######################################################
pfinal <- predict(rf,testfinal)
testfinal$Resp <- pfinal
View(testfinal) # MY PREDICTED VALUES FOR RESPONSE
# Note: testfinal$Resp was just set to pfinal, so this confusion matrix
# compares the predictions with themselves and is trivially perfect.
confusionMatrix(pfinal,testfinal$Resp)
#Reference
#Prediction 0 1
#0 569 0
#1 0 123
% Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/extractSaveData.R
\name{l_getSavedata_Fileinfo}
\alias{l_getSavedata_Fileinfo}
\title{local function that does the work of getSaveData_Fileinfo}
\usage{
l_getSavedata_Fileinfo(outfile, outfiletext)
}
\arguments{
\item{outfile}{A character string giving the name of the Mplus
output file.}
\item{outfiletext}{The contents of the output file, for example as read by \code{scan}}
}
\value{
A list that includes:
\item{fileName}{The name of the file containing the analysis dataset created by the Mplus SAVEDATA command.}
\item{fileVarNames}{A character vector containing the names of variables in the dataset.}
\item{fileVarFormats}{A character vector containing the Fortran-style formats of variables in the dataset.}
\item{fileVarWidths}{A numeric vector containing the widths of variables in the dataset (which is stored in fixed-width format).}
\item{bayesFile}{The name of the BPARAMETERS file containing draws from the posterior distribution created by
the Mplus SAVEDATA BPARAMETERS command.}
\item{bayesVarNames}{A character vector containing the names of variables in the BPARAMETERS dataset.}
\item{tech3File}{A character vector of the tech 3 output.}
\item{tech4File}{A character vector of the tech 4 output.}
}
\description{
This function is split out so that \code{getSaveData_Fileinfo} is
exposed to the user, but the parsing function can be used by
\code{readModels}
}
\examples{
# make me!
}
\seealso{
\code{\link{getSavedata_Data}}
}
\keyword{internal}
/man/l_getSavedata_Fileinfo.Rd
no_license
clbustos/MplusAutomation
##change-point simulation for wang2021 paper
library(mvtnorm)
library(fMultivar)
library(MASS)
library(sde)
library(rrcov)
library(xtable)
library(sn)
library(doParallel)
library(doRNG)
#results directory
dirr<- "WBSIP"
setwd(dirr)
#simulation parameters
#N
Ns<-c(100,200)
#simulation size, num repetitions
sim.size=100
mod=100
#change below
#0,2,3.5 cps
thetas<-list(c(.333,.666),
c(.25,.5,.75),
c(1/6,2/6,3/6,4/6,5/6))
##d= 2,3,5,10
ds=c(2,3,5,10)
distributions<-c("Normal", "Cauchy")
##Create Parameter Vector
numUniqueRuns<-length(Ns)*length(thetas)*length(ds)*length(distributions)
paramterIndices<-matrix(0,ncol=4,nrow=numUniqueRuns)
curr=0
for(i1 in 1:length(Ns)){
for(i2 in 1:length(thetas)){
for(i3 in 1:length(distributions)){
for(i4 in 1:length(ds)){
curr=curr+1
paramterIndices[curr,]=c(i1,i2,i3,i4)
}
}
}
}
#necessary packages
pkgs<-c("mvtnorm" , "fMultivar" ,"MASS" , "sde" , "rrcov" , "sn" ,'ecp' )
no_cores<-as.integer(Sys.getenv("SLURM_CPUS_PER_TASK")) # getenv returns a string; makeCluster needs an integer
cl <- makeCluster(no_cores,type="FORK")
registerDoParallel(cl)
registerDoRNG(seed = 440)
#simulation functions
#run simulation
for(i in 1:numUniqueRuns){
params<-paramterIndices[i,]
N=Ns[params[1]]
theta=thetas[[params[2]]]
distr=distributions[[params[3]]]
d=ds[params[4]]
dName=distr
numInt=floor(log(Ns[params[1]]))*mod
simData<-function(N,theta,dist,d){
#simulation functions, used to generate data, simulate the change size
change_sizes=sample(1:10,5,FALSE)
#generates iid d-normal, sigmaxI Rv
normalMaster<-function(n,d,sigmaSq){mvtnorm::rmvnorm(n,sigma=diag(sigmaSq*rep(1,d)))}
norm1<-function(n,d){normalMaster(n,d,change_sizes[1])}
norm2<-function(n,d){normalMaster(n,d,change_sizes[2])}
norm3<-function(n,d){normalMaster(n,d,change_sizes[3])}
norm4<-function(n,d){normalMaster(n,d,change_sizes[4])}
norm5<-function(n,d){normalMaster(n,d,change_sizes[5])}
normals<-list(norm1,norm2,norm3,norm4,norm5,norm1)
#generates iid d-cauchy, sigmaxI scale
cauchyMaster<-function(n,d,sigmaSq){replicate(d,rcauchy(n,scale=sigmaSq))}
cauchy1<-function(n,d){cauchyMaster(n,d,change_sizes[1])}
cauchy2<-function(n,d){cauchyMaster(n,d,change_sizes[2])}
cauchy3<-function(n,d){cauchyMaster(n,d,change_sizes[3])}
cauchy4<-function(n,d){cauchyMaster(n,d,change_sizes[4])}
cauchy5<-function(n,d){cauchyMaster(n,d,change_sizes[5])}
cauchys<-list(cauchy1,cauchy2,cauchy3,cauchy4,cauchy5,cauchy1)
distributions<-list(normals,cauchys)
#cp locations
if(!is.null(theta))
locations<-c(1,floor(theta*N),N)
else
locations<-N
#data
dat<-matrix(0,nrow=N,ncol=d)
if(dist=="Normal"){
rdata=normals
}
else{
rdata=cauchys
}
for(i in 2:(length(theta)+2))
dat[locations[i-1]:locations[i],]<-rdata[[i-1]](length(locations[i-1]:locations[i]),d)
return(dat)
}
#run one repetition
runSim<-function(N,theta,dist,d,numInt){
#functions for BSOP
St=function(t,s,e,data){
# print(paste0("t ",t))
# print(paste0("s ",s))
# print(paste0("e ",e))
if(is.null(dim(data)))
data=matrix(data,ncol=1)
term1=sqrt((e-t)/((e-s)*(t-s)))*(t(data[(s+1):t,])%*%(data[(s+1):t,]))
term2=sqrt((t-s)/((e-t)*(e-s)))*(t(data[(t+1):e,])%*%data[(t+1):e,])
return(term1-term2)
}
BSOP=function(s,e,data,tau=1,cps=NULL){
FLAG=0
red=data[(s+1):e]
p=ncol(data)
n=nrow(data)
n_se=(e-s)
if(n_se>(2*p*log(n)+1) & FLAG==0){
t_range=ceiling(s+p*log(n)):floor(e-p*log(n))
set_op_norms=sapply(t_range,function(t){norm(St(t,s,e,data),type="2")} )
a=max(set_op_norms)
if(a<=tau){
FLAG=1
return(NULL)
}
else{
b=t_range[which.max(set_op_norms)]
cps=rbind(cps,c(b,a))
# cps=c(cps, BSOP(s,b-1,data,tau,cps))
# cps=c(cps, BSOP(b-1,e,data,tau,cps))
return(rbind(cps, BSOP(s,b-1,data,tau,cps), BSOP(b-2,e,data,tau,cps)))
}
}
else
return(NULL)
}
getIntervals<-function(indices,M){
ints<-t(replicate(M,sort(sample(indices,2))))
diffs<-(ints[,2]-ints[,1])==1
if(any(diffs)){
ints[diffs,]=getIntervals(indices,sum(diffs))
return(ints)
}
else{
return(ints)
}
}
PC1=function(Xt,alpha,beta){
p=ncol(Xt)
n=nrow(Xt)
if((beta-alpha)>2*p*log(n)+1){
t_vals=ceiling(alpha+p*log(n)):floor(beta-p*log(n))
dm=t_vals[which.max(sapply(t_vals,function(t){norm(St(t,alpha,beta,Xt),type="2")} ))]
um=eigen(St(dm,alpha,beta,Xt))$vectors[,1]
}
else{um=rep(0,p)}
return(um)
}
PC=function(Wt,intervals){
M=nrow(intervals)
ums=NULL
for(i in 1:M){
ums=rbind(ums,PC1(Wt,intervals[i,1],intervals[i,2]))
}
return(ums)
}
#let the set of intervals be a matrix with 2 columns
WBSIP<-function(data,s,e,intervals,tau){
# sig.level=sig.level/2
# threshold=qBB(1-sig.level)$root
p=ncol(data)
n=nrow(data)
Wt=data[seq(1,nrow(data),by=2),]
Xt=data[seq(2,nrow(data),by=2),]
M=nrow(intervals)
#u has M rows
u=PC(Wt,intervals)
#
# s=floor(s/2)
# e=floor(e/2)
# intervals2=floor(intervals)/2
#M by n
Ytu=u%*%t(Xt)
Ytu=Ytu^2
if((e-s)<(2*p*log(n/2)+1))
return(NULL)
else{
#intervals contained in s,e
# Mes<-which(apply(intervals2,1,checkIfSubInterval,super=c(s,e)))
left_endpoint=sapply(intervals[,1],function(x){max(x,s)})
right_endpoint=sapply(intervals[,2],function(x){min(x,e)})
Mes=which((right_endpoint-left_endpoint)>=(2*log(n/2)+1))
if(length(Mes)>1){
am=rep(-1,M)
bm=rep(-1,M)
for(j in Mes){
t_vals=ceiling(left_endpoint[j]+log(n/2)):floor(right_endpoint[j]-log(n/2))
candidate_ys<-sapply(t_vals,function(t){abs(St(t,left_endpoint[j],right_endpoint[j],Ytu[j,]))} )
mm=which.max(candidate_ys)
bm[j]=t_vals[mm[1]]
am[j]=candidate_ys[mm[1]]
}
m=which.max(am)
if(am[m[1]]>tau){
# sig.level=sig.level/2
return(rbind(c(bm[m[1]]*2,am[m[1]]),
WBSIP(data,s,bm[m[1]],intervals,tau),
WBSIP(data,bm[m[1]]+1,e,intervals,tau)))
}
else
return(NULL)
}
else
return(NULL)
}
}
# unique(BSOP(p*log(n),n-p*log(n),data,50*sqrt(n)^.9))
p=d
n=N
#sim data
testData<-simData(N,theta,dist,d)
s.test<-1
e.test<-N
#get intervals
# intervals<-getIntervals(1:e.test,numInt)
intervals=getIntervals(0:floor(n/2),numInt)
# original
# big_enough=function(i){(i[2]-i[1])>(2*p*log(n/2)+1)}
big_enough=function(i){(i[2]-i[1])>(p*log(n/2)+1)/4}
intervals=intervals[apply(intervals, 1, big_enough),]
count=1
while(nrow(intervals)<numInt&&count<100){
intervals2=getIntervals(0:floor(n/2),numInt)
intervals=rbind(intervals,intervals2[apply(intervals2, 1, big_enough),])
count=count+1
}
intervals=intervals[1:numInt,]
#runBS
# WBSIP(testData,p*log(n/2)+1,n/2-p*log(n/2)+1,intervals,1)
cp_WBSIP<-WBSIP(testData,p*log(n/2)+1,n/2-p*log(n/2)+1,intervals,1)
cp_BSOP<-BSOP(p*log(n),n-p*log(n),testData,1)
return(list(WBSIP=cp_WBSIP,BSOP=cp_BSOP))
}
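`getIntervals()` defined inside `runSim()` above draws M random intervals and resamples any of length 1, so every returned interval spans at least two indices. A standalone sketch of that property (the function is repeated here so the example is self-contained):

```r
# Random intervals whose endpoints are at least 2 apart (adjacent pairs resampled).
getIntervals <- function(indices, M) {
  ints <- t(replicate(M, sort(sample(indices, 2))))
  diffs <- (ints[, 2] - ints[, 1]) == 1
  if (any(diffs)) {
    ints[diffs, ] <- getIntervals(indices, sum(diffs))
  }
  ints
}
set.seed(440)
ints <- getIntervals(0:50, 20)
all(ints[, 2] - ints[, 1] >= 2) # TRUE
```

The recursion terminates with probability one because each resample draws a fresh pair without replacement.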
#parallel running
#necessary packages
#
# no_cores<-detectCores()-1
#
#
# cl <- makeCluster(no_cores,type="FORK")
#
# registerDoParallel(cl)
#for windows
#clusterExport(cl=cl,c("normalMaster","cauchyMaster","skewNormalMaster"))
# registerDoRNG(seed = 440)
results=try({foreach(i=1:sim.size,.packages= pkgs) %dopar% {runSim(N,theta,distr,d,numInt)}})
error=inherits(results, "try-error")
if(error){
print("there was an error!")
}
fileName<-paste0(N,"_",length(theta),"_",dName,"_",d,"_",numInt,"_WANG_simsize_",sim.size) # paste0 takes no sep argument
fileName1<-paste(dirr,fileName,".Rda",sep="")
save(results,file=fileName1)
print(i/numUniqueRuns)
}
registerDoSEQ()
stopCluster(cl)
closeAllConnections()
|
/WANG2021/CP_SIM_CODES_SS.R
|
no_license
|
12ramsake/MVT-WBS-RankCUSUM
|
R
| false
| false
| 9,123
|
r
|
##change-point simulation for wang2021 paper
library(mvtnorm)
library(fMultivar)
library(MASS)
library(sde)
library(rrcov)
library(xtable)
library(sn)
library(doParallel)
library(doRNG)
#results dirrectory
dirr<- "WBSIP"
setwd(dirr)
#simulation parameters
#N
Ns<-c(100,200)
#simulation size, num repetitions
sim.size=100
mod=100
#xhange below
#0,2,3.5 cps
thetas<-list(c(.333,.666),
c(.25,.5,.75),
c(1/6,2/6,3/6,4/6,5/6))
##d= 2,3,5,10
ds=c(2,3,5,10)
distributions<-c("Normal", "Cauchy")
##Create Parameter Vector
numUniqueRuns<-length(Ns)*length(thetas)*length(ds)*length(distributions)
paramterIndices<-matrix(0,ncol=4,nrow=numUniqueRuns)
curr=0
for(i1 in 1:length(Ns)){
for(i2 in 1:length(thetas)){
for(i3 in 1:length(distributions)){
for(i4 in 1:length(ds)){
curr=curr+1
paramterIndices[curr,]=c(i1,i2,i3,i4)
}
}
}
}
#necessary packages
pkgs<-c("mvtnorm" , "fMultivar" ,"MASS" , "sde" , "rrcov" , "sn" ,'ecp' )
no_cores<-Sys.getenv("SLURM_CPUS_PER_TASK")
cl <- makeCluster(no_cores,type="FORK")
registerDoParallel(cl)
registerDoRNG(seed = 440)
#simulation functions
#run simulation
for(i in 1:numUniqueRuns){
params<-paramterIndices[i,]
N=Ns[params[1]]
theta=thetas[[params[2]]]
distr=distributions[[params[3]]]
d=ds[params[4]]
dName=distr
numInt=floor(log(Ns[params[1]]))*mod
simData<-function(N,theta,dist,d){
#simulation functions, used to generate data, simulate the change size
change_sizes=sample(1:10,5,FALSE)
#generates iid d-normal, sigmaxI Rv
normalMaster<-function(n,d,sigmaSq){mvtnorm::rmvnorm(n,sigma=diag(sigmaSq*rep(1,d)))}
norm1<-function(n,d){normalMaster(n,d,change_sizes[1])}
norm2<-function(n,d){normalMaster(n,d,change_sizes[2])}
norm3<-function(n,d){normalMaster(n,d,change_sizes[3])}
norm4<-function(n,d){normalMaster(n,d,change_sizes[4])}
norm5<-function(n,d){normalMaster(n,d,change_sizes[5])}
normals<-list(norm1,norm2,norm3,norm4,norm5,norm1)
#generates iid d-cauchy, sigmaxI scale
cauchyMaster<-function(n,d,sigmaSq){replicate(d,rcauchy(n,scale=sigmaSq))}
cauchy1<-function(n,d){cauchyMaster(n,d,change_sizes[1])}
cauchy2<-function(n,d){cauchyMaster(n,d,change_sizes[2])}
cauchy3<-function(n,d){cauchyMaster(n,d,change_sizes[3])}
cauchy4<-function(n,d){cauchyMaster(n,d,change_sizes[4])}
cauchy5<-function(n,d){cauchyMaster(n,d,change_sizes[5])}
cauchys<-list(cauchy1,cauchy2,cauchy3,cauchy4,cauchy5,cauchy1)
distributions<-list(normals,cauchys)
#cp locations
if(!is.null(theta))
locations<-c(1,floor(theta*N),N)
else
locations<-N
#data
dat<-matrix(0,nrow=N,ncol=d)
if(dist=="Normal"){
rdata=normals
}
else{
rdata=cauchys
}
for(i in 2:(length(theta)+2))
dat[locations[i-1]:locations[i],]<-rdata[[i-1]](length(locations[i-1]:locations[i]),d)
return(dat)
}
#run one repetition
runSim<-function(N,theta,dist,d,numInt){
#functions for BSOP
St=function(t,s,e,data){
# print(paste0("t ",t))
# print(paste0("s ",s))
# print(paste0("e ",e))
if(is.null(dim(data)))
data=matrix(data,ncol=1)
term1=sqrt((e-t)/((e-s)*(t-s)))*(t(data[(s+1):t,])%*%(data[(s+1):t,]))
term2=sqrt((t-s)/((e-t)*(e-s)))*(t(data[(t+1):e,])%*%data[(t+1):e,])
return(term1-term2)
}
BSOP=function(s,e,data,tau=1,cps=NULL){
FLAG=0
red=data[(s+1):e]
p=ncol(data)
n=nrow(data)
n_se=(e-s)
if(n_se>(2*p*log(n)+1) & FLAG==0){
t_range=ceiling(s+p*log(n)):floor(e-p*log(n))
set_op_norms=sapply(t_range,function(t){norm(St(t,s,e,data),type="2")} )
a=max(set_op_norms)
if(a<=tau){
FLAG=1
return(NULL)
}
else{
b=t_range[which.max(set_op_norms)]
cps=rbind(cps,c(b,a))
# cps=c(cps, BSOP(s,b-1,data,tau,cps))
# cps=c(cps, BSOP(b-1,e,data,tau,cps))
return(rbind(cps, BSOP(s,b-1,data,tau,cps), BSOP(b-2,e,data,tau,cps)))
}
}
else
return(NULL)
}
getIntervals<-function(indices,M){
ints<-t(replicate(M,sort(sample(indices,2))))
diffs<-(ints[,2]-ints[,1])==1
if(any(diffs)){
ints[diffs,]=getIntervals(indices,sum(diffs))
return(ints)
}
else{
return(ints)
}
}
PC1=function(Xt,alpha,beta){
p=ncol(Xt)
n=nrow(Xt)
if((beta-alpha)>2*p*log(n)+1){
t_vals=ceiling(alpha+p*log(n)):floor(beta-p*log(n))
dm=t_vals[which.max(sapply(t_vals,function(t){norm(St(t,alpha,beta,Xt),type="2")} ))]
um=eigen(St(dm,alpha,beta,Xt))$vectors[,1]
}
else{um=rep(0,p)}
return(um)
}
PC=function(Wt,intervals){
M=nrow(intervals)
ums=NULL
for(i in 1:M){
ums=rbind(ums,PC1(Wt,intervals[i,1],intervals[i,2]))
}
return(ums)
}
#let the set of intervals be a matrix with 2 columns
WBSIP<-function(data,s,e,intervals,tau){
# sig.level=sig.level/2
# threshold=qBB(1-sig.level)$root
p=ncol(data)
n=nrow(data)
Wt=data[seq(1,nrow(data),by=2),]
Xt=data[seq(2,nrow(data),by=2),]
M=nrow(intervals)
#u has M rows
u=PC(Wt,intervals)
#
# s=floor(s/2)
# e=floor(e/2)
# intervals2=floor(intervals)/2
#M by n
Ytu=u%*%t(Xt)
Ytu=Ytu^2
if((e-s)<(2*p*log(n/2)+1))
return(NULL)
else{
#intervals contained in s,e
# Mes<-which(apply(intervals2,1,checkIfSubInterval,super=c(s,e)))
left_endpoint=sapply(intervals[,1],function(x){max(x,s)})
right_endpoint=sapply(intervals[,2],function(x){min(x,e)})
Mes=which((right_endpoint-left_endpoint)>=(2*log(n/2)+1))
if(length(Mes)>1){
am=rep(-1,M)
bm=rep(-1,M)
for(j in Mes){
t_vals=ceiling(left_endpoint[j]+log(n/2)):floor(right_endpoint[j]-log(n/2))
candidate_ys<-sapply(t_vals,function(t){abs(St(t,left_endpoint[j],right_endpoint[j],Ytu[j,]))} )
mm=which.max(candidate_ys)
bm[j]=t_vals[mm[1]]
am[j]=candidate_ys[mm[1]]
}
m=which.max(am)
if(am[m[1]]>tau){
# sig.level=sig.level/2
return(rbind(c(bm[m[1]]*2,am[m[1]]),
WBSIP(data,s,bm[m[1]],intervals,tau),
WBSIP(data,bm[m[1]]+1,e,intervals,tau)))
}
else
return(NULL)
}
else
return(NULL)
}
}
# unique(BSOP(p*log(n),n-p*log(n),data,50*sqrt(n)^.9))
p=d
n=N
#sim data
testData<-simData(N,theta,dist,d)
s.test<-1
e.test<-N
#get intervals
# intervals<-getIntervals(1:e.test,numInt)
intervals=getIntervals(0:floor(n/2),numInt)
# original
# big_enough=function(i){(i[2]-i[1])>(2*p*log(n/2)+1)}
big_enough=function(i){(i[2]-i[1])>(p*log(n/2)+1)/4}
intervals=intervals[apply(intervals, 1, big_enough),]
count=1
while(nrow(intervals)<numInt&&count<100){
intervals2=getIntervals(0:floor(n/2),numInt)
intervals=rbind(intervals,intervals2[apply(intervals2, 1, big_enough),])
count=count+1
}
intervals=intervals[1:numInt,]
#runBS
# WBSIP(testData,p*log(n/2)+1,n/2-p*log(n/2)+1,intervals,1)
cp_WBSIP<-WBSIP(testData,p*log(n/2)+1,n/2-p*log(n/2)+1,intervals,1)
cp_BSOP<-BSOP(p*log(n),n-p*log(n),testData,1)
return(list(WBSIP=cp_WBSIP,BSOP=cp_BSOP))
}
#parallel running
#necessary packages
#
# no_cores<-detectCores()-1
#
#
# cl <- makeCluster(no_cores,type="FORK")
#
# registerDoParallel(cl)
#for windows
#clusterExport(cl=cl,c("normalMaster","cauchyMaster","skewNormalMaster"))
# registerDoRNG(seed = 440)
results=try({foreach(i=1:sim.size,.packages= pkgs) %dopar% {runSim(N,theta,distr,d,numInt)}})
error=inherits(results, "try-error")
if(error){
print("there was an error!")
}
fileName<-paste0(N,"_",length(theta),"_",dName,"_",d,"_",numInt,"_WANG_simsize_",sim.size,sep="")
fileName1<-paste(dirr,fileName,".Rda",sep="")
save(results,file=fileName1)
print(i/numUniqueRuns)
}
registerDoSEQ()
stopCluster(cl)
closeAllConnections()
|
virtualArrayMergeRecurse <-
function (dfs, by, ...)
{
if (length(dfs) == 2) {
merge(dfs[[1]], dfs[[2]], all = FALSE, sort = FALSE, ...)
}
else {
merge(dfs[[1]], Recall(dfs[-1]), all = FALSE, sort = FALSE,
...)
}
}
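`virtualArrayMergeRecurse` folds a list of data frames into one inner join (note `all = FALSE`; the `by` argument is accepted but not forwarded to `merge`, which therefore joins on the shared column names). A usage sketch with toy data frames (the function is repeated so the example runs on its own):

```r
virtualArrayMergeRecurse <- function(dfs, by, ...) {
  if (length(dfs) == 2) {
    merge(dfs[[1]], dfs[[2]], all = FALSE, sort = FALSE, ...)
  } else {
    merge(dfs[[1]], Recall(dfs[-1]), all = FALSE, sort = FALSE, ...)
  }
}
a <- data.frame(id = 1:3, x = c(10, 20, 30))
b <- data.frame(id = 2:4, y = c("b", "c", "d"))
d <- data.frame(id = c(2, 3), z = c(TRUE, FALSE))
m <- virtualArrayMergeRecurse(list(a, b, d))
m # only id 2 and 3 survive the chained inner joins
```

Each recursive step merges the head of the list with the merge of the tail, so rows must appear in every data frame to survive.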
/R/virtualArrayMergeRecurse.R
no_license
scfurl/virtualArray
###################################################### -*- mode: r -*- #####
## Scenario setup for Iterated Race (iRace).
############################################################################
## File that contains the description of the parameters.
parameterFile = "./parameters.txt"
## Directory where the programs will be run.
execDir = "./exec"
## The script called for each configuration that launches the program to be
## tuned. See templates/target-runner.tmpl
targetRunner = "./target-runner"
## Directory where training instances are located, either absolute or relative
## to current directory.
trainInstancesDir = "../demo-maxsat/inst/"
## File with a list of instances and (optionally) parameters.
## If empty or NULL, do not use a file.
trainInstancesFile = "./training.txt"
## The maximum number of runs (invocations of targetRunner) that will be
## performed. It determines the (maximum) budget of experiments for the tuning.
maxExperiments = 300
## Indicates the number of decimal places to be considered for the
## real parameters.
digits = 2
## A file containing a list of initial configurations.
## If empty or NULL, do not use a file.
# configurationsFile = "default.txt"
## File containing a list of logical expressions that cannot be true
## for any evaluated configuration. If empty or NULL, do not use a file.
# forbiddenFile = "forbidden.txt"
## Directory where testing instances are located, either absolute or relative
## to current directory.
testInstancesDir = "../demo-maxsat/inst/"
## File containing a list of test instances and optionally additional
## parameters for them. If empty or NULL, do not use a file.
## Should actually not be the training set in a real application!
testInstancesFile = "training.txt"
## Number of elite configurations returned by irace that will be tested
## if test instances are provided.
testNbElites = 1
## Enable/disable testing the elite configurations found at each iteration.
testIterationElites = 1
/irace2/scenario.txt
permissive
jaeraxdoto/mhlib
#' @rdname getUserMedia
#' @export
#'
#' @title
#' Extract recent media published by a user.
#'
#' @description
#' \code{getUserMedia} retrieves public media from a given user and, optionally,
#' downloads recent pictures to a specified folder.
#'
#' IMPORTANT: After June 1st, 2016 only applications that have passed permission
#' review by Instagram will be allowed to access data for users other than the
#' authenticated user. See \url{https://www.instagram.com/developer/review/}
#' for more information.
#'
#' @author
#' Pablo Barbera \email{pablo.barbera@@nyu.edu}
#'
#' @param username String, screen name of user.
#'
#' @param token An OAuth token created with \code{instaOAuth}.
#'
#' @param n Maximum number of media to return. Currently it is only possible to
#' download the 20 most recent pictures or videos on the authenticated user's
#' profile, unless Instagram has approved the application.
#'
#' @param folder If different than \code{NULL}, will download all pictures
#' to this folder.
#'
#' @param userid Numeric ID of user.
#'
#' @param verbose If \code{TRUE} (default), outputs details about progress
#' of function on the console.
#'
#'
#' @examples \dontrun{
#' ## See examples for instaOAuth to know how token was created.
#' ## Capturing information about 50 most recent pictures by @@barackobama
#' load("my_oauth")
#' obama <- getUserMedia( username="barackobama", token=my_oauth, n=50, folder="barackobama")
#' }
#'
getUserMedia <- function(username, token, n=30, folder=NULL, userid=NULL, verbose=TRUE){
if (is.null(userid)){
url <- paste0("https://api.instagram.com/v1/users/search?q=", username)
content <- callAPI(url, token)
if (length(content$data)==0) stop(c("Error. User name not found. ",
"Does this application have permission to access public content?"))
userid <- as.numeric(content$data[[1]]$id)
}
url <- paste0("https://api.instagram.com/v1/users/", userid, "/media/recent?count=",
as.character(min(c(n, 30))))
content <- callAPI(url, token)
if (content$meta$code==400){
stop(content$meta$error_message)
}
l <- length(content$data)
if (verbose) message(l, " posts")
## retrying 3 times if error was found
error <- 0
  while (is.null(content$meta) || content$meta$code != 200){
message("Error!")
Sys.sleep(0.5)
error <- error + 1
content <- callAPI(url, token)
if (error==3){ stop("Error") }
}
if (length(content$data)==0){
    stop("No public posts were found for this user")
}
df <- content$data
if (!is.null(folder)){
if (verbose) message("Downloading pictures...")
downloadPictures(df, folder)
}
if (n>20){
df.list <- list(df)
  while (l<n && length(content$data)>0 && (length(content$pagination)!=0) &&
!is.null(content$pagination['next_url'])){
content <- callAPI(content$pagination['next_url'], token)
l <- l + length(content$data)
if (length(content$data)>0){ message(l, " posts")}
## retrying 3 times if error was found
error <- 0
    while (is.null(content$meta) || content$meta$code != 200){
message("Error!")
Sys.sleep(0.5)
error <- error + 1
content <- callAPI(url, token)
if (error==3){ stop("Error") }
}
new.df <- searchListToDF(content$data)
# downloading pictures
if (!is.null(folder)){
if (verbose) message("Downloading pictures...")
downloadPictures(new.df, folder)
}
df.list <- c(df.list, list(new.df))
}
df <- do.call(rbind, df.list)
}
return(df)
}
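The retry logic above (re-issue the request after a short pause, give up after three failed attempts) is a generic pattern; here is a minimal sketch in Python, with `call_api` as a hypothetical stand-in for the API request, not part of the original file:

```python
import time

def call_with_retries(call_api, max_retries=3, delay=0.5):
    """Re-issue a request until its meta code is 200, mirroring the retry
    loop in getUserMedia. `call_api` is a hypothetical zero-argument
    callable returning a dict with a "meta" entry."""
    response = call_api()
    attempts = 0
    while response.get("meta") != 200:
        attempts += 1
        if attempts >= max_retries:
            raise RuntimeError("Error")  # mirrors stop("Error") above
        time.sleep(delay)
        response = call_api()
    return response
```

Note that the R loop compares the whole `meta` list against 200 with the non-short-circuiting `|`; the sketch checks a single code field with short-circuit evaluation, which is what the R code appears to intend.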
|
path: /instaR/R/getUserMedia.R | license_type: no_license | repo_name: darokun/instaR | language: R | is_vendor: false | is_generated: false | length_bytes: 3,902 | extension: r
|
## load library
## for data loading
library(data.table)
## for chaining
library(dplyr)
## load the data
dat <- fread("household_power_consumption.txt")
dat_tbl <- tbl_df(dat)
## transform Date as character to date format, same for time
dat_tbl <- mutate(dat_tbl,AsDate=as.Date(Date,format = "%d/%m/%Y"))
DateOfInterest <- c(as.Date("1/2/2007",format="%d/%m/%Y"),as.Date("2/2/2007",format="%d/%m/%Y"))
sample <- filter(dat_tbl, dat_tbl$AsDate %in% DateOfInterest)
rm("dat","dat_tbl")
GAP <- as.numeric(sample$Global_active_power)
sample <- mutate(sample,DateTime=as.POSIXct(paste(Date, Time), format="%d/%m/%Y %H:%M:%S"))
## create graph
## define png as device and png plot size
png("Plot2.png",width=480,height = 480)
## line plot of global active power over time
plot(sample$DateTime,sample$Global_active_power,type='l',ylab = "Global Active Power (kilowatts)",xlab="")
## close png as device
dev.off()
|
path: /Plot2.R | license_type: no_license | repo_name: MS3186/ExData_Plotting1 | language: R | is_vendor: false | is_generated: false | length_bytes: 946 | extension: r
|
testlist <- list(A = structure(c(2.31584307392677e+77, 9.53818252170339e+295, 1.22810536108214e+146, 1.75738820099344e+159, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(5L, 7L)), B = structure(0, .Dim = c(1L, 1L)))
result <- do.call(multivariance:::match_rows,testlist)
str(result)
|
path: /multivariance/inst/testfiles/match_rows/AFL_match_rows/match_rows_valgrind_files/1613118544-test.R | license_type: no_license | repo_name: akhikolla/updatedatatype-list3 | language: R | is_vendor: false | is_generated: false | length_bytes: 343 | extension: r
|
library(tidyverse)
library(caret)
library(doSNOW)
setwd("~/Downloads")
#Read in and combine the data
Titanic.test <- read.csv("test.csv",stringsAsFactors = FALSE)
Titanic.train <- read.csv("train.csv",stringsAsFactors = FALSE)
View(Titanic.test)
View(Titanic.train)
full <- bind_rows(Titanic.train,Titanic.test)
View(full)
# Feature engineering and fit in the missing values
sapply(full,function(x) sum(is.na(x)))
sapply(full,function(x) sum(x==""))
full$Famsize <- full$SibSp+full$Parch+1
full$Title <- sapply(full$Name, FUN=function(x) {strsplit(x, split='[,.]')[[1]][2]})
full$Title <- sub(' ', '', full$Title)
full$Title[full$Title %in% c('Mme', 'Mlle')] <- 'Mlle'
full$Title[full$Title %in% c('Capt', 'Don', 'Major', 'Sir')] <- 'Sir'
full$Title[full$Title %in% c('Dona', 'Lady', 'the Countess', 'Jonkheer')] <- 'Lady'
full$agegroup <- ifelse(is.na(full$Age),0,1)
table(full$Embarked)
full$Embarked[full$Embarked==""] <- "S"
table(full$Embarked)
full$Survived <- as.factor(full$Survived)
full$Pclass <- as.factor((full$Pclass))
full$Sex <- as.factor((full$Sex))
full$Embarked <- as.factor((full$Embarked))
full$Title <- as.factor(full$Title)
full <- select(full,Survived,Pclass,Sex,Age,Famsize,Fare,Embarked,Title)
#Fill in age using bagged tree imputation
dummy <- dummyVars(~.,data=full[,-1])
trainDummy <- predict(dummy,full[,-1])
View(trainDummy)
Pre <- preProcess(trainDummy,method="bagImpute")
imputed <- predict(Pre,trainDummy)
View(imputed)
#Categorize the age variable
full$Age <- imputed[,4]
full$age_new[full$Age<=18] <- "Young"
full$age_new[full$Age>18] <- "Old"
full <- select(full,-Age)
#Split the data
Train <- full[1:891,]
Test <- full[892:1309,]
Test %>%
filter(!is.na(Fare)) %>%
group_by(Pclass,Embarked) %>%
summarise(median=median(Fare))
Test$Fare[is.na(Test$Fare)] <- 8.05
#Hyperparameter tuning, data training and prediction using random forest
train.control <- trainControl(method="repeatedcv",number=10,repeats=5,
search="grid")
tune <- expand.grid(mtry=1:7)
cl <- makeCluster(2, type = "SOCK")
registerDoSNOW(cl)
training <- train(Survived~.,data=Train,method="rf",tuneGrid=tune,trControl=train.control)
stopCluster(cl)
predicting <- predict(training,Test)
solution <- data.frame(PassengerID = Titanic.test$PassengerId, Survived = predicting)
write.csv(solution,file="luck123.csv",row.names = F)
#Trying to fit a neural network
tune.grid <- expand.grid(size=1:20,decay=5e-4)
training <- train(Survived~.,data=Train,method="nnet",tuneGrid=tune.grid,trControl=train.control)
predicting <- predict(training,Test)
solution <- data.frame(PassengerID = Titanic.test$PassengerId, Survived = predicting)
write.csv(solution,file="luck.csv",row.names = F)
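The title feature above is extracted by splitting the passenger name on commas and periods and trimming the leading space; sketched in Python for illustration (not part of the original script):

```python
import re

def extract_title(name):
    """Pull the honorific out of a "Surname, Title. Given names" string,
    mirroring strsplit(x, split='[,.]')[[1]][2] plus the space removal."""
    return re.split(r"[,.]", name)[1].strip()
```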
|
path: /Titanic.R | license_type: no_license | repo_name: ntushao/Titanic | language: R | is_vendor: false | is_generated: false | length_bytes: 2,726 | extension: r
|
DIFlasso <-
function (Y, X, l.lambda = 20, trace = FALSE)
{
vardiffs <- abs(apply(X,2,var)-1)
if(sum(vardiffs)>1e-6)
stop("X has to be standardized")
if(sum(!apply(Y,2,unique) %in% c(0,1))>0)
stop("Y may only contain 0 or 1")
if(!is.data.frame(X))
stop("X has to be a data.frame")
if(!is.data.frame(Y))
stop("Y has to be a data.frame")
# print trace?
if (trace) {
trace <- 1
}else {
trace <- 0
}
# number of persons
P <- nrow(Y)
# number of items
I <- ncol(Y)
# number of covariates
l <- ncol(X)
# Convert data.frames into matrices
names.y <- names(Y)
names.x <- names(X)
Y <- matrix(as.double(as.matrix(Y)),ncol=I,byrow=FALSE)
X <- matrix(as.double(as.matrix(X)),ncol=l,byrow=FALSE)
# which persons have to be excluded?
exclu1 <- rowSums(Y) != I & rowSums(Y) != 0
exclu <- rep(exclu1, each = I)
# create the part of the design matrix containing the covariates
xp <- matrix(0, ncol = l * I, nrow = P * I)
suppressWarnings(for (q in 1:P) {
xp[((q - 1) * I + 1):(q * I), ] <- matrix(as.double(c(X[q,
], rep(0, l * I))), byrow = T, ncol = l * I, nrow = I)
})
xp <- xp[exclu, ]
# create the response vector
XP <- c()
for (t in 1:P) {
XP <- c(XP, Y[t, ])
}
XP <- XP[exclu]
# new (reduced) number of persons
ex <- sum(!exclu1)
P <- P - ex
# create the part of the design matrix containing the columns for person parameters theta
help1 <- c(1, rep(0, P - 2))
help2 <- c(rep(help1, I), 0)
suppressWarnings(help3 <- matrix(help2, ncol = P - 1, nrow = P *
I, byrow = TRUE))
help3[((P - 1) * I + 1):(P * I), ] <- 0
# create the part of the design matrix containing the columns for item parameters beta
help4 <- rep(diag(I), P)
help5 <- matrix(help4, ncol = I, nrow = P * I, byrow = TRUE)
# design matrix
design.matrix <- cbind(help3, -help5, -xp)
# index vector for group lasso penalization
index = c(rep(NA, sum(exclu1) + I - 1), rep(1:I, each = l))
# calculate maximal lambda
lmax <- lambdamax(design.matrix, XP, index, penscale = sqrt,
model = LogReg(), center = FALSE, standardize = FALSE)
# create sequence of lambdas
lambda.seq <- seq(from = lmax * 1.05, to = 1e-06, length = l.lambda)
# compute group lasso model
suppressWarnings(m2 <- grplasso(design.matrix, XP, index,
lambda = lambda.seq, penscale = sqrt, model = LogReg(),
center = FALSE, standardize = FALSE, control = grpl.control(trace = trace)))
# index for penalized parameters
index2 <- index[!is.na(index)]
# parameters for smallest lambda
coef.unpen <- m2$coef[!is.na(index), l.lambda]
# norm for parameters for smallest lambda
norm.unpen <- c()
for (o in 1:max(index2)) {
norm.unpen[o] <- sqrt(sum(coef.unpen[index2 == o]^2))
}
# group sizes
p.j <- table(index2)
# matrix of coefficients
coefs.unrest <- coefs <- m2$coef
# matrix for gammas
coefs.grp <- coefs[!is.na(index), ]
# norms of gammas
norm.gamma <- matrix(0, ncol = ncol(coefs), nrow = max(index2))
for (j in 1:ncol(coefs)) {
for (o in 1:max(index2)) {
norm.gamma[o, j] <- sqrt(sum(coefs.grp[index2 == o,
j]^2))
}
}
# degrees of freedom of all gammas
df.grp <- colSums(norm.gamma > 0) + colSums((norm.gamma/norm.unpen) *
(c(p.j) - 1))
# degrees of freedom for thetas and betas
df.unpen <- sum(is.na(index))
# total degrees of freedom
df <- df.grp + df.unpen
# predicted probabilities
pi.i <- predict(m2, type = "response")
# log likelihood
loglik <- colSums(XP * log(pi.i) - XP * log(1 - pi.i) + log(1 -
pi.i))
# BIC
bic <- -2 * loglik + df * log(length(XP))
# AIC
aic <- -2 * loglik + df * 2
####################
# which item will be the reference/anchor item
ref.item <- NULL
ind0 <- l.lambda
while(is.null(ref.item)){
cand <- which(colSums(matrix(coefs.grp[,ind0],nrow=l))==0)
if(length(cand)>0){
ref.item <- max(cand)
}else{ind0 <- ind0-1}
}
restrict <- function(x){
x <- x - rep(x[(l*ref.item-l+1):(l*ref.item)],I)
return(x)}
  # center gamma around reference item, only relevant when gamma_ref.item not zero
coefs.grp <- apply(coefs.grp,2,restrict)
# coefs2: add one row to coefs matrix because theta_P is not set zero anymore
coefs2 <- matrix(0,nrow=nrow(coefs)+1,ncol=ncol(coefs))
coefs2[(P+I+1):(nrow(coefs2)),] <- coefs.grp
# estimated thetas, centered around reference item ref.item
theta.hat <- predict(m2)[ref.item +seq(from=0, to= P*I-I, by=I),]
# estimated betas, centered around reference item ref.item
beta.hat <- coefs[(P):(P + I - 1), ]
beta.hat <- t(apply(beta.hat,1,function(x){return(x - beta.hat[ref.item,])}))
coefs2[1:P, ] <- theta.hat
# estimated betas
coefs2[(P+1):(P + I), ] <- beta.hat
dif.mat <- matrix(coefs.grp[,which.min(bic)],nrow=l, dimnames=list(names.x,names.y))
no.dif.cols <- which(coefs.grp[,which.min(bic)]==0)
design.matrix <- insertCol(design.matrix,P,c(rep(0,(P-1)*I),rep(1,I)))
design.matrix<- design.matrix[,-c(P+ref.item,P+I+no.dif.cols)]
design.matrix <- cbind(design.matrix,XP)
dif.items <- which(colSums(dif.mat)!=0)
returns <- list(theta = theta.hat, beta = beta.hat, gamma = coefs.grp, P = P, I = I, m = l,
                  logLik = loglik, BIC = bic, AIC = aic, df = df, refit.matrix = design.matrix,
lambda = lambda.seq, ref.item = ref.item, dif.mat = dif.mat, dif.items = dif.items,
names.y = names.y, names.x = names.x, removed.persons = which(!exclu1))
class(returns) <- "DIFlasso"
return(returns)
}
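The per-group Euclidean norms computed above for `norm.unpen` and `norm.gamma` (the quantity the group lasso penalizes) can be sketched in Python as follows; this is an illustration of the calculation, not the package's code:

```python
import math

def group_norms(coefs, index):
    """L2 norm of each coefficient group, as computed for norm.unpen and
    norm.gamma above. index[i] names the group that coefs[i] belongs to."""
    sums = {}
    for c, g in zip(coefs, index):
        sums[g] = sums.get(g, 0.0) + c * c
    return {g: math.sqrt(s) for g, s in sums.items()}
```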
|
path: /DIFlasso/R/DIFlasso.R | license_type: no_license | repo_name: ingted/R-Examples | language: R | is_vendor: false | is_generated: false | length_bytes: 6,250 | extension: r
|
rankhospital <- function(state, outcome, num = "best") {
## Read outcome data
full_data <- read.csv("outcome-of-care-measures.csv",colClasses = "character")
full_data[,c(11,17,23)] <- sapply(full_data[,c(11,17,23)] , as.numeric)
clear_data <- na.omit(full_data)
outcomes <- c("heart attack","heart failure","pneumonia")
states <- clear_data[,7]
## Check that state and outcome are valid
  if (!(state %in% states)) {
    stop("invalid state")
  }
  else if (!(outcome %in% outcomes)) {
    stop("invalid outcome")
  }
if (outcome == "heart attack") {
outcome_column <- 11
}
else if (outcome == "heart failure") {
outcome_column <- 17
}
else {
outcome_column <- 23
}
# order data by outcome
new_data <- clear_data[clear_data$State==state,]
sorted_data <- new_data[order(as.numeric(new_data[[outcome_column]]),
new_data[["Hospital.Name"]],decreasing=FALSE,na.last=NA), ]
if (num=="best") num = 1
if (num=='worst') num = nrow(sorted_data)
  # will automatically return NA if num > nrow, as well as if it's some other text value
  # if someone passes num < 1, they'll get what's expected
  # if (is.numeric(num) && num > nrow(sorted_data)) return(NA)
sorted_data[num,"Hospital.Name"]
}
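The handling of the `num` argument above ("best" maps to rank 1, "worst" to the last row, and out-of-range values yield NA) can be sketched in Python; `resolve_rank` is an illustrative name, not part of the assignment:

```python
def resolve_rank(num, n_rows):
    """Map the num argument to a 1-based row index: "best" is rank 1,
    "worst" the last row; out-of-range or non-numeric values give None
    (the R code returns NA in those cases)."""
    if num == "best":
        return 1
    if num == "worst":
        return n_rows
    if isinstance(num, int) and 1 <= num <= n_rows:
        return num
    return None
```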
|
path: /R-Programming-Assignment-3/rankhospital.r | license_type: no_license | repo_name: xiiicyw/R-programming | language: R | is_vendor: false | is_generated: false | length_bytes: 1,371 | extension: r
|
#[export]
var2tests <- function(x, y = NULL, ina, alternative = "unequal", logged = FALSE) {
if ( is.null(y) ) {
s1 <- Rfast::colVars( x[ ina == 1, ] )
s2 <- Rfast::colVars( x[ ina == 2, ] )
n1 <- sum( ina == 1 )
n2 <- length(ina) - n1
} else {
s1 <- Rfast::colVars( x )
s2 <- Rfast::colVars( y )
n1 <- dim(x)[1]
n2 <- dim(y)[1]
}
stat <- s1 / s2
if ( alternative == "unequal" ) {
if ( logged ) {
a <- pf( stat, n1 - 1, n2 - 1, log.p = TRUE )
pvalue <- log(2) + Rfast::rowMins( cbind(a, 1 - a), value = TRUE )
} else {
a <- pf( stat, n1 - 1, n2 - 1 )
pvalue <- 2 * Rfast::rowMins( cbind(a, 1 - a), value = TRUE )
}
} else if ( alternative == "greater" ) {
pvalue <- pf(stat, n1 - 1, n2 - 1, lower.tail = FALSE, log.p = logged)
} else if ( alternative == "less" ) {
pvalue <- pf(stat, n1 - 1, n2 - 1, log.p = logged)
}
cbind(stat, pvalue)
}
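The "unequal" branch above folds the one-sided F CDF value into a two-sided p-value via `2 * min(a, 1 - a)`, with the statistic being a ratio of sample variances. A minimal Python sketch of those two pieces (just the folding and the variance, with no F distribution involved; function names are illustrative):

```python
def sample_var(xs):
    """Unbiased sample variance (n - 1 denominator), as colVars computes."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def two_sided_p(a):
    """Fold a one-sided CDF value of the F statistic into the two-sided
    p-value used in the "unequal" branch: 2 * min(a, 1 - a)."""
    return 2 * min(a, 1 - a)
```

The statistic itself is simply `sample_var(x) / sample_var(y)`, referred to an F distribution with n1 - 1 and n2 - 1 degrees of freedom.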
|
path: /fuzzedpackages/Rfast/R/var2tests.R | license_type: no_license | repo_name: akhikolla/testpackages | language: R | is_vendor: false | is_generated: false | length_bytes: 993 | extension: r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/scale.R
\name{sca}
\alias{sca}
\title{Linear transformation of the IRT scale}
\usage{
sca(
old.ip,
new.ip,
old.items,
new.items,
old.qu = NULL,
new.qu = NULL,
method = "MS",
bec = FALSE
)
}
\arguments{
\item{old.ip}{A set of parameters that are already on the desired scale}
\item{new.ip}{A set of parameters that must be placed on the same scale as
\code{old.ip}}
\item{old.items}{A vector of indexes pointing to those items in
\code{old.ip} that are common to both sets of parameters}
\item{new.items}{The indexes of the same items in \code{new.ip}}
\item{old.qu}{A quadrature object for \code{old.ip}, typically produced by
the same program that estimated \code{old.ip}. Only needed if
\code{method="SL"} or \code{method="HB"}}
\item{new.qu}{A quadrature object for \code{new.ip}, typically produced by
the same program that estimated \code{new.ip}. Only needed if
\code{method="HB"}}
\item{method}{One of "MM" (Mean/Mean), "MS" (Mean/Sigma), "SL"
(Stocking-Lord), or "HB" (Haebara). Default is "MS"}
\item{bec}{Use back-equating correction? When TRUE, the Stocking-Lord or
Haebara procedures will be adjusted for back-equating (see Haebara, 1980).
Ignored when method is MM or MS. Default is FALSE.}
}
\value{
A list of: \item{slope}{The slope of the linear transformation}
\item{intercept}{The intercept of the linear transformation}
\item{scaled.ip}{The parameters in \code{new.ip} transformed to a scale that
is compatible with \code{old.ip}}
}
\description{
Linearly transform a set of IRT parameters to bring them to the scale of
another set of parameters. Four methods are implemented: Mean/Mean,
Mean/Sigma, Lord-Stocking, and Haebara.
}
\examples{
\dontrun{
# a small simulation to demonstrate transformation to a common scale
# fake 50 2PL items
pa <- cbind(runif(50,.8,2), runif(50,-2.4,2.4), rep(0,50))
# simulate responses with two samples of different ability levels
r.1 <- sim(ip=pa[1:30,], x=rnorm(1000,-.5))
r.2 <- sim(ip=pa[21:50,], x=rnorm(1000,.5))
# estimate item parameters
p.1 <- est(r.1, engine="ltm")
p.2 <- est(r.2, engine="ltm")
# plot difficulties to show difference in scale
plot(c(-3,3), c(-3,3), ty="n", xlab="True",ylab="Estimated",
main="Achieving common scale")
text(pa[1:30,2], p.1$est[,2], 1:30)
text(pa[21:50,2], p.2$est[,2], 21:50, co=2)
# scale with the default Mean/Sigma method
pa.sc = sca(old.ip=p.1$est, new.ip=p.2$est, old.items=21:30, new.items=1:10)
# add difficulties of scaled items to plot
text(pa[21:50,2], pa.sc$scaled.ip[,2], 21:50, co=3)
}
}
\references{
Kolen, M.J. & R.L. Brennan (1995) Test Equating: Methods and
Practices. Springer.
Haebara, T. (1980) Equating logistic ability scales by
a weighted least squares method. Japanese Psychological Research, 22,
p.144--149
}
\author{
Ivailo Partchev and Tamaki Hattori
}
\keyword{models}
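The default Mean/Sigma method documented above fits a line with slope sd(old)/sd(new) and intercept mean(old) - slope * mean(new), computed over the common-item difficulties. A minimal Python sketch of that linking step (illustrative only, not the package's implementation):

```python
import statistics

def mean_sigma(old_b, new_b):
    """Mean/Sigma linking coefficients from the common-item difficulties:
    slope * new + intercept places the new difficulties on the old scale."""
    slope = statistics.stdev(old_b) / statistics.stdev(new_b)
    intercept = statistics.mean(old_b) - slope * statistics.mean(new_b)
    return slope, intercept
```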
|
path: /man/sca.Rd | license_type: no_license | repo_name: cran/irtoys | language: R | is_vendor: false | is_generated: true | length_bytes: 2,902 | extension: rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/scale.R
\name{sca}
\alias{sca}
\title{Linear transformation of the IRT scale}
\usage{
sca(
old.ip,
new.ip,
old.items,
new.items,
old.qu = NULL,
new.qu = NULL,
method = "MS",
bec = FALSE
)
}
\arguments{
\item{old.ip}{A set of parameters that are already on the desired scale}
\item{new.ip}{A set of parameters that must be placed on the same scale as
\code{old.ip}}
\item{old.items}{A vector of indexes pointing to those items in
\code{old.ip} that are common to both sets of parameters}
\item{new.items}{The indexes of the same items in \code{new.ip}}
\item{old.qu}{A quadrature object for \code{old.ip}, typically produced by
the same program that estimated \code{old.ip}. Only needed if
\code{method="LS"} or \code{method="HB"}}
\item{new.qu}{A quadrature object for \code{new.ip}, typically produced by
the same program that estimated \code{new.ip}. Only needed if
\code{method="HB"}}
\item{method}{One of "MM" (Mean/Mean), "MS" (Mean/Sigma), "SL"
(Stocking-Lord), or "HB" (Haebara). Default is "MS"}
\item{bec}{Use back-equating correction? When TRUE, the Stocking-Lord or
Haebara procedures will be adjusted for back-equating (see Haebara, 1980).
Ignored when method is MM or MS. Default is FALSE.}
}
\value{
A list of: \item{slope}{The slope of the linear transformation}
\item{intercept}{The intercept of the linear transformation}
\item{scaled.ip}{The parameters in \code{new.ip} transformed to a scale that
is compatible with \code{old.ip}}
}
\description{
Linearly transform a set of IRT parameters to bring them to the scale of
another set of parameters. Four methods are implemented: Mean/Mean,
Mean/Sigma, Stocking-Lord, and Haebara.
}
\examples{
\dontrun{
# a small simulation to demonstrate transformation to a common scale
# fake 50 2PL items
pa <- cbind(runif(50,.8,2), runif(50,-2.4,2.4), rep(0,50))
# simulate responses with two samples of different ability levels
r.1 <- sim(ip=pa[1:30,], x=rnorm(1000,-.5))
r.2 <- sim(ip=pa[21:50,], x=rnorm(1000,.5))
# estimate item parameters
p.1 <- est(r.1, engine="ltm")
p.2 <- est(r.2, engine="ltm")
# plot difficulties to show difference in scale
plot(c(-3,3), c(-3,3), ty="n", xlab="True",ylab="Estimated",
main="Achieving common scale")
text(pa[1:30,2], p.1$est[,2], 1:30)
text(pa[21:50,2], p.2$est[,2], 21:50, co=2)
# scale with the default Mean/Sigma method
pa.sc <- sca(old.ip=p.1$est, new.ip=p.2$est, old.items=21:30, new.items=1:10)
# add difficulties of scaled items to plot
text(pa[21:50,2], pa.sc$scaled.ip[,2], 21:50, co=3)
}
}
\references{
Kolen, M.J. & R.L. Brennan (1995) Test Equating: Methods and
Practices. Springer.
Haebara, T. (1980) Equating logistic ability scales by
a weighted least squares method. Japanese Psychological Research, 22,
p.144--149
}
\author{
Ivailo Partchev and Tamaki Hattori
}
\keyword{models}
#' A function to compute partial F statistics
#'
#' This is an internal function used in FSelect. It can only be used for two
#' groups. The partial F statistic is the additional contribution to the model
#' from adding one more trait.
#'
#'
#' @param m.lda An object of class 'lda'
#' @param GROUP A factor vector indicating group membership
#' @param T_pm1 The F statistic calculated for a discriminant analysis with
#' only the most informative trait.
#' @return Returns a partial F statistic
#' @seealso \code{\link{FSelect}}
#' @references Habbema J, Hermans J (1977). Selection of Variables in
#' Discriminant Analysis by F-Statistics and Error Rate. Technometrics, 19(4),
#' 487 - 493.
#' @examples
#'
#' #Internal function used in FSelect
#'
#' data(Nuclei)
#' data(Groups)
#'
#' NPC<-floor(ncol(Nuclei)/5)
#'
#' DAT.comp<-completeData(Nuclei, n_pcs = NPC)
#' Groups.use<-c(1,2)
#' use.DAT<-which(Groups==Groups.use[1]|Groups==Groups.use[2])
#'
#' DAT.use<-Nuclei[use.DAT,]
#' GR.use<-Groups[use.DAT]
#'
#' traitA<-2
#'
#' mlda<-MASS::lda(GR.use~DAT.use[,traitA])
#'
#' F1<-partialF(mlda,GR.use,0)
#'
#' traitB<-1
#'
#' mlda2<-MASS::lda(GR.use~DAT.use[,c(traitA,traitB)])
#'
#' partialF(mlda2,GR.use,F1)
#'
#'
#' @export partialF
partialF <- function(m.lda, GROUP, T_pm1) {
    GRS <- unique(GROUP)
    # group sizes and error degrees of freedom
    n1 <- length(which(GROUP == GRS[1]))
    n2 <- length(which(GROUP == GRS[2]))
    v <- n1 + n2 - 2
    # number of traits in the current discriminant model
    p <- ncol(m.lda$means)
    T_p <- sum(m.lda$svd)
    # partial F: additional contribution of the newest trait, given the
    # statistic T_pm1 from the model with one fewer trait
    Fp <- (v - p + 1) * ((T_p - T_pm1)/(v + T_pm1))
    return(Fp)
} #end partialF
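# A minimal, hedged sketch of the same partial-F computation, using two iris
# species as stand-in groups (assumptions: MASS is installed; Sepal.Length is
# an arbitrary illustrative trait, not part of this package):

```r
# Sketch only: replicate the partial-F formula above for the first trait,
# where T_pm1 = 0 because no smaller model exists yet.
library(MASS)

two <- droplevels(subset(iris, Species != "virginica"))
m1  <- lda(Species ~ Sepal.Length, data = two)

v   <- nrow(two) - 2     # n1 + n2 - 2
p   <- ncol(m1$means)    # traits in the current model (here 1)
T_p <- sum(m1$svd)
F1  <- (v - p + 1) * (T_p - 0) / (v + 0)  # partial F for the first trait
```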
# Libraries ###############################################################################
library(combinat)
library(purrr)
library(ngramrr)
library(ngram)
library(gtools)
library(data.table)
library(magrittr)
library(sqldf)
library(dplyr)
library(plyr)
library(caret)
library(scales)
library(tidyr)
library(ggplot2)
library(stringr)
library(plotly)
library(dendextend)
library(abind)
library(nnet)
library(readr)
# library(naniar)
# Original Data ###########################################################################
SALI_de <- readRDS("./0_general/SALI_SIT-data_decodes_20191101.rds")
SALI <- readRDS("./0_general/SALI_SIT-data_20191101.rds")
soil_sample <- read.csv("./0_general/samples.csv")
lab <- read.csv("./0_general/labresults.csv")
lab_15n1_extra <- read.csv('./0_general/15N1.csv') # Peter supplied on 27/May/2020 email
# Subset data sets from SALI###############################################################
sit <- SALI$SIT
sit$CREATION_DATE <- as.character(sit$CREATION_DATE)
sit$LAST_UPDATE_DATE <- as.character(sit$LAST_UPDATE_DATE)
sit[sit == "-"] <- NA
# sit <- sit %>%
# mutate_at(vars(ELEM_TYPE_CODE), na_if, "-")
obs <- SALI$OBS
obs <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,obs,min) %>%
left_join(.,obs,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
obs$CREATION_DATE <- as.character(obs$CREATION_DATE)
obs$LAST_UPDATE_DATE <- as.character(obs$LAST_UPDATE_DATE)
obs$OBS_DATE <- as.character(obs$OBS_DATE)
obs[obs == "-"] <- NA
names(obs)[10] = 'OBS_LITH_CODE'
# ODS
ods <- SALI$ODS
ods <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,ods,min) %>%
left_join(.,ods,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
ods <- aggregate(DISTURB_NO~PROJECT_CODE+SITE_ID+OBS_NO,ods,min) %>%
left_join(.,ods,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","DISTURB_NO"))
ods$CREATION_DATE <- as.character(ods$CREATION_DATE)
ods$LAST_UPDATE_DATE <- as.character(ods$LAST_UPDATE_DATE)
ods[ods == "-"] <- NA
nrow(unique(ods[,1:2]))
#SPC
spc <- obs[which(obs$TAX_UNIT_TYPE == 'SPC' & !is.na(obs$TAX_UNIT_CODE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','TAX_UNIT_CODE')]
names(spc)[4] <- "SPC"
length(unique(spc$SPC))
nrow(unique(spc[,c(1:3)]))
#spc$SPC <- spc$TAX_UNIT_CODE
# use mean value if multiple values available for one horizon
#spc<- aggregate(SPC ~ PROJECT_CODE+SITE_ID+HORIZON_NO,SPC,mean)
osc <- SALI$OSC
osc <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,osc,min) %>%
left_join(.,osc,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
osc <- aggregate(SURF_COND_NO~PROJECT_CODE+SITE_ID+OBS_NO,osc,min) %>%
left_join(.,osc,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","SURF_COND_NO"))
osc$CREATION_DATE <- as.character(osc$CREATION_DATE)
osc$LAST_UPDATE_DATE <- as.character(osc$LAST_UPDATE_DATE)
osc[osc == "-"] <- NA
# nrow(unique(osc[,1:2])) #91365
# nrow(unique(osc[,1:3])) #91365
# nrow(unique(osc[,1:4])) #91365
#vst
vst <- SALI$VST[which(SALI$VST$VEG_STRATA_CODE=='T'),]
vst <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,vst,min) %>%
left_join(.,vst,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
vst <- aggregate(VEG_COMM_NO~PROJECT_CODE+SITE_ID+OBS_NO,vst,min) %>%
left_join(.,vst,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","VEG_COMM_NO"))
vst$CREATION_DATE <- as.character(vst$CREATION_DATE)
vst$LAST_UPDATE_DATE <- as.character(vst$LAST_UPDATE_DATE)
#
vsp <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,SALI$VSP,min) %>%
left_join(.,SALI$VSP,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
vsp <- aggregate(VEG_COMM_NO~PROJECT_CODE+SITE_ID+OBS_NO,vsp,min) %>%
left_join(.,vsp,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","VEG_COMM_NO"))
vsp$CREATION_DATE <- as.character(vsp$CREATION_DATE)
vsp$LAST_UPDATE_DATE <- as.character(vsp$LAST_UPDATE_DATE)
a = vsp[which(vsp$VEG_STRATA_CODE=='T' & vsp$VEG_STRATA_SPEC_NO==1),]
vsp <- vsp[which(vsp$VEG_STRATA_CODE=='L' & vsp$VEG_STRATA_SPEC_NO ==1),] %>%
anti_join(.,vsp[which(vsp$VEG_STRATA_CODE=='T'),],
by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","VEG_COMM_NO")) %>%
full_join(.,vsp[which(vsp$VEG_STRATA_CODE=='T' & vsp$VEG_STRATA_SPEC_NO==1),])
#anti_join(SALI$VSP,.,],
#by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","VEG_COMM_NO"))
vsp[vsp == "-"] <- NA
ggplot(vsp, aes(x = VEG_SPEC_CODE)) + geom_bar()
nrow(unique(vsp[,1:4]))
# obs_1 <- obs[, c('PROJECT_CODE','SITE_ID','OBS_NO','TAX_UNIT_TYPE','TAX_UNIT_CODE')]
# length(unique(obs_1[which(obs_1$TAX_UNIT_TYPE =='SPC'),c("TAX_UNIT_CODE")])) #1862
# #number of soil data without a SPC value
# nrow(unique(obs[which(obs$TAX_UNIT_TYPE == 'SPC' & is.na(obs$TAX_UNIT_CODE)),1:2]))
#OCL Soil classification
ocl <- SALI$OCL
ocl <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,ocl,min) %>%
left_join(.,ocl,by=c("PROJECT_CODE" = "PROJECT_CODE" , "SITE_ID" = "SITE_ID","OBS_NO"="OBS_NO"))
# eliminate duplicates with different SOIL_CLASS_NO; keep only the minimum SOIL_CLASS_NO (most are 1)
ocl <- aggregate(SOIL_CLASS_NO~PROJECT_CODE+SITE_ID,ocl,min) %>%
left_join(.,ocl,by=c("PROJECT_CODE" , "SITE_ID","SOIL_CLASS_NO"))
ocl <- aggregate(ASC_ORD~PROJECT_CODE+SITE_ID,ocl,max) %>%
left_join(.,ocl,by=c("PROJECT_CODE" , "SITE_ID","ASC_ORD"))
ocl$CREATION_DATE <- as.character(ocl$CREATION_DATE)
ocl$LAST_UPDATE_DATE <- as.character(ocl$LAST_UPDATE_DATE)
ocl[ocl == "-"] <- NA
#
# ocf
ocf <- SALI$OCF
ocf <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,ocf,min) %>%
left_join(.,ocf,by=c("PROJECT_CODE" = "PROJECT_CODE" , "SITE_ID" = "SITE_ID","OBS_NO"="OBS_NO"))
# eliminate duplicates with different SURF_FRAG_NO; keep only the minimum SURF_FRAG_NO
ocf <- aggregate(SURF_FRAG_NO~PROJECT_CODE+SITE_ID,ocf,min) %>%
left_join(.,ocf,by=c("PROJECT_CODE" , "SITE_ID","SURF_FRAG_NO"))
ocf$CREATION_DATE <- as.character(ocf$CREATION_DATE)
ocf$LAST_UPDATE_DATE <- as.character(ocf$LAST_UPDATE_DATE)
ocf[ocf == "-"] <- NA
names(ocf)[6]='OCF_LITH_CODE'
names(ocf)[5]='OCF_ABUNDANCE'
nrow(unique(ocf[,1:2]))
# HMT ###
hmt = SALI$HMT
hmt = aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hmt,min) %>%
left_join(.,hmt,by=c("PROJECT_CODE" = "PROJECT_CODE" , "SITE_ID" = "SITE_ID","OBS_NO"="OBS_NO"))
hmt = aggregate(MOTT_NO~PROJECT_CODE+SITE_ID,hmt,min) %>%
left_join(.,hmt,by =c("PROJECT_CODE" , "SITE_ID",'MOTT_NO' ))
nrow(unique(hmt[,1:3]))
# HOR
hor <- SALI$HOR
hor <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hor,min) %>%
left_join(.,hor,by=c("PROJECT_CODE" = "PROJECT_CODE" , "SITE_ID" = "SITE_ID","OBS_NO"="OBS_NO"))
hor$CREATION_DATE <- as.character(hor$CREATION_DATE)
hor$LAST_UPDATE_DATE <- as.character(hor$LAST_UPDATE_DATE)
hor[hor == "-"] <- NA
# hor$HORIZON_NAME = gsub("[[:punct:]]+",NA,hor$HORIZON_NAME , fixed = TRUE)
# hor$HORIZON_NAME = gsub("[[:punct:]]+",'',hor$HORIZON_NAME )
hor$HORIZON_NAME = gsub("[+?]",'',hor$HORIZON_NAME)  # strip '+' and '?' markers
hor$HORIZON_NAME = gsub("b23",'B23',hor$HORIZON_NAME)
hor$HORIZON_NAME = gsub("a1",'A1',hor$HORIZON_NAME)
#unique(hor$HORIZON_NAME)
length(unique(hor$HORIZON_NAME))
# hor$HORIZON_NAME = gsub("[O][0-9]","O",hor$HORIZON_NAME)
#unique(hor$HORIZON_NAME[grep("b1",hor$HORIZON_NAME)])
hor <- hor[grep("S|UB",hor$HORIZON_NAME),] %>%
distinct(PROJECT_CODE,SITE_ID) %>%
anti_join(hor,.,by = c('PROJECT_CODE','SITE_ID'))
# HORIZON_NAME split
hor_split = separate(hor,HORIZON_NAME,c('HOR_PREFIX','HORIZON_NAME'),sep="(?<=[0-9]|[0-9][0-9]|^)(?=[A-Za-z]|$)",
remove = T,convert = T,extra = "merge",fill = "left")
#unique(hor_split$HORIZON_NAME[grep("O",hor_split$HORIZON_NAME)])
hor_split$HORIZON_NAME = gsub("Ap",'AP',hor_split$HORIZON_NAME)
hor_split$HORIZON_NAME = gsub("AB|A/B",'A3',hor_split$HORIZON_NAME)
hor_split$HORIZON_NAME = gsub("B3/2D",'B3',hor_split$HORIZON_NAME)
hor_split$HORIZON_NAME = gsub("BCC",'BC',hor_split$HORIZON_NAME)
hor_split$HORIZON_NAME = gsub("C/B",'C',hor_split$HORIZON_NAME)
#hor_split$HORIZON_NAME = gsub("UB",'U',hor_split$HORIZON_NAME)
# hor_split$HORIZON_NAME = gsub("Ap|AP|M",'A1',hor_split$HORIZON_NAME)
# hor_split$HORIZON_NAME = gsub("Bp",'B1',hor_split$HORIZON_NAME)
# hor_split$HORIZON_NAME = gsub("BC|BCC|BD",'B3',hor_split$HORIZON_NAME)
hor_split= separate(hor_split,HORIZON_NAME,c('HOR_MASTER','HORIZON_NAME'),
sep="((?<=([AB]([0-9]|\\b))|[OPCDRMSU])(?=[0-9a-z]|$))|((?<=[ABOP])(?=[a-z]))",
remove = T,convert = T,extra = "merge",fill = "left")
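# The lookaround-based splitting above can be sanity-checked on a single
# label. A base-R sketch (illustrative regex, not the exact tidyr::separate
# pattern used in this script):

```r
# Sketch: split a SALI-style horizon label such as "2B21t" into
# prefix / master / sub-horizon / suffix with one base-R regex.
split_horizon <- function(x) {
  m <- regexec("^([0-9]*)([A-Z]+[0-9]?)([0-9]*)([a-z]*)$", x)
  parts <- regmatches(x, m)[[1]][-1]
  setNames(as.list(parts),
           c("HOR_PREFIX", "HOR_MASTER", "HOR_SUBHOR", "HOR_SUFFIX"))
}
split_horizon("2B21t")  # prefix "2", master "B2", sub-horizon "1", suffix "t"
```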
sort(unique(hor_split$HOR_MASTER))
length(unique(hor_split$HOR_MASTER))
#hor_split$HOR_MASTER[hor_split$HOR_MASTER %in% c('O','P')] = "GRP1"
# hor_split$HOR_MASTER = gsub("O|P",'GRP1',hor_split$HOR_MASTER)
# hor_split$HOR_MASTER = gsub("M|A1|AP|A",'GRP2',hor_split$HOR_MASTER)
# hor_split$HORIZON_NAME = gsub("A3|B1|AB|A/B",'GRP4',hor_split$HORIZON_NAME)
# hor_split$HORIZON_NAME = gsub("B3|BC|C|D",'A3',hor_split$HORIZON_NAME)
# hor_split$HOR_MASTER = gsub("B/C|A/C|C/B|A/B|C/B2|A/B2|A/B1|B/A|C/B3|AA|C/B1|UB2",'OTHERS',
# hor_split$HOR_MASTER)
# hor_split$HOR_MASTER = gsub("[A][4-9]",'A_OTHERS',hor_split$HOR_MASTER)
# hor_split$HOR_MASTER = gsub("[B][4-9]",'B_OTHERS',hor_split$HOR_MASTER)
# hor_split$HOR_MASTER = gsub("[P][3-9]",'P_OTHERS',hor_split$HOR_MASTER)
a=unique(hor_split[,c('HOR_PREFIX','HOR_MASTER','HORIZON_NAME')])
a[grep('[A-Z]',a$HORIZON_NAME),]
unique(a$HOR_MASTER)
hor_split= separate(hor_split,HORIZON_NAME,c('HOR_SUBHOR','HOR_SUFFIX'),sep="(?<=[0-9])(?=[a-z]|$)",
remove = T,convert = T,extra = "merge",fill = "left")
hor_split$HOR_SUFFIX[hor_split$HOR_SUFFIX == ""] = NA
unique(hor_split$HOR_MASTER)
unique(hor_split$HOR_PREFIX)
unique(hor_split$HOR_SUBHOR)
unique(hor_split$HOR_SUFFIX)
hor_spc = hor_split
hor_spc$HOR_MASTER[hor_spc$HOR_MASTER %in% c('O','P','AO')] = "GRP1"
hor_spc$HOR_MASTER[hor_spc$HOR_MASTER %in% c('M','A1','AP','A','A/C','A0','AA','AC')] = "GRP2"
hor_spc$HOR_MASTER[hor_spc$HOR_MASTER %in% c('A3','B1','AB','A/B','A4','B/A')] = "GRP4"
hor_spc$HOR_MASTER[hor_spc$HOR_MASTER %in% c('B','B2')] = "GRP5"
hor_spc$HOR_MASTER[hor_spc$HOR_MASTER %in% c('B3','BC','B/C','C','D','B4','B9','BD')] = "GRP6"
names(hor_spc)[11]='SPC_HOR_MASTER'
# c= separate(b,a,c('c','a'),sep = '(?<=[Ap][1-2]|[B][1-2])(?=[1-9])',
#remove = T,convert = T,extra = "merge",fill = "left")
# hco
# Map (numeric) Munsell hue/value/chroma to a broad colour class
colour_convert <- function(hue,value,chroma) {
color = c()
for ( i in 1 : length(hue)) {
if (hue[i]<5) {
if (value[i]>5) {
if (chroma[i] <= 3) {color[i] = 'GREY'}
else {color[i] = 'YELLOW'}
}
else if (chroma[i] <= 2) {
if (value[i]>3) {color[i] = 'GREY'}
else {color[i] = 'BLACK'}
}
else {color[i] = 'BROWN'}
}
else if (chroma[i] <= 2) {
if (value[i]>=4) {color[i] = 'GREY'}
else {color[i] = 'BLACK'}
}
else {color[i] = 'RED'}
}
return(color)
}
hco = SALI$HCO
hco = hco[which(grepl('YR',hco$COLOUR_CODE)),]
hco = separate(hco,COLOUR_CODE,c('HUE','COLOUR_CODE'),sep='(?<=[0-9])(?=[A-Za-z])')
hco$COLOUR_CODE = gsub('YR|/','',hco$COLOUR_CODE)
hco = separate(hco,COLOUR_CODE,c('VALUE','CHROMA'),sep='(?<=[0-9])(?=[0-9])')
# separate() leaves character columns, so convert before the numeric comparisons
hco$COLOUR_CLASS = colour_convert(as.numeric(hco$HUE),as.numeric(hco$VALUE),as.numeric(hco$CHROMA))
hco = aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hco,min) %>%
left_join(.,hco,by=c("PROJECT_CODE" = "PROJECT_CODE" , "SITE_ID" = "SITE_ID","OBS_NO"="OBS_NO"))
hco = aggregate(HOR_COL_NO~PROJECT_CODE+SITE_ID,hco,min) %>%
left_join(.,hco,by =c("PROJECT_CODE" , "SITE_ID",'HOR_COL_NO' ))
#ggplot(data.frame(hco$COLOUR_CLASS), aes(x=hco$COLOUR_CLASS)) + geom_bar()
#hcu
hcu <- SALI$HCU
hcu <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hcu,min) %>%
left_join(.,hcu,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
hcu <- aggregate(CUTAN_NO~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,hcu,min) %>%
left_join(.,hcu,by=c("PROJECT_CODE" ,"SITE_ID","OBS_NO",'HORIZON_NO',"CUTAN_NO"))
hcu$CREATION_DATE <- as.character(hcu$CREATION_DATE)
hcu$LAST_UPDATE_DATE <- as.character(hcu$LAST_UPDATE_DATE)
hcu[hcu == "-"] <- NA
# hcu_confuse <- sqldf('SELECT a.PROJECT_CODE, a.SITE_ID, a.OBS_NO, a.HORIZON_NO,a.CUTAN_NO
# FROM hcu a
# JOIN hcu b
# on b.PROJECT_CODE = a.PROJECT_CODE
# AND b.SITE_ID = a.SITE_ID
# AND b.OBS_NO = a.OBS_NO
# AND b.HORIZON_NO = a.HORIZON_NO
# AND b.CUTAN_NO <> a.CUTAN_NO
# ORDER BY a.PROJECT_CODE,a.SITE_ID')
hst <- SALI$HST
hst <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hst,min) %>%
left_join(.,hst,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
hst <- aggregate(STRUCT_NO~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,hst,min) %>%
left_join(.,hst,by=c("PROJECT_CODE" ,"SITE_ID","OBS_NO",'HORIZON_NO',"STRUCT_NO"))
hst$CREATION_DATE <- as.character(hst$CREATION_DATE)
hst$LAST_UPDATE_DATE <- as.character(hst$LAST_UPDATE_DATE)
hst[hst == "-"] <- NA
hsg <- SALI$HSG
hsg <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hsg,min) %>%
left_join(.,hsg,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
hsg <- aggregate(SEG_NO~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,hsg,min) %>%
left_join(.,hsg,by=c("PROJECT_CODE" ,"SITE_ID","OBS_NO",'HORIZON_NO',"SEG_NO"))
names(hsg)[6]='HSG_ABUNDANCE'
hsg$CREATION_DATE <- as.character(hsg$CREATION_DATE)
hsg$LAST_UPDATE_DATE <- as.character(hsg$LAST_UPDATE_DATE)
hsg[hsg == "-"] <- NA
fts <- SALI$FTS[which(SALI$FTS$TEST_TYPE == 'PH' & !is.na(SALI$FTS$VALUE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','TEST_NO','VALUE')]
names(fts)[6] <- "FTS_PH"
fts$FTS_PH <-as.numeric(fts$FTS_PH)
# use mean value if multiple values available for one horizon
fts<- aggregate(FTS_PH ~ PROJECT_CODE+SITE_ID+HORIZON_NO,fts,mean)
# lab_15n1 <- lab[which(lab$LAB_METH_CODE == '15N1' & !is.na(lab$NUMERIC_VALUE)),
# c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO','LAB_METH_CODE','NUMERIC_VALUE')] %>%
# aggregate(OBS_NO~PROJECT_CODE+SITE_ID,.,min) %>%
# left_join(.,lab[which(lab$LAB_METH_CODE == '15N1' & !is.na(lab$NUMERIC_VALUE)),
# c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO',
# 'LAB_METH_CODE','NUMERIC_VALUE')],by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
# # nrow(unique(lab_15n1[,1:2]))
# names(lab_15n1)[7] <- "VALUE_15N1"
# # use min value(first) of the SAMPLE_NO if multiple values available for one horizon
# lab_15n1<- aggregate(SAMPLE_NO~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,lab_15n1,min) %>%
# left_join(.,lab_15n1[,c('PROJECT_CODE','SITE_ID',"OBS_NO",'HORIZON_NO','SAMPLE_NO',
# "VALUE_15N1")])
# nrow(unique(lab_15n1[,1:2]))
#add extra data
names(lab_15n1_extra)[4] <- "VALUE_15N1"
lab_15n1 <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,lab_15n1_extra,min) %>%
left_join(.,lab_15n1_extra,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
lab_15n1 <- aggregate(LD~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,lab_15n1,min) %>%
left_join(.,lab_15n1[,c('PROJECT_CODE','SITE_ID',"OBS_NO",'HORIZON_NO','LD','SAMPLE_NO',
"VALUE_15N1")])
# lab_15n1_new <- full_join(lab_15n1_test,lab_15n1)
# lab_15n1 <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,lab_15n1,min) %>%
# left_join(.,lab_15n1,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
nrow(unique(lab_15n1[,1:5]))
lab_15n1<- aggregate(SAMPLE_NO~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,lab_15n1,min) %>%
left_join(.,lab_15n1[,c('PROJECT_CODE','SITE_ID',"OBS_NO",'HORIZON_NO','SAMPLE_NO',
"VALUE_15N1")])
lab_15n1<- aggregate(VALUE_15N1~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,lab_15n1,min) #%>%
#left_join(.,lab_15n1[,c('PROJECT_CODE','SITE_ID',"OBS_NO",'HORIZON_NO','SAMPLE_NO',
# "VALUE_15N1")])
nrow(unique(lab_15n1[,1:2]))
#lab_15n1_dup <- lab_15n1[which(duplicated(lab_15n1[,1:5]) |
# duplicated(lab_15n1[,1:5], fromLast = TRUE)),]
# write.csv(lab_15n1_dup,file =paste("C:/Users/lzccn/iCloudDrive/DATA SCIENCE/DATA7703
#Machine Learing/Home Works/lab_15n1_dup",
# ".csv",sep = " "))
#nrow(unique(fts[,1:3]))
lab_13c1_fe <- lab[which(lab$LAB_METH_CODE == '13C1_Fe' & !is.na(lab$NUMERIC_VALUE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO','LAB_CODE','NUMERIC_VALUE')]
names(lab_13c1_fe)[7] <- "VALUE_13C1_Fe"
# use mean value if multiple values available for one horizon
lab_13c1_fe<- aggregate(VALUE_13C1_Fe~PROJECT_CODE+SITE_ID+HORIZON_NO,lab_13c1_fe,mean)
lab_4a1 <- lab[which(lab$LAB_METH_CODE == '4A1' & !is.na(lab$NUMERIC_VALUE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO','LAB_CODE','NUMERIC_VALUE')]
names(lab_4a1)[7] <- "VALUE_4A1"
# use mean value if multiple values available for one horizon
lab_4a1<- aggregate(VALUE_4A1~PROJECT_CODE+SITE_ID+HORIZON_NO,lab_4a1,mean)
lab_2z2_clay <- lab[which(lab$LAB_METH_CODE == '2Z2_Clay' & !is.na(lab$NUMERIC_VALUE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO','LAB_CODE','NUMERIC_VALUE')]
names(lab_2z2_clay)[7] <- "VALUE_2Z2_Clay"
# use mean value if multiple values available for one horizon
lab_2z2_clay<- aggregate(VALUE_2Z2_Clay~PROJECT_CODE+SITE_ID+HORIZON_NO,lab_2z2_clay,mean)
lab_6b_6a <- lab[which(lab$LAB_METH_CODE %in% c('6A1','6B2a','6B2b','6B2c','6B4') & !is.na(lab$NUMERIC_VALUE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO','LAB_CODE','NUMERIC_VALUE')]
names(lab_6b_6a)[7] <- "VALUE_6B_6A"
# use mean value if multiple values available for one horizon
lab_6b_6a<- aggregate(VALUE_6B_6A~PROJECT_CODE+SITE_ID+HORIZON_NO,lab_6b_6a,mean)
# nrow(unique(lab_15n1[,1:2])) #91365
# nrow(unique(lab_15n1[,1:3])) #91365
# nrow(unique(lab_15n1[,c(1,2,3,4,5)])) #91365
# nrow(unique(lab_15n1[,c(1,2,3,4)]))
# Empty rate############################################################
# empty rate of a dataframe
empty_rate <- function(x) {
rate = data.frame('name'=NA,'EMPTY_RATE'=0, 'EMPTY_#'=0, 'NON_EMPTY_#'=0, "TOTAL"=0)
n = nrow(x)
d = length(x)
for ( i in 1: d) {
na = sum(is.na(x[[i]]) )
s = na/n
rate[i,] = list(names(x[i]),s,na, n-na, n )
}
rate = rate[order(rate$EMPTY_RATE,decreasing = T),]
na_r = sum(rowSums(is.na(x)) != 0)
rate = rbind(rate,list('By_rows',na_r/n, na_r, n-na_r, n))
na_t = sum(is.na(x))
rate = rbind(rate,list('Total',na_t/(n*d),na_t,n*d-na_t,n*d ))
rate$EMPTY_RATE = label_percent()(rate$EMPTY_RATE)
return(rate)
}
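# The per-column and per-row NA rates computed by empty_rate() can be
# cross-checked with vectorised one-liners (toy data frame, purely
# illustrative):

```r
# Sketch: idiomatic equivalents of the loop-based NA bookkeeping above.
toy <- data.frame(a = c(1, NA, 3), b = c(NA, NA, "x"))

col_rate <- colMeans(is.na(toy))           # per-column empty rate
row_rate <- mean(rowSums(is.na(toy)) > 0)  # share of rows with any NA
tot_rate <- mean(is.na(toy))               # overall empty rate
```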
# empty_rate = data.frame('name'=NA,'RATE'=0)
# s1=s2=0
# for ( i in 1: length(SALI)) {
# s = sum(is.na(SALI[[i]])) /(length(SALI[[i]])*nrow(SALI[[i]]))
# s1 = s1+sum(is.na(SALI[[i]]))
# s2 = s2+(length(SALI[[i]])*nrow(SALI[[i]]))
# # s = label_percent()(s)
# # s = format(s,digits = 3,nsmall = 2)
# empty_rate[i,] = list(names(SALI[i]),s )
# #print(c(names(SALI[i]),sum(is.na(SALI[[i]]))/(length(SALI[[i]])*nrow(SALI[[i]]))))
# }
#
# empty_rate = empty_rate[order(empty_rate$RATE,decreasing = T),]
# empty_rate$RATE = label_percent()(empty_rate$RATE)
# empty_rate = rbind(empty_rate,list('Total',s1/s2))
# empty_rate
# sum(is.na(n_sit))/(length(n_sit)*nrow(n_sit))
# n_hor = SALI$HOR[,-c(24:27)]
# sum(is.na(n_hor))/(length(n_hor)*nrow(n_hor))
# nrow(n_hor[which(n_hor$HORIZON_NAME == '-'),])
# SALI$OCL[which(SALI$OCL$PROJECT_CODE == 'BAS' & SALI$OCL$SITE_ID == 28),]
# sum(is.na(SALI$HCU))/(length(SALI$HCU)*nrow(SALI$HCU))
# sort(empty_rate$RATE, decreasing = T)
# format(empty_rate$RATE,nsmall = 2)
# empty_rate_kmean = data.frame('name'=NA,'RATE'=0)
# for ( i in 5: length(kmean_dat2)) {
# s = sum(is.na(kmean_dat2[[i]])/length(kmean_dat2[[i]]))
# # s = label_percent()(s)
# # s = format(s,digits = 3,nsmall = 2)
# empty_rate_kmean[(i-4),] = list(names(kmean_dat2)[i], s)
# #print(c(names(SALI[i]),sum(is.na(SALI[[i]]))/(length(SALI[[i]])*nrow(SALI[[i]]))))
# }
# empty_rate_kmean = empty_rate_kmean[order(empty_rate_kmean$RATE,decreasing = T),]
# empty_rate_kmean = rbind(empty_rate_kmean,
# list('Total',
# sum(is.na(kmean_dat2[,-c(1:4)]))/((length(kmean_dat2)-4)*nrow(kmean_dat2))))
# empty_rate_kmean$RATE = label_percent(0.01 )(empty_rate_kmean$RATE)
# empty_rate_kmean
# #sum(is.na(kmean_dat2[[22]])/length(kmean_dat2[[22]]))
unique(SALI$FTS[which(SALI$FTS$TEST_TYPE=="PH"),'VALUE'])
length(unique(SALI$FTS[which(SALI$FTS$TEST_TYPE=="PH"),'VALUE']))
unique(SALI$HOR$UPPER_DEPTH)
length(unique(SALI$HOR$UPPER_DEPTH))
unique(lab[which(lab$LAB_METH_CODE=="13C1_Fe"),'NUMERIC_VALUE'])
length(unique(lab[which(lab$LAB_METH_CODE=="13C1_Fe"),'NUMERIC_VALUE']))
unique(lab[which(lab$LAB_METH_CODE %in% c('6A1','6B2a','6B2b','6B2c','6B4')),'NUMERIC_VALUE'])
length(unique(lab[which(lab$LAB_METH_CODE %in% c('6A1','6B2a','6B2b','6B2c','6B4')),'NUMERIC_VALUE']))
unique(lab[which(lab$LAB_METH_CODE %in% c('15N1')),'NUMERIC_VALUE'])
length(unique(lab[which(lab$LAB_METH_CODE %in% c('15N1')),'NUMERIC_VALUE']))
unique(lab[which(lab$LAB_METH_CODE %in% c('4A1')),'NUMERIC_VALUE'])
length(unique(lab[which(lab$LAB_METH_CODE %in% c('4A1')),'NUMERIC_VALUE']))
unique(lab[which(lab$LAB_METH_CODE %in% c('2Z2_Clay')),'NUMERIC_VALUE'])
length(unique(lab[which(lab$LAB_METH_CODE %in% c('2Z2_Clay')),'NUMERIC_VALUE']))
# Asc_train full data set ############################################################
asc_train <-
hor[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,hor[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT','SOIL_WATER_STAT')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','DESIGN_MASTER',
'HORIZON_NAME','TEXTURE_CODE')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD')]) %>%
anti_join(.,.[which(.$ASC_ORD == '-' | is.na(.$ASC_ORD) #|
#.$DESIGN_MASTER == '-' | is.na(.$DESIGN_MASTER)|
#.$HORIZON_NAME == '-' | is.na(.$HORIZON_NAME)|
#.$BOUND_DISTINCT == '-' | is.na(.$BOUND_DISTINCT)#|
#.$TEXTURE_CODE == '-'|is.na(.$TEXTURE_CODE)|
#.$BOUND_DISTINCT == '-'|is.na(.$BOUND_DISTINCT)|
#.$SOIL_WATER_STAT == '-'|is.na(.$SOIL_WATER_STAT)|
#.$ELEM_TYPE_CODE == '-'|is.na(.$ELEM_TYPE_CODE)#|
#.$CUTAN_TYPE == '-'|is.na(.$CUTAN_TYPE)|
#.$PEDALITY_TYPE == '-'|is.na(.$PEDALITY_TYPE)#|
#.$PEDALITY_GRADE == '-'|is.na(.$PEDALITY_GRADE)|
#.$DRAINAGE == '-'|is.na(.$DRAINAGE)|
#.$STATUS == '-'|is.na(.$STATUS)
#.$NATURE == '-'|is.na(.$NATURE)
),],
by = c('PROJECT_CODE','SITE_ID'))
asc_train_empty_rate_F = empty_rate(asc_train[,5:24])
asc_train_s1 <- asc_train[1:12,]
# asc_tr_1 <- asc_train %>% distinct(PROJECT_CODE,SITE_ID,OBS_NO,
# DRAINAGE,ELEM_TYPE_CODE,STATUS,ASC_ORD,)
# asc_tr_1_empty_rate = empty_rate(asc_tr_1[,4:6])
#
# z1 <- split(asc_train, list(asc_train$PROJECT_CODE,asc_train$SITE_ID),drop = T)
# z2 <- as.list(NA)
# for (i in 1 : length(z1)){
# z2[[i]] <- subset(z1[[i]],select = c(6:15,18:24))
# }
# asc_tr_1$HOR <- z2
# # asc_tr_1 <- filter(dg,ASC_ORD !='TE')
#
# asc_tr_s1 <- asc_tr_1[sample(nrow(asc_tr_1),10),]
#asc_train_s1 <- asc_train[sample(nrow(asc_train),10),]
# ASC_train_split Training set with split HORIZON_NAME ############################################
asc_train_split <-
hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT','SOIL_WATER_STAT')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_PREFIX','HOR_MASTER','HOR_SUBHOR','HOR_SUFFIX', 'TEXTURE_CODE')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD')]) %>%
anti_join(.,.[which(.$ASC_ORD == '-' | is.na(.$ASC_ORD) #|
#.$DESIGN_MASTER == '-' | is.na(.$DESIGN_MASTER)|
#.$HORIZON_NAME == '-' | is.na(.$HORIZON_NAME)|
#.$BOUND_DISTINCT == '-' | is.na(.$BOUND_DISTINCT)#|
#.$TEXTURE_CODE == '-'|is.na(.$TEXTURE_CODE)|
#.$BOUND_DISTINCT == '-'|is.na(.$BOUND_DISTINCT)|
#.$SOIL_WATER_STAT == '-'|is.na(.$SOIL_WATER_STAT)|
#.$ELEM_TYPE_CODE == '-'|is.na(.$ELEM_TYPE_CODE)#|
#.$CUTAN_TYPE == '-'|is.na(.$CUTAN_TYPE)|
#.$PEDALITY_TYPE == '-'|is.na(.$PEDALITY_TYPE)#|
#.$PEDALITY_GRADE == '-'|is.na(.$PEDALITY_GRADE)|
#.$DRAINAGE == '-'|is.na(.$DRAINAGE)|
#.$STATUS == '-'|is.na(.$STATUS)
#.$NATURE == '-'|is.na(.$NATURE)
),],
by = c('PROJECT_CODE','SITE_ID'))
# remove TE from the dataset
asc_train_split = filter(asc_train_split,ASC_ORD !='TE')
write.csv(asc_train_split,file ="asc_train_split.csv",row.names = FALSE)
asc_train_empty_rate = empty_rate(asc_train_split[,5:24])
# ASC_train_sub Training set with split HORIZON_NAME ############################################
asc_train_sub <-
hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT','SOIL_WATER_STAT')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_PREFIX','HOR_MASTER','HOR_SUBHOR','HOR_SUFFIX', 'TEXTURE_CODE')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hco[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','COLOUR_CLASS')]) %>%
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD','SUBORD_ASC_CODE',
'GREAT_GROUP_ASC_CODE','SUBGROUP_ASC_CODE')]) %>%
anti_join(.,.[which(.$ASC_ORD == '-' | is.na(.$ASC_ORD) #|
#.$DESIGN_MASTER == '-' | is.na(.$DESIGN_MASTER)|
#.$HORIZON_NAME == '-' | is.na(.$HORIZON_NAME)|
#.$BOUND_DISTINCT == '-' | is.na(.$BOUND_DISTINCT)#|
#.$TEXTURE_CODE == '-'|is.na(.$TEXTURE_CODE)|
#.$BOUND_DISTINCT == '-'|is.na(.$BOUND_DISTINCT)|
#.$SOIL_WATER_STAT == '-'|is.na(.$SOIL_WATER_STAT)|
#.$ELEM_TYPE_CODE == '-'|is.na(.$ELEM_TYPE_CODE)#|
#.$CUTAN_TYPE == '-'|is.na(.$CUTAN_TYPE)|
#.$PEDALITY_TYPE == '-'|is.na(.$PEDALITY_TYPE)#|
#.$PEDALITY_GRADE == '-'|is.na(.$PEDALITY_GRADE)|
#.$DRAINAGE == '-'|is.na(.$DRAINAGE)|
#.$STATUS == '-'|is.na(.$STATUS)
#.$NATURE == '-'|is.na(.$NATURE)
),],
by = c('PROJECT_CODE','SITE_ID'))
# remove TE from the dataset
asc_train_sub = filter(asc_train_sub,ASC_ORD !='TE')
#write.csv(asc_train_sub,file ="./1_asc_mlp/asc_train_sub.csv",row.names = FALSE)
write_rds(asc_train_sub,"./1_asc_mlp/asc_train_sub.rds")
#asc_train_empty_rate = empty_rate(asc_train_split[,5:24])
# Predict dataset####################################################################################
asc_predict_split <-
hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT','SOIL_WATER_STAT')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_PREFIX','HOR_MASTER','HOR_SUBHOR','HOR_SUFFIX', 'TEXTURE_CODE')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD')]) # %>%
#anti_join(.,.[which(.$ASC_ORD != '-' | !is.na(.$ASC_ORD) #|
#.$DESIGN_MASTER == '-' | is.na(.$DESIGN_MASTER)|
#.$HORIZON_NAME == '-' | is.na(.$HORIZON_NAME)|
#.$BOUND_DISTINCT == '-' | is.na(.$BOUND_DISTINCT)#|
#.$TEXTURE_CODE == '-'|is.na(.$TEXTURE_CODE)|
#.$BOUND_DISTINCT == '-'|is.na(.$BOUND_DISTINCT)|
#.$SOIL_WATER_STAT == '-'|is.na(.$SOIL_WATER_STAT)|
#.$ELEM_TYPE_CODE == '-'|is.na(.$ELEM_TYPE_CODE)#|
#.$CUTAN_TYPE == '-'|is.na(.$CUTAN_TYPE)|
#.$PEDALITY_TYPE == '-'|is.na(.$PEDALITY_TYPE)#|
#.$PEDALITY_GRADE == '-'|is.na(.$PEDALITY_GRADE)|
#.$DRAINAGE == '-'|is.na(.$DRAINAGE)|
#.$STATUS == '-'|is.na(.$STATUS)
#.$NATURE == '-'|is.na(.$NATURE)
#),]
#by = c('PROJECT_CODE','SITE_ID'))
nrow(asc_predict_split[which(asc_predict_split$ASC_ORD == 'TE'),])
#asc_predict_split = filter(asc_predict_split,ASC_ORD !='TE')
# Normal distribution test ###############################################################################
a = aggregate(HORIZON_NO~PROJECT_CODE+SITE_ID, data = spc_kmean_dat ,FUN = length)
plt <- ggplot(a) + geom_bar(aes(x=PROJECT_CODE, y=HORIZON_NO), stat="identity")
print(plt)
# histogram() is from lattice, which is not loaded; use base hist() instead
hist(a$HORIZON_NO,xlim = c(0,20),breaks = 30)
nrow(a[a$HORIZON_NO > 6,])
nrow(a[a$HORIZON_NO < 6,])
nrow(a[a$HORIZON_NO > 6,])/nrow(a)
median(a$HORIZON_NO)
# Create the function.
getmode <- function(v) {
uniqv <- unique(v)
uniqv[which.max(tabulate(match(v, uniqv)))]
}
getmode(a$HORIZON_NO)
nrow(a[a$HORIZON_NO == 4,])
# SPC training dataset##############################################################
spc_train <-
hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','OBS_LITH_CODE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID', 'REL_MOD_SLOPE_CLASS')]) %>%
#left_join(.,ods[,c('PROJECT_CODE','SITE_ID', 'DISTURB_TYPE')]) %>%
#left_join(.,vst[,c('PROJECT_CODE','SITE_ID','OBS_NO','GROWTH_FORM','HEIGHT_CLASS','COVER_CLASS')]) %>%
left_join(.,vsp[,c('PROJECT_CODE','SITE_ID','OBS_NO','VEG_SPEC_CODE')]) %>%
left_join(.,hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SPC_HOR_MASTER')]) %>%
left_join(.,hco[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','COLOUR_CLASS')]) %>%
left_join(.,hmt[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','MOTT_TYPE')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','TEXTURE_CODE')]) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,ocf[,c('PROJECT_CODE','SITE_ID','OBS_NO','OCF_LITH_CODE')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','BOUND_DISTINCT')]) %>%
left_join(.,spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','SPC')]) #%>%
# anti_join(.,.[which(.$SPC == '-' | is.na(.$SPC) #|
#.$DESIGN_MASTER == '-' | is.na(.$DESIGN_MASTER)|
#.$HORIZON_NAME == '-' | is.na(.$HORIZON_NAME)|
#.$BOUND_DISTINCT == '-' | is.na(.$BOUND_DISTINCT)#|
#.$TEXTURE_CODE == '-'|is.na(.$TEXTURE_CODE)|
#.$BOUND_DISTINCT == '-'|is.na(.$BOUND_DISTINCT)|
#.$SOIL_WATER_STAT == '-'|is.na(.$SOIL_WATER_STAT)|
#.$ELEM_TYPE_CODE == '-'|is.na(.$ELEM_TYPE_CODE)#|
#.$CUTAN_TYPE == '-'|is.na(.$CUTAN_TYPE)|
#.$PEDALITY_TYPE == '-'|is.na(.$PEDALITY_TYPE)#|
#.$PEDALITY_GRADE == '-'|is.na(.$PEDALITY_GRADE)|
#.$DRAINAGE == '-'|is.na(.$DRAINAGE)|
#.$STATUS == '-'|is.na(.$STATUS)
#.$NATURE == '-'|is.na(.$NATURE)
# ),],
# by = c('PROJECT_CODE','SITE_ID'))
spc_train = na_if(spc_train,'-')
#ggplot(data.frame(spc_train$SPC), aes(x=spc_train$SPC)) + geom_bar()
write.csv(spc_train,file ="spc_train.csv", row.names = FALSE)
# ASC_SPC_mix Training set with split HORIZON_NAME ############################################
asc_spc <-
# base
hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','TEXTURE_CODE')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_PREFIX','HOR_SUBHOR','HOR_SUFFIX')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
#asc_ex
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'SOIL_WATER_STAT')]) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_MASTER')]) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
#spc_ex
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','OBS_LITH_CODE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID', 'REL_MOD_SLOPE_CLASS')]) %>%
#left_join(.,ods[,c('PROJECT_CODE','SITE_ID', 'DISTURB_TYPE')]) %>%
#left_join(.,vst[,c('PROJECT_CODE','SITE_ID','OBS_NO','GROWTH_FORM','HEIGHT_CLASS','COVER_CLASS')]) %>%
left_join(.,vsp[,c('PROJECT_CODE','SITE_ID','OBS_NO','VEG_SPEC_CODE')]) %>%
left_join(.,hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SPC_HOR_MASTER')]) %>%
left_join(.,hmt[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','MOTT_TYPE')]) %>%
left_join(.,ocf[,c('PROJECT_CODE','SITE_ID','OBS_NO','OCF_LITH_CODE')]) %>%
#so_ex
left_join(.,hco[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','COLOUR_CLASS')]) %>%
#mottle type in spc
#gg_ex
#sg_ex
#asc_tag
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD','SUBORD_ASC_CODE',
'GREAT_GROUP_ASC_CODE','SUBGROUP_ASC_CODE')]) %>%
#spc_tag
left_join(.,spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','SPC')]) %>%
# DELETE NA TAG DATA
anti_join(.,.[which(.$ASC_ORD == '-' | is.na(.$ASC_ORD)|
.$SUBORD_ASC_CODE == '-' | is.na(.$SUBORD_ASC_CODE)|
.$GREAT_GROUP_ASC_CODE == '-' | is.na(.$GREAT_GROUP_ASC_CODE)|
.$SUBGROUP_ASC_CODE == '-' | is.na(.$SUBGROUP_ASC_CODE)|
.$SPC == '-' | is.na(.$SPC)
),],
by = c('PROJECT_CODE','SITE_ID'))
# remove TE from the dataset
#asc_spc = filter(asc_spc,ASC_ORD !='TE')
write_rds(asc_spc,"./0_general/asc_spc.rds")
# ASC_SUB ##################################
asc_sub <-
# base
hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','TEXTURE_CODE')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_PREFIX','HOR_SUBHOR','HOR_SUFFIX')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
#asc_ex
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'SOIL_WATER_STAT')]) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_MASTER')]) %>%
#spc_ex
# left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','OBS_LITH_CODE')],) %>%
# left_join(.,sit[,c('PROJECT_CODE','SITE_ID', 'REL_MOD_SLOPE_CLASS')]) %>%
# #left_join(.,ods[,c('PROJECT_CODE','SITE_ID', 'DISTURB_TYPE')]) %>%
# #left_join(.,vst[,c('PROJECT_CODE','SITE_ID','OBS_NO','GROWTH_FORM','HEIGHT_CLASS','COVER_CLASS')]) %>%
# left_join(.,vsp[,c('PROJECT_CODE','SITE_ID','OBS_NO','VEG_SPEC_CODE')]) %>%
# left_join(.,hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SPC_HOR_MASTER')]) %>%
# left_join(.,hmt[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','MOTT_TYPE')]) %>%
# left_join(.,ocf[,c('PROJECT_CODE','SITE_ID','OBS_NO','OCF_LITH_CODE')]) %>%
#so_ex
left_join(.,hco[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','COLOUR_CLASS')]) %>%
left_join(.,hmt[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','MOTT_TYPE')]) %>%
#mottle type in spc
#gg_ex
#sg_ex
#asc_tag
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD','SUBORD_ASC_CODE',
'GREAT_GROUP_ASC_CODE','SUBGROUP_ASC_CODE')]) %>%
#spc_tag
#left_join(.,spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','SPC')]) %>%
# DELETE NA TAG DATA
anti_join(.,.[which(.$ASC_ORD == '-' | is.na(.$ASC_ORD)#|
#.$SUBORD_ASC_CODE == '-' | is.na(.$SUBORD_ASC_CODE)|
#.$GREAT_GROUP_ASC_CODE == '-' | is.na(.$GREAT_GROUP_ASC_CODE)|
#.$SUBGROUP_ASC_CODE == '-' | is.na(.$SUBGROUP_ASC_CODE)#|
#.$SPC == '-' | is.na(.$SPC)
),],
by = c('PROJECT_CODE','SITE_ID'))
# remove TE from the dataset
#asc_sub = filter(asc_sub,ASC_ORD !='TE')
nrow(unique(asc_sub[,1:2]))
write_rds(asc_sub,"./0_general/asc_sub_new.rds")
# for (i in names(SALI)) {
# assign(i,SALI[[i]])
# }
# Source: /0_general/data_and_library.R, repo Ahai-L/DATA7901_Capstone_Project_1- (R, permissive license, 45,022 bytes)
# Libraries ###############################################################################
library(combinat)
library(purrr)
library(ngramrr)
library(ngram)
library(gtools)
library(data.table)
library(magrittr)
library(sqldf)
library(plyr)  # load plyr before dplyr so plyr does not mask dplyr verbs
library(dplyr)
library(caret)
library(scales)
library(tidyr)
library(ggplot2)
library(stringr)
library(plotly)
library(dendextend)
library(abind)
library(nnet)
library(readr)
# library(naniar)
# Original Data ###########################################################################
SALI_de <- readRDS("./0_general/SALI_SIT-data_decodes_20191101.rds")
SALI <-readRDS("./0_general/SALI_SIT-data_20191101.rds")
soil_sample <- read.csv("./0_general/samples.csv")
lab <- read.csv("./0_general/labresults.csv")
lab_15n1_extra <- read.csv('./0_general/15N1.csv') # Peter supplied on 27/May/2020 email
# Subset data sets from SALI###############################################################
sit <- SALI$SIT
sit$CREATION_DATE <- as.character(sit$CREATION_DATE)
sit$LAST_UPDATE_DATE <- as.character(sit$LAST_UPDATE_DATE)
sit[sit == "-"] <- NA
# sit <- sit %>%
# mutate_at(vars(ELEM_TYPE_CODE), na_if, "-")
obs <- SALI$OBS
obs <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,obs,min) %>%
left_join(.,obs,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
obs$CREATION_DATE <- as.character(obs$CREATION_DATE)
obs$LAST_UPDATE_DATE <- as.character(obs$LAST_UPDATE_DATE)
obs$OBS_DATE <- as.character(obs$OBS_DATE)
obs[obs == "-"] <- NA
names(obs)[10] = 'OBS_LITH_CODE'
# ODS
ods <- SALI$ODS
ods <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,ods,min) %>%
left_join(.,ods,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
ods <- aggregate(DISTURB_NO~PROJECT_CODE+SITE_ID+OBS_NO,ods,min) %>%
left_join(.,ods,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","DISTURB_NO"))
ods$CREATION_DATE <- as.character(ods$CREATION_DATE)
ods$LAST_UPDATE_DATE <- as.character(ods$LAST_UPDATE_DATE)
ods[ods == "-"] <- NA
nrow(unique(ods[,1:2]))
#SPC
spc <- obs[which(obs$TAX_UNIT_TYPE == 'SPC' & !is.na(obs$TAX_UNIT_CODE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','TAX_UNIT_CODE')]
names(spc)[4] <- "SPC"
length(unique(spc$SPC))
nrow(unique(spc[,c(1:3)]))
#spc$SPC <- spc$TAX_UNIT_CODE
# use mean value if multiple values available for one horizon
#spc<- aggregate(SPC ~ PROJECT_CODE+SITE_ID+HORIZON_NO,SPC,mean)
osc <- SALI$OSC
osc <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,osc,min) %>%
left_join(.,osc,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
osc <- aggregate(SURF_COND_NO~PROJECT_CODE+SITE_ID+OBS_NO,osc,min) %>%
left_join(.,osc,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","SURF_COND_NO"))
osc$CREATION_DATE <- as.character(osc$CREATION_DATE)
osc$LAST_UPDATE_DATE <- as.character(osc$LAST_UPDATE_DATE)
osc[osc == "-"] <- NA
# nrow(unique(osc[,1:2])) #91365
# nrow(unique(osc[,1:3])) #91365
# nrow(unique(osc[,1:4])) #91365
#vst
vst <- SALI$VST[which(SALI$VST$VEG_STRATA_CODE=='T'),]
vst <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,vst,min) %>%
left_join(.,vst,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
vst <- aggregate(VEG_COMM_NO~PROJECT_CODE+SITE_ID+OBS_NO,vst,min) %>%
left_join(.,vst,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","VEG_COMM_NO"))
vst$CREATION_DATE <- as.character(vst$CREATION_DATE)
vst$LAST_UPDATE_DATE <- as.character(vst$LAST_UPDATE_DATE)
#
vsp <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,SALI$VSP,min) %>%
left_join(.,SALI$VSP,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
vsp <- aggregate(VEG_COMM_NO~PROJECT_CODE+SITE_ID+OBS_NO,vsp,min) %>%
left_join(.,vsp,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","VEG_COMM_NO"))
vsp$CREATION_DATE <- as.character(vsp$CREATION_DATE)
vsp$LAST_UPDATE_DATE <- as.character(vsp$LAST_UPDATE_DATE)
a = vsp[which(vsp$VEG_STRATA_CODE=='T' & vsp$VEG_STRATA_SPEC_NO==1),]
vsp <- vsp[which(vsp$VEG_STRATA_CODE=='L' & vsp$VEG_STRATA_SPEC_NO ==1),] %>%
anti_join(.,vsp[which(vsp$VEG_STRATA_CODE=='T'),],
by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","VEG_COMM_NO")) %>%
full_join(.,vsp[which(vsp$VEG_STRATA_CODE=='T' & vsp$VEG_STRATA_SPEC_NO==1),])
#anti_join(SALI$VSP,.,],
#by=c("PROJECT_CODE" , "SITE_ID","OBS_NO","VEG_COMM_NO"))
vsp[vsp == "-"] <- NA
ggplot(data.frame(vsp$VEG_SPEC_CODE), aes(x=vsp$VEG_SPEC_CODE)) + geom_bar()
nrow(unique(vsp[,1:4]))
# obs_1 <- obs[, c('PROJECT_CODE','SITE_ID','OBS_NO','TAX_UNIT_TYPE','TAX_UNIT_CODE')]
# length(unique(obs_1[which(obs_1$TAX_UNIT_TYPE =='SPC'),c("TAX_UNIT_CODE")])) #1862
# #number of soil data without a SPC value
# nrow(unique(obs[which(obs$TAX_UNIT_TYPE == 'SPC' & is.na(obs$TAX_UNIT_CODE)),1:2]))
#OCL Soil classification
ocl <- SALI$OCL
ocl <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,ocl,min) %>%
left_join(.,ocl,by=c("PROJECT_CODE" = "PROJECT_CODE" , "SITE_ID" = "SITE_ID","OBS_NO"="OBS_NO"))
#eliminate duplicates with different SOIL_CLASS_NO, only keep the one with SOIL_CLASS_NO == min (most are 1)
ocl <- aggregate(SOIL_CLASS_NO~PROJECT_CODE+SITE_ID,ocl,min) %>%
left_join(.,ocl,by=c("PROJECT_CODE" , "SITE_ID","SOIL_CLASS_NO"))
ocl <- aggregate(ASC_ORD~PROJECT_CODE+SITE_ID,ocl,max) %>%
left_join(.,ocl,by=c("PROJECT_CODE" , "SITE_ID","ASC_ORD"))
ocl$CREATION_DATE <- as.character(ocl$CREATION_DATE)
ocl$LAST_UPDATE_DATE <- as.character(ocl$LAST_UPDATE_DATE)
ocl[ocl == "-"] <- NA
#
# ocf
ocf <- SALI$OCF
ocf <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,ocf,min) %>%
left_join(.,ocf,by=c("PROJECT_CODE" = "PROJECT_CODE" , "SITE_ID" = "SITE_ID","OBS_NO"="OBS_NO"))
#eliminate duplicates with different SURF_FRAG_NO, only keep the one with SURF_FRAG_NO == min (most are 1)
ocf <- aggregate(SURF_FRAG_NO~PROJECT_CODE+SITE_ID,ocf,min) %>%
left_join(.,ocf,by=c("PROJECT_CODE" , "SITE_ID","SURF_FRAG_NO"))
ocf$CREATION_DATE <- as.character(ocf$CREATION_DATE)
ocf$LAST_UPDATE_DATE <- as.character(ocf$LAST_UPDATE_DATE)
ocf[ocf == "-"] <- NA
names(ocf)[6]='OCF_LITH_CODE'
names(ocf)[5]='OCF_ABUNDANCE'
nrow(unique(ocf[,1:2]))
# HMT ###
hmt = SALI$HMT
hmt = aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hmt,min) %>%
left_join(.,hmt,by=c("PROJECT_CODE" = "PROJECT_CODE" , "SITE_ID" = "SITE_ID","OBS_NO"="OBS_NO"))
hmt = aggregate(MOTT_NO~PROJECT_CODE+SITE_ID,hmt,min) %>%
left_join(.,hmt,by =c("PROJECT_CODE" , "SITE_ID",'MOTT_NO' ))
nrow(unique(hmt[,1:3]))
# HOR
hor <- SALI$HOR
hor <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hor,min) %>%
left_join(.,hor,by=c("PROJECT_CODE" = "PROJECT_CODE" , "SITE_ID" = "SITE_ID","OBS_NO"="OBS_NO"))
hor$CREATION_DATE <- as.character(hor$CREATION_DATE)
hor$LAST_UPDATE_DATE <- as.character(hor$LAST_UPDATE_DATE)
hor[hor == "-"] <- NA
# hor$HORIZON_NAME = gsub("[[:punct:]]+",NA,hor$HORIZON_NAME , fixed = TRUE)
# hor$HORIZON_NAME = gsub("[[:punct:]]+",'',hor$HORIZON_NAME )
hor$HORIZON_NAME = gsub("\\+|\\?",'',hor$HORIZON_NAME) # drop '+' and '?'; \Q...\E literals only work with perl = TRUE
hor$HORIZON_NAME = gsub("b23",'B23',hor$HORIZON_NAME)
hor$HORIZON_NAME = gsub("a1",'A1',hor$HORIZON_NAME)
#unique(hor$HORIZON_NAME)
length(unique(hor$HORIZON_NAME))
# hor$HORIZON_NAME = gsub("[O][0-9]","O",hor$HORIZON_NAME)
#unique(hor$HORIZON_NAME[grep("b1",hor$HORIZON_NAME)])
hor <- hor[grep("S|UB",hor$HORIZON_NAME),] %>%
distinct(PROJECT_CODE,SITE_ID) %>%
anti_join(hor,.,by = c('PROJECT_CODE','SITE_ID'))
# HORIZON_NAME split
hor_split = separate(hor,HORIZON_NAME,c('HOR_PREFIX','HORIZON_NAME'),sep="(?<=[0-9]|[0-9][0-9]|^)(?=[A-Za-z]|$)",
remove = T,convert = T,extra = "merge",fill = "left")
#unique(hor_split$HORIZON_NAME[grep("O",hor_split$HORIZON_NAME)])
hor_split$HORIZON_NAME = gsub("Ap",'AP',hor_split$HORIZON_NAME)
hor_split$HORIZON_NAME = gsub("AB|A/B",'A3',hor_split$HORIZON_NAME)
hor_split$HORIZON_NAME = gsub("B3/2D",'B3',hor_split$HORIZON_NAME)
hor_split$HORIZON_NAME = gsub("BCC",'BC',hor_split$HORIZON_NAME)
hor_split$HORIZON_NAME = gsub("C/B",'C',hor_split$HORIZON_NAME)
#hor_split$HORIZON_NAME = gsub("UB",'U',hor_split$HORIZON_NAME)
# hor_split$HORIZON_NAME = gsub("Ap|AP|M",'A1',hor_split$HORIZON_NAME)
# hor_split$HORIZON_NAME = gsub("Bp",'B1',hor_split$HORIZON_NAME)
# hor_split$HORIZON_NAME = gsub("BC|BCC|BD",'B3',hor_split$HORIZON_NAME)
hor_split= separate(hor_split,HORIZON_NAME,c('HOR_MASTER','HORIZON_NAME'),
sep="((?<=([AB]([0-9]|\\b))|[OPCDRMSU])(?=[0-9a-z]|$))|((?<=[ABOP])(?=[a-z]))",
remove = T,convert = T,extra = "merge",fill = "left")
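# Illustration of the split above on a made-up horizon label (the sample value
# is hypothetical, not taken from SALI): the first separate() peels a leading
# numeric prefix off the name before the master-horizon split is applied.
demo_hor <- data.frame(HORIZON_NAME = '2A12t', stringsAsFactors = FALSE)
demo_hor <- separate(demo_hor, HORIZON_NAME, c('HOR_PREFIX','HORIZON_NAME'),
                     sep = "(?<=[0-9]|[0-9][0-9]|^)(?=[A-Za-z]|$)",
                     remove = T, convert = T, extra = "merge", fill = "left")
demo_hor   # HOR_PREFIX = 2, HORIZON_NAME = 'A12t'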
sort(unique(hor_split$HOR_MASTER))
length(unique(hor_split$HOR_MASTER))
#hor_split$HOR_MASTER[hor_split$HOR_MASTER %in% c('O','P')] = "GRP1"
# hor_split$HOR_MASTER = gsub("O|P",'GRP1',hor_split$HOR_MASTER)
# hor_split$HOR_MASTER = gsub("M|A1|AP|A",'GRP2',hor_split$HOR_MASTER)
# hor_split$HORIZON_NAME = gsub("A3|B1|AB|A/B",'GRP4',hor_split$HORIZON_NAME)
# hor_split$HORIZON_NAME = gsub("B3|BC|C|D",'A3',hor_split$HORIZON_NAME)
# hor_split$HOR_MASTER = gsub("B/C|A/C|C/B|A/B|C/B2|A/B2|A/B1|B/A|C/B3|AA|C/B1|UB2",'OTHERS',
# hor_split$HOR_MASTER)
# hor_split$HOR_MASTER = gsub("[A][4-9]",'A_OTHERS',hor_split$HOR_MASTER)
# hor_split$HOR_MASTER = gsub("[B][4-9]",'B_OTHERS',hor_split$HOR_MASTER)
# hor_split$HOR_MASTER = gsub("[P][3-9]",'P_OTHERS',hor_split$HOR_MASTER)
a=unique(hor_split[,c('HOR_PREFIX','HOR_MASTER','HORIZON_NAME')])
a[grep('[A-Z]',a$HORIZON_NAME),]
unique(a$HOR_MASTER)
hor_split= separate(hor_split,HORIZON_NAME,c('HOR_SUBHOR','HOR_SUFFIX'),sep="(?<=[0-9])(?=[a-z]|$)",
remove = T,convert = T,extra = "merge",fill = "left")
hor_split$HOR_SUFFIX[hor_split$HOR_SUFFIX == ""] = NA
unique(hor_split$HOR_MASTER)
unique(hor_split$HOR_PREFIX)
unique(hor_split$HOR_SUBHOR)
unique(hor_split$HOR_SUFFIX)
hor_spc = hor_split
hor_spc$HOR_MASTER[hor_spc$HOR_MASTER %in% c('O','P','AO')] = "GRP1"
hor_spc$HOR_MASTER[hor_spc$HOR_MASTER %in% c('M','A1','AP','A','A/C','A0','AA','AC')] = "GRP2"
hor_spc$HOR_MASTER[hor_spc$HOR_MASTER %in% c('A3','B1','AB','A/B','A4','B/A')] = "GRP4"
hor_spc$HOR_MASTER[hor_spc$HOR_MASTER %in% c('B','B2')] = "GRP5"
hor_spc$HOR_MASTER[hor_spc$HOR_MASTER %in% c('B3','BC','B/C','C','D','B4','B9','BD')] = "GRP6"
names(hor_spc)[11]='SPC_HOR_MASTER'
# c= separate(b,a,c('c','a'),sep = '(?<=[Ap][1-2]|[B][1-2])(?=[1-9])',
#remove = T,convert = T,extra = "merge",fill = "left")
# hco
colour_convert <- function(hue,value,chroma) {
color = c()
for ( i in 1 : length(hue)) {
if (hue[i]<5) {
if (value[i]>5) {
if (chroma[i] <= 3) {color[i] = 'GREY'}
else {color[i] = 'YELLOW'}
}
else if (chroma[i] <= 2) {
if (value[i]>3) {color[i] = 'GREY'}
else {color[i] = 'BLACK'}
}
else {color[i] = 'BROWN'}
}
else if (chroma[i] <= 2) {
if (value[i]>=4) {color[i] = 'GREY'}
else {color[i] = 'BLACK'}
}
else {color[i] = 'RED'}
}
return(color)
}
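# Quick sanity check of colour_convert() on made-up Munsell components
# (inputs must be numeric vectors: hue, value, chroma):
colour_convert(c(10, 2.5), c(6, 3), c(4, 2))   # "RED" "BLACK"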
hco = SALI$HCO
hco = hco[which(grepl('YR',hco$COLOUR_CODE)),]
hco = separate(hco,COLOUR_CODE,c('HUE','COLOUR_CODE'),sep='(?<=[0-9])(?=[A-Za-z])')
hco$COLOUR_CODE = gsub('YR|/','',hco$COLOUR_CODE)
hco = separate(hco,COLOUR_CODE,c('VALUE','CHROMA'),sep='(?<=[0-9])(?=[0-9])')
# HUE/VALUE/CHROMA come out of separate() as character; convert to numeric so
# the comparisons inside colour_convert() are arithmetic, not lexicographic
hco$COLOUR_CLASS = colour_convert(as.numeric(hco$HUE),as.numeric(hco$VALUE),as.numeric(hco$CHROMA))
hco = aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hco,min) %>%
left_join(.,hco,by=c("PROJECT_CODE" = "PROJECT_CODE" , "SITE_ID" = "SITE_ID","OBS_NO"="OBS_NO"))
hco = aggregate(HOR_COL_NO~PROJECT_CODE+SITE_ID,hco,min) %>%
left_join(.,hco,by =c("PROJECT_CODE" , "SITE_ID",'HOR_COL_NO' ))
#ggplot(data.frame(hco$COLOUR_CLASS), aes(x=hco$COLOUR_CLASS)) + geom_bar()
#hcu
hcu <- SALI$HCU
hcu <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hcu,min) %>%
left_join(.,hcu,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
hcu <- aggregate(CUTAN_NO~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,hcu,min) %>%
left_join(.,hcu,by=c("PROJECT_CODE" ,"SITE_ID","OBS_NO",'HORIZON_NO',"CUTAN_NO"))
hcu$CREATION_DATE <- as.character(hcu$CREATION_DATE)
hcu$LAST_UPDATE_DATE <- as.character(hcu$LAST_UPDATE_DATE)
hcu[hcu == "-"] <- NA
# hcu_confuse <- sqldf('SELECT a.PROJECT_CODE, a.SITE_ID, a.OBS_NO, a.HORIZON_NO,a.CUTAN_NO
# FROM hcu a
# JOIN hcu b
# on b.PROJECT_CODE = a.PROJECT_CODE
# AND b.SITE_ID = a.SITE_ID
# AND b.OBS_NO = a.OBS_NO
# AND b.HORIZON_NO = a.HORIZON_NO
# AND b.CUTAN_NO <> a.CUTAN_NO
# ORDER BY a.PROJECT_CODE,a.SITE_ID')
hst <- SALI$HST
hst <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hst,min) %>%
left_join(.,hst,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
hst <- aggregate(STRUCT_NO~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,hst,min) %>%
left_join(.,hst,by=c("PROJECT_CODE" ,"SITE_ID","OBS_NO",'HORIZON_NO',"STRUCT_NO"))
hst$CREATION_DATE <- as.character(hst$CREATION_DATE)
hst$LAST_UPDATE_DATE <- as.character(hst$LAST_UPDATE_DATE)
hst[hst == "-"] <- NA
hsg <- SALI$HSG
hsg <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,hsg,min) %>%
left_join(.,hsg,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
hsg <- aggregate(SEG_NO~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,hsg,min) %>%
left_join(.,hsg,by=c("PROJECT_CODE" ,"SITE_ID","OBS_NO",'HORIZON_NO',"SEG_NO"))
names(hsg)[6]='HSG_ABUNDANCE'
hsg$CREATION_DATE <- as.character(hsg$CREATION_DATE)
hsg$LAST_UPDATE_DATE <- as.character(hsg$LAST_UPDATE_DATE)
hsg[hsg == "-"] <- NA
fts <- SALI$FTS[which(SALI$FTS$TEST_TYPE == 'PH' & !is.na(SALI$FTS$VALUE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','TEST_NO','VALUE')]
names(fts)[6] <- "FTS_PH"
fts$FTS_PH <-as.numeric(fts$FTS_PH)
# use mean value if multiple values available for one horizon
fts<- aggregate(FTS_PH ~ PROJECT_CODE+SITE_ID+HORIZON_NO,fts,mean)
# lab_15n1 <- lab[which(lab$LAB_METH_CODE == '15N1' & !is.na(lab$NUMERIC_VALUE)),
# c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO','LAB_METH_CODE','NUMERIC_VALUE')] %>%
# aggregate(OBS_NO~PROJECT_CODE+SITE_ID,.,min) %>%
# left_join(.,lab[which(lab$LAB_METH_CODE == '15N1' & !is.na(lab$NUMERIC_VALUE)),
# c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO',
# 'LAB_METH_CODE','NUMERIC_VALUE')],by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
# # nrow(unique(lab_15n1[,1:2]))
# names(lab_15n1)[7] <- "VALUE_15N1"
# # use min value(first) of the SAMPLE_NO if multiple values available for one horizon
# lab_15n1<- aggregate(SAMPLE_NO~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,lab_15n1,min) %>%
# left_join(.,lab_15n1[,c('PROJECT_CODE','SITE_ID',"OBS_NO",'HORIZON_NO','SAMPLE_NO',
# "VALUE_15N1")])
# nrow(unique(lab_15n1[,1:2]))
#add extra data
names(lab_15n1_extra)[4] <- "VALUE_15N1"
lab_15n1 <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,lab_15n1_extra,min) %>%
left_join(.,lab_15n1_extra,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
lab_15n1 <- aggregate(LD~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,lab_15n1,min) %>%
left_join(.,lab_15n1[,c('PROJECT_CODE','SITE_ID',"OBS_NO",'HORIZON_NO','LD','SAMPLE_NO',
"VALUE_15N1")])
# lab_15n1_new <- full_join(lab_15n1_test,lab_15n1)
# lab_15n1 <- aggregate(OBS_NO~PROJECT_CODE+SITE_ID,lab_15n1,min) %>%
# left_join(.,lab_15n1,by=c("PROJECT_CODE" , "SITE_ID","OBS_NO"))
nrow(unique(lab_15n1[,1:5]))
lab_15n1<- aggregate(SAMPLE_NO~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,lab_15n1,min) %>%
left_join(.,lab_15n1[,c('PROJECT_CODE','SITE_ID',"OBS_NO",'HORIZON_NO','SAMPLE_NO',
"VALUE_15N1")])
lab_15n1<- aggregate(VALUE_15N1~PROJECT_CODE+SITE_ID+OBS_NO+HORIZON_NO,lab_15n1,min) #%>%
#left_join(.,lab_15n1[,c('PROJECT_CODE','SITE_ID',"OBS_NO",'HORIZON_NO','SAMPLE_NO',
# "VALUE_15N1")])
nrow(unique(lab_15n1[,1:2]))
#lab_15n1_dup <- lab_15n1[which(duplicated(lab_15n1[,1:5]) |
# duplicated(lab_15n1[,1:5], fromLast = TRUE)),]
# write.csv(lab_15n1_dup,file =paste("C:/Users/lzccn/iCloudDrive/DATA SCIENCE/DATA7703
#Machine Learing/Home Works/lab_15n1_dup",
# ".csv",sep = " "))
#nrow(unique(fts[,1:3]))
lab_13c1_fe <- lab[which(lab$LAB_METH_CODE == '13C1_Fe' & !is.na(lab$NUMERIC_VALUE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO','LAB_CODE','NUMERIC_VALUE')]
names(lab_13c1_fe)[7] <- "VALUE_13C1_Fe"
# use mean value if multiple values available for one horizon
lab_13c1_fe<- aggregate(VALUE_13C1_Fe~PROJECT_CODE+SITE_ID+HORIZON_NO,lab_13c1_fe,mean)
lab_4a1 <- lab[which(lab$LAB_METH_CODE == '4A1' & !is.na(lab$NUMERIC_VALUE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO','LAB_CODE','NUMERIC_VALUE')]
names(lab_4a1)[7] <- "VALUE_4A1"
# use mean value if multiple values available for one horizon
lab_4a1<- aggregate(VALUE_4A1~PROJECT_CODE+SITE_ID+HORIZON_NO,lab_4a1,mean)
lab_2z2_clay <- lab[which(lab$LAB_METH_CODE == '2Z2_Clay' & !is.na(lab$NUMERIC_VALUE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO','LAB_CODE','NUMERIC_VALUE')]
names(lab_2z2_clay)[7] <- "VALUE_2Z2_Clay"
# use mean value if multiple values available for one horizon
lab_2z2_clay<- aggregate(VALUE_2Z2_Clay~PROJECT_CODE+SITE_ID+HORIZON_NO,lab_2z2_clay,mean)
lab_6b_6a <- lab[which(lab$LAB_METH_CODE %in% c('6A1','6B2a','6B2b','6B2c','6B4') & !is.na(lab$NUMERIC_VALUE)),
c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SAMPLE_NO','LAB_CODE','NUMERIC_VALUE')]
names(lab_6b_6a)[7] <- "VALUE_6B_6A"
# use mean value if multiple values available for one horizon
lab_6b_6a<- aggregate(VALUE_6B_6A~PROJECT_CODE+SITE_ID+HORIZON_NO,lab_6b_6a,mean)
# nrow(unique(lab_15n1[,1:2])) #91365
# nrow(unique(lab_15n1[,1:3])) #91365
# nrow(unique(lab_15n1[,c(1,2,3,4,5)])) #91365
# nrow(unique(lab_15n1[,c(1,2,3,4)]))
# Empty rate############################################################
# empty rate of a dataframe
empty_rate <- function(x) {
rate = data.frame('name'=NA,'EMPTY_RATE'=0, 'EMPTY_#'=0, 'NON_EMPTY_#'=0, "TOTAL"=0)
n = nrow(x)
d = length(x)
for ( i in 1: d) {
na = sum(is.na(x[[i]]) )
s = na/n
rate[i,] = list(names(x[i]),s,na, n-na, n )
}
rate = rate[order(rate$EMPTY_RATE,decreasing = T),]
na_r = sum(rowSums(is.na(x)) != 0)
rate = rbind(rate,list('By_rows',na_r/n, na_r, n-na_r, n))
na_t = sum(is.na(x))
rate = rbind(rate,list('Total',na_t/(n*d),na_t,n*d-na_t,n*d ))
rate$EMPTY_RATE = label_percent()(rate$EMPTY_RATE)
return(rate)
}
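# Small self-check of empty_rate() on a toy data frame (made-up values):
# column 'x' has 1 of 2 cells missing, 'y' none, so the total rate is 25%.
demo_na <- data.frame(x = c(1, NA), y = c('a', 'b'))
empty_rate(demo_na)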
# empty_rate = data.frame('name'=NA,'RATE'=0)
# s1=s2=0
# for ( i in 1: length(SALI)) {
# s = sum(is.na(SALI[[i]])) /(length(SALI[[i]])*nrow(SALI[[i]]))
# s1 = s1+sum(is.na(SALI[[i]]))
# s2 = s2+(length(SALI[[i]])*nrow(SALI[[i]]))
# # s = label_percent()(s)
# # s = format(s,digits = 3,nsmall = 2)
# empty_rate[i,] = list(names(SALI[i]),s )
# #print(c(names(SALI[i]),sum(is.na(SALI[[i]]))/(length(SALI[[i]])*nrow(SALI[[i]]))))
# }
#
# empty_rate = empty_rate[order(empty_rate$RATE,decreasing = T),]
# empty_rate$RATE = label_percent()(empty_rate$RATE)
# empty_rate = rbind(empty_rate,list('Total',s1/s2))
# empty_rate
# sum(is.na(n_sit))/(length(n_sit)*nrow(n_sit))
# n_hor = SALI$HOR[,-c(24:27)]
# sum(is.na(n_hor))/(length(n_hor)*nrow(n_hor))
# nrow(n_hor[which(n_hor$HORIZON_NAME == '-'),])
# SALI$OCL[which(SALI$OCL$PROJECT_CODE == 'BAS' & SALI$OCL$SITE_ID == 28),]
# sum(is.na(SALI$HCU))/(length(SALI$HCU)*nrow(SALI$HCU))
# sort(empty_rate$RATE, decreasing = T)
# format(empty_rate$RATE,nsmall = 2)
# empty_rate_kmean = data.frame('name'=NA,'RATE'=0)
# for ( i in 5: length(kmean_dat2)) {
# s = sum(is.na(kmean_dat2[[i]])/length(kmean_dat2[[i]]))
# # s = label_percent()(s)
# # s = format(s,digits = 3,nsmall = 2)
# empty_rate_kmean[(i-4),] = list(names(kmean_dat2)[i], s)
# #print(c(names(SALI[i]),sum(is.na(SALI[[i]]))/(length(SALI[[i]])*nrow(SALI[[i]]))))
# }
# empty_rate_kmean = empty_rate_kmean[order(empty_rate_kmean$RATE,decreasing = T),]
# empty_rate_kmean = rbind(empty_rate_kmean,
# list('Total',
# sum(is.na(kmean_dat2[,-c(1:4)]))/((length(kmean_dat2)-4)*nrow(kmean_dat2))))
# empty_rate_kmean$RATE = label_percent(0.01 )(empty_rate_kmean$RATE)
# empty_rate_kmean
# #sum(is.na(kmean_dat2[[22]])/length(kmean_dat2[[22]]))
unique(SALI$FTS[which(SALI$FTS$TEST_TYPE=="PH"),'VALUE'])
length(unique(SALI$FTS[which(SALI$FTS$TEST_TYPE=="PH"),'VALUE']))
unique(SALI$HOR$UPPER_DEPTH)
length(unique(SALI$HOR$UPPER_DEPTH))
unique(lab[which(lab$LAB_METH_CODE=="13C1_Fe"),'NUMERIC_VALUE'])
length(unique(lab[which(lab$LAB_METH_CODE=="13C1_Fe"),'NUMERIC_VALUE']))
unique(lab[which(lab$LAB_METH_CODE %in% c('6A1','6B2a','6B2b','6B2c','6B4')),'NUMERIC_VALUE'])
length(unique(lab[which(lab$LAB_METH_CODE %in% c('6A1','6B2a','6B2b','6B2c','6B4')),'NUMERIC_VALUE']))
unique(lab[which(lab$LAB_METH_CODE %in% c('15N1')),'NUMERIC_VALUE'])
length(unique(lab[which(lab$LAB_METH_CODE %in% c('15N1')),'NUMERIC_VALUE']))
unique(lab[which(lab$LAB_METH_CODE %in% c('4A1')),'NUMERIC_VALUE'])
length(unique(lab[which(lab$LAB_METH_CODE %in% c('4A1')),'NUMERIC_VALUE']))
unique(lab[which(lab$LAB_METH_CODE %in% c('2Z2_Clay')),'NUMERIC_VALUE'])
length(unique(lab[which(lab$LAB_METH_CODE %in% c('2Z2_Clay')),'NUMERIC_VALUE']))
# Asc_train full data set ############################################################
asc_train <-
hor[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,hor[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT','SOIL_WATER_STAT')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1_new,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','DESIGN_MASTER',
'HORIZON_NAME','TEXTURE_CODE')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD')]) %>%
anti_join(.,.[which(.$ASC_ORD == '-' | is.na(.$ASC_ORD) #|
#.$DESIGN_MASTER == '-' | is.na(.$DESIGN_MASTER)|
#.$HORIZON_NAME == '-' | is.na(.$HORIZON_NAME)|
#.$BOUND_DISTINCT == '-' | is.na(.$BOUND_DISTINCT)#|
#.$TEXTURE_CODE == '-'|is.na(.$TEXTURE_CODE)|
#.$BOUND_DISTINCT == '-'|is.na(.$BOUND_DISTINCT)|
#.$SOIL_WATER_STAT == '-'|is.na(.$SOIL_WATER_STAT)|
#.$ELEM_TYPE_CODE == '-'|is.na(.$ELEM_TYPE_CODE)#|
#.$CUTAN_TYPE == '-'|is.na(.$CUTAN_TYPE)|
#.$PEDALITY_TYPE == '-'|is.na(.$PEDALITY_TYPE)#|
#.$PEDALITY_GRADE == '-'|is.na(.$PEDALITY_GRADE)|
#.$DRAINAGE == '-'|is.na(.$DRAINAGE)|
#.$STATUS == '-'|is.na(.$STATUS)
#.$NATURE == '-'|is.na(.$NATURE)
),],
by = c('PROJECT_CODE','SITE_ID'))
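# The pipeline above left-joins feature tables on shared keys, then uses an
# anti-join to drop every site whose ASC order is missing or '-'. A minimal
# base-R sketch of that pattern with hypothetical keys and labels, merge()
# standing in for dplyr::left_join():

```r
hor_toy <- data.frame(SITE_ID = c(1, 1, 2, 3), HORIZON_NO = c(1, 2, 1, 1))
ocl_toy <- data.frame(SITE_ID = c(1, 2, 3), ASC_ORD = c("VE", "-", NA))
joined  <- merge(hor_toy, ocl_toy, by = "SITE_ID", all.x = TRUE)  # left join
# sites whose label is the '-' placeholder or NA are excluded wholesale
bad_sites <- unique(joined$SITE_ID[joined$ASC_ORD == "-" | is.na(joined$ASC_ORD)])
train_toy <- joined[!(joined$SITE_ID %in% bad_sites), ]
# only site 1 survives, keeping both of its horizons
```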
asc_train_empty_rate_F = empty_rate(asc_train[,5:24])
asc_train_s1 <- asc_train[1:12,]
# asc_tr_1 <- asc_train %>% distinct(PROJECT_CODE,SITE_ID,OBS_NO,
# DRAINAGE,ELEM_TYPE_CODE,STATUS,ASC_ORD,)
# asc_tr_1_empty_rate = empty_rate(asc_tr_1[,4:6])
#
# z1 <- split(asc_train, list(asc_train$PROJECT_CODE,asc_train$SITE_ID),drop = T)
# z2 <- as.list(NA)
# for (i in 1 : length(z1)){
# z2[[i]] <- subset(z1[[i]],select = c(6:15,18:24))
# }
# asc_tr_1$HOR <- z2
# # asc_tr_1 <- filter(dg,ASC_ORD !='TE')
#
# asc_tr_s1 <- asc_tr_1[sample(nrow(asc_tr_1),10),]
#asc_train_s1 <- asc_train[sample(nrow(asc_train),10),]
# ASC_train_split Training set with split HORIZON_NAME ############################################
asc_train_split <-
hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT','SOIL_WATER_STAT')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_PREFIX','HOR_MASTER','HOR_SUBHOR','HOR_SUFFIX', 'TEXTURE_CODE')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD')]) %>%
anti_join(.,.[which(.$ASC_ORD == '-' | is.na(.$ASC_ORD) #|
#.$DESIGN_MASTER == '-' | is.na(.$DESIGN_MASTER)|
#.$HORIZON_NAME == '-' | is.na(.$HORIZON_NAME)|
#.$BOUND_DISTINCT == '-' | is.na(.$BOUND_DISTINCT)#|
#.$TEXTURE_CODE == '-'|is.na(.$TEXTURE_CODE)|
#.$BOUND_DISTINCT == '-'|is.na(.$BOUND_DISTINCT)|
#.$SOIL_WATER_STAT == '-'|is.na(.$SOIL_WATER_STAT)|
#.$ELEM_TYPE_CODE == '-'|is.na(.$ELEM_TYPE_CODE)#|
#.$CUTAN_TYPE == '-'|is.na(.$CUTAN_TYPE)|
#.$PEDALITY_TYPE == '-'|is.na(.$PEDALITY_TYPE)#|
#.$PEDALITY_GRADE == '-'|is.na(.$PEDALITY_GRADE)|
#.$DRAINAGE == '-'|is.na(.$DRAINAGE)|
#.$STATUS == '-'|is.na(.$STATUS)
#.$NATURE == '-'|is.na(.$NATURE)
),],
by = c('PROJECT_CODE','SITE_ID'))
# remove TE from the dataset
asc_train_split = filter(asc_train_split,ASC_ORD !='TE')
write.csv(asc_train_split,file ="asc_train_split.csv",row.names = FALSE)
asc_train_empty_rate = empty_rate(asc_train_split[,5:24])
# ASC_train_sub Training set with split HORIZON_NAME plus colour class and ASC suborder/great-group/subgroup codes ############################################
asc_train_sub <-
hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT','SOIL_WATER_STAT')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_PREFIX','HOR_MASTER','HOR_SUBHOR','HOR_SUFFIX', 'TEXTURE_CODE')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hco[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','COLOUR_CLASS')]) %>%
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD','SUBORD_ASC_CODE',
'GREAT_GROUP_ASC_CODE','SUBGROUP_ASC_CODE')]) %>%
anti_join(.,.[which(.$ASC_ORD == '-' | is.na(.$ASC_ORD) #|
#.$DESIGN_MASTER == '-' | is.na(.$DESIGN_MASTER)|
#.$HORIZON_NAME == '-' | is.na(.$HORIZON_NAME)|
#.$BOUND_DISTINCT == '-' | is.na(.$BOUND_DISTINCT)#|
#.$TEXTURE_CODE == '-'|is.na(.$TEXTURE_CODE)|
#.$BOUND_DISTINCT == '-'|is.na(.$BOUND_DISTINCT)|
#.$SOIL_WATER_STAT == '-'|is.na(.$SOIL_WATER_STAT)|
#.$ELEM_TYPE_CODE == '-'|is.na(.$ELEM_TYPE_CODE)#|
#.$CUTAN_TYPE == '-'|is.na(.$CUTAN_TYPE)|
#.$PEDALITY_TYPE == '-'|is.na(.$PEDALITY_TYPE)#|
#.$PEDALITY_GRADE == '-'|is.na(.$PEDALITY_GRADE)|
#.$DRAINAGE == '-'|is.na(.$DRAINAGE)|
#.$STATUS == '-'|is.na(.$STATUS)
#.$NATURE == '-'|is.na(.$NATURE)
),],
by = c('PROJECT_CODE','SITE_ID'))
# remove TE from the dataset
asc_train_sub = filter(asc_train_sub,ASC_ORD !='TE')
#write.csv(asc_train_sub,file ="./1_asc_mlp/asc_train_sub.csv",row.names = FALSE)
write_rds(asc_train_sub,"./1_asc_mlp/asc_train_sub.rds")
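# write_rds() is readr's thin wrapper around base saveRDS(); a self-contained
# round trip through a temp file shows the format is lossless (toy object):

```r
path <- tempfile(fileext = ".rds")
saveRDS(data.frame(x = 1:3), path)
back <- readRDS(path)
unlink(path)
# back$x is 1:3 again, types and all
```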
#asc_train_empty_rate = empty_rate(asc_train_split[,5:24])
# Predict dataset####################################################################################
asc_predict_split <-
hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT','SOIL_WATER_STAT')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_PREFIX','HOR_MASTER','HOR_SUBHOR','HOR_SUFFIX', 'TEXTURE_CODE')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD')]) # %>%
#anti_join(.,.[which(.$ASC_ORD != '-' | !is.na(.$ASC_ORD) #|
#.$DESIGN_MASTER == '-' | is.na(.$DESIGN_MASTER)|
#.$HORIZON_NAME == '-' | is.na(.$HORIZON_NAME)|
#.$BOUND_DISTINCT == '-' | is.na(.$BOUND_DISTINCT)#|
#.$TEXTURE_CODE == '-'|is.na(.$TEXTURE_CODE)|
#.$BOUND_DISTINCT == '-'|is.na(.$BOUND_DISTINCT)|
#.$SOIL_WATER_STAT == '-'|is.na(.$SOIL_WATER_STAT)|
#.$ELEM_TYPE_CODE == '-'|is.na(.$ELEM_TYPE_CODE)#|
#.$CUTAN_TYPE == '-'|is.na(.$CUTAN_TYPE)|
#.$PEDALITY_TYPE == '-'|is.na(.$PEDALITY_TYPE)#|
#.$PEDALITY_GRADE == '-'|is.na(.$PEDALITY_GRADE)|
#.$DRAINAGE == '-'|is.na(.$DRAINAGE)|
#.$STATUS == '-'|is.na(.$STATUS)
#.$NATURE == '-'|is.na(.$NATURE)
#),]
#by = c('PROJECT_CODE','SITE_ID'))
nrow(asc_predict_split[which(asc_predict_split$ASC_ORD == 'TE'),])
#asc_predict_split = filter(asc_predict_split,ASC_ORD !='TE')
# Normal distribution test ###############################################################################
a = aggregate(HORIZON_NO~PROJECT_CODE+SITE_ID, data = spc_kmean_dat ,FUN = length)
plt <- ggplot(a) + geom_bar(aes(x=PROJECT_CODE, y=HORIZON_NO), stat="identity")
print(plt)
histogram(a$HORIZON_NO)  # lattice::histogram(); requires the lattice package
hist(a$HORIZON_NO,xlim = c(0,20),breaks = 30)
nrow(a[a$HORIZON_NO > 6,])
nrow(a[a$HORIZON_NO < 6,])
nrow(a[a$HORIZON_NO > 6,])/nrow(a)
median(a$HORIZON_NO)
# Statistical mode: the most frequent value in a vector.
getmode <- function(v) {
  uniqv <- unique(v)
  uniqv[which.max(tabulate(match(v, uniqv)))]
}
getmode(a$HORIZON_NO)
nrow(a[a$HORIZON_NO == 4,])
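# getmode() returns the first value reaching the highest frequency; a quick
# self-contained check (the function is repeated so the snippet runs alone):

```r
getmode <- function(v) {
  uniqv <- unique(v)
  uniqv[which.max(tabulate(match(v, uniqv)))]
}
getmode(c(4, 2, 4, 6, 2, 4))   # 4 occurs most often
getmode(c("B", "A", "A", "B")) # ties resolve to the first value seen: "B"
```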
# SPC training dataset##############################################################
spc_train <-
hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','OBS_LITH_CODE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID', 'REL_MOD_SLOPE_CLASS')]) %>%
#left_join(.,ods[,c('PROJECT_CODE','SITE_ID', 'DISTURB_TYPE')]) %>%
#left_join(.,vst[,c('PROJECT_CODE','SITE_ID','OBS_NO','GROWTH_FORM','HEIGHT_CLASS','COVER_CLASS')]) %>%
left_join(.,vsp[,c('PROJECT_CODE','SITE_ID','OBS_NO','VEG_SPEC_CODE')]) %>%
left_join(.,hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SPC_HOR_MASTER')]) %>%
left_join(.,hco[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','COLOUR_CLASS')]) %>%
left_join(.,hmt[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','MOTT_TYPE')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','TEXTURE_CODE')]) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,ocf[,c('PROJECT_CODE','SITE_ID','OBS_NO','OCF_LITH_CODE')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','BOUND_DISTINCT')]) %>%
left_join(.,spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','SPC')]) #%>%
# anti_join(.,.[which(.$SPC == '-' | is.na(.$SPC) #|
#.$DESIGN_MASTER == '-' | is.na(.$DESIGN_MASTER)|
#.$HORIZON_NAME == '-' | is.na(.$HORIZON_NAME)|
#.$BOUND_DISTINCT == '-' | is.na(.$BOUND_DISTINCT)#|
#.$TEXTURE_CODE == '-'|is.na(.$TEXTURE_CODE)|
#.$BOUND_DISTINCT == '-'|is.na(.$BOUND_DISTINCT)|
#.$SOIL_WATER_STAT == '-'|is.na(.$SOIL_WATER_STAT)|
#.$ELEM_TYPE_CODE == '-'|is.na(.$ELEM_TYPE_CODE)#|
#.$CUTAN_TYPE == '-'|is.na(.$CUTAN_TYPE)|
#.$PEDALITY_TYPE == '-'|is.na(.$PEDALITY_TYPE)#|
#.$PEDALITY_GRADE == '-'|is.na(.$PEDALITY_GRADE)|
#.$DRAINAGE == '-'|is.na(.$DRAINAGE)|
#.$STATUS == '-'|is.na(.$STATUS)
#.$NATURE == '-'|is.na(.$NATURE)
# ),],
# by = c('PROJECT_CODE','SITE_ID'))
spc_train = na_if(spc_train,'-')
#ggplot(data.frame(spc_train$SPC), aes(x=spc_train$SPC)) + geom_bar()
write.csv(spc_train,file ="spc_train.csv", row.names = FALSE)
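# na_if() is vectorised over a single column and its behaviour on whole data
# frames varies across dplyr versions; a base-R equivalent that recodes the
# '-' placeholder in every column (hypothetical toy data):

```r
toy <- data.frame(a = c("x", "-", "y"), b = c("-", "z", "w"),
                  stringsAsFactors = FALSE)
toy[toy == "-"] <- NA
# toy$a is now c("x", NA, "y") and toy$b is c(NA, "z", "w")
```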
# ASC_SPC_mix Training set with split HORIZON_NAME ############################################
asc_spc <-
# base
hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','TEXTURE_CODE')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_PREFIX','HOR_SUBHOR','HOR_SUFFIX')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
#asc_ex
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'SOIL_WATER_STAT')]) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_MASTER')]) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
#spc_ex
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','OBS_LITH_CODE')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID', 'REL_MOD_SLOPE_CLASS')]) %>%
#left_join(.,ods[,c('PROJECT_CODE','SITE_ID', 'DISTURB_TYPE')]) %>%
#left_join(.,vst[,c('PROJECT_CODE','SITE_ID','OBS_NO','GROWTH_FORM','HEIGHT_CLASS','COVER_CLASS')]) %>%
left_join(.,vsp[,c('PROJECT_CODE','SITE_ID','OBS_NO','VEG_SPEC_CODE')]) %>%
left_join(.,hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SPC_HOR_MASTER')]) %>%
left_join(.,hmt[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','MOTT_TYPE')]) %>%
left_join(.,ocf[,c('PROJECT_CODE','SITE_ID','OBS_NO','OCF_LITH_CODE')]) %>%
#so_ex
left_join(.,hco[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','COLOUR_CLASS')]) %>%
#mottle type in spc
#gg_ex
#sg_ex
#asc_tag
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD','SUBORD_ASC_CODE',
'GREAT_GROUP_ASC_CODE','SUBGROUP_ASC_CODE')]) %>%
#spc_tag
left_join(.,spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','SPC')]) %>%
# DELETE NA TAG DATA
anti_join(.,.[which(.$ASC_ORD == '-' | is.na(.$ASC_ORD)|
.$SUBORD_ASC_CODE == '-' | is.na(.$SUBORD_ASC_CODE)|
.$GREAT_GROUP_ASC_CODE == '-' | is.na(.$GREAT_GROUP_ASC_CODE)|
.$SUBGROUP_ASC_CODE == '-' | is.na(.$SUBGROUP_ASC_CODE)|
.$SPC == '-' | is.na(.$SPC)
),],
by = c('PROJECT_CODE','SITE_ID'))
# remove TE from the dataset
#asc_spc = filter(asc_spc,ASC_ORD !='TE')
write_rds(asc_spc,"./0_general/asc_spc.rds")
# ASC_SUB ##################################
asc_sub <-
# base
hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')] %>%
left_join(.,osc[,c('PROJECT_CODE','SITE_ID','STATUS')]) %>%
left_join(.,sit[,c('PROJECT_CODE','SITE_ID','ELEM_TYPE_CODE')]) %>%
left_join(.,fts,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_GRADE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'UPPER_DEPTH','LOWER_DEPTH','BOUND_DISTINCT')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','TEXTURE_CODE')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_PREFIX','HOR_SUBHOR','HOR_SUFFIX')]) %>%
left_join(.,hcu[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','CUTAN_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hst[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','PEDALITY_TYPE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hsg[,c('PROJECT_CODE','SITE_ID','HORIZON_NO','NATURE')],
by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
#asc_ex
left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','DRAINAGE')]) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'SOIL_WATER_STAT')]) %>%
left_join(.,lab_13c1_fe,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_15n1,by = c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO')) %>%
left_join(.,lab_6b_6a,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_4a1,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,lab_2z2_clay,by = c('PROJECT_CODE','SITE_ID','HORIZON_NO')) %>%
left_join(.,hor_split[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO',
'HOR_MASTER')]) %>%
#spc_ex
# left_join(.,obs[,c('PROJECT_CODE','SITE_ID','OBS_NO','OBS_LITH_CODE')],) %>%
# left_join(.,sit[,c('PROJECT_CODE','SITE_ID', 'REL_MOD_SLOPE_CLASS')]) %>%
# #left_join(.,ods[,c('PROJECT_CODE','SITE_ID', 'DISTURB_TYPE')]) %>%
# #left_join(.,vst[,c('PROJECT_CODE','SITE_ID','OBS_NO','GROWTH_FORM','HEIGHT_CLASS','COVER_CLASS')]) %>%
# left_join(.,vsp[,c('PROJECT_CODE','SITE_ID','OBS_NO','VEG_SPEC_CODE')]) %>%
# left_join(.,hor_spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','SPC_HOR_MASTER')]) %>%
# left_join(.,hmt[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','MOTT_TYPE')]) %>%
# left_join(.,ocf[,c('PROJECT_CODE','SITE_ID','OBS_NO','OCF_LITH_CODE')]) %>%
#so_ex
left_join(.,hco[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','COLOUR_CLASS')]) %>%
left_join(.,hmt[,c('PROJECT_CODE','SITE_ID','OBS_NO','HORIZON_NO','MOTT_TYPE')]) %>%
#mottle type in spc
#gg_ex
#sg_ex
#asc_tag
left_join(.,ocl[,c('PROJECT_CODE','SITE_ID','ASC_ORD','SUBORD_ASC_CODE',
'GREAT_GROUP_ASC_CODE','SUBGROUP_ASC_CODE')]) %>%
#spc_tag
#left_join(.,spc[,c('PROJECT_CODE','SITE_ID','OBS_NO','SPC')]) %>%
# DELETE NA TAG DATA
anti_join(.,.[which(.$ASC_ORD == '-' | is.na(.$ASC_ORD)#|
#.$SUBORD_ASC_CODE == '-' | is.na(.$SUBORD_ASC_CODE)|
#.$GREAT_GROUP_ASC_CODE == '-' | is.na(.$GREAT_GROUP_ASC_CODE)|
#.$SUBGROUP_ASC_CODE == '-' | is.na(.$SUBGROUP_ASC_CODE)#|
#.$SPC == '-' | is.na(.$SPC)
),],
by = c('PROJECT_CODE','SITE_ID'))
# remove TE from the dataset
#asc_sub = filter(asc_sub,ASC_ORD !='TE')
nrow(unique(asc_sub[,1:2]))
write_rds(asc_sub,"./0_general/asc_sub_new.rds")
# for (i in names(SALI)) {
# assign(i,SALI[[i]])
# }
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/getVRPFeatureSet.R
\name{getVRPFeatureSet}
\alias{getVRPFeatureSet}
\title{Feature: Statistics of distances between obligatory/optional customers and
depots.}
\usage{
getVRPFeatureSet(x, include.costs = FALSE)
}
\arguments{
\item{x}{[\code{Network}]\cr
Network.}
\item{include.costs}{[\code{logical(1)}]\cr
Should costs be included as an additional feature? Default is \code{FALSE}.}
}
\value{
[\code{list}]
}
\description{
Feature: Statistics of distances between obligatory/optional customers and
depots.
}
\note{
Only orienteering instances with two depots are supported at the moment.
}
|
/man/getVRPFeatureSet.Rd
|
no_license
|
cdx08222028/salesperson
|
R
| false
| true
| 670
|
rd
|
# This is for processing GHTorrent Rails example data
#Set the working directory
setwd("~/github/local/VOSS-Sequencing-Toolkit/msr_procedural_structure/")
#Load the TraMineR and cluster libraries
library(TraMineR)
library(TraMineRextras)
library(PST)
library(cluster)
library(stringr)
library(VLMC)
# Direct output to a textfile
# sink("twitter_output.txt", append=FALSE, split=FALSE)
# To reset, use:
# sink(file = NULL)
## Load data file
sequences <- read.delim(file = "rails_data.txt", header = FALSE, sep = ",", nrows = 100)
# Turn this command on to get only the head of 100
# sequences <- head(sequences, 100)
# Turn this on if you need to transpose the data
# sequences <- t(sequences)
# Turn this on to get only 10 sequences
# sequences <- head(sequences, 10)
# Fix column names
colnames(sequences) <- c("id", "time", "event")
# Convert time column to actual dates
sequences$time <- as.POSIXct(sequences$time, origin="1970-01-01 00:00:01")
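# The epoch-seconds conversion above can be checked in isolation; pinning the
# time zone makes the result machine-independent (the timestamp is made up):

```r
ts_utc <- as.POSIXct(1262304000, origin = "1970-01-01", tz = "UTC")
format(ts_utc, "%Y-%m-%d %H:%M:%S")  # "2010-01-01 00:00:00"
```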
# DO SIMPLE EVENT MINING
## Define the event sequence object (for descriptives)
sequences2 <- head(sequences, 50)
sequences.seqe <- seqecreate(sequences2, id = sequences2$id, timestamp = sequences2$time,event = sequences2$event)
# Displaying the data
print(sequences.seqe)
seqpcplot(sequences.seqe)
# Find most frequent subsequences
mostfreq <- seqefsub(sequences.seqe, pMinSupport = 0.05)
plot(mostfreq[1:10]) # display the 10 most common subsequences
# Filter out single event sequences
mostfreq2 <- seqentrans(mostfreq)
mostfreq3 <- mostfreq2[mostfreq2$data$nevent > 1]
plot(mostfreq3[1:10]) # display the 10 most common subsequences with > 1 events
# 4. Extracting sequential association rules
# TODO: do this next, then email Georgios to ask for more data, a richer alphabet, and covariates.
# 5. Dissimilarity-based analysis of event sequences
# 2. Covariates
# 2.5 Entropy?
# 3. Finding most discriminate subsequences for covariates
# FITTING USING VLMC
# vlmc() expects a discrete (character) sequence, so coerce the event column
# rather than overwriting the whole data frame (which would also break the
# TSE_to_STS() call below)
sequences.vlmc <- vlmc(as.character(sequences$event))
# FITTING USING PST
# Convert to STS
sequences.sts <- TSE_to_STS(sequences, id = 1, timestamp = 2, event = 3, tmax = 10)
sequences.sts <- seqdef(sequences.sts)
# sequences <- as.data.frame(sequences)
# Fit PST model
sequences.pst <- pstree(sequences.sts, L=20, nmin=30)
# Mine patterns
sequences.pm <- pmine(sequences.pst, sequences.sts, pmin = 0.4, output = "patterns")  # sequences.sts, not the undefined sequences.seq
data(s1)
s1.seq <- seqdef(s1)
s1.seq
S1 <- pstree(s1.seq, L = 3)
print(S1, digits = 3)
S1
|
/msr_procedural_structure/pst_mining_.R
|
no_license
|
aronlindberg/VOSS-Sequencing-Toolkit
|
R
| false
| false
| 2,455
|
r
|
\name{univariate_table}
\alias{univariate_table}
\title{Create a custom descriptive table for a dataset}
\description{
Produces a formatted table of univariate summary statistics with options allowing for stratification by one or more variables, computing of custom summary/association statistics, custom string templates for results, etc.
}
\usage{
univariate_table(
data,
strata = NULL,
associations = NULL,
numeric_summary = c(Summary = "median (q1, q3)"),
categorical_summary = c(Summary = "count (percent\%)"),
other_summary = NULL,
all_summary = NULL,
evaluate = FALSE,
add_n = FALSE,
order = NULL,
labels = NULL,
levels = NULL,
format = c("html", "latex", "markdown", "pandoc", "none"),
variableName = "Variable",
levelName = "Level",
sep = "_",
fill_blanks = "",
caption = NULL,
...
)
}
\arguments{
\item{data}{A \code{\link{data.frame}}.}
\item{strata}{An additive \code{\link{formula}} specifying stratification columns. Columns on the left side go down the rows, and columns on the right side go across the columns. Defaults to \code{NULL}.}
\item{associations}{A named \code{\link{list}} of functions to evaluate with column strata and each variable. Defaults to \code{NULL}. See \code{\link{univariate_associations}}.}
\item{numeric_summary}{A named vector containing string templates of how results for numeric data should be presented. See details for what is available by default. Defaults to \code{c(Summary = "median (q1, q3)")}.}
\item{categorical_summary}{A named vector containing string templates of how results for categorical data should be presented. See details for what is available by default. Defaults to \code{c(Summary = "count (percent\%)")}.}
\item{other_summary}{A named character vector containing string templates of how results for non-numeric and non-categorical data should be presented. Defaults to \code{NULL}.}
\item{all_summary}{A named character vector containing string templates of additional results applying to all variables. See details for what is available by default. Defaults to \code{NULL}.}
\item{evaluate}{Should the results of the string templates be evaluated as an \code{R} expression after filled with their values? See \code{\link{absorb}} for details. Defaults to \code{FALSE}.}
\item{add_n}{Should the sample size for each stratfication level be added to the result? Defaults to \code{FALSE}.}
\item{order}{Arguments passed to \code{forcats::fct_relevel} for reordering the variables. Defaults to \code{NULL}}
\item{labels}{A named character vector containing the new labels. Defaults to \code{NULL}}
\item{levels}{A named \code{\link{list}} of named character vectors containing the new levels. Defaults to \code{NULL}}
\item{format}{The format that the result should be rendered in. Must be "html", "latex", "markdown", "pandoc", or "none". Defaults to \code{"html"}.}
\item{variableName}{Header for the variable column in the result. Defaults to \code{"Variable"}.}
\item{levelName}{Header for the factor level column in the result. Defaults to \code{"Level"}.}
\item{sep}{Delimiter to separate summary columns. Defaults to \code{"_"}.}
\item{fill_blanks}{String to fill in blank spaces in the result. Defaults to \code{""}.}
\item{caption}{Caption for resulting table passed to \code{knitr::kable}. Defaults to \code{NULL}.}
\item{...}{Additional arguments to pass to \code{\link{descriptives}}.}
}
\value{
A table of summary statistics in the specified \code{format}. A \code{tibble::tibble} is returned if \code{format = "none"}.
}
\author{Alex Zajichek}
\examples{
#Set format
format <- "pandoc"
#Default summary
heart_disease \%>\%
univariate_table(
format = format
)
#Stratified summary
heart_disease \%>\%
univariate_table(
strata = ~Sex,
add_n = TRUE,
format = format
)
#Row strata with custom summaries
heart_disease \%>\%
univariate_table(
strata = HeartDisease~1,
numeric_summary = c(Mean = "mean", Median = "median"),
categorical_summary = c(`Count (\%)` = "count (percent\%)"),
categorical_types = c("factor", "logical"),
add_n = TRUE,
format = format
)
}
| /man/univariate_table.Rd | permissive | zajichek/cheese | R | false | false | 4,232 | rd |
library(mlrCPO)
### Name: makeCPO
### Title: Create a Custom CPO Constructor
### Aliases: makeCPO makeCPOExtendedTrafo makeCPORetrafoless
### makeCPOTargetOp makeCPOExtendedTargetOp
### ** Examples
# an example constant feature remover CPO
constFeatRem = makeCPO("constFeatRem",
dataformat = "df.features",
cpo.train = function(data, target) {
names(Filter(function(x) { # names of columns to keep
length(unique(x)) > 1
}, data))
}, cpo.retrafo = function(data, control) {
data[control]
})
# alternatively:
constFeatRem = makeCPO("constFeatRem",
dataformat = "df.features",
cpo.train = function(data, target) {
cols.keep = names(Filter(function(x) {
length(unique(x)) > 1
}, data))
# the following function will do both the trafo and retrafo
result = function(data) {
data[cols.keep]
}
result
}, cpo.retrafo = NULL)
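# --- Illustrative sketch (added; not part of the original example) ---
# The cpo.train logic above keeps columns with more than one unique value.
# The same base-R idiom, run on a hypothetical toy data frame:
toy <- data.frame(a = c(1, 1, 1), b = c(1, 2, 3))
kept <- names(Filter(function(x) length(unique(x)) > 1, toy))
# kept is "b": column a is constant and gets dropped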
| /data/genthat_extracted_code/mlrCPO/examples/makeCPO.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 896 | r |
# Packages required for read_rds(), %<>%, left_join()/mutate()/select(), and month()
library(readr)
library(magrittr)
library(dplyr)
library(lubridate)
#### load climate and fire data ----
# Go here for CA_fire.RDS data: https://github.com/dinhkristine/Climate-Fire/tree/master/data/fires
CA_fire <- read_rds("data/fires/CA_fire.RDS")
# Go here for CA_temp.RDS data: https://github.com/dinhkristine/Climate-Fire/tree/master/data/temp
CA_temp <- read_rds("data/temp/CA_temp.RDS")
#### prepare temp data ----
## edit column to merge in with fire data discovery date
colnames(CA_temp) <- c("discovery_date", "COUNTYFYP", "discovery_min_temp",
"discovery_max_temp", "discovery_prec")
## join with fire data
CA_fire %<>%
left_join(CA_temp, by = c("discovery_date", "COUNTYFYP"))
## edit column to merge in with fire data cont date
colnames(CA_temp) <- c("cont_date", "COUNTYFYP", "cont_min_temp", "cont_max_temp", "cont_prec")
## join with fire data
CA_fire %<>%
left_join(CA_temp, by = c("cont_date", "COUNTYFYP"))
#### Create new column ----
data <- CA_fire %>%
mutate(fire_month = month(discovery_date))
#### select necessary columns ----
data %<>%
dplyr::select(objectid,
fire_year,
fire_month,
lat = y,
lon = x,
COUNTYFP = COUNTYFYP,
min_temp = discovery_min_temp,
max_temp = discovery_max_temp,
prec = discovery_prec)
#### save data ----
# write_rds(data, "data/data.RDS")
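# --- Illustrative sketch (added; not part of the original script) ---
# The rename-then-join pattern above, shown on hypothetical toy data frames
# with base merge() so it runs without extra packages:
toy_temp <- data.frame(discovery_date = c("2001-01-01", "2001-01-02"),
                       min_temp = c(5, 7))
toy_fire <- data.frame(objectid = 1:2,
                       discovery_date = c("2001-01-02", "2001-01-01"))
toy_joined <- merge(toy_fire, toy_temp, by = "discovery_date")
# each fire row gains the min_temp recorded on its discovery date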
| /release-R-files/data-flat-files.R | no_license | dinhkristine/Climate-Fire | R | false | false | 1,371 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fun.R
\name{bd}
\alias{bd}
\title{Show demos}
\usage{
bd(template = NA, to = ".")
}
\arguments{
\item{template}{NA or character, templates to show}
\item{to}{character. The destination directory for the bookdown project}
}
\value{
demo files
}
\description{
Show demos
}
\examples{
bd(NULL)
}
| /man/bd.Rd | no_license | cran/bookdownplus | R | false | true | 394 | rd |
# 3-Construct\unzip-files.R
# Goals of this script are:
# - To loop through the downloaded files, unzipping them into .xml on the fly
###################
# 1. Housekeeping #
###################
# Change into the directory where downloaded data is stored (at the end of step 2)
data_raw_directory <- paste(data_directory, data_download_date, "/", "zakupki-", data_download_date, "-raw-data/94fz/", sep="")
# Define target directory for processed data
data_unzipped_directory <- paste(data_directory, data_download_date, "/", "zakupki-", data_download_date, "-unzipped-data/94fz/", sep="")
############################################
# 2. Gather parameters about the job ahead #
############################################
# Obtain list of regions for which downloaded data is available
regions_list <- as.list(list.files(path=data_raw_directory))
# Filter out non-regions into their own list and remove the log files from the list of regions
others_list <- Filter((function(x) (x %in% c("_FCS_nsi", "nsi"))), regions_list)
regions_list <- Filter((function(x) (!x %in% c("_FCS_nsi", "_readme.txt", "nsi"))), regions_list)
regions_list <- Filter((function(x) !grepl('\\.log$', x)), regions_list) # escape the dot so only .log files match
regions_list <- Filter((function(x) !grepl('Krym_Resp', x)), regions_list) # Remove Crimea
regions_list <- Filter((function(x) !grepl('Sevastopol_g', x)), regions_list) # Remove Sevastopol
regions_number <- length(regions_list)
##############################################
# 3. Define functions to process each region #
##############################################
# Define a function to do all the hard work unzipping, taking two inputs: type of document and region
unzip_files <- function(document_type, current_region){
from_directory <- paste(data_raw_directory, current_region, "/", document_type, sep="")
to_directory <- paste(data_unzipped_directory, current_region, "/", document_type, sep="")
dir.create(to_directory, recursive=TRUE)
files_list <- as.list(list.files(path=from_directory, pattern="zip$", recursive=TRUE, full.names=TRUE))
files_list_length <- length(files_list)
for (l in seq_len(files_list_length)){ # seq_len() handles empty directories safely
unzip(as.character(files_list[l]), exdir=to_directory, overwrite=TRUE)
print(paste(l, " of ", files_list_length, " files unzipped", sep=""))
}
}
################################################
# 4. Loop over regions to process them in turn #
################################################
# List of document types to process
# document_types <- as.list(c("contracts", "notifications", "placement_result", "plans", "protocols"))
document_types <- as.list(c("contracts", "notifications", "protocols")) # only interested in these docs
document_types_number <- length(document_types)
# List of regions from 2. above is called here, looped through by region, then documents loop inside
for (r in 1:regions_number){
current_region <- as.character(regions_list[r])
for (d in 1:document_types_number){
document_type <- as.character(document_types[d])
print(paste("Processing ", document_type, " documents from ", current_region, sep=""))
unzip_files(document_type, current_region)
}
}
# ENDS
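# --- Illustrative sketch (added; not part of the original script) ---
# The Filter()-based region filtering in section 2 can be checked on a
# hypothetical toy vector; Filter() keeps elements where the predicate is TRUE:
toy_regions <- list("Adygeja_Resp", "_FCS_nsi", "unload.log", "Krym_Resp", "nsi")
toy_regions <- Filter(function(x) !x %in% c("_FCS_nsi", "_readme.txt", "nsi"), toy_regions)
toy_regions <- Filter(function(x) !grepl("\\.log$", x), toy_regions)
toy_regions <- Filter(function(x) !grepl("Krym_Resp", x), toy_regions)
# only "Adygeja_Resp" survives the filters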
| /3-Unpack/unzip-files.R | no_license | shaunmcgirr/shaun-mcgirr-dissertation | R | false | false | 3,157 | r |
# -----------------------------------------------------------------------------------
# Building a Shiny app for visualisation of neurological outcomes in SCI
#
# July 8, 2020
# L. Bourguignon
# -----------------------------------------------------------------------------------
# Set working directory ----
# Load packages ----
library(rsconnect)
library(shiny)
library(shinyWidgets)
library(shinydashboard)
library(ggplot2)
library(data.table)
library(assertthat)
library(plyr)
library(dplyr)
library('ggthemes')
library(ggpubr)
library(ggridges)
library(gridExtra)
library(sjPlot)
library(jtools)
library(reshape2)
library(PMCMRplus)
#library(epicalc)
library(EpiReport)
library(epiDisplay)
library(naniar)
library(boot)
library(table1)
library(broom)
#library(pander)
library(gtable)
library(grid)
library(tidyr)
library(Hmisc)
library(RColorBrewer)
library(lme4)
library(DT)
library(shinyjs)
library(sodium)
library(shinymanager)
library(shinyalert)
library(plotly)
# Source helper functions -----
source("helper_functions_3.R")
# Load data ----
data_emsci <- read.csv('data/df_emsci_formatted2.csv')
data_emsci$ExamStage <- as.factor(data_emsci$ExamStage)
data_emsci$ExamStage <- relevel(data_emsci$ExamStage, ref = "very acute")
data_sygen <- read.csv('data/df_sygen_formatted_3.csv')
data_SCI_rehab <- read.csv('data/df_rehab_formatted.csv')
data_All <- read.csv('data/df_all_formatted.csv')
data_emsci_sygen <- read.csv('data/df_emsci_sygen_formatted.csv')
data_age_emsci <- read.csv('data/emsci_age.csv')
data_emsci_epi <- read.csv('data/emsci.csv')
data_emsci_epi$ExamStage <- as.factor(data_emsci_epi$ExamStage)
data_emsci_epi$ExamStage <- relevel(data_emsci_epi$ExamStage, ref = "very acute")
data_sygen_epi <- read.csv('data/sygen_epi.csv')
data_SCI_rehab_epi <- read.csv('data/df_rehab_epi.csv')
# Functions ----
convertMenuItem <- function(mi,tabName) {
mi$children[[1]]$attribs['data-toggle']="tab"
mi$children[[1]]$attribs['data-value'] = tabName
if(length(mi$attribs$class)>0 && mi$attribs$class=="treeview"){
mi$attribs$class=NULL
}
mi
}
latest.DateTime <- file.info("app_neuro_2.R")$mtime
# User interface ----
ui <- dashboardPage(skin = "blue", # make the frame blue
dashboardHeader(title = img(src="neurosurveillance_logo.png", height="80%", width="80%")), # rename the left column 'Menu'
## Sidebar content
dashboardSidebar(
sidebarMenu(id = "sidebarmenu", # create the main and sub parts within the sidebar menu
menuItem("About", tabName = "AboutTab", icon = icon("info-circle")),
menuItem("User guide", tabName = "user_guide", icon = icon("book-reader")),
menuItem('EMSCI', tabName = 'emsci', icon=icon("database"),
menuSubItem("Study details", tabName = "about_emsci", icon = icon("info-circle")),
menuSubItem("Epidemiological features", tabName = "epi_emsci", icon = icon("users")),
menuSubItem("Neurological features", tabName = "neuro_emsci", icon = icon("user-check")),
menuSubItem("Functional features", tabName = "funct_emsci", icon = icon("accessible-icon")),
menuSubItem("Monitoring", tabName = "monitore_emsci", icon = icon("clipboard-list"))),
menuItem('Sygen Trial', tabName = 'sygen', icon = icon("hospital-user"),
menuSubItem("Study details", tabName = "about_sygen", icon = icon("info-circle")),
menuSubItem("Epidemiological features", tabName = "epi_sygen", icon = icon("users")),
menuSubItem("Neurological features", tabName = "neuro_sygen", icon = icon("user-check")),
menuSubItem("Functional features", tabName = "funct_sygen", icon = icon("accessible-icon"))),
menuItem('Data sources compared', tabName = 'All', icon = icon("balance-scale"),
menuSubItem("Neurological features", tabName = "neuro_all", icon = icon("user-check"))),
menuItem("Abbreviations", tabName = "abbreviations", icon = icon("language")),
menuItem(HTML(paste0("Contact for collaborations ", icon("external-link"))), icon=icon("envelope"), href = "mailto:catherine.jutzeler@bsse.ethz.ch"),
uiOutput("dynamic_content")
) # end sidebarMenu
), # end dashboardSidebar
dashboardBody(
tags$script(HTML("
var openTab = function(tabName){
$('a', $('.sidebar')).each(function() {
if(this.getAttribute('data-value') == tabName) {
this.click()
};
});
};
$('.sidebar-toggle').attr('id','menu');
var dimension = [0, 0];
$(document).on('shiny:connected', function(e) {
dimension[0] = window.innerWidth;
dimension[1] = window.innerHeight;
Shiny.onInputChange('dimension', dimension);
});
$(window).resize(function(e) {
dimension[0] = window.innerWidth;
dimension[1] = window.innerHeight;
Shiny.onInputChange('dimension', dimension);
});
")),
# Customize color for the box status 'primary' and 'success' to match the skin color
tags$style(HTML("
.btn.btn-success {
color: #fff;
background-color: #3c8dbc;
border-color: #3c8dbc;
}
.btn.btn-success.focus,
.btn.btn-success:focus {
color: #fff;
background-color: #3c8dbc;
border-color: #3c8dbc;
outline: none;
box-shadow: none;
}
.btn.btn-success:hover {
color: #fff;
background-color: #3c8dbc;
border-color: #3c8dbc;
outline: none;
box-shadow: none;
}
.btn.btn-success.active,
.btn.btn-success:active {
color: #fff;
background-color: #3c8dbc;
border-color: #3c8dbc;
outline: none;
}
.btn.btn-success.active.focus,
.btn.btn-success.active:focus,
.btn.btn-success.active:hover,
.btn.btn-success:active.focus,
.btn.btn-success:active:focus,
.btn.btn-success:active:hover {
color: #fff;
background-color: #7ac8f5 ;
border-color: #7ac8f5 ;
outline: none;
box-shadow: none;
}
")),
tags$div(class = "tab-content",
#tabItems(
tabItem(tabName = "AboutTab",
titlePanel(title = div(strong("Welcome to ", img(src="neurosurveillance_logo.png", height="35%", width="35%")))),
fluidRow( # create a separation in the panel
column(width = 8, # create first column for boxplot
box(width = NULL, status = "primary",
p(h3("Benchmarking the spontaneous functional and neurological recovery following spinal cord injury"), align = "justify"),
br(),
p('Traumatic spinal cord injury is a rare but devastating neurological disorder.
It constitutes a major public health issue, burdening both patients, caregivers,
as well as society as a whole. The goal of this project is to establish an
international benchmark for neurological and functional recovery after spinal
cord injury. Currently, Neurosurveillance leverages three decades of data from two
of the largest data sources in the field facilitating the analysis of temporal
trends in epidemiological landscape, providing reference values for future clinical
trials and studies, and enabling monitoring of patients on a personalized level.', align = "justify"),
p('More information can be found here:', a(icon('github'), href ="https://github.com/jutzca/Neurosurveillance", target="_blank")),
br(),
p(h3('What You Can Do Here:')),
p("This applet enables visitors to interact directly with the data collected
in the EMSCI study and the Sygen clinical trial. Both",
a("EMSCI", onclick = "openTab('emsci')", href="#"),
"and", a("Sygen", onclick = "openTab('sygen')", href="#"),
"Tabs are organized as follows:", align = "justify"),
tags$ul(align="justify",
tags$li("The Study details Tab provides information about the data sources
(",
a("EMSCI study", onclick = "openTab('about_emsci')", href="#"),
"and the ",
a("Sygen clinical trial", onclick = "openTab('about_sygen')", href="#"),
")."),
tags$li("The Epidemiological feature Tabs (",
a("EMSCI study", onclick = "openTab('epi_emsci')", href="#"), " and ",
a("Sygen clinical trial", onclick = "openTab('epi_sygen')", href="#"),
"), Neurological features Tabs (",
a("EMSCI study", onclick = "openTab('neuro_emsci')", href="#"), "and",
a("Sygen clinical trial", onclick = "openTab('neuro_sygen')", href="#"),
") and Functional features Tabs (",
a("EMSCI study", onclick = "openTab('funct_emsci')", href="#"), "and",
a("Sygen clinical trial", onclick = "openTab('funct_sygen')", href="#"),
") offer an interactive interface to explore the epidemiological characteristics, as well as
neurological and functional recovery after spinal cord injury, respectively.
You can choose the data source and
outcome variable of interest, select the cohort of interest based on demographics and
injury characteristics (entire cohort or a subset thereof), and choose the variables for
the visualization (see more details in",
a("User Guide", onclick = "openTab('user_guide')", href="#"),
")."),
tags$li("The ",
a("Monitoring", onclick = "openTab('monitore_emsci')", href="#"),
" Tab, part of the EMSCI Tab, gives you the possibility to visualize and monitor the neurological and
functional recovery trajectory of single patients or patient groups that share very similar
demographics and baseline injury characteristics. As an example, if you have a patient in the
clinical with a certain motor score and you are interested in their recovery, you can have a
look at previous patients with comparable characteristics. This follows the concept of a
digital twin/sibling.")),
p("Additionally, you can find:", align = "justify"),
tags$ul(align="justify",
tags$li("A ",
a("User Guide", onclick = "openTab('user_guide')", href="#"),
" describing the different ways to interact with the plots."),
tags$li("The ",
a("Abbreviation", onclick = "openTab('abbreviations')", href="#"),
" Tab describes the abbreviations used within the framework of this applet.")
),
uiOutput("video"),
br(),
p(h3('Study team')),
p(h4('Principal Investigators')),
HTML(
paste('<p style="text-align:justify">',
a(icon('envelope'), href = 'mailto:catherine.jutzeler@bsse.ethz.ch'), strong('Dr. Catherine Jutzeler'),
', Research Group Leader, Department of Biosystems Science and Engineering, ETH Zurich, Switzerland',
'<br/>',
a(icon('envelope'), href = 'mailto:lucie.bourguignon@bsse.ethz.ch'), strong('Lucie Bourguignon'),
', MSc. PhD Student, Department of Biosystems Science and Engineering, ETH Zurich, Switzerland',
'<br/>',
a(icon('envelope'), href = 'mailto:john.kramer@ubc.ca'), strong('Prof. John Kramer'),
', ICORD principal investigator, Anesthesiology, Pharmacology, and Therapeutics,
University of British Columbia, Vancouver, Canada',
'<br/>',
a(icon('envelope'), href = 'mailto:armin.curt@balgrist.ch'), strong('Prof. Armin Curt'),
', Director of Balgrist Spinal Cord Injury Center and EMSCI, University Hospital Balgrist,
Zurich, Switzerland'
)
),
p(h4('Collaborators')),
p('Bobo Tong, Fred Geisler, Martin Schubert, Frank Röhrich, Marion Saur, Norbert Weidner,
Ruediger Rupp, Yorck-Bernhard B. Kalke, Rainer Abel, Doris Maier, Lukas Grassner,
Harvinder S. Chhabra, Thomas Liebscher, Jacquelyn J. Cragg, ',
a('EMSCI study group', href = 'https://www.emsci.org/index.php/members', target="_blank"), align = "justify"),
br(),
p(h3('Ethics statement')),
p('All patients gave their written informed consent before being included in the EMSCI database.
The study was performed in accordance with the Declaration of Helsinki and was approved
by all responsible institutional review boards. If you have any questions or concerns regarding
the study, please do not hesitate to contact the Principal Investigator, ',
a('Dr. Catherine Jutzeler', href = 'mailto:catherine.jutzeler@bsse.ethz.ch'), '.', align = "justify"),
br(),
p(h3('Funding')),
p('This project is supported by the ',
a('Swiss National Science Foundation', href = 'http://p3.snf.ch/project-186101', target="_blank"),
' (Ambizione Grant, #PZ00P3_186101), ',
a('Wings for Life Research Foundation', href = 'https://www.wingsforlife.com/en/', target="_blank"),
' (#2017_044), the ',
a('International Foundation for Research in Paraplegia', href = 'https://www.irp.ch/en/foundation/', target="_blank"),
' (IRP).', align = "justify"),
), # end box
tags$style(".small-box{border-radius: 15px}"),
valueBox("5000+", "Patients", icon = icon("hospital-user"), width = 3, color = "blue"),
valueBox("15", "Countries", icon = icon("globe-europe"), width = 3, color = "blue"),
valueBox("20", "Years", icon = icon("calendar-alt"), width = 3, color = "blue"),
valueBox("50+", "Researchers", icon = icon("user-cog"), width = 3, color = "blue")#,
), # end column
column(width = 4,
box(width = NULL, status = "primary",
tags$div(
HTML('<a href="https://twitter.com/Neurosurv_Sci?ref_src=twsrc%5Etfw" class="twitter-follow-button" data-show-count="false">Follow @Neurosurv_Sci</a><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>')
),
tags$div(
HTML('<a class="twitter-timeline" data-height="1800" data-theme="light" href="https://twitter.com/Neurosurv_Sci?ref_src=twsrc%5Etfw">Tweets by Neurosurv_Sci</a> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>')
)) # end box
) # end column
), # end fluidRow
shinyjs::useShinyjs(),
tags$footer(HTML("<strong>Copyright © 2020 <a href=\"https://github.com/jutzca/Neurosurveillance\" target=\"_blank\">Neurosurveillance</a>.</strong>
<br>This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-nd/4.0/\" target=\"_blank\">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
<br><a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-nd/4.0/\" target=\"_blank\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png\" /></a>
<br>Last updated:<br>"),
latest.DateTime,
id = "sideFooter",
align = "left",
style = "
position:absolute;
bottom:0;
width:100%;
padding: 10px;
"
)
), # end tabItem
tabItem(tabName = "user_guide",
titlePanel(title = div(strong("User Guide"))),
), #end tabItem
tabItem(tabName = "about_emsci",
h3("European Multicenter Study about Spinal Cord Injury"),
box(#title = "Explore The Data",
width = 12,
heigth = "500px",
solidHeader = TRUE,
strong("Study design"),
"The EMSCI is an ongoing longitudinal, observational study that prospectively collects clinical,
functional, and neurophysiological data at fixed time points over the first year of injury: very acute
(within 2 weeks), acute I (2-4 weeks), acute II (3 months), acute III (6 months), and chronic
(12 months).",
br(),
br(),
strong("Governance structure"),
"Founded in 2001 and led by the University Clinic Balgrist (Zurich, Switzerland), EMSCI
consists of 23 active members from 9 countries and 6 passive members from four countries.
Moreover, it has three associated members from Canada (2) and Germany (1).",
br(),
br(),
strong("Ethical approval and registration"),
"The study was performed in accordance with the Declaration of Helsinki and
approved by the local ethics committee of each participating center. All patients
gave their written informed consent before being included in the EMSCI database.
European Multicenter Study about Spinal Cord Injury is registered with ClinicalTrials.gov
Identifier (NCT01571531).",
br(),
br(),
strong("Inclusion/exclusion criteria."),
"Three inclusion criteria have to be met before a patient can be enrolled in the EMSCI:
(1) the patient has to be capable and willing to give written informed consent;
(2) the injury was caused by a single event; and
(3) the first EMSCI assessment has to be possible within the first 6 weeks following injury.
Patients are excluded from EMSCI for the following reasons:
(1) nontraumatic para- or tetraplegia (e.g. disc prolapse, tumor, AV malformation, myelitis), excluding single-event ischemic incidences;
(2) previously known dementia or severe reduction of intelligence, leading to reduced capabilities of cooperation or giving consent;
(3) peripheral nerve lesions above the level of lesion (i.e. plexus brachialis impairment);
(4) pre-existing polyneuropathy; and
(5) severe craniocerebral injury. All individuals in the EMSCI receive standards of rehabilitation care.",
br(),
br(),
strong("Neurological Scoring"),
"The EMSCI centers collect the following neurological scores: total motor score,
lower extremity motor score, upper extremity motor score, total pinprick score,
and total light touch score. For motor scores, key muscles in the upper and lower
extremities were examined according to the International Standards for the Neurological
Classification of SCI (ISNCSCI), with a maximum score of 50 points for each of the upper
and lower extremities (for a maximum total score of 100). Light touch and pin prick
(sharp-dull discrimination) scores were also assessed according to ISNCSCI, with a maximum
score of 112 each.",
br(),
br(),
strong("Functional Assessments"),
"Functional outcomes comprise Spinal Cord Independence Measure (SCIM),
Walking Index for Spinal Cord Injury, 10-meter walking test (10MWT),
6-minute walk test (6MWT), and Timed Up & Go (TUG) test. The SCIM is a
scale for the assessment of achievements of daily function of patients with spinal cord lesions.
Throughout the duration of this study (2001-2019), two different versions of the SCIM were used:
SCIM II between 2001 and 2007, and SCIM III since 2008. The major difference between the versions
is that SCIM II does not account for intercultural differences. Both versions contain 19 tasks, all
activities of daily living, organized in four areas of function (subscales): self-care (scored 0–20);
respiration and sphincter management (0–40); mobility in room and toilet (0–10); and mobility
indoors and outdoors (0–30). The WISCI II is an ordinal scale that quantifies a patient's walking
ability; a score of 0 indicates that a patient cannot stand and walk, and the highest score of 20
is assigned when a patient can walk more than 10m without walking aids or assistance. Lastly, the
10MWT measures the time (in seconds) it takes a patient to walk 10m, the 6MWT quantifies the
distance (in meters) covered by the patient within 6 minutes, and TUG measures the time (in seconds)
it takes a patient to stand up from an armchair, walk 3m, return to the chair, and sit down."
)
), #end tabItem
tabItem(tabName = "epi_emsci",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_epi_emsci",
label = "Epidemiological features:",
selected = "sex_emsci",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("Sex", "Age", "AIS grade", "Level of injury"),
choiceValues = c("sex_emsci", "age_emsci", "ais_emsci", "nli_emsci")
) # Close radioGroupButtons bracket
), # Close div bracket
),
box(status = "primary", width = NULL,
div(plotOutput("plot.epi.emsci", width = "90%",
height = "660px",
inline = FALSE),
align='center')
), # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create a new box
sliderTextInput(inputId = "year_epi_emsci", # create new slider text
label = "Years of injury to display:", # label of the box
choices = list("2000" = 2000,"2001" = 2001,"2002" = 2002,"2003" = 2003,"2004" = 2004,
"2005" = 2005,"2006" = 2006,"2007" = 2007,"2008" = 2008,"2009" = 2009,
"2010" = 2010,"2011" = 2011,"2012" = 2012,"2013" = 2013,"2014" = 2014,
                                                                      "2015" = 2015,"2016" = 2016,"2017" = 2017,"2018" = 2018,
"2019" = 2019),
selected = c(2000, 2019),
animate = T, grid = T, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_epi_emsci", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 5,
value = 1)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_epi_emsci",
                                          label = "Do you want to inspect subpopulations?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1) # checkbox to go to basic plot: no filters, EMSCI patients, x-axis=stages, y-axis=value of score
), # end box
conditionalPanel(condition = "input.checkbox_epi_emsci == 0", # if user decides to not display EMSCI data and apply filters, make a new panel appear
box(status = "primary", width = NULL, # create new box
radioButtons(inputId = "checkGroup_epi_emsci", # create new check box group
label = "Criteria:", # label of the box
choices = list("AIS grade" = '1', "Paralysis type" = '2'), # choices are the different filters that can be applied
selected = c('1')) # by default, sex and AIS grades are selected
) # end box
), # end conditionalPanel
conditionalPanel(condition = "input.checkbox_epi_emsci == 0 && input.checkGroup_epi_emsci.includes('1')", # if user chooses to filter based on AIS grade
box(status = "primary", width = NULL, # create a new box
checkboxGroupInput(inputId = "grade_epi_emsci", # create new check box group
label = "AIS grade:", # label of the box
                                                               choices = list("AIS A" = "A", "AIS B" = "B", "AIS C" = "C", "AIS D" = "D"), # choices (match the levels in EMSCI dataset), missing AIS grades will automatically be removed
                                                               selected = c("A", "B", "C", "D")) # by default, all grades are selected
) # end box
), # end conditionalPanel
                                 conditionalPanel(condition = "input.checkbox_epi_emsci == 0 && input.checkGroup_epi_emsci.includes('2')", # if user chooses to filter based on paralysis type
box(status = "primary", width = NULL, # create a new box
checkboxGroupInput(inputId = "paralysis_epi_emsci", # create new check box group
label = "Type of paralysis:", # label of the box
choices = list("paraplegia", 'tetraplegia'),
selected = c("paraplegia", 'tetraplegia'))
) # end box
) # end conditionalPanel
), #end column
) #close fluid row
), #end tabItem
tabItem(tabName = "neuro_emsci",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_neuro_emsci",
label = "Neurological features:",
selected = "UEMS",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("UEMS","RUEMS","LUEMS",
"LEMS","RLEMS","LLEMS",
"RMS","LMS","TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT"),
choiceValues = c("UEMS","RUEMS","LUEMS",
"LEMS","RLEMS","LLEMS",
"RMS","LMS","TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT")
) # Close radioGroupButtons bracket
), # Close div bracket
), # close box
box(status = "primary", width = NULL,
div(plotlyOutput("plot.neuro.emsci", width = "90%",
height = "660px",
inline = FALSE),
align='center'),
), # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create box
radioButtons("cont_neuro_emsci",
                                          label = "Type of plot?",
choices = c("Trend plot" = 0, 'Boxplot' = 1),
selected = 0)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("subset_neuro_EMSCI",
                                          label = "Select a subset of the cohort?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checksub_neuro_EMSCI", # create new check box group
label = "Subsetting criteria:", # label of the box
choices = list("Sex" = '1', "Age at injury" = '2', "Cause of SCI" = '3', "AIS grade" = '4', "Level of injury" = '5', "Country" = '6'),# "Year of injury" = '7'), # choices are the different filters that can be applied
selected = c('1', '4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('1')", # if user chooses to filter based on sex
checkboxGroupInput(inputId = "sex_neuro_EMSCI", # create new check box group
label = "Sex:", # label of the box
choices = list("Male", "Female"), # choices are male and female (match the levels in EMSCI dataset)
selected = c("Male", "Female")) # by default, both male and female are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('2')", # if user chooses to filter based on age
checkboxGroupInput(inputId = "age_neuro_EMSCI", # create new check box group
label = "Age at injury:", # label of the box
choices = list("0-19 (0)" = 0, "20-39 (1)" = 1, "40-59 (2)" = 2, "60-79 (3)" = 3, "80-99 (4)" = 4), # choices are age group divided in 20 years (match the levels in EMSCI dataset from categorised column)
selected = c(0,1,2,3,4)) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('3')", # if user chooses to filter based on cause of injury
checkboxGroupInput(inputId = "cause_neuro_EMSCI", # create new check box group
label = "Cause of injury:", # label of the box
choices = list("disc herniation", "haemorragic", "ischemic", "traumatic", "other"), # choices (match the levels in EMSCI dataset)
selected = c("disc herniation", "haemorragic", "ischemic", "traumatic", "other")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('4')", # if user chooses to filter based on AIS grade
checkboxGroupInput(inputId = "grade_neuro_EMSCI", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D", "AIS E"), # choices (match the levels in EMSCI dataset), missing AIS grades will automatically be removed
                                                               selected = c("AIS A", "AIS B", "AIS C", "AIS D", "AIS E")) # by default, all grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('5')", # if user chooses to filter based on neurological level of injury
checkboxGroupInput(inputId = "level_neuro_EMSCI", # create new check box group
label = "Neurological level of injury:", # label of the box
choices = list("cervical", "thoracic", "lumbar", "sacral"), # choices (match the levels in EMSCI dataset)
selected = c("cervical", "thoracic", "lumbar", "sacral")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('6')", # if user chooses to filter based on country
checkboxGroupInput(inputId = "country_neuro_EMSCI", # create new check box group
label = "Country:", # label of the box
choices = list("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland"), # choices (match the levels in EMSCI dataset)
selected = c("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland")) # by default, all countries are selected
), # end conditionalPanel
# conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('7')", # if user chooses to filter based on year of injury (categorised)
# sliderTextInput(inputId = "filteryear_neuro_EMSCI", # create new slider text
# label = "Years of injury to display:", # label of the box
# choices = list("2000" = 2000, "2005" = 2005, "2010" = 2010, "2015" = 2015, "2019" = 2019), # choices (match the levels in EMSCI dataset in categorised column)
# selected = c(2000, 2019), # by default, all groups are selected
# animate = T, grid = T, hide_min_max = FALSE, from_fixed = FALSE,
# to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
# to_max = NULL, force_edges = T, width = NULL, pre = NULL,
# post = NULL, dragRange = TRUE)
# ) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_neuro_EMSCI",
label = "How do you want to visualise the data?",
choices = c("Default" = 1, 'Customize (display by subgroups)' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.checkbox_neuro_EMSCI == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checkGroup_neuro_EMSCI", # create new check box group
label = "Visualisation criteria:", # label of the box
choices = list("Sex" = '1',
"Age at injury" = '2',
"Cause of SCI" = '3',
"AIS grade" = '4',
"Level of injury" = '5',
"Country" = '6'),
#"Year of injury" = '7'), # choices are the different filters that can be applied
selected = c('1','4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
), # end box
box(status = "primary", width = NULL,
radioButtons(inputId = "neuro_choose_time_EMSCI",
label = "Choose time displayed:",
choices = c("Single time point" = "single",
"Time range" = "multiple"),
selected = "multiple"),
conditionalPanel(condition = "input.neuro_choose_time_EMSCI == 'multiple'",
sliderTextInput(inputId = "neuro_time_multiple_EMSCI",
label = "Time range:",
choices = list("very acute", "acute I", "acute II", 'acute III', 'chronic'),
selected = c("very acute", 'chronic'),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end conditionalPanel
conditionalPanel(condition = "input.neuro_choose_time_EMSCI == 'single'",
sliderTextInput(inputId = "neuro_time_single_EMSCI",
label = "Time point:",
choices = list("very acute", "acute I", "acute II", 'acute III', 'chronic'),
selected = c("very acute"),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create a new box
sliderTextInput(inputId = "year_neuro_emsci", # create new slider text
label = "Years of injury to display:", # label of the box
choices = list("2000" = 2000,"2001" = 2001,"2002" = 2002,"2003" = 2003,"2004" = 2004,
"2005" = 2005,"2006" = 2006,"2007" = 2007,"2008" = 2008,"2009" = 2009,
"2010" = 2010,"2011" = 2011,"2012" = 2012,"2013" = 2013,"2014" = 2014,
                                                                      "2015" = 2015,"2016" = 2016,"2017" = 2017,"2018" = 2018,
"2019" = 2019),
selected = c(2000, 2019),
animate = T, grid = T, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_neuro_emsci", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 5,
value = 5)
), # end box
) #end column
) #end FluidRow
), #end tabItem
tabItem(tabName = "funct_emsci",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_funct_emsci",
label = "Functional features:",
selected = "WISCI",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("WISCI","test_6min","test_10m","TUG","SCIM2","SCIM3"),
choiceValues = c("WISCI","test_6min","test_10m","TUG","SCIM2","SCIM3")
) # Close radioGroupButtons bracket
), # Close div bracket
),
box(status = "primary", width = NULL,
div(plotlyOutput("plot.funct.emsci", width = "90%",
height = "660px",
inline = FALSE),
align='center'),
), # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create box
radioButtons("cont_funct_emsci",
                                          label = "Type of plot?",
choices = c("Trend plot" = 0, 'Boxplot' = 1),
selected = 0)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("subset_funct_EMSCI",
                                          label = "Select a subset of the cohort?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.subset_funct_EMSCI == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checksub_funct_EMSCI", # create new check box group
label = "Subsetting criteria:", # label of the box
choices = list("Sex" = '1', "Age at injury" = '2', "Cause of SCI" = '3', "AIS grade" = '4', "Level of injury" = '5', "Country" = '6'),# "Year of injury" = '7'), # choices are the different filters that can be applied
selected = c('1', '4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('1')", # if user chooses to filter based on sex
checkboxGroupInput(inputId = "sex_funct_EMSCI", # create new check box group
label = "Sex:", # label of the box
choices = list("Male", "Female"), # choices are male and female (match the levels in EMSCI dataset)
selected = c("Male", "Female")) # by default, both male and female are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('2')", # if user chooses to filter based on age
checkboxGroupInput(inputId = "age_funct_EMSCI", # create new check box group
label = "Age at injury:", # label of the box
choices = list("0-19 (0)" = 0, "20-39 (1)" = 1, "40-59 (2)" = 2, "60-79 (3)" = 3, "80-99 (4)" = 4), # choices are age group divided in 20 years (match the levels in EMSCI dataset from categorised column)
selected = c(0,1,2,3,4)) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('3')", # if user chooses to filter based on cause of injury
checkboxGroupInput(inputId = "cause_funct_EMSCI", # create new check box group
label = "Cause of injury:", # label of the box
                                                               choices = list("disc herniation", "haemorragic", "ischemic", "traumatic", "other"), # choices (match the levels in EMSCI dataset)
                                                               selected = c("disc herniation", "haemorragic", "ischemic", "traumatic", "other")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('4')", # if user chooses to filter based on AIS grade
checkboxGroupInput(inputId = "grade_funct_EMSCI", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D", "AIS E"), # choices (match the levels in EMSCI dataset), missing AIS grades will automatically be removed
                                                               selected = c("AIS A", "AIS B", "AIS C", "AIS D", "AIS E")) # by default, all grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('5')", # if user chooses to filter based on neurological level of injury
checkboxGroupInput(inputId = "level_funct_EMSCI", # create new check box group
label = "Neurological level of injury:", # label of the box
choices = list("cervical", "thoracic", "lumbar", "sacral"), # choices (match the levels in EMSCI dataset)
selected = c("cervical", "thoracic", "lumbar", "sacral")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('6')", # if user chooses to filter based on country
checkboxGroupInput(inputId = "country_funct_EMSCI", # create new check box group
label = "Country:", # label of the box
choices = list("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland"), # choices (match the levels in EMSCI dataset)
selected = c("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland")) # by default, all countries are selected
), # end conditionalPanel
# conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('7')", # if user chooses to filter based on year of injury (categorised)
# sliderTextInput(inputId = "filteryear_funct_EMSCI", # create new slider text
# label = "Years of injury to display:", # label of the box
# choices = list("2000" = 2000, "2005" = 2005, "2010" = 2010, "2015" = 2015, "2019" = 2019), # choices (match the levels in EMSCI dataset in categorised column)
# selected = c(2000, 2019), # by default, all groups are selected
# animate = T, grid = T, hide_min_max = FALSE, from_fixed = FALSE,
# to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
# to_max = NULL, force_edges = T, width = NULL, pre = NULL,
# post = NULL, dragRange = TRUE)
# ) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_funct_EMSCI",
label = "How do you want to visualise the data?",
choices = c("Default" = 1, 'Customize (display by subgroups)' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.checkbox_funct_EMSCI == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checkGroup_funct_EMSCI", # create new check box group
label = "Visualisation criteria:", # label of the box
choices = list("Sex" = '1',
"Age at injury" = '2',
"Cause of SCI" = '3',
"AIS grade" = '4',
"Level of injury" = '5',
"Country" = '6'),
#"Year of injury" = '7'), # choices are the different filters that can be applied
selected = c('1','4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
), # end box
box(status = "primary", width = NULL,
radioButtons(inputId = "funct_choose_time_EMSCI",
label = "Choose time displayed:",
choices = c("Single time point" = "single",
"Time range" = "multiple"),
selected = "multiple"),
conditionalPanel(condition = "input.funct_choose_time_EMSCI == 'multiple'",
sliderTextInput(inputId = "funct_time_multiple_EMSCI",
label = "Time range:",
choices = list("very acute", "acute I", "acute II", 'acute III', 'chronic'),
selected = c("very acute", 'chronic'),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end conditionalPanel
conditionalPanel(condition = "input.funct_choose_time_EMSCI == 'single'",
sliderTextInput(inputId = "funct_time_single_EMSCI",
label = "Time point:",
choices = list("very acute", "acute I", "acute II", 'acute III', 'chronic'),
selected = c("very acute"),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create a new box
sliderTextInput(inputId = "year_funct_emsci", # create new slider text
label = "Years of injury to display:", # label of the box
choices = list("2000" = 2000,"2001" = 2001,"2002" = 2002,"2003" = 2003,"2004" = 2004,
"2005" = 2005,"2006" = 2006,"2007" = 2007,"2008" = 2008,"2009" = 2009,
"2010" = 2010,"2011" = 2011,"2012" = 2012,"2013" = 2013,"2014" = 2014,
                                                                      "2015" = 2015,"2016" = 2016,"2017" = 2017,"2018" = 2018,
"2019" = 2019),
selected = c(2000, 2019),
animate = T, grid = T, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_funct_emsci", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 5,
value = 5)
), # end box
) #end column
) # end FluidRow
), #end tabItem
tabItem(tabName = "monitore_emsci",
useShinyalert(),
actionButton("preview_predict", "Disclaimer"),
htmlOutput("title_predict"),
fluidRow( # create a separation in the panel
column(width = 8, # create first column for boxplot
box(width = NULL, status = "primary",
                                p("Using this monitoring tool you can visualise, in blue, patients who had a
                                  similar score at the very acute stage for your outcome of interest (± 5 points).
                                  These patients share the characteristics you specified in terms of sex,
                                  age, injury severity (AIS grade), and neurological level of injury.
                                  Black points represent patients with similar characteristics,
                                  irrespective of their score at the very acute stage.")),
box(width = NULL, status = "primary", # create box to display plot
align="center", # center the plot
plotOutput('plot_predict', height = 660)) # call server plot function for the score and dataset chosen by the user #end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create a new box
selectInput("select_score",
label = "Select outcome of interest",
choices = list("UEMS", 'LEMS', 'TMS'),
selected = c("UEMS"))
), # end box
box(status = "primary", width = NULL, # create a new box
textInput("input_compscore",
label = "Patient score at very acute stage",
value = "Enter value...")
), # end box
box(status = "primary", width = NULL, # create a new box
selectInput("select_sex",
label = "Select sex",
choices = list("Male" = "m", "Female" = "f", "Unknown" = "Unknown"),
selected = c("Unknown"))
), # end box
box(status = "primary", width = NULL, # create box
selectInput("select_age",
label = "Select age at injury",
choices = list("0-19", "20-39", "40-59", "60-79", "80+", 'Unknown'),
selected = c("Unknown"))
), # end box
box(status = "primary", width = NULL, # create a new box
selectInput("select_ais",
label = "Select baseline AIS grade",
choices = list("AIS A" = "A", "AIS B" = "B", "AIS C" = "C", "AIS D" = "D", "AIS E" = "E", "Unknown" = "Unknown"),
selected = c("Unknown"))
), # end box
box(status = "primary", width = NULL, # create a new box
selectInput("select_nli",
label = "Select injury level",
choices = list("Unknown", "Cervical", "Thoracic", "Lumbar", "Sacral",
"C1","C2","C3","C4","C5","C6","C7","C8",
                                                          "T1","T2","T3","T4","T5","T6","T7","T8","T9","T10","T11","T12",
"L1", "L2", "L3", "L4", "L5",
"S1", "S2", "S3", "S4", "S5"),
selected = c("Unknown"))
)#, # end box
) #end column
) #end fluidRow
), #end tabItem
tabItem(tabName = "about_sygen",
h3(strong("The Sygen Clinical Trial")),
br(),
fluidRow(
box(#title = "Explore The Data",
width = 8,
                height = "500px",
solidHeader = TRUE,
h4("Objectives of original study"),
"To determine efficacy and safety of monosialotetrahexosylganglioside GM1 (i.e., Sygen) in acute spinal cord injury.",
br(),
h4("Methods"),
strong("Monosialotetrahexosylganglioside GM1"),
"Sygen (monosialotetrahexosylganglioside GM1 sodium salt) is a naturally occurring compound in cell membranes of mammals and is especially abundant in the membranes of the central nervous system.
Acute neuroprotective and longer-term regenerative effects in multiple experimental models of ischemia and injury have been reported. The proposed mechanisms of action of GM1 include
anti-excitotoxic activity, apoptosis prevention, and potentiation of neuritic sprouting and the effects of nerve growth factors.",
br(),
br(),
strong("Study design."), "Randomized, double-blind, sequential,
            multicenter clinical trial of two doses of Sygen (i.e., low-dose GM-1: 300 mg intravenous loading dose followed by 100 mg/d x 56 days, or high-dose GM-1: 600 mg intravenous loading dose followed by 200 mg/d x 56 days) versus
placebo. All patients received the National Acute Spinal Cord Injury Study (NASCIS-2) protocol dosage of methylprednisolone. Based on a potential adverse interaction between concomitant MPSS and GM-1 administration,
the initial dose of GM-1 was delayed until after the steroids were given (mean onset of study medication, 54.9 hours).",
br(),
br(),
strong("Inclusion/exclusion criteria."), "For inclusion in Sygen, patients were required to have at least one lower extremity with a substantial motor deficit. Patients with spinal cord transection
or penetration were excluded, as were patients with a cauda equina, brachial or lumbosacral plexus, or peripheral nerve injury. Multiple trauma cases were included as long as they were not so severe
            as to preclude neurologic evaluation. Notably, the requirement of participation in a detailed neurologic exam excluded major head trauma cases as well as intubated
chest trauma cases.",
br(),
br(),
strong("Assessments."), "Baseline neurologic assessment included both the AIS and detailed American Spinal Injury Association (ASIA) motor and
sensory examinations. Additionally, the Modified Benzel Classification and the ASIA motor and
sensory examinations were performed at 4, 8, 16, 26, and 52 weeks after injury. The Modified Benzel Classification was used for post-baseline measurement because it rates walking
ability and, in effect, subdivides the broad D category of the AIS. Because most patients have an unstable spinal fracture at
baseline, it is not possible to assess walking ability at that time; hence the use of different baseline and follow-up scales.
Marked recovery was defined as at least a two-grade equivalent improvement in the Modified Benzel Classification from the
baseline AIS. The primary efficacy assessment was the proportion of patients with marked recovery at week 26. The secondary efficacy assessments included the time course of marked recovery and
other established measures of spinal cord function (the ASIA motor and sensory scores, relative and absolute sensory levels of impairment, and assessments of bladder and bowel
function).",
br(),
br(),
strong("Concomitant medications."), "The use of medications delivered alongside the study medication (i.e., GM-1) was rigorously tracked.
For each concomitant medication administered during the trial, the dosage, reason for administration, and the timing of administration were recorded.",
br(),
br(),
strong("Results."), "Of 797 patients recruited, 760 were included in the analysis. The prospectively planned analysis at the prespecified endpoint time for all patients was negative.
            The negative finding of the Sygen study is considered Class I Medical Evidence by the Spinal Cord Injury Committee of the
American Association of Neurological Surgeons (AANS) and the Congress of Neurological Surgeons (CNS). Subsequent analyses of the Sygen
data have been performed to characterize the trajectory and extent of spontaneous recovery from acute spinal cord injury.",
br()
), # close box
fluidRow(
valueBox(prettyNum(797, big.mark=" ", scientific=FALSE), "Patients", icon = icon("user-edit"), width = 3, color = "blue"),
#valueBox(prettyNum(489, big.mark=" ", scientific=FALSE), "Unique concomittant medications to treat secondary complications", icon = icon("pills"), width = 3, color = "blue"),
#valueBox(tagList("10", tags$sup(style="font-size: 20px", "%")),
# "Prophylactic medication use", icon = icon("prescription"), width = 3, color = "blue"
#),
valueBox("28", "North American clinical sites", icon = icon("clinic-medical"), width = 3, color = "blue"),
valueBox("1991-1997", "Running time", icon = icon("calendar-alt"), width = 3, color = "blue")#,
)
), # close fluid row
fluidRow(
box(#title = "Explore The Data",
width = 8,
                height = "500px",
solidHeader = TRUE,
h4("References"),
tags$ul(
tags$li(a('Geisler et al, 2001', href = 'https://europepmc.org/article/med/11805612', target="_blank"), "Recruitment and early treatment in a multicenter study of acute spinal cord injury. Spine (Phila Pa 1976)."),
tags$li(a('Geisler et al, 2001', href = 'https://journals.lww.com/spinejournal/Fulltext/2001/12151/The_Sygen_R__Multicenter_Acute_Spinal_Cord_Injury.15.aspx', target="_blank"), "The Sygen multicenter acute spinal cord injury study. Spine (Phila Pa 1976)")
) # close tags
) # close box
) # close fluid row
), # close tab item
tabItem(tabName = "epi_sygen",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_epi_sygen",
label = "Epidemiological features:",
selected = "sex_sygen",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("Sex", "Age", "AIS grade", "Level of injury"),
choiceValues = c("sex_sygen", "age_sygen", "ais_sygen", "nli_sygen")
) # Close radioGroupButtons bracket
) # Close div bracket
),
box(status = "primary", width = NULL,
div(plotOutput("plot.epi.sygen", width = "90%",
height = "660px",
inline = FALSE),
align='center')
) # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "year_epi_sygen", # create new slider text
label = "Years of injury to display:", # label of the box
min = 1991, max = 1997,
value = c(1991,1997),
sep = "")
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_epi_sygen", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 3,
value = 1)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_epi_sygen",
label = "Do you want to inspect subpopulations ?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1) # by default, show the full cohort without subpopulation filters
), # end box
conditionalPanel(condition = "input.checkbox_epi_sygen == 0", # if user decides to not display Sygen data and apply filters, make a new panel appear
box(status = "primary", width = NULL, # create new box
radioButtons(inputId = "checkGroup_epi_sygen", # create new check box group
label = "Criteria:", # label of the box
choices = list("AIS grade" = '1', "Paralysis type" = '2'), # choices are the different filters that can be applied
selected = c('1')) # by default, the AIS grade criterion is selected
) # end box
), # end conditionalPanel
conditionalPanel(condition = "input.checkbox_epi_sygen == 0 && input.checkGroup_epi_sygen.includes('1')", # if user chooses to filter based on AIS grade
box(status = "primary", width = NULL, # create a new box
checkboxGroupInput(inputId = "grade_epi_sygen", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D"), # choices (match the levels in Sygen dataset), missing AIS grades will automaticSygeny be removed
selected = c("AIS A", "AIS B", "AIS C", "AIS D")) # by default, Sygen grades are selected but missing grades
) # end box
), # end conditionalPanel
conditionalPanel(condition = "input.checkbox_epi_sygen == 0 && input.checkGroup_epi_sygen.includes('2')", # if user chooses to filter based on functlogical level of injury
box(status = "primary", width = NULL, # create a new box
checkboxGroupInput(inputId = "paralysis_epi_sygen", # create new check box group
label = "Type of paralysis:", # label of the box
choices = list("paraplegia", 'tetraplegia'),
selected = c("paraplegia", 'tetraplegia'))
) # end box
) # end conditionalPanel
) #end column
) #close fluid row
), #close tabitem epi_sygen
tabItem(tabName = "neuro_sygen",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_neuro_sygen",
label = "Neurological features:",
selected = "UEMS",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("UEMS", "LEMS", "TEMS",
"RMS", "LMS", "TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT"),
choiceValues = c("UEMS", "LEMS", "TEMS",
"RMS", "LMS", "TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT")
) # Close radioGroupButtons bracket
) # Close div bracket
),
box(status = "primary", width = NULL,
div(plotlyOutput("plot.neuro.sygen", width = "90%",
height = "660px",
inline = FALSE),
align='center')
) # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create box
radioButtons("cont_neuro_Sygen",
label = "Type of plot ?",
choices = c("Trend plot" = 0, 'Boxplot' = 1),
selected = 0)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("subset_neuro_Sygen",
label = "Select subset of the cohort ?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.subset_neuro_Sygen == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checksub_neuro_Sygen", # create new check box group
label = "Subsetting criteria:", # label of the box
choices = list("Sex" = '1', "Age at injury" = '2', "Cause of SCI" = '3', "AIS grade" = '4', "Level of injury" = '5'),
selected = c('1', '4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_Sygen == 0 && input.checksub_neuro_Sygen.includes('1')", # if user chooses to filter based on sex
checkboxGroupInput(inputId = "sex_neuro_Sygen", # create new check box group
label = "Sex:", # label of the box
choices = list("Male", "Female"), # choices are male and female (match the levels in Sygen dataset)
selected = c("Male", "Female")) # by default, both male and female are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_Sygen == 0 && input.checksub_neuro_Sygen.includes('2')", # if user chooses to filter based on age
checkboxGroupInput(inputId = "age_neuro_Sygen", # create new check box group
label = "Age at injury:", # label of the box
choices = list("0-19", "20-39", "40-59", "60-79"),
selected = c("0-19", "20-39", "40-59", "60-79")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_Sygen == 0 && input.checksub_neuro_Sygen.includes('3')", # if user chooses to filter based on cause of injury
checkboxGroupInput(inputId = "cause_neuro_Sygen", # create new check box group
label = "Cause of injury:", # label of the box
choices = list("automobile", "blunt trauma", "fall", "gun shot wound", "motorcycle", "other sports", "others", "pedestrian", "water related"),
selected = c("automobile", "blunt trauma", "fall", "gun shot wound", "motorcycle", "other sports", "others", "pedestrian", "water related"))
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_Sygen == 0 && input.checksub_neuro_Sygen.includes('4')", # if user chooses to filter based on AIS grade
checkboxGroupInput(inputId = "grade_neuro_Sygen", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D"), # choices (match the levels in Sygen dataset), missing AIS grades will automatically be removed
selected = c("AIS A", "AIS B", "AIS C", "AIS D")) # by default, all grades are selected but missing grades
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_Sygen == 0 && input.checksub_neuro_Sygen.includes('5')", # if user chooses to filter based on neurological level of injury
checkboxGroupInput(inputId = "level_neuro_Sygen", # create new check box group
label = "Neurological level of injury:", # label of the box
choices = list("cervical", "thoracic"), # choices (match the levels in Sygen dataset)
selected = c("cervical", "thoracic")) # by default, all categories are selected
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_neuro_Sygen",
label = "How do you want to visualise the data?",
choices = c("Default" = 1, 'Customize (display by subgroups)' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.checkbox_neuro_Sygen == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checkGroup_neuro_Sygen", # create new check box group
label = "Visualisation criteria:", # label of the box
choices = list("Sex" = '1',
"Age at injury" = '2',
"Cause of SCI" = '3',
"AIS grade" = '4',
"Level of injury" = '5'), # choices are the different filters that can be applied
selected = c('1','4')) # by default, sex and AIS grades are selected
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL,
radioButtons(inputId = "neuro_choose_time_Sygen",
label = "Choose time displayed:",
choices = c("Single time point" = "single",
"Time range" = "multiple"),
selected = "multiple"),
conditionalPanel(condition = "input.neuro_choose_time_Sygen == 'multiple'",
sliderTextInput(inputId = "neuro_time_multiple_Sygen",
#label = "Time range:",
label = NULL,
choices = c("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52"),
selected = c("Week00", "Week52"),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end conditionalPanel
conditionalPanel(condition = "input.neuro_choose_time_Sygen == 'single'",
sliderTextInput(inputId = "neuro_time_single_Sygen",
label = "Time point:",
choices = list("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52"),
selected = c("Week00"),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, dragRange = TRUE)
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "year_neuro_sygen", # create new slider text
label = "Years of injury to display:", # label of the box
min = 1991, max = 1997,
value = c(1991,1997),
sep = "")
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_neuro_sygen", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 3,
value = 1)
) # end box
) #end column
) #close fluid row
), #end tabItem
tabItem(tabName = "funct_sygen",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_funct_sygen",
label = "Functional features:",
selected = "Benzel",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c('Modified Benzel score'),
choiceValues = c("Benzel")
) # Close radioGroupButtons bracket
) # Close div bracket
),
box(status = "primary", width = NULL,
div(plotlyOutput("plot.funct.sygen", width = "90%",
height = "660px",
inline = FALSE),
align='center')
) # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create box
radioButtons("cont_funct_Sygen",
label = "Type of plot ?",
choices = c("Trend plot" = 0, 'Boxplot' = 1),
selected = 0)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("subset_funct_Sygen",
label = "Select subset of the cohort ?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.subset_funct_Sygen == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checksub_funct_Sygen", # create new check box group
label = "Subsetting criteria:", # label of the box
choices = list("Sex" = '1', "Age at injury" = '2', "Cause of SCI" = '3', "AIS grade" = '4', "Level of injury" = '5'),
selected = c('1', '4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_Sygen == 0 && input.checksub_funct_Sygen.includes('1')", # if user chooses to filter based on sex
checkboxGroupInput(inputId = "sex_funct_Sygen", # create new check box group
label = "Sex:", # label of the box
choices = list("Male", "Female"), # choices are male and female (match the levels in Sygen dataset)
selected = c("Male", "Female")) # by default, both male and female are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_Sygen == 0 && input.checksub_funct_Sygen.includes('2')", # if user chooses to filter based on age
checkboxGroupInput(inputId = "age_funct_Sygen", # create new check box group
label = "Age at injury:", # label of the box
choices = list("0-19", "20-39", "40-59", "60-79"),
selected = c("0-19", "20-39", "40-59", "60-79")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_Sygen == 0 && input.checksub_funct_Sygen.includes('3')", # if user chooses to filter based on cause of injury
checkboxGroupInput(inputId = "cause_funct_Sygen", # create new check box group
label = "Cause of injury:", # label of the box
choices = list("automobile", "blunt trauma", "fall", "gun shot wound", "motorcycle", "other sports", "others", "pedestrian", "water related"),
selected = c("automobile", "blunt trauma", "fall", "gun shot wound", "motorcycle", "other sports", "others", "pedestrian", "water related"))
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_Sygen == 0 && input.checksub_funct_Sygen.includes('4')", # if user chooses to filter based on AIS grade
checkboxGroupInput(inputId = "grade_funct_Sygen", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D"), # choices (match the levels in Sygen dataset), missing AIS grades will automatically be removed
selected = c("AIS A", "AIS B", "AIS C", "AIS D")) # by default, all grades are selected but missing grades
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_Sygen == 0 && input.checksub_funct_Sygen.includes('5')", # if user chooses to filter based on functlogical level of injury
checkboxGroupInput(inputId = "level_funct_Sygen", # create new check box group
label = "Neurological level of injury:", # label of the box
choices = list("cervical", "thoracic"), # choices (match the levels in Sygen dataset)
selected = c("cervical", "thoracic")) # by default, all categories are selected
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_funct_Sygen",
label = "How do you want to visualise the data?",
choices = c("Default" = 1, 'Customize (display by subgroups)' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.checkbox_funct_Sygen == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checkGroup_funct_Sygen", # create new check box group
label = "Visualisation criteria:", # label of the box
choices = list("Sex" = '1',
"Age at injury" = '2',
"Cause of SCI" = '3',
"AIS grade" = '4',
"Level of injury" = '5'), # choices are the different filters that can be applied
selected = c('1','4')) # by default, sex and AIS grades are selected
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL,
radioButtons(inputId = "funct_choose_time_Sygen",
label = "Choose time displayed:",
choices = c("Single time point" = "single",
"Time range" = "multiple"),
selected = "multiple"),
conditionalPanel(condition = "input.funct_choose_time_Sygen == 'multiple'",
sliderTextInput(inputId = "funct_time_multiple_Sygen",
#label = "Time range:",
label = NULL,
choices = c("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52"),
selected = c("Week00", "Week52"),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end conditionalPanel
conditionalPanel(condition = "input.funct_choose_time_Sygen == 'single'",
sliderTextInput(inputId = "funct_time_single_Sygen",
label = "Time point:",
choices = list("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52"),
selected = c("Week00"),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, dragRange = TRUE)
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "year_funct_sygen", # create new slider text
label = "Years of injury to display:", # label of the box
min = 1991, max = 1997,
value = c(1991,1997),
sep = "")
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_funct_sygen", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 3,
value = 1)
) # end box
) #end column
) #close fluid row
), #end tabItem
tabItem(tabName = "neuro_all",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_neuro_all",
label = "Neurological features:",
selected = "UEMS",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("UEMS", "LEMS",
"RMS", "LMS", "TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT"),
choiceValues = c("UEMS", "LEMS",
"RMS", "LMS", "TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT")
) # Close radioGroupButtons bracket
) # Close div bracket
),
box(status = "primary", width = NULL,
div(plotOutput("plot.neuro.all", width = "90%",
height = "660px",
inline = FALSE),
align='center')
) # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create box
radioButtons("subset_neuro_All",
label = "Select subset of the cohort ?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.subset_neuro_All == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checksub_neuro_All", # create new check box group
label = "Subsetting criteria:", # label of the box
choices = list("Sex" = '1', "Age at injury" = '2', "AIS grade" = '3', "Level of injury" = '4'),
selected = NULL) # by default, no subsetting criteria are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_All == 0 && input.checksub_neuro_All.includes('1')", # if user chooses to filter based on sex
checkboxGroupInput(inputId = "sex_neuro_All", # create new check box group
label = "Sex:", # label of the box
choices = list("Male", "Female"), # choices are male and female (match the levels in All dataset)
selected = c("Male", "Female")) # by default, both male and female are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_All == 0 && input.checksub_neuro_All.includes('2')", # if user chooses to filter based on age
checkboxGroupInput(inputId = "age_neuro_All", # create new check box group
label = "Age at injury:", # label of the box
choices = list("0-19", "20-39", "40-59", "60-79", "80+"),
selected = c("0-19", "20-39", "40-59", "60-79", "80+")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_All == 0 && input.checksub_neuro_All.includes('3')", # if user chooses to filter based on AIS grade
checkboxGroupInput(inputId = "grade_neuro_All", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D"), # choices (match the levels in All dataset), missing AIS grades will automatically be removed
selected = c("AIS A", "AIS B", "AIS C", "AIS D")) # by default, all grades are selected but missing grades
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_All == 0 && input.checksub_neuro_All.includes('4')", # if user chooses to filter based on neurological level of injury
checkboxGroupInput(inputId = "level_neuro_All", # create new check box group
label = "Neurological level of injury:", # label of the box
choices = list("cervical", "lumbar", 'sacral', "thoracic"), # choices (match the levels in All dataset)
selected = c("cervical", "lumbar", 'sacral', "thoracic")) # by default, all categories are selected
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_neuro_All",
label = "How do you want to visualise the data?",
choices = c("Default" = 1, 'Customize (display by subgroups)' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.checkbox_neuro_All == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checkGroup_neuro_All", # create new check box group
label = "Visualisation criteria:", # label of the box
choices = list("Sex" = '1',
"Age at injury" = '2',
"AIS grade" = '3',
"Level of injury" = '4'), # choices are the different filters that can be applied
selected = c('1','4')) # by default, sex and AIS grades are selected
) # end conditionalPanel
) # end box
) # end column
) #close fluid row
), #end tabItem
tabItem(tabName = "abbreviations",
titlePanel(strong("Dictionary of abbreviations")),
fluidRow(
column(width = 6,
box(width = NULL, status = "primary",
h4(strong('General')),
p(strong('SCI'), 'spinal cord injury'),
p(strong(a('ASIA', href ="https://asia-spinalinjury.org/", target="_blank")), 'American Spinal Injury Association'),
p(strong(a('EMSCI', href ="https://www.emsci.org/", target="_blank")), 'European Multicenter Study about Spinal Cord Injury'),
p(strong('PBE'), 'practice-based evidence')
),
box(width = NULL, status = "primary",
h4(strong('Functional outcomes')),
p(strong(a('WISCI', href = "http://www.spinalcordcenter.org/research/wisci_guide.pdf", target="_blank")), 'walking index for spinal cord injury'),
p(strong(a('test_6min', href = "https://www.emsci.org/index.php/project/the-assessments/functional-test", target="_blank")), '6-minute walking test'),
p(strong(a('test_10m', href = "https://www.emsci.org/index.php/project/the-assessments/functional-test", target="_blank")), '10-meter walking test'),
p(strong(a('TUG', href = "https://www.emsci.org/index.php/project/the-assessments/functional-test", target="_blank")), 'timed up and go test'),
p(strong(a('SCIM2', href = "https://www.emsci.org/index.php/project/the-assessments/independence", target="_blank")), 'spinal cord independence measure type 2'),
p(strong(a('SCIM3', href = "https://www.emsci.org/index.php/project/the-assessments/independence", target="_blank")), 'spinal cord independence measure type 3'),
p(strong('benzel'), 'modified Benzel classification')
)
), # end column
column(width = 6,
box(width = NULL, status = "primary",
h4(strong(a('Neurological outcomes', href ="https://asia-spinalinjury.org/wp-content/uploads/2016/02/International_Stds_Diagram_Worksheet.pdf", target="_blank"))),
p(strong(a('AIS', href ='https://www.icf-casestudies.org/introduction/spinal-cord-injury-sci/american-spinal-injury-association-asia-impairment-scale#:~:text=The%20American%20Spinal%20Injury%20Association,both%20sides%20of%20the%20body', target="_blank")), 'ASIA impairment scale'),
p(strong('UEMS'), 'upper extremity motor score'),
p(strong('RUEMS'), 'right upper extremity motor score'),
p(strong('LUEMS'), 'left upper extremity motor score'),
p(strong('LEMS'), 'lower extremity motor score'),
p(strong('RLEMS'), 'right lower extremity motor score'),
p(strong('LLEMS'), 'left lower extremity motor score'),
p(strong('RMS'), 'right motor score'),
p(strong('LMS'), 'left motor score'),
p(strong('TMS'), 'total motor score'),
p(strong('RPP'), 'right pin prick'),
p(strong('LPP'), 'left pin prick'),
p(strong('TPP'), 'total pin prick'),
p(strong('RLT'), 'right light touch'),
p(strong('LLT'), 'left light touch'),
p(strong('TLT'), 'total light touch')
)
) # end column
) # end fluidRow
) # end tabItem
) # end tabitems
) #end dashboardBody
) # end dashboardPage
ui <- secure_app(ui)
# Server logic ----
server <- function(input, output) {
res_auth <- secure_server(
check_credentials = check_credentials(credentials)
)
output$auth_output <- renderPrint({
reactiveValuesToList(res_auth)
})
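# Editorial sketch (not wired in anywhere): the year-binning logic used in the
# render blocks below (e.g. plot.funct.sygen and plot.neuro.sygen) is duplicated
# almost verbatim. A helper like this could factor it out; it assumes integer
# years and a bin width >= 1, mirroring the inline code.
bin_injury_years <- function(years, year_range, bin_width) {
  # candidate bin edges, stepping from the lower bound by the bin width
  edges <- seq(year_range[1], year_range[2] + year_range[2] %% bin_width, bin_width)
  # clamp edges to the upper bound and make sure it appears as the last edge
  edges <- pmin(edges, year_range[2])
  if (!(year_range[2] %in% edges)) edges <- c(edges, year_range[2])
  # label each year with its [lower, upper) bin; the lowest edge is included
  cut(years, unique(edges), include.lowest = TRUE, right = FALSE)
}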
output$plot.funct.sygen <- renderPlotly({
years <- c(unique(input$year_funct_sygen)[1]:unique(input$year_funct_sygen)[2])
nb_bins <- unique(input$binyear_funct_sygen)[1]
data_modified <- data_sygen[data_sygen$Year_of_injury %in% years, ]
##########################################################################################
vector_temp <- seq(input$year_funct_sygen[1],(input$year_funct_sygen[2]+input$year_funct_sygen[2]%%nb_bins),nb_bins)
vector_temp <- ifelse(vector_temp > input$year_funct_sygen[2],input$year_funct_sygen[2],vector_temp)
if (!(input$year_funct_sygen[2] %in% vector_temp)){
vector_temp <- append(vector_temp, input$year_funct_sygen[2])
}
data_modified$Year_of_injury_cat<-cut(data_modified$Year_of_injury,
c(unique(vector_temp)),
include.lowest = T, right = F)
##########################################################################################
type_plot <- input$cont_funct_Sygen[1]
# FILTERING BASED ON STAGES TO DISPLAY #
list_all_stages <- c("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52") # store all potential stages
if (input$funct_choose_time_Sygen == 'single'){
times <- unique(input$funct_time_single_Sygen)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time)]
} else if (input$funct_choose_time_Sygen == 'multiple'){
times <- unique(input$funct_time_multiple_Sygen)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time[1]:indices_time[2])]
}
if (input$subset_funct_Sygen == 0){
input_sex <- unique(input$sex_funct_Sygen)
input_age <- unique(input$age_funct_Sygen)
input_cause <- unique(input$cause_funct_Sygen)
input_ais <- unique(input$grade_funct_Sygen)
input_nli <- unique(input$level_funct_Sygen)
#data_modified <- data_sygen
if (!('Unknown' %in% input_sex)){data_modified <- data_modified[data_modified$Sex %in% input_sex, ]}
if (!('Unknown' %in% input_age)){data_modified <- data_modified[data_modified$Age %in% input_age, ]}
if (!('Unknown' %in% input_ais)){data_modified <- data_modified[data_modified$AIS %in% input_ais, ]}
if (!('Unknown' %in% input_cause)){data_modified <- data_modified[data_modified$Cause %in% input_cause, ]}
if (!('Unknown' %in% input_nli)){data_modified <- data_modified[data_modified$NLI %in% input_nli, ]}
data_sygen_copy <- data_modified
} else {
data_sygen_copy <- data_modified
}
if (input$checkbox_funct_Sygen == 1){ # if user chooses to display all data
plot <- plot_base_Sygen(data_sygen_copy, score = input$var_funct_sygen, list_time, type_plot) # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_funct_Sygen == 0){ # if user chooses filters
if (!(length(input$checkGroup_funct_Sygen) == 2)){ # if user chooses anything other than 2 filters
plot <- plot_error() # give a plot saying to choose 2 filters
}
else if (length(input$checkGroup_funct_Sygen) == 2){ # if user chooses exactly 2 filters
filters <- as.numeric(as.vector(unique(input$checkGroup_funct_Sygen))) # store filters the user has selected
list_all = list(c("Female", "Male"), # store all the different options for the different filters
c("0-19","20-39","40-59","60-79"),
c("automobile","blunt trauma","fall","gun shot wound","motorcycle","other sports","others","pedestrian","water related"),
c("AIS A", "AIS B", "AIS C", "AIS D"),
c("cervical", "thoracic"))
filter1_all <- as.vector(list_all[filters[1]][[1]]) # select all options for the first filter chosen by user
filter2_all <- as.vector(list_all[filters[2]][[1]]) # select all options for the second filter chosen by user
list_names = c("Sex", "Age", "Cause", "AIS", "NLI", "Country") # names of columns corresponding to the available filters
plot <- plot_filters_Sygen(data_sygen_copy, score = input$var_funct_sygen, # call function for emsci plots in helper_functions.R
list_time,
list_names[filters[1]],
list_names[filters[2]],
filter1_all,
filter2_all, type_plot)
}
}
plot})
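# Editorial sketch (not wired in anywhere): the single-time-point vs. time-range
# selection above is repeated for each dataset with only the stage vector
# changing. A small helper along these lines could replace the duplicated
# if/else blocks.
select_time_window <- function(mode, single_choice, multiple_choice, all_stages) {
  # pick the relevant input depending on the chosen mode
  chosen <- if (mode == "single") unique(single_choice) else unique(multiple_choice)
  idx <- which(all_stages %in% unlist(chosen, use.names = FALSE))
  # single mode returns the one stage; range mode returns everything in between
  if (mode == "single") all_stages[idx] else all_stages[idx[1]:idx[2]]
}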
output$plot.funct.emsci <- renderPlotly({
years <- c(unique(input$year_funct_emsci)[1]:unique(input$year_funct_emsci)[2])
nb_bins <- unique(input$binyear_funct_emsci)[1]
data_modified <- data_emsci[data_emsci$YEARDOI %in% years, ]
data_modified$YEARDOI_cat<-cut(data_modified$YEARDOI,
seq(input$year_funct_emsci[1],
input$year_funct_emsci[2]+input$year_funct_emsci[2]%%nb_bins,
nb_bins),
include.lowest = T, right = F)
type_plot <- input$cont_funct_emsci[1]
# FILTERING BASED ON STAGES TO DISPLAY #
list_all_stages <- c("very acute", "acute I", "acute II", "acute III", "chronic") # store all potential stages
if (input$funct_choose_time_EMSCI == 'single'){
times <- unique(input$funct_time_single_EMSCI)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time)]
} else if (input$funct_choose_time_EMSCI == 'multiple'){
times <- unique(input$funct_time_multiple_EMSCI)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time[1]:indices_time[2])]
}
if (input$subset_funct_EMSCI == 0){
input_sex <- unique(input$sex_funct_EMSCI)
input_age <- unique(input$age_funct_EMSCI)
input_cause <- unique(input$cause_funct_EMSCI)
input_ais <- unique(input$grade_funct_EMSCI)
input_nli <- unique(input$level_funct_EMSCI)
input_country <- unique(input$country_funct_EMSCI)
input_year <- unique(input$filteryear_funct_EMSCI) # year filter; input_year is required below to build vector_year
#data_modified <- data_emsci
if (!('Unknown' %in% input_sex)){data_modified <- data_modified[data_modified$Sex %in% input_sex, ]}
if (!('Unknown' %in% input_age)){data_modified <- data_modified[data_modified$age_category %in% input_age, ]}
if (!('Unknown' %in% input_ais)){data_modified <- data_modified[data_modified$AIS %in% input_ais, ]}
if (!('Unknown' %in% input_cause)){data_modified <- data_modified[data_modified$Cause %in% input_cause, ]}
if (!('Unknown' %in% input_nli)){data_modified <- data_modified[data_modified$NLI_level %in% input_nli, ]}
if (!('Unknown' %in% input_country)){data_modified <- data_modified[data_modified$Country %in% input_country, ]}
all_years <- c(2000, 2005, 2010, 2015, 2019)
temp <- which(all_years %in% unlist(input_year, use.names=FALSE))
int_filter = c(all_years[c(temp[1]:temp[2])])
vector_year <- c()
if (2000 %in% int_filter && 2005 %in% int_filter){vector_year <- c(vector_year, '2000-2004')}
if (2005 %in% int_filter && 2010 %in% int_filter){vector_year <- c(vector_year, '2005-2009')}
if (2010 %in% int_filter && 2015 %in% int_filter){vector_year <- c(vector_year, '2010-2014')}
if (2015 %in% int_filter && 2019 %in% int_filter){vector_year <- c(vector_year, '2015-2019')}
data_modified <- data_modified[data_modified$YEARDOI_cat %in% vector_year, ]
data_emsci_copy <- data_modified
} else {
data_emsci_copy <- data_modified
}
if (input$checkbox_funct_EMSCI == 1){ # if user chooses to display all data
plot <- plot_base_emsci(data_emsci_copy, score = input$var_funct_emsci, list_time, type_plot) # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_funct_EMSCI == 0){ # if user chooses filters
if (!(length(input$checkGroup_funct_EMSCI) == 2)){ # if user chooses anything other than 2 filters
plot <- plot_error() # give a plot saying to choose 2 filters
}
else if (length(input$checkGroup_funct_EMSCI) == 2){ # if user chooses exactly 2 filters
filters <- as.numeric(as.vector(unique(input$checkGroup_funct_EMSCI))) # store filters the user has selected
list_all = list(c("Male", "Female"), # store all the different options for the different filters
c(0,1,2,3,4),
c("disc herniation", "haemorragic", "ischemic", "traumatic", "other"),
c("AIS A", "AIS B", "AIS C", "AIS D"),
c("cervical", "thoracic", "lumbar", "sacral"),
c("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland"),
c('2000-2004', '2005-2009', '2010-2014', '2015-2019'))
filter1_all <- as.vector(list_all[filters[1]][[1]]) # select all options for the first filter chosen by user
filter2_all <- as.vector(list_all[filters[2]][[1]]) # select all options for the second filter chosen by user
list_names = c("Sex", "age_category", "Cause", "AIS", "NLI_level", "Country", "YEARDOI_cat") # names of columns corresponding to the available filters
plot <- plot_filters_emsci(data_emsci_copy, score = input$var_funct_emsci, # call function for emsci plots in helper_functions.R
list_time,
list_names[filters[1]],
list_names[filters[2]],
filter1_all,
filter2_all, type_plot)
}
}
plot})
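# The interval-to-category mapping above (all_years / vector_year) recurs whenever a
# selected year range has to be translated into YEARDOI_cat labels. A minimal, hedged
# sketch of a reusable helper (hypothetical name `year_range_to_cats`; not wired in here):
year_range_to_cats <- function(year_min, year_max) {
  breaks <- c(2000, 2005, 2010, 2015, 2019)
  labels <- c('2000-2004', '2005-2009', '2010-2014', '2015-2019')
  # keep a label when both edges of its bin fall inside the selected range
  keep <- breaks[-length(breaks)] >= year_min & breaks[-1] <= year_max
  labels[keep]
}
# e.g. year_range_to_cats(2005, 2015) would select '2005-2009' and '2010-2014'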
output$plot.neuro.sygen <- renderPlotly({
# FILTERING BASED ON STAGES TO DISPLAY #
years <- c(unique(input$year_neuro_sygen)[1]:unique(input$year_neuro_sygen)[2])
nb_bins <- unique(input$binyear_neuro_sygen)[1]
data_modified <- data_sygen[data_sygen$Year_of_injury %in% years, ]
##########################################################################################
vector_temp <- seq(input$year_neuro_sygen[1],(input$year_neuro_sygen[2]+input$year_neuro_sygen[2]%%nb_bins),nb_bins)
vector_temp <- ifelse(vector_temp > input$year_neuro_sygen[2],input$year_neuro_sygen[2],vector_temp)
if (!(input$year_neuro_sygen[2] %in% vector_temp)){
vector_temp <- append(vector_temp, input$year_neuro_sygen[2])
}
data_modified$Year_of_injury_cat<-cut(data_modified$Year_of_injury,
c(unique(vector_temp)),
include.lowest = T, right = F)
##########################################################################################
type_plot <- input$cont_neuro_Sygen[1]
list_all_stages <- c("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52") # store all potential stages
if (input$neuro_choose_time_Sygen == 'single'){
times <- unique(input$neuro_time_single_Sygen)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time)]
} else if (input$neuro_choose_time_Sygen == 'multiple'){
times <- unique(input$neuro_time_multiple_Sygen)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time[1]:indices_time[2])]
}
if (input$subset_neuro_Sygen == 0){
input_sex <- unique(input$sex_neuro_Sygen)
input_age <- unique(input$age_neuro_Sygen)
input_cause <- unique(input$cause_neuro_Sygen)
input_ais <- unique(input$grade_neuro_Sygen)
input_nli <- unique(input$level_neuro_Sygen)
if (!('Unknown' %in% input_sex)){data_modified <- data_modified[data_modified$Sex %in% input_sex, ]}
if (!('Unknown' %in% input_age)){data_modified <- data_modified[data_modified$Age %in% input_age, ]}
if (!('Unknown' %in% input_ais)){data_modified <- data_modified[data_modified$AIS %in% input_ais, ]}
if (!('Unknown' %in% input_cause)){data_modified <- data_modified[data_modified$Cause %in% input_cause, ]}
if (!('Unknown' %in% input_nli)){data_modified <- data_modified[data_modified$NLI %in% input_nli, ]}
data_sygen_copy <- data_modified
} else {
data_sygen_copy <- data_modified
}
if (input$checkbox_neuro_Sygen == 1){ # if user chooses to display all data
plot <- plot_base_Sygen(data_sygen_copy, score = input$var_neuro_sygen, list_time, type_plot) # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_neuro_Sygen == 0){ # if user chooses filters
if (!(length(input$checkGroup_neuro_Sygen) == 2)){ # if user chooses something other than two filters
plot <- plot_error() # give a plot saying to choose 2 filters
}
else if (length(input$checkGroup_neuro_Sygen) == 2){ # if user chooses exactly 2 filters
filters <- as.numeric(as.vector(unique(input$checkGroup_neuro_Sygen))) # store filters the user has selected
list_all = list(c("Female", "Male"), # store all the different options for the different filters
c("0-19","20-39","40-59","60-79"),
c("automobile","blunt trauma","fall","gun shot wound","motorcycle","other sports","others","pedestrian","water related"),
c("AIS A", "AIS B", "AIS C", "AIS D"),
c("cervical", "thoracic"))
filter1_all <- as.vector(list_all[filters[1]][[1]]) # select all options for the first filter chosen by user
filter2_all <- as.vector(list_all[filters[2]][[1]]) # select all options for the second filter chosen by user
list_names = c("Sex", "Age", "Cause", "AIS", "NLI", "Country") # names of columns corresponding to the available filters
plot <- plot_filters_Sygen(data_sygen_copy, score = input$var_neuro_sygen, # call function for Sygen plots in helper_functions.R
list_time,
list_names[filters[1]],
list_names[filters[2]],
filter1_all,
filter2_all, type_plot)
}
}
plot})
output$plot.neuro.emsci <- renderPlotly({
years <- c(unique(input$year_neuro_emsci)[1]:unique(input$year_neuro_emsci)[2])
nb_bins <- unique(input$binyear_neuro_emsci)[1]
data_modified <- data_emsci[data_emsci$YEARDOI %in% years, ]
data_modified$YEARDOI_cat<-cut(data_modified$YEARDOI,
seq(input$year_neuro_emsci[1],
input$year_neuro_emsci[2]+input$year_neuro_emsci[2]%%nb_bins,
nb_bins),
include.lowest = T, right = F)
type_plot <- input$cont_neuro_emsci[1]
# FILTERING BASED ON STAGES TO DISPLAY #
list_all_stages <- c("very acute", "acute I", "acute II", "acute III", "chronic") # store all potential stages
if (input$neuro_choose_time_EMSCI == 'single'){
times <- unique(input$neuro_time_single_EMSCI)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time)]
} else if (input$neuro_choose_time_EMSCI == 'multiple'){
times <- unique(input$neuro_time_multiple_EMSCI)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time[1]:indices_time[2])]
}
if (input$subset_neuro_EMSCI == 0){
input_sex <- unique(input$sex_neuro_EMSCI)
input_age <- unique(input$age_neuro_EMSCI)
input_cause <- unique(input$cause_neuro_EMSCI)
input_ais <- unique(input$grade_neuro_EMSCI)
input_nli <- unique(input$level_neuro_EMSCI)
input_country <- unique(input$country_neuro_EMSCI)
if (!('Unknown' %in% input_sex)){data_modified <- data_modified[data_modified$Sex %in% input_sex, ]}
if (!('Unknown' %in% input_age)){data_modified <- data_modified[data_modified$age_category %in% input_age, ]}
if (!('Unknown' %in% input_ais)){data_modified <- data_modified[data_modified$AIS %in% input_ais, ]}
if (!('Unknown' %in% input_cause)){data_modified <- data_modified[data_modified$Cause %in% input_cause, ]}
if (!('Unknown' %in% input_nli)){data_modified <- data_modified[data_modified$NLI_level %in% input_nli, ]}
if (!('Unknown' %in% input_country)){data_modified <- data_modified[data_modified$Country %in% input_country, ]}
data_emsci_copy <- data_modified
} else {
data_emsci_copy <- data_modified
}
if (input$checkbox_neuro_EMSCI == 1){ # if user chooses to display all data
plot <- plot_base_emsci(data_emsci_copy, score = input$var_neuro_emsci, list_time, type_plot) # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_neuro_EMSCI == 0){ # if user chooses filters
if (!(length(input$checkGroup_neuro_EMSCI) == 2)){ # if user chooses something other than two filters
plot <- plot_error() # give a plot saying to choose 2 filters
}
else if (length(input$checkGroup_neuro_EMSCI) == 2){ # if user chooses exactly 2 filters
filters <- as.numeric(as.vector(unique(input$checkGroup_neuro_EMSCI))) # store filters the user has selected
list_all = list(c("Male", "Female"), # store all the different options for the different filters
c(0,1,2,3,4),
c("disc herniation", "haemorragic", "ischemic", "traumatic", "other"),
c("AIS A", "AIS B", "AIS C", "AIS D"),
c("cervical", "thoracic", "lumbar", "sacral"),
c("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland"),
c('2000-2004', '2005-2009', '2010-2014', '2015-2019'))
filter1_all <- as.vector(list_all[filters[1]][[1]]) # select all options for the first filter chosen by user
filter2_all <- as.vector(list_all[filters[2]][[1]]) # select all options for the second filter chosen by user
list_names = c("Sex", "age_category", "Cause", "AIS", "NLI_level", "Country", "YEARDOI_cat") # names of columns corresponding to the available filters
plot <- plot_filters_emsci(data_emsci_copy, score = input$var_neuro_emsci, # call function for emsci plots in helper_functions.R
list_time,
list_names[filters[1]],
list_names[filters[2]],
filter1_all,
filter2_all, type_plot)
}
}
plot})
output$plot.neuro.all <- renderPlot ({
if (input$subset_neuro_All == 0){
input_sex <- unique(input$sex_neuro_All)
input_age <- unique(input$age_neuro_All)
input_ais <- unique(input$grade_neuro_All)
input_nli <- unique(input$level_neuro_All)
data_modified <- data_emsci_sygen
if (!('Unknown' %in% input_sex)){data_modified <- data_modified[data_modified$Sex %in% input_sex, ]}
if (!('Unknown' %in% input_age)){data_modified <- data_modified[data_modified$Age %in% input_age, ]}
if (!('Unknown' %in% input_ais)){data_modified <- data_modified[data_modified$AIS %in% input_ais, ]}
if (!('Unknown' %in% input_nli)){data_modified <- data_modified[data_modified$NLI %in% input_nli, ]}
data_All_copy <- data_modified
} else {
data_All_copy <- data_emsci_sygen
}
if (input$checkbox_neuro_All == 1){ # if user chooses to display all data
plot <- plot_base_All(data_All_copy, score = input$var_neuro_all) # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_neuro_All == 0){ # if user chooses filters
if (!(length(input$checkGroup_neuro_All) == 2)){ # if user chooses something other than two filters
plot <- plot_error() # give a plot saying to choose 2 filters
}
else if (length(input$checkGroup_neuro_All) == 2){ # if user chooses exactly 2 filters
filters <- as.numeric(as.vector(unique(input$checkGroup_neuro_All))) # store filters the user has selected
list_all = list(c("Female", "Male"), # store all the different options for the different filters
c("12-19", "20-39", "40-59", "60-79", "80+"),
c("AIS A", "AIS B", "AIS C", "AIS D"),
c("cervical", "lumbar", "sacral", "thoracic"))
filter1_all <- as.vector(list_all[filters[1]][[1]]) # select all options for the first filter chosen by user
filter2_all <- as.vector(list_all[filters[2]][[1]]) # select all options for the second filter chosen by user
list_names = c("Sex", "Age", "AIS", "NLI") # names of columns corresponding to the available filters
plot <- plot_filters_All(data_All_copy, score = input$var_neuro_all, # call function for All plots in helper_functions.R
list_names[filters[1]],
list_names[filters[2]],
filter1_all,
filter2_all)
}
}
plot})
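# The subsetting blocks above repeat the same pattern: keep rows matching a selection
# unless the user left it at 'Unknown'. A minimal sketch of a generic helper
# (hypothetical name `filter_unless_unknown`; assumptions: `selection` is a character
# vector of chosen levels, `column` a column name present in `data`):
filter_unless_unknown <- function(data, column, selection) {
  if ('Unknown' %in% selection) return(data)   # 'Unknown' means "no filter applied"
  data[data[[column]] %in% selection, ]
}
# e.g. data_modified <- filter_unless_unknown(data_modified, "Sex", input_sex)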
output$plot.epi.sygen <- renderPlot({
years <- c(unique(input$year_epi_sygen)[1]:unique(input$year_epi_sygen)[2])
nb_bins <- unique(input$binyear_epi_sygen)[1]
data_modified <- data_sygen_epi[data_sygen_epi$yeardoi %in% years, ]
##########################################################################################
vector_temp <- seq(input$year_epi_sygen[1],(input$year_epi_sygen[2]+input$year_epi_sygen[2]%%nb_bins),nb_bins)
vector_temp <- ifelse(vector_temp > input$year_epi_sygen[2],input$year_epi_sygen[2],vector_temp)
if (!(input$year_epi_sygen[2] %in% vector_temp)){
vector_temp <- append(vector_temp, input$year_epi_sygen[2])
}
data_modified$YEARDOI_cat<-cut(data_modified$yeardoi,
c(unique(vector_temp)),
include.lowest = T, right = F)
##########################################################################################
if (input$var_epi_sygen == "sex_sygen"){
if (input$checkbox_epi_sygen == 1){ # if user chooses to display all data
plot <- plot_base_Sex_Sygen(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_sygen == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 1){
if (length(unique(input$grade_epi_sygen)) == 1){
plot <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), '')
} else if (length(unique(input$grade_epi_sygen)) == 2){
plot.1 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_sygen)) == 3){
plot.1 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_sygen)) == 4){
plot.1 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot.4 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[4]), paste(input$grade_epi_sygen[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 2){
if (length(unique(input$paralysis_epi_sygen)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_Sex_Sygen(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_Sex_Sygen(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_Sex_Sygen(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_Sex_Sygen(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_sygen == "age_sygen"){
if (input$checkbox_epi_sygen == 1){ # if user chooses to display all data
plot <- plot_base_Age_Sygen(data_modified, '') # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_epi_sygen == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 1){
if (length(unique(input$grade_epi_sygen)) == 1){
plot <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), '')
} else if (length(unique(input$grade_epi_sygen)) == 2){
plot.1 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_sygen)) == 3){
plot.1 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_sygen)) == 4){
plot.1 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot.4 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[4]), paste(input$grade_epi_sygen[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 2){
if (length(unique(input$paralysis_epi_sygen)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_Age_Sygen(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_Age_Sygen(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_Age_Sygen(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_Age_Sygen(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_sygen == 'ais_sygen') {
if (input$checkbox_epi_sygen == 1){ # if user chooses to display all data
plot <- plot_base_AIS_Sygen(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_sygen == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 2){
if (length(unique(input$paralysis_epi_sygen)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_AIS_Sygen(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_AIS_Sygen(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_AIS_Sygen(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_AIS_Sygen(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_sygen == "nli_sygen") {
if (input$checkbox_epi_sygen == 1){ # if user chooses to display all data
plot <- plot_base_NLI_Sygen(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_sygen == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 1){
if (length(unique(input$grade_epi_sygen)) == 1){
plot <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), '')
} else if (length(unique(input$grade_epi_sygen)) == 2){
plot.1 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_sygen)) == 3){
plot.1 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_sygen)) == 4){
plot.1 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot.4 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[4]), paste(input$grade_epi_sygen[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 2){
if (length(unique(input$paralysis_epi_sygen)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_NLI_Sygen(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_NLI_Sygen(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_NLI_Sygen(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_NLI_Sygen(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
plot})
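# Each epidemiology branch above builds one to four per-grade panels by hand. A hedged
# sketch of a generic alternative (hypothetical name `plot_by_grades`; assumes `plot_fun`
# has the (data, title) signature of the plot_base_* helpers and that gridExtra is
# available for grid.arrange):
plot_by_grades <- function(data, grade_col, grades, plot_fun) {
  grades <- unique(grades)
  if (length(grades) == 1) {
    return(plot_fun(data[data[[grade_col]] == grades[1], ], ''))
  }
  panels <- lapply(grades, function(g) plot_fun(data[data[[grade_col]] == g, ], g))
  do.call(gridExtra::grid.arrange,
          c(panels, ncol = 2, nrow = ceiling(length(panels) / 2)))
}
# e.g. plot_by_grades(data_modified, "ais1", input$grade_epi_sygen, plot_base_Sex_Sygen)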
output$plot.epi.emsci <- renderPlot({
years <- c(unique(input$year_epi_emsci)[1]:unique(input$year_epi_emsci)[2])
nb_bins <- unique(input$binyear_epi_emsci)[1]
data_modified <- data_age_emsci[data_age_emsci$YEARDOI %in% years, ]
data_modified$YEARDOI_cat<-cut(data_modified$YEARDOI,
seq(input$year_epi_emsci[1],
input$year_epi_emsci[2]+input$year_epi_emsci[2]%%nb_bins,
nb_bins),
include.lowest = T, right = F)
if (input$var_epi_emsci == "sex_emsci"){
if (input$checkbox_epi_emsci == 1){ # if user chooses to display all data
plot <- plot_base_Sex_EMSCI(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_emsci == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 1){
if (length(unique(input$grade_epi_emsci)) == 1){
plot <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), '')
} else if (length(unique(input$grade_epi_emsci)) == 2){
plot.1 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_emsci)) == 3){
plot.1 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_emsci)) == 4){
plot.1 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot.4 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[4]), paste(input$grade_epi_emsci[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 2){
if (length(unique(input$paralysis_epi_emsci)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_Sex_EMSCI_paralysis(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_Sex_EMSCI_paralysis(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_Sex_EMSCI_paralysis(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_Sex_EMSCI_paralysis(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_emsci == "age_emsci"){
if (input$checkbox_epi_emsci == 1){ # if user chooses to display all data
plot <- plot_base_Age_EMSCI(data_modified, '') # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_epi_emsci == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 1){
if (length(unique(input$grade_epi_emsci)) == 1){
plot <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), '')
} else if (length(unique(input$grade_epi_emsci)) == 2){
plot.1 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_emsci)) == 3){
plot.1 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_emsci)) == 4){
plot.1 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot.4 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[4]), paste(input$grade_epi_emsci[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 2){
if (length(unique(input$paralysis_epi_emsci)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_Age_EMSCI(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_Age_EMSCI(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_Age_EMSCI(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_Age_EMSCI(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_emsci == 'ais_emsci') {
if (input$checkbox_epi_emsci == 1){ # if user chooses to display all data
plot <- plot_base_AIS_EMSCI(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_emsci == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 2){
if (length(unique(input$paralysis_epi_emsci)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_AIS_EMSCI(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_AIS_EMSCI(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_AIS_EMSCI(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_AIS_EMSCI(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_emsci == "nli_emsci") {
if (input$checkbox_epi_emsci == 1){ # if user chooses to display all data
plot <- plot_base_NLI_EMSCI(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_emsci == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 1){
if (length(unique(input$grade_epi_emsci)) == 1){
plot <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), '')
} else if (length(unique(input$grade_epi_emsci)) == 2){
plot.1 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_emsci)) == 3){
plot.1 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_emsci)) == 4){
plot.1 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot.4 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[4]), paste(input$grade_epi_emsci[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 2){
if (length(unique(input$paralysis_epi_emsci)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_NLI_EMSCI(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_NLI_EMSCI(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_NLI_EMSCI(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_NLI_EMSCI(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
plot})
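# The year-binning above is duplicated across render blocks (and the Sygen variants
# patch the last bin edge by hand). A hedged sketch consolidating it (hypothetical
# name `bin_years`; assumptions: `year` is an integer vector, `width` a positive bin width):
bin_years <- function(year, year_min, year_max, width) {
  breaks <- seq(year_min, year_max, by = width)
  if (tail(breaks, 1) < year_max) breaks <- c(breaks, year_max)  # close the last partial bin
  cut(year, breaks = unique(breaks), include.lowest = TRUE, right = FALSE)
}
# e.g. data_modified$YEARDOI_cat <- bin_years(data_modified$YEARDOI,
#                                             input$year_epi_emsci[1],
#                                             input$year_epi_emsci[2], nb_bins)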
output$title_predict <- renderText({paste("<h2><b>", "Monitoring of an individual patient", "</b></h2>")})
output$plot_predict <- renderPlot({ # create output function for plot of interest
input_sex <- unique(input$select_sex)[1]
input_age <- unique(input$select_age)[1]
input_ais <- unique(input$select_ais)[1]
input_nli <- unique(input$select_nli)[1]
input_score <- unique(input$select_score)[1]
input_indscore <- unique(input$input_compscore)[1]
data_temp <- data_emsci_epi
data_temp$UEMS <- as.numeric(as.character(data_temp$UEMS))
data_temp$LEMS <- as.numeric(as.character(data_temp$LEMS))
data_temp$TMS <- as.numeric(as.character(data_temp$TMS))
data_modified <- data_temp
if (!(input_sex == 'Unknown')){
data_modified <- data_modified[data_modified$Sex %in% input_sex, ]
}
if (!(input_ais == 'Unknown')){
data_modified <- data_modified[data_modified$AIS %in% input_ais, ]
}
if (!(input_nli == 'Unknown')){
if (input_nli == 'Cervical'){
data_modified <- data_modified[data_modified$NLI_level %in% 'cervical', ]
} else if (input_nli == 'Thoracic'){
data_modified <- data_modified[data_modified$NLI_level %in% 'thoracic', ]
} else if (input_nli == 'Lumbar'){
data_modified <- data_modified[data_modified$NLI_level %in% 'lumbar', ]
} else {
data_modified <- data_modified[data_modified$NLI %in% input_nli, ]
}
}
if (!(input_age == 'Unknown')){
if (input_age == '0-19'){
data_modified <- data_modified[data_modified$AgeAtDOI>0 & data_modified$AgeAtDOI<20, ]
} else if (input_age == '20-39'){
data_modified <- data_modified[data_modified$AgeAtDOI>=20 & data_modified$AgeAtDOI<40, ]
} else if (input_age == '40-59'){
data_modified <- data_modified[data_modified$AgeAtDOI>=40 & data_modified$AgeAtDOI<60, ]
} else if (input_age == '60-79'){
data_modified <- data_modified[data_modified$AgeAtDOI>=60 & data_modified$AgeAtDOI<80, ]
} else if (input_age == '80+'){
data_modified <- data_modified[data_modified$AgeAtDOI>=80, ]
}
}
data_modified <- data_modified[data_modified$UEMS != 'NA', ] # drop rows where UEMS is recorded as the text code 'NA'
data_modified <- data_modified[data_modified$UEMS != 'NT', ] # drop rows where UEMS was not testable ('NT')
if (dim(data_modified)[1] == 0){
plot <- plot_error_data()
} else if (input_indscore == "Enter value..." || input_indscore == "") {
plot <- plot_predict_emsci(data_modified, input_score)
} else {
value <- as.numeric(as.character(input_indscore))
if (is.na(value) || value < 0 || value > 50){ # guard against non-numeric input as well as out-of-range scores
plot <- plot_error_value()
} else {
plot <- plot_predict_emsci_NN(data_modified, input_score, value)
}
}
plot})
output$video <- renderUI({
tags$video(src = "video_version3.mp4", type = "video/mp4", autoplay = NA, controls = NA, height = 350, width = 750)
})
# observe({
# showNotification('Disclaimer:
# We cannot be held responsible for conclusions you might draw from the website.
# It is intended for data visualisation only.', type='error')
# })
observeEvent(input$preview, {
# Show a modal when the button is pressed
shinyalert("Disclaimer:
We cannot be held responsible for conclusions you might draw from the website.
It is intended for data visualisation only.", type = "error")
})
observeEvent(input$preview_predict, {
# Show a modal when the button is pressed
shinyalert("Disclaimer:
We cannot be held responsible for conclusions you might draw from the website.
It is intended for data visualisation only.", type = "error")
})
}
# Run app ----
shinyApp(ui, server)
# Source: jutzca/Neurosurveillance, /code/app_neuro_2.R (R, permissive license)
# -----------------------------------------------------------------------------------
# Building a Shiny app for visualisation of neurological outcomes in SCI
#
# July 8, 2020
# L. Bourguignon
# -----------------------------------------------------------------------------------
# Set working directory ----
# Load packages ----
library(rsconnect)
library(shiny)
library(shinyWidgets)
library(shinydashboard)
library(ggplot2)
library(data.table)
library(assertthat)
library(plyr)
library(dplyr)
library('ggthemes')
library(ggpubr)
library(ggridges)
library(gridExtra)
library(sjPlot)
library(jtools)
library(reshape2)
library(PMCMRplus)
#library(epicalc)
library(EpiReport)
library(epiDisplay)
library(naniar)
library(boot)
library(table1)
library(broom)
#library(pander)
library(gtable)
library(grid)
library(tidyr)
library(Hmisc)
library(RColorBrewer)
library(lme4)
library(DT)
library(shinyjs)
library(sodium)
library(shinymanager)
library(shinyalert)
library(plotly)
# Source helper functions -----
source("helper_functions_3.R")
# Load data ----
data_emsci <- read.csv('data/df_emsci_formatted2.csv')
data_emsci$ExamStage <- as.factor(data_emsci$ExamStage)
data_emsci$ExamStage <- relevel(data_emsci$ExamStage, ref = "very acute")
data_sygen <- read.csv('data/df_sygen_formatted_3.csv')
data_SCI_rehab <- read.csv('data/df_rehab_formatted.csv')
data_All <- read.csv('data/df_all_formatted.csv')
data_emsci_sygen <- read.csv('data/df_emsci_sygen_formatted.csv')
data_age_emsci <- read.csv('data/emsci_age.csv')
data_emsci_epi <- read.csv('data/emsci.csv')
data_emsci_epi$ExamStage <- as.factor(data_emsci_epi$ExamStage)
data_emsci_epi$ExamStage <- relevel(data_emsci_epi$ExamStage, ref = "very acute")
data_sygen_epi <- read.csv('data/sygen_epi.csv')
data_SCI_rehab_epi <- read.csv('data/df_rehab_epi.csv')
# Functions ----
# Workaround so that a menuItem containing menuSubItems can itself be selected as a tab
convertMenuItem <- function(mi,tabName) {
mi$children[[1]]$attribs['data-toggle']="tab"
mi$children[[1]]$attribs['data-value'] = tabName
if(length(mi$attribs$class)>0 && mi$attribs$class=="treeview"){
mi$attribs$class=NULL
}
mi
}
latest.DateTime <- file.info("app_neuro_2.R")$mtime
# User interface ----
ui <- dashboardPage(skin = "blue", # make the frame blue
dashboardHeader(title = img(src="neurosurveillance_logo.png", height="80%", width="80%")), # display the Neurosurveillance logo in the header
## Sidebar content
dashboardSidebar(
sidebarMenu(id = "sidebarmenu", # create the main and sub parts within the sidebar menu
menuItem("About", tabName = "AboutTab", icon = icon("info-circle")),
menuItem("User guide", tabName = "user_guide", icon = icon("book-reader")),
menuItem('EMSCI', tabName = 'emsci', icon=icon("database"),
menuSubItem("Study details", tabName = "about_emsci", icon = icon("info-circle")),
menuSubItem("Epidemiological features", tabName = "epi_emsci", icon = icon("users")),
menuSubItem("Neurological features", tabName = "neuro_emsci", icon = icon("user-check")),
menuSubItem("Functional features", tabName = "funct_emsci", icon = icon("accessible-icon")),
menuSubItem("Monitoring", tabName = "monitore_emsci", icon = icon("clipboard-list"))),
menuItem('Sygen Trial', tabName = 'sygen', icon = icon("hospital-user"),
menuSubItem("Study details", tabName = "about_sygen", icon = icon("info-circle")),
menuSubItem("Epidemiological features", tabName = "epi_sygen", icon = icon("users")),
menuSubItem("Neurological features", tabName = "neuro_sygen", icon = icon("user-check")),
menuSubItem("Functional features", tabName = "funct_sygen", icon = icon("accessible-icon"))),
menuItem('Data sources compared', tabName = 'All', icon = icon("balance-scale"),
menuSubItem("Neurological features", tabName = "neuro_all", icon = icon("user-check"))),
menuItem("Abbreviations", tabName = "abbreviations", icon = icon("language")),
menuItem(HTML(paste0("Contact for collaborations ", icon("external-link"))), icon=icon("envelope"), href = "mailto:catherine.jutzeler@bsse.ethz.ch"),
uiOutput("dynamic_content")
) # end sidebarMenu
), # end dashboardSidebar
dashboardBody(
tags$script(HTML("
var openTab = function(tabName){
$('a', $('.sidebar')).each(function() {
if(this.getAttribute('data-value') == tabName) {
this.click()
};
});
};
$('.sidebar-toggle').attr('id','menu');
var dimension = [0, 0];
$(document).on('shiny:connected', function(e) {
dimension[0] = window.innerWidth;
dimension[1] = window.innerHeight;
Shiny.onInputChange('dimension', dimension);
});
$(window).resize(function(e) {
dimension[0] = window.innerWidth;
dimension[1] = window.innerHeight;
Shiny.onInputChange('dimension', dimension);
});
")),
# Customize color for the box status 'primary' and 'success' to match the skin color
tags$style(HTML("
.btn.btn-success {
color: #fff;
background-color: #3c8dbc;
border-color: #3c8dbc;
}
.btn.btn-success.focus,
.btn.btn-success:focus {
color: #fff;
background-color: #3c8dbc;
border-color: #3c8dbc;
outline: none;
box-shadow: none;
}
.btn.btn-success:hover {
color: #fff;
background-color: #3c8dbc;
border-color: #3c8dbc;
outline: none;
box-shadow: none;
}
.btn.btn-success.active,
.btn.btn-success:active {
color: #fff;
background-color: #3c8dbc;
border-color: #3c8dbc;
outline: none;
}
.btn.btn-success.active.focus,
.btn.btn-success.active:focus,
.btn.btn-success.active:hover,
.btn.btn-success:active.focus,
.btn.btn-success:active:focus,
.btn.btn-success:active:hover {
color: #fff;
background-color: #7ac8f5 ;
border-color: #7ac8f5 ;
outline: none;
box-shadow: none;
}
")),
tags$div(class = "tab-content",
#tabItems(
tabItem(tabName = "AboutTab",
titlePanel(title = div(strong("Welcome to ", img(src="neurosurveillance_logo.png", height="35%", width="35%")))),
fluidRow( # create a separation in the panel
column(width = 8, # create first column for boxplot
box(width = NULL, status = "primary",
p(h3("Benchmarking the spontaneous functional and neurological recovery following spinal cord injury"), align = "justify"),
br(),
p('Traumatic spinal cord injury is a rare but devastating neurological disorder.
It constitutes a major public health issue, burdening patients, caregivers,
and society as a whole. The goal of this project is to establish an
international benchmark for neurological and functional recovery after spinal
cord injury. Currently, Neurosurveillance leverages three decades of data from two
of the largest data sources in the field, facilitating the analysis of temporal
trends in the epidemiological landscape, providing reference values for future clinical
trials and studies, and enabling monitoring of patients on a personalized level.', align = "justify"),
p('More information can be found here:', a(icon('github'), href ="https://github.com/jutzca/Neurosurveillance", target="_blank")),
br(),
p(h3('What You Can Do Here:')),
p("This applet enables visitors to interact directly with the data collected
in the EMSCI study and the Sygen clinical trial. Both",
a("EMSCI", onclick = "openTab('emsci')", href="#"),
"and", a("Sygen", onclick = "openTab('sygen')", href="#"),
"Tabs are organized as follows:", align = "justify"),
tags$ul(align="justify",
tags$li("The Study details Tab provides information on the data sources
(",
a("EMSCI study", onclick = "openTab('about_emsci')", href="#"),
"and the ",
a("Sygen clinical trial", onclick = "openTab('about_sygen')", href="#"),
")."),
tags$li("The Epidemiological features Tabs (",
a("EMSCI study", onclick = "openTab('epi_emsci')", href="#"), " and ",
a("Sygen clinical trial", onclick = "openTab('epi_sygen')", href="#"),
"), Neurological features Tabs (",
a("EMSCI study", onclick = "openTab('neuro_emsci')", href="#"), "and",
a("Sygen clinical trial", onclick = "openTab('neuro_sygen')", href="#"),
") and Functional features Tabs (",
a("EMSCI study", onclick = "openTab('funct_emsci')", href="#"), "and",
a("Sygen clinical trial", onclick = "openTab('funct_sygen')", href="#"),
") offer an interactive interface to explore the epidemiological characteristics, as well as
the neurological and functional recovery after spinal cord injury, respectively.
You can choose the data source and the
outcome variable of interest, select the cohort of interest based on demographics and
injury characteristics (entire cohort or a subset thereof), and choose the variables for
the visualization (see more details in",
a("User Guide", onclick = "openTab('user_guide')", href="#"),
")."),
tags$li("The ",
a("Monitoring", onclick = "openTab('monitore_emsci')", href="#"),
" Tab, part of the EMSCI Tab, gives you the possibility to visualize and monitor the neurological and
functional recovery trajectory of single patients or patient groups that share very similar
demographics and baseline injury characteristics. As an example, if you have a patient in the
clinic with a certain motor score and you are interested in their recovery, you can look
at previous patients with comparable characteristics. This follows the concept of a
digital twin/sibling.")),
p("Additionally, you can find:", align = "justify"),
tags$ul(align="justify",
tags$li("A ",
a("User Guide", onclick = "openTab('user_guide')", href="#"),
" describing the different ways to interact with the plots."),
tags$li("The ",
a("Abbreviation", onclick = "openTab('abbreviations')", href="#"),
" Tab lists the abbreviations used throughout this applet.")
),
uiOutput("video"),
br(),
p(h3('Study team')),
p(h4('Principal Investigators')),
HTML(
paste('<p style="text-align:justify">',
a(icon('envelope'), href = 'mailto:catherine.jutzeler@bsse.ethz.ch'), strong('Dr. Catherine Jutzeler'),
', Research Group Leader, Department of Biosystems Science and Engineering, ETH Zurich, Switzerland',
'<br/>',
a(icon('envelope'), href = 'mailto:lucie.bourguignon@bsse.ethz.ch'), strong('Lucie Bourguignon'),
', MSc. PhD Student, Department of Biosystems Science and Engineering, ETH Zurich, Switzerland',
'<br/>',
a(icon('envelope'), href = 'mailto:john.kramer@ubc.ca'), strong('Prof. John Kramer'),
', ICORD Principal Investigator, Anesthesiology, Pharmacology, and Therapeutics,
University of British Columbia, Vancouver, Canada',
'<br/>',
a(icon('envelope'), href = 'mailto:armin.curt@balgrist.ch'), strong('Prof. Armin Curt'),
', Director of Balgrist Spinal Cord Injury Center and EMSCI, University Hospital Balgrist,
Zurich, Switzerland'
)
),
p(h4('Collaborators')),
p('Bobo Tong, Fred Geisler, Martin Schubert, Frank Röhrich, Marion Saur, Norbert Weidner,
Ruediger Rupp, Yorck-Bernhard B. Kalke, Rainer Abel, Doris Maier, Lukas Grassner,
Harvinder S. Chhabra, Thomas Liebscher, Jacquelyn J. Cragg, ',
a('EMSCI study group', href = 'https://www.emsci.org/index.php/members', target="_blank"), align = "justify"),
br(),
p(h3('Ethics statement')),
p('All patients gave their written informed consent before being included in the EMSCI database.
The study was performed in accordance with the Declaration of Helsinki and was approved
by all responsible institutional review boards. If you have any questions or concerns regarding the study,
please do not hesitate to contact the Principal Investigator, ',
a('Dr. Catherine Jutzeler', href = 'mailto:catherine.jutzeler@bsse.ethz.ch'), '.', align = "justify"),
br(),
p(h3('Funding')),
p('This project is supported by the ',
a('Swiss National Science Foundation', href = 'http://p3.snf.ch/project-186101', target="_blank"),
' (Ambizione Grant, #PZ00P3_186101), ',
a('Wings for Life Research Foundation', href = 'https://www.wingsforlife.com/en/', target="_blank"),
' (#2017_044), and the ',
a('International Foundation for Research in Paraplegia', href = 'https://www.irp.ch/en/foundation/', target="_blank"),
' (IRP).', align = "justify"),
), # end box
tags$style(".small-box{border-radius: 15px}"),
valueBox("5000+", "Patients", icon = icon("hospital-user"), width = 3, color = "blue"),
valueBox("15", "Countries", icon = icon("globe-europe"), width = 3, color = "blue"),
valueBox("20", "Years", icon = icon("calendar-alt"), width = 3, color = "blue"),
valueBox("50+", "Researchers", icon = icon("user-cog"), width = 3, color = "blue")#,
), # end column
column(width = 4,
box(width = NULL, status = "primary",
tags$div(
HTML('<a href="https://twitter.com/Neurosurv_Sci?ref_src=twsrc%5Etfw" class="twitter-follow-button" data-show-count="false">Follow @Neurosurv_Sci</a><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>')
),
tags$div(
HTML('<a class="twitter-timeline" data-height="1800" data-theme="light" href="https://twitter.com/Neurosurv_Sci?ref_src=twsrc%5Etfw">Tweets by Neurosurv_Sci</a> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>')
)) # end box
) # end column
), # end fluidRow
shinyjs::useShinyjs(),
tags$footer(HTML("<strong>Copyright © 2020 <a href=\"https://github.com/jutzca/Neurosurveillance\" target=\"_blank\">Neurosurveillance</a>.</strong>
<br>This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-nd/4.0/\" target=\"_blank\">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
<br><a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-nd/4.0/\" target=\"_blank\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png\" /></a>
<br>Last updated:<br>"),
latest.DateTime,
id = "sideFooter",
align = "left",
style = "
position:absolute;
bottom:0;
width:100%;
padding: 10px;
"
)
), # end tabItem
tabItem(tabName = "user_guide",
titlePanel(title = div(strong("User Guide"))),
), #end tabItem
tabItem(tabName = "about_emsci",
h3("European Multicenter Study about Spinal Cord Injury"),
box(#title = "Explore The Data",
width = 12,
height = "500px",
solidHeader = TRUE,
strong("Study design"),
"The EMSCI is an ongoing longitudinal, observational study that prospectively collects clinical,
functional, and neurophysiological data at fixed time points over the first year after injury: very acute
(within 2 weeks), acute I (2-4 weeks), acute II (3 months), acute III (6 months), and chronic
(12 months).",
br(),
br(),
strong("Governance structure"),
"Founded in 2001 and led by the University Clinic Balgrist (Zurich, Switzerland), EMSCI
consists of 23 active members from 9 countries and 6 passive members from 4 countries.
Moreover, it has three associated members from Canada (2) and Germany (1).",
br(),
br(),
strong("Ethical approval and registration"),
"The study was performed in accordance with the Declaration of Helsinki and
approved by the local ethics committee of each participating center. All patients
gave their written informed consent before being included in the EMSCI database.
European Multicenter Study about Spinal Cord Injury is registered with ClinicalTrials.gov
Identifier (NCT01571531).",
br(),
br(),
strong("Inclusion/exclusion criteria."),
"Three inclusion criteria have to be met before a patient can be enrolled in the EMSCI:
(1) the patient has to be capable and willing to give written informed consent;
(2) the injury was caused by a single event; and
(3) the first EMSCI assessment has to be possible within the first 6 weeks following injury.
Patients are excluded from EMSCI for the following reasons:
(1) nontraumatic para- or tetraplegia (i.e. disc prolapse, tumor, AV malformation, myelitis), excluding single-event ischemic incidences;
(2) previously known dementia or severe reduction of intelligence, leading to reduced capabilities of cooperation or giving consent;
(3) peripheral nerve lesions above the level of lesion (i.e. plexus brachialis impairment);
(4) pre-existing polyneuropathy; and
(5) severe craniocerebral injury. All individuals in the EMSCI receive standards of rehabilitation care.",
br(),
br(),
strong("Neurological Scoring"),
"The EMSCI centers collect the following neurological scores: total motor score,
lower extremity motor score, upper extremity motor score, total pinprick score,
and total light touch score. For motor scores, key muscles in the upper and lower
extremities were examined according to the International Standards for the Neurological
Classification of SCI (ISNCSCI), with a maximum score of 50 points for each of the upper
and lower extremities (for a maximum total score of 100). Light touch and pin prick
(sharp-dull discrimination) scores were also assessed according to ISNCSCI, with a maximum
score of 112 each.",
br(),
br(),
strong("Functional Assessments"),
"Functional outcomes comprise Spinal Cord Independence Measure (SCIM),
Walking Index for Spinal Cord Injury, 10-meter walking test (10MWT),
6-minute walk test (6MWT), and Timed Up & Go (TUG) test. The SCIM is a
scale for the assessment of achievements of daily function of patients with spinal cord lesions.
Throughout the duration of this study (2001-2019), two different versions of the SCIM were used:
SCIM II between 2001 and 2007, and SCIM III since 2008. The major difference between the versions
is that SCIM II does not consider intercultural differences. Both versions contain 19 tasks covering
all activities of daily living, organized in four areas of function (subscales): self-care (scored 0–20);
respiration and sphincter management (0–40); mobility in room and toilet (0–10); and mobility
indoors and outdoors (0–30). The WISCI II is an ordinal scale that quantifies a patient's walking
ability; a score of 0 indicates that a patient cannot stand and walk, and the highest score of 20
is assigned if a patient can walk more than 10m without walking aids or assistance. Lastly, the
10MWT measures the time (in seconds) it takes a patient to walk 10m, the 6MWT quantifies the
distance (in meters) covered by the patient within 6 minutes, and TUG measures the time (in seconds)
it takes a patient to stand up from an armchair, walk 3m, return to the chair, and sit down."
)
), #end tabItem
tabItem(tabName = "epi_emsci",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_epi_emsci",
label = "Epidemiological features:",
selected = "sex_emsci",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("Sex", "Age", "AIS grade", "Level of injury"),
choiceValues = c("sex_emsci", "age_emsci", "ais_emsci", "nli_emsci")
) # Close radioGroupButtons bracket
), # Close div bracket
),
box(status = "primary", width = NULL,
div(plotOutput("plot.epi.emsci", width = "90%",
height = "660px",
inline = FALSE),
align='center')
), # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create a new box
sliderTextInput(inputId = "year_epi_emsci", # create new slider text
label = "Years of injury to display:", # label of the box
choices = list("2000" = 2000,"2001" = 2001,"2002" = 2002,"2003" = 2003,"2004" = 2004,
"2005" = 2005,"2006" = 2006,"2007" = 2007,"2008" = 2008,"2009" = 2009,
"2010" = 2010,"2011" = 2011,"2012" = 2012,"2013" = 2013,"2014" = 2014,
"2015" = 2015,"2016" = 2016,"2017" = 2017,"2018" = 2018,"2019" = 2019),
selected = c(2000, 2019),
animate = T, grid = T, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_epi_emsci", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 5,
value = 1)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_epi_emsci",
label = "Do you want to inspect subpopulations?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1) # default: show the full EMSCI cohort without subgroup filters
), # end box
conditionalPanel(condition = "input.checkbox_epi_emsci == 0", # if the user chooses to inspect subpopulations, show the filter panel
box(status = "primary", width = NULL, # create new box
radioButtons(inputId = "checkGroup_epi_emsci", # create new check box group
label = "Criteria:", # label of the box
choices = list("AIS grade" = '1', "Paralysis type" = '2'), # choices are the different filters that can be applied
selected = c('1')) # by default, AIS grade is selected
) # end box
), # end conditionalPanel
conditionalPanel(condition = "input.checkbox_epi_emsci == 0 && input.checkGroup_epi_emsci.includes('1')", # if user chooses to filter based on AIS grade
box(status = "primary", width = NULL, # create a new box
checkboxGroupInput(inputId = "grade_epi_emsci", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A" = "A", "AIS B" = "B", "AIS C" = "C", "AIS D" = "D"), # choices (match the levels in the EMSCI dataset); missing AIS grades will automatically be removed
selected = c("A", "B", "C", "D")) # by default, all grades are selected
) # end box
), # end conditionalPanel
conditionalPanel(condition = "input.checkbox_epi_emsci == 0 && input.checkGroup_epi_emsci.includes('2')", # if user chooses to filter based on paralysis type
box(status = "primary", width = NULL, # create a new box
checkboxGroupInput(inputId = "paralysis_epi_emsci", # create new check box group
label = "Type of paralysis:", # label of the box
choices = list("paraplegia", 'tetraplegia'),
selected = c("paraplegia", 'tetraplegia'))
) # end box
) # end conditionalPanel
), #end column
) #close fluid row
), #end tabItem
tabItem(tabName = "neuro_emsci",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_neuro_emsci",
label = "Neurological features:",
selected = "UEMS",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("UEMS","RUEMS","LUEMS",
"LEMS","RLEMS","LLEMS",
"RMS","LMS","TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT"),
choiceValues = c("UEMS","RUEMS","LUEMS",
"LEMS","RLEMS","LLEMS",
"RMS","LMS","TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT")
) # Close radioGroupButtons bracket
), # Close div bracket
), # close box
box(status = "primary", width = NULL,
div(plotlyOutput("plot.neuro.emsci", width = "90%",
height = "660px",
inline = FALSE),
align='center'),
), # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create box
radioButtons("cont_neuro_emsci",
label = "Type of plot?",
choices = c("Trend plot" = 0, 'Boxplot' = 1),
selected = 0)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("subset_neuro_EMSCI",
label = "Select a subset of the cohort?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checksub_neuro_EMSCI", # create new check box group
label = "Subsetting criteria:", # label of the box
choices = list("Sex" = '1', "Age at injury" = '2', "Cause of SCI" = '3', "AIS grade" = '4', "Level of injury" = '5', "Country" = '6'),# "Year of injury" = '7'), # choices are the different filters that can be applied
selected = c('1', '4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('1')", # if user chooses to filter based on sex
checkboxGroupInput(inputId = "sex_neuro_EMSCI", # create new check box group
label = "Sex:", # label of the box
choices = list("Male", "Female"), # choices are male and female (match the levels in EMSCI dataset)
selected = c("Male", "Female")) # by default, both male and female are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('2')", # if user chooses to filter based on age
checkboxGroupInput(inputId = "age_neuro_EMSCI", # create new check box group
label = "Age at injury:", # label of the box
choices = list("0-19 (0)" = 0, "20-39 (1)" = 1, "40-59 (2)" = 2, "60-79 (3)" = 3, "80-99 (4)" = 4), # choices are age group divided in 20 years (match the levels in EMSCI dataset from categorised column)
selected = c(0,1,2,3,4)) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('3')", # if user chooses to filter based on cause of injury
checkboxGroupInput(inputId = "cause_neuro_EMSCI", # create new check box group
label = "Cause of injury:", # label of the box
choices = list("disc herniation", "haemorragic", "ischemic", "traumatic", "other"), # choices (match the levels in EMSCI dataset)
selected = c("disc herniation", "haemorragic", "ischemic", "traumatic", "other")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('4')", # if user chooses to filter based on AIS grade
checkboxGroupInput(inputId = "grade_neuro_EMSCI", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D", "AIS E"), # choices (match the levels in EMSCI dataset), missing AIS grades will automatically be removed
selected = c("AIS A", "AIS B", "AIS C", "AIS D", "AIS E")) # by default, all grades are selected but missing grades
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('5')", # if user chooses to filter based on neurological level of injury
checkboxGroupInput(inputId = "level_neuro_EMSCI", # create new check box group
label = "Neurological level of injury:", # label of the box
choices = list("cervical", "thoracic", "lumbar", "sacral"), # choices (match the levels in EMSCI dataset)
selected = c("cervical", "thoracic", "lumbar", "sacral")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('6')", # if user chooses to filter based on country
checkboxGroupInput(inputId = "country_neuro_EMSCI", # create new check box group
label = "Country:", # label of the box
choices = list("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland"), # choices (match the levels in EMSCI dataset)
selected = c("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland")) # by default, all countries are selected
), # end conditionalPanel
# conditionalPanel(condition = "input.subset_neuro_EMSCI == 0 && input.checksub_neuro_EMSCI.includes('7')", # if user chooses to filter based on year of injury (categorised)
# sliderTextInput(inputId = "filteryear_neuro_EMSCI", # create new slider text
# label = "Years of injury to display:", # label of the box
# choices = list("2000" = 2000, "2005" = 2005, "2010" = 2010, "2015" = 2015, "2019" = 2019), # choices (match the levels in EMSCI dataset in categorised column)
# selected = c(2000, 2019), # by default, all groups are selected
# animate = T, grid = T, hide_min_max = FALSE, from_fixed = FALSE,
# to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
# to_max = NULL, force_edges = T, width = NULL, pre = NULL,
# post = NULL, dragRange = TRUE)
# ) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_neuro_EMSCI",
label = "How do you want to visualise the data?",
choices = c("Default" = 1, 'Customize (display by subgroups)' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.checkbox_neuro_EMSCI == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checkGroup_neuro_EMSCI", # create new check box group
label = "Visualisation criteria:", # label of the box
choices = list("Sex" = '1',
"Age at injury" = '2',
"Cause of SCI" = '3',
"AIS grade" = '4',
"Level of injury" = '5',
"Country" = '6'),
#"Year of injury" = '7'), # choices are the different filters that can be applied
selected = c('1','4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
), # end box
box(status = "primary", width = NULL,
radioButtons(inputId = "neuro_choose_time_EMSCI",
label = "Choose time displayed:",
choices = c("Single time point" = "single",
"Time range" = "multiple"),
selected = "multiple"),
conditionalPanel(condition = "input.neuro_choose_time_EMSCI == 'multiple'",
sliderTextInput(inputId = "neuro_time_multiple_EMSCI",
label = "Time range:",
choices = list("very acute", "acute I", "acute II", 'acute III', 'chronic'),
selected = c("very acute", 'chronic'),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end conditionalPanel
conditionalPanel(condition = "input.neuro_choose_time_EMSCI == 'single'",
sliderTextInput(inputId = "neuro_time_single_EMSCI",
label = "Time point:",
choices = list("very acute", "acute I", "acute II", 'acute III', 'chronic'),
selected = c("very acute"),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create a new box
sliderTextInput(inputId = "year_neuro_emsci", # create new slider text
label = "Years of injury to display:", # label of the box
choices = list("2000" = 2000,"2001" = 2001,"2002" = 2002,"2003" = 2003,"2004" = 2004,
"2005" = 2005,"2006" = 2006,"2007" = 2007,"2008" = 2008,"2009" = 2009,
"2010" = 2010,"2011" = 2011,"2012" = 2012,"2013" = 2013,"2014" = 2014,
"2015" = 2015,"2016" = 2016,"2017" = 2017,"2018" = 2018,"2019" = 2019,
"2019" = 2019),
selected = c(2000, 2019),
animate = T, grid = T, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_neuro_emsci", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 5,
value = 5)
), # end box
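                    # The bin-width slider above is assumed to feed a server-side grouping of
                    # injury years; a minimal sketch (object and column names hypothetical,
                    # the actual logic lives in the server code):
                    #   emsci$year_bin <- cut(emsci$YEARDOI,
                    #                         breaks = seq(2000, 2020 + 1, by = input$binyear_neuro_emsci),
                    #                         include.lowest = TRUE, right = FALSE, dig.lab = 4)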
) #end column
) #end FluidRow
), #end tabItem
tabItem(tabName = "funct_emsci",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_funct_emsci",
label = "Functional features:",
selected = "WISCI",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("WISCI","test_6min","test_10m","TUG","SCIM2","SCIM3"),
choiceValues = c("WISCI","test_6min","test_10m","TUG","SCIM2","SCIM3")
) # Close radioGroupButtons bracket
), # Close div bracket
),
box(status = "primary", width = NULL,
div(plotlyOutput("plot.funct.emsci", width = "90%",
height = "660px",
inline = FALSE),
align='center'),
), # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create box
radioButtons("cont_funct_emsci",
label = "Type of plot ?",
choices = c("Trend plot" = 0, 'Boxplot' = 1),
selected = 0)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("subset_funct_EMSCI",
label = "Select subset of the cohort ?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.subset_funct_EMSCI == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checksub_funct_EMSCI", # create new check box group
label = "Subsetting criteria:", # label of the box
choices = list("Sex" = '1', "Age at injury" = '2', "Cause of SCI" = '3', "AIS grade" = '4', "Level of injury" = '5', "Country" = '6'),# "Year of injury" = '7'), # choices are the different filters that can be applied
selected = c('1', '4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('1')", # if user chooses to filter based on sex
checkboxGroupInput(inputId = "sex_funct_EMSCI", # create new check box group
label = "Sex:", # label of the box
choices = list("Male", "Female"), # choices are male and female (match the levels in EMSCI dataset)
selected = c("Male", "Female")) # by default, both male and female are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('2')", # if user chooses to filter based on age
checkboxGroupInput(inputId = "age_funct_EMSCI", # create new check box group
label = "Age at injury:", # label of the box
choices = list("0-19 (0)" = 0, "20-39 (1)" = 1, "40-59 (2)" = 2, "60-79 (3)" = 3, "80-99 (4)" = 4), # choices are age group divided in 20 years (match the levels in EMSCI dataset from categorised column)
selected = c(0,1,2,3,4)) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('3')", # if user chooses to filter based on cause of injury
checkboxGroupInput(inputId = "cause_funct_EMSCI", # create new check box group
label = "Cause of injury:", # label of the box
choices = list("ischemic", "traumatic"), # choices (match the levels in EMSCI dataset)
selected = c("disc herniation", "haemorragic", "ischemic", "traumatic", "other")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('4')", # if user chooses to filter based on AIS grade
checkboxGroupInput(inputId = "grade_funct_EMSCI", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D", "AIS E"), # choices (match the levels in EMSCI dataset), missing AIS grades will automatically be removed
selected = c("AIS A", "AIS B", "AIS C", "AIS D", "AIS E")) # by default, all grades are selected but missing grades
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('5')", # if user chooses to filter based on neurological level of injury
checkboxGroupInput(inputId = "level_funct_EMSCI", # create new check box group
label = "Neurological level of injury:", # label of the box
choices = list("cervical", "thoracic", "lumbar", "sacral"), # choices (match the levels in EMSCI dataset)
selected = c("cervical", "thoracic", "lumbar", "sacral")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('6')", # if user chooses to filter based on country
checkboxGroupInput(inputId = "country_funct_EMSCI", # create new check box group
label = "Country:", # label of the box
choices = list("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland"), # choices (match the levels in EMSCI dataset)
selected = c("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland")) # by default, all countries are selected
), # end conditionalPanel
# conditionalPanel(condition = "input.subset_funct_EMSCI == 0 && input.checksub_funct_EMSCI.includes('7')", # if user chooses to filter based on year of injury (categorised)
# sliderTextInput(inputId = "filteryear_funct_EMSCI", # create new slider text
# label = "Years of injury to display:", # label of the box
# choices = list("2000" = 2000, "2005" = 2005, "2010" = 2010, "2015" = 2015, "2019" = 2019), # choices (match the levels in EMSCI dataset in categorised column)
# selected = c(2000, 2019), # by default, all groups are selected
# animate = T, grid = T, hide_min_max = FALSE, from_fixed = FALSE,
# to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
# to_max = NULL, force_edges = T, width = NULL, pre = NULL,
# post = NULL, dragRange = TRUE)
# ) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_funct_EMSCI",
label = "How do you want to visualise the data?",
choices = c("Default" = 1, 'Customize (display by subgroups)' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.checkbox_funct_EMSCI == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checkGroup_funct_EMSCI", # create new check box group
label = "Visualisation criteria:", # label of the box
choices = list("Sex" = '1',
"Age at injury" = '2',
"Cause of SCI" = '3',
"AIS grade" = '4',
"Level of injury" = '5',
"Country" = '6'),
#"Year of injury" = '7'), # choices are the different filters that can be applied
selected = c('1','4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
), # end box
box(status = "primary", width = NULL,
radioButtons(inputId = "funct_choose_time_EMSCI",
label = "Choose time displayed:",
choices = c("Single time point" = "single",
"Time range" = "multiple"),
selected = "multiple"),
conditionalPanel(condition = "input.funct_choose_time_EMSCI == 'multiple'",
sliderTextInput(inputId = "funct_time_multiple_EMSCI",
label = "Time range:",
choices = list("very acute", "acute I", "acute II", 'acute III', 'chronic'),
selected = c("very acute", 'chronic'),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end conditionalPanel
conditionalPanel(condition = "input.funct_choose_time_EMSCI == 'single'",
sliderTextInput(inputId = "funct_time_single_EMSCI",
label = "Time point:",
choices = list("very acute", "acute I", "acute II", 'acute III', 'chronic'),
selected = c("very acute"),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create a new box
sliderTextInput(inputId = "year_funct_emsci", # create new slider text
label = "Years of injury to display:", # label of the box
choices = list("2000" = 2000,"2001" = 2001,"2002" = 2002,"2003" = 2003,"2004" = 2004,
"2005" = 2005,"2006" = 2006,"2007" = 2007,"2008" = 2008,"2009" = 2009,
"2010" = 2010,"2011" = 2011,"2012" = 2012,"2013" = 2013,"2014" = 2014,
"2015" = 2015,"2016" = 2016,"2017" = 2017,"2018" = 2018,"2019" = 2019,
"2019" = 2019),
selected = c(2000, 2019),
animate = T, grid = T, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_funct_emsci", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 5,
value = 5)
), # end box
) #end column
) # end FluidRow
), #end tabItem
tabItem(tabName = "monitore_emsci",
useShinyalert(),
actionButton("preview_predict", "Disclaimer"),
htmlOutput("title_predict"),
fluidRow( # create a separation in the panel
column(width = 8, # create first column for boxplot
box(width = NULL, status = "primary",
p("Using this monitoring tool you can visualise, in blue, patients that had a
similar score at a very acute stage for your outcome of interest (± 5 points).
These patients share the characteristics you specified in terms of sex,
age injury severity, (AIS grade), and neurological of injury.
Black points represent patients with the similar characteristics,
irrespective of their score at a very acute stage.")),
box(width = NULL, status = "primary", # create box to display plot
align="center", # center the plot
plotOutput('plot_predict', height = 660)) # call server plot function for the score and dataset chosen by the user #end box
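                  # The matching described in the box above is assumed to be computed in the
                  # server code; a minimal sketch of the ±5-point similarity filter
                  # (data frame and column names hypothetical):
                  #   similar <- subset(emsci,
                  #                     Sex == input$select_sex &
                  #                       AIS == input$select_ais &
                  #                       abs(baseline_score - as.numeric(input$input_compscore)) <= 5)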
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create a new box
selectInput("select_score",
label = "Select outcome of interest",
choices = list("UEMS", 'LEMS', 'TMS'),
selected = c("UEMS"))
), # end box
box(status = "primary", width = NULL, # create a new box
textInput("input_compscore",
label = "Patient score at very acute stage",
value = "Enter value...")
), # end box
box(status = "primary", width = NULL, # create a new box
selectInput("select_sex",
label = "Select sex",
choices = list("Male" = "m", "Female" = "f", "Unknown" = "Unknown"),
selected = c("Unknown"))
), # end box
box(status = "primary", width = NULL, # create box
selectInput("select_age",
label = "Select age at injury",
choices = list("0-19", "20-39", "40-59", "60-79", "80+", 'Unknown'),
selected = c("Unknown"))
), # end box
box(status = "primary", width = NULL, # create a new box
selectInput("select_ais",
label = "Select baseline AIS grade",
choices = list("AIS A" = "A", "AIS B" = "B", "AIS C" = "C", "AIS D" = "D", "AIS E" = "E", "Unknown" = "Unknown"),
selected = c("Unknown"))
), # end box
box(status = "primary", width = NULL, # create a new box
selectInput("select_nli",
label = "Select injury level",
choices = list("Unknown", "Cervical", "Thoracic", "Lumbar", "Sacral",
"C1","C2","C3","C4","C5","C6","C7","C8",
"T1","T2","T3","T4","T5","T6","T7","T8","T9","T1","T11", "T12",
"L1", "L2", "L3", "L4", "L5",
"S1", "S2", "S3", "S4", "S5"),
selected = c("Unknown"))
)#, # end box
) #end column
) #end fluidRow
), #end tabItem
tabItem(tabName = "about_sygen",
h3(strong("The Sygen Clinical Trial")),
br(),
fluidRow(
box(#title = "Explore The Data",
width = 8,
heigth = "500px",
solidHeader = TRUE,
h4("Objectives of original study"),
                  "To determine the efficacy and safety of monosialotetrahexosylganglioside GM1 (i.e., Sygen) in acute spinal cord injury.",
br(),
h4("Methods"),
strong("Monosialotetrahexosylganglioside GM1"),
"Sygen (monosialotetrahexosylganglioside GM1 sodium salt) is a naturally occurring compound in cell membranes of mammals and is especially abundant in the membranes of the central nervous system.
Acute neuroprotective and longer-term regenerative effects in multiple experimental models of ischemia and injury have been reported. The proposed mechanisms of action of GM1 include
anti-excitotoxic activity, apoptosis prevention, and potentiation of neuritic sprouting and the effects of nerve growth factors.",
br(),
br(),
strong("Study design."), "Randomized, double-blind, sequential,
multicenter clinical trial of two doses Sygen (i.e., low-dose GM-1: 300 mg intravenous loading dose followed by 100 mg/d x 56 days or high-dose GM-1:00 mg intravenous loading dose followed by 200 mg/d x 56 days) versus
placebo. All patients received the National Acute Spinal Cord Injury Study (NASCIS-2) protocol dosage of methylprednisolone. Based on a potential adverse interaction between concomitant MPSS and GM-1 administration,
the initial dose of GM-1 was delayed until after the steroids were given (mean onset of study medication, 54.9 hours).",
br(),
br(),
strong("Inclusion/exclusion criteria."), "For inclusion in Sygen, patients were required to have at least one lower extremity with a substantial motor deficit. Patients with spinal cord transection
or penetration were excluded, as were patients with a cauda equina, brachial or lumbosacral plexus, or peripheral nerve injury. Multiple trauma cases were included as long as they were not so severe
as to preclude neurologic evaluation. It is notable that this requirement of participating in a detailed neurologic exam excluded major head trauma cases and also intubated
chest trauma cases.",
br(),
br(),
strong("Assessments."), "Baseline neurologic assessment included both the AIS and detailed American Spinal Injury Association (ASIA) motor and
sensory examinations. Additionally, the Modified Benzel Classification and the ASIA motor and
sensory examinations were performed at 4, 8, 16, 26, and 52 weeks after injury. The Modified Benzel Classification was used for post-baseline measurement because it rates walking
ability and, in effect, subdivides the broad D category of the AIS. Because most patients have an unstable spinal fracture at
baseline, it is not possible to assess walking ability at that time; hence the use of different baseline and follow-up scales.
Marked recovery was defined as at least a two-grade equivalent improvement in the Modified Benzel Classification from the
baseline AIS. The primary efficacy assessment was the proportion of patients with marked recovery at week 26. The secondary efficacy assessments included the time course of marked recovery and
other established measures of spinal cord function (the ASIA motor and sensory scores, relative and absolute sensory levels of impairment, and assessments of bladder and bowel
function).",
br(),
br(),
strong("Concomitant medications."), "The use of medications delivered alongside the study medication (i.e., GM-1) was rigorously tracked.
For each concomitant medication administered during the trial, the dosage, reason for administration, and the timing of administration were recorded.",
br(),
br(),
strong("Results."), "Of 797 patients recruited, 760 were included in the analysis. The prospectively planned analysis at the prespecified endpoint time for all patients was negative.
                     The negative finding of the Sygen study is considered Class I Medical Evidence by the Spinal Cord Injury Committee of the
American Association of Neurological Surgeons (AANS) and the Congress of Neurological Surgeons (CNS). Subsequent analyses of the Sygen
data have been performed to characterize the trajectory and extent of spontaneous recovery from acute spinal cord injury.",
br()
), # close box
fluidRow(
valueBox(prettyNum(797, big.mark=" ", scientific=FALSE), "Patients", icon = icon("user-edit"), width = 3, color = "blue"),
#valueBox(prettyNum(489, big.mark=" ", scientific=FALSE), "Unique concomittant medications to treat secondary complications", icon = icon("pills"), width = 3, color = "blue"),
#valueBox(tagList("10", tags$sup(style="font-size: 20px", "%")),
# "Prophylactic medication use", icon = icon("prescription"), width = 3, color = "blue"
#),
valueBox("28", "North American clinical sites", icon = icon("clinic-medical"), width = 3, color = "blue"),
valueBox("1991-1997", "Running time", icon = icon("calendar-alt"), width = 3, color = "blue")#,
)
), # close fluid row
fluidRow(
box(#title = "Explore The Data",
width = 8,
heigth = "500px",
solidHeader = TRUE,
h4("References"),
tags$ul(
tags$li(a('Geisler et al, 2001', href = 'https://europepmc.org/article/med/11805612', target="_blank"), "Recruitment and early treatment in a multicenter study of acute spinal cord injury. Spine (Phila Pa 1976)."),
tags$li(a('Geisler et al, 2001', href = 'https://journals.lww.com/spinejournal/Fulltext/2001/12151/The_Sygen_R__Multicenter_Acute_Spinal_Cord_Injury.15.aspx', target="_blank"), "The Sygen multicenter acute spinal cord injury study. Spine (Phila Pa 1976)")
) # close tags
) # close box
) # close fluid row
), # close tab item
tabItem(tabName = "epi_sygen",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_epi_sygen",
label = "Epidemiological features:",
selected = "sex_sygen",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("Sex", "Age", "AIS grade", "Level of injury"),
choiceValues = c("sex_sygen", "age_sygen", "ais_sygen", "nli_sygen")
) # Close radioGroupButtons bracket
), # Close div bracket
),
box(status = "primary", width = NULL,
div(plotOutput("plot.epi.sygen", width = "90%",
height = "660px",
inline = FALSE),
align='center')
), # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "year_epi_sygen", # create new slider text
label = "Years of injury to display:", # label of the box
min = 1991, max = 1997,
value = c(1991,1997),
sep = "")
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_epi_sygen", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 3,
value = 1)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_epi_sygen",
label = "Do you want to inspect subpopulations ?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1) # checkbox to go to basic plot: no filters, Sygen patients, x-axis=stages, y-axis=value of score
), # end box
conditionalPanel(condition = "input.checkbox_epi_sygen == 0", # if user decides to not display Sygen data and apply filters, make a new panel appear
box(status = "primary", width = NULL, # create new box
radioButtons(inputId = "checkGroup_epi_sygen", # create new check box group
label = "Criteria:", # label of the box
choices = list("AIS grade" = '1', "Paralysis type" = '2'), # choices are the different filters that can be applied
                                                                      selected = c('1')) # by default, AIS grade is selected
) # end box
), # end conditionalPanel
conditionalPanel(condition = "input.checkbox_epi_sygen == 0 && input.checkGroup_epi_sygen.includes('1')", # if user chooses to filter based on AIS grade
box(status = "primary", width = NULL, # create a new box
checkboxGroupInput(inputId = "grade_epi_sygen", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D"), # choices (match the levels in Sygen dataset), missing AIS grades will automaticSygeny be removed
selected = c("AIS A", "AIS B", "AIS C", "AIS D")) # by default, Sygen grades are selected but missing grades
) # end box
), # end conditionalPanel
conditionalPanel(condition = "input.checkbox_epi_sygen == 0 && input.checkGroup_epi_sygen.includes('2')", # if user chooses to filter based on functlogical level of injury
box(status = "primary", width = NULL, # create a new box
checkboxGroupInput(inputId = "paralysis_epi_sygen", # create new check box group
label = "Type of paralysis:", # label of the box
choices = list("paraplegia", 'tetraplegia'),
selected = c("paraplegia", 'tetraplegia'))
) # end box
) # end conditionalPanel
) #end column
) #close fluid row
), #close tabitem epi_sygen
tabItem(tabName = "neuro_sygen",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_neuro_sygen",
label = "Neurological features:",
selected = "UEMS",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c("UEMS", "LEMS", "TEMS",
"RMS", "LMS", "TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT"),
choiceValues = c("UEMS", "LEMS", "TEMS",
"RMS", "LMS", "TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT")
) # Close radioGroupButtons bracket
), # Close div bracket
),
box(status = "primary", width = NULL,
div(plotlyOutput("plot.neuro.sygen", width = "90%",
height = "660px",
inline = FALSE),
align='center')
), # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create box
radioButtons("cont_neuro_Sygen",
label = "Type of plot ?",
choices = c("Trend plot" = 0, 'Boxplot' = 1),
selected = 0)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("subset_neuro_Sygen",
label = "Select subset of the cohort ?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.subset_neuro_Sygen == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checksub_neuro_Sygen", # create new check box group
label = "Subsetting criteria:", # label of the box
choices = list("Sex" = '1', "Age at injury" = '2', "Cause of SCI" = '3', "AIS grade" = '4', "Level of injury" = '5'),
selected = c('1', '4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_Sygen == 0 && input.checksub_neuro_Sygen.includes('1')", # if user chooses to filter based on sex
checkboxGroupInput(inputId = "sex_neuro_Sygen", # create new check box group
label = "Sex:", # label of the box
choices = list("Male", "Female"), # choices are male and female (match the levels in Sygen dataset)
selected = c("Male", "Female")) # by default, both male and female are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_Sygen == 0 && input.checksub_neuro_Sygen.includes('2')", # if user chooses to filter based on age
checkboxGroupInput(inputId = "age_neuro_Sygen", # create new check box group
label = "Age at injury:", # label of the box
choices = list("0-19", "20-39", "40-59", "60-79"),
selected = c("0-19", "20-39", "40-59", "60-79")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_Sygen == 0 && input.checksub_neuro_Sygen.includes('3')", # if user chooses to filter based on cause of injury
checkboxGroupInput(inputId = "cause_neuro_Sygen", # create new check box group
label = "Cause of injury:", # label of the box
choices = list("automobile", "blunt trauma", "fall", "gun shot wound", "motorcycle", "other sports", "others", "pedestrian", "water related"),
selected = c("automobile", "blunt trauma", "fall", "gun shot wound", "motorcycle", "other sports", "others", "pedestrian", "water related"))
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_Sygen == 0 && input.checksub_neuro_Sygen.includes('4')", # if user chooses to filter based on AIS grade
checkboxGroupInput(inputId = "grade_neuro_Sygen", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D"), # choices (match the levels in Sygen dataset), missing AIS grades will automatically be removed
selected = c("AIS A", "AIS B", "AIS C", "AIS D")) # by default, all grades are selected but missing grades
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_Sygen == 0 && input.checksub_neuro_Sygen.includes('5')", # if user chooses to filter based on neurological level of injury
checkboxGroupInput(inputId = "level_neuro_Sygen", # create new check box group
label = "Neurological level of injury:", # label of the box
choices = list("cervical", "thoracic"), # choices (match the levels in Sygen dataset)
selected = c("cervical", "thoracic")) # by default, all categories are selected
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_neuro_Sygen",
label = "How do you want to visualise the data?",
choices = c("Default" = 1, 'Customize (display by subgroups)' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.checkbox_neuro_Sygen == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checkGroup_neuro_Sygen", # create new check box group
label = "Visualisation criteria:", # label of the box
choices = list("Sex" = '1',
"Age at injury" = '2',
"Cause of SCI" = '3',
"AIS grade" = '4',
"Level of injury" = '5'), # choices are the different filters that can be applied
selected = c('1','4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
), # end box
box(status = "primary", width = NULL,
radioButtons(inputId = "neuro_choose_time_Sygen",
label = "Choose time displayed:",
choices = c("Single time point" = "single",
"Time range" = "multiple"),
selected = "multiple"),
conditionalPanel(condition = "input.neuro_choose_time_Sygen == 'multiple'",
sliderTextInput(inputId = "neuro_time_multiple_Sygen",
#label = "Time range:",
label = NULL,
choices = c("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52"),
selected = c("Week00", "Week52"),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, force_edges = T, width = NULL, pre = NULL,
post = NULL, dragRange = TRUE)
), # end conditionalPanel
conditionalPanel(condition = "input.neuro_choose_time_Sygen == 'single'",
sliderTextInput(inputId = "neuro_time_single_Sygen",
label = "Time point:",
choices = list("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52"),
selected = c("Week00"),
animate = F, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
to_max = NULL, force_edges = T, width = NULL, dragRange = TRUE)
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "year_neuro_sygen", # create new slider text
label = "Years of injury to display:", # label of the box
min = 1991, max = 1997,
value = c(1991,1997),
sep = "")
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_neuro_sygen", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 3,
value = 1)
), # end box
) #end column
) #close fluid row
), #end tabItem
tabItem(tabName = "funct_sygen",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_funct_sygen",
label = "Functional features:",
selected = "Benzel",
status = "success",
individual = T, #if false, then the boxes are connected
choiceNames = c('Modified Benzel score'),
choiceValues = c("Benzel")
) # Close radioGroupButtons bracket
), # Close div bracket
),
box(status = "primary", width = NULL,
div(plotlyOutput("plot.funct.sygen", width = "90%",
height = "660px",
inline = FALSE),
align='center')
), # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create box
radioButtons("cont_funct_Sygen",
label = "Type of plot ?",
choices = c("Trend plot" = 0, 'Boxplot' = 1),
selected = 0)
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("subset_funct_Sygen",
label = "Select subset of the cohort ?",
choices = c("No" = 1, 'Yes' = 0),
selected = 1), # checkbox to go to basic plot: no filters, all patients, x-axis=stages, y-axis=value of score
conditionalPanel(condition = "input.subset_funct_Sygen == 0", # if user decides to not display all data and apply filters, make a new panel appear
checkboxGroupInput(inputId = "checksub_funct_Sygen", # create new check box group
label = "Subsetting criteria:", # label of the box
choices = list("Sex" = '1', "Age at injury" = '2', "Cause of SCI" = '3', "AIS grade" = '4', "Level of injury" = '5'),
selected = c('1', '4')) # by default, sex and AIS grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_Sygen == 0 && input.checksub_funct_Sygen.includes('1')", # if user chooses to filter based on sex
checkboxGroupInput(inputId = "sex_funct_Sygen", # create new check box group
label = "Sex:", # label of the box
choices = list("Male", "Female"), # choices are male and female (match the levels in Sygen dataset)
selected = c("Male", "Female")) # by default, both male and female are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_Sygen == 0 && input.checksub_funct_Sygen.includes('2')", # if user chooses to filter based on age
checkboxGroupInput(inputId = "age_funct_Sygen", # create new check box group
label = "Age at injury:", # label of the box
choices = list("0-19", "20-39", "40-59", "60-79"),
selected = c("0-19", "20-39", "40-59", "60-79")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_Sygen == 0 && input.checksub_funct_Sygen.includes('3')", # if user chooses to filter based on cause of injury
checkboxGroupInput(inputId = "cause_funct_Sygen", # create new check box group
label = "Cause of injury:", # label of the box
choices = list("automobile", "blunt trauma", "fall", "gun shot wound", "motorcycle", "other sports", "others", "pedestrian", "water related"),
selected = c("automobile", "blunt trauma", "fall", "gun shot wound", "motorcycle", "other sports", "others", "pedestrian", "water related"))
), # end conditionalPanel
conditionalPanel(condition = "input.subset_funct_Sygen == 0 && input.checksub_funct_Sygen.includes('4')", # if user chooses to filter based on AIS grade
checkboxGroupInput(inputId = "grade_funct_Sygen", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D"), # choices (match the levels in Sygen dataset), missing AIS grades will automatically be removed
                                           selected = c("AIS A", "AIS B", "AIS C", "AIS D")) # by default, all non-missing grades are selected
), # end conditionalPanel
                     conditionalPanel(condition = "input.subset_funct_Sygen == 0 && input.checksub_funct_Sygen.includes('5')", # if user chooses to filter based on neurological level of injury
checkboxGroupInput(inputId = "level_funct_Sygen", # create new check box group
label = "Neurological level of injury:", # label of the box
choices = list("cervical", "thoracic"), # choices (match the levels in Sygen dataset)
selected = c("cervical", "thoracic")) # by default, all categories are selected
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_funct_Sygen",
label = "How do you want to visualise the data?",
choices = c("Default" = 1, 'Customize (display by subgroups)' = 0),
                                      selected = 1), # default: single overall plot, no subgroup breakdown
                     conditionalPanel(condition = "input.checkbox_funct_Sygen == 0", # if the user chooses to customize the visualisation, reveal the criteria panel
checkboxGroupInput(inputId = "checkGroup_funct_Sygen", # create new check box group
label = "Visualisation criteria:", # label of the box
choices = list("Sex" = '1',
"Age at injury" = '2',
"Cause of SCI" = '3',
"AIS grade" = '4',
"Level of injury" = '5'), # choices are the different filters that can be applied
selected = c('1','4')) # by default, sex and AIS grades are selected
                     ) # end conditionalPanel
), # end box
box(status = "primary", width = NULL,
radioButtons(inputId = "funct_choose_time_Sygen",
label = "Choose time displayed:",
choices = c("Single time point" = "single",
"Time range" = "multiple"),
selected = "multiple"),
conditionalPanel(condition = "input.funct_choose_time_Sygen == 'multiple'",
sliderTextInput(inputId = "funct_time_multiple_Sygen",
#label = "Time range:",
label = NULL,
choices = c("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52"),
selected = c("Week00", "Week52"),
                                                       animate = FALSE, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
                                                       to_fixed = FALSE, force_edges = TRUE, width = NULL, pre = NULL,
                                                       post = NULL, dragRange = TRUE)
), # end conditionalPanel
conditionalPanel(condition = "input.funct_choose_time_Sygen == 'single'",
sliderTextInput(inputId = "funct_time_single_Sygen",
label = "Time point:",
                                                       choices = c("Week00", "Week01", "Week04", "Week08", "Week16", "Week26", "Week52"),
selected = c("Week00"),
                                                       animate = FALSE, grid = TRUE, hide_min_max = FALSE, from_fixed = FALSE,
                                                       to_fixed = FALSE, from_min = NULL, from_max = NULL, to_min = NULL,
                                                       to_max = NULL, force_edges = TRUE, width = NULL, dragRange = TRUE)
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "year_funct_sygen", # create new slider text
label = "Years of injury to display:", # label of the box
min = 1991, max = 1997,
value = c(1991,1997),
sep = "")
), # end box
box(status = "primary", width = NULL, # create a new box
sliderInput(inputId = "binyear_funct_sygen", # create new slider text
label = "Number of years per bin:", # label of the box
min = 1, max = 3,
value = 1)
                     ) # end box
) #end column
) #close fluid row
), #end tabItem
tabItem(tabName = "neuro_all",
fluidRow(
column (width = 8,
box(status = "primary", width = NULL,
div(style="display:inline-block;width:100%;text-align:center;",
radioGroupButtons(
inputId = "var_neuro_all",
label = "Neurological features:",
selected = "UEMS",
status = "success",
                            individual = TRUE, # if FALSE, the buttons are joined into one connected block
choiceNames = c("UEMS", "LEMS",
"RMS", "LMS", "TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT"),
choiceValues = c("UEMS", "LEMS",
"RMS", "LMS", "TMS",
"RPP","LPP","TPP",
"RLT","LLT","TLT")
) # Close radioGroupButtons bracket
                     ) # Close div bracket
),
box(status = "primary", width = NULL,
div(plotOutput("plot.neuro.all", width = "90%",
height = "660px",
inline = FALSE),
align='center')
                     ) # end box
), # end column
column(width = 4, # create second column for second type of user inputs (filters)
box(status = "primary", width = NULL, # create box
radioButtons("subset_neuro_All",
                                      label = "Select a subset of the cohort?",
choices = c("No" = 1, 'Yes' = 0),
                                      selected = 1), # default: no subsetting, the whole cohort is shown
                     conditionalPanel(condition = "input.subset_neuro_All == 0", # if the user chooses to subset the cohort, reveal the filter panel
checkboxGroupInput(inputId = "checksub_neuro_All", # create new check box group
label = "Subsetting criteria:", # label of the box
choices = list("Sex" = '1', "Age at injury" = '2', "AIS grade" = '3', "Level of injury" = '4'),
                                           selected = NULL) # by default, no subsetting criteria are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_All == 0 && input.checksub_neuro_All.includes('1')", # if user chooses to filter based on sex
checkboxGroupInput(inputId = "sex_neuro_All", # create new check box group
label = "Sex:", # label of the box
choices = list("Male", "Female"), # choices are male and female (match the levels in All dataset)
selected = c("Male", "Female")) # by default, both male and female are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_All == 0 && input.checksub_neuro_All.includes('2')", # if user chooses to filter based on age
checkboxGroupInput(inputId = "age_neuro_All", # create new check box group
label = "Age at injury:", # label of the box
choices = list("0-19", "20-39", "40-59", "60-79", "80+"),
selected = c("0-19", "20-39", "40-59", "60-79", "80+")) # by default, all categories are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_All == 0 && input.checksub_neuro_All.includes('3')", # if user chooses to filter based on AIS grade
checkboxGroupInput(inputId = "grade_neuro_All", # create new check box group
label = "AIS grade:", # label of the box
choices = list("AIS A", "AIS B", "AIS C", "AIS D"), # choices (match the levels in All dataset), missing AIS grades will automatically be removed
                                           selected = c("AIS A", "AIS B", "AIS C", "AIS D")) # by default, all non-missing grades are selected
), # end conditionalPanel
conditionalPanel(condition = "input.subset_neuro_All == 0 && input.checksub_neuro_All.includes('4')", # if user chooses to filter based on neurological level of injury
checkboxGroupInput(inputId = "level_neuro_All", # create new check box group
label = "Neurological level of injury:", # label of the box
choices = list("cervical", "lumbar", 'sacral', "thoracic"), # choices (match the levels in All dataset)
selected = c("cervical", "lumbar", 'sacral', "thoracic")) # by default, all categories are selected
) # end conditionalPanel
), # end box
box(status = "primary", width = NULL, # create box
radioButtons("checkbox_neuro_All",
label = "How do you want to visualise the data?",
choices = c("Default" = 1, 'Customize (display by subgroups)' = 0),
                                      selected = 1), # default: single overall plot, no subgroup breakdown
                     conditionalPanel(condition = "input.checkbox_neuro_All == 0", # if the user chooses to customize the visualisation, reveal the criteria panel
checkboxGroupInput(inputId = "checkGroup_neuro_All", # create new check box group
label = "Visualisation criteria:", # label of the box
choices = list("Sex" = '1',
"Age at injury" = '2',
"AIS grade" = '3',
"Level of injury" = '4'), # choices are the different filters that can be applied
selected = c('1','4')) # by default, sex and AIS grades are selected
                     ) # end conditionalPanel
                     ) # end box
) # end column
) #close fluid row
), #end tabItem
tabItem(tabName = "abbreviations",
titlePanel(strong("Dictionary of abbreviations")),
fluidRow(
column(width = 6,
box(width = NULL, status = "primary",
h4(strong('General')),
p(strong('SCI'), 'spinal cord injury'),
                     p(strong(a('ASIA', href ="https://asia-spinalinjury.org/", target="_blank")), 'American Spinal Injury Association'),
                     p(strong(a('EMSCI', href ="https://www.emsci.org/", target="_blank")), 'European Multicenter Study about Spinal Cord Injury'),
p(strong('PBE'), 'practice-based evidence')
),
box(width = NULL, status = "primary",
h4(strong('Functional outcomes')),
p(strong(a('WISCI', href = "http://www.spinalcordcenter.org/research/wisci_guide.pdf", target="_blank")), 'walking index for spinal cord injury'),
                     p(strong(a('test_6min', href = "https://www.emsci.org/index.php/project/the-assessments/functional-test", target="_blank")), '6-minute walking test'),
                     p(strong(a('test_10m', href = "https://www.emsci.org/index.php/project/the-assessments/functional-test", target="_blank")), '10-meter walking test'),
p(strong(a('TUG', href = "https://www.emsci.org/index.php/project/the-assessments/functional-test", target="_blank")), 'timed up and go test'),
p(strong(a('SCIM2', href = "https://www.emsci.org/index.php/project/the-assessments/independence", target="_blank")), 'spinal cord independence measure type 2'),
p(strong(a('SCIM3', href = "https://www.emsci.org/index.php/project/the-assessments/independence", target="_blank")), 'spinal cord independence measure type 3'),
                     p(strong('benzel'), 'modified Benzel classification')
)
), # end column
column(width = 6,
box(width = NULL, status = "primary",
h4(strong(a('Neurological outcomes', href ="https://asia-spinalinjury.org/wp-content/uploads/2016/02/International_Stds_Diagram_Worksheet.pdf", target="_blank"))),
p(strong(a('AIS', href ='https://www.icf-casestudies.org/introduction/spinal-cord-injury-sci/american-spinal-injury-association-asia-impairment-scale#:~:text=The%20American%20Spinal%20Injury%20Association,both%20sides%20of%20the%20body', target="_blank")), 'ASIA impairment scale'),
p(strong('UEMS'), 'upper extremity motor score'),
p(strong('RUEMS'), 'right upper extremity motor score'),
p(strong('LUEMS'), 'left upper extremity motor score'),
p(strong('LEMS'), 'lower extremity motor score'),
p(strong('RLEMS'), 'right lower extremity motor score'),
p(strong('LLEMS'), 'left lower extremity motor score'),
p(strong('RMS'), 'right motor score'),
p(strong('LMS'), 'left motor score'),
p(strong('TMS'), 'total motor score'),
p(strong('RPP'), 'right pin prick'),
p(strong('LPP'), 'left pin prick'),
p(strong('TPP'), 'total pin prick'),
p(strong('RLT'), 'right light touch'),
p(strong('LLT'), 'left light touch'),
p(strong('TLT'), 'total light touch')
)
) # end column
) # end fluidRow
           ) # end tabItem
) # end tabitems
) #end dashboardBody
) # end dashboardPage
ui <- secure_app(ui)
# Server logic ----
server <- function(input, output) {
res_auth <- secure_server(
check_credentials = check_credentials(credentials)
)
output$auth_output <- renderPrint({
reactiveValuesToList(res_auth)
})
output$plot.funct.sygen <- renderPlotly({
years <- input$year_funct_sygen[1]:input$year_funct_sygen[2]
nb_bins <- input$binyear_funct_sygen
data_modified <- data_sygen[data_sygen$Year_of_injury %in% years, ]
##########################################################################################
vector_temp <- seq(input$year_funct_sygen[1],(input$year_funct_sygen[2]+input$year_funct_sygen[2]%%nb_bins),nb_bins)
vector_temp <- ifelse(vector_temp > input$year_funct_sygen[2],input$year_funct_sygen[2],vector_temp)
if (!(input$year_funct_sygen[2] %in% vector_temp)){
vector_temp <- append(vector_temp, input$year_funct_sygen[2])
}
data_modified$Year_of_injury_cat<-cut(data_modified$Year_of_injury,
c(unique(vector_temp)),
include.lowest = T, right = F)
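# Worked example of the binning above (a sketch, not app code): with the year
# slider at 1991-1997 and nb_bins = 3, the break points come out as
#   seq(1991, 1997 + 1997 %% 3, 3)  ->  c(1991, 1994, 1997)
# and cut(..., include.lowest = TRUE, right = FALSE) labels years as
#   cut(c(1991, 1995, 1997), c(1991, 1994, 1997),
#       include.lowest = TRUE, right = FALSE)
#   -> "[1991,1994)" "[1994,1997]" "[1994,1997]"
# i.e. bins are closed on the left, and the last bin also includes the top year.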
##########################################################################################
type_plot <- input$cont_funct_Sygen[1]
# FILTERING BASED ON STAGES TO DISPLAY #
list_all_stages <- c("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52") # store all potential stages
if (input$funct_choose_time_Sygen == 'single'){
times <- unique(input$funct_time_single_Sygen)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time)]
} else if (input$funct_choose_time_Sygen == 'multiple'){
times <- unique(input$funct_time_multiple_Sygen)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time[1]:indices_time[2])]
}
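# Sketch of the stage-range lookup above: which() returns the (sorted)
# positions of the two slider endpoints, and indexing between them expands
# the selection to every intermediate stage, e.g.
#   stages <- c("Week00", "Week01", "Week04", "Week08", "Week16", "Week26", "Week52")
#   idx <- which(stages %in% c("Week01", "Week26"))   # -> c(2, 6)
#   stages[idx[1]:idx[2]]
#   -> "Week01" "Week04" "Week08" "Week16" "Week26"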
if (input$subset_funct_Sygen == 0){
input_sex <- unique(input$sex_funct_Sygen)
input_age <- unique(input$age_funct_Sygen)
input_cause <- unique(input$cause_funct_Sygen)
input_ais <- unique(input$grade_funct_Sygen)
input_nli <- unique(input$level_funct_Sygen)
if (!('Unknown' %in% input_sex)){data_modified <- data_modified[data_modified$Sex %in% input_sex, ]}
if (!('Unknown' %in% input_age)){data_modified <- data_modified[data_modified$Age %in% input_age, ]}
if (!('Unknown' %in% input_ais)){data_modified <- data_modified[data_modified$AIS %in% input_ais, ]}
if (!('Unknown' %in% input_cause)){data_modified <- data_modified[data_modified$Cause %in% input_cause, ]}
if (!('Unknown' %in% input_nli)){data_modified <- data_modified[data_modified$NLI %in% input_nli, ]}
data_sygen_copy <- data_modified
} else {
data_sygen_copy <- data_modified
}
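# The filtering pattern above applies a column filter only when no 'Unknown'
# sentinel appears in the corresponding input; a minimal standalone illustration:
#   df <- data.frame(Sex = c("Male", "Female", "Male"))
#   input_sex <- c("Male")
#   if (!("Unknown" %in% input_sex)) df <- df[df$Sex %in% input_sex, , drop = FALSE]
#   nrow(df)  # -> 2
# With 'Unknown' present in input_sex, the subset step is skipped and all rows are kept.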
if (input$checkbox_funct_Sygen == 1){ # if user chooses to display all data
plot <- plot_base_Sygen(data_sygen_copy, score = input$var_funct_sygen, list_time, type_plot) # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_funct_Sygen == 0){ # if user chooses filters
if (!(length(input$checkGroup_funct_Sygen) == 2)){ # if user chooses something else than 2 filters
plot <- plot_error() # give a plot saying to choose 2 filters
}
else if (length(input$checkGroup_funct_Sygen) == 2){ # if user chooses exactly 2 filters
filters <- as.numeric(as.vector(unique(input$checkGroup_funct_Sygen))) # store filters the user has selected
list_all = list(c("Female", "Male"), # store all the different options for the different filters
c("0-19","20-39","40-59","60-79"),
c("automobile","blunt trauma","fall","gun shot wound","motorcycle","other sports","others","pedestrian","water related"),
c("AIS A", "AIS B", "AIS C", "AIS D"),
c("cervical", "thoracic"))
filter1_all <- as.vector(list_all[filters[1]][[1]]) # select all options for the first filter chosen by user
filter2_all <- as.vector(list_all[filters[2]][[1]]) # select all options for the second filter chosen by user
list_names = c("Sex", "Age", "Cause", "AIS", "NLI", "Country") # names of columns corresponding to the available filters
plot <- plot_filters_Sygen(data_sygen_copy, score = input$var_funct_sygen, # call function for Sygen plots in helper_functions.R
list_time,
list_names[filters[1]],
list_names[filters[2]],
filter1_all,
filter2_all, type_plot)
}
}
plot})
output$plot.funct.emsci <- renderPlotly({
years <- input$year_funct_emsci[1]:input$year_funct_emsci[2]
nb_bins <- input$binyear_funct_emsci
data_modified <- data_emsci[data_emsci$YEARDOI %in% years, ]
data_modified$YEARDOI_cat<-cut(data_modified$YEARDOI,
seq(input$year_funct_emsci[1],
input$year_funct_emsci[2]+input$year_funct_emsci[2]%%nb_bins,
nb_bins),
include.lowest = T, right = F)
type_plot <- input$cont_funct_emsci[1]
# FILTERING BASED ON STAGES TO DISPLAY #
list_all_stages <- c("very acute", "acute I", "acute II", "acute III", "chronic") # store all potential stages
if (input$funct_choose_time_EMSCI == 'single'){
times <- unique(input$funct_time_single_EMSCI)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time)]
} else if (input$funct_choose_time_EMSCI == 'multiple'){
times <- unique(input$funct_time_multiple_EMSCI)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time[1]:indices_time[2])]
}
if (input$subset_funct_EMSCI == 0){
input_sex <- unique(input$sex_funct_EMSCI)
input_age <- unique(input$age_funct_EMSCI)
input_cause <- unique(input$cause_funct_EMSCI)
input_ais <- unique(input$grade_funct_EMSCI)
input_nli <- unique(input$level_funct_EMSCI)
input_country <- unique(input$country_funct_EMSCI)
input_year <- unique(input$filteryear_funct_EMSCI) # needed below to build the year filter
if (!('Unknown' %in% input_sex)){data_modified <- data_modified[data_modified$Sex %in% input_sex, ]}
if (!('Unknown' %in% input_age)){data_modified <- data_modified[data_modified$age_category %in% input_age, ]}
if (!('Unknown' %in% input_ais)){data_modified <- data_modified[data_modified$AIS %in% input_ais, ]}
if (!('Unknown' %in% input_cause)){data_modified <- data_modified[data_modified$Cause %in% input_cause, ]}
if (!('Unknown' %in% input_nli)){data_modified <- data_modified[data_modified$NLI_level %in% input_nli, ]}
if (!('Unknown' %in% input_country)){data_modified <- data_modified[data_modified$Country %in% input_country, ]}
all_years <- c(2000, 2005, 2010, 2015, 2019)
temp <- which(all_years %in% unlist(input_year, use.names=FALSE))
int_filter = c(all_years[c(temp[1]:temp[2])])
vector_year <- c()
if (2000 %in% int_filter && 2005 %in% int_filter){vector_year <- c(vector_year, '2000-2004')}
if (2005 %in% int_filter && 2010 %in% int_filter){vector_year <- c(vector_year, '2005-2009')}
if (2010 %in% int_filter && 2015 %in% int_filter){vector_year <- c(vector_year, '2010-2014')}
if (2015 %in% int_filter && 2019 %in% int_filter){vector_year <- c(vector_year, '2015-2019')}
data_modified <- data_modified[data_modified$YEARDOI_cat %in% vector_year, ]
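# Sketch of the year-bin mapping above: the slider endpoints are expanded to
# the 5-year bin labels they span, e.g. with input_year = c(2005, 2019):
#   all_years <- c(2000, 2005, 2010, 2015, 2019)
#   temp <- which(all_years %in% c(2005, 2019))       # -> c(2, 5)
#   int_filter <- all_years[temp[1]:temp[2]]          # -> 2005 2010 2015 2019
# so vector_year becomes c('2005-2009', '2010-2014', '2015-2019'); a bin label
# is kept only when both of its edge years fall inside the selected range.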
data_emsci_copy <- data_modified
} else {
data_emsci_copy <- data_modified
}
if (input$checkbox_funct_EMSCI == 1){ # if user chooses to display all data
plot <- plot_base_emsci(data_emsci_copy, score = input$var_funct_emsci, list_time, type_plot) # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_funct_EMSCI == 0){ # if user chooses filters
if (!(length(input$checkGroup_funct_EMSCI) == 2)){ # if user chooses something else than 2 filters
plot <- plot_error() # give a plot saying to choose 2 filters
}
else if (length(input$checkGroup_funct_EMSCI) == 2){ # if user chooses exactly 2 filters
filters <- as.numeric(as.vector(unique(input$checkGroup_funct_EMSCI))) # store filters the user has selected
list_all = list(c("Male", "Female"), # store all the different options for the different filters
c(0,1,2,3,4),
c("disc herniation", "haemorragic", "ischemic", "traumatic", "other"),
c("AIS A", "AIS B", "AIS C", "AIS D"),
c("cervical", "thoracic", "lumbar", "sacral"),
c("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland"),
c('2000-2004', '2005-2009', '2010-2014', '2015-2019'))
filter1_all <- as.vector(list_all[filters[1]][[1]]) # select all options for the first filter chosen by user
filter2_all <- as.vector(list_all[filters[2]][[1]]) # select all options for the second filter chosen by user
list_names = c("Sex", "age_category", "Cause", "AIS", "NLI_level", "Country", "YEARDOI_cat") # names of columns corresponding to the available filters
plot <- plot_filters_emsci(data_emsci_copy, score = input$var_funct_emsci, # call function for emsci plots in helper_functions.R
list_time,
list_names[filters[1]],
list_names[filters[2]],
filter1_all,
filter2_all, type_plot)
}
}
plot})
output$plot.neuro.sygen <- renderPlotly({
# FILTERING BASED ON STAGES TO DISPLAY #
years <- input$year_neuro_sygen[1]:input$year_neuro_sygen[2]
nb_bins <- input$binyear_neuro_sygen
data_modified <- data_sygen[data_sygen$Year_of_injury %in% years, ]
##########################################################################################
vector_temp <- seq(input$year_neuro_sygen[1],(input$year_neuro_sygen[2]+input$year_neuro_sygen[2]%%nb_bins),nb_bins)
vector_temp <- ifelse(vector_temp > input$year_neuro_sygen[2],input$year_neuro_sygen[2],vector_temp)
if (!(input$year_neuro_sygen[2] %in% vector_temp)){
vector_temp <- append(vector_temp, input$year_neuro_sygen[2])
}
data_modified$Year_of_injury_cat<-cut(data_modified$Year_of_injury,
c(unique(vector_temp)),
include.lowest = T, right = F)
##########################################################################################
type_plot <- input$cont_neuro_Sygen[1]
list_all_stages <- c("Week00", "Week01", "Week04", 'Week08', 'Week16', "Week26", "Week52") # store all potential stages
if (input$neuro_choose_time_Sygen == 'single'){
times <- unique(input$neuro_time_single_Sygen)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time)]
} else if (input$neuro_choose_time_Sygen == 'multiple'){
times <- unique(input$neuro_time_multiple_Sygen)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time[1]:indices_time[2])]
}
if (input$subset_neuro_Sygen == 0){
input_sex <- unique(input$sex_neuro_Sygen)
input_age <- unique(input$age_neuro_Sygen)
input_cause <- unique(input$cause_neuro_Sygen)
input_ais <- unique(input$grade_neuro_Sygen)
input_nli <- unique(input$level_neuro_Sygen)
if (!('Unknown' %in% input_sex)){data_modified <- data_modified[data_modified$Sex %in% input_sex, ]}
if (!('Unknown' %in% input_age)){data_modified <- data_modified[data_modified$Age %in% input_age, ]}
if (!('Unknown' %in% input_ais)){data_modified <- data_modified[data_modified$AIS %in% input_ais, ]}
if (!('Unknown' %in% input_cause)){data_modified <- data_modified[data_modified$Cause %in% input_cause, ]}
if (!('Unknown' %in% input_nli)){data_modified <- data_modified[data_modified$NLI %in% input_nli, ]}
data_sygen_copy <- data_modified
} else {
data_sygen_copy <- data_modified
}
if (input$checkbox_neuro_Sygen == 1){ # if user chooses to display all data
plot <- plot_base_Sygen(data_sygen_copy, score = input$var_neuro_sygen, list_time, type_plot) # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_neuro_Sygen == 0){ # if user chooses filters
if (!(length(input$checkGroup_neuro_Sygen) == 2)){ # if user chooses something else than 2 filters
plot <- plot_error() # give a plot saying to choose 2 filters
}
else if (length(input$checkGroup_neuro_Sygen) == 2){ # if user chooses exactly 2 filters
filters <- as.numeric(as.vector(unique(input$checkGroup_neuro_Sygen))) # store filters the user has selected
list_all = list(c("Female", "Male"), # store all the different options for the different filters
c("0-19","20-39","40-59","60-79"),
c("automobile","blunt trauma","fall","gun shot wound","motorcycle","other sports","others","pedestrian","water related"),
c("AIS A", "AIS B", "AIS C", "AIS D"),
c("cervical", "thoracic"))
filter1_all <- as.vector(list_all[filters[1]][[1]]) # select all options for the first filter chosen by user
filter2_all <- as.vector(list_all[filters[2]][[1]]) # select all options for the second filter chosen by user
list_names = c("Sex", "Age", "Cause", "AIS", "NLI", "Country") # names of columns corresponding to the available filters
plot <- plot_filters_Sygen(data_sygen_copy, score = input$var_neuro_sygen, # call function for Sygen plots in helper_functions.R
list_time,
list_names[filters[1]],
list_names[filters[2]],
filter1_all,
filter2_all, type_plot)
}
}
plot})
output$plot.neuro.emsci <- renderPlotly({
years <- input$year_neuro_emsci[1]:input$year_neuro_emsci[2]
nb_bins <- input$binyear_neuro_emsci
data_modified <- data_emsci[data_emsci$YEARDOI %in% years, ]
data_modified$YEARDOI_cat<-cut(data_modified$YEARDOI,
seq(input$year_neuro_emsci[1],
input$year_neuro_emsci[2]+input$year_neuro_emsci[2]%%nb_bins,
nb_bins),
include.lowest = T, right = F)
type_plot <- input$cont_neuro_emsci[1]
# FILTERING BASED ON STAGES TO DISPLAY #
list_all_stages <- c("very acute", "acute I", "acute II", "acute III", "chronic") # store all potential stages
if (input$neuro_choose_time_EMSCI == 'single'){
times <- unique(input$neuro_time_single_EMSCI)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time)]
} else if (input$neuro_choose_time_EMSCI == 'multiple'){
times <- unique(input$neuro_time_multiple_EMSCI)
indices_time = which(list_all_stages %in% unlist(times, use.names=FALSE))
list_time = list_all_stages[c(indices_time[1]:indices_time[2])]
}
if (input$subset_neuro_EMSCI == 0){
input_sex <- unique(input$sex_neuro_EMSCI)
input_age <- unique(input$age_neuro_EMSCI)
input_cause <- unique(input$cause_neuro_EMSCI)
input_ais <- unique(input$grade_neuro_EMSCI)
input_nli <- unique(input$level_neuro_EMSCI)
input_country <- unique(input$country_neuro_EMSCI)
# input_year <- unique(input$filteryear_neuro_EMSCI)
if (!('Unknown' %in% input_sex)){data_modified <- data_modified[data_modified$Sex %in% input_sex, ]}
if (!('Unknown' %in% input_age)){data_modified <- data_modified[data_modified$age_category %in% input_age, ]}
if (!('Unknown' %in% input_ais)){data_modified <- data_modified[data_modified$AIS %in% input_ais, ]}
if (!('Unknown' %in% input_cause)){data_modified <- data_modified[data_modified$Cause %in% input_cause, ]}
if (!('Unknown' %in% input_nli)){data_modified <- data_modified[data_modified$NLI_level %in% input_nli, ]}
if (!('Unknown' %in% input_country)){data_modified <- data_modified[data_modified$Country %in% input_country, ]}
# all_years <- c(2000, 2005, 2010, 2015, 2019)
# temp <- which(all_years %in% unlist(input_year, use.names=FALSE))
# int_filter = c(all_years[c(temp[1]:temp[2])])
# vector_year <- c()
# if (2000 %in% int_filter && 2005 %in% int_filter){vector_year <- c(vector_year, '2000-2004')}
# if (2005 %in% int_filter && 2010 %in% int_filter){vector_year <- c(vector_year, '2005-2009')}
# if (2010 %in% int_filter && 2015 %in% int_filter){vector_year <- c(vector_year, '2010-2014')}
# if (2015 %in% int_filter && 2019 %in% int_filter){vector_year <- c(vector_year, '2015-2019')}
# data_modified <- data_modified[data_modified$YEARDOI_cat %in% vector_year, ]
data_emsci_copy <- data_modified
} else {
data_emsci_copy <- data_modified
}
if (input$checkbox_neuro_EMSCI == 1){ # if user chooses to display all data
plot <- plot_base_emsci(data_emsci_copy, score = input$var_neuro_emsci, list_time, type_plot) # display basic plot with all patients, and user selected stages
}
else if (input$checkbox_neuro_EMSCI == 0){ # if user chooses filters
if (!(length(input$checkGroup_neuro_EMSCI) == 2)){ # if user chooses something else than 2 filters
plot <- plot_error() # give a plot saying to choose 2 filters
}
else if (length(input$checkGroup_neuro_EMSCI) == 2){ # if user chooses exactly 2 filters
filters <- as.numeric(as.vector(unique(input$checkGroup_neuro_EMSCI))) # store filters the user has selected
list_all = list(c("Male", "Female"), # store all the different options for the different filters
c(0,1,2,3,4),
c("disc herniation", "haemorragic", "ischemic", "traumatic", "other"),
c("AIS A", "AIS B", "AIS C", "AIS D"),
c("cervical", "thoracic", "lumbar", "sacral"),
c("Austria", "Czech Republic", "France", "Germany",
"Great Britain", "India", "Italy", "Netherlands",
"Spain", "Switzerland"),
c('2000-2004', '2005-2009', '2010-2014', '2015-2019'))
filter1_all <- as.vector(list_all[filters[1]][[1]]) # select all options for the first filter chosen by user
filter2_all <- as.vector(list_all[filters[2]][[1]]) # select all options for the second filter chosen by user
list_names = c("Sex", "age_category", "Cause", "AIS", "NLI_level", "Country", "YEARDOI_cat") # names of columns corresponding to the available filters
plot <- plot_filters_emsci(data_emsci_copy, score = input$var_neuro_emsci, # call function for emsci plots in helper_functions.R
list_time,
list_names[filters[1]],
list_names[filters[2]],
filter1_all,
filter2_all, type_plot)
}
}
plot})
output$plot.neuro.all <- renderPlot({
if (input$subset_neuro_All == 0){
input_sex <- unique(input$sex_neuro_All)
input_age <- unique(input$age_neuro_All)
input_ais <- unique(input$grade_neuro_All)
input_nli <- unique(input$level_neuro_All)
data_modified <- data_emsci_sygen
if (!('Unknown' %in% input_sex)){data_modified <- data_modified[data_modified$Sex %in% input_sex, ]}
if (!('Unknown' %in% input_age)){data_modified <- data_modified[data_modified$Age %in% input_age, ]}
if (!('Unknown' %in% input_ais)){data_modified <- data_modified[data_modified$AIS %in% input_ais, ]}
if (!('Unknown' %in% input_nli)){data_modified <- data_modified[data_modified$NLI %in% input_nli, ]}
data_All_copy <- data_modified
} else {
data_All_copy <- data_emsci_sygen
}
if (input$checkbox_neuro_All == 1){ # if user chooses to display all data
plot <- plot_base_All(data_All_copy, score = input$var_neuro_all) # basic plot with all patients (no time selection for the combined dataset)
}
else if (input$checkbox_neuro_All == 0){ # if user chooses filters
if (!(length(input$checkGroup_neuro_All) == 2)){ # if user chooses something else than 2 filters
plot <- plot_error() # give a plot saying to choose 2 filters
}
else if (length(input$checkGroup_neuro_All) == 2){ # if user chooses exactly 2 filters
filters <- as.numeric(as.vector(unique(input$checkGroup_neuro_All))) # store filters the user has selected
list_all = list(c("Female", "Male"), # store all the different options for the different filters
c("12-19", "20-39", "40-59", "60-79", "80+"),
c("AIS A", "AIS B", "AIS C", "AIS D"),
c("cervical", "lumbar", "sacral", "thoracic"))
filter1_all <- as.vector(list_all[filters[1]][[1]]) # select all options for the first filter chosen by user
filter2_all <- as.vector(list_all[filters[2]][[1]]) # select all options for the second filter chosen by user
list_names = c("Sex", "Age", "AIS", "NLI") # names of columns corresponding to the available filters
plot <- plot_filters_All(data_All_copy, score = input$var_neuro_all, # call function for All plots in helper_functions.R
list_names[filters[1]],
list_names[filters[2]],
filter1_all,
filter2_all)
}
}
plot})
output$plot.epi.sygen <- renderPlot({
years <- input$year_epi_sygen[1]:input$year_epi_sygen[2]
nb_bins <- input$binyear_epi_sygen
data_modified <- data_sygen_epi[data_sygen_epi$yeardoi %in% years, ]
##########################################################################################
vector_temp <- seq(input$year_epi_sygen[1],(input$year_epi_sygen[2]+input$year_epi_sygen[2]%%nb_bins),nb_bins)
vector_temp <- ifelse(vector_temp > input$year_epi_sygen[2],input$year_epi_sygen[2],vector_temp)
if (!(input$year_epi_sygen[2] %in% vector_temp)){
vector_temp <- append(vector_temp, input$year_epi_sygen[2])
}
data_modified$YEARDOI_cat<-cut(data_modified$yeardoi,
c(unique(vector_temp)),
include.lowest = T, right = F)
##########################################################################################
if (input$var_epi_sygen == "sex_sygen"){
if (input$checkbox_epi_sygen == 1){ # if user chooses to display all data
plot <- plot_base_Sex_Sygen(data_modified, '') # basic sex-distribution plot for all patients
}
else if (input$checkbox_epi_sygen == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 1){
if (length(unique(input$grade_epi_sygen)) == 1){
plot <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), '')
} else if (length(unique(input$grade_epi_sygen)) == 2){
plot.1 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_sygen)) == 3){
plot.1 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_sygen)) == 4){
plot.1 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot.4 <- plot_base_Sex_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[4]), paste(input$grade_epi_sygen[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 2){
if (length(unique(input$paralysis_epi_sygen)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_Sex_Sygen(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_Sex_Sygen(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_Sex_Sygen(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_Sex_Sygen(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_sygen == "age_sygen"){
if (input$checkbox_epi_sygen == 1){ # if user chooses to display all data
plot <- plot_base_Age_Sygen(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_sygen == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 1){
if (length(unique(input$grade_epi_sygen)) == 1){
plot <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), '')
} else if (length(unique(input$grade_epi_sygen)) == 2){
plot.1 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_sygen)) == 3){
plot.1 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_sygen)) == 4){
plot.1 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot.4 <- plot_base_Age_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[4]), paste(input$grade_epi_sygen[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 2){
if (length(unique(input$paralysis_epi_sygen)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_Age_Sygen(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_Age_Sygen(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_Age_Sygen(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_Age_Sygen(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_sygen == 'ais_sygen') {
if (input$checkbox_epi_sygen == 1){ # if user chooses to display all data
plot <- plot_base_AIS_Sygen(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_sygen == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 2){
if (length(unique(input$paralysis_epi_sygen)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_AIS_Sygen(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_AIS_Sygen(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_AIS_Sygen(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_AIS_Sygen(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_sygen == "nli_sygen") {
if (input$checkbox_epi_sygen == 1){ # if user chooses to display all data
plot <- plot_base_NLI_Sygen(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_sygen == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 1){
if (length(unique(input$grade_epi_sygen)) == 1){
plot <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), '')
} else if (length(unique(input$grade_epi_sygen)) == 2){
plot.1 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_sygen)) == 3){
plot.1 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_sygen)) == 4){
plot.1 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[1]), paste(input$grade_epi_sygen[1]))
plot.2 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[2]), paste(input$grade_epi_sygen[2]))
plot.3 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[3]), paste(input$grade_epi_sygen[3]))
plot.4 <- plot_base_NLI_Sygen(subset(data_modified, ais1 == input$grade_epi_sygen[4]), paste(input$grade_epi_sygen[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_sygen))) == 2){
if (length(unique(input$paralysis_epi_sygen)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_NLI_Sygen(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_NLI_Sygen(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_NLI_Sygen(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_sygen))){
plot <- plot_base_NLI_Sygen(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
plot})
output$plot.epi.emsci <- renderPlot({
years <- c(unique(input$year_epi_emsci)[1]:unique(input$year_epi_emsci)[2])
nb_bins <- unique(input$binyear_epi_emsci)[1]
data_modified <- data_age_emsci[data_age_emsci$YEARDOI %in% years, ]
data_modified$YEARDOI_cat<-cut(data_modified$YEARDOI,
seq(input$year_epi_emsci[1],
input$year_epi_emsci[2]+input$year_epi_emsci[2]%%nb_bins,
nb_bins),
include.lowest = T, right = F)
if (input$var_epi_emsci == "sex_emsci"){
if (input$checkbox_epi_emsci == 1){ # if user chooses to display all data
plot <- plot_base_Sex_EMSCI(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_emsci == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 1){
if (length(unique(input$grade_epi_emsci)) == 1){
plot <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), '')
} else if (length(unique(input$grade_epi_emsci)) == 2){
plot.1 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_emsci)) == 3){
plot.1 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_emsci)) == 4){
plot.1 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot.4 <- plot_base_Sex_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[4]), paste(input$grade_epi_emsci[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 2){
if (length(unique(input$paralysis_epi_emsci)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_Sex_EMSCI_paralysis(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_Sex_EMSCI_paralysis(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_Sex_EMSCI_paralysis(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_Sex_EMSCI_paralysis(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_emsci == "age_emsci"){
if (input$checkbox_epi_emsci == 1){ # if user chooses to display all data
plot <- plot_base_Age_EMSCI(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_emsci == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 1){
if (length(unique(input$grade_epi_emsci)) == 1){
plot <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), '')
} else if (length(unique(input$grade_epi_emsci)) == 2){
plot.1 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_emsci)) == 3){
plot.1 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_emsci)) == 4){
plot.1 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot.4 <- plot_base_Age_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[4]), paste(input$grade_epi_emsci[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 2){
if (length(unique(input$paralysis_epi_emsci)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_Age_EMSCI(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_Age_EMSCI(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_Age_EMSCI(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_Age_EMSCI(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_emsci == 'ais_emsci') {
if (input$checkbox_epi_emsci == 1){ # if user chooses to display all data
plot <- plot_base_AIS_EMSCI(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_emsci == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 2){
if (length(unique(input$paralysis_epi_emsci)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_AIS_EMSCI(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_AIS_EMSCI(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_AIS_EMSCI(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_AIS_EMSCI(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
else if (input$var_epi_emsci == "nli_emsci") {
if (input$checkbox_epi_emsci == 1){ # if user chooses to display all data
plot <- plot_base_NLI_EMSCI(data_modified, '') # display basic plot with all patients
}
else if (input$checkbox_epi_emsci == 0){ # if user chooses filters
if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 1){
if (length(unique(input$grade_epi_emsci)) == 1){
plot <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), '')
} else if (length(unique(input$grade_epi_emsci)) == 2){
plot.1 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot <- grid.arrange(plot.1, plot.2, ncol=2)
} else if (length(unique(input$grade_epi_emsci)) == 3){
plot.1 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot <- grid.arrange(plot.1, plot.2, plot.3, ncol=2, nrow=2)
} else if (length(unique(input$grade_epi_emsci)) == 4){
plot.1 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[1]), paste(input$grade_epi_emsci[1]))
plot.2 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[2]), paste(input$grade_epi_emsci[2]))
plot.3 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[3]), paste(input$grade_epi_emsci[3]))
plot.4 <- plot_base_NLI_EMSCI(subset(data_modified, AIS == input$grade_epi_emsci[4]), paste(input$grade_epi_emsci[4]))
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
plot <- grid.arrange(plot.1, plot.2, plot.3, plot.4, ncol=2, nrow=2)
}
} else if (as.numeric(as.vector(unique(input$checkGroup_epi_emsci))) == 2){
if (length(unique(input$paralysis_epi_emsci)) == 2){
data_modified.tetra <-subset(data_modified, plegia =='tetra')
data_modified.para <-subset(data_modified, plegia =='para')
plot_tetra <- plot_base_NLI_EMSCI(data_modified.tetra, 'Tetraplegic')
plot_para <- plot_base_NLI_EMSCI(data_modified.para, 'Paraplegic')
plot <- grid.arrange(plot_tetra, plot_para, ncol=2)
} else {
if ('paraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_NLI_EMSCI(subset(data_modified, plegia =='para'), 'Paraplegic')
} else if ('tetraplegia' %in% as.vector(unique(input$paralysis_epi_emsci))){
plot <- plot_base_NLI_EMSCI(subset(data_modified, plegia =='tetra'), 'Tetraplegic')
}
}
}
}
}
plot})
output$title_predict <- renderText({paste("<h2><b>", "Monitoring of individual patient", "</b></h2>")})
output$plot_predict <- renderPlot({ # create output function for plot of interest
input_sex <- unique(input$select_sex)[1]
input_age <- unique(input$select_age)[1]
input_ais <- unique(input$select_ais)[1]
input_nli <- unique(input$select_nli)[1]
input_score <- unique(input$select_score)[1]
input_indscore <- unique(input$input_compscore)[1]
data_temp <- data_emsci_epi
data_temp$UEMS <- as.numeric(as.character(data_temp$UEMS))
data_temp$LEMS <- as.numeric(as.character(data_temp$LEMS))
data_temp$TMS <- as.numeric(as.character(data_temp$TMS))
data_modified <- data_temp
if (!(input_sex == 'Unknown')){
data_modified <- data_modified[data_modified$Sex %in% input_sex, ]
}
if (!(input_ais == 'Unknown')){
data_modified <- data_modified[data_modified$AIS %in% input_ais, ]
}
if (!(input_nli == 'Unknown')){
if (input_nli == 'Cervical'){
data_modified <- data_modified[data_modified$NLI_level %in% 'cervical', ]
} else if (input_nli == 'Thoracic'){
data_modified <- data_modified[data_modified$NLI_level %in% 'thoracic', ]
} else if (input_nli == 'Lumbar'){
data_modified <- data_modified[data_modified$NLI_level %in% 'lumbar', ]
} else {
data_modified <- data_modified[data_modified$NLI %in% input_nli, ]
}
}
if (!(input_age == 'Unknown')){
if (input_age == '0-19'){
data_modified <- data_modified[data_modified$AgeAtDOI>0 & data_modified$AgeAtDOI<20, ]
} else if (input_age == '20-39'){
data_modified <- data_modified[data_modified$AgeAtDOI>=20 & data_modified$AgeAtDOI<40, ]
} else if (input_age == '40-59'){
data_modified <- data_modified[data_modified$AgeAtDOI>=40 & data_modified$AgeAtDOI<60, ]
} else if (input_age == '60-79'){
data_modified <- data_modified[data_modified$AgeAtDOI>=60 & data_modified$AgeAtDOI<80, ]
} else if (input_age == '80+'){
data_modified <- data_modified[data_modified$AgeAtDOI>=80, ]
}
}
data_modified <- data_modified[data_modified$UEMS != 'NA', ]
data_modified <- data_modified[data_modified$UEMS != 'NT', ]
if (dim(data_modified)[1] == 0){
plot <- plot_error_data()
} else if (input_indscore == "Enter value..." || input_indscore == "") {
plot <- plot_predict_emsci(data_modified, input_score)
} else {
value <- as.numeric(as.character(input_indscore))
if (value < 0 || value > 50){
plot <- plot_error_value()
} else {
#plot <- plot_error_value()
plot <- plot_predict_emsci_NN(data_modified, input_score, value)
}
}
plot})
output$video <- renderUI({
tags$video(src = "video_version3.mp4", type = "video/mp4", autoplay = NA, controls = NA, height = 350, width = 750)
})
# observe({
# showNotification('Disclaimer:
# We cannot be held responsible for conclusions you might draw from the website.
# It is intended for data visualisation only.', type='error')
# })
observeEvent(input$preview, {
# Show a modal when the button is pressed
shinyalert("Disclaimer:
We cannot be held responsible for conclusions you might draw from the website.
It is intended for data visualisation only.", type = "error")
})
observeEvent(input$preview_predict, {
# Show a modal when the button is pressed
shinyalert("Disclaimer:
We cannot be held responsible for conclusions you might draw from the website.
It is intended for data visualisation only.", type = "error")
})
}
# Run app ----
shinyApp(ui, server)
library(glmnet)
mydata <- read.table("../../../../TrainingSet/FullSet/ReliefF/skin.csv", header = TRUE, sep = ",")
x <- as.matrix(mydata[, 4:ncol(mydata)])  # predictor features
y <- as.matrix(mydata[, 1])               # response
set.seed(123)                             # reproducible CV folds
glm <- cv.glmnet(x, y, nfolds = 10, type.measure = "mse", alpha = 0.15,
                 family = "gaussian", standardize = FALSE)
sink('./skin_030.txt',append=TRUE)
print(glm$glmnet.fit)
sink()
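The script above only prints the fitted path; a short follow-up sketch for inspecting the cross-validated model (assuming the `glm` object from the script is still in scope):

```r
library(glmnet)

plot(glm)                             # CV error curve against log(lambda)
best_lambda <- glm$lambda.min         # lambda minimizing cross-validated MSE
coefs <- coef(glm, s = "lambda.min")  # sparse coefficient vector at that lambda
selected <- rownames(coefs)[as.vector(coefs != 0)]  # features retained by the fit
print(best_lambda)
print(selected)
```

`lambda.1se` is the more conservative alternative often reported in place of `lambda.min`.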
# Source file: /Model/EN/ReliefF/skin/skin_030.R (repo: esbgkannan/QSMART)
library(glmnet)
mydata <- read.table("../../../../TrainingSet/FullSet/AvgRank/ovary.csv", header = TRUE, sep = ",")
x <- as.matrix(mydata[, 4:ncol(mydata)])  # predictor features
y <- as.matrix(mydata[, 1])               # response
set.seed(123)                             # reproducible CV folds
glm <- cv.glmnet(x, y, nfolds = 10, type.measure = "mae", alpha = 0.05,
                 family = "gaussian", standardize = FALSE)
sink('./ovary_024.txt',append=TRUE)
print(glm$glmnet.fit)
sink()
# Source file: /Model/EN/AvgRank/ovary/ovary_024.R (repo: esbgkannan/QSMART)
shinyUI(navbarPage(theme = shinytheme("cerulean"),
"Simplified Plant Dispatch",
tabPanel("Price simulations",
sidebarPanel(
helpText("Generate the hourly power curve for use
with dispatch optimization"),
sliderInput("simNum", label = h4("Number of Sims"),
min = 100, max = 1000, step = 100, value = 100),
numericInput("initPrice", label = h4("Initial price"),
min = 20, max = 80, step = 1, value = 30),
numericInput("annVol", label = h4("Annualized volatility"),
min = 0.1, max = 0.8, step = 0.05, value = 0.3),
actionButton("goButton1", "Simulate price")
),
mainPanel(
C3LineBarChartOutput("chart")
# tableOutput("table")
# plotOutput("chart")
))
)
)
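The corresponding `server.R` is not part of this file; a minimal sketch of one possible counterpart, assuming a drift-free geometric-Brownian-motion price simulation and substituting base `renderPlot` for the custom `C3LineBarChartOutput` widget used above:

```r
library(shiny)

shinyServer(function(input, output) {
  # Re-simulate only when the "Simulate price" button is pressed
  sims <- eventReactive(input$goButton1, {
    hours <- 24 * 7            # one week of hourly steps (assumed horizon)
    dt    <- 1 / (365 * 24)    # hourly time step in years
    vol   <- input$annVol
    # log-return shocks, one column per simulated path
    shocks <- matrix(rnorm(hours * input$simNum,
                           mean = -0.5 * vol^2 * dt,
                           sd   = vol * sqrt(dt)),
                     nrow = hours)
    input$initPrice * exp(apply(shocks, 2, cumsum))
  })
  output$chart <- renderPlot({
    matplot(sims(), type = "l", lty = 1, col = grey(0.4, 0.2),
            xlab = "Hour", ylab = "Price", main = "Simulated hourly prices")
  })
})
```

The horizon, time step, and chart type are illustrative assumptions; only the input IDs (`simNum`, `initPrice`, `annVol`, `goButton1`) and the `chart` output ID come from the UI definition above.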
# Source file: /ui.R (repo: saluteyang/symphonyPlantDispatch)
# Linear models
library(readr)
library(dplyr)
library(ggplot2)  # required by the ggplot() call at the end of this script
person <- read_csv(
file = 'data/census_pums/sample.csv',
col_types = cols_only(
AGEP = 'i', # Age
WAGP = 'd', # Wages or salary income past 12 months
SCHL = 'i', # Educational attainment
SEX = 'f', # Sex
OCCP = 'f', # Occupation recode based on 2010 OCC codes
WKHP = 'i')) # Usual hours worked per week past 12 months
person <- within(person, {
SCHL <- factor(SCHL)
levels(SCHL) <- list(
'Incomplete' = c(1:15),
'High School' = 16,
'College Credit' = 17:20,
'Bachelor\'s' = 21,
'Master\'s' = 22:23,
'Doctorate' = 24)}) %>%
filter(
WAGP > 0,
WAGP < max(WAGP, na.rm = TRUE))
# Formula Notation
fit <- lm(
formula = WAGP ~ SCHL,
data = person)
fit <- lm(
log(WAGP) ~ SCHL,
person)
summary(fit)
# Metadata matters
fit <- lm(
log(WAGP) ~ AGEP,
person)
summary(fit)
# GLM families
fit <- glm(log(WAGP) ~ SCHL,
family = gaussian,
data = person)
summary(fit)
# Logistic Regression
fit <- glm(SEX ~ WAGP,
family = binomial,
data = person)
levels(person$SEX)
anova(fit, update(fit, SEX ~ 1), test = 'Chisq')
# Random Intercept
library(lme4)
fit <- lmer(
log(WAGP) ~ (1|OCCP) + SCHL,
data = person)
# Random Slope
fit <- lmer(
log(WAGP) ~ (WKHP | SCHL),
data = person)
fit <- lmer(
log(WAGP) ~ (WKHP | SCHL),
data = person,
control = lmerControl(optimizer = "bobyqa"))
ggplot(person,
aes(x = WKHP, y = log(WAGP), color = SCHL)) +
geom_point() +
geom_line(aes(y = predict(fit))) +
labs(title = 'Random intercept and slope with lmer')
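Beyond `summary()`, the mixed models above expose their components through dedicated accessors; a brief sketch assuming the last `fit` returned by `lmer()` is in scope:

```r
library(lme4)

fixef(fit)    # fixed-effect estimates
ranef(fit)    # per-group random deviations (here, one row per SCHL level)
VarCorr(fit)  # estimated variance components of the random effects
# Profile confidence intervals are informative but can be slow on large data:
# confint(fit, method = "profile")
```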
# Source file: /worksheet-6.R (repo: zach10028/handouts)
#' National Eutrophication Survey Data
#'
#' A dataset containing hydrologic and water quality data for approximately 800
#' lakes in the continental United States.
#'
#' \tabular{ccc}{
#' variable name \tab description \tab units \cr
#' pdf \tab pdf identifier (474 - 477) \tab integer \cr
#' pagenum \tab page number of the pdf (not the report page number) \tab integer \cr
#' storet_code \tab identifier which links measurement to coordinate locations\tab character \cr
#' state \tab state where the water body resides \tab character \cr
#' name \tab name of the water body\tab character \cr
#' county \tab county where the water body resides \tab character \cr
#' lake_type \tab natural or impoundment \tab character \cr
#' drainage_area \tab the total drainage area \tab square kilometers \cr
#' surface_area \tab the area of the water surface\tab square kilometers \cr
#' mean_depth \tab the volume of the water body divided by the surface area in square meters\tab meters \cr
#' total_inflow \tab the mean of the inflows of all tributaries and the immediate drainage \tab cubic meters per second \cr
#' retention_time \tab a mean value determined by dividing the lake volume, in cubic meters, by the mean annual outflow \tab years or days \cr
#' retention_time_units \tab the units of time for each retention entry\tab years or days \cr
#' alkalinity \tab alkalinity\tab milligrams per liter \cr
#' conductivity \tab conductivity\tab micromhos \cr
#' secchi \tab Secchi disk depth\tab meters \cr
#' tp \tab total phosphorus\tab milligrams per liter \cr
#' po4 \tab orthophosphate\tab milligrams per liter \cr
#' tin \tab total inorganic nitrogen\tab milligrams per liter \cr
#' tn \tab total nitrogen\tab milligrams per liter \cr
#' p_pnt_source_muni \tab municipal point source phosphorus loading\tab kilograms per year \cr
#' p_pnt_source_industrial \tab industrial point source phosphorus loading\tab kilograms per year \cr
#' p_pnt_source_septic \tab septic point source phosphorus loading\tab kilograms per year \cr
#' p_nonpnt_source \tab nonpoint source phosphorus loading\tab kilograms per year \cr
#' p_total \tab total phosphorus loading\tab kilograms per year \cr
#' n_pnt_source_muni \tab municipal point source nitrogen loading\tab kilograms per year \cr
#' n_pnt_source_industrial \tab industrial point source nitrogen loading\tab kilograms per year \cr
#' n_pnt_source_septic \tab septic point source nitrogen loading\tab kilograms per year \cr
#' n_nonpnt_source \tab nonpoint source nitrogen loading\tab kilograms per year \cr
#' n_total \tab total nitrogen loading\tab kilograms per year \cr
#' p_total_out \tab total phosphorus outlet load\tab kilograms per year \cr
#' p_percent_retention \tab percent phosphorus retention\tab percent \cr
#' p_surface_area_loading \tab phosphorus surface area loading\tab grams per square meter per year \cr
#' n_total_out \tab total nitrogen outlet load\tab kilograms per year \cr
#' n_percent_retention \tab percent nitrogen retention\tab percent \cr
#' n_surface_area_loading \tab nitrogen surface area loading\tab grams per square meter per year \cr
#' lat \tab latitude\tab decimal degrees \cr
#' long \tab longitude\tab decimal degrees
#' }
#'
#' @aliases nes
#' @examples
#' data(nes)
#' head(nes)
"nes"
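A short usage sketch of the documented fields, beyond the `@examples` block (assuming the `nes` data frame and **dplyr** are available):

```r
library(dplyr)

data(nes)
# Summarize depth and total phosphorus by state, using fields documented above
nes %>%
  group_by(state) %>%
  summarise(mean_depth = mean(mean_depth, na.rm = TRUE),
            median_tp  = median(tp, na.rm = TRUE)) %>%
  arrange(desc(mean_depth))
```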
|
/R/nesRdata-package.R
|
no_license
|
jsta/nesRdata
|
R
| false
| false
| 3,495
|
r
|
# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393
simulateTrials <- function(allpaths, turnTimes, alpha, model, turnMethod) {
.Call(`_baseModels_simulateTrials`, allpaths, turnTimes, alpha, model, turnMethod)
}
getPathLikelihood <- function(allpaths, alpha, H, sim, model, policyMethod, epsilon = 0, endTrial = 0L) {
.Call(`_baseModels_getPathLikelihood`, allpaths, alpha, H, sim, model, policyMethod, epsilon, endTrial)
}
getProbMatrix <- function(allpaths, alpha, H, sim, model, policyMethod, epsilon = 0, endTrial = 0L) {
.Call(`_baseModels_getProbMatrix`, allpaths, alpha, H, sim, model, policyMethod, epsilon, endTrial)
}
getEpisodes <- function(allpaths) {
.Call(`_baseModels_getEpisodes`, allpaths)
}
rcpparma_hello_world <- function() {
.Call(`_baseModels_rcpparma_hello_world`)
}
rcpparma_outerproduct <- function(x) {
.Call(`_baseModels_rcpparma_outerproduct`, x)
}
rcpparma_innerproduct <- function(x) {
.Call(`_baseModels_rcpparma_innerproduct`, x)
}
rcpparma_bothproducts <- function(x) {
.Call(`_baseModels_rcpparma_bothproducts`, x)
}
getPathTimes <- function(allpaths, enreg_pos) {
.Call(`_baseModels_getPathTimes`, allpaths, enreg_pos)
}
empiricalProbMat <- function(allpaths, window) {
.Call(`_baseModels_empiricalProbMat`, allpaths, window)
}
empiricalProbMat2 <- function(allpaths, window) {
.Call(`_baseModels_empiricalProbMat2`, allpaths, window)
}
mseEmpirical <- function(allpaths, probMatrix_m1, movAvg, sim) {
.Call(`_baseModels_mseEmpirical`, allpaths, probMatrix_m1, movAvg, sim)
}
pathProbability <- function(allpaths, probMatrix_m1, sim) {
.Call(`_baseModels_pathProbability`, allpaths, probMatrix_m1, sim)
}
getBoxTimes <- function(enregPosTimes, rleLengths) {
.Call(`_baseModels_getBoxTimes`, enregPosTimes, rleLengths)
}
getTurnsFromPaths <- function(path, state) {
.Call(`_baseModels_getTurnsFromPaths`, path, state)
}
getTurnString <- function(turnNb) {
.Call(`_baseModels_getTurnString`, turnNb)
}
getTurnIdx <- function(turn, state) {
.Call(`_baseModels_getTurnIdx`, turn, state)
}
getTurnTimes <- function(allpaths, boxTimes, sim) {
.Call(`_baseModels_getTurnTimes`, allpaths, boxTimes, sim)
}
getComputationalActivity <- function(allpaths, probabilityMatrix) {
.Call(`_baseModels_getComputationalActivity`, allpaths, probabilityMatrix)
}
getPathFromTurns <- function(turns, state) {
.Call(`_baseModels_getPathFromTurns`, turns, state)
}
|
/Sources/lib/Base Models/baseModels/R/RcppExports.R
|
no_license
|
mattapattu/Rats-Credit
|
R
| false
| false
| 2,553
|
r
|
### Script for parallelization on the Ubuntu server (Ale)
# Packages for parallelization
require(plyr)
require(doMC)
# Script containing the simula.neutra.trade code
source("simula.neutra.trade_LEVE_dp0_sem_banco_sem_papi_rapido.R")
# Hypercube data
load("dados_arredond_hipercubo_25jul16.RData")
simula.parallel <- function(replica) {
res <- simula.neutra.trade(S = dados3_25jul16[replica,1],
j = round(5000/dados3_25jul16[replica,1]),
xi0 = rep(seq(1,20000,length.out = dados3_25jul16[replica,1]),each=round(5000/dados3_25jul16[replica,1])),
X = 20000,
dp = 0,
dist.pos = if(dados3_25jul16[replica,2]>0 & dados3_25jul16[replica,2]<3e5) round(seq(from = 3e5/(dados3_25jul16[replica,2]+1), to = 3e5-(3e5/(dados3_25jul16[replica,2]+1)), length.out = dados3_25jul16[replica,2])) else if(dados3_25jul16[replica,2]==0) NULL else seq(1,3e5,1),
dist.int = dados3_25jul16[replica,3],
ciclo = 3e5,
step = 100
)
return(res)
}
######## doMC and plyr
registerDoMC(4)
for (i in (seq(1,1000,8)[21:25]))
{
replica.sim <- as.list(i:(i+7))
resultados <- llply(.data = replica.sim, .fun = simula.parallel, .parallel = TRUE)
save(resultados,file=paste("resultados25jul16-",i,"_",i+7,".RData",sep=""))
}
|
/paralelizacao/simulacoes_25jul16/simula_parallel_servidor_25jul16_161-200.R
|
no_license
|
luisanovara/simula-neutra-step
|
R
| false
| false
| 1,425
|
r
|
# MISM6202 - Upload to Github ExCr
# R Assignment - Exercise
# Working with Probabilities and Distributions
## 84% of U.S. adults use Facebook.
## Consider a sample size of 100 American adults.
# What is the P that 70 American adults are Facebook users?
dbinom(70, 100, 0.84)
# What is the P that no more than 70 American adults are Facebook users?
pbinom(70, 100, 0.84)
# What is the P that at least 70 American adults are Facebook users?
1- pbinom(69, 100, 0.84)
######################
## If 1.5 Craft Breweries open every day...
# What is the probability that no more than 10 craft breweries open
# every week?
ppois(10, 10.5)
# What is the probability that exactly 10 craft breweries open
# every week?
dpois(10, 10.5)
## MEAN = 7.49%
## STDEV = 6.41%
# What is the chance that P(5 <= X <= 10)?
pnorm(10, 7.49, 6.41, lower.tail = TRUE) - pnorm(5, 7.49, 6.41, lower.tail = TRUE)
# What value of X marks the 90th percentile?
qnorm(0.90, 7.49, 6.41)
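# A small sanity-check sketch (not part of the original exercise): the
# "at least 70" tail probability can equivalently be written with
# lower.tail = FALSE, and the weekly Poisson rate used above follows from
# 1.5 openings per day over 7 days.

```r
# Complement form and upper-tail form of the binomial give the same answer.
p_at_least_70 <- 1 - pbinom(69, 100, 0.84)
stopifnot(isTRUE(all.equal(p_at_least_70,
                           pbinom(69, 100, 0.84, lower.tail = FALSE))))
# Weekly Poisson rate: 1.5 craft breweries/day * 7 days = 10.5/week.
stopifnot(1.5 * 7 == 10.5)
```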
|
/ExCr-MISM6202.R
|
no_license
|
heykarnold2010/R-ExCr
|
R
| false
| false
| 967
|
r
|
## Wiki UFC Fight Results Scraper.
## This will scrape the results of every UFC fight from Wikipedia.
## Outputs are data_frame object "boutsdf", and vector "fighterlinksvect".
## Install packages and do some prep work ----
# Read in required packages & files.
library(rvest)
library(dplyr)
library(magrittr)
library(stringr)
source("~/mma_scrape/wiki_ufcbouts_functions.R")
datafile <- "~/mma_scrape/0-ufc_bouts.RData"
if (file.exists(datafile)) {
load("~/mma_scrape/0-ufc_bouts.RData")
}
# Pull html from wiki page of all UFC events.
cards <- xml2::read_html("https://en.wikipedia.org/wiki/List_of_UFC_events")
# Extract all url strings, then append "wikipedia.org" to each string.
cardlinks <- cards %>%
html_nodes('td:nth-child(2) a') %>%
html_attr('href') %>%
unique() %>%
paste0("https://en.wikipedia.org", .)
# Remove links for fight events that were canceled and thus have no
# fight results.
cardlinks <- cardlinks[!grepl("UFC_176|UFC_151|Lamas_vs._Penn", cardlinks)]
# If the wiki_ufcbouts DB already exists as an RData file, edit cardlinks to
# only include urls that do not appear in the existing wiki_ufcbouts DB
# (intention is to only scrape fight results that are new and haven't previously
# been scraped).
if (file.exists(datafile)) {
cardlinks <- cardlinks[!cardlinks %in% unique(boutsdf$wikilink)]
}
# Create vector of all of the months of the year
# (with a single trailing space appended to each month).
allMonths <- paste0(
months(seq.Date(as.Date("2015-01-01"), as.Date("2015-12-31"), "month")), " ")
# Create vector of countries that have hosted UFC events.
countries <- cards %>%
html_nodes('td:nth-child(5)') %>%
html_text() %>%
sapply(., function(x) tail(strsplit(x, ", ")[[1]], n=1)) %>%
unname() %>%
c(., "England") %>%
unique()
## Start the scraping ----
# Scraping is all contained within the for loop below.
# Object fighterlinks will house urls for each fighter the scraper encounters
# that has a wiki page. Those urls will be used to facilitate scraping within
# the file "1-wiki_ufcfighters.R".
# Turn off warnings, they will be turned back on after the loop finishes.
oldw <- getOption("warn")
options(warn = -1)
bouts <- data_frame()
fighterlinks <- vector()
for (i in cardlinks) {
# Read html of the specific fight card.
html <- xml2::read_html(i)
# Record wiki links of all the fighters on the card.
fighterlinks <- c(fighterlinks, html %>%
html_nodes('.toccolours td a') %>%
html_attr('href'))
# Extract all tables within page, then do NA checks.
tables <- html %>%
html_nodes('table')
# If tables is all NA's or empty, skip to the next iteration of cardlinks.
if (all(is.na(tables)) || length(tables) < 1) {next}
# If tables contains any NA's, eliminate them and continue.
if (any(is.na(tables))) {
tables <- tables[!is.na(tables)]
}
# ID fight results tables and info tables for each card within a single url.
resultsnum <- getTables(tables)
infonum <- resultsnum - 2
for (t in seq_len(length(infonum))) {
test <- tables %>%
extract2(infonum[t]) %>%
html_table(fill = TRUE) %>%
colnames() %>%
extract2(1)
if (test == "X1") {
infonum[t] <- infonum[t] - 1
}
}
# Get vector of event names and check to ensure each one doesn't
# already exist in the bouts df. If it does, then delete it from
# nameVect and edit resultsnum and infonum to delete the associated
# table index from those two vectors.
# Additionally, if the event associated with i is in bouts df, use
# approximate string matching to ID the event name from the url string, then
# find that event within bouts df and replace the existing url with i.
nameVect <- getEventNames(tables, infonum) %>%
sapply(., function(x) utfConvert(x)) %>%
unname()
if (any(nameVect %in% unique(bouts$Event))) {
ids <- which(nameVect %in% unique(bouts$Event))
}
if (file.exists(datafile)) {
if (any(nameVect %in% unique(boutsdf$Event))) {
if (exists("ids")) {
ids <- c(ids, which(nameVect %in% unique(boutsdf$Event)))
} else {
ids <- which(nameVect %in% unique(boutsdf$Event))
}
}
}
if (exists("ids")) {
# Reassign the url string within the obs of bouts that are associated with
# the event of the current iteration of i.
urlname <- strsplit(i, "wiki/", fixed = TRUE)[[1]][2] %>%
gsub("_", " ", .)
ranks <- adist(urlname, nameVect, partial = TRUE, ignore.case = TRUE) %>%
as.vector
if (any(bouts$Event == nameVect[which(ranks == min(ranks))])) {
bouts[bouts$Event == nameVect[which(ranks == min(ranks))], ]$wikilink <- i
}
# Delete elements that already exist within bouts or boutsdf.
nameVect <- nameVect[-ids]
resultsnum <- resultsnum[-ids]
infonum <- infonum[-ids]
rm(ids)
}
# Check to make sure at least one table has been positively IDed.
if (length(resultsnum) < 1 || any(is.na(resultsnum)) || all(resultsnum == 0)) {next}
# Extract the dates that the events took place, record the index of the
# table within which the event date was found, and record an indication of
# the format of the date.
dateVect <- getDateBouts(tables, infonum)
# Get vector of venue names.
venueVect <- getVenue(tables, infonum, dateVect[[2]])
# Get city, state, and country, saved as elements of a list.
locVect <- getLoc(tables, infonum, dateVect[[2]])
# Create holder df, which will house all scraped data for the events on
# a single page, then rbind them to bouts df.
holderdf <- appendDF(tables, resultsnum, nameVect, dateVect, venueVect,
locVect, i)
bouts <- rbind(bouts, holderdf)
}
# Turn global warnings back on
options(warn = oldw)
## Transformations and Clean Up ----
# If the wiki_ufcbouts DB already exists as an RData file, eliminate elements
# of the newly scraped fighterlinks that appear in fighterlinksvect.
if (file.exists(datafile)) {
fighterlinks <- fighterlinks[!fighterlinks %in% unique(fighterlinksvect)]
}
# Eliminate all "TBA" and "TBD" entries within fighterlinks.
fighterlinks <- fighterlinks[!grepl("wiki/TBA|wiki/TBD", fighterlinks)]
# Reset the row indices
rownames(bouts) <- NULL
# Eliminate all observations associated with future UFC events.
bouts <- bouts[bouts$Result != "" & !is.na(bouts$Result), ]
# Change all values of "N/A" to NA in the character columns.
charcols <- sapply(bouts, is.character)
bouts[charcols] <- lapply(bouts[charcols],
                          function(x) replace(x, x == "N/A" & !is.na(x), NA))
# Eliminate variable "Notes".
bouts <- bouts[, -c(which(colnames(bouts) == "Notes"))]
# Specify encoding for strings (to facilitate matching).
# The raw non-US fighter/location strings are very messy, and the application of
# accent marks is inconsistent. Two step process for each variable, first step
# converts from utf-8 to LATIN1. Second step converts to ASCII//TRANSLIT,
# thereby eliminating accent marks.
ids <- which(colnames(bouts) %in% c("Weight", "FighterA", "FighterB", "Event",
"Venue", "City"))
bouts[, ids] <- lapply(bouts[, ids], utfConvert)
# Create two new variables, "champPost" and "interimChampPost". The idea is
# that if a championship belt or an interim championship belt is awarded as
# the result of a fight, the belt winner's name will be stored in the
# corresponding variable.
if (file.exists(datafile)) {
vects <- getInterimChampLabels(bouts, datafile = boutsdf)
bouts$interimChampPost <- vects[[1]]
boutsdf$interimChampPost <- vects[[2]]
vects <- getChampLabels(bouts, datafile = boutsdf)
bouts$champPost <- vects[[1]]
boutsdf$champPost <- vects[[2]]
} else {
bouts$interimChampPost <- getInterimChampLabels(bouts)
bouts$champPost <- getChampLabels(bouts)
}
# Eliminate all championship tags from strings within variables
# FighterA, FighterB, champPost, and interimChampPost.
ids <- which(colnames(bouts) %in% c("FighterA", "FighterB", "champPost",
"interimChampPost"))
bouts[, ids] <- lapply(
bouts[, ids], function(x)
gsub(" \\(Fighter)| \\(c)| \\(ic)| \\(UFC Champion)| \\(Pride Champion)",
"", x, ignore.case = TRUE))
# Add variable "TotalSeconds" showing total seconds of fight time for each bout.
bouts$Round <- as.double(bouts$Round)
bouts$TotalSeconds <- mapply(function(x, y)
boutSeconds(x, y), bouts$Time, bouts$Round, USE.NAMES = FALSE)
# Split the Result variable into two separate variables ("Result" & "Subresult")
rsltsplit <- unname(sapply(bouts$Result, vectSplit))
bouts$Result <- unlist(rsltsplit[1, ])
bouts$Subresult <- unlist(rsltsplit[2, ])
# For all fights that ended in a draw, edit variable subresult
# to include the word "draw".
for (i in seq_len(nrow(bouts))) {
if (bouts$Result[i] == "Draw" &&
!grepl("draw", bouts$Subresult[i]) &&
!is.na(bouts$Subresult[i])) {
bouts$Subresult[i] <- paste0(bouts$Subresult[i], " draw")
} else if (bouts$Result[i] == "Draw" &&
is.na(bouts$Subresult[i])) {
bouts$Subresult[i] <- "draw"
}
}
# For any remaining NA's within variable Subresult, attempt to
# unpack info from variable Result to fill in that NA gap.
for (i in seq_len(nrow(bouts))) {
if (is.na(bouts$Subresult[i]) &&
grepl("unanimous|split|majority", bouts$Result[i], ignore.case = TRUE)) {
holder <- unlist(strsplit(bouts$Result[i], " "))
if (any(grepl("draw", holder, ignore.case = TRUE, fixed = TRUE)) &&
length(holder) > 1) {
bouts$Result[i] <- "Draw"
bouts$Subresult[i] <- tolower(paste(holder[1], holder[2]))
} else if (any(grepl("decision", holder, ignore.case = TRUE, fixed = TRUE)) &&
length(holder) > 1) {
bouts$Result[i] <- "Decision"
bouts$Subresult[i] <- tolower(holder[1])
}
}
}
# Clean up variables Result and Subresult by combining similar values.
if (any(grepl("submission", bouts$Result, fixed = TRUE))) {
bouts[grepl("submission", bouts$Result, fixed = TRUE), ]$Result <- "Submission"
}
if (any(grepl("dq|DQ", bouts$Result))) {
bouts[grepl("dq|DQ", bouts$Result), ]$Result <- "Disqualification"
}
if (!is.na(match("rear naked choke", bouts$Subresult))) {
bouts[which(bouts$Subresult == "rear naked choke"), ]$Subresult <-
"rear-naked choke"
}
# Add variables for all types of over/under, ITD, and ended in r1-r5.
# For these variables, r = "round", ITD = "inside the distance".
ids <- ncol(bouts)
bouts$over1.5r <- ifelse(bouts$TotalSeconds > 450, 1, 0)
bouts$over2.5r <- ifelse(bouts$TotalSeconds > 750, 1, 0)
bouts$over3.5r <- ifelse(bouts$TotalSeconds > 1050, 1, 0)
bouts$over4.5r <- ifelse(bouts$TotalSeconds > 1350, 1, 0)
bouts$ITD <- ifelse(
!grepl("^Decision|Draw|No Contest", bouts$Result), 1, ifelse(
grepl("No Contest", bouts$Result) &
!bouts$TotalSeconds %in% c(900, 1500), 1, 0))
bouts$r1Finish <- ifelse(bouts$Round == 1, 1, 0)
bouts$r2Finish <- ifelse(bouts$Round == 2, 1, 0)
bouts$r3Finish <- ifelse(bouts$ITD == 1 & bouts$Round == 3, 1, 0)
bouts$r4Finish <- ifelse(bouts$Round == 4, 1, 0)
bouts$r5Finish <- ifelse(bouts$ITD == 1 & bouts$Round == 5, 1, 0)
bouts[which(bouts$Result == "No Contest" &
bouts$Subresult != "overturned"), (ids + 1):ncol(bouts)] <- NA
bouts[is.na(bouts$Round), (ids + 1):ncol(bouts)] <- NA
# Reorder the variables, and eliminate unwanted variables.
if (file.exists(datafile)) {
goodCols <- colnames(boutsdf)
} else {
goodCols <- c("Weight",
"FighterA",
"VS",
"FighterB",
"Result",
"Subresult",
"Round",
"Time",
"TotalSeconds",
"Event",
"Date",
"Venue",
"City",
"State",
"Country",
"champPost",
"interimChampPost",
"wikilink",
"over1.5r",
"over2.5r",
"over3.5r",
"over4.5r",
"ITD",
"r1Finish",
"r2Finish",
"r3Finish",
"r4Finish",
"r5Finish")
}
bouts <- subset(bouts, select = goodCols)
# If updating an existing UFC bouts dataset, remove any observations within the
# newly scraped data in which the event name appear in the existing UFC bouts
# dataset.
if (file.exists(datafile)) {
bouts <- bouts[!bouts$Event %in% unique(boutsdf$Event), ]
}
## Write results to file ----
# If UFC bouts dataset already exists in directory "mma_scrape", append the
# newly scraped results to objects "boutsdf" and "fighterlinksvect", then save
# both as an RData file.
# Otherwise, save the newly scraped results as an RData file.
# The RData file will be sourced at the top of R file "1-wiki_ufcfighters.R",
# and that file will make use of object "fighterlinksvect".
if (!file.exists(datafile)) {
boutsdf <- bouts
fighterlinksvect <- fighterlinks
save(boutsdf, fighterlinksvect, file = "~/mma_scrape/0-ufc_bouts.RData")
} else if (identical(colnames(bouts), colnames(boutsdf))) {
boutsdf <- rbind(bouts, boutsdf)
fighterlinksvect <- c(fighterlinks, fighterlinksvect)
save(boutsdf, fighterlinksvect, file = "~/mma_scrape/0-ufc_bouts.RData")
} else {
writeLines(c("ERROR: rbind of new data with old data failed,",
"columns of the two data frames do not align."))
}
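# A quick arithmetic sketch (not part of the original scraper) confirming the
# over/under cutoffs used for over1.5r through over4.5r above: UFC rounds last
# 5 minutes (300 seconds), so each cutoff is a round midpoint in seconds.

```r
# Each round is 300 seconds; the over/under lines sit at half-round marks.
round_seconds <- 300
cutoffs <- c(1.5, 2.5, 3.5, 4.5) * round_seconds
stopifnot(identical(cutoffs, c(450, 750, 1050, 1350)))
# Full-distance 3- and 5-round fights last 900 and 1500 seconds, as used
# in the ITD (inside-the-distance) check.
stopifnot(3 * round_seconds == 900, 5 * round_seconds == 1500)
```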
|
/mma_scrape/0-wiki_ufcbouts.R
|
no_license
|
JamesSkane/MMA-Data-Scrape
|
R
| false
| false
| 13,551
|
r
|
## Wiki UFC Fight Results Scraper.
## This will scrape the results of every UFC fight from Wikipedia.
## Outputs are data_frame object "boutsdf", and vector "fighterlinksvect".
## Install packages and do some prep work ----
# Read in required packages & files.
library(rvest)
library(dplyr)
library(magrittr)
library(stringr)
source("~/mma_scrape/wiki_ufcbouts_functions.R")
datafile <- "~/mma_scrape/0-ufc_bouts.RData"
if (file.exists(datafile)) {
load("~/mma_scrape/0-ufc_bouts.RData")
}
# Pull html from wiki page of all UFC events.
cards <- xml2::read_html("https://en.wikipedia.org/wiki/List_of_UFC_events")
# Extract all url strings, then append "wikipedia.org" to each string.
cardlinks <- cards %>%
html_nodes('td:nth-child(2) a') %>%
html_attr('href') %>%
unique() %>%
paste0("https://en.wikipedia.org", .)
# Remove links for fight events that were canceled and thus have no
# fight results.
cardlinks <- cardlinks[!grepl("UFC_176|UFC_151|Lamas_vs._Penn", cardlinks)]
# If the wiki_ufcbouts DB already exists as an RData file, edit cardlinks to
# only include urls that do not appear in the existing wiki_ufcbouts DB
# (intention is to only scrape fight results that are new and haven't previously
# been scraped).
if (file.exists(datafile)) {
cardlinks <- cardlinks[!cardlinks %in% unique(boutsdf$wikilink)]
}
# Create vector of all of the months of the year
# (with a single trailing space appended to each month).
allMonths <- paste0(
months(seq.Date(as.Date("2015-01-01"), as.Date("2015-12-31"), "month")), " ")
# Create vector of countries that have hosted UFC events.
countries <- cards %>%
html_nodes('td:nth-child(5)') %>%
html_text() %>%
sapply(., function(x) tail(strsplit(x, ", ")[[1]], n=1)) %>%
unname() %>%
c(., "England") %>%
unique()
## Start the scraping ----
# Scraping is all contained within the for loop below.
# Object fighterlinks will house urls for each fighter the scraper encounters
# that has a wiki page. Those urls will be used to facilitate scraping within
# the file "1-wiki_ufcfighters.R".
# Turn off warnings, they will be turned back on after the loop finishes.
oldw <- getOption("warn")
options(warn = -1)
bouts <- data_frame()
fighterlinks <- vector()
for (i in cardlinks) {
# Read html of the specific fight card.
html <- xml2::read_html(i)
# Record wiki links of all the fighters on the card.
fighterlinks <- c(fighterlinks, html %>%
html_nodes('.toccolours td a') %>%
html_attr('href'))
# Extract all tables within page, then do NA checks.
tables <- html %>%
html_nodes('table')
# If tables is all NA's or empty, skip to the next iteration of cardlinks.
if (all(is.na(tables)) || length(tables) < 0) {next}
# If tables contains any NA's, eliminate them and continue.
if (any(is.na(tables))) {
tables <- tables[!is.na(tables)]
}
# ID fight results tables and info tables for each card within a single url.
resultsnum <- getTables(tables)
infonum <- resultsnum - 2
for (t in seq_len(length(infonum))) {
test <- tables %>%
extract2(infonum[t]) %>%
html_table(fill = TRUE) %>%
colnames() %>%
extract2(1)
if (test == "X1") {
infonum[t] <- infonum[t] - 1
}
}
# Get vector of event names and check to ensure each one dosen't
# already exist in the bouts df. If it does, then delete it from
# nameVect and edit resultsnum and infonum to delete the associated
# table index from those two vectors.
# Additionally, if the event associated with i is in bouts df, use
# approximate string matching to ID the event name from the url string, then
# find that event within bouts df and replace the existing url with i.
nameVect <- getEventNames(tables, infonum) %>%
sapply(., function(x) utfConvert(x)) %>%
unname()
if (any(nameVect %in% unique(bouts$Event))) {
ids <- which(nameVect %in% unique(bouts$Event))
}
if (file.exists(datafile)) {
if (any(nameVect %in% unique(boutsdf$Event))) {
if (exists("ids")) {
ids <- c(ids, which(nameVect %in% unique(boutsdf$Event)))
} else {
ids <- which(nameVect %in% unique(boutsdf$Event))
}
}
}
if (exists("ids")) {
# Reassign the url string within the obs of bouts that are associated with
# the event of the current iteration of i.
urlname <- strsplit(i, "wiki/", fixed = TRUE)[[1]][2] %>%
gsub("_", " ", .)
ranks <- adist(urlname, nameVect, partial = TRUE, ignore.case = TRUE) %>%
as.vector
if (any(bouts$Event == nameVect[which(ranks == min(ranks))])) {
bouts[bouts$Event == nameVect[which(ranks == min(ranks))], ]$wikilink <- i
}
# Delete elements that already exist within bouts or boutsdf.
nameVect <- nameVect[-ids]
resultsnum <- resultsnum[-ids]
infonum <- infonum[-ids]
rm(ids)
}
# Check to make sure at least one table has been positively IDed.
if (is.na(resultsnum) || length(resultsnum) < 1 || resultsnum == 0) {next}
# Extract the dates that the events took place, record the index of the
# table within which the event date was found, and record and indication as to
# the format of the date.
dateVect <- getDateBouts(tables, infonum)
# Get vector of venue names.
venueVect <- getVenue(tables, infonum, dateVect[[2]])
# Get city, state, and country, saved as elements of a list.
locVect <- getLoc(tables, infonum, dateVect[[2]])
# Create holder df, which will house all scraped data for the events on
# a single page, then rbind them to bouts df.
holderdf <- appendDF(tables, resultsnum, nameVect, dateVect, venueVect,
locVect, i)
bouts <- rbind(bouts, holderdf)
}
# Turn global warnings back on
options(warn = oldw)
## Transformations and Clean Up ----
# If the wiki_ufcbouts DB already exists as an RData file, eliminate elements
# of the newly scraped fighterlinks that appear in fighterlinksvect.
if (file.exists(datafile)) {
fighterlinks <- fighterlinks[!fighterlinks %in% unique(fighterlinksvect)]
}
# Eliminate all "TBA" and "TBD" entries within fighterlinks.
fighterlinks <- fighterlinks[!grepl("wiki/TBA|wiki/TBD", fighterlinks)]
# Reset the row indices
rownames(bouts) <- NULL
# Eliminate all observations associated with future UFC events.
bouts <- bouts[bouts$Result != "" & !is.na(bouts$Result), ]
# Change all values of "N/A" and to be NA.
bouts[bouts[sapply(bouts, is.character)] == "N/A" &
!is.na(bouts[sapply(bouts, is.character)]),
grep("N/A", bouts, fixed = TRUE)] <- NA
# Eliminate variable "Notes".
bouts <- bouts[, -c(which(colnames(bouts) == "Notes"))]
# Specify encoding for strings (to facilitate matching).
# The raw non-US fighter/location strings are very messy, and the application of
# accent marks is inconsistent. Two step process for each variable, first step
# converts from utf-8 to LATIN1. Second step converts to ASCII//TRANSLIT,
# thereby eliminating accent marks.
ids <- which(colnames(bouts) %in% c("Weight", "FighterA", "FighterB", "Event",
"Venue", "City"))
bouts[, ids] <- lapply(bouts[, ids], utfConvert)
# Create two new variables, "champPost" and "interimChampPost". Idea is that if
# a championship belt or an interim championship belt awarded as the result
# of a fight, the belt winners name will be appended to this variable.
if (file.exists(datafile)) {
vects <- getInterimChampLabels(bouts, datafile = boutsdf)
bouts$interimChampPost <- vects[[1]]
boutsdf$interimChampPost <- vects[[2]]
vects <- getChampLabels(bouts, datafile = boutsdf)
bouts$champPost <- vects[[1]]
boutsdf$champPost <- vects[[2]]
} else {
bouts$interimChampPost <- getInterimChampLabels(bouts)
bouts$champPost <- getChampLabels(bouts)
}
# Eliminate all championship tags from strings within variables
# FighterA, FighterB, champPost, and interimChampPost.
ids <- which(colnames(bouts) %in% c("FighterA", "FighterB", "champPost",
"interimChampPost"))
bouts[, ids] <- lapply(
bouts[, ids], function(x)
gsub(" \\(Fighter)| \\(c)| \\(ic)| \\(UFC Champion)| \\(Pride Champion)",
"", x, ignore.case = TRUE))
# Add variable "TotalSeconds" showing total seconds of fight time for each bout.
bouts$Round <- as.double(bouts$Round)
bouts$TotalSeconds <- mapply(function(x, y)
boutSeconds(x, y), bouts$Time, bouts$Round, USE.NAMES = FALSE)
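# boutSeconds() is a helper defined elsewhere in this project; the commented
# sketch below is a hypothetical reconstruction of what it presumably computes,
# assuming 5-minute (300 s) rounds: the final round's "M:SS" time in seconds
# plus 300 s for each completed earlier round.
# boutSeconds <- function(time, round) {
#   ms <- as.numeric(unlist(strsplit(time, ":")))
#   (round - 1) * 300 + ms[1] * 60 + ms[2]
# }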
# Split the results variable into two separate variables ("Result" & "Subresult")
rsltsplit <- unname(sapply(bouts$Result, vectSplit))
bouts$Result <- unlist(rsltsplit[1, ])
bouts$Subresult <- unlist(rsltsplit[2, ])
# For all fights that ended in a draw, edit variable subresult
# to include the word "draw".
for (i in seq_len(nrow(bouts))) {
if (bouts$Result[i] == "Draw" &&
!grepl("draw", bouts$Subresult[i]) &&
!is.na(bouts$Subresult[i])) {
bouts$Subresult[i] <- paste0(bouts$Subresult[i], " draw")
} else if (bouts$Result[i] == "Draw" &&
is.na(bouts$Subresult[i])) {
bouts$Subresult[i] <- "draw"
}
}
# For any remaining NAs within variable Subresult, attempt to
# unpack info from variable Result to fill in the gap.
for (i in seq_len(nrow(bouts))) {
if (is.na(bouts$Subresult[i]) &&
grepl("unanimous|split|majority", bouts$Result[i], ignore.case = TRUE)) {
holder <- unlist(strsplit(bouts$Result[i], " "))
if (any(grepl("draw", holder, ignore.case = TRUE, fixed = TRUE)) &&
length(holder) > 1) {
bouts$Result[i] <- "Draw"
bouts$Subresult[i] <- tolower(paste(holder[1], holder[2]))
} else if (any(grepl("decision", holder, ignore.case = TRUE, fixed = TRUE)) &&
length(holder) > 1) {
bouts$Result[i] <- "Decision"
bouts$Subresult[i] <- tolower(holder[1])
}
}
}
# Clean up variables Result and Subresult by combining similar values.
if (any(grepl("submission", bouts$Result, fixed = TRUE))) {
bouts[grepl("submission", bouts$Result, fixed = TRUE), ]$Result <- "Submission"
}
if (any(grepl("dq|DQ", bouts$Result))) {
bouts[grepl("dq|DQ", bouts$Result), ]$Result <- "Disqualification"
}
if (!is.na(match("rear naked choke", bouts$Subresult))) {
bouts[which(bouts$Subresult == "rear naked choke"), ]$Subresult <-
"rear-naked choke"
}
# Add variables for all types of over/under, ITD, and ended in r1-r5.
# For these variables, r = "round", ITD = "inside the distance".
# Thresholds are total fight seconds, assuming 5-minute (300 s) rounds:
# 1.5 rounds = 450 s, 2.5 = 750 s, 3.5 = 1050 s, 4.5 = 1350 s.
ids <- ncol(bouts)
bouts$over1.5r <- ifelse(bouts$TotalSeconds > 450, 1, 0)
bouts$over2.5r <- ifelse(bouts$TotalSeconds > 750, 1, 0)
bouts$over3.5r <- ifelse(bouts$TotalSeconds > 1050, 1, 0)
bouts$over4.5r <- ifelse(bouts$TotalSeconds > 1350, 1, 0)
bouts$ITD <- ifelse(
!grepl("^Decision|Draw|No Contest", bouts$Result), 1, ifelse(
grepl("No Contest", bouts$Result) &
!bouts$TotalSeconds %in% c(900, 1500), 1, 0))
bouts$r1Finish <- ifelse(bouts$Round == 1, 1, 0)
bouts$r2Finish <- ifelse(bouts$Round == 2, 1, 0)
bouts$r3Finish <- ifelse(bouts$ITD == 1 & bouts$Round == 3, 1, 0)
bouts$r4Finish <- ifelse(bouts$Round == 4, 1, 0)
bouts$r5Finish <- ifelse(bouts$ITD == 1 & bouts$Round == 5, 1, 0)
bouts[which(bouts$Result == "No Contest" &
bouts$Subresult != "overturned"), (ids + 1):ncol(bouts)] <- NA
bouts[is.na(bouts$Round), (ids + 1):ncol(bouts)] <- NA
# Reorder the variables, and eliminate unwanted variables.
if (file.exists(datafile)) {
goodCols <- colnames(boutsdf)
} else {
goodCols <- c("Weight",
"FighterA",
"VS",
"FighterB",
"Result",
"Subresult",
"Round",
"Time",
"TotalSeconds",
"Event",
"Date",
"Venue",
"City",
"State",
"Country",
"champPost",
"interimChampPost",
"wikilink",
"over1.5r",
"over2.5r",
"over3.5r",
"over4.5r",
"ITD",
"r1Finish",
"r2Finish",
"r3Finish",
"r4Finish",
"r5Finish")
}
bouts <- subset(bouts, select = goodCols)
# If updating an existing UFC bouts dataset, remove any observations within the
# newly scraped data in which the event name appears in the existing UFC bouts
# dataset.
if (file.exists(datafile)) {
bouts <- bouts[!bouts$Event %in% unique(boutsdf$Event), ]
}
## Write results to file ----
# If UFC bouts dataset already exists in directory "mma_scrape", append the
# newly scraped results to objects "boutsdf" and "fighterlinksvect", then save
# both as an RData file.
# Otherwise, save the newly scraped results as an RData file.
# The RData file will be sourced at the top of R file "1-wiki_ufcfighters.R",
# and that file will make use of object "fighterlinksvect".
if (!file.exists(datafile)) {
boutsdf <- bouts
fighterlinksvect <- fighterlinks
save(boutsdf, fighterlinksvect, file = "~/mma_scrape/0-ufc_bouts.RData")
} else if (identical(colnames(bouts), colnames(boutsdf))) {
boutsdf <- rbind(bouts, boutsdf)
fighterlinksvect <- c(fighterlinks, fighterlinksvect)
save(boutsdf, fighterlinksvect, file = "~/mma_scrape/0-ufc_bouts.RData")
} else {
writeLines(c("ERROR: rbind of new data with old data failed,",
             "columns of the two dataframes do not align."))
}
|
#!/usr/bin/env Rscript
# created 2019.05.30
# author: zhangyiming
rm(list = ls())
options(stringsAsFactors = FALSE)
set.seed(1)
load_packages <- function() {  # renamed from load() to avoid masking base::load
  library(Seurat)
  library(Mfuzz)
  library(openxlsx)
  library(ggplot2)
  library(clusterProfiler)
  library(DOSE)
  library(GO.db)
  library(KEGG.db)
  library(org.Hs.eg.db)  # used below via enrichGO(OrgDb = org.Hs.eg.db)
  library(dplyr)
  library(reshape2)
}
suppressPackageStartupMessages(load_packages())
root.dir = "/mnt/raid62/Lung_cancer_10x/02_figures_each_cell/B_cells_SCC"
rds = "/mnt/raid62/Lung_cancer_10x/02_figures_each_cell/B_cells_SCC/B_cells.rds"
dir.create(paste(root.dir, "mfuzz_gene_module", sep = "/"), showWarnings = F)
set.seed(1)
setwd(root.dir)
# function to read and format the normalized_counts from sctransform
read_sctransform <- function(path="paga/normalized_counts.csv.gz") {
r = gzfile(path)
data = read.csv(r, row.names = 1)
colnames(data) = gsub("\\.", "-", colnames(data), perl=F)
colnames(data) = gsub("^X", "", colnames(data), perl=T)
return(data)
}
# function to select cluster_markers by qvalue and logfc
# :param cluster_markers cluster markers from Seurat FindMarkers
# :param qvalue: q value threshold
# :param logfc: logFC threshold
# :return vector of gene names
select_cluster_markers <- function(cluster_markers, qvalue=0.05, logfc=0.5) {
return(cluster_markers[cluster_markers$p_val_adj < qvalue & cluster_markers$avg_logFC > logfc, "gene"])
}
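# Worked example (standalone sketch, not part of the pipeline): with the default
# thresholds, only genes with p_val_adj < 0.05 AND avg_logFC > 0.5 survive.
# toy <- data.frame(gene = c("A", "B", "C"),
#                   p_val_adj = c(0.01, 0.20, 0.001),
#                   avg_logFC = c(1.2, 0.9, 0.1))
# select_cluster_markers(toy)  # "A": B fails the q-value cut, C fails the logFC cut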
# function to make heatmaps
# :param obj Seurat obj
# :param cluster results from perform_mfuzz
# :param output_prefix: the output file prefix
make_heatmap <- function(obj, cluster, output_prefix=NULL, group.by = "Stage") {
for (i in sort(unique(cluster))) {
temp_genes = names(cluster[cluster == i])
p <- DoHeatmap(
obj,
group.by = group.by,
genes.use = temp_genes,
cex.col=0,
slim.col.label = TRUE,
remove.key = TRUE,
do.plot = F,
title = i
)
height = length(temp_genes) / 8
if (height > 40) {
height = 40
} else if (height < 5) {
height = 5
}
if (is.null(output_prefix)) {
print(p)
} else {
ggsave(
paste(output_prefix, i, ".png", sep = ""),
p,
width = 12,
height = height,
dpi = 300,
units = "in"
)
}
}
}
# function to make dotplot
# :param obj Seurat obj
# :param cluster results from perform_mfuzz
# :param output_prefix: the output file prefix
make_dotplot <- function(obj, cluster, output_prefix=NULL, group.by="Stage") {
for (i in sort(unique(cluster))) {
temp_genes = names(cluster[cluster == i])
p <- DotPlot(
obj,
group.by = group.by,
genes.plot = temp_genes,
x.lab.rot = TRUE,
do.return = T
) + labs(title = i) + coord_flip()
height = length(temp_genes) / 8
if (height > 40) {
height = 40
} else if (height < 5) {
height = 5
}
if (is.null(output_prefix)) {
print(p)
} else {
ggsave(
paste(output_prefix, i, ".png", sep = ""),
p,
width = 12,
height = height,
dpi = 300,
units = "in"
)
}
}
}
# function to perform mfuzz on expression based on cluster selection
# :param expr dataframe, the expression matrix
# :param cluster_markers cluster markers from Seurat FindMarkers
# :param qvalue see select_cluster_markers
# :param logfc see select_cluster_markers
# :param init_cluster initial number of clusters
# :param m the fuzzifier; if NULL, estimated via mestimate
perform_mfuzz <- function(expr, cluster_markers, qvalue=0.05, logfc=0.5, init_cluster=9, m=NULL ) {
selected_markers = select_cluster_markers(cluster_markers, qvalue, logfc)
# filter genes and create ExpressionSet
expr = ExpressionSet(
as.matrix(expr[intersect(rownames(expr), selected_markers),])
)
print(dim(expr))
expr = standardise(expr)
# best m, if m is null, then estimate based on expr
if (is.null(m)) {
m = mestimate(expr)
}
cl = mfuzz(expr, c=init_cluster, m=m)
return(cl)
}
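# Standalone sketch of the fuzzifier estimation used above (assumes the Mfuzz /
# Biobase packages loaded at the top; the matrix here is random, for illustration):
# es <- Biobase::ExpressionSet(matrix(rnorm(200), nrow = 20))
# Mfuzz::mestimate(standardise(es))  # yields a fuzzifier m > 1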
# function to construct matrix of mean zscore
construct_stage_group_zscore <- function(obj, expr, cluster) {
groups = sort(unique(cluster$cluster))
stages = sort(unique(obj@meta.data$Stage))
res = as.data.frame(matrix(0, nrow = length(groups), ncol = length(stages)))
rownames(res) = groups
colnames(res) = stages
for (i in groups) {
temp_gene = names(cluster$cluster[cluster$cluster == i])
for (j in stages) {
# print(paste0(i, j))
temp_cells = rownames(obj@meta.data[obj@meta.data$Stage == j, ])
temp_expr = expr[intersect(temp_gene, rownames(expr)), temp_cells]
# temp_expr = scale(temp_expr)
res[i, j] = mean(apply(temp_expr, 1, sum))
}
}
res[is.na(res)] = 0
return(res)
}
do_kegg <- function(eg, cluster=NA, pvalueCutoff = 0.05) {
kk <- enrichKEGG(gene = eg$ENTREZID,
organism = 'hsa',
pvalueCutoff = pvalueCutoff)
kk <- as.data.frame(kk)
if (nrow(kk) == 0) {
return(kk)
}
kk$cluster <- cluster
return(kk)
}
do_go <- function(eg, cluster = NA, pvalueCutoff = 0.01, qvalueCutoff = 0.05, cutoff=0.7) {
res = NULL
# for(i in c("BP", "CC", "MF")) {
# ego <- enrichGO(gene = eg$ENTREZID,
# keyType = 'ENTREZID',
# OrgDb = org.Hs.eg.db,
# ont = i,
# pAdjustMethod = "BH",
# pvalueCutoff = pvalueCutoff,
# qvalueCutoff = qvalueCutoff,
# readable = FALSE)
#
# if(is.null(ego)) {
# return(ego)
# }
# ego <- clusterProfiler::simplify(ego, cutoff=cutoff, by="p.adjust", select_fun=min)
#
# ego = as.data.frame(ego)
#
# if (nrow(ego) > 0) {
# ego$ONTOLOGY = i
# }
#
# if (is.null(res)) {
# res = ego
# } else {
# res = rbind(res, ego)
# }
# }
#
# if (nrow(res) == 0) {
# return(res)
# }
ego <- enrichGO(gene = eg$ENTREZID,
keyType = 'ENTREZID',
OrgDb = org.Hs.eg.db,
ont = "ALL",
pAdjustMethod = "BH",
pvalueCutoff = pvalueCutoff,
qvalueCutoff = qvalueCutoff,
readable = FALSE)
res = as.data.frame(ego)
if (is.null(res) || nrow(res) <= 0) {
return(res)
}
res$cluster <- cluster
return(res)
}
do_do <- function(eg, cluster = NA, pvalueCutoff = 0.05, qvalueCutoff = 0.05, minGSSize = 5, maxGSSize = 500) {
do <- enrichDO(gene = eg$ENTREZID,
ont = "DO",
pvalueCutoff = pvalueCutoff,
pAdjustMethod = "BH",
minGSSize = minGSSize,
maxGSSize = maxGSSize,
qvalueCutoff = qvalueCutoff,
readable = TRUE)
do = as.data.frame(do)
if (nrow(do) == 0) {
return(do)
}
do$cluster = cluster
return(do)
}
get_entrzid <- function(markers) {
res = NULL
for(i in unique(markers)) {
print(i)
temp <- names(markers[markers == i])
eg <- bitr(unique(temp), fromType="SYMBOL", toType="ENTREZID", OrgDb="org.Hs.eg.db")
if(!is.null(eg) && nrow(eg) > 0) {
eg$ident = i
if(is.null(res)) {
res = eg
} else {
res = rbind(res, eg)
}
}
}
return(res)
}
print(getwd())
print(rds)
cluster_markers = "annotation_results_by_stage.xlsx"
cluster_markers = read.xlsx(paste(root.dir, cluster_markers, sep = "/"))
expr = read_sctransform()
obj <- readRDS(rds)
qvalue = 0.05
logfc = 0.6
cl = perform_mfuzz(expr, cluster_markers, logfc=logfc, qvalue=qvalue)
mtx = construct_stage_group_zscore(obj, expr, cluster=cl)
temp = hclust(dist(mtx))
# make_heatmap(obj, cl$cluster)
plot(temp)
new_cl = perform_mfuzz(expr, cluster_markers, logfc=logfc, qvalue=qvalue, init_cluster = 4)
new_cl = new_cl$cluster
make_heatmap(obj, new_cl)
make_heatmap(obj, new_cl, output_prefix = paste(root.dir, "mfuzz_gene_module/heatmap_", sep = "/"))
make_dotplot(obj, new_cl, output_prefix = paste(root.dir, "mfuzz_gene_module/dotplot_", sep = "/"))
make_dotplot(obj, new_cl)
make_dotplot(obj, new_cl, group.by = "res.0.6")
eg = get_entrzid(new_cl)
kk <- NULL
for(i in unique(eg$ident)) {
print(i)
temp = do_kegg(eg[eg$ident == i, ])
if(is.null(temp) || nrow(temp) == 0) {
next
}
temp$ident = i
if(is.null(kk)) {
kk = temp
} else {
kk = rbind(kk, temp)
}
}
go <- NULL
for(i in unique(eg$ident)) {
print(i)
temp = do_go(eg[eg$ident == i, ])
if(is.null(temp) || nrow(temp) == 0) {
next
}
temp$ident = i
if(is.null(go)) {
go = temp
} else {
go = rbind(go, temp)
}
}
do <- NULL
for(i in unique(eg$ident)) {
print(i)
temp = do_do(eg[eg$ident == i, ])
if(is.null(temp) || nrow(temp) == 0) {
next
}
temp$ident = i
if(is.null(do)) {
do = temp
} else {
do = rbind(do, temp)
}
}
print(getwd())
temp = as.data.frame(new_cl)
colnames(temp) <- "gene_module_id"
temp$gene = rownames(temp)
wb = createWorkbook()
addWorksheet(wb, "gene_module")
writeData(wb, 1, temp)
addWorksheet(wb, "kegg")
writeData(wb, 2, kk)
addWorksheet(wb, "go")
writeData(wb, 3, go)
addWorksheet(wb, "do")
writeData(wb, 4, do)
saveWorkbook(wb, file = paste(root.dir, "mfuzz_gene_module/results.xlsx", sep = "/"), overwrite = T)
/bak/5_stage_gene_module/B_cells_SCC.R | no_license | Chenmengpin/inferCC | R | 10,256 bytes | .r
#' Maximum Variance Unfolding / Semidefinite Embedding
#'
#' Maximum Variance Unfolding (MVU), also known as Semidefinite Embedding (SDE), is,
#' as its names suggest, a method that exploits semidefinite programming to perform
#' nonlinear dimensionality reduction by \emph{unfolding} the neighborhood graph
#' constructed in the original high-dimensional space. The unfolding generates a Gram
#' matrix \eqn{K} from which we can either find embeddings directly (\code{"spectral"})
#' or apply the Kernel PCA technique (\code{"kpca"}) to find low-dimensional
#' representations. Note that since \code{do.mvu} depends on
#' \href{https://CRAN.R-project.org/package=Rcsdp}{Rcsdp}, we cannot guarantee its
#' computational efficiency when given a large dataset.
#'
#' @param X an \eqn{(n\times p)} matrix or data frame whose rows are observations and columns represent independent variables.
#' @param ndim an integer-valued target dimension.
#' @param type a vector of neighborhood graph construction. Following types are supported;
#' \code{c("knn",k)}, \code{c("enn",radius)}, and \code{c("proportion",ratio)}.
#' Default is \code{c("proportion",0.1)}, connecting about 1/10 of nearest data points
#' among all data points. See also \code{\link{aux.graphnbd}} for more details.
#' @param preprocess an additional option for preprocessing the data.
#' Default is "null". See also \code{\link{aux.preprocess}} for more details.
#' @param projtype type of method for projection; either \code{"spectral"} or \code{"kpca"} used.
#'
#' @return a named list containing
#' \describe{
#' \item{Y}{an \eqn{(n\times ndim)} matrix whose rows are embedded observations.}
#' \item{trfinfo}{a list containing information for out-of-sample prediction.}
#' }
#'
#' @examples
#' \dontrun{
#' ## generate ribbon-shaped data with the small number of data
#' X = aux.gensamples(dname="ribbon", n=50)
#'
#' ## 1. standard MVU
#' output1 <- do.mvu(X,ndim=2)
#'
#' ## 2. standard setting with "kpca"-type projection
#' output2 <- do.mvu(X,ndim=2,projtype="kpca")
#'
#' ## 3. standard MVU for a densely connected graph
#' output3 <- do.mvu(X,ndim=2,type=c("proportion",0.5))
#'
#' ## Visualize three different projections
#' par(mfrow=c(1,3))
#' plot(output1$Y[,1],output1$Y[,2],main="standard")
#' plot(output2$Y[,1],output2$Y[,2],main="kpca projection")
#' plot(output3$Y[,1],output3$Y[,2],main="densely connected graph")
#' }
#'
#' @references
#' \insertRef{weinberger_unsupervised_2006}{Rdimtools}
#'
#' @author Kisung You
#' @aliases do.sde
#' @rdname nonlinear_MVU
#' @export
do.mvu <- function(X,ndim=2,type=c("proportion",0.1),
preprocess=c("null","center","scale","cscale","decorrelate","whiten"),
projtype=c("spectral","kpca")){
# 1. typecheck is always first step to perform.
aux.typecheck(X)
if ((!is.numeric(ndim))||(ndim<1)||(ndim>ncol(X))||is.infinite(ndim)||is.na(ndim)){
stop("* do.mvu : 'ndim' is a positive integer in [1,#(covariates)].")
}
ndim = as.integer(ndim)
# 2. ... parameters
# 2-1. aux.graphnbd
# type : vector of c("knn",k), c("enn",radius), or c("proportion",ratio)
# 2-2. mvu itself
# preprocess : 'center','decorrelate', 'null', or 'whiten'
nbdtype = type
nbdsymmetric = "union"
if (missing(preprocess)){
algpreprocess = "center"
} else {
algpreprocess = match.arg(preprocess)
}
# 3. process : data preprocessing
tmplist = aux.preprocess.hidden(X,type=algpreprocess,algtype="nonlinear")
trfinfo = tmplist$info
pX = tmplist$pX
n = nrow(pX)
p = ncol(pX)
# 4. process : neighborhood selection + ALPHA for SDE/MVU
# : make a triangle.
nbdstruct = aux.graphnbd(pX,method="euclidean",
type=nbdtype,symmetric="union")
M1 = nbdstruct$mask
M2 = M1
for (it in 1:nrow(M1)){
for (i in 1:ncol(M1)){
for (j in 1:ncol(M1)){
if ((M1[it,i]==TRUE)&&(M1[it,j]==TRUE)){
M2[i,j] = TRUE
M2[j,i] = TRUE
}
}
}
}
diag(M2) = FALSE
# 5. Compute K
# 5-1. basic settings
N = ncol(M2) # number of data points
nconstraints = sum(M2)/2
D2 = ((as.matrix(dist(pX)))^2)
C = list(.simple_triplet_diag_sym_matrix(1,N))
K = list(type="s",size=N)
# 5-2. iterative computation
iter = 1
A = list()
b = c()
for (it1 in 1:nrow(M2)){
for (it2 in it1:ncol(M2)){
if (M2[it1,it2]==TRUE){
tmpA = list(simple_triplet_sym_matrix(i=c(it1,it1,it2,it2),j=c(it1,it2,it1,it2),v=c(1,-1,-1,1),n=N))
tmpb = D2[it1,it2]
A[[iter]] = tmpA
b[iter] = tmpb
iter = iter+1
}
}
}
# 5-3. final update for mean centered constraint
A[[iter]] = list(matrix(1,N,N)/N)
b[iter] = 0
outCSDP = csdp(C,A,b,K,csdp.control(printlevel=0))
KK = as.matrix(outCSDP$X[[1]])
if (projtype=="spectral"){
# 6. Embedding : Spectral Method, directly from K
KKeigen = eigen(KK)
eigvals = KKeigen$values
eigvecs = KKeigen$vectors
tY = (diag(sqrt(eigvals[1:ndim])) %*% t(eigvecs[,1:ndim]))
} else if (projtype=="kpca"){
# 7. Embedding : Kernel PCA method
tY = aux.kernelprojection(KK, ndim)
} else {
stop("* do.mvu : 'projtype' should be either 'spectral' or 'kpca'.")
}
# 8. Return output
result = list()
result$Y = t(tY)
result$trfinfo = trfinfo
return(result)
}
/R/nonlinear_MVU.R | no_license | rcannood/Rdimtools | R | 5,367 bytes | .r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/GETINFO.R
\docType{data}
\name{FRED_APIkey}
\alias{FRED_APIkey}
\title{GETINFO from FRED}
\format{
An object of class \code{character} of length 1.
}
\usage{
GETINFO(FRED_APIkey,search_text,realtime_start,realtime_end,limit)
}
\value{
}
\description{
This function allows you to retrieve any country's economic information from FRED (this function requires your own API key from FRED).
}
\details{
GETINFO
}
\examples{
GETINFO(FRED_APIkey="43feed24d714332ac6ad5110919b6556", realtime_start="2018-01-01",realtime_end="2020-09-01",search_text="GDP+Japan",limit="50")
}
\author{
Jiatong Li
}
\keyword{datasets}
/man/FRED_APIkey.Rd | permissive | JT944/Uclapack3 | R | 678 bytes | .rd
# This script will extract the time-series from the ROI sets defined in 01_resolution
###
# Setup
###
library(Rsge)
library(tools)
tmpdf <- read.csv("subinfo/04_all_df.csv")
njobs <- nrow(tmpdf) # number of jobs = number of subjects
nthreads <- 1
nforks <- 100
rbase <- "/home2/data/Projects/CWAS/share/age+gender/analysis/03_robustness/rois"
mask_file <- file.path(rbase, "mask_for_age+sex_gray_4mm.nii.gz")
ks <- c(25,50,100,200,400,800,1600,3200)
func_files <- as.character(read.table("subinfo/04_all_funcpaths_4mm.txt")[,1])
###
# ROI Extraction (Random)
###
roi_files <- file.path(rbase, sprintf("rois_random_k%04i.nii.gz", ks))
for (roi_file in roi_files) {
cat(sprintf("ROI: %s\n", roi_file))
# Get the name of the ROI
roi_base <- file_path_sans_ext(file_path_sans_ext(basename(roi_file)))
# Loop through functionals
## to output nifti object
out_files <- sge.parLapply(func_files, function(func_file) {
set_parallel_procs(1, 1)
out_file <- file.path(dirname(func_file), paste(roi_base, ".nii.gz", sep=""))
roi_mean_wrapper(func_file, roi_file, mask_file, out_file)
return(out_file)
}, packages=c("connectir"), function.savelist=ls(), njobs=njobs)
out_files <- unlist(out_files)
ofile <- paste("z_", roi_base, "_nifti.txt", sep="")
write.table(out_files, file=ofile, row.names=F, col.names=F)
### to output text object
#out_files <- sge.parLapply(out_files, function(in_file) {
# img <- read.nifti.image(in_file)
# set_parallel_procs(1, 1)
# out_file <- file.path(dirname(func_file), paste(roi_base, ".1D", sep=""))
# roi_mean_wrapper(func_file, roi_file, mask_file, out_file)
# return(out_file)
#}, packages=c("connectir"), function.savelist=ls(), njobs=njobs)
#out_files <- unlist(out_files)
#ofile <- paste("z_", roi_base, "_1D.txt", sep="")
#write.table(out_files, file=ofile, row.names=F, col.names=F)
}
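`roi_mean_wrapper` comes from the connectir package and is not shown here; conceptually it averages the voxel time series within each ROI label, reducing a time-by-voxel matrix to a time-by-ROI matrix. A standalone sketch of that reduction:

```r
# Illustrative sketch (not connectir code): average each ROI's voxels per
# time point, turning a time x voxel matrix into a time x ROI matrix.
set.seed(1)
func <- matrix(rnorm(50 * 6), nrow = 50)   # 50 time points, 6 voxels
roi  <- c(1, 1, 2, 2, 3, 3)                # ROI label per voxel
roi_means <- sapply(split(seq_along(roi), roi),
                    function(v) rowMeans(func[, v, drop = FALSE]))
dim(roi_means)                             # 50 x 3
```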
|
/age+gender/analysis/03_robustness/B-ROIs_21-extract.R
|
no_license
|
vishalmeeni/cwas-paper
|
library(ptycho)
### Name: PosteriorStatistics
### Title: Extract Posterior Statistics
### Aliases: PosteriorStatistics meanTau varTau meanIndicators
### meanVarIndicators meanGrpIndicators
### ** Examples
data(ptychoIn)
data(ptychoOut)
# Compare averages of sampled group indicator variables to truth.
cbind(ptychoIn$replicates[[1]]$indic.grp,
meanGrpIndicators(ptychoOut))
# Compare averages of sampled covariate indicator variables to truth.
cbind(ptychoIn$replicates[[1]]$indic.var,
meanVarIndicators(ptychoOut))
# Compare average of sampled values of tau to truth.
ptychoIn$replicates[[1]]$tau
meanTau(ptychoOut)
# Variance of sampled values of tau is reasonable because sampled model
# is usually NOT empty.
varTau(ptychoOut)
|
/data/genthat_extracted_code/ptycho/examples/PosteriorStatistics.Rd.R
|
no_license
|
surayaaramli/typeRrh
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tfm.R, R/um.R
\name{fit.tfm}
\alias{fit.tfm}
\alias{fit}
\alias{fit.um}
\title{Estimation of the ARIMA model}
\usage{
\method{fit}{tfm}(
mdl,
y = NULL,
method = c("exact", "cond"),
optim.method = "BFGS",
show.iter = FALSE,
fit.noise = TRUE,
...
)
fit(mdl, ...)
\method{fit}{um}(
mdl,
z = NULL,
method = c("exact", "cond"),
optim.method = "BFGS",
show.iter = FALSE,
...
)
}
\arguments{
\item{mdl}{an object of class \code{\link{um}} or \code{\link{tfm}}.}
\item{y}{a \code{ts} object.}
\item{method}{Exact/conditional maximum likelihood.}
\item{optim.method}{the \code{method} argument of the \code{optim}
function.}
\item{show.iter}{logical value to show or hide the estimates at the
different iterations.}
\item{fit.noise}{logical. If TRUE parameters of the noise model are fixed.}
\item{...}{additional arguments.}
\item{z}{a time series.}
}
\value{
For \code{fit.tfm}, a \code{tfm} object; for \code{fit.um}, an object of
class \code{um} with the estimated parameters.
}
\description{
\code{fit} fits the univariate model to the time series z.
}
\note{
The \code{um} function estimates the corresponding ARIMA model when a time
series is provided. The \code{fit} function is useful to fit a model to
several time series, for example, in a Monte Carlo study.
}
\examples{
z <- AirPassengers
airl <- um(i = list(1, c(1, 12)), ma = list(1, c(1, 12)), bc = TRUE)
airl <- fit(airl, z)
}
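The note above suggests using \code{fit} to estimate the same model on several time series, as in a Monte Carlo study. A hedged sketch of that pattern, reusing the objects from the documented example:

```r
library(tfarima)
airl <- um(i = list(1, c(1, 12)), ma = list(1, c(1, 12)), bc = TRUE)
# Fit the same airline model to each series in a list; in a real Monte Carlo
# study, simulated series would replace the repeated AirPassengers copies.
series_list <- list(AirPassengers, AirPassengers)
fits <- lapply(series_list, function(z) fit(airl, z))
```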
|
/tfarima/man/fit.Rd
|
no_license
|
akhikolla/TestedPackages-NoIssues
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/Interplot.R
\name{interplot}
\alias{interplot}
\title{Plot Conditional Coefficients of a Variable in an Interaction Term}
\usage{
interplot(m, var1, var2, plot = TRUE, point = FALSE, sims = 5000,
xmin = NA, xmax = NA)
}
\arguments{
\item{m}{A model object including an interaction term, or, alternately, a data frame generated by an earlier call to interplot using the argument plot = FALSE.}
\item{var1}{The name (as a string) of the variable of interest in the interaction term; its conditional coefficient estimates will be plotted.}
\item{var2}{The name (as a string) of the other variable in the interaction term}
\item{plot}{A logical value indicating whether the output is a plot or a dataframe including the conditional coefficient estimates of var1, their upper and lower bounds, and the corresponding values of var2.}
\item{point}{A logical value determining the format of plot. By default, the function produces a line plot when var2 takes on ten or more distinct values and a point (dot-and-whisker) plot otherwise; option TRUE forces a point plot.}
\item{sims}{Number of independent simulation draws used to calculate upper and lower bounds of coefficient estimates: lower values run faster; higher values produce smoother curves.}
\item{xmin}{A numerical value indicating the minimum value of x shown in the graph. Rarely used.}
\item{xmax}{A numerical value indicating the maximum value of x shown in the graph. Rarely used.}
}
\value{
The function returns a \code{ggplot} object.
}
\description{
\code{interplot} is a generic function to produce a plot of the coefficient estimates of one variable in a two-way interaction conditional on the values of the other variable in the interaction term. The function invokes particular \code{methods} which depend on the \code{\link{class}} of the first argument.
}
\details{
\code{interplot} visualizes the changes in the coefficient of one term in a two-way interaction conditioned by the other term. In the current version, the function works with interactions in the following classes of models:
\itemize{
\item Ordinary linear models (object class: \code{lm});
\item Generalized linear models (object class: \code{glm});
\item Linear mixed-effects models (object class: \code{lmerMod});
\item Generalized linear mixed-effects models (object class: \code{glmerMod});
\item Ordinary linear models with imputed data (object class: \code{list});
\item Generalized linear models with imputed data (object class: \code{list});
\item Linear mixed-effects models with imputed data (object class: \code{list});
\item Generalized linear mixed-effects models with imputed data (object class: \code{list}).
}
The examples below illustrate how methods invoked by this generic deal with different types of objects.
Because the output is based on \code{\link[ggplot2]{ggplot}}, any additional arguments and layers supported by \code{ggplot2} can be added with \code{+}.
}
\examples{
data(mtcars)
m_cyl <- lm(mpg ~ wt * cyl, data = mtcars)
library(interplot)
# Plot interactions with a continuous conditioning variable
interplot(m = m_cyl, var1 = "cyl", var2 = "wt") +
xlab("Automobile Weight (thousands lbs)") +
ylab("Estimated Coefficient for Number of Cylinders") +
ggtitle("Estimated Coefficient of Engine Cylinders\\non Mileage by Automobile Weight") +
theme(plot.title = element_text(face="bold"))
# Plot interactions with a categorical conditioning variable
interplot(m = m_cyl, var1 = "wt", var2 = "cyl") +
xlab("Number of Cylinders") +
ylab("Estimated Coefficient for Automobile Weight (thousands lbs)") +
ggtitle("Estimated Coefficient of Automobile Weight \\non Mileage by Engine Cylinders") +
theme(plot.title = element_text(face="bold"))
}
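The simulation procedure the details describe — draw coefficient vectors, form the conditional coefficient b1 + b3·var2, take quantiles — can be sketched directly. This illustrates the method, not interplot's actual code:

```r
# Sketch of simulation-based conditional coefficients for wt * cyl:
# the coefficient of cyl conditional on wt is b_cyl + b_{wt:cyl} * wt.
library(MASS)
data(mtcars)
m <- lm(mpg ~ wt * cyl, data = mtcars)
sims <- mvrnorm(5000, mu = coef(m), Sigma = vcov(m))  # draws of coefficients
wt_seq <- seq(min(mtcars$wt), max(mtcars$wt), length.out = 20)
cond <- sapply(wt_seq, function(w) {
  draws <- sims[, "cyl"] + sims[, "wt:cyl"] * w
  c(est = mean(draws), quantile(draws, c(0.025, 0.975)))
})
# cond is a 3 x 20 matrix: estimate, lower, and upper bound at each weight
```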
|
/man/interplot.Rd
|
no_license
|
arturochian/interplot
|
## This first line will likely take a few seconds. Be patient!
library(dplyr)
require(ggplot2)
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
NEI$year <- factor(NEI$year, levels = c('1999', '2002', '2005', '2008'))
NEI.Onroad.Baltimore <- filter(NEI, fips== 24510, type == 'ON-ROAD')
Plot5_data <- aggregate(NEI.Onroad.Baltimore[, 'Emissions'], by = list(NEI.Onroad.Baltimore$year), sum)
colnames(Plot5_data) <- c('year', 'Emissions')
png('Plot5.png')
ggplot(data = Plot5_data, aes(x = year, y = Emissions)) + geom_bar(aes(fill = year), stat = "identity") + guides(fill = FALSE) + ggtitle('Total On-Road Emissions - Baltimore, MD') + ylab(expression('PM'[2.5])) + xlab('Year') + theme(legend.position = 'none') + geom_text(aes(label = round(Emissions, 0), size = 1, hjust = 0.5, vjust = 2))
dev.off()
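The `aggregate()` step above can also be written with the already-attached dplyr; a sketch of the equivalent summary:

```r
# Equivalent of the aggregate() call using dplyr verbs (dplyr is loaded above).
Plot5_data2 <- NEI.Onroad.Baltimore %>%
  group_by(year) %>%
  summarise(Emissions = sum(Emissions))
```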
|
/Plot5.R
|
no_license
|
sannananjaiah/EDA-Project-2
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/crossval_rf.R
\name{crossval_rf}
\alias{crossval_rf}
\title{Cross Validate Random Forests to get Categorical Accuracy}
\usage{
crossval_rf(y, x, breaks, cat = NULL, split, n, ntree)
}
\arguments{
\item{y}{response a vector}
\item{x}{predictors a data.frame}
\item{breaks}{numeric vector of cut points}
\item{cat}{categories of response}
\item{split}{proportion of the data to include in the training dataset; the test set receives the remainder.}
\item{n}{number of iterations}
\item{ntree}{number of trees grown in each random forest}
}
\description{
Function to cross-validate categorical prediction accuracy from the
resultant chl a predictions. A random forest of forests... Used to get
categorical accuracy. Also returns a cross-validated estimate of the \%
variance explained.
}
\examples{
data(LakeTrophicModelling)
predictors_all <- predictors_all[predictors_all!="DATE_COL"]
all_cf_dat <- data.frame(ltmData[predictors_all],LogCHLA=log10(ltmData$CHLA))
all_cf_dat <- all_cf_dat[complete.cases(all_cf_dat),]
ts_brks <- c(min(all_cf_dat$LogCHLA),log10(2),log10(7),log10(30),max(all_cf_dat$LogCHLA))
ts_cats <- c("oligo","meso","eu","hyper")
x<-crossval_rf(all_cf_dat$LogCHLA,all_cf_dat[,names(all_cf_dat)!="LogCHLA"],
ts_brks,ts_cats,0.8,1000,1000)
}
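The break/category step that `crossval_rf` relies on — binning continuous observations and predictions with the supplied cut points, then scoring agreement — can be sketched on toy numbers (this is an illustration, not the package code):

```r
# Bin continuous values with cut(), then take categorical accuracy as the
# share of observation/prediction pairs that land in the same bin.
obs  <- c(0.1, 0.5, 1.2, 2.0)
pred <- c(0.2, 0.3, 1.1, 1.9)
brks <- c(0, 0.4, 1.0, 2.5)
cats <- c("low", "mid", "high")
obs_cat  <- cut(obs,  breaks = brks, labels = cats, include.lowest = TRUE)
pred_cat <- cut(pred, breaks = brks, labels = cats, include.lowest = TRUE)
mean(obs_cat == pred_cat)   # 0.75: one of four predictions lands in the wrong bin
```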
|
/man/crossval_rf.Rd
|
no_license
|
BKreakie/LakeTrophicModelling
|
library(testthat)
# most common expectations:
# equality: expect_equal() and expect_identical()
# regexp: expect_match()
# catch-all: expect_true() and expect_false()
# console output: expect_output()
# messages: expect_message()
# warning: expect_warning()
# errors: expect_error()
escapeString <- function(s) {
t <- gsub("(\\\\)", "\\\\\\\\", s)
t <- gsub("(\n)", "\\\\n", t)
t <- gsub("(\r)", "\\\\r", t)
t <- gsub("(\")", "\\\\\"", t)
return(t)
}
prepStr <- function(s) {
t <- escapeString(s)
u <- eval(parse(text=paste0("\"", t, "\"")))
if(s!=u) stop("Unable to escape string!")
t <- paste0("\thtml <- \"", t, "\"")
utils::writeClipboard(t)
return(invisible())
}
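The helpers above can be checked with a round trip: `escapeString()` produces the source-code form of a string, and the `eval(parse())` step inside `prepStr()` reverses it.

```r
# Round-trip check of escapeString(): escaping then re-parsing the string
# literal must reproduce the original value (prepStr() relies on this).
s <- "line1\nline2 \"quoted\" back\\slash"
t <- escapeString(s)
u <- eval(parse(text = paste0("\"", t, "\"")))
identical(u, s)   # TRUE
```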
evaluationMode <- "sequential"
processingLibrary <- "dplyr"
description <- "test: sequential dplyr"
countFunction <- "n()"
isDevelopmentVersion <- (length(strsplit(packageDescription("pivottabler")$Version, "\\.")[[1]]) > 3)
testScenarios <- function(description="test", releaseEvaluationMode="batch", releaseProcessingLibrary="dplyr", runAllForReleaseVersion=FALSE) {
isDevelopmentVersion <- (length(strsplit(packageDescription("pivottabler")$Version, "\\.")[[1]]) > 3)
if(isDevelopmentVersion||runAllForReleaseVersion) {
evaluationModes <- c("sequential", "batch")
processingLibraries <- c("dplyr", "data.table")
}
else {
evaluationModes <- releaseEvaluationMode
processingLibraries <- releaseProcessingLibrary
}
testCount <- length(evaluationModes)*length(processingLibraries)
c1 <- character(testCount)
c2 <- character(testCount)
c3 <- character(testCount)
c4 <- character(testCount)
testCount <- 0
for(evaluationMode in evaluationModes)
for(processingLibrary in processingLibraries) {
testCount <- testCount + 1
c1[testCount] <- evaluationMode
c2[testCount] <- processingLibrary
c3[testCount] <- paste0(description, ": ", evaluationMode, " ", processingLibrary)
c4[testCount] <- ifelse(processingLibrary=="data.table", ".N", "n()")
}
df <- data.frame(evaluationMode=c1, processingLibrary=c2, description=c3, countFunction=c4, stringsAsFactors=FALSE)
return(df)
}
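`testScenarios()` expands the test matrix: on a development version (or with `runAllForReleaseVersion=TRUE`) it crosses both evaluation modes with both processing libraries, four scenarios; on a release version it returns the single released combination. A usage sketch:

```r
# One row per mode/library combination, iterated by the loops below.
scn <- testScenarios("demo")
# Columns: evaluationMode, processingLibrary, description, countFunction;
# countFunction is ".N" for data.table scenarios and "n()" otherwise.
scn$description
```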
context("THEMING TESTS")
scenarios <- testScenarios("theming tests: basic test")
for(i in 1:nrow(scenarios)) {
evaluationMode <- scenarios$evaluationMode[i]
processingLibrary <- scenarios$processingLibrary[i]
description <- scenarios$description[i]
countFunction <- scenarios$countFunction[i]
test_that(description, {
# define the colours
orangeColors <- list(
headerBackgroundColor = "rgb(237, 125, 49)",
headerColor = "rgb(255, 255, 255)",
cellBackgroundColor = "rgb(255, 255, 255)",
cellColor = "rgb(0, 0, 0)",
totalBackgroundColor = "rgb(248, 198, 165)",
totalColor = "rgb(0, 0, 0)",
borderColor = "rgb(198, 89, 17)"
)
# create the pivot table
library(pivottabler)
pt <- PivotTable$new(processingLibrary=processingLibrary, evaluationMode=evaluationMode)
pt$addData(bhmtrains)
pt$addColumnDataGroups("TrainCategory")
pt$addRowDataGroups("TOC")
pt$defineCalculation(calculationName="TotalTrains", summariseExpression=countFunction)
pt$theme <- getSimpleColoredTheme(parentPivot=pt, colors=orangeColors, fontName="Garamond, arial")
pt$evaluatePivot()
# pt$renderPivot()
# sum(pt$cells$asMatrix(), na.rm=TRUE)
# prepStr(as.character(pt$getHtml()))
# prepStr(as.character(pt$getCss()))
html <- "<table class=\"Table\">\n <tr>\n <th class=\"ColumnHeader\"> </th>\n <th class=\"ColumnHeader\">Express Passenger</th>\n <th class=\"ColumnHeader\">Ordinary Passenger</th>\n <th class=\"ColumnHeader\">Total</th>\n </tr>\n <tr>\n <th class=\"RowHeader\">Arriva Trains Wales</th>\n <td class=\"Cell\">3079</td>\n <td class=\"Cell\">830</td>\n <td class=\"Total\">3909</td>\n </tr>\n <tr>\n <th class=\"RowHeader\">CrossCountry</th>\n <td class=\"Cell\">22865</td>\n <td class=\"Cell\">63</td>\n <td class=\"Total\">22928</td>\n </tr>\n <tr>\n <th class=\"RowHeader\">London Midland</th>\n <td class=\"Cell\">14487</td>\n <td class=\"Cell\">33792</td>\n <td class=\"Total\">48279</td>\n </tr>\n <tr>\n <th class=\"RowHeader\">Virgin Trains</th>\n <td class=\"Cell\">8594</td>\n <td class=\"Cell\"></td>\n <td class=\"Total\">8594</td>\n </tr>\n <tr>\n <th class=\"RowHeader\">Total</th>\n <td class=\"Total\">49025</td>\n <td class=\"Total\">34685</td>\n <td class=\"Total\">83710</td>\n </tr>\n</table>"
css <- ".Table {display: table; border-collapse: collapse; border: 2px solid rgb(198, 89, 17); }\r\n.ColumnHeader {font-family: Garamond, arial; font-size: 0.75em; padding: 2px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: center; font-weight: bold; color: rgb(255, 255, 255); background-color: rgb(237, 125, 49); }\r\n.RowHeader {font-family: Garamond, arial; font-size: 0.75em; padding: 2px 8px 2px 2px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: left; font-weight: bold; color: rgb(255, 255, 255); background-color: rgb(237, 125, 49); }\r\n.Cell {font-family: Garamond, arial; font-size: 0.75em; padding: 2px 2px 2px 8px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: right; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); }\r\n.OutlineColumnHeader {font-family: Garamond, arial; font-size: 0.75em; padding: 2px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: center; font-weight: bold; color: rgb(255, 255, 255); background-color: rgb(237, 125, 49); }\r\n.OutlineRowHeader {font-family: Garamond, arial; font-size: 0.75em; padding: 2px 8px 2px 2px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: left; font-weight: bold; color: rgb(255, 255, 255); background-color: rgb(237, 125, 49); }\r\n.OutlineCell {font-family: Garamond, arial; font-size: 0.75em; padding: 2px 2px 2px 8px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: right; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); font-weight: bold; }\r\n.Total {font-family: Garamond, arial; font-size: 0.75em; padding: 2px 2px 2px 8px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: right; color: rgb(0, 0, 0); background-color: rgb(248, 198, 165); }\r\n"
expect_equal(sum(pt$cells$asMatrix(), na.rm=TRUE), 334840)
expect_identical(as.character(pt$getHtml()), html)
expect_identical(pt$getCss(), css)
})
}
scenarios <- testScenarios("theming tests: applying styling multiple times to the same cell", runAllForReleaseVersion=TRUE)
for(i in 1:nrow(scenarios)) {
if(!isDevelopmentVersion) break
evaluationMode <- scenarios$evaluationMode[i]
processingLibrary <- scenarios$processingLibrary[i]
description <- scenarios$description[i]
countFunction <- scenarios$countFunction[i]
test_that(description, {
library(pivottabler)
pt <- PivotTable$new(processingLibrary=processingLibrary, evaluationMode=evaluationMode,
compatibility=list(totalStyleIsCellStyle=TRUE))
pt$addData(bhmtrains)
pt$addColumnDataGroups("TrainCategory")
pt$addRowDataGroups("TOC")
pt$defineCalculation(calculationName="TotalTrains", summariseExpression=countFunction)
pt$evaluatePivot()
grps <- pt$rowGroup$childGroups
pt$setStyling(groups=grps, declarations=list("font-weight"="normal"))
pt$setStyling(groups=grps, declarations=list("color"="blue"))
cells <- pt$getCells(rowNumbers=4)
pt$setStyling(cells=cells, declarations=list("font-weight"="bold"))
pt$setStyling(cells=cells, declarations=list("color"="green"))
pt$setStyling(2, 1, declarations=list("color"="red"))
pt$setStyling(2, 1, declarations=list("font-weight"="bold"))
pt$renderPivot()
# prepStr(as.character(pt$getHtml()))
html <- "<table class=\"Table\">\n <tr>\n <th class=\"RowHeader\"> </th>\n <th class=\"ColumnHeader\">Express Passenger</th>\n <th class=\"ColumnHeader\">Ordinary Passenger</th>\n <th class=\"ColumnHeader\">Total</th>\n </tr>\n <tr>\n <th class=\"RowHeader\" style=\"font-weight: normal; color: blue; \">Arriva Trains Wales</th>\n <td class=\"Cell\">3079</td>\n <td class=\"Cell\">830</td>\n <td class=\"Cell\">3909</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" style=\"font-weight: normal; color: blue; \">CrossCountry</th>\n <td class=\"Cell\" style=\"color: red; font-weight: bold; \">22865</td>\n <td class=\"Cell\">63</td>\n <td class=\"Cell\">22928</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" style=\"font-weight: normal; color: blue; \">London Midland</th>\n <td class=\"Cell\">14487</td>\n <td class=\"Cell\">33792</td>\n <td class=\"Cell\">48279</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" style=\"font-weight: normal; color: blue; \">Virgin Trains</th>\n <td class=\"Cell\" style=\"font-weight: bold; color: green; \">8594</td>\n <td class=\"Cell\" style=\"font-weight: bold; color: green; \"></td>\n <td class=\"Cell\" style=\"font-weight: bold; color: green; \">8594</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" style=\"font-weight: normal; color: blue; \">Total</th>\n <td class=\"Cell\">49025</td>\n <td class=\"Cell\">34685</td>\n <td class=\"Cell\">83710</td>\n </tr>\n</table>"
expect_identical(as.character(pt$getHtml()), html)
})
}
# The code for the following tests runs OK in R-Studio
# But errors when used with testthat on my PC:
# Error PivotDataGroup.R:294: object of type 'closure' is not subsettable
# The error is something to do with:
# styleDeclarations=list("color"="red", "font-weight"="bold", "background-color"="yellow")
# scenarios <- testScenarios("theming tests: styling data groups")
# for(i in 1:nrow(scenarios)) {
# evaluationMode <- scenarios$evaluationMode[i]
# processingLibrary <- scenarios$processingLibrary[i]
# description <- scenarios$description[i]
# countFunction <- scenarios$countFunction[i]
#
# test_that(description, {
#
# library(pivottabler)
# pt <- PivotTable$new(processingLibrary=processingLibrary, evaluationMode=evaluationMode)
# pt$addData(bhmtrains)
# pt$addColumnDataGroups("TrainCategory", styleDeclarations=list("color"="red", "font-weight"="bold", "background-color"="yellow"))
# pt$addRowDataGroups("TOC")
# pt$defineCalculation(calculationName="TotalTrains", summariseExpression=countFunction)
# pt$evaluatePivot()
# # pt$renderPivot()
# # sum(pt$cells$asMatrix(), na.rm=TRUE)
| /tests/testthat/test-10-themingTests.R | no_license | hyunyulhenry/pivottabler | R | false | false | 18,679 | r |
library(testthat)
# most common expectations:
# equality: expect_equal() and expect_identical()
# regexp: expect_match()
# catch-all: expect_true() and expect_false()
# console output: expect_output()
# messages: expect_message()
# warning: expect_warning()
# errors: expect_error()
escapeString <- function(s) {
t <- gsub("(\\\\)", "\\\\\\\\", s)
t <- gsub("(\n)", "\\\\n", t)
t <- gsub("(\r)", "\\\\r", t)
t <- gsub("(\")", "\\\\\"", t)
return(t)
}
prepStr <- function(s) {
t <- escapeString(s)
u <- eval(parse(text=paste0("\"", t, "\"")))
if(s!=u) stop("Unable to escape string!")
t <- paste0("\thtml <- \"", t, "\"")
  utils::writeClipboard(t) # Windows-only; use clipr::write_clip() for a cross-platform alternative
return(invisible())
}
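A quick round-trip check shows what escapeString() guarantees: the escaped text, parsed back as an R string literal, reproduces the original exactly (the function body is restated here verbatim so the snippet runs standalone):

```r
# escapeString() turns a raw string into source-code form; parsing the
# escaped text back as an R literal must reproduce the original exactly.
escapeString <- function(s) {
  t <- gsub("(\\\\)", "\\\\\\\\", s)
  t <- gsub("(\n)", "\\\\n", t)
  t <- gsub("(\r)", "\\\\r", t)
  t <- gsub("(\")", "\\\\\"", t)
  return(t)
}
s <- "line1\r\nhas \"quotes\" and a backslash: \\"
t <- escapeString(s)
u <- eval(parse(text = paste0("\"", t, "\"")))
stopifnot(identical(s, u))  # same check prepStr() performs internally
```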
evaluationMode <- "sequential"
processingLibrary <- "dplyr"
description <- "test: sequential dplyr"
countFunction <- "n()"
isDevelopmentVersion <- (length(strsplit(packageDescription("pivottabler")$Version, "\\.")[[1]]) > 3)
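The development-version check relies on development builds of the package carrying a four-part version number; the same logic on literal version strings (the helper name isDev is illustrative only):

```r
# A release version has three components (e.g. "1.5.0"); development
# builds append a fourth (e.g. "1.5.0.9000").
isDev <- function(v) length(strsplit(v, "\\.")[[1]]) > 3
stopifnot(isDev("1.5.0.9000"), !isDev("1.5.0"))
```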
testScenarios <- function(description="test", releaseEvaluationMode="batch", releaseProcessingLibrary="dplyr", runAllForReleaseVersion=FALSE) {
isDevelopmentVersion <- (length(strsplit(packageDescription("pivottabler")$Version, "\\.")[[1]]) > 3)
if(isDevelopmentVersion||runAllForReleaseVersion) {
evaluationModes <- c("sequential", "batch")
processingLibraries <- c("dplyr", "data.table")
}
else {
evaluationModes <- releaseEvaluationMode
processingLibraries <- releaseProcessingLibrary
}
testCount <- length(evaluationModes)*length(processingLibraries)
c1 <- character(testCount)
c2 <- character(testCount)
c3 <- character(testCount)
c4 <- character(testCount)
testCount <- 0
for(evaluationMode in evaluationModes)
for(processingLibrary in processingLibraries) {
testCount <- testCount + 1
c1[testCount] <- evaluationMode
c2[testCount] <- processingLibrary
c3[testCount] <- paste0(description, ": ", evaluationMode, " ", processingLibrary)
c4[testCount] <- ifelse(processingLibrary=="data.table", ".N", "n()")
}
df <- data.frame(evaluationMode=c1, processingLibrary=c2, description=c3, countFunction=c4, stringsAsFactors=FALSE)
return(df)
}
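The nested loops in testScenarios() build a Cartesian product of evaluation modes and processing libraries; base R's expand.grid() expresses the same grid more compactly (a sketch that omits the release-version branch; row order differs from the nested-loop version):

```r
# Cartesian product of test dimensions via expand.grid().
evaluationModes <- c("sequential", "batch")
processingLibraries <- c("dplyr", "data.table")
grid <- expand.grid(evaluationMode = evaluationModes,
                    processingLibrary = processingLibraries,
                    stringsAsFactors = FALSE)
grid$description <- paste0("test: ", grid$evaluationMode, " ", grid$processingLibrary)
# data.table counts rows with .N; dplyr uses n()
grid$countFunction <- ifelse(grid$processingLibrary == "data.table", ".N", "n()")
stopifnot(nrow(grid) == 4)
```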
context("THEMING TESTS")
scenarios <- testScenarios("theming tests: basic test")
for(i in 1:nrow(scenarios)) {
evaluationMode <- scenarios$evaluationMode[i]
processingLibrary <- scenarios$processingLibrary[i]
description <- scenarios$description[i]
countFunction <- scenarios$countFunction[i]
test_that(description, {
# define the colours
orangeColors <- list(
headerBackgroundColor = "rgb(237, 125, 49)",
headerColor = "rgb(255, 255, 255)",
cellBackgroundColor = "rgb(255, 255, 255)",
cellColor = "rgb(0, 0, 0)",
totalBackgroundColor = "rgb(248, 198, 165)",
totalColor = "rgb(0, 0, 0)",
borderColor = "rgb(198, 89, 17)"
)
# create the pivot table
library(pivottabler)
pt <- PivotTable$new(processingLibrary=processingLibrary, evaluationMode=evaluationMode)
pt$addData(bhmtrains)
pt$addColumnDataGroups("TrainCategory")
pt$addRowDataGroups("TOC")
pt$defineCalculation(calculationName="TotalTrains", summariseExpression=countFunction)
pt$theme <- getSimpleColoredTheme(parentPivot=pt, colors=orangeColors, fontName="Garamond, arial")
pt$evaluatePivot()
# pt$renderPivot()
# sum(pt$cells$asMatrix(), na.rm=TRUE)
# prepStr(as.character(pt$getHtml()))
# prepStr(as.character(pt$getCss()))
html <- "<table class=\"Table\">\n <tr>\n <th class=\"ColumnHeader\"> </th>\n <th class=\"ColumnHeader\">Express Passenger</th>\n <th class=\"ColumnHeader\">Ordinary Passenger</th>\n <th class=\"ColumnHeader\">Total</th>\n </tr>\n <tr>\n <th class=\"RowHeader\">Arriva Trains Wales</th>\n <td class=\"Cell\">3079</td>\n <td class=\"Cell\">830</td>\n <td class=\"Total\">3909</td>\n </tr>\n <tr>\n <th class=\"RowHeader\">CrossCountry</th>\n <td class=\"Cell\">22865</td>\n <td class=\"Cell\">63</td>\n <td class=\"Total\">22928</td>\n </tr>\n <tr>\n <th class=\"RowHeader\">London Midland</th>\n <td class=\"Cell\">14487</td>\n <td class=\"Cell\">33792</td>\n <td class=\"Total\">48279</td>\n </tr>\n <tr>\n <th class=\"RowHeader\">Virgin Trains</th>\n <td class=\"Cell\">8594</td>\n <td class=\"Cell\"></td>\n <td class=\"Total\">8594</td>\n </tr>\n <tr>\n <th class=\"RowHeader\">Total</th>\n <td class=\"Total\">49025</td>\n <td class=\"Total\">34685</td>\n <td class=\"Total\">83710</td>\n </tr>\n</table>"
css <- ".Table {display: table; border-collapse: collapse; border: 2px solid rgb(198, 89, 17); }\r\n.ColumnHeader {font-family: Garamond, arial; font-size: 0.75em; padding: 2px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: center; font-weight: bold; color: rgb(255, 255, 255); background-color: rgb(237, 125, 49); }\r\n.RowHeader {font-family: Garamond, arial; font-size: 0.75em; padding: 2px 8px 2px 2px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: left; font-weight: bold; color: rgb(255, 255, 255); background-color: rgb(237, 125, 49); }\r\n.Cell {font-family: Garamond, arial; font-size: 0.75em; padding: 2px 2px 2px 8px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: right; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); }\r\n.OutlineColumnHeader {font-family: Garamond, arial; font-size: 0.75em; padding: 2px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: center; font-weight: bold; color: rgb(255, 255, 255); background-color: rgb(237, 125, 49); }\r\n.OutlineRowHeader {font-family: Garamond, arial; font-size: 0.75em; padding: 2px 8px 2px 2px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: left; font-weight: bold; color: rgb(255, 255, 255); background-color: rgb(237, 125, 49); }\r\n.OutlineCell {font-family: Garamond, arial; font-size: 0.75em; padding: 2px 2px 2px 8px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: right; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); font-weight: bold; }\r\n.Total {font-family: Garamond, arial; font-size: 0.75em; padding: 2px 2px 2px 8px; border: 1px solid rgb(198, 89, 17); vertical-align: middle; text-align: right; color: rgb(0, 0, 0); background-color: rgb(248, 198, 165); }\r\n"
expect_equal(sum(pt$cells$asMatrix(), na.rm=TRUE), 334840)
expect_identical(as.character(pt$getHtml()), html)
expect_identical(pt$getCss(), css)
})
}
scenarios <- testScenarios("theming tests: applying styling multiple times to the same cell", runAllForReleaseVersion=TRUE)
for(i in 1:nrow(scenarios)) {
if(!isDevelopmentVersion) break
evaluationMode <- scenarios$evaluationMode[i]
processingLibrary <- scenarios$processingLibrary[i]
description <- scenarios$description[i]
countFunction <- scenarios$countFunction[i]
test_that(description, {
library(pivottabler)
pt <- PivotTable$new(processingLibrary=processingLibrary, evaluationMode=evaluationMode,
compatibility=list(totalStyleIsCellStyle=TRUE))
pt$addData(bhmtrains)
pt$addColumnDataGroups("TrainCategory")
pt$addRowDataGroups("TOC")
pt$defineCalculation(calculationName="TotalTrains", summariseExpression=countFunction)
pt$evaluatePivot()
grps <- pt$rowGroup$childGroups
pt$setStyling(groups=grps, declarations=list("font-weight"="normal"))
pt$setStyling(groups=grps, declarations=list("color"="blue"))
cells <- pt$getCells(rowNumbers=4)
pt$setStyling(cells=cells, declarations=list("font-weight"="bold"))
pt$setStyling(cells=cells, declarations=list("color"="green"))
pt$setStyling(2, 1, declarations=list("color"="red"))
pt$setStyling(2, 1, declarations=list("font-weight"="bold"))
pt$renderPivot()
# prepStr(as.character(pt$getHtml()))
html <- "<table class=\"Table\">\n <tr>\n <th class=\"RowHeader\"> </th>\n <th class=\"ColumnHeader\">Express Passenger</th>\n <th class=\"ColumnHeader\">Ordinary Passenger</th>\n <th class=\"ColumnHeader\">Total</th>\n </tr>\n <tr>\n <th class=\"RowHeader\" style=\"font-weight: normal; color: blue; \">Arriva Trains Wales</th>\n <td class=\"Cell\">3079</td>\n <td class=\"Cell\">830</td>\n <td class=\"Cell\">3909</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" style=\"font-weight: normal; color: blue; \">CrossCountry</th>\n <td class=\"Cell\" style=\"color: red; font-weight: bold; \">22865</td>\n <td class=\"Cell\">63</td>\n <td class=\"Cell\">22928</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" style=\"font-weight: normal; color: blue; \">London Midland</th>\n <td class=\"Cell\">14487</td>\n <td class=\"Cell\">33792</td>\n <td class=\"Cell\">48279</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" style=\"font-weight: normal; color: blue; \">Virgin Trains</th>\n <td class=\"Cell\" style=\"font-weight: bold; color: green; \">8594</td>\n <td class=\"Cell\" style=\"font-weight: bold; color: green; \"></td>\n <td class=\"Cell\" style=\"font-weight: bold; color: green; \">8594</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" style=\"font-weight: normal; color: blue; \">Total</th>\n <td class=\"Cell\">49025</td>\n <td class=\"Cell\">34685</td>\n <td class=\"Cell\">83710</td>\n </tr>\n</table>"
expect_identical(as.character(pt$getHtml()), html)
})
}
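The test above relies on later setStyling() calls merging with, and overriding, earlier declarations on the same cell. That merge semantics can be modelled with base R's modifyList(), later values winning on key collision (an illustration of the expected behaviour, not pivottabler's internal code):

```r
# Model of style-declaration merging: the second call's declarations are
# laid over the first call's.
s1 <- list("font-weight" = "normal", "color" = "blue")
s2 <- list("color" = "red")
merged <- utils::modifyList(s1, s2)
# Non-colliding keys survive; colliding keys take the later value.
stopifnot(identical(merged$`font-weight`, "normal"),
          identical(merged$color, "red"))
```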
# The code for the following tests runs OK in R-Studio
# But errors when used with testthat on my PC:
# Error PivotDataGroup.R:294: object of type 'closure' is not subsettable
# The error is something to do with:
# styleDeclarations=list("color"="red", "font-weight"="bold", "background-color"="yellow")
# scenarios <- testScenarios("theming tests: styling data groups")
# for(i in 1:nrow(scenarios)) {
# evaluationMode <- scenarios$evaluationMode[i]
# processingLibrary <- scenarios$processingLibrary[i]
# description <- scenarios$description[i]
# countFunction <- scenarios$countFunction[i]
#
# test_that(description, {
#
# library(pivottabler)
# pt <- PivotTable$new(processingLibrary=processingLibrary, evaluationMode=evaluationMode)
# pt$addData(bhmtrains)
# pt$addColumnDataGroups("TrainCategory", styleDeclarations=list("color"="red", "font-weight"="bold", "background-color"="yellow"))
# pt$addRowDataGroups("TOC")
# pt$defineCalculation(calculationName="TotalTrains", summariseExpression=countFunction)
# pt$evaluatePivot()
# # pt$renderPivot()
# # sum(pt$cells$asMatrix(), na.rm=TRUE)
# # prepStr(as.character(pt$getHtml()))
# # prepStr(as.character(pt$getCss()))
# html <- "<table class=\"Table\">\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\" colspan=\"1\"> </th>\n <th class=\"ColumnHeader\" style=\"color: red; font-weight: bold; background-color: yellow; \" colspan=\"1\">Express Passenger</th>\n <th class=\"ColumnHeader\" style=\"color: red; font-weight: bold; background-color: yellow; \" colspan=\"1\">Ordinary Passenger</th>\n <th class=\"ColumnHeader\" style=\"color: red; font-weight: bold; background-color: yellow; \" colspan=\"1\">Total</th>\n </tr>\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\">Arriva Trains Wales</th>\n <td class=\"Cell\">3079</td>\n <td class=\"Cell\">830</td>\n <td class=\"Cell\">3909</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\">CrossCountry</th>\n <td class=\"Cell\">22865</td>\n <td class=\"Cell\">63</td>\n <td class=\"Cell\">22928</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\">London Midland</th>\n <td class=\"Cell\">14487</td>\n <td class=\"Cell\">33792</td>\n <td class=\"Cell\">48279</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\">Virgin Trains</th>\n <td class=\"Cell\">8594</td>\n <td class=\"Cell\"></td>\n <td class=\"Cell\">8594</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\">Total</th>\n <td class=\"Cell\">49025</td>\n <td class=\"Cell\">34685</td>\n <td class=\"Cell\">83710</td>\n </tr>\n</table>"
# css <- ".Table {border-collapse: collapse; }\r\n.ColumnHeader {font-family: Arial; font-size: 0.75em; padding: 2px; border: 1px solid lightgray; vertical-align: middle; text-align: center; font-weight: bold; background-color: #F2F2F2; }\r\n.RowHeader {font-family: Arial; font-size: 0.75em; padding: 2px 8px 2px 2px; border: 1px solid lightgray; vertical-align: middle; text-align: left; font-weight: bold; background-color: #F2F2F2; }\r\n.Cell {font-family: Arial; font-size: 0.75em; padding: 2px 2px 2px 8px; border: 1px solid lightgray; vertical-align: middle; text-align: right; }\r\n"
#
# expect_equal(sum(pt$cells$asMatrix(), na.rm=TRUE), 334840)
# expect_identical(as.character(pt$getHtml()), html)
# expect_identical(pt$getCss(), css)
# })
# }
#
#
# scenarios <- testScenarios("theming tests: styling cells")
# for(i in 1:nrow(scenarios)) {
# evaluationMode <- scenarios$evaluationMode[i]
# processingLibrary <- scenarios$processingLibrary[i]
# description <- scenarios$description[i]
# countFunction <- scenarios$countFunction[i]
#
# test_that(description, {
#
# library(pivottabler)
# pt <- PivotTable$new(processingLibrary=processingLibrary, evaluationMode=evaluationMode)
# pt$styles$addStyle(styleName="NewHeadingStyle1", list(
# "font-family"="Arial",
# "font-size"="0.75em",
# padding="2px",
# border="1px solid lightgray",
# "vertical-align"="middle",
# "text-align"="center",
# "font-weight"="bold",
# "background-color"="Gold",
# "xl-wrap-text"="wrap"
# ))
# pt$styles$addStyle(styleName="CellStyle1", list(
# "font-family"="Arial",
# "font-size"="0.75em",
# padding="2px 2px 2px 8px",
# border="1px solid lightgray",
# "vertical-align"="middle",
# "background-color"="Yellow",
# "text-align"="right"
# ))
# pt$addData(bhmtrains)
# pt$addColumnDataGroups("TrainCategory")
# pt$addRowDataGroups("TOC")
# pt$defineCalculation(calculationName="TotalTrains1", summariseExpression=countFunction,
# headingBaseStyleName="NewHeadingStyle1", cellBaseStyleName="CellStyle1")
# pt$defineCalculation(calculationName="TotalTrains2", summariseExpression=countFunction,
# headingStyleDeclarations=list("color"="red", "font-weight"="bold"),
# cellStyleDeclarations=list("color"="blue"))
# pt$evaluatePivot()
# # pt$renderPivot()
# # sum(pt$cells$asMatrix(), na.rm=TRUE)
# # prepStr(as.character(pt$getHtml()))
# # prepStr(as.character(pt$getCss()))
# html <- "<table class=\"Table\">\n <tr>\n <th class=\"RowHeader\" rowspan=\"2\" colspan=\"1\"> </th>\n <th class=\"ColumnHeader\" colspan=\"2\">Express Passenger</th>\n <th class=\"ColumnHeader\" colspan=\"2\">Ordinary Passenger</th>\n <th class=\"ColumnHeader\" colspan=\"2\">Total</th>\n </tr>\n <tr>\n <th class=\"NewHeadingStyle1\" colspan=\"1\">TotalTrains1</th>\n <th class=\"ColumnHeader\" style=\"color: red; font-weight: bold; \" colspan=\"1\">TotalTrains2</th>\n <th class=\"NewHeadingStyle1\" colspan=\"1\">TotalTrains1</th>\n <th class=\"ColumnHeader\" style=\"color: red; font-weight: bold; \" colspan=\"1\">TotalTrains2</th>\n <th class=\"NewHeadingStyle1\" colspan=\"1\">TotalTrains1</th>\n <th class=\"ColumnHeader\" style=\"color: red; font-weight: bold; \" colspan=\"1\">TotalTrains2</th>\n </tr>\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\">Arriva Trains Wales</th>\n <td class=\"CellStyle1\">3079</td>\n <td class=\"Cell\" style=\"color: blue; \">3079</td>\n <td class=\"CellStyle1\">830</td>\n <td class=\"Cell\" style=\"color: blue; \">830</td>\n <td class=\"CellStyle1\">3909</td>\n <td class=\"Cell\" style=\"color: blue; \">3909</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\">CrossCountry</th>\n <td class=\"CellStyle1\">22865</td>\n <td class=\"Cell\" style=\"color: blue; \">22865</td>\n <td class=\"CellStyle1\">63</td>\n <td class=\"Cell\" style=\"color: blue; \">63</td>\n <td class=\"CellStyle1\">22928</td>\n <td class=\"Cell\" style=\"color: blue; \">22928</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\">London Midland</th>\n <td class=\"CellStyle1\">14487</td>\n <td class=\"Cell\" style=\"color: blue; \">14487</td>\n <td class=\"CellStyle1\">33792</td>\n <td class=\"Cell\" style=\"color: blue; \">33792</td>\n <td class=\"CellStyle1\">48279</td>\n <td class=\"Cell\" style=\"color: blue; \">48279</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\">Virgin Trains</th>\n <td class=\"CellStyle1\">8594</td>\n <td 
class=\"Cell\" style=\"color: blue; \">8594</td>\n <td class=\"CellStyle1\"></td>\n <td class=\"Cell\" style=\"color: blue; \"></td>\n <td class=\"CellStyle1\">8594</td>\n <td class=\"Cell\" style=\"color: blue; \">8594</td>\n </tr>\n <tr>\n <th class=\"RowHeader\" rowspan=\"1\">Total</th>\n <td class=\"CellStyle1\">49025</td>\n <td class=\"Cell\" style=\"color: blue; \">49025</td>\n <td class=\"CellStyle1\">34685</td>\n <td class=\"Cell\" style=\"color: blue; \">34685</td>\n <td class=\"CellStyle1\">83710</td>\n <td class=\"Cell\" style=\"color: blue; \">83710</td>\n </tr>\n</table>"
# css <- ".Table {border-collapse: collapse; }\r\n.ColumnHeader {font-family: Arial; font-size: 0.75em; padding: 2px; border: 1px solid lightgray; vertical-align: middle; text-align: center; font-weight: bold; background-color: #F2F2F2; }\r\n.RowHeader {font-family: Arial; font-size: 0.75em; padding: 2px 8px 2px 2px; border: 1px solid lightgray; vertical-align: middle; text-align: left; font-weight: bold; background-color: #F2F2F2; }\r\n.Cell {font-family: Arial; font-size: 0.75em; padding: 2px 2px 2px 8px; border: 1px solid lightgray; vertical-align: middle; text-align: right; }\r\n.NewHeadingStyle1 {font-family: Arial; font-size: 0.75em; padding: 2px; border: 1px solid lightgray; vertical-align: middle; text-align: center; font-weight: bold; background-color: Gold; }\r\n.CellStyle1 {font-family: Arial; font-size: 0.75em; padding: 2px 2px 2px 8px; border: 1px solid lightgray; vertical-align: middle; background-color: Yellow; text-align: right; }\r\n"
#
# expect_equal(sum(pt$cells$asMatrix(), na.rm=TRUE), 669680)
# expect_identical(as.character(pt$getHtml()), html)
# expect_identical(pt$getCss(), css)
# })
# }
# Direct Marketing Data Set
data <- read.csv("C:\\Users\\mainak\\Desktop\\DADAI\\Data-Science\\Data Science With R\\Linear Regression\\DirectMarketing.csv")
library(dplyr)
library(ggplot2)
#Purpose of this model is to identify good customers based on the AmountSpent dependent variable.
##################################################################################################################################
#Exploratory Data Analysis
str(data)
#1. Age and Amount Spent
plot(data$Age,data$AmountSpent,col="red")
#Age groups Middle and Old have a similar spending trend, so we can combine both categories into one.
#Create New Factor Variable Age1
data$Age1 <- as.factor(ifelse(data$Age!='Young','Middle-Old',as.character(data$Age)))
summary(data$Age1)
plot(data$Age1,data$AmountSpent,col="red")
#2. Gender and Amount Spent
summary(data$Gender)
plot(data$Gender,data$AmountSpent,col="red")
#3. Own house and Amount Spent
summary(data$OwnHome)
plot(data$OwnHome,data$AmountSpent,col="red")
#4. Married and Amount Spent
summary(data$Married)
plot(data$Married,data$AmountSpent,col="red")
#5. Location and Amount Spent
summary(data$Location)
plot(data$Location,data$AmountSpent,col="red")
#6. Salary and Amount Spent
summary(data$Salary)
plot(data$Salary,data$AmountSpent,col="red")
#Possible heteroscedasticity, because a funnel-like shape is forming
#7. Children and Amount Spent
summary(data$Children)
data$Children<-as.factor(data$Children) # Converting integer variable to factor variable
plot(data$Children,data$AmountSpent,col="red")
#People with 2 and 3 children have similar spending behaviour
#Let's combine the two groups into one level, "3-2"
data$Children1<-as.factor(ifelse(data$Children==3|data$Children==2,"3-2",as.character(data$Children)))
summary(data$Children1)
plot(data$Children1,data$AmountSpent,col="red")
#8. History and Amount Spent
summary(data$History)
tapply(data$AmountSpent,data$History,mean)
#Impute/Replace Missing values
ind<-which(is.na(data$History))
mean(data[ind,"AmountSpent"])
#The mean of the missing category is close to that of the Medium category
#Let's plot the Missing and Medium categories
Amount_Medium<-data%>%filter(History=="Medium")%>%select(AmountSpent)
p<-ggplot(data[ind,],aes(x=AmountSpent))
q<-ggplot(Amount_Medium,aes(x=AmountSpent))
p+geom_histogram()
q+geom_histogram()
#Create a category called Missing.
#Use as.character() inside ifelse(): as.factor() there would return the
#underlying integer codes rather than the labels.
data$History1<-as.factor(ifelse(is.na(data$History),"Missing",as.character(data$History)))
summary(data$History1)
#Label the levels
data$History1<-factor(data$History1,labels=c("High","Low","Medium","Missing"))
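The NA-to-level recoding can be checked on toy data; going through as.character() preserves the original labels, so the resulting levels come out in the expected order without relying on factor integer codes (toy vector, hypothetical values):

```r
# Fold NA into an explicit "Missing" level while keeping original labels.
h <- factor(c("High", NA, "Medium", "Low", NA))
h1 <- factor(ifelse(is.na(h), "Missing", as.character(h)))
stopifnot(identical(levels(h1), c("High", "Low", "Medium", "Missing")),
          sum(h1 == "Missing") == 2)
```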
#9. Catalogues And Amount Spent
summary(data$Catalogs)
plot(data$Catalogs,data$AmountSpent,col="red")
#########################################################################################################################
# Model building and checking for significant variables
# Let's remove the variables for which new variables were created:
#Age, Children, History
data1<-data[,-c(1,7,8)]
mod<-lm(AmountSpent~.,data=data1)
summary(mod)
#Significant variables in the model:
#Gender + Location + Salary + Catalogs + Children1 + History1
#Let's do a stepwise selection to determine the significant variables
step(mod,direction = "both")
#step() also gives the significant variables below:
#Gender + Location + Salary + Catalogs + Children1 + History1
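step() with direction = "both" repeatedly adds and drops terms to minimise AIC; a self-contained illustration on the built-in mtcars data (not the marketing data set):

```r
# Stepwise AIC selection on built-in data; trace = 0 suppresses the log.
full <- lm(mpg ~ wt + hp + qsec + drat, data = mtcars)
best <- step(full, direction = "both", trace = 0)
# The selected model's AIC can never exceed the starting model's.
stopifnot(inherits(best, "lm"), AIC(best) <= AIC(full))
```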
#---------------------------------------------------------------------------------------------#
#Let's build the new model with the significant variables given by the step() function
mod2<-lm(formula = AmountSpent ~ Gender + Location + Salary + Catalogs + Children1 + History1, data = data1)
summary(mod2)
#--------------------------------------------------------------------------------------------------------------#
#Remove insignificant variables:
#HistoryMissing
#GenderMale
#Create dummy variables
data1$Male_d<-ifelse(data1$Gender=="Male",1,0)
data1$Female_d<-ifelse(data1$Gender=="Female",1,0)
data1$Missing_d<-ifelse(data$History1=="Missing",1,0)
data1$Low_d<-ifelse(data$History1=="Low",1,0)
data1$Med_d<-ifelse(data$History1=="Medium",1,0)
data1$High_d<-ifelse(data$History1=="High",1,0)
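Hand-rolled ifelse() dummies like the ones above are equivalent to what model.matrix() produces automatically from a factor (toy vector; the reference level gets no column of its own):

```r
# model.matrix() expands a factor into 0/1 dummy columns, one per
# non-reference level ("High" is the reference here).
h <- factor(c("High", "Low", "Medium", "Missing", "Low"))
mm <- model.matrix(~ h)
stopifnot(identical(colnames(mm),
                    c("(Intercept)", "hLow", "hMedium", "hMissing")),
          all(mm[, "hLow"] == c(0, 1, 0, 0, 1)))
```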
mod3<-lm(formula = AmountSpent ~ Male_d + Location + Salary + Catalogs + Children1+Med_d+Low_d , data = data1)
summary(mod3)
#------------------------------------------------------------------------------------------------------------#
# Remove the male dummy as it also seems to be insignificant
mod4<-lm(formula = AmountSpent ~ Location + Salary + Catalogs + Children1+Med_d+Low_d, data = data1)
summary(mod4)
#############################################################################################################################
#Model validation
# 1. Assumption of normality check using Histogram and quantile plot
library(car)
hist(mod4$residuals)
qqPlot(mod4$residuals) # quantile-quantile plot belonging to the car package
# Some dispersion at the tails; non-normal behaviour observed
#2. Multicollinearity check
vif(mod4)
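vif() from the car package computes, for each predictor, 1 / (1 - R²) from regressing that predictor on the other predictors; the same number by hand on toy data (hypothetical correlated predictors, not the marketing data):

```r
# VIF by hand: regress one predictor on the others; VIF = 1 / (1 - R^2).
set.seed(1)
x1 <- rnorm(200)
x2 <- x1 + rnorm(200, sd = 0.5)   # strongly correlated with x1
r2 <- summary(lm(x1 ~ x2))$r.squared
vif_x1 <- 1 / (1 - r2)
stopifnot(vif_x1 > 2)             # clearly inflated by the collinearity
```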
#3. Constant variance check using fitted values and residuals
plot(mod4$fitted.values,mod4$residuals)
#Funnel shape is observed
#--------------------------------------------------------------------------------------------------------------------#
# From the above checks we can conclude that the model is not appropriate: it is heteroscedastic in nature.
# So let's apply a remedy: a log transformation.
mod5<-lm(formula = log(AmountSpent) ~ Location + Salary + Catalogs + Children1+Med_d+Low_d, data = data1)
summary(mod5)
qqPlot(mod5$residuals)#qqplot looks okay
plot(mod5$fitted.values,mod5$residuals)# Still a funnel shape
#---------------------------------------------------------------------------------------------------------------------#
#Square Root Transformation
mod6<-lm(formula = sqrt(AmountSpent) ~ Location + Salary + Catalogs + Children1+Med_d+Low_d, data = data1)
summary(mod6)
qqPlot(mod6$residuals)
plot(mod6$fitted.values,mod6$residuals)
# Thus the QQ plot and the constant-variance check look okay after applying the square-root transformation
# This can be considered a good model
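One practical note on the square-root model: predictions come back on the square-root scale and must be squared to return to the original units (toy model on the built-in cars data, standing in for mod6):

```r
# Fit on a transformed response, then back-transform predictions.
m <- lm(sqrt(dist) ~ speed, data = cars)
pred_sqrt <- predict(m, newdata = data.frame(speed = 15))
pred_original <- unname(pred_sqrt^2)   # back to the original units
stopifnot(is.finite(pred_original), pred_original > 0)
```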
| /Linear Regression/Linear Regression.R | no_license | mainak-paul/Data-Science | R | false | false | 6,060 | r |
# Direct Marketting Data Set
data <- read.csv("C:\\Users\\mainak\\Desktop\\DADAI\\Data-Science\\Data Science With R\\Linear Regression\\DirectMarketing.csv")
library(dplyr)
library(ggplot2)
#Purpose of this model is to determine good customer based on AMountSpent dependent variable.
##################################################################################################################################
#Exploratory Data Analysis
str(data)
#1. Age and Amount Spent
plot(data$Age,data$AmountSpent,col="red")
#Age group Middle and Old have similar spending trend , we can combine both category into one.
#Create New Factor Variable Age1
data$Age1 <- as.factor(ifelse(data$Age!='Young','Middle-Old',as.character(data$Age)))
summary(data$Age1)
plot(data$Age1,data$AmountSpent,col="red")
#2. Gender and Amount Spent
summary(data$Gender)
plot(data$Gender,data$AmountSpent,col="red")
#3. Own house and Amount Spent
summary(data$OwnHome)
plot(data$OwnHome,data$AmountSpent,col="red")
#4. Married and Amount Spent
summary(data$Married)
plot(data$Married,data$AmountSpent,col="red")
#5. Location and Amount Spent
summary(data$Location)
plot(data$Location,data$AmountSpent,col="red")
#6. Salary and Amount Spent
summary(data$Salary)
plot(data$Salary,data$AmountSpent,col="red")
#Might be heteroescadasticity because a funnel like shape is forming
#7. Children and Amount Spent
summary(data$Children)
data$Children<-as.factor(data$Children) # Converting integer variable to factor variable
plot(data$Children,data$AmountSpent,col="red")
#People with 2 and 3 children have sinilar spending behaviour
#Lets combine the 2 groups into one "3-2"
data$Children1<-as.factor(ifelse(data$Children==3|data$Children==2,"3-2",as.character(data$Children)))
summary(data$Children1)
plot(data$Children1,data$AmountSpent,col="red")
#8. History and Amount Spent
summary(data$History)
tapply(data$AmountSpent,data$History,mean)
#Impute/Replace Missing values
ind<-which(is.na(data$History))
mean(data[ind,"AmountSpent"])
#Mean of missing category is close to medium category
#Lets plot Missing and medium category
Amount_Medium<-data%>%filter(History=="Medium")%>%select(AmountSpent)
p<-ggplot(data[ind,],aes(x=AmountSpent))
q<-ggplot(Amount_Medium,aes(x=AmountSpent))
p+geom_histogram()
q+geom_histogram()
#Create a category called missing.
data$History1<-as.factor(ifelse(is.na(data$History),"Missing",as.factor(data$History)))
summary(data$History1)
#Label the levels
data$History1<-factor(data$History1,labels=c("High","Low","Medium","Missing"))
#9. Catalogues And Amount Spent
summary(data$Catalogs)
plot(data$Catalogs,data$AmountSpent,col="red")
#########################################################################################################################3
# Model building and checking for significant variables
# Let's remove the variables for which new variables were created:
# Age, Children, History
data1<-data[,-c(1,7,8)]
mod<-lm(AmountSpent~.,data=data1)
summary(mod)
#Significant variables from the model:
#Gender + Location + Salary + Catalogs + Children1 + History1
#Let's do stepwise selection to confirm the significant variables
step(mod,direction = "both")
#step() reports the same set of significant variables:
#Gender + Location + Salary + Catalogs + Children1 + History1
#---------------------------------------------------------------------------------------------#
#Let's build a new model with the significant variables given by the step() function
mod2<-lm(formula = AmountSpent ~ Gender + Location + Salary + Catalogs + Children1 + History1, data = data1)
summary(mod2)
#--------------------------------------------------------------------------------------------------------------#
#Remove insignificant variables:
#HistoryMissing
#GenderMale
#Create dummy variables
data1$Male_d<-ifelse(data1$Gender=="Male",1,0)
data1$Female_d<-ifelse(data1$Gender=="Female",1,0)
data1$Missing_d<-ifelse(data$History1=="Missing",1,0)
data1$Low_d<-ifelse(data$History1=="Low",1,0)
data1$Med_d<-ifelse(data$History1=="Medium",1,0)
data1$High_d<-ifelse(data$History1=="High",1,0)
mod3<-lm(formula = AmountSpent ~ Male_d + Location + Salary + Catalogs + Children1+Med_d+Low_d , data = data1)
summary(mod3)
#------------------------------------------------------------------------------------------------------------#
# Remove the male dummy as it also seems to be insignificant
mod4<-lm(formula = AmountSpent ~ Location + Salary + Catalogs + Children1+Med_d+Low_d, data = data1)
summary(mod4)
#############################################################################################################################
#Model validation
# 1. Assumption of normality check using Histogram and quantile plot
library(car)
hist(mod4$residuals)
qqPlot(mod4$residuals) # quantile-quantile plot belonging to the car package
# Some dispersion at the tails; non-normal behaviour observed
#2. Multicollinearity check
vif(mod4)
#3. Constant variance check using fitted values and residuals
plot(mod4$fitted.values,mod4$residuals)
#Funnel shape is observed
#--------------------------------------------------------------------------------------------------------------------#
# From the checks above we can conclude that the model is not appropriate: it is heteroscedastic.
# Let's apply a remedy: a log transformation of the response.
mod5<-lm(formula = log(AmountSpent) ~ Location + Salary + Catalogs + Children1+Med_d+Low_d, data = data1)
summary(mod5)
qqPlot(mod5$residuals)#qqplot looks okay
plot(mod5$fitted.values,mod5$residuals)# Still funnel
#---------------------------------------------------------------------------------------------------------------------#
#Square Root Transformation
mod6<-lm(formula = sqrt(AmountSpent) ~ Location + Salary + Catalogs + Children1+Med_d+Low_d, data = data1)
summary(mod6)
qqPlot(mod6$residuals)
plot(mod6$fitted.values,mod6$residuals)
# The QQ plot and constant-variance check look okay after the square-root transformation
# This can be considered a good model
#titanic
#on Home
setwd("/home/chase/Dropbox/R Code/ML")
#on Laptop
setwd("/home/chasedehan/Dropbox/R Code/ML")
#on Work
setwd("C:/Users/cdehan/Dropbox/R Code/ML")
library(dplyr)
library(rpart) #recursive partitioning and regression trees
######################################################################
############## Getting data into the appropriate form ################
######################################################################
train <- read.csv("titanic_train.csv", stringsAsFactors=FALSE)
test <- read.csv("titanic_test.csv", stringsAsFactors=FALSE)
##########Clean up the data ###########################################
#Need to merge the training and test data together, but test doesn't have a "Survived" column, so we create it
test$Survived <- NA
#merge test and train
all_data <- data.frame(rbind(train, test))
#We can also split the title out of the name field, because there is more information contained in there
all_data$Name[1]
#convert strings from factors to character
all_data$Name <- as.character(all_data$Name)
#splitting the string apart on the comma and period because all the cells have these
#[[1]] because it actually returns a list and we want the first element
strsplit(all_data$Name[1], split = '[,.]')[[1]]
#we want the title, which comes right after the last name, and is the second element
strsplit(all_data$Name[1], split = '[,.]')[[1]][2]
#apply to every observation in dataframe with sapply
all_data$Title <- sapply(all_data$Name, FUN=function(x) {strsplit(x, split='[,.]')[[1]][2]})
#Impute the missing Embarked values with the most common port, "S"
all_data$Embarked[which(all_data$Embarked == "")] <- "S"
#Impute the missing Fare with the median
all_data$Fare[which(is.na(all_data$Fare))] <-median(all_data$Fare, na.rm=T)
#Predict the age and reassign the values
predicted_age <- rpart(Age ~ Pclass + Sex + SibSp + Parch + Embarked + Title, data = all_data[!is.na(all_data$Age),], method = "anova")
all_data$Age[is.na(all_data$Age)] <- predict(predicted_age, newdata = all_data[is.na(all_data$Age),] )
#convert the string variables back into factors so randomForest can process them
all_data$Title <- as.factor(all_data$Title)
all_data$Sex <- as.factor(all_data$Sex)
all_data$Embarked <- as.factor(all_data$Embarked)
#split back into test and train data
test_clean <- all_data[is.na(all_data$Survived),]
train_clean <- all_data[!is.na(all_data$Survived),]
#Creating internal training and test sets because we don't have labels for the Kaggle test set
# Set seed of 567 for reproducibility
set.seed(567)
# Store row numbers for training set: index_train
index_train <- sample(1:nrow(train_clean), 2/3 * nrow(train_clean))
# Create training set: training_set
train <- train_clean[index_train, ]
# Create test set: test_set
test <- train_clean[-index_train,]
######################################################################
###################### k-fold validation #############################
######################################################################
#see far down below for the use of the caret package
#can use cv.glm() for glm models in library(boot), which is the bootstrap library
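#A quick cv.glm() sketch (formula and K are illustrative assumptions, not the final model)
library(boot)
glm_fit <- glm(Survived ~ Pclass + Sex + Age, data = train_clean, family = binomial)
cv_err <- cv.glm(train_clean, glm_fit, K = 10) #10-fold CV handled by the package
cv_err$delta[1] #cross-validated prediction error estimate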
#Hand-rolled k-fold cross validation
#Randomly shuffle the data
train_shuffle<-train_clean[sample(nrow(train_clean)),]
#Create 10 equally sized folds, assigning a fold number to each row
#Change "breaks=" to however many folds we want
n_folds <- 10
folds <- cut(seq(1,nrow(train_shuffle)),breaks=n_folds,labels=FALSE)
#Perform 10-fold cross validation, storing one MSE per fold
cv_tmp <- rep(NA, n_folds)
for(i in 1:n_folds){
  #Segment the data by fold using the which() function
  testIndexes <- which(folds==i,arr.ind=TRUE)
  testData <- train_shuffle[testIndexes, ]
  trainData <- train_shuffle[-testIndexes, ]
  #Fit on the training folds (swap in whatever model/formula you want)
  fitted_model <- lm(Survived ~ Sex, data = trainData)
  #Predict on the held-out fold
  pred <- predict(fitted_model, newdata = testData)
  #Store this fold's mean squared error
  cv_tmp[i] <- mean((testData$Survived - pred)^2)
}
cv <- mean(cv_tmp)
######################################################################
############## Decision Tree Methodology #############################
######################################################################
#A basic decision tree methodology
tree <- rpart(Survived ~ Pclass + Sex + Age + Parch + Fare + Embarked, data = train, method = "class")
plot(tree, uniform = T)
text(tree)
#predict values
tree_predict <- predict(tree, newdata = test, type = "class")
#Confusion Matrix -
conf_tree <- table(test$Survived, tree_predict)
conf_tree
acc_tree <- sum(diag(conf_tree)) / sum(conf_tree)
acc_tree
###Build for submission (use the model's predictions, not the known labels)
submission <- data.frame(PassengerId = test$PassengerId, Survived = tree_predict)
#Write to csv for upload
write.csv(submission, file = "titanic_submission.csv", row.names = FALSE)
##############
#fancy plot
library(rattle) #It has been a while back, but I believe I had to install libgtk2.0-dev
#enhanced tree plots
library(rpart.plot)
prp(tree)
fancyRpartPlot(tree)
#More color selection for fancy tree plots
library(RColorBrewer)
######################################################################
############ Random Forest Model Datacamp ##########################
######################################################################
#Handles the overfitting we saw with a single decision tree: it grows many trees on random
#subsets of the data and features, and the majority vote "wins" the outcome.
library(randomForest)
#Created the forest model object
forest_model <- randomForest(as.factor(Survived) ~ Pclass + Age + Title + Sex + Embarked,
data = train,
importance = T,
ntree= 1000)
rf_prediction <- predict(forest_model, newdata=test)
#look at the plot to see what is more important
varImpPlot(forest_model)
#Building the submission
conf_rf <- table(test$Survived, rf_prediction)
conf_rf
acc_rf <- sum(diag(conf_rf)) / sum(conf_rf)
acc_rf
#67%, which is not as good as the one above.
#We need to work on some diagnostics and model fitting
#moved up to 0.78469 when inserted Sex, Title, and Embarked to the model
######################################################################
################ Logistic Regression Model ##########################
######################################################################
#library(glmnet) has a number of glm applications, including lasso, ridge, etc. Link here: https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html
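#A hedged glmnet sketch (lasso-penalized logistic regression; formula and alpha are illustrative)
library(glmnet)
x_mat <- model.matrix(Survived ~ Pclass + Age + Sex + Embarked, data = train)[,-1] #drop intercept
cv_lasso <- cv.glmnet(x_mat, train$Survived, family = "binomial", alpha = 1) #alpha = 1 -> lasso
coef(cv_lasso, s = "lambda.min") #coefficients at the lambda with lowest CV error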
#Logistic regression, comparing logit, probit, and cloglog links
#Build the glm() model with family="binomial"
log_model <- glm(Survived ~ Pclass + Age + Sex + Embarked, family = binomial(link=logit), data=train)
library(modelr)
train <- add_residuals(train, log_model, var = "resid")
ggplot(train, aes(Survived, resid)) +
geom_point()
log_probit <- glm(Survived ~ Pclass + Age + Sex + Embarked, family = binomial(link=probit), data=train)
log_clog <- glm(Survived ~ Pclass + Age + Sex + Embarked, family = binomial(link=cloglog), data=train)
#Had to take Title out of the above model because there were titles in test, not present in train
#Predict probability of survival
log_predict <- predict(log_model, newdata = test, type = "response")
probit_predict <- predict(log_probit, newdata = test, type = "response")
clog_predict <- predict(log_clog, newdata = test, type = "response")
#look at the prediction values because we have to choose a cutoff
range(log_predict)
summary(log_predict)
#setting cutoff value at the mean value 0.37
log_predict <- ifelse(log_predict>0.37, 1, 0)
#then confusion matrix to compare accuracies
conf<- table(test$Survived, log_predict)
acc_logit <- sum(diag(conf)) / nrow(test)
#and doing it for the others
probit_predict <- ifelse(probit_predict>0.37, 1, 0)
conf<- table(test$Survived, probit_predict)
acc_probit <- sum(diag(conf)) / nrow(test)
clog_predict <- ifelse(clog_predict>0.37, 1, 0)
conf<- table(test$Survived, clog_predict)
acc_cloglog <- sum(diag(conf)) / nrow(test)
acc_logit
acc_probit #Scores highest, but barely
acc_cloglog
#Comparing with ROC curve
library(pROC)
auc(test$Survived, log_predict)
auc(test$Survived, probit_predict) #does best here
auc(test$Survived, clog_predict)
#It appears as though the probit provides the best results from Accuracy and AUC, so we should get better results
#We should also start removing some of the observations to see if we get more informative results
log_remove <- glm(Survived ~ Age + Sex + Embarked, family = binomial(link=probit), data=train)
pred_remove_Pclass <- predict(log_remove, newdata = test, type="response")
log_remove <- glm(Survived ~ Pclass + Sex + Embarked, family = binomial(link=probit), data=train)
pred_remove_Age<- predict(log_remove, newdata = test, type="response")
log_remove <- glm(Survived ~ Pclass + Age + Embarked, family = binomial(link=probit), data=train)
pred_remove_Sex<- predict(log_remove, newdata = test, type="response")
log_remove_embark <- glm(Survived ~ Pclass + Age + Sex, family = binomial(link=probit), data=train)
pred_remove_Embark<- predict(log_remove_embark, newdata = test, type="response")
#check the auc
auc(test$Survived, probit_predict)
auc(test$Survived, pred_remove_Sex)
auc(test$Survived, pred_remove_Embark)
auc(test$Survived, pred_remove_Age) #Really interesting that age removal results in higher AUC
auc(test$Survived, pred_remove_Pclass)
#We remove Embark and test with removing something else
log_remove <- glm(Survived ~ Pclass + Sex, family = binomial(link=probit), data=train)
pred_Emb_age<- predict(log_remove, newdata = test, type="response")
log_remove <- glm(Survived ~ Pclass + Age, family = binomial(link=probit), data=train)
pred_Emb_sex<- predict(log_remove, newdata = test, type="response")
log_remove_emb_class <- glm(Survived ~ Age + Sex, family = binomial(link=probit), data=train)
pred_Emb_class<- predict(log_remove_emb_class, newdata = test, type="response")
auc(test$Survived, pred_remove_Embark)
auc(test$Survived, pred_Emb_sex)
auc(test$Survived, pred_Emb_age)
auc(test$Survived, pred_Emb_class)
#We get the best AUC by removing Embarked and Age
#Use the whole training dataset to get more accurate stats
log_model <- glm(Survived ~ Pclass + Sex + Age, family = binomial(link=probit), data=train_clean)
pred <- predict(log_model, newdata = test_clean, type="response")
###Build for submission
submission <- data.frame(PassengerId = test_clean$PassengerId)
#Set the cutoff
submission$Survived <- ifelse(pred>0.37, 1, 0)
head(submission)
#Write to csv for upload
write.csv(submission, file = "titanic_submission.csv", row.names = FALSE)
############### Estimating Performance measures and cutoffs for logistic regressions
library(ROCR)
#https://www.r-bloggers.com/a-small-introduction-to-the-rocr-package/
#Should look into it further
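#A minimal ROCR sketch (re-uses the probit model fit above; cutoff exploration is illustrative)
rocr_probs <- predict(log_probit, newdata = test, type = "response")
rocr_pred <- prediction(rocr_probs, test$Survived)
plot(performance(rocr_pred, "tpr", "fpr"), colorize = TRUE) #ROC curve, colored by cutoff
performance(rocr_pred, "auc")@y.values[[1]] #area under the curve
plot(performance(rocr_pred, "acc")) #accuracy as a function of cutoff, to pick a threshold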
######################################################################
####################### Support Vector Machines #####################
######################################################################
library(kernlab)
svp <- ksvm(train$Sex,train$Survived,type="C-svc",kernel="vanilladot",C=100,scaled=c())
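#ksvm on a single factor column won't learn much; a sketch with the formula interface
#(kernel and C here are illustrative assumptions, not tuned values)
svp2 <- ksvm(as.factor(Survived) ~ Pclass + Sex + Age, data = train,
             type = "C-svc", kernel = "rbfdot", C = 10)
svm_pred <- predict(svp2, newdata = test)
mean(as.character(svm_pred) == as.character(test$Survived)) #rough accuracy on the internal split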
######################################################################
####################### k-means clustering ##########################
######################################################################
#This really is not the best method for determining what is going on here
#We know there are two groups, but this method is figuring out how many groups there are
set.seed(42)
#Have to restrict the data down to just the explainers
a <- train %>%
mutate(Male = ifelse(Sex == "male", 1, 0)) %>%
select(Pclass, Male, Age, SibSp, Parch)
#use kmeans() to group into 2 groups
km <- kmeans(a, centers = 2, nstart=20)
km
km_conf <- table(km$cluster, train$Survived)
sum(diag(km_conf)) / sum (km_conf)
#Doesn't provide a great accuracy
#Create a scree plot to see how many groups there are
# Initialise ratio_ss
ratio_ss <- rep(0, 7)
#Write the for loop depending on k.
for (k in 1:7) {
# Apply k-means to a
km <- kmeans(a, k, nstart = 20)
# Save the ratio between of WSS to TSS in kth element of ratio_ss
ratio_ss[k] <- km$tot.withinss / km$totss
}
# Make a scree plot with type "b" and xlab "k"
plot(ratio_ss, type = "b", xlab = "k")
#Plot shows there are 3 or 4 groups in the data
#Maybe I take those three or four groups and assign it as a classifier?
#lets try that now on a random forest
######################################################################
################## Clustering as a Classifier ########################
######################################################################
#Set at 4 groups because that's what the scree plot suggested
km <- kmeans(a, centers = 4, nstart=20)
#cbind back with the train data
train_clustered <- cbind(train,Cluster= km$cluster)
#create the groups in the test data
library(clue)
b <- test %>%
mutate(Male = ifelse(Sex == "male", 1, 0)) %>%
select(Pclass, Male, Age, SibSp, Parch)
km_predict <- cl_predict(km, newdata = b, type = "class_ids")
test_clustered <- cbind(test, Cluster = as.vector(km_predict))
#Run the random forest model
#Created the forest model object
forest_model <- randomForest(as.factor(Survived) ~ Pclass + Age + Title + Sex + Embarked + Cluster,
data = train_clustered,
importance = T,
ntree= 1000)
rf_prediction <- predict(forest_model, newdata=test_clustered)
#look at the plot to see what is more important
varImpPlot(forest_model)
#Building the submission
conf_rf <- table(test$Survived, rf_prediction)
conf_rf
acc_rf_cluster <- sum(diag(conf_rf)) / sum(conf_rf)
#Compare to the base rf model
acc_rf
acc_rf_cluster #Shit! it actually improves!
#However was slightly lower on Kaggle - I don't like that it only shows on 1/2 the data
#################################
##########Now predict it with the full data for submission
a <- train_clean %>%
mutate(Male = ifelse(Sex == "male", 1, 0)) %>%
select(Pclass, Male, Age, SibSp, Parch)
ratio_ss <- rep(0, 7)
for (k in 1:7) {
km <- kmeans(a, k, nstart = 20)
ratio_ss[k] <- km$tot.withinss / km$totss
}
plot(ratio_ss, type = "b", xlab = "k")
#Set at 4 groups because that's what the scree plot suggested
km <- kmeans(a, centers = 4, nstart=20)
#cbind back with the train data
train_clustered <- cbind(train_clean,Cluster= km$cluster)
#create the groups in the test data
library(clue)
b <- test_clean %>%
mutate(Male = ifelse(Sex == "male", 1, 0)) %>%
select(Pclass, Male, Age, SibSp, Parch)
km_predict <- cl_predict(km, newdata = b, type = "class_ids")
test_clustered <- cbind(test_clean, Cluster = as.vector(km_predict))
#Run the random forest model
forest_model <- randomForest(as.factor(Survived) ~ Pclass + Age + Title + Sex + Embarked + Cluster,
data = train_clustered,
importance = T,
ntree= 1000)
rf_prediction <- predict(forest_model, newdata=test_clustered)
######################################################################
################ Using the "caret" package ##########################
######################################################################
library(caret)
#using caret to fit a random forest (method = "rf")
rfFit <- train(as.factor(Survived) ~ Pclass + Age + Title + Sex + Embarked,
data = train,
method = "rf")
caret_predict <- predict(rfFit, newdata = test) #6 are classified differently from above, likely because caret tunes parameters differently
rpartFit <- train(Survived ~ Pclass + Age + Title + Sex + Embarked,
data = train,
method = "rpart")
prediction <- predict(rpartFit, newdata = test)
tree <- rpart(Survived ~ Pclass + Sex + Age + Parch + Fare + Embarked, data = train, method = "class")
plot(tree, uniform = T)
text(tree)
#predict values
tree_predict <- predict(tree, newdata = test, type = "class")
#Confusion Matrix -
conf_tree <- table(test$Survived, tree_predict)
conf_tree
acc_tree <- sum(diag(conf_tree)) / sum(conf_tree)
acc_tree
######################################################################
################################# BMA for predictions ################
######################################################################
library(BMA)
train_subset <- select(train, Pclass, Sex, Age, SibSp, Parch, Fare, Embarked, Title)
bma_model <- bicreg(train_subset,train$Survived)
bma_predictions <- predict(bma_model, newdata = test)
#specify cutoff
summary(bma_predictions$mean) #mean value is about 0.37
bma_predict <- ifelse(bma_predictions$mean>0.45, 1, 0) #but a cutoff of 0.45 performed better
conf_tree <- table(test$Survived, bma_predict)
conf_tree
acc_tree <- sum(diag(conf_tree)) / sum(conf_tree)
acc_tree
#Build for submission now on the full data using the same cutoff point
train_subset <- select(train_clean, Pclass, Sex, Age, SibSp, Parch, Fare, Embarked, Title)
bma_model <- bicreg(train_subset,train_clean$Survived)
bma_predictions <- predict(bma_model, newdata = test_clean)
imageplot.bma(bma_model) #can see that it breaks almost all the variables into dummies
summary(bma_predictions$mean) #summary looks basically the same as above, now cutoff the values
bma_predict <- ifelse(bma_predictions$mean>0.45, 1, 0)
#I am really surprised by the effectiveness of BMA. It actually did a pretty decent job of fitting the observations
######################################################################
########################## Ensembling Predictions ####################
######################################################################
#idea came from: http://rpubs.com/Vincent/Titanic
#And more applications can be found here: http://mlwave.com/kaggle-ensembling-guide/
#And library(caretEnsemble): https://cran.r-project.org/web/packages/caretEnsemble/vignettes/caretEnsemble-intro.html
#simple weighting scheme: average the individual 0/1 predictions and round to a majority vote
#(predict_1/2/3 are placeholders; factor predictions convert to 1/2 with as.numeric(), hence the -1)
ensemble <- (as.numeric(predict_1)-1) + (as.numeric(predict_2)-1) + (as.numeric(predict_3)-1)
#then average the estimates
ensemble <- sapply(ensemble/3, round) #and here is the output
######################################################################
################## Feature Selection - Boruta ########################
######################################################################
#file:///home/chase/Downloads/v36i11.pdf
#Check the importance of the variables through this Boruta process
#Creates shadow variables to test against; important variables should score higher than the most important shadow
library(Boruta)
borutaTest <- Boruta(Survived ~ ., data=train, doTrace=1, ntree=500) #running the Boruta algorithm
#Can take quite a bit of time if the dataframe is large
borutaTest
plot(borutaTest)
attStats(borutaTest) #Z-score statistics and the fraction of RF runs where the variable beat the best shadow
borutaTest2 <- Boruta(Survived ~ ., data=train, doTrace=1,maxRuns=100)
plot(borutaTest2)
#############
######################################################################
################## Build file for submission #########################
######################################################################
#"prediction" above was made on the internal test split; re-predict on the Kaggle test set for submission
submission <- data.frame(PassengerId = test_clean$PassengerId, Survived = round(predict(rpartFit, newdata = test_clean)))
head(submission)
write.csv(submission, file = "titanic_submission.csv", row.names = FALSE)
#Source: sireeshapulipati/ML_Learning, file /titanic.r (R, no license, 19,855 bytes)
#titanic
#on Home
setwd("/home/chase/Dropbox/R Code/ML")
#on Laptop
setwd("/home/chasedehan/Dropbox/R Code/ML")
#on Work
setwd("C:/Users/cdehan/Dropbox/R Code/ML")
library(dplyr)
library(rpart) #recursive partitioning and regression trees
######################################################################
############## Getting data into the appropriate form ################
######################################################################
train <- read.csv("titanic_train.csv", stringsAsFactors=FALSE)
test <- read.csv("titanic_test.csv", stringsAsFactors=FALSE)
##########Clean up the data ###########################################
#Need to merge the training and test data together, but test doesn't have a "Survived" column, so we create it
test$Survived <- NA
#merge test and train
all_data <- data.frame(rbind(train, test))
#We can also split the title out of the name field, because there is more information contained in there
all_data$Name[1]
#convert strings from factors to character
all_data$Name <- as.character(all_data$Name)
#splitting the string apart on the comma and period because all the cells have these
#[[1]] because it actually returns a list and we want the first element
strsplit(all_data$Name[1], split = '[,.]')[[1]]
#we want the title, which comes right after the last name, and is the second element
strsplit(all_data$Name[1], split = '[,.]')[[1]][2]
#apply to every observation in dataframe with sapply
all_data$Title <- sapply(all_data$Name, FUN=function(x) {strsplit(x, split='[,.]')[[1]][2]})
#insert the embark
all_data$Embarked[which(all_data$Embarked == "")] <- "S"
#mess with the fare
all_data$Fare[which(is.na(all_data$Fare))] <-median(all_data$Fare, na.rm=T)
#Predict the age and reassign the values
predicted_age <- rpart(Age ~ Pclass + Sex + SibSp + Parch + Embarked + Title, data = all_data[!is.na(all_data$Age),], method = "anova")
all_data$Age[is.na(all_data$Age)] <- predict(predicted_age, newdata = all_data[is.na(all_data$Age),] )
#convert the string variables back into factors so randomForest can process them
all_data$Title <- as.factor(all_data$Title)
all_data$Sex <- as.factor(all_data$Sex)
all_data$Embarked <- as.factor(all_data$Embarked)
#split back into test and train data
test_clean <- all_data[is.na(all_data$Survived),]
train_clean <- all_data[!is.na(all_data$Survived),]
#Creating the training and test data sets for the data because we don't have labels for the training set
# Set seed of 567 for reproducibility
set.seed(567)
# Store row numbers for training set: index_train
index_train <- sample(1:nrow(train_clean), 2/3 * nrow(train_clean))
# Create training set: training_set
train <- train_clean[index_train, ]
# Create test set: test_set
test <- train_clean[-index_train,]
######################################################################
###################### k-fold validation #############################
######################################################################
#see far down below for the use of the caret package
#can use cv.glm() for glm models in library(boot), which is the bootstrap library
#Code by hand doesn't quite work yet, but will at some point
#Randomly shuffle the data
train_shuffle<-train_clean[sample(nrow(train_clean)),]
#Create 10 equally size folds, assigning a group value to each value in the vector
#Can change "breaks=" to however many breaks we want
n_folds <- 10
folds <- cut(seq(1,nrow(train_shuffle)),breaks=n_folds,labels=FALSE)
#Perform 10 fold cross validation
#Change the 10 here it necessary
cv_tmp <- matrix(NA, nrow = n_folds, ncol = length(train_shuffle))
for(i in 1:n_folds){
#Segement your data by fold using the which() function
testIndexes <- which(folds==i,arr.ind=TRUE)
testData <- train_shuffle[testIndexes, ]
trainData <- train_shuffle[-testIndexes, ]
x<- trainData$Sex #or whatever the classifiers are
y <- trainData$Survived
fitted_model < - lm() #or whatever model
x <- testData$Sex #same classifiers as above
y <- testData$Survived
pred <- predict()
cv_tmp[k, ] <- sapply(as.list(data.frame(pred)), function(y_hat) mean((y - y_hat)^2))
}
cv <- colMeans(cv_tmp)
######################################################################
############## Decision Tree Methodology #############################
######################################################################
#A basic decision tree methodology
tree <- rpart(Survived ~ Pclass + Sex + Age + Parch + Fare + Embarked, data = train, method = "class")
plot(tree, uniform = T)
text(tree)
#predict values
tree_predict <- predict(tree, newdata = test, type = "class")
#Confusion Matrix -
conf_tree <- table(test$Survived, tree_predict)
conf_tree
acc_tree <- sum(diag(conf_tree)) / sum(conf_tree)
acc_tree
###Build for submission
submission <- data.frame(PassengerId = test$PassengerId, Survived = test$Survived)
#Write to csv for upload
write.csv(submission, file = "titanic_submission.csv", row.names = FALSE)
##############3
#fancy plot
library(rattle) #It has been a while back, but I believe I had to install libgtk2.0-dev
#enhanced tree plots
library(rpart.plot)
prp(tree)
fancyRpartPlot(tree)
#More color selection for fancy tree plots
library(RColorBrewer)
######################################################################
############ Random Forest Model Datacamp ##########################
######################################################################
#Handles the overfitting we saw with decision trees. Basically, the model overfits the decision trees
#randomly and has the majority vote "win" the outcome.
library(randomForest)
#Created the forest model object
forest_model <- randomForest(as.factor(Survived) ~ Pclass + Age + Title + Sex + Embarked,
data = train,
importance = T,
ntree= 1000)
rf_prediction <- predict(forest_model, newdata=test)
#look at the plot to see what is more important
varImpPlot(forest_model)
#Building the submission
conf_rf <- table(test$Survived, rf_prediction)
conf_rf
acc_rf <- sum(diag(conf_rf)) / sum(conf_rf)
acc_rf
#67%, which is not as good as the one above.
#We need to work on some diagnostics and model fitting
#moved up to 0.78469 when inserted Sex, Title, and Embarked to the model
######################################################################
################ Logistic Regression Model ##########################
######################################################################
#library(glmnet) has a number of glm applications, including lasso, ridge, etc. Link here: https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html
#Logisitic Regression, and comparing logit, probit, cloglog
#Build the glm() model with family="binomial"
log_model <- glm(Survived ~ Pclass + Age + Sex + Embarked, family = binomial(link=logit), data=train)
library(modelr)
train <- add_residuals(train, log_model, var = "resid")
ggplot(train, aes(Survived, resid)) +
geom_point()
log_probit <- glm(Survived ~ Pclass + Age + Sex + Embarked, family = binomial(link=probit), data=train)
log_clog <- glm(Survived ~ Pclass + Age + Sex + Embarked, family = binomial(link=cloglog), data=train)
#Had to take Title out of the above model because there were titles in test, not present in train
#Predict probability of survival
log_predict <- predict(log_model, newdata = test, type = "response")
probit_predict <- predict(log_probit, newdata = test, type = "response")
clog_predict <- predict(log_clog, newdata = test, type = "response")
#look at the prediction values because we have to choose a cutoff
range(log_predict)
summary(log_predict)
#setting cutoff value at the mean value 0.37
log_predict <- ifelse(log_predict>0.37, 1, 0)
#then confusion matrix to compare accuracies
conf<- table(test$Survived, log_predict)
acc_logit <- sum(diag(conf)) / nrow(test)
#and doing it for the others
probit_predict <- ifelse(probit_predict>0.37, 1, 0)
conf<- table(test$Survived, probit_predict)
acc_probit <- sum(diag(conf)) / nrow(test)
clog_predict <- ifelse(clog_predict>0.37, 1, 0)
conf<- table(test$Survived, clog_predict)
acc_cloglog <- sum(diag(conf)) / nrow(test)
acc_logit
acc_probit #Scores highest, but barely
acc_cloglog
#Comparing with ROC curve
library(pROC)
auc(test$Survived, log_predict)
auc(test$Survived, probit_predict) #does best here
auc(test$Survived, clog_predict)
#It appears as though the probit provides the best results from Accuracy and AUC, so we should get better results
#We should also start removing some of the observations to see if we get more informative results
log_remove <- glm(Survived ~ Age + Sex + Embarked, family = binomial(link=probit), data=train)
pred_remove_Pclass <- predict(log_remove, newdata = test, type="response")
log_remove <- glm(Survived ~ Pclass + Sex + Embarked, family = binomial(link=probit), data=train)
pred_remove_Age<- predict(log_remove, newdata = test, type="response")
log_remove <- glm(Survived ~ Pclass + Age + Embarked, family = binomial(link=probit), data=train)
pred_remove_Sex<- predict(log_remove, newdata = test, type="response")
log_remove_embark <- glm(Survived ~ Pclass + Age + Sex, family = binomial(link=probit), data=train)
pred_remove_Embark<- predict(log_remove, newdata = test, type="response")
#check the auc
auc(test$Survived, probit_predict)
auc(test$Survived, pred_remove_Sex)
auc(test$Survived, pred_remove_Embark)
auc(test$Survived, pred_remove_Age) #Really interesting that age removal results in higher AUC
auc(test$Survived, pred_remove_Pclass)
#We remove Embark and test with removing something else
log_remove <- glm(Survived ~ Pclass + Sex, family = binomial(link=probit), data=train)
pred_Emb_age<- predict(log_remove, newdata = test, type="response")
log_remove <- glm(Survived ~ Pclass + Age, family = binomial(link=probit), data=train)
pred_Emb_sex<- predict(log_remove, newdata = test, type="response")
log_remove_emb_class <- glm(Survived ~ Age + Sex, family = binomial(link=probit), data=train)
pred_Emb_class<- predict(log_remove_emb_class, newdata = test, type="response")
auc(test$Survived, pred_remove_Embark)
auc(test$Survived, pred_Emb_sex)
auc(test$Survived, pred_Emb_age)
auc(test$Survived, pred_Emb_class)
#We get the best AUC by removing Embarked and Age
#Use the whole training dataset to get more accurate stats
log_model <- glm(Survived ~ Pclass + Sex + Age, family = binomial(link=probit), data=train_clean)
pred <- predict(log_model, newdata = test_clean, type="response")
###Build for submission
submission <- data.frame(PassengerId = test_clean$PassengerId)
#Set the cutoff
submission$Survived <- ifelse(pred>0.37, 1, 0)
head(submission)
#Write to csv for upload
write.csv(submission, file = "titanic_submission.csv", row.names = FALSE)
############### Estimating Performance measures and cutoffs for logistic regressions
library(ROCR)
#https://www.r-bloggers.com/a-small-introduction-to-the-rocr-package/
#Should look into it further
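#A quick ROCR sketch (hedged: reuses the probability vector pred_remove_Embark
#and test$Survived from above; prediction() and performance() are ROCR's two
#workhorse functions, and "tpr"/"fpr"/"acc" are built-in measure names)
rocr_pred <- prediction(pred_remove_Embark, test$Survived)
rocr_roc <- performance(rocr_pred, "tpr", "fpr")
plot(rocr_roc)
#accuracy as a function of cutoff, instead of fixing it at 0.37
rocr_acc <- performance(rocr_pred, "acc")
plot(rocr_acc)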
######################################################################
####################### Support Vector Machines #####################
######################################################################
library(kernlab)
svp <- ksvm(train$Sex,train$Survived,type="C-svc",kernel="vanilladot",C=100,scaled=c())
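#The one-column fit above is of limited use; a fuller sketch on a numeric
#design matrix (hedged: reuses the predictors chosen earlier; kernel and C
#values here are illustrative, not tuned)
svp_X <- model.matrix(~ Pclass + Sex + Age, data = train)[, -1]
svp_full <- ksvm(svp_X, as.factor(train$Survived), type = "C-svc",
                 kernel = "rbfdot", C = 1, scaled = TRUE)
svp_full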
######################################################################
####################### k-means clustering ##########################
######################################################################
#k-means is not the ideal method here: we already know there are two classes,
#whereas this method tries to discover how many groups the data contains
set.seed(42)
#Have to restrict the data down to just the explainers
a <- train %>%
mutate(Male = ifelse(Sex == "male", 1, 0)) %>%
select(Pclass, Male, Age, SibSp, Parch)
#use kmeans() to group into 2 groups
km <- kmeans(a, centers = 2, nstart=20)
km
km_conf <- table(km$cluster, train$Survived)
sum(diag(km_conf)) / sum(km_conf)
#Doesn't provide a great accuracy
#Create a scree plot to see how many groups there are
# Initialise ratio_ss
ratio_ss <- rep(0, 7)
#Write the for loop depending on k.
for (k in 1:7) {
# Apply k-means to a
km <- kmeans(a, k, nstart = 20)
  # Save the ratio of WSS to TSS in the kth element of ratio_ss
ratio_ss[k] <- km$tot.withinss / km$totss
}
# Make a scree plot with type "b" and xlab "k"
plot(ratio_ss, type = "b", xlab = "k")
#Plot shows there are 3 or 4 groups in the data
#Maybe take those three or four groups and use the cluster as a feature?
#Let's try that now with a random forest
######################################################################
################## Clustering as a Classifier ########################
######################################################################
#Set at 4 groups because that's what the scree plot suggested
km <- kmeans(a, centers = 4, nstart=20)
#cbind back with the train data
train_clustered <- cbind(train,Cluster= km$cluster)
#create the groups in the test data
library(clue)
b <- test %>%
mutate(Male = ifelse(Sex == "male", 1, 0)) %>%
select(Pclass, Male, Age, SibSp, Parch)
km_predict <- cl_predict(km, newdata = b, type = "class_ids")
test_clustered <- cbind(test, Cluster = as.vector(km_predict))
#Run the random forest model
#Created the forest model object
forest_model <- randomForest(as.factor(Survived) ~ Pclass + Age + Title + Sex + Embarked + Cluster,
data = train_clustered,
importance = T,
ntree= 1000)
rf_prediction <- predict(forest_model, newdata=test_clustered)
#look at the plot to see what is more important
varImpPlot(forest_model)
#Building the submission
conf_rf <- table(test$Survived, rf_prediction)
conf_rf
acc_rf_cluster <- sum(diag(conf_rf)) / sum(conf_rf)
#Compare to the base rf model
acc_rf
acc_rf_cluster #Shit! it actually improves!
#However it scored slightly lower on Kaggle - I don't like that local accuracy is measured on only half the data
#################################
##########Now predict it with the full data for submission
a <- train_clean %>%
mutate(Male = ifelse(Sex == "male", 1, 0)) %>%
select(Pclass, Male, Age, SibSp, Parch)
ratio_ss <- rep(0, 7)
for (k in 1:7) {
km <- kmeans(a, k, nstart = 20)
ratio_ss[k] <- km$tot.withinss / km$totss
}
plot(ratio_ss, type = "b", xlab = "k")
#Set at 4 groups because that's what the scree plot suggested
km <- kmeans(a, centers = 4, nstart=20)
#cbind back with the train data
train_clustered <- cbind(train_clean,Cluster= km$cluster)
#create the groups in the test data
library(clue)
b <- test_clean %>%
mutate(Male = ifelse(Sex == "male", 1, 0)) %>%
select(Pclass, Male, Age, SibSp, Parch)
km_predict <- cl_predict(km, newdata = b, type = "class_ids")
test_clustered <- cbind(test_clean, Cluster = as.vector(km_predict))
#Run the random forest model
forest_model <- randomForest(as.factor(Survived) ~ Pclass + Age + Title + Sex + Embarked + Cluster,
data = train_clustered,
importance = T,
ntree= 1000)
rf_prediction <- predict(forest_model, newdata=test_clustered)
######################################################################
################ Using the "caret" package ##########################
######################################################################
library(caret)
#using caret to fit a random forest (method = "rf")
rfFit <- train(as.factor(Survived) ~ Pclass + Age + Title + Sex + Embarked,
data = train,
method = "rf")
caret_predict <- predict(rfFit, newdata = test) #6 are classified different from above, maybe not specifying the parameters
rpartFit <- train(as.factor(Survived) ~ Pclass + Age + Title + Sex + Embarked,
data = train,
method = "rpart")
prediction <- predict(rpartFit, newdata = test)
tree <- rpart(Survived ~ Pclass + Sex + Age + Parch + Fare + Embarked, data = train, method = "class")
plot(tree, uniform = T)
text(tree)
#predict values
tree_predict <- predict(tree, newdata = test, type = "class")
#Confusion Matrix -
conf_tree <- table(test$Survived, tree_predict)
conf_tree
acc_tree <- sum(diag(conf_tree)) / sum(conf_tree)
acc_tree
######################################################################
################################# BMA for predictions ################
######################################################################
library(BMA)
train_subset <- select(train, Pclass, Sex, Age, SibSp, Parch, Fare, Embarked, Title)
bma_model <- bicreg(train_subset,train$Survived)
bma_predictions <- predict(bma_model, newdata = test)
#specify cutoff
summary(bma_predictions$mean) #inspect the distribution to pick a cutoff
bma_predict <- ifelse(bma_predictions$mean>0.45, 1, 0) #0.45 performed better than the 0.37 used earlier
conf_tree <- table(test$Survived, bma_predict)
conf_tree
acc_tree <- sum(diag(conf_tree)) / sum(conf_tree)
acc_tree
#Build for submission now on the full data using the same cutoff point
train_subset <- select(train_clean, Pclass, Sex, Age, SibSp, Parch, Fare, Embarked, Title)
bma_model <- bicreg(train_subset,train_clean$Survived)
bma_predictions <- predict(bma_model, newdata = test_clean)
imageplot.bma(bma_model) #can see that it breaks almost all the variables into dummies
summary(bma_predictions$mean) #summary looks basically the same as above, now cutoff the values
bma_predict <- ifelse(bma_predictions$mean>0.45, 1, 0)
#I am really surprised by the effectiveness of BMA. It actually did a pretty decent job of fitting the observations
######################################################################
########################## Ensembling Predictions ####################
######################################################################
#idea came from: http://rpubs.com/Vincent/Titanic
#And more applications can be found here: http://mlwave.com/kaggle-ensembling-guide/
#And library(caretEnsemble): https://cran.r-project.org/web/packages/caretEnsemble/vignettes/caretEnsemble-intro.html
#simple weighting scheme where you average the results and see where it falls
#factor predictions coerce to 1/2 under as.numeric(), so subtract 1 from each to recover 0/1 votes
ensemble <- (as.numeric(predict_1) - 1) + (as.numeric(predict_2) - 1) + (as.numeric(predict_3) - 1)
#then average the estimates
ensemble <- round(ensemble/3) #and here is the output
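#The same ensemble written as an explicit majority vote (hedged: assumes the
#three factor predictions coerce to 1/2 under as.numeric(), hence the "- 1")
votes <- (as.numeric(predict_1) - 1) + (as.numeric(predict_2) - 1) +
  (as.numeric(predict_3) - 1)
ensemble_vote <- as.numeric(votes >= 2) #1 if at least two of the three models say survived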
######################################################################
################## Feature Selection - Boruta ########################
######################################################################
#file:///home/chase/Downloads/v36i11.pdf
#Check the importance of the variables through this Boruta process
#Creates shadow variables to test against. Important variables should be more important than the most important shadow
library(Boruta)
borutaTest <- Boruta(Survived ~ ., data=train, doTrace=1, ntree=500) #running the Boruta algorithm
#Can take quite a bit of time if the dataframe is large
borutaTest
plot(borutaTest)
attStats(borutaTest) #Shows the Z-score statistics and the fraction of RF runs that it was more important than the most important shadow run
borutaTest2 <- Boruta(Survived ~ ., data=train, doTrace=1,maxRuns=100)
plot(borutaTest2)
#############
######################################################################
################## Build file for submission #########################
######################################################################
submission <- data.frame(PassengerId = test_clean$PassengerId, Survived = prediction)
head(submission)
write.csv(submission, file = "titanic_submission.csv", row.names = FALSE)
#TickerSector has tickers and sector information for all stocks
#masterdata has time series data for all stocks
#Function to calculate price ratios of 2 stocks
l.pr <- function(D){
x.prices<- D[,1]
y.prices<- D[,2]
xyratio <- x.prices/y.prices
xyratio.mean <- mean(xyratio)
xyratio.sd<- sd(xyratio)
return(list(xyratio.sd,xyratio.mean))
}
#######################################################################################
############Calculating std deviation of price ratios for pairs in Healthcare###########
HClabels<- sapply(Healthcare,function(x){ paste(x,".Adjusted",sep="")})
masterdata.HC<- masterdata[,HClabels]
PR.df.healthcare<-data.frame(rep(NA,5050),rep(NA,5050),rep(0,5050),rep(0,5050))
colnames(PR.df.healthcare)<- c('Stock1','Stock2','SD of PR','Mean of PR')
k=1
for(i in 1:101){
for(j in i:101){
if (i != j){
pairprices <- masterdata.HC[,c(i,j)]
pairprices <- na.omit(pairprices)
l.output<-l.pr(pairprices)
PR.df.healthcare[k,1]<- Healthcare[i,1]
PR.df.healthcare[k,2]<- Healthcare[j,1]
PR.df.healthcare[k,3]<- l.output[[1]]
PR.df.healthcare[k,4]<- l.output[[2]]
k=k+1
}
}
}
#Calculate coefficient of variation
PR.df.healthcare$COV <- PR.df.healthcare$'SD of PR'/PR.df.healthcare$'Mean of PR'
PR.df.healthcare<-PR.df.healthcare[order(PR.df.healthcare$COV),]
write.csv(PR.df.healthcare,"HealthcarePairs.csv")
#########################################################################################
#############Calculating std deviation of price ratios for pairs in Energy############
Energylabels<- sapply(Energy,function(x){ paste(x,".Adjusted",sep="")})
masterdata.Energy<- masterdata[,Energylabels]
PR.df.Energy<-data.frame(rep(NA,3240),rep(NA,3240),rep(0,3240),rep(0,3240))
colnames(PR.df.Energy)<- c('Stock1','Stock2','SD of PR','Mean of PR')
k=1
for(i in 1:81){
for(j in i:81){
if (i != j){
pairprices <- masterdata.Energy[,c(i,j)]
pairprices <- na.omit(pairprices)
l.output<-l.pr(pairprices)
PR.df.Energy[k,1]<- Energy[i,1]
PR.df.Energy[k,2]<- Energy[j,1]
PR.df.Energy[k,3]<- l.output[[1]]
PR.df.Energy[k,4]<- l.output[[2]]
k=k+1
}
}
}
#Calculate coefficient of variation
PR.df.Energy$COV <- PR.df.Energy$'SD of PR'/PR.df.Energy$'Mean of PR'
PR.df.Energy<-PR.df.Energy[order(PR.df.Energy$COV),]
write.csv(PR.df.Energy,"EnergyPairs.csv")
# Testing for PSX and SSE
pairprices.eg <- masterdata[,c('PSX.Adjusted','SSE.Adjusted')]
pairprices.eg <- na.omit(pairprices.eg)
l.pr.eg <- function(D){
x.prices<- D[,1]
y.prices<- D[,2]
xyratio <- x.prices/y.prices
xyratio.mean <- mean(xyratio)
xyratio.sd<- sd(xyratio)
return(list(xyratio.sd,xyratio.mean,xyratio))
}
temp<-l.pr.eg(pairprices.eg)
temp[[3]]
##########################################################################################
#########Calculating std deviation of price ratios for pairs in Materials and Procurement######
MaPlabels<- sapply(MaP,function(x){ paste(x,".Adjusted",sep="")})
masterdata.MaP<- masterdata[,MaPlabels]
PR.df.MaP<-data.frame(rep(NA,2775),rep(NA,2775),rep(0,2775),rep(0,2775 ))
colnames(PR.df.MaP)<- c('Stock1','Stock2','SD of PR','Mean of PR')
k=1
for(i in 1:75){
for(j in i:75){
if (i != j){
pairprices <- masterdata.MaP[,c(i,j)]
pairprices <- na.omit(pairprices)
l.output<-l.pr(pairprices)
PR.df.MaP[k,1]<- MaP[i,1]
PR.df.MaP[k,2]<- MaP[j,1]
PR.df.MaP[k,3]<- l.output[[1]]
PR.df.MaP[k,4]<- l.output[[2]]
k=k+1
}
}
}
#Calculate coefficient of variation
PR.df.MaP$COV <- PR.df.MaP$'SD of PR'/PR.df.MaP$'Mean of PR'
PR.df.MaP<-PR.df.MaP[order(PR.df.MaP$COV),]
write.csv(PR.df.MaP,"Materials&ProcPairs.csv")
#######################################################################################
#########Calculating std deviation of price ratios for pairs in Producer Durables#########
PDlabels<- sapply(PD,function(x){ paste(x,".Adjusted",sep="")})
masterdata.PD<- masterdata[,PDlabels]
PR.df.PD<-data.frame(rep(NA,9045),rep(NA,9045),rep(0,9045),rep(0,9045))
colnames(PR.df.PD)<- c('Stock1','Stock2','SD of PR','Mean of PR')
k=1
for(i in 1:135){
for(j in i:135){
if (i != j){
pairprices <- masterdata.PD[,c(i,j)]
pairprices <- na.omit(pairprices)
l.output<-l.pr(pairprices)
PR.df.PD[k,1]<- PD[i,1]
PR.df.PD[k,2]<- PD[j,1]
PR.df.PD[k,3]<- l.output[[1]]
PR.df.PD[k,4]<- l.output[[2]]
k=k+1
}
}
}
#Calculate coefficient of variation
PR.df.PD$COV <- PR.df.PD$'SD of PR'/PR.df.PD$'Mean of PR'
PR.df.PD<-PR.df.PD[order(PR.df.PD$COV),]
write.csv(PR.df.PD,"ProducerDurablePairs.csv")
#########################################################################################
####Calculating std deviation of price ratios for pairs in Consumer Discretionary########
CDlabels<- sapply(CD,function(x){ paste(x,".Adjusted",sep="")})
masterdata.CD<- masterdata[,CDlabels]
PR.df.CD<-data.frame(rep(NA,16110),rep(NA,16110),rep(0,16110),rep(0,16110))
colnames(PR.df.CD)<- c('Stock1','Stock2','SD of PR','Mean of PR')
k=1
for(i in 1:180){
for(j in i:180){
if (i != j){
pairprices <- masterdata.CD[,c(i,j)]
pairprices <- na.omit(pairprices)
l.output<-l.pr(pairprices)
PR.df.CD[k,1]<- CD[i,1]
PR.df.CD[k,2]<- CD[j,1]
PR.df.CD[k,3]<- l.output[[1]]
PR.df.CD[k,4]<- l.output[[2]]
k=k+1
}
}
}
#Calculate coefficient of variation
PR.df.CD$COV <- PR.df.CD$'SD of PR'/PR.df.CD$'Mean of PR'
PR.df.CD<-PR.df.CD[order(PR.df.CD$COV),]
write.csv(PR.df.CD,"ConsumerDiscretionaryPairs.csv")
########################################################################################
####Calculating std deviation of price ratios for pairs in Technology##################
Techlabels<- sapply(Tech,function(x){ paste(x,".Adjusted",sep="")})
masterdata.Tech<- masterdata[,Techlabels]
PR.df.Tech<-data.frame(rep(NA,6903),rep(NA,6903),rep(0,6903),rep(0,6903))
colnames(PR.df.Tech)<- c('Stock1','Stock2','SD of PR','Mean of PR')
k=1
for(i in 1:118){
for(j in i:118){
if (i != j){
pairprices <- masterdata.Tech[,c(i,j)]
pairprices <- na.omit(pairprices)
l.output<-l.pr(pairprices)
PR.df.Tech[k,1]<- Tech[i,1]
PR.df.Tech[k,2]<- Tech[j,1]
PR.df.Tech[k,3]<- l.output[[1]]
PR.df.Tech[k,4]<- l.output[[2]]
k=k+1
}
}
}
#Calculate coefficient of variation
PR.df.Tech$COV <- PR.df.Tech$'SD of PR'/PR.df.Tech$'Mean of PR'
PR.df.Tech<-PR.df.Tech[order(PR.df.Tech$COV),]
write.csv(PR.df.Tech,"TechPairs.csv")
########################################################################################
####Calculating std deviation of price ratios for pairs in Financials##################
Finlabels<- sapply(Fin,function(x){ paste(x,".Adjusted",sep="")})
masterdata.Fin<- masterdata[,Finlabels]
PR.df.Fin<-data.frame(rep(NA,24976),rep(NA,24976),rep(0,24976),rep(0,24976))
colnames(PR.df.Fin)<- c('Stock1','Stock2','SD of PR','Mean of PR')
k=1
for(i in 1:224){
for(j in i:224){
if (i != j){
pairprices <- masterdata.Fin[,c(i,j)]
pairprices <- na.omit(pairprices)
l.output<-l.pr(pairprices)
PR.df.Fin[k,1]<- Fin[i,1]
PR.df.Fin[k,2]<- Fin[j,1]
PR.df.Fin[k,3]<- l.output[[1]]
PR.df.Fin[k,4]<- l.output[[2]]
k=k+1
}
}
}
#Calculate coefficient of variation
PR.df.Fin$COV <- PR.df.Fin$'SD of PR'/PR.df.Fin$'Mean of PR'
PR.df.Fin<-PR.df.Fin[order(PR.df.Fin$COV),]
write.csv(PR.df.Fin,"FinancialsPairs.csv")
###########################################################################################
#######Calculating std deviation of price ratios for pairs in Consumer Staple##################
CSlabels<- sapply(CS,function(x){ paste(x,".Adjusted",sep="")})
masterdata.CS<- masterdata[,CSlabels]
PR.df.CS<-data.frame(rep(NA,1176),rep(NA,1176),rep(0,1176),rep(0,1176))
colnames(PR.df.CS)<- c('Stock1','Stock2','SD of PR','Mean of PR')
k=1
for(i in 1:49){
for(j in i:49){
if (i != j){
pairprices <- masterdata.CS[,c(i,j)]
pairprices <- na.omit(pairprices)
l.output<-l.pr(pairprices)
PR.df.CS[k,1]<- CS[i,1]
PR.df.CS[k,2]<- CS[j,1]
PR.df.CS[k,3]<- l.output[[1]]
PR.df.CS[k,4]<- l.output[[2]]
k=k+1
}
}
}
#Calculate coefficient of variation
PR.df.CS$COV <- PR.df.CS$'SD of PR'/PR.df.CS$'Mean of PR'
PR.df.CS<-PR.df.CS[order(PR.df.CS$COV),]
write.csv(PR.df.CS,"ConsumerStaplesPairs.csv")
######################################################################################
#######Calculating std deviation of price ratios for pairs in Utilities##################
Utilitieslabels<- sapply(Utilities,function(x){ paste(x,".Adjusted",sep="")})
masterdata.Utilities<- masterdata[,Utilitieslabels]
PR.df.Utilities<-data.frame(rep(NA,1540),rep(NA,1540),rep(0,1540),rep(0,1540))
colnames(PR.df.Utilities)<- c('Stock1','Stock2','SD of PR','Mean of PR')
k=1
for(i in 1:56){
for(j in i:56){
if (i != j){
pairprices <- masterdata.Utilities[,c(i,j)]
pairprices <- na.omit(pairprices)
l.output<-l.pr(pairprices)
PR.df.Utilities[k,1]<- Utilities[i,1]
PR.df.Utilities[k,2]<- Utilities[j,1]
PR.df.Utilities[k,3]<- l.output[[1]]
PR.df.Utilities[k,4]<- l.output[[2]]
k=k+1
}
}
}
#Calculate coefficient of variation
PR.df.Utilities$COV <- PR.df.Utilities$'SD of PR'/PR.df.Utilities$'Mean of PR'
PR.df.Utilities<-PR.df.Utilities[order(PR.df.Utilities$COV),]
write.csv(PR.df.Utilities,"UtilitiesPairs.csv")
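#The nine sector blocks above all repeat the same logic; a generic helper
#(sketch - assumes each sector object, like Healthcare, is a one-column
#data frame of tickers, and that masterdata holds the ".Adjusted" series)
sector_pair_stats <- function(sector, masterdata, outfile) {
  labels <- sapply(sector, function(x) paste(x, ".Adjusted", sep = ""))
  prices <- masterdata[, labels]
  n <- nrow(sector)
  out <- data.frame(Stock1 = character(), Stock2 = character(),
                    SD = numeric(), Mean = numeric(),
                    stringsAsFactors = FALSE)
  k <- 1
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      pairprices <- na.omit(prices[, c(i, j)])
      l.output <- l.pr(pairprices)
      out[k, ] <- list(sector[i, 1], sector[j, 1],
                       l.output[[1]], l.output[[2]])
      k <- k + 1
    }
  }
  out$COV <- out$SD / out$Mean
  out <- out[order(out$COV), ]
  write.csv(out, outfile)
  invisible(out)
}
#e.g. sector_pair_stats(Healthcare, masterdata, "HealthcarePairs.csv")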
# Source: PairsTraiding - price ratios.R (govindgnair23/CQA-Challenge)
\name{deriv.polynomial}
\alias{deriv.polynomial}
\title{Differentiate a Polynomial}
\description{
Calculates the derivative of a univariate polynomial.
}
\usage{
\method{deriv}{polynomial}(expr, \dots)
}
\arguments{
\item{expr}{an object of class \code{"polynomial"}.}
\item{\dots}{further arguments to be passed to or from methods.}
}
\details{
This is a method for the generic function \code{\link{deriv}}.
}
\value{
Derivative of the polynomial.
}
\seealso{
\code{\link{integral.polynomial}},
\code{\link{deriv}}.
}
\examples{
pr <- poly.calc(1:5)
pr
## -120 + 274*x - 225*x^2 + 85*x^3 - 15*x^4 + x^5
deriv(pr)
## 274 - 450*x + 255*x^2 - 60*x^3 + 5*x^4
}
\keyword{symbolmath}
% Source: man/deriv.polynomial.Rd (cran/polynom)
library("caret")
library(corrplot)
library(C50)
library(dummies)
library(gmodels)
library(Metrics)
library(neuralnet)
library(plyr)
library(rpart)
library(tree)
library(e1071)
library(rpart.plot)
library(fastDummies)
################################## Load Files #############################################
x <-
read.csv(
"C:\\Users\\User\\Documents\\Thesis\\Data\\Models\\LeagueELO\\Serie A\\ELO09-10.csv",
stringsAsFactors = FALSE
)
################################# Clean Data ##############################################
odds_cols <- c("B365H","B365D","B365A","BWH","BWD","BWA","GBH","GBD","GBA",
               "IWH","IWD","IWA","LBH","LBD","LBA","SBH","SBD","SBA",
               "WHH","WHD","WHA","VCH","VCD","VCA")
x[odds_cols] <- lapply(x[odds_cols], as.numeric)
x <- na.exclude(x)
################################## Rename Columns #########################################
colnames(x)[1] <- "Season"
################################ Create Dummy Vars ########################################
x <- cbind.data.frame(x, dummy(x$Home))
x <- cbind.data.frame(x, dummy(x$Away))
########################### Remove Cols After Dummy Vars ##################################
x$Home <- NULL
x$Away <- NULL
x$Season <- NULL
x$date <- NULL
##################################### All Bookies #########################################
NNM <- x
set.seed(123)
NNM.rows <- nrow(NNM)
NNM.sample <- sample(NNM.rows, NNM.rows * 0.6)
NN.train <- NNM[NNM.sample, ]
NN.test <- NNM[-NNM.sample, ]
NN = neuralnet(FTR ~ ., NN.train, hidden = 3, linear.output = T)
plot(NN)
comp <- compute(NN, NN.test[-1])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('A', 'D', 'H')[idx]
CrossTable(
idx,
NN.test$FTR,
prop.c = FALSE,
prop.r = FALSE,
prop.chisq = FALSE
)
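#Overall accuracy from the same predictions (sketch: pred and NN.test$FTR come
#from the block above; fixing the factor levels keeps the diagonal aligned
#even if a class is never predicted)
nn_conf <- table(factor(pred, levels = c('A', 'D', 'H')), NN.test$FTR)
sum(diag(nn_conf)) / sum(nn_conf)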
##########################################################################################################
NNM2 <- x[-c(7:27)]
set.seed(123)
NNM2.rows <- nrow(NNM2)
NNM2.sample <- sample(NNM2.rows, NNM2.rows * 0.6)
NN2.train <- NNM2[NNM2.sample, ]
NN2.test <- NNM2[-NNM2.sample, ]
NN2 = neuralnet(FTR ~ ., NN2.train, hidden = 3, linear.output = T)
plot(NN2)
comp <- compute(NN2, NN2.test[-1])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('A', 'D', 'H')[idx]
CrossTable(
idx,
NN2.test$FTR,
prop.c = FALSE,
prop.r = FALSE,
prop.chisq = FALSE
)
##########################################################################################################
NNM3 <- x[-c(4:6, 10:27)]
set.seed(123)
NNM3.rows <- nrow(NNM3)
NNM3.sample <- sample(NNM3.rows, NNM3.rows * 0.6)
NN3.train <- NNM3[NNM3.sample, ]
NN3.test <- NNM3[-NNM3.sample, ]
NN3 <- neuralnet(FTR ~ ., NN3.train, hidden = 3, linear.output = FALSE)
plot(NN3)
comp <- compute(NN3, NN3.test[-1])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('A', 'D', 'H')[idx]
CrossTable(
pred,
NN3.test$FTR,
prop.c = FALSE,
prop.r = FALSE,
prop.chisq = FALSE
)
##########################################################################################################
NNM4 <- x[-c(4:9, 13:27)]
set.seed(123)
NNM4.rows <- nrow(NNM4)
NNM4.sample <- sample(NNM4.rows, NNM4.rows * 0.6)
NN4.train <- NNM4[NNM4.sample, ]
NN4.test <- NNM4[-NNM4.sample, ]
NN4 <- neuralnet(FTR ~ ., NN4.train, hidden = 3, linear.output = FALSE)
plot(NN4)
comp <- compute(NN4, NN4.test[-1])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('A', 'D', 'H')[idx]
CrossTable(
pred,
NN4.test$FTR,
prop.c = FALSE,
prop.r = FALSE,
prop.chisq = FALSE
)
##########################################################################################################
NNM5 <- x[-c(4:12, 16:27)]
set.seed(123)
NNM5.rows <- nrow(NNM5)
NNM5.sample <- sample(NNM5.rows, NNM5.rows * 0.6)
NN5.train <- NNM5[NNM5.sample, ]
NN5.test <- NNM5[-NNM5.sample, ]
NN5 <- neuralnet(FTR ~ ., NN5.train, hidden = 3, linear.output = FALSE)
plot(NN5)
comp <- compute(NN5, NN5.test[-1])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('A', 'D', 'H')[idx]
CrossTable(
pred,
NN5.test$FTR,
prop.c = FALSE,
prop.r = FALSE,
prop.chisq = FALSE
)
##########################################################################################################
NNM6 <- x[-c(4:15, 19:27)]
set.seed(123)
NNM6.rows <- nrow(NNM6)
NNM6.sample <- sample(NNM6.rows, NNM6.rows * 0.6)
NN6.train <- NNM6[NNM6.sample, ]
NN6.test <- NNM6[-NNM6.sample, ]
NN6 <- neuralnet(FTR ~ ., NN6.train, hidden = 3, linear.output = FALSE)
plot(NN6)
comp <- compute(NN6, NN6.test[-1])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('A', 'D', 'H')[idx]
CrossTable(
pred,
NN6.test$FTR,
prop.c = FALSE,
prop.r = FALSE,
prop.chisq = FALSE
)
##########################################################################################################
NNM7 <- x[-c(4:18, 22:27)]
set.seed(123)
NNM7.rows <- nrow(NNM7)
NNM7.sample <- sample(NNM7.rows, NNM7.rows * 0.6)
NN7.train <- NNM7[NNM7.sample, ]
NN7.test <- NNM7[-NNM7.sample, ]
NN7 <- neuralnet(FTR ~ ., NN7.train, hidden = 3, linear.output = FALSE)
plot(NN7)
comp <- compute(NN7, NN7.test[-1])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('A', 'D', 'H')[idx]
CrossTable(
pred,
NN7.test$FTR,
prop.c = FALSE,
prop.r = FALSE,
prop.chisq = FALSE
)
##############################################################################################################
NNM8 <- x[-c(4:21, 24:27)]
set.seed(123)
NNM8.rows <- nrow(NNM8)
NNM8.sample <- sample(NNM8.rows, NNM8.rows * 0.6)
NN8.train <- NNM8[NNM8.sample, ]
NN8.test <- NNM8[-NNM8.sample, ]
NN8 <- neuralnet(FTR ~ ., NN8.train, hidden = 3, linear.output = FALSE)
plot(NN8)
comp <- compute(NN8, NN8.test[-1])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('A', 'D', 'H')[idx]
CrossTable(
pred,
NN8.test$FTR,
prop.c = FALSE,
prop.r = FALSE,
prop.chisq = FALSE
)
###############################################################################################################
NNM9 <- x[-c(4:24)]
set.seed(123)
NNM9.rows <- nrow(NNM9)
NNM9.sample <- sample(NNM9.rows, NNM9.rows * 0.6)
NN9.train <- NNM9[NNM9.sample, ]
NN9.test <- NNM9[-NNM9.sample, ]  # hold out the remaining 40% for testing
NN9 <- neuralnet(FTR ~ ., NN9.train, hidden = 3, linear.output = FALSE)
plot(NN9)
comp <- compute(NN9, NN9.test[-1])
pred.weights <- comp$net.result
idx <- apply(pred.weights, 1, which.max)
pred <- c('A', 'D', 'H')[idx]
CrossTable(
pred,
NN9.test$FTR,
prop.c = FALSE,
prop.r = FALSE,
prop.chisq = FALSE
)
|
/0-Implementation/R Files/NN/League - ELO/Serie A/09-10.R
|
no_license
|
Chanter08/Thesis
|
R
| false
| false
| 7,079
|
r
|
|
### Pathway analysis with SPLS: bootstrapping was not useful here, so this SPLS code omits it
library(predictiveModeling)
library(synapseClient)
library(foreach)  # needed for the %dopar% loop below
synapseLogin("in.sock.jang@sagebase.org","tjsDUD@")
source("/home/ijang/COMPBIO/trunk/users/jang/R5/mySplsModel.R")
source("/home/ijang/COMPBIO/trunk/users/jang/pathway_analysis/preRankedTest.R")
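# The %dopar% loop below needs a registered parallel backend; with none
# registered, foreach falls back to sequential execution with a warning.
# A minimal sketch (not in the original source; the core count is an
# assumption, adjust to the machine):
# library(doMC)
# registerDoMC(cores = 4)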
###################################################
#### Load Molecular Feature Data from Synapse ####
###################################################
id_copyLayer <- "48339"
layer_copy <- loadEntity(id_copyLayer)
eSet_copy <- layer_copy$objects$copySet
id_oncomapLayer <- "48341"
layer_oncomap <- loadEntity(id_oncomapLayer)
eSet_oncomap <- layer_oncomap$objects$oncomapSet
id_exprLayer <- "48344"
layer_expr <- loadEntity(id_exprLayer)
eSet_expr <- layer_expr$objects$exprSet
###################################################
### Load Response Data
###################################################
id_drugLayer <- "48359"
layer_drug <- loadEntity(id_drugLayer)
adf_drug <- layer_drug$objects$responseADF
featureData <- createAggregateFeatureDataSet(list(expr = eSet_expr,
copy = eSet_copy,
mut = eSet_oncomap))
# NA filter for training set
featureData_filtered <- filterNasFromMatrix(featureData, filterBy = "rows")
dataSets_ccle <- createFeatureAndResponseDataList(t(featureData_filtered),adf_drug)
for(kk in 1:ncol(dataSets_ccle$responseData)){
#########################################################################################################
######## Training and Testing data are scaled(normalized) vs. raw(unnormalized) #######################
#########################################################################################################
# data preprocessing for preselecting features
filteredData<-filterPredictiveModelData(dataSets_ccle$featureData,dataSets_ccle$responseData[,kk,drop=FALSE], featureVarianceThreshold = 0.01, corPValThresh = 0.1)
# filteredData<-filterPredictiveModelData(dataSets_ccle$featureData,dataSets_ccle$responseData[,kk,drop=FALSE])
# filtered feature and response data
filteredFeatureData <- t(unique(t(filteredData$featureData)))
filteredResponseData <- filteredData$responseData
## scale these data
filteredFeatureDataScaled <- scale(filteredFeatureData)
filteredResponseDataScaled <- scale(filteredResponseData)
# 5 fold cross validation
etas <- seq(0.1,0.9,0.1)
Ks <- seq(1,10,by =1)
# No bootstrapping to select the most significant feature set
# just utilize the optimal alpha and lambda from cross validation
finalSplsModel <- mySplsModel$new()
finalSplsModel$customTrain(filteredFeatureDataScaled, filteredResponseDataScaled, eta = etas, K = Ks, nfolds = 5)
referenceSet <- finalSplsModel$getCoefficients()
ref<-referenceSet[-1]
names(ref) <- rownames(referenceSet)[-1]
a<-sort(abs(ref), decreasing = TRUE, index.return = TRUE)
# select top 200 genes
referenceGenes<-names(ref)[a$ix[1:200]]
getUniqueGenesFromFeatureNames <- function(featureNames){
genes_copy <- sub("_copy","",featureNames[grep("_copy", featureNames)])
genes_expr <- sub("_expr","",featureNames[grep("_expr", featureNames)])
genes_mut <- sub("_mut","",featureNames[grep("_mut", featureNames)])
geneSet <- union(genes_copy,union(genes_expr, genes_mut))
}
referenceGenes<-getUniqueGenesFromFeatureNames(referenceGenes)
# MSigDB from synapse
mSigDB_annotations <- loadEntity(105363)
mSigDB_symbolID <- loadEntity(105350)
DB<-mSigDB_symbolID$objects$MsigDB_symbolID
allPathways <- mSigDB_annotations$objects$C2$KEGG
# preprocess for FET : make total set
geneAllSetList <-DB$genesets[is.element(DB$geneset.names,allPathways)]
geneAllSet <-c()
for (i in 1:length(geneAllSetList)){
geneAllSet<-union(geneAllSet,geneAllSetList[[i]])
}
analysis <- foreach (i = 1:length(allPathways)) %dopar%{
print(paste("processing", allPathways[i]))
mSigDB_index <- which(DB$geneset.names == allPathways[i])
curPathwayGenes <- DB$genesets[[mSigDB_index]]
a1=paste(curPathwayGenes,"_copy",sep ="")
a2=paste(curPathwayGenes,"_expr",sep ="")
a3=paste(curPathwayGenes,"_mut",sep ="")
geneSet <-union(a1,union(a2,a3))
geneSet <- intersect(geneSet, rownames(referenceSet))
# Here, GSEA part should be taken into consideration
# to make referenceSet
gseaResult <- preRankedTest(abs(ref), geneSet,np = 1000)
Mat2x2 <- mat.or.vec(2,2)
Mat2x2[1,1] <- length(intersect(referenceGenes,geneAllSetList[[i]]))
Mat2x2[2,1] <- length(setdiff(referenceGenes,geneAllSetList[[i]]))
Mat2x2[1,2] <- length(setdiff(geneAllSetList[[i]],referenceGenes))
Mat2x2[2,2] <- length(union(geneAllSet,referenceGenes)) - Mat2x2[1,1]- Mat2x2[1,2]- Mat2x2[2,1]
fetResult<-fisher.test(Mat2x2)
return(list(GSEA = gseaResult, FET = fetResult))
}
filename <- paste("splsFeatureAnalysisWithoutBootstrap_",kk,".Rdata",sep = "")
save(analysis,referenceSet,file=filename)
}
|
/predictiveModel_ccle/Spls/pathwaySpls_woBootstrap.R
|
no_license
|
insockjang/DrugResponse
|
R
| false
| false
| 5,195
|
r
|
|
## For use in R IDE
PoTRA.combN <- function(mydata, genelist, Num.sample.normal, Num.sample.case, Pathway.database, PR.quantile) {
require(BiocGenerics)
require(graph)
require(graphite)
require(igraph)
require(org.Hs.eg.db)
Fishertest<-c()
TheNumOfHubGene.normal<-c()
TheNumOfHubGene.case<-c()
E.normal<-c()
E.case<-c()
length.pathway<-c()
E.union.normal<-c()
E.union.case<-c()
kstest<-c()
pathwaynames <- c()
humanReactome <- pathways("hsapiens", "reactome")
humanBiocarta <- pathways("hsapiens", "biocarta")
humanKEGG <- pathways("hsapiens", "kegg")
for (x in 1:length(Pathway.database)){
print(x)
p0 <-Pathway.database[[x]]
pathwaynames[x] <- p0@title
p <- convertIdentifiers(p0, "entrez")
g<-pathwayGraph(p)
nodelist<-nodes(g)
graph.path<-igraph.from.graphNEL(g)
graph.path<-as.undirected(graph.path)
length.intersect<-length(intersect(unlist(nodelist),unlist(genelist)))
length.pathway[x]<-length.intersect
graph.path<-induced_subgraph(graph.path, as.character(intersect(unlist(nodelist),unlist(genelist))))
if (length.intersect<5){
next
}else{
#collect expression data of genes for a specific pathway across normal and tumor samples.
path<-data.frame(matrix(0,length.intersect,(Num.sample.normal+Num.sample.case)))
a<- c()
for (j in 1:length.intersect){
a[j]<-intersect(unlist(nodelist),unlist(genelist))[j]
path[j,]<-mydata[which(genelist==a[j]),] #collect expression data of genes for a specific pathway across normal and tumor samples.
}
##Construct a gene-gene network for normal samples and calculate PageRank values for each gene in this network.
cor.normal <- apply(path[,1:Num.sample.normal], 1, function(x) { apply(path[,1:Num.sample.normal], 1, function(y) { cor.test(x,y)[[3]] })})
cor.normal<-as.matrix(cor.normal)
cor.normal.adj<-matrix(p.adjust(cor.normal,method="fdr"),length.intersect,length.intersect)
cor.normal.adj[ cor.normal.adj > 0.05 ] <- 0
cor.normal.adj[ is.na(cor.normal.adj)] <- 0
diag(cor.normal.adj) <- 0
colnames(cor.normal.adj)<-intersect(unlist(nodelist),unlist(genelist))
rownames(cor.normal.adj)<-intersect(unlist(nodelist),unlist(genelist))
graph.normal<-graph.adjacency(cor.normal.adj,weighted=TRUE,mode="undirected")
E.normal[x]<-length(E(graph.normal))
PR.normal<-page.rank(graph.normal,direct=FALSE)$vector
graph.union.normal<- intersection(graph.normal,graph.path)
E.union.normal[x]<- length(E(graph.union.normal))
PR.union.normal<-page.rank(graph.union.normal,direct=FALSE)$vector
##Construct a gene-gene network for tumor samples and calculate PageRank values for each gene in this network.
cor.case <- apply(path[,(Num.sample.normal+1):(Num.sample.normal+Num.sample.case)], 1, function(x) { apply(path[,(Num.sample.normal+1):(Num.sample.normal+Num.sample.case)], 1, function(y) { cor.test(x,y)[[3]] })})
cor.case<-as.matrix(cor.case)
cor.case.adj<-matrix(p.adjust(cor.case,method="fdr"),length.intersect,length.intersect)
cor.case.adj[ cor.case.adj > 0.05 ] <- 0
cor.case.adj[ is.na(cor.case.adj)] <- 0
diag(cor.case.adj) <- 0
colnames(cor.case.adj)<-intersect(unlist(nodelist),unlist(genelist))
rownames(cor.case.adj)<-intersect(unlist(nodelist),unlist(genelist))
graph.case<-graph.adjacency(cor.case.adj,weighted=TRUE,mode="undirected")
E.case[x]<-length(E(graph.case))
PR.case<-page.rank(graph.case,direct=FALSE)$vector
graph.union.case<- intersection(graph.case,graph.path)
E.union.case[x]<- length(E(graph.union.case))
PR.union.case<-page.rank(graph.union.case,direct=FALSE)$vector
############################
matrix.HCC<- matrix("NA",2*length.intersect,2)
rownames(matrix.HCC)<-as.character(c(PR.union.normal,PR.union.case))
colnames(matrix.HCC)<-c("Disease_status","PageRank")
matrix.HCC[,1]<-c(rep("Normal",length.intersect), rep("Cancer",length.intersect))
loc.largePR<-which(as.numeric(rownames(matrix.HCC))>=quantile(PR.union.normal,PR.quantile))
loc.smallPR<-which(as.numeric(rownames(matrix.HCC))<quantile(PR.union.normal,PR.quantile))
matrix.HCC[loc.largePR,2]<-"large_PageRank"
matrix.HCC[loc.smallPR,2]<-"small_PageRank"
table.HCC<-list(1,2)
names(table.HCC)<-c("Disease_status","PageRank")
table.HCC$Disease_status<-matrix("NA",2*length.intersect,2)
table.HCC$PageRank<-matrix("NA",2*length.intersect,2)
table.HCC$Disease_status<-matrix.HCC[,1]
table.HCC$PageRank<-matrix.HCC[,2]
cont.HCC<-table(table.HCC$Disease_status,table.HCC$PageRank)
TheNumOfHubGene.normal[x]<-cont.HCC[2]
TheNumOfHubGene.case[x]<-cont.HCC[1]
if (dim(cont.HCC)[1]!=dim(cont.HCC)[2]){
Fishertest[x]<-1
}else{
Fishertest[x]<-fisher.test(cont.HCC)$p.value
}
kstest[x]<-ks.test(PR.union.normal,PR.union.case)$p.value
if (E.union.normal[x]<E.union.case[x]){
Fishertest[x]<-1
kstest[x]<-1
}
############################################
}
}
return(list(Fishertest.p.value=Fishertest,KStest.p.value=kstest,LengthOfPathway=length.pathway,TheNumberOfHubGenes.normal=TheNumOfHubGene.normal,TheNumOfHubGene.case=TheNumOfHubGene.case,TheNumberOfEdges.normal=E.union.normal,TheNumberOfEdges.case=E.union.case,PathwayName=pathwaynames))
}
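# Example call (a sketch, not from the original source): assumes `expr` is a
# genes-by-samples data frame whose first Num.sample.normal columns are the
# normal samples, and `genes` holds the matching Entrez IDs.
# kegg <- graphite::pathways("hsapiens", "kegg")
# res <- PoTRA.combN(mydata = expr, genelist = genes,
#                    Num.sample.normal = 50, Num.sample.case = 50,
#                    Pathway.database = kegg, PR.quantile = 0.95)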
PoTRA.corN <- function(mydata,genelist,Num.sample.normal,Num.sample.case,Pathway.database, PR.quantile) {
require(BiocGenerics)
require(graph)
require(graphite)
require(igraph)
Fishertest<-c()
TheNumOfHubGene.normal<-c()
TheNumOfHubGene.case<-c()
E.normal<-c()
E.case<-c()
length.pathway<-c()
kstest<-c()
pathwaynames <- c()
humanReactome <- pathways("hsapiens", "reactome")
humanBiocarta <- pathways("hsapiens", "biocarta")
humanKEGG <- pathways("hsapiens", "kegg")
for (x in 1:length(Pathway.database)){
print(x)
p0 <-Pathway.database[[x]]
pathwaynames[x] <- p0@title
p <- convertIdentifiers(p0, "entrez")
g<-pathwayGraph(p)
nodelist<-nodes(g)
graph.path<-igraph.from.graphNEL(g)
length.intersect<-length(intersect(unlist(nodelist),unlist(genelist)))
length.pathway[x]<-length.intersect
graph.path<-induced_subgraph(graph.path, as.character(intersect(unlist(nodelist),unlist(genelist))))
if (length.intersect<5){
next
}else{
#collect expression data of genes for a specific pathway across normal and tumor samples.
path<-data.frame(matrix(0,length.intersect,(Num.sample.normal+Num.sample.case)))
a<- c()
for (j in 1:length.intersect){
a[j]<-intersect(unlist(nodelist),unlist(genelist))[j]
path[j,]<-mydata[which(genelist==a[j]),] #collect expression data of genes for a specific pathway across normal and tumor samples.
}
##Construct a gene-gene network for normal samples and calculate PageRank values for each gene in this network.
cor.normal <- apply(path[,1:Num.sample.normal], 1, function(x) { apply(path[,1:Num.sample.normal], 1, function(y) { cor.test(x,y)[[3]] })})
cor.normal<-as.matrix(cor.normal)
cor.normal.adj<-matrix(p.adjust(cor.normal,method="fdr"),length.intersect,length.intersect)
cor.normal.adj[ cor.normal.adj > 0.05 ] <- 0
cor.normal.adj[ is.na(cor.normal.adj)] <- 0
diag(cor.normal.adj) <- 0
graph.normal<-graph.adjacency(cor.normal.adj,weighted=TRUE,mode="undirected")
E.normal[x]<-length(E(graph.normal))
PR.normal<-page.rank(graph.normal,direct=FALSE)$vector
##Construct a gene-gene network for tumor samples and calculate PageRank values for each gene in this network.
cor.case <- apply(path[,(Num.sample.normal+1):(Num.sample.normal+Num.sample.case)], 1, function(x) { apply(path[,(Num.sample.normal+1):(Num.sample.normal+Num.sample.case)], 1, function(y) { cor.test(x,y)[[3]] })})
cor.case<-as.matrix(cor.case)
cor.case.adj<-matrix(p.adjust(cor.case,method="fdr"),length.intersect,length.intersect)
cor.case.adj[ cor.case.adj > 0.05 ] <- 0
cor.case.adj[ is.na(cor.case.adj)] <- 0
diag(cor.case.adj) <- 0
graph.case<-graph.adjacency(cor.case.adj,weighted=TRUE,mode="undirected")
E.case[x]<-length(E(graph.case))
PR.case<-page.rank(graph.case,direct=FALSE)$vector
matrix.HCC<- matrix("NA",2*length.intersect,2)
rownames(matrix.HCC)<-as.character(c(PR.normal,PR.case))
colnames(matrix.HCC)<-c("Disease_status","PageRank")
matrix.HCC[,1]<-c(rep("Normal",length.intersect), rep("Cancer",length.intersect))
loc.largePR<-which(as.numeric(rownames(matrix.HCC))>=quantile(PR.normal,PR.quantile))
loc.smallPR<-which(as.numeric(rownames(matrix.HCC))<quantile(PR.normal,PR.quantile))
matrix.HCC[loc.largePR,2]<-"large_PageRank"
matrix.HCC[loc.smallPR,2]<-"small_PageRank"
table.HCC<-list(1,2)
names(table.HCC)<-c("Disease_status","PageRank")
table.HCC$Disease_status<-matrix("NA",2*length.intersect,2)
table.HCC$PageRank<-matrix("NA",2*length.intersect,2)
table.HCC$Disease_status<-matrix.HCC[,1]
table.HCC$PageRank<-matrix.HCC[,2]
cont.HCC<-table(table.HCC$Disease_status,table.HCC$PageRank)
TheNumOfHubGene.normal[x]<-cont.HCC[2]
TheNumOfHubGene.case[x]<-cont.HCC[1]
if (dim(cont.HCC)[1]!=dim(cont.HCC)[2]){
Fishertest[x]<-1
}else{
Fishertest[x]<-fisher.test(cont.HCC)$p.value
}
kstest[x]<-ks.test(PR.normal,PR.case)$p.value
if (E.normal[x]<E.case[x]){
Fishertest[x]<-1
kstest[x]<-1
}
################################
}
}
return(list(Fishertest.p.value=Fishertest,KStest.p.value=kstest,LengthOfPathway=length.pathway,TheNumberOfHubGenes.normal=TheNumOfHubGene.normal,TheNumOfHubGene.case=TheNumOfHubGene.case,TheNumberOfEdges.normal=E.normal,TheNumberOfEdges.case=E.case,PathwayName=pathwaynames))
}
## the below code doesn't work and will be replaced
overlayplot<-function(mydata,genelist,Num.sample.normal,Num.sample.case,Pathway.database) {
require(BiocGenerics)
require(graph)
require(graphite)
require(igraph)
length.pathway<-c()
humanReactome <- pathways("hsapiens", "reactome")
## Source: /PoTRA.R (repo: DinuLabASU/PoTRA, no license)
## For use in R IDE
PoTRA.combN <- function(mydata, genelist, Num.sample.normal, Num.sample.case, Pathway.database, PR.quantile) {
require(BiocGenerics)
require(graph)
require(graphite)
require(igraph)
require(org.Hs.eg.db)
Fishertest<-c()
TheNumOfHubGene.normal<-c()
TheNumOfHubGene.case<-c()
E.normal<-c()
E.case<-c()
length.pathway<-c()
E.union.normal<-c()
E.union.case<-c()
kstest<-c()
pathwaynames <- c()
humanReactome <- pathways("hsapiens", "reactome")
humanBiocarta <- pathways("hsapiens", "biocarta")
humanKEGG <- pathways("hsapiens", "kegg")
for (x in 1:length(Pathway.database)){
print(x)
p0 <-Pathway.database[[x]]
pathwaynames[x] <- p0@title
p <- convertIdentifiers(p0, "entrez")
g<-pathwayGraph(p)
nodelist<-nodes(g)
graph.path<-igraph.from.graphNEL(g)
graph.path<-as.undirected(graph.path)
length.intersect<-length(intersect(unlist(nodelist),unlist(genelist)))
length.pathway[x]<-length.intersect
graph.path<-induced_subgraph(graph.path, as.character(intersect(unlist(nodelist),unlist(genelist))))
if (length.intersect<5){
next
}else{
#collect expression data of genes for a specific pathway across normal and tumor samples.
path<-data.frame(matrix(0,length.intersect,(Num.sample.normal+Num.sample.case)))
a<- c()
for (j in 1:length.intersect){
a[j]<-intersect(unlist(nodelist),unlist(genelist))[j]
path[j,]<-mydata[which(genelist==a[j]),] #collect expression data of genes for a specific pathway across normal and tumor samples.
}
##Construct a gene-gene network for normal samples and calculate PageRank values for each gene in this network.
cor.normal <- apply(path[,1:Num.sample.normal], 1, function(x) { apply(path[,1:Num.sample.normal], 1, function(y) { cor.test(x,y)[[3]] })})
cor.normal<-as.matrix(cor.normal)
cor.normal.adj<-matrix(p.adjust(cor.normal,method="fdr"),length.intersect,length.intersect)
cor.normal.adj[ cor.normal.adj > 0.05 ] <- 0
cor.normal.adj[ is.na(cor.normal.adj)] <- 0
diag(cor.normal.adj) <- 0
colnames(cor.normal.adj)<-intersect(unlist(nodelist),unlist(genelist))
rownames(cor.normal.adj)<-intersect(unlist(nodelist),unlist(genelist))
graph.normal<-graph.adjacency(cor.normal.adj,weighted=TRUE,mode="undirected")
E.normal[x]<-length(E(graph.normal))
PR.normal <- page.rank(graph.normal, directed = FALSE)$vector
graph.union.normal<- intersection(graph.normal,graph.path)
E.union.normal[x]<- length(E(graph.union.normal))
PR.union.normal <- page.rank(graph.union.normal, directed = FALSE)$vector
##Construct a gene-gene network for tumor samples and calculate PageRank values for each gene in this network.
cor.case <- apply(path[,(Num.sample.normal+1):(Num.sample.normal+Num.sample.case)], 1, function(x) { apply(path[,(Num.sample.normal+1):(Num.sample.normal+Num.sample.case)], 1, function(y) { cor.test(x,y)[[3]] })})
cor.case<-as.matrix(cor.case)
cor.case.adj<-matrix(p.adjust(cor.case,method="fdr"),length.intersect,length.intersect)
cor.case.adj[ cor.case.adj > 0.05 ] <- 0
cor.case.adj[ is.na(cor.case.adj)] <- 0
diag(cor.case.adj) <- 0
colnames(cor.case.adj)<-intersect(unlist(nodelist),unlist(genelist))
rownames(cor.case.adj)<-intersect(unlist(nodelist),unlist(genelist))
graph.case<-graph.adjacency(cor.case.adj,weighted=TRUE,mode="undirected")
E.case[x]<-length(E(graph.case))
PR.case <- page.rank(graph.case, directed = FALSE)$vector
graph.union.case<- intersection(graph.case,graph.path)
E.union.case[x]<- length(E(graph.union.case))
PR.union.case <- page.rank(graph.union.case, directed = FALSE)$vector
############################
matrix.HCC<- matrix("NA",2*length.intersect,2)
rownames(matrix.HCC)<-as.character(c(PR.union.normal,PR.union.case))
colnames(matrix.HCC)<-c("Disease_status","PageRank")
matrix.HCC[,1]<-c(rep("Normal",length.intersect), rep("Cancer",length.intersect))
loc.largePR<-which(as.numeric(rownames(matrix.HCC))>=quantile(PR.union.normal,PR.quantile))
loc.smallPR<-which(as.numeric(rownames(matrix.HCC))<quantile(PR.union.normal,PR.quantile))
matrix.HCC[loc.largePR,2]<-"large_PageRank"
matrix.HCC[loc.smallPR,2]<-"small_PageRank"
table.HCC<-list(1,2)
names(table.HCC)<-c("Disease_status","PageRank")
table.HCC$Disease_status<-matrix("NA",2*length.intersect,2)
table.HCC$PageRank<-matrix("NA",2*length.intersect,2)
table.HCC$Disease_status<-matrix.HCC[,1]
table.HCC$PageRank<-matrix.HCC[,2]
cont.HCC<-table(table.HCC$Disease_status,table.HCC$PageRank)
TheNumOfHubGene.normal[x]<-cont.HCC[2]
TheNumOfHubGene.case[x]<-cont.HCC[1]
if (dim(cont.HCC)[1]!=dim(cont.HCC)[2]){
Fishertest[x]<-1
}else{
Fishertest[x]<-fisher.test(cont.HCC)$p.value
}
kstest[x]<-ks.test(PR.union.normal,PR.union.case)$p.value
if (E.union.normal[x] < E.union.case[x]) {
Fishertest[x] <- 1
kstest[x] <- 1
}
############################################
}
}
return(list(Fishertest.p.value=Fishertest,KStest.p.value=kstest,LengthOfPathway=length.pathway,TheNumberOfHubGenes.normal=TheNumOfHubGene.normal,TheNumOfHubGene.case=TheNumOfHubGene.case,TheNumberOfEdges.normal=E.union.normal,TheNumberOfEdges.case=E.union.case,PathwayName=pathwaynames))
}
PoTRA.corN <- function(mydata,genelist,Num.sample.normal,Num.sample.case,Pathway.database, PR.quantile) {
require(BiocGenerics)
require(graph)
require(graphite)
require(igraph)
Fishertest<-c()
TheNumOfHubGene.normal<-c()
TheNumOfHubGene.case<-c()
E.normal<-c()
E.case<-c()
length.pathway<-c()
kstest<-c()
pathwaynames <- c()
humanReactome <- pathways("hsapiens", "reactome")
humanBiocarta <- pathways("hsapiens", "biocarta")
humanKEGG <- pathways("hsapiens", "kegg")
for (x in 1:length(Pathway.database)){
print(x)
p0 <-Pathway.database[[x]]
pathwaynames[x] <- p0@title
p <- convertIdentifiers(p0, "entrez")
g<-pathwayGraph(p)
nodelist<-nodes(g)
graph.path<-igraph.from.graphNEL(g)
length.intersect<-length(intersect(unlist(nodelist),unlist(genelist)))
length.pathway[x]<-length.intersect
graph.path<-induced_subgraph(graph.path, as.character(intersect(unlist(nodelist),unlist(genelist))))
if (length.intersect<5){
next
}else{
#collect expression data of genes for a specific pathway across normal and tumor samples.
path<-data.frame(matrix(0,length.intersect,(Num.sample.normal+Num.sample.case)))
a<- c()
for (j in 1:length.intersect){
a[j]<-intersect(unlist(nodelist),unlist(genelist))[j]
path[j,]<-mydata[which(genelist==a[j]),] #collect expression data of genes for a specific pathway across normal and tumor samples.
}
##Construct a gene-gene network for normal samples and calculate PageRank values for each gene in this network.
cor.normal <- apply(path[,1:Num.sample.normal], 1, function(x) { apply(path[,1:Num.sample.normal], 1, function(y) { cor.test(x,y)[[3]] })})
cor.normal<-as.matrix(cor.normal)
cor.normal.adj<-matrix(p.adjust(cor.normal,method="fdr"),length.intersect,length.intersect)
cor.normal.adj[ cor.normal.adj > 0.05 ] <- 0
cor.normal.adj[ is.na(cor.normal.adj)] <- 0
diag(cor.normal.adj) <- 0
graph.normal<-graph.adjacency(cor.normal.adj,weighted=TRUE,mode="undirected")
E.normal[x]<-length(E(graph.normal))
PR.normal <- page.rank(graph.normal, directed = FALSE)$vector
##Construct a gene-gene network for tumor samples and calculate PageRank values for each gene in this network.
cor.case <- apply(path[,(Num.sample.normal+1):(Num.sample.normal+Num.sample.case)], 1, function(x) { apply(path[,(Num.sample.normal+1):(Num.sample.normal+Num.sample.case)], 1, function(y) { cor.test(x,y)[[3]] })})
cor.case<-as.matrix(cor.case)
cor.case.adj<-matrix(p.adjust(cor.case,method="fdr"),length.intersect,length.intersect)
cor.case.adj[ cor.case.adj > 0.05 ] <- 0
cor.case.adj[ is.na(cor.case.adj)] <- 0
diag(cor.case.adj) <- 0
graph.case<-graph.adjacency(cor.case.adj,weighted=TRUE,mode="undirected")
E.case[x]<-length(E(graph.case))
PR.case <- page.rank(graph.case, directed = FALSE)$vector
matrix.HCC<- matrix("NA",2*length.intersect,2)
rownames(matrix.HCC)<-as.character(c(PR.normal,PR.case))
colnames(matrix.HCC)<-c("Disease_status","PageRank")
matrix.HCC[,1]<-c(rep("Normal",length.intersect), rep("Cancer",length.intersect))
loc.largePR<-which(as.numeric(rownames(matrix.HCC))>=quantile(PR.normal,PR.quantile))
loc.smallPR<-which(as.numeric(rownames(matrix.HCC))<quantile(PR.normal,PR.quantile))
matrix.HCC[loc.largePR,2]<-"large_PageRank"
matrix.HCC[loc.smallPR,2]<-"small_PageRank"
table.HCC<-list(1,2)
names(table.HCC)<-c("Disease_status","PageRank")
table.HCC$Disease_status<-matrix("NA",2*length.intersect,2)
table.HCC$PageRank<-matrix("NA",2*length.intersect,2)
table.HCC$Disease_status<-matrix.HCC[,1]
table.HCC$PageRank<-matrix.HCC[,2]
cont.HCC<-table(table.HCC$Disease_status,table.HCC$PageRank)
TheNumOfHubGene.normal[x]<-cont.HCC[2]
TheNumOfHubGene.case[x]<-cont.HCC[1]
if (dim(cont.HCC)[1]!=dim(cont.HCC)[2]){
Fishertest[x]<-1
}else{
Fishertest[x]<-fisher.test(cont.HCC)$p.value
}
kstest[x]<-ks.test(PR.normal,PR.case)$p.value
if (E.normal[x] < E.case[x]) {
Fishertest[x] <- 1
kstest[x] <- 1
}
################################
}
}
return(list(Fishertest.p.value=Fishertest,KStest.p.value=kstest,LengthOfPathway=length.pathway,TheNumberOfHubGenes.normal=TheNumOfHubGene.normal,TheNumOfHubGene.case=TheNumOfHubGene.case,TheNumberOfEdges.normal=E.normal,TheNumberOfEdges.case=E.case,PathwayName=pathwaynames))
}
## the below code doesn't work and will be replaced
overlayplot<-function(mydata,genelist,Num.sample.normal,Num.sample.case,Pathway.database) {
require(BiocGenerics)
require(graph)
require(graphite)
require(igraph)
length.pathway<-c()
humanReactome <- pathways("hsapiens", "reactome")
humanBiocarta <- pathways("hsapiens", "biocarta")
humanKEGG <- pathways("hsapiens", "kegg")
plot.multi.dens <- function(s)
{
junk.x = NULL
junk.y = NULL
for(i in 1:length(s))
{
junk.x = c(junk.x, density(s[[i]])$x)
junk.y = c(junk.y, density(s[[i]])$y)
}
xr <- range(junk.x)
yr <- range(junk.y)
plot(density(s[[1]]), xlim = xr, ylim = yr, main = "")
for(i in 1:length(s))
{
lines(density(s[[i]]), xlim = xr, ylim = yr, col = i)
}
}
p0 <-Pathway.database
p <- convertIdentifiers(p0, "entrez")
g<-pathwayGraph(p)
nodelist<-nodes(g)
graph.path<-igraph.from.graphNEL(g)
length.intersect<-length(intersect(unlist(nodelist),unlist(genelist)))
length.pathway<-length(nodelist)
#collect expression data of genes for a specific pathway across normal and tumor samples.
path<-data.frame(matrix(200,length.intersect,(Num.sample.normal+Num.sample.case)))
a<- c()
for (j in 1:length.intersect){
a[j]<-intersect(unlist(nodelist),unlist(genelist))[j]
path[j,]<-mydata[which(genelist==a[j]),] #collect expression data of genes for a specific pathway across normal and tumor samples.
}
##Construct a gene-gene network for normal samples and calculate PageRank values for each gene in this network.
cor.normal <- apply(path[,1:Num.sample.normal], 1, function(x) { apply(path[,1:Num.sample.normal], 1, function(y) { cor.test(x,y)[[3]] })})
cor.normal<-as.matrix(cor.normal)
cor.normal.adj<-matrix(p.adjust(cor.normal,method="fdr"),length.intersect,length.intersect)
cor.normal.adj[ cor.normal.adj > 0.05 ] <- 0
cor.normal.adj[ is.na(cor.normal.adj)] <- 0
diag(cor.normal.adj) <- 0
graph.normal<-graph.adjacency(cor.normal.adj,weighted=TRUE,mode="undirected")
PR.normal <- page.rank(graph.normal, directed = FALSE)
##Construct a gene-gene network for tumor samples and calculate PageRank values for each gene in this network.
cor.case <- apply(path[,(Num.sample.normal+1):(Num.sample.normal+Num.sample.case)], 1, function(x) { apply(path[,(Num.sample.normal+1):(Num.sample.normal+Num.sample.case)], 1, function(y) { cor.test(x,y)[[3]] })})
cor.case<-as.matrix(cor.case)
cor.case.adj<-matrix(p.adjust(cor.case,method="fdr"),length.intersect,length.intersect)
cor.case.adj[ cor.case.adj > 0.05 ] <- 0
cor.case.adj[ is.na(cor.case.adj)] <- 0
diag(cor.case.adj) <- 0
graph.case<-graph.adjacency(cor.case.adj,weighted=TRUE,mode="undirected")
PR.case <- page.rank(graph.case, directed = FALSE)
PoTRA.plot<-plot.multi.dens(list(PR.normal$vector,PR.case$vector))
}
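The core computation repeated in all three functions above, a correlation network thresholded at FDR 0.05 followed by PageRank, can be exercised in isolation. A minimal sketch on simulated data, assuming only the igraph package (the matrix `expr` and all sizes are invented here):

```r
library(igraph)

set.seed(1)
expr <- matrix(rnorm(20 * 10), nrow = 20)  # toy data: 20 genes x 10 samples

# Pairwise correlation-test p-values, FDR-adjusted (mirrors the code above)
pmat <- apply(expr, 1, function(x) apply(expr, 1, function(y) cor.test(x, y)$p.value))
adj  <- matrix(p.adjust(pmat, method = "fdr"), nrow(expr), nrow(expr))
adj[adj > 0.05] <- 0   # drop edges that are not significant after FDR
adj[is.na(adj)] <- 0
diag(adj) <- 0

g  <- graph.adjacency(adj, weighted = TRUE, mode = "undirected")
pr <- page.rank(g, directed = FALSE)$vector
head(sort(pr, decreasing = TRUE))  # the most central genes in the toy network
```

Note that the source keeps the adjusted p-values themselves as edge weights for significant pairs; a transformed weight such as 1 - p (or the absolute correlation) may have been intended, but the sketch preserves the source's behaviour.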
# Convert adjacency matrix to ped format
adj2ped = function(adj, labs = NULL) {
sex = as.integer(attr(adj, 'sex'))
n = length(sex)
nseq = seq_len(n)
# Fix labels
if(is.null(labs))
labs = as.character(nseq)
else {
labs = as.character(labs)
origN = length(labs)
if(n > origN) {
exSeq = seq_len(n - origN)
labs[origN + exSeq] = paste0("e", exSeq)
}
}
# Find fid and mid (father/mother index for each individual)
fid = mid = integer(n)
parents = nseq[.rowSums(adj, n, n) > 0]
for(i in parents) {
kids = nseq[adj[i, ]]
if(sex[i] == 1)
fid[kids] = i
else
mid[kids] = i
}
# If known to be connected, go straight to newPed()
if(isTRUE(attr(adj, "connected")))
return(newPed(labs, fid, mid, sex, ""))
p = ped(id = nseq, fid = fid, mid = mid, sex = sex, reorder = FALSE, validate = FALSE)
relabelFast(p, labs)
}
# Stripped-down version of pedtools::relabel()
relabelFast = function(x, newlabs) {
if(is.pedList(x)) {
y = lapply(x, function(comp) {
comp$ID = newlabs[as.integer(comp$ID)]
comp
})
class(y) = c("pedList", "list")
return(y)
}
x$ID = newlabs
x
}
# Not used
relabelAddedParents = function(x, origN) {
if(is.pedList(x)) {
y = lapply(x, relabelAddedParents, origN)
class(y) = c("pedList", "list")
return(y)
}
n = length(x$ID)
if(n > origN) {
exSeq = seq_len(n - origN)
x$ID[origN + exSeq] = paste0("e", exSeq)
}
x
}
# Convert pedigree to adjacency matrix
ped2adj = function(ped) {
if(is.pedList(ped)) {
return(lapply(ped, ped2adj))
}
adj = matrix(0L, ncol = pedsize(ped), nrow = pedsize(ped),
dimnames = list(labels(ped), labels(ped)))
for(nf in nonfounders(ped))
adj[parents(ped, nf), nf] = 1L
adjMatrix(adj, sex = getSex(ped))
}
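The parent-extraction loop in `adj2ped()` relies on `adj` being a logical matrix (row = parent, column = child) so that `nseq[adj[i, ]]` selects children by logical indexing. A standalone illustration of just that step on a hand-built trio (all objects here are invented for the demo):

```r
# Trio: individual 1 = father, 2 = mother, 3 = child
adj <- matrix(FALSE, 3, 3)
adj[1, 3] <- TRUE   # father -> child
adj[2, 3] <- TRUE   # mother -> child
sex <- c(1L, 2L, 1L)

n <- 3L
nseq <- seq_len(n)
fid <- mid <- integer(n)
parents <- nseq[.rowSums(adj, n, n) > 0]  # rows with at least one child
for (i in parents) {
  kids <- nseq[adj[i, ]]                  # logical indexing: this parent's children
  if (sex[i] == 1) fid[kids] <- i else mid[kids] <- i
}
fid  # 0 0 1  (only the child has a father)
mid  # 0 0 2
```

An integer 0/1 matrix would silently break this loop, since `nseq[c(0L, 0L, 1L)]` indexes by position rather than by condition.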
## Source: /R/adj2ped.R (repo: magnusdv/pedbuildr, no license)
#Regression testing
?mtcars
head(mtcars)
data <- mtcars
# Define variable groups
x <- as.matrix(data[, -1])
y <- data[, 1]
# REGRESSION WITH SIMULTANEOUS ENTRY #######################
# Using variable groups
reg1 <- lm(y ~ x)
# Or specify variables individually
# Results
reg1 # Coefficients only
summary(reg1) # Inferential tests
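The comment "# Or specify variables individually" above can be made concrete: the matrix form and an explicit formula over the mtcars columns fit the same model, since data[, -1] keeps the predictors in their original order (cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb). A quick equivalence check:

```r
data <- mtcars
x <- as.matrix(data[, -1])
y <- data[, 1]
reg1 <- lm(y ~ x)

# The same regression with each predictor named explicitly
reg2 <- lm(mpg ~ cyl + disp + hp + drat + wt + qsec + vs + am + gear + carb,
           data = mtcars)
all.equal(unname(coef(reg1)), unname(coef(reg2)))  # TRUE
```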
## Source: /Regression_testing.R (repo: Zargham1214/R-programming-Modules, no license)
rm(list = ls())
setwd("/Users/Runze/Documents/GitHub/RobustASE/Code/R")
q <- 0.9
nCores <- 2
# dataName <- "desikan"
dataName <- "CPAC200"
# dataName <- "Talairach"
isSVD <- 0
source("function_collection.R")
require(parallel)
tmpList <- ReadDataWeighted(dataName, DA = F, newGraph = T)
AList <- tmpList[[1]]
n <- tmpList[[2]]
M <- tmpList[[3]]
rm(tmpList)
# dVec <- 1:n
# nD <- length(dVec)
# ASum <- add(AList)
require(Matrix)
pairCrossVec <- combn(M, 2)
pairBetweenVec <- Matrix(1:M, nrow=2)
nCross <- dim(pairCrossVec)[2]
nBetween <- dim(pairBetweenVec)[2]
errorABarCross <- rep(0, nCross)
errorPHatCross <- rep(0, nCross)
errorABarASECross <- rep(0, nCross)
errorPHatASECross <- rep(0, nCross)
errorABarBetween <- rep(0, nBetween)
errorPHatBetween <- rep(0, nBetween)
errorABarASEBetween <- rep(0, nBetween)
errorPHatASEBetween <- rep(0, nBetween)
out <- mclapply(1:nCross, function(iIter) ExpTest(M, AList[pairCrossVec[, iIter]],
q, isSVD), mc.cores=nCores)
out = array(unlist(out), dim = c(4, nCross))
errorABarCross <- out[1,]
errorPHatCross <- out[2,]
errorABarASECross <- out[3,]
errorPHatASECross <- out[4,]
out <- mclapply(1:nBetween, function(iIter) ExpTest(M, AList[pairBetweenVec[, iIter]],
q, isSVD), mc.cores=nCores)
out = array(unlist(out), dim = c(4, nBetween))
errorABarBetween <- out[1,]
errorPHatBetween <- out[2,]
errorABarASEBetween <- out[3,]
errorPHatASEBetween <- out[4,]
if (isSVD) {
fileName = paste("../../Result/result_", dataName, "_test_q_", q, "_svd.RData", sep="")
} else {
fileName = paste("../../Result/result_", dataName, "_test_q_", q, "_eig.RData", sep="")
}
save(errorABarCross, errorPHatCross, errorABarASECross, errorPHatASECross,
errorABarBetween, errorPHatBetween, errorABarASEBetween, errorPHatASEBetween,
n, M, file=fileName)
## Source: /Code/R/MainExpRealTest.R (repo: TangRunze/RobustASE, no license)
shinyUI(
pageWithSidebar(
# Application Title
headerPanel("Basketball Wins Predictor"),
sidebarPanel(
numericInput('pointsFor', 'Total Points Scored', value = 0),
numericInput('pointsAgainst', 'Total Points Allowed', value = 0),
numericInput('games', 'Number of Games Played', value = 0),
submitButton('Submit')
),
mainPanel(
h3('Results of prediction'),
h4('Total Points Scored'),
verbatimTextOutput("inputValue"),
h4('Total Points Allowed'),
verbatimTextOutput("inputValue2"),
h4('Number of Games Played'),
verbatimTextOutput("inputValue3"),
h4('Predicted Number of Wins'),
verbatimTextOutput("prediction"),
h4('Predicted Number of Wins for Season'),
verbatimTextOutput("prediction2")
)
)
)
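The ui.R above references five outputs (`inputValue` through `prediction2`) that must be produced by a matching server.R, which is not part of this file. A hypothetical sketch wiring them up; the Pythagorean-style win estimate and the 82-game season length are invented here and are not the repository's actual model:

```r
# server.R (hypothetical companion to the ui.R above)
library(shiny)

shinyServer(function(input, output) {
  winsPred <- reactive({
    # Invented stand-in: basketball Pythagorean expectation with exponent 2
    if (input$pointsFor + input$pointsAgainst == 0) return(0)
    input$games * input$pointsFor^2 /
      (input$pointsFor^2 + input$pointsAgainst^2)
  })
  output$inputValue  <- renderPrint(input$pointsFor)
  output$inputValue2 <- renderPrint(input$pointsAgainst)
  output$inputValue3 <- renderPrint(input$games)
  output$prediction  <- renderPrint(round(winsPred(), 1))
  output$prediction2 <- renderPrint({
    if (input$games == 0) 0 else round(winsPred() / input$games * 82, 1)
  })
})
```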
## Source: /ui.R (repo: TylerCrain/developing_data_products, no license)
theme_registration <- function() {
register_theme_elements(
elementalist.polygon = element_polygon(
fill = "white", colour = "black", size = 0.5, linetype = 1,
inherit.blank = TRUE, linejoin = "round", lineend = "round"
),
elementalist.geom_polygon = element_polygon(),
elementalist.geom_line = element_line(),
elementalist.geom_rect = element_rect(),
elementalist.geom_text = element_text(),
element_tree = list(
elementalist.polygon = el_def("element_polygon"),
elementalist.geom_polygon = el_def("element_polygon",
"elementalist.polygon"),
elementalist.geom_line = el_def("element_line", "line"),
elementalist.geom_rect = el_def("element_rect", "rect"),
elementalist.geom_text = el_def("element_text", "text")
)
)
}
## Source: /R/theme_registration.R (repo: gejielin/elementalist, permissive license)
library(ssnapstats)
context("Check internal_calculate_d2ki72hrs_aggr_values")
# Create the KI computed variables
su_times <- tibble::tibble(
S1FirstStrokeUnitArrivalDateTime = c(as.POSIXct(
c("2017-01-01 01:00:00",
"2017-01-01 02:00:00",
"2017-01-01 03:00:00",
"2017-01-01 04:00:00",
"2017-01-01 05:00:00"),
origin = "1970-01-01",
tz = "UTC"), NA),
S1PatientClockStartDateTime = as.POSIXct(
c("2017-01-01 00:00:00",
"2017-01-01 00:00:00",
"2017-01-01 00:00:00",
"2017-01-01 00:00:00",
"2017-01-01 00:00:00",
"2017-01-01 00:00:00"),
origin = "1970-01-01",
tz = "UTC"),
S1FirstWard = factor(c( "SU", "SU", "SU", "SU", "SU", "O"),
levels = c("ICH", "SU", "MAC", "O")),
S2IAI = c(FALSE, FALSE, FALSE, FALSE, FALSE, FALSE))
su_times <- dplyr::mutate(
su_times,
!!! internal_d2ki72hrs_calculated_field_functions)
# Then create monthly tallies and group the data (NB we only have
# one group in our sample, but this imitates the domain calculations)
su_times <- dplyr::mutate(
su_times,
reportingMonth = format(.data[["S1PatientClockStartDateTime"]],
"%m-%Y"))
monthly72hrs <- dplyr::group_by(su_times,
.data[["reportingMonth"]])
kiresults <- dplyr::summarise(
monthly72hrs,
!!! internal_d2ki72hrs_aggr_value_functions("TC")
)
test_that("Check Median first stroke unit time calculated
correctly", {
expect_equal(kiresults[["TCKIMedianFirstSUTime"]], c(180))
})
test_that("Check percent admitted to a stroke unit within 4 hrs
calculated correctly", {
expect_equal(kiresults[["TCKIPCFirstSUIn4Hrs"]], c(66.7))
})
## Source: /tests/testthat/test-internal_calculate_d2ki72hrs_aggr_values.R (repo: md0u80c9/SSNAPStats, no license)
mapdeckGridDependency <- function() {
list(
htmltools::htmlDependency(
"grid",
"1.0.0",
system.file("htmlwidgets/lib/grid", package = "mapdeck"),
script = c("grid.js")
)
)
}
#' Add Grid
#'
#' The Grid Layer renders a grid heatmap based on an array of points.
#' It uses a constant cell size and projects each point into a cell.
#' The colour and height of a cell are scaled by the number of points it contains.
#'
#' @inheritParams add_polygon
#' @param lon column containing longitude values
#' @param lat column containing latitude values
#' @param colour_range vector of hex colours
#' @param cell_size size of each cell in meters
#' @param extruded logical indicating if cells are elevated or not
#' @param elevation_scale cell elevation multiplier
#'
#' @examples
#' \donttest{
#' ## You need a valid access token from Mapbox
#' key <- 'abc'
#'
#' df <- read.csv(paste0(
#' 'https://raw.githubusercontent.com/uber-common/deck.gl-data/master/',
#' 'examples/3d-heatmap/heatmap-data.csv'
#' ))
#'
#'
#' mapdeck( token = key, style = 'mapbox://styles/mapbox/dark-v9', pitch = 45 ) %>%
#' add_grid(
#' data = df
#' , lat = "lat"
#' , lon = "lng"
#' , cell_size = 5000
#' , elevation_scale = 50
#' , layer_id = "grid_layer"
#' , auto_highlight = TRUE
#' )
#' }
#'
#' @export
add_grid <- function(
map,
data = get_map_data(map),
lon = NULL,
lat = NULL,
polyline = NULL,
colour_range = viridisLite::viridis(5),
cell_size = 1000,
extruded = TRUE,
elevation_scale = 1,
auto_highlight = FALSE,
layer_id = NULL,
digits = 6
) {
objArgs <- match.call(expand.dots = F)
data <- normaliseSfData(data, "POINT")
polyline <- findEncodedColumn(data, polyline)
if( !is.null(polyline) && !polyline %in% names(objArgs) ) {
objArgs[['polyline']] <- polyline
data <- unlistMultiGeometry( data, polyline )
}
## parameter checks
usePolyline <- isUsingPolyline(polyline)
checkNumeric(digits)
checkNumeric(elevation_scale)
checkNumeric(cell_size)
checkHex(colour_range)
layer_id <- layerId(layer_id, "grid")
## end parameter checks
if ( !usePolyline ) {
## TODO(check only a data.frame)
data[['polyline']] <- googlePolylines::encode(data, lon = lon, lat = lat, byrow = TRUE)
polyline <- 'polyline'
## TODO(check lon & lat exist / passed in as arguments )
objArgs[['lon']] <- NULL
objArgs[['lat']] <- NULL
objArgs[['polyline']] <- polyline
}
allCols <- gridColumns()
requiredCols <- requiredGridColumns()
# colourColumns <- shapeAttributes(
# fill_colour = NULL
# , stroke_colour = NULL
# , stroke_from = NULL
# , stroke_to = NULL
# )
shape <- createMapObject(data, allCols, objArgs)
# pal <- createPalettes(shape, colourColumns)
#
# colour_palettes <- createColourPalettes(data, pal, colourColumns, palette)
# colours <- createColours(shape, colour_palettes)
# if(length(colours) > 0){
# shape <- replaceVariableColours(shape, colours)
# }
requiredDefaults <- setdiff(requiredCols, names(shape))
if(length(requiredDefaults) > 0){
shape <- addDefaults(shape, requiredDefaults, "grid")
}
shape <- jsonlite::toJSON(shape, digits = digits)
map <- addDependency(map, mapdeckGridDependency())
invoke_method(map, "add_grid", shape, layer_id, cell_size, jsonlite::toJSON(extruded, auto_unbox = T), elevation_scale, colour_range, auto_highlight)
}
requiredGridColumns <- function() {
c()
}
gridColumns <- function() {
c("polyline")
}
gridDefaults <- function(n) {
data.frame(
stringsAsFactors = F
)
}
## Source: /R/map_layer_grid.R (repo: ChrisMuir/mapdeck, no license)
mapdeckGridDependency <- function() {
list(
htmltools::htmlDependency(
"grid",
"1.0.0",
system.file("htmlwidgets/lib/grid", package = "mapdeck"),
script = c("grid.js")
)
)
}
#' Add Grid
#'
#' The Grid Layer renders a grid heatmap based on an array of points.
#' It takes the constant size all each cell, projects points into cells.
#' The color and height of the cell is scaled by number of points it contains.
#'
#' @inheritParams add_polygon
#' @param lon column containing longitude values
#' @param lat column containing latitude values
#' @param colour_range vector of hex colours
#' @param cell_size size of each cell in meters
#' @param extruded logical indicating if cells are elevated or not
#' @param elevation_scale cell elevation multiplier
#'
#' @examples
#' \donttest{
#' ## You need a valid access token from Mapbox
#' key <- 'abc'
#'
#' df <- read.csv(paste0(
#' 'https://raw.githubusercontent.com/uber-common/deck.gl-data/master/',
#' 'examples/3d-heatmap/heatmap-data.csv'
#' ))
#'
#'
#' mapdeck( token = key, style = 'mapbox://styles/mapbox/dark-v9', pitch = 45 ) %>%
#' add_grid(
#' data = df
#' , lat = "lat"
#' , lon = "lng"
#' , cell_size = 5000
#' , elevation_scale = 50
#' , layer_id = "grid_layer"
#' , auto_highlight = TRUE
#' )
#' }
#'
#' @export
add_grid <- function(
map,
data = get_map_data(map),
lon = NULL,
lat = NULL,
polyline = NULL,
colour_range = viridisLite::viridis(5),
cell_size = 1000,
extruded = TRUE,
elevation_scale = 1,
auto_highlight = FALSE,
layer_id = NULL,
digits = 6
) {
objArgs <- match.call(expand.dots = F)
data <- normaliseSfData(data, "POINT")
polyline <- findEncodedColumn(data, polyline)
if( !is.null(polyline) && !polyline %in% names(objArgs) ) {
objArgs[['polyline']] <- polyline
data <- unlistMultiGeometry( data, polyline )
}
## parmater checks
usePolyline <- isUsingPolyline(polyline)
checkNumeric(digits)
checkNumeric(elevation_scale)
checkNumeric(cell_size)
checkHex(colour_range)
layer_id <- layerId(layer_id, "grid")
## end parameter checks
if ( !usePolyline ) {
## TODO(check only a data.frame)
data[['polyline']] <- googlePolylines::encode(data, lon = lon, lat = lat, byrow = TRUE)
polyline <- 'polyline'
## TODO(check lon & lat exist / passed in as arguments )
objArgs[['lon']] <- NULL
objArgs[['lat']] <- NULL
objArgs[['polyline']] <- polyline
}
allCols <- gridColumns()
requiredCols <- requiredGridColumns()
# colourColumns <- shapeAttributes(
# fill_colour = NULL
# , stroke_colour = NULL
# , stroke_from = NULL
# , stroke_to = NULL
# )
shape <- createMapObject(data, allCols, objArgs)
# pal <- createPalettes(shape, colourColumns)
#
# colour_palettes <- createColourPalettes(data, pal, colourColumns, palette)
# colours <- createColours(shape, colour_palettes)
# if(length(colours) > 0){
# shape <- replaceVariableColours(shape, colours)
# }
requiredDefaults <- setdiff(requiredCols, names(shape))
if(length(requiredDefaults) > 0){
shape <- addDefaults(shape, requiredDefaults, "grid")
}
shape <- jsonlite::toJSON(shape, digits = digits)
map <- addDependency(map, mapdeckGridDependency())
invoke_method(map, "add_grid", shape, layer_id, cell_size, jsonlite::toJSON(extruded, auto_unbox = T), elevation_scale, colour_range, auto_highlight)
}
requiredGridColumns <- function() {
c()
}
gridColumns <- function() {
c("polyline")
}
gridDefaults <- function(n) {
data.frame(
stringsAsFactors = F
)
}
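The aggregation the grid layer performs client-side (bin points into fixed-size cells, count the points per cell, scale colour and height by that count) can be approximated in R for a sanity check. This is an illustrative sketch only — it is not part of mapdeck, and for simplicity the cell size is in coordinate units (degrees) rather than the metres deck.gl uses:

```r
## Illustrative only: approximate the grid layer's aggregation in R.
## Bins points into square cells of `cell_size` (coordinate units, not
## metres) and counts points per cell -- the count is what deck.gl
## scales each cell's colour and height by.
grid_counts <- function(lon, lat, cell_size = 0.1) {
  cell_x <- floor(lon / cell_size)
  cell_y <- floor(lat / cell_size)
  as.data.frame(table(cell = paste(cell_x, cell_y, sep = ":")),
                stringsAsFactors = FALSE)
}

pts <- data.frame(lng = c(0.01, 0.02, 0.15), lat = c(0.01, 0.03, 0.01))
counts <- grid_counts(pts$lng, pts$lat)
# two occupied cells: "0:0" holds 2 points, "1:0" holds 1
```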
|
#' @include Transect.R
#' @importFrom methods validObject
#' @title Class "Line.Transect" extends Class "Transect"
#'
#' @description Class \code{"Line.Transect"} is an S4 class
#' detailing a set of transects from a line transect design.
#' @name Line.Transect-class
#' @title S4 Class "Line.Transect"
#' @slot line.length the total line length for the transect set
#' @slot trackline the total on and off effort trackline length from
#' the start of the first transect to the end of the last
#' @slot cyclictrackline the trackline distance plus the distance
#' required to return from the end of the last transect to the
#' beginning of the first
#' @keywords classes
#' @seealso \code{\link{make.design}}
#' @export
setClass(Class = "Line.Transect",
representation = representation(line.length = "numeric",
trackline = "numeric",
cyclictrackline = "numeric"),
contains = "Transect")
setMethod(
f="initialize",
signature="Line.Transect",
definition=function(.Object, design, lines, samp.count, line.length, seg.length, effort.allocation,
spacing, design.angle, edge.protocol, cov.area = numeric(0),
cov.area.polys = list(), strata.area, strata.names, trackline,
cyclictrackline){
#Set slots
.Object@strata.names <- strata.names
.Object@design <- design
.Object@samplers <- lines
.Object@strata.area <- strata.area
.Object@cov.area <- cov.area
.Object@cov.area.polys <- cov.area.polys
.Object@line.length <- line.length
.Object@trackline <- trackline
.Object@cyclictrackline <- cyclictrackline
.Object@samp.count <- samp.count
.Object@effort.allocation <- effort.allocation
.Object@spacing <- spacing
.Object@design.angle <- design.angle
.Object@edge.protocol <- edge.protocol
#Check object is valid
valid <- try(validObject(.Object), silent = TRUE)
if(inherits(valid, "try-error")){
stop(attr(valid, "condition")$message, call. = FALSE)
}
# return object
return(.Object)
}
)
setValidity("Line.Transect",
function(object){
return(TRUE)
}
)
# GENERIC METHODS DEFINITIONS --------------------------------------------
#' Plot
#'
#' Plots an S4 object of class 'Transect'
#'
#' @param x object of class transect
#' @param y not used
#' @param ... Additional arguments: add (TRUE/FALSE) whether to add to existing
#' plot, col colour, lwd line width (for line transects) and pch point symbols
#' (for point transects).
#' @rdname plot.Transect-methods
#' @exportMethod plot
setMethod(
f="plot",
signature="Line.Transect",
definition=function(x, y, ...){
# If main is not supplied then take it from the object
additional.args <- list(...)
add <- ifelse("add" %in% names(additional.args), additional.args$add, FALSE)
col <- ifelse("col" %in% names(additional.args), additional.args$col, 5)
lwd <- ifelse("lwd" %in% names(additional.args), additional.args$lwd, 2)
#add.cover.area <- ifelse("add.cover.area" %in% names(additional.args), additional.args$add.cover.area, FALSE)
sf.column.poly <- attr(x@cov.area.polys, "sf_column")
sf.column.samps <- attr(x@samplers, "sf_column")
if(length(x@samplers) > 0){
#if(add.cover.area){
# plot(x@cov.area.polys[[sf.column.poly]], add = add, col = 4)
#}
plot(x@samplers[[sf.column.samps]], add = add, col = col, lwd = lwd)
}else{
warning("No samplers to plot", call. = F, immediate. = F)
}
invisible(x)
}
)
#' Show
#'
#' Displays details of an S4 object of class 'Transect'
#' @param object an object of class Transect
#' @rdname show.Transect-methods
#' @exportMethod show
setMethod(
f="show",
signature="Line.Transect",
definition=function(object){
strata.names <- object@strata.names
for(strat in seq(along = object@design)){
title <- paste("\n Strata ", strata.names[strat], ":", sep = "")
len.title <- nchar(title)
underline <- paste(" ", paste(rep("_", (len.title-3)), collapse = ""), sep = "")
cat(title, fill = T)
cat(underline, fill = T)
design <- switch(object@design[strat],
"random" = "randomly located transects",
"systematic" = "systematically spaced parallel transects",
"eszigzag" = "equal spaced zigzag",
"eszigzagcom" = "complementary equal spaced zigzags",
"segmentedgrid" = "segmented grid")
cat("Design: ", design, fill = T)
if(object@design[strat] %in% c("systematic", "eszigzag", "eszigzagcom", "segmentedgrid")){
cat("Spacing: ", object@spacing[strat], fill = T)
}
cat("Line length:", object@line.length[strat], fill = T)
if(object@design[strat] == "segmentedgrid"){
cat("Segment length: ", object@seg.length[strat], fill = T)
cat("Segment threshold: ", object@seg.threshold[strat], fill = T)
}
cat("Trackline length:", object@trackline[strat], fill = T)
cat("Cyclic trackline length:", object@cyclictrackline[strat], fill = T)
cat("Number of samplers: ", object@samp.count[strat], fill = T)
cat("Design angle: ", object@design.angle[strat], fill = T)
cat("Edge protocol: ", object@edge.protocol[strat], fill = T)
cat("Covered area: ", object@cov.area[strat], fill = T)
cat("Strata coverage: ", round((object@cov.area[strat]/object@strata.area[strat])*100,2), "%", fill = T, sep = "")
cat("Strata area: ", object@strata.area[strat], fill = T)
}
#Now print totals
cat("\n Study Area Totals:", fill = T)
cat(" _________________", fill = T)
cat("Line length:", sum(object@line.length, na.rm = T), fill = T)
cat("Trackline length:", sum(object@trackline, na.rm = T), fill = T)
cat("Cyclic trackline length:", sum(object@cyclictrackline, na.rm = T), fill = T)
cat("Number of samplers: ", sum(object@samp.count, na.rm = T), fill = T)
if(length(object@effort.allocation) > 0){
cat("Effort allocation: ", paste(object@effort.allocation*100, collapse = "%, "), "%", fill = T, sep = "")
}
cat("Covered area: ", sum(object@cov.area, na.rm = T), fill = T)
index <- which(!is.na(object@cov.area))
cat("Average coverage: ", round((sum(object@cov.area[index])/sum(object@strata.area))*100,2), "%", fill = T, sep = "")
invisible(object)
}
)
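The per-stratum and average coverage figures printed by the show method above reduce to a covered-area-over-stratum-area ratio. A minimal standalone sketch of that arithmetic, using invented areas:

```r
## Standalone sketch of the coverage percentage reported by show():
## covered area as a percentage of stratum area, rounded to 2 dp.
## The area values here are invented for illustration.
cov.area    <- c(25, 10)   # covered area per stratum
strata.area <- c(100, 40)  # total area per stratum
coverage <- round((cov.area / strata.area) * 100, 2)
# both strata are 25% covered
```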
|
/R/Line.Transect.R
|
no_license
|
olexandr-konovalov/dssd
|
R
| false
| false
| 6,558
|
r
|
|
#' Locate the position of patterns in a string.
#'
#' Vectorised over \code{string} and \code{pattern}. If the match is of length
#' 0, (e.g. from a special match like \code{$}) end will be one character less
#' than start.
#'
#' @inheritParams str_detect
#' @return For \code{str_locate}, an integer matrix. First column gives start
#'   position of match, and second column gives end position. For
#' \code{str_locate_all} a list of integer matrices.
#' @seealso
#' \code{\link{str_extract}} for a convenient way of extracting matches,
#' \code{\link[stringi]{stri_locate}} for the underlying implementation.
#' @export
#' @examples
#' fruit <- c("apple", "banana", "pear", "pineapple")
#' str_locate(fruit, "$")
#' str_locate(fruit, "a")
#' str_locate(fruit, "e")
#' str_locate(fruit, c("a", "b", "p", "p"))
#'
#' str_locate_all(fruit, "a")
#' str_locate_all(fruit, "e")
#' str_locate_all(fruit, c("a", "b", "p", "p"))
#'
#' # Find location of every character
#' str_locate_all(fruit, "")
str_locate <- function(string, pattern) {
switch(type(pattern),
empty = stri_locate_first_boundaries(string,
opts_brkiter = stri_opts_brkiter("character")),
bound = stri_locate_first_boundaries(string,
opts_brkiter = attr(pattern, "options")),
fixed = stri_locate_first_fixed(string, pattern,
opts_fixed = attr(pattern, "options")),
coll = stri_locate_first_coll(string, pattern,
opts_collator = attr(pattern, "options")),
regex = stri_locate_first_regex(string, pattern,
opts_regex = attr(pattern, "options"))
)
}
#' @rdname str_locate
#' @export
str_locate_all <- function(string, pattern) {
switch(type(pattern),
empty = stri_locate_all_boundaries(string, omit_no_match = TRUE,
opts_brkiter = stri_opts_brkiter("character")),
bound = stri_locate_all_boundaries(string, omit_no_match = TRUE,
opts_brkiter = attr(pattern, "options")),
fixed = stri_locate_all_fixed(string, pattern, omit_no_match = TRUE,
opts_fixed = attr(pattern, "options")),
regex = stri_locate_all_regex(string, pattern,
omit_no_match = TRUE, opts_regex = attr(pattern, "options")),
coll = stri_locate_all_coll(string, pattern,
omit_no_match = TRUE, opts_collator = attr(pattern, "options"))
)
}
#' Switch location of matches to location of non-matches.
#'
#' Invert a matrix of match locations to match the opposite of what was
#' previously matched.
#'
#' @param loc matrix of match locations, as from \code{\link{str_locate_all}}
#' @return numeric match giving locations of non-matches
#' @export
#' @examples
#' numbers <- "1 and 2 and 4 and 456"
#' num_loc <- str_locate_all(numbers, "[0-9]+")[[1]]
#' str_sub(numbers, num_loc[, "start"], num_loc[, "end"])
#'
#' text_loc <- invert_match(num_loc)
#' str_sub(numbers, text_loc[, "start"], text_loc[, "end"])
invert_match <- function(loc) {
cbind(
start = c(0L, loc[, "end"] + 1L),
end = c(loc[, "start"] - 1L, -1L)
)
}
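A concrete trace of the interval inversion: given matches at positions [3,3] and [7,9], the non-matches are the gap before the first match, the gap between the two, and the (possibly empty) tail after the last.

```r
## Trace invert_match() by hand on a small location matrix.
invert_match <- function(loc) {
  cbind(
    start = c(0L, loc[, "end"] + 1L),
    end   = c(loc[, "start"] - 1L, -1L)
  )
}

loc  <- cbind(start = c(3L, 7L), end = c(3L, 9L))
gaps <- invert_match(loc)
# gaps: start = 0, 4, 10 and end = 2, 6, -1
# (the trailing [10, -1] row marks "from 10 to end of string")
```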
|
/R/locate.r
|
no_license
|
hyiltiz/stringr
|
R
| false
| false
| 2,967
|
r
|
|
#' TODO Functions that allow pulling data on artists
#' https://developer.spotify.com/documentation/web-api/reference-beta/#category-artists
#' Info on artist.
#' @return a list that contains the info on the artist
#' @param artistID spotify ID of the artist.
#' @export
getArtist <- function(artistID) {
url <- paste("https://api.spotify.com/v1/artists/", artistID, sep = "")
result <- GETRequest(url)
return(result)
}
#' Returns the toptracks of artist.
#' @param artistID The Spotify ID of the artist
#' @param country An ISO 3166-1 alpha-2 country code or the string
#' @export
getArtistTopTracks <- function(artistID, country) {
url <- paste("https://api.spotify.com/v1/artists/",
artistID,
"/top-tracks?country=",
country,
sep = "")
result <- GETRequest(url) %>%
buildToptrackDF()
return(result)
}
#' Builds a data frame.
#' @param trackList a list that contains tracks
buildToptrackDF <- function(trackList) {
df <- data.frame(track = character(10),
popularity = integer(10),
id = character(10),
url = character(10),
stringsAsFactors = FALSE)
for (i in 1:10) {
roww <- c(trackList$tracks[[i]]$name,
trackList$tracks[[i]]$popularity,
trackList$tracks[[i]]$id,
trackList$tracks[[i]]$href)
df[i,] <- roww
}
return(df)
}
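`buildToptrackDF()` flattens the nested list returned by the API into one row per track, assuming exactly ten tracks. The sketch below uses a fabricated two-track list (the field names mirror those the function reads: `name`, `popularity`, `id`, `href`) and a `do.call(rbind, lapply(...))` variant that handles any track count:

```r
## Illustrative sketch: flatten a (fabricated) top-tracks list into a
## data frame. Unlike buildToptrackDF(), which hard-codes 10 rows,
## this handles however many tracks are present.
trackList <- list(tracks = list(
  list(name = "Song A", popularity = 80, id = "id1", href = "url1"),
  list(name = "Song B", popularity = 75, id = "id2", href = "url2")
))

df <- do.call(rbind, lapply(trackList$tracks, function(t) {
  data.frame(track = t$name, popularity = t$popularity,
             id = t$id, url = t$href, stringsAsFactors = FALSE)
}))
# df: 2 rows with columns track, popularity, id, url
```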
|
/R/artist.R
|
no_license
|
topiaskarjalainen/SpotR
|
R
| false
| false
| 1,439
|
r
|
|
#HIDDEN RATES MODEL OF BINARY TRAIT EVOLUTION
#written by Jeremy M. Beaulieu
corHMM <- function(phy, data, rate.cat, rate.mat=NULL, node.states=c("joint", "marginal", "scaled", "none"), optim.method=c("subplex"), p=NULL, root.p=NULL, ip=NULL, nstarts=10, n.cores=NULL, sann.its=5000, diagn=FALSE){
# Checks to make sure node.states is not NULL. If it is, just returns a diagnostic message asking for value.
if(is.null(node.states)){
obj <- NULL
obj$loglik <- NULL
obj$diagnostic <- paste("No model for ancestral states selected. Please pass one of the following to corHMM command for parameter \'node.states\': joint, marginal, scaled, or none.")
return(obj)
}
else { # even if node.states is not NULL, need to make sure its one of the three valid options
valid.models <- c("joint", "marginal", "scaled", "none")
if(!any(valid.models == node.states)){
obj <- NULL
obj$loglik <- NULL
obj$diagnostic <- paste("\'",node.states, "\' is not valid for ancestral state reconstruction method. Please pass one of the following to corHMM command for parameter \'node.states\': joint, marginal, scaled, or none.",sep="")
return(obj)
}
if(length(node.states) > 1){ # User did not enter a value, so just pick marginal.
node.states <- "marginal"
cat("No model selected for \'node.states\'. Will perform marginal ancestral state estimation.\n")
}
}
# Checks to make sure phy & data have same taxa. Fixes conflicts (see match.tree.data function).
matching <- match.tree.data(phy,data)
data <- matching$data
phy <- matching$phy
# Will not perform reconstructions on invariant characters
if(nlevels(as.factor(data[,1])) <= 1){
obj <- NULL
obj$loglik <- NULL
obj$diagnostic <- paste("Character is invariant. Analysis stopped.",sep="")
return(obj)
} else {
# Still need to make sure second level isn't just an ambiguity
lvls <- as.factor(data[,1])
if(nlevels(as.factor(data[,1])) == 2 && length(which(lvls == "?"))){
obj <- NULL
obj$loglik <- NULL
obj$diagnostic <- paste("Character is invariant. Analysis stopped.",sep="")
return(obj)
}
}
#Creates the data structure and orders the rows to match the tree.
phy$edge.length[phy$edge.length<=1e-5]=1e-5
data.sort <- data.frame(data[,2], data[,2],row.names=data[,1])
data.sort <- data.sort[phy$tip.label,]
counts <- table(data.sort[,1])
levels <- levels(as.factor(data.sort[,1]))
cols <- as.factor(data.sort[,1])
cat("State distribution in data:\n")
cat("States:",levels,"\n",sep="\t")
cat("Counts:",counts,"\n",sep="\t")
#Some initial values for use later
k=2
ub = log(100)
lb = -20
obj <- NULL
nb.tip <- length(phy$tip.label)
nb.node <- phy$Nnode
rate.cat=rate.cat
root.p=root.p
nstarts=nstarts
ip=ip
model.set.final<-rate.cat.set.corHMM(phy=phy,data.sort=data.sort,rate.cat=rate.cat)
if(!is.null(rate.mat)){
rate <- rate.mat
model.set.final$np <- max(rate, na.rm=TRUE)
rate[is.na(rate)]=max(rate, na.rm=TRUE)+1
model.set.final$rate <- rate
model.set.final$index.matrix <- rate.mat
## for precursor type models ##
col.sums <- which(colSums(rate.mat, na.rm=TRUE) == 0)
row.sums <- which(rowSums(rate.mat, na.rm=TRUE) == 0)
drop.states <- col.sums[which(col.sums == row.sums)]
if(length(drop.states > 0)){
model.set.final$liks[,drop.states] <- 0
}
###############################
}
lower = rep(lb, model.set.final$np)
upper = rep(ub, model.set.final$np)
if(optim.method=="twoStep"){
if(!is.null(p)){
cat("Calculating likelihood from a set of fixed parameters", "\n")
out<-NULL
est.pars<-log(p)
out$objective <- dev.corhmm(est.pars,phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
est.pars <- exp(est.pars)
}
else{
cat("Initializing...", "\n")
model.set.init<-rate.cat.set.corHMM(phy=phy,data.sort=data.sort,rate.cat=1)
rate<-rate.mat.maker(hrm=TRUE,rate.cat=1)
rate<-rate.par.eq(rate,eq.par=c(1,2))
model.set.init$index.matrix<-rate
rate[is.na(rate)]<-max(rate,na.rm=TRUE)+1
model.set.init$rate<-rate
dat<-as.matrix(data.sort)
dat<-phyDat(dat,type="USER", levels=c("0","1"))
par.score<-parsimony(phy, dat, method="fitch")
tl <- sum(phy$edge.length)
mean.change = par.score/tl
if(mean.change==0){
ip=exp(lb)+0.01
}else{
ip<-rexp(1, 1/mean.change)
}
if(ip < exp(lb) || ip > exp(ub)){ # initial parameter value is outside bounds
ip <- exp(lb)
}
lower = rep(lb, model.set.final$np)
upper = rep(ub, model.set.final$np)
opts <- list("algorithm"="NLOPT_LN_SBPLX", "maxeval"="1000000", "ftol_rel"=.Machine$double.eps^0.5)
cat("Finished. Beginning simulated annealing Round 1...", "\n")
out.sann <- GenSA(rep(log(ip), model.set.final$np), fn=dev.corhmm, lower=lower, upper=upper, control=list(max.call=sann.its), phy=phy,liks=model.set.final$liks, Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
cat("Finished. Refining using subplex routine...", "\n")
#out = subplex(out.sann$par, fn=dev.corhmm, control=list(.Machine$double.eps^0.25, parscale=rep(0.1, length(out.sann$par))), phy=phy,liks=model.set.init$liks,Q=model.set.init$Q,rate=model.set.init$rate,root.p=root.p)
out = nloptr(x0=out.sann$par, eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
cat("Finished. Beginning simulated annealing Round 2...", "\n")
out.sann <- GenSA(out$solution, fn=dev.corhmm, lower=lower, upper=upper, control=list(max.call=sann.its), phy=phy,liks=model.set.final$liks, Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
cat("Finished. Refining using subplex routine...", "\n")
#out = subplex(out.sann$par, fn=dev.corhmm, control=list(.Machine$double.eps^0.25, parscale=rep(0.1, length(out.sann$par))), phy=phy,liks=model.set.init$liks,Q=model.set.init$Q,rate=model.set.init$rate,root.p=root.p)
out = nloptr(x0=out.sann$par, eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
cat("Finished. Beginning simulated annealing Round 3...", "\n")
out.sann <- GenSA(out$solution, fn=dev.corhmm, lower=lower, upper=upper, control=list(max.call=sann.its), phy=phy,liks=model.set.final$liks, Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
cat("Finished. Refining using subplex routine...", "\n")
#out = subplex(out.sann$par, fn=dev.corhmm, control=list(.Machine$double.eps^0.25, parscale=rep(0.1, length(out.sann$par))), phy=phy,liks=model.set.init$liks,Q=model.set.init$Q,rate=model.set.init$rate,root.p=root.p)
out = nloptr(x0=out.sann$par, eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
loglik <- -out$objective
est.pars <- exp(out$solution)
}
}
if(optim.method=="subplex"){
opts <- list("algorithm"="NLOPT_LN_SBPLX", "maxeval"="1000000", "ftol_rel"=.Machine$double.eps^0.5)
if(!is.null(p)){
cat("Calculating likelihood from a set of fixed parameters", "\n")
out<-NULL
est.pars<-log(p)
out$objective<-dev.corhmm(est.pars,phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
loglik <- -out$objective
est.pars <- exp(est.pars)
}
#If a user-specified starting value(s) is not supplied this begins loop through a set of randomly chosen starting values:
else{
#If a user-specified starting value(s) is supplied:
if(is.null(ip)){
if(is.null(n.cores)){
cat("Beginning thorough optimization search -- performing", nstarts, "random restarts", "\n")
#If the analysis is to be run a single processor:
if(is.null(n.cores)){
#Sets parameter settings for random restarts by taking the parsimony score and dividing
#by the total length of the tree
dat<-as.matrix(data.sort)
dat<-phyDat(dat,type="USER", levels=c("0","1"))
par.score<-parsimony(phy, dat, method="fitch")/2
tl <- sum(phy$edge.length)
mean.change = par.score/tl
if(mean.change==0){
ip=0.01+exp(lb)
}else{
ip <- rexp(model.set.final$np, 1/mean.change)
}
ip[ip < exp(lb)] = exp(lb)
ip[ip > exp(ub)] = exp(lb)
out = nloptr(x0=rep(log(ip), length.out = model.set.final$np), eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
tmp = matrix(,1,ncol=(1+model.set.final$np))
tmp[,1] = out$objective
tmp[,2:(model.set.final$np+1)] = out$solution
for(i in 2:nstarts){
#Temporary solution for ensuring an ordered Q with respect to the rate classes. If a simpler model is called this feature is automatically turned off:
if(mean.change==0){
starts=rep(0.01+exp(lb), model.set.final$np)
}else{
starts<-rexp(model.set.final$np, 1/mean.change)
}
starts[starts < exp(lb)] = exp(lb)
starts[starts > exp(ub)] = exp(lb)
par.order<-NA
if(rate.cat == 2){
try(par.order<-starts[3] > starts[8])
if(!is.na(par.order)){
pp.tmp <- c(starts[3],starts[8])
starts[3] <- min(pp.tmp)
starts[8] <- max(pp.tmp)
}
}
if(rate.cat >= 3 && rate.cat <= 7){
#Same ordering constraint, generalized: the ordered parameters sit at
#positions 3, 9, 15, ..., with the last one at 6*(rate.cat-1)+2.
idx <- c(3 + 6*(0:(rate.cat - 2)), 6*(rate.cat - 1) + 2)
if(!any(is.na(starts[idx]))){
starts[idx] <- sort(starts[idx])
}
}
out.alt = nloptr(x0=rep(log(starts), length.out = model.set.final$np), eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
tmp[,1] = out.alt$objective
tmp[,2:(model.set.final$np+1)] = starts
if(out.alt$objective < out$objective){
out = out.alt
ip = starts
}
}
loglik <- -out$objective
est.pars <- exp(out$solution)
}
}
#If the analysis is to be run on multiple processors:
else{
#Sets parameter settings for random restarts by taking the parsimony score and dividing
#by the total length of the tree
cat("Beginning thorough optimization search -- performing", nstarts, "random restarts", "\n")
dat<-as.matrix(data.sort)
dat<-phyDat(dat,type="USER", levels=c("0","1"))
par.score<-parsimony(phy, dat, method="fitch")/2
tl <- sum(phy$edge.length)
mean.change = par.score/tl
random.restart<-function(nstarts){
tmp = matrix(,1,ncol=(1+model.set.final$np))
#Temporary solution for ensuring an ordered Q with respect to the rate classes. If a simpler model is called this feature is automatically turned off:
if(mean.change==0){
starts=rep(0.01+exp(lb), model.set.final$np)
}else{
starts<-rexp(model.set.final$np, 1/mean.change)
}
starts[starts < exp(lb)] = exp(lb)
starts[starts > exp(ub)] = exp(ub)
if(rate.cat >= 2 && rate.cat <= 7){
#Temporary ordering constraint, generalized across rate.cat: the ordered
#parameters sit at positions 3, 9, 15, ..., with the last one at 6*(rate.cat-1)+2.
#If a simpler model is called (indices run past the parameter vector) this is skipped.
idx <- c(3 + 6*(0:(rate.cat - 2)), 6*(rate.cat - 1) + 2)
if(!any(is.na(starts[idx]))){
starts[idx] <- sort(starts[idx])
}
}
out = nloptr(x0=log(starts), eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy, liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
tmp[,1] = out$objective
tmp[,2:(model.set.final$np+1)] = out$solution
tmp
}
restart.set<-mclapply(1:nstarts, random.restart, mc.cores=n.cores)
#Finds the best fit within the restart.set list
best.fit<-which.min(sapply(restart.set, function(z) z[1,1]))
#Generates an object to store results from restart algorithm:
out<-NULL
out$objective=unlist(restart.set[[best.fit]][,1])
out$solution=unlist(restart.set[[best.fit]][,2:(model.set.final$np+1)])
loglik <- -out$objective
est.pars <- exp(out$solution)
}
}
else{
cat("Beginning subplex optimization routine -- Starting value(s):", ip, "\n")
out = nloptr(x0=rep(log(ip), length.out = model.set.final$np), eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
loglik <- -out$objective
est.pars <- exp(out$solution)
}
}
}
#Starts the summarization process:
if(node.states != "none") {
cat("Finished. Inferring ancestral states using", node.states, "reconstruction.","\n")
}
TIPS <- 1:nb.tip
if (node.states == "marginal" || node.states == "scaled"){
lik.anc <- ancRECON(phy, data, est.pars, hrm=TRUE, rate.cat, rate.mat=rate.mat, method=node.states, ntraits=NULL, root.p=root.p)
pr<-apply(lik.anc$lik.anc.states,1,which.max)
phy$node.label <- pr
tip.states <- lik.anc$lik.tip.states
row.names(tip.states) <- phy$tip.label
}
if (node.states == "joint"){
lik.anc <- ancRECON(phy, data, est.pars, hrm=TRUE, rate.cat, rate.mat=rate.mat, method=node.states, ntraits=NULL,root.p=root.p)
phy$node.label <- lik.anc$lik.anc.states
tip.states <- lik.anc$lik.tip.states
}
if (node.states == "none") {
lik.anc <- list(lik.tip.states=NA, lik.anc.states=NA)
phy$node.label <- NA
tip.states <- NA
}
cat("Finished. Performing diagnostic tests.", "\n")
#Approximates the Hessian using the numDeriv function
if(diagn==TRUE){
h <- hessian(func=dev.corhmm, x=est.pars, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
solution <- matrix(est.pars[model.set.final$index.matrix], dim(model.set.final$index.matrix))
solution.se <- matrix(sqrt(diag(pseudoinverse(h)))[model.set.final$index.matrix], dim(model.set.final$index.matrix))
hess.eig <- eigen(h,symmetric=TRUE)
eigval<-signif(hess.eig$values, 2)
eigvect<-round(hess.eig$vectors, 2)
}
else{
solution <- matrix(est.pars[model.set.final$index.matrix], dim(model.set.final$index.matrix))
solution.se <- matrix(0,dim(solution)[1],dim(solution)[1])
eigval<-NULL
eigvect<-NULL
}
#Builds the state labels once for any rate.cat: "(0)"/"(1)" for a single rate
#class, otherwise "(0,R1)","(1,R1)",...,"(1,Rn)" for n rate classes.
if (rate.cat == 1){
state.names <- c("(0)","(1)")
anc.names <- c("P(0)","P(1)")
}
else{
state.names <- paste0("(", rep(0:1, rate.cat), ",R", rep(1:rate.cat, each=2), ")")
anc.names <- state.names
}
rownames(solution) <- colnames(solution) <- state.names
rownames(solution.se) <- colnames(solution.se) <- state.names
#Initiates user-specified reconstruction method:
if (is.character(node.states)) {
if (node.states == "marginal" || node.states == "scaled"){
colnames(lik.anc$lik.anc.states) <- anc.names
}
}
obj = list(loglik = loglik, AIC = -2*loglik+2*model.set.final$np,AICc = -2*loglik+(2*model.set.final$np*(nb.tip/(nb.tip-model.set.final$np-1))),rate.cat=rate.cat,solution=solution, solution.se=solution.se, index.mat=model.set.final$index.matrix, data=data.sort, phy=phy, states=lik.anc$lik.anc.states, tip.states=tip.states, iterations=out$iterations, eigval=eigval, eigvect=eigvect)
class(obj)<-"corhmm"
return(obj)
}
#Print function
print.corhmm<-function(x,...){
ntips=Ntip(x$phy)
output<-data.frame(x$loglik,x$AIC,x$AICc,x$rate.cat,ntips, row.names="")
names(output)<-c("-lnL","AIC","AICc","Rate.cat","ntax")
cat("\nFit\n")
print(output)
cat("\n")
param.est<- x$solution
cat("Rates\n")
print(param.est)
cat("\n")
#If any eigenvalue is less than 0 then the solution is not the maximum likelihood solution
if(any(x$eigval < 0)){
cat("The objective function may be at a saddle point", "\n")
}
else{
cat("Arrived at a reliable solution","\n")
}
}
#Generalized ace() function that allows analysis to be carried out when there are polytomies:
dev.corhmm <- function(p,phy,liks,Q,rate,root.p) {
p = exp(p)
nb.tip <- length(phy$tip.label)
nb.node <- phy$Nnode
TIPS <- 1:nb.tip
comp <- numeric(nb.tip + nb.node)
phy <- reorder(phy, "pruningwise")
#Obtain an object of all the unique ancestors
anc <- unique(phy$edge[,1])
k.rates <- dim(Q)[2] / 2
if (any(is.nan(p)) || any(is.infinite(p))) return(1000000)
Q[] <- c(p, 0)[rate]
diag(Q) <- -rowSums(Q)
for (i in seq(from = 1, length.out = nb.node)) {
#the ancestral node at row i is called focal
focal <- anc[i]
#Get descendant information of focal
desRows<-which(phy$edge[,1]==focal)
desNodes<-phy$edge[desRows,2]
v <- 1
#Loops through all descendants of focal (how we deal with polytomies):
for (desIndex in sequence(length(desRows))){
v<-v*expm(Q * phy$edge.length[desRows[desIndex]], method=c("Ward77")) %*% liks[desNodes[desIndex],]
}
#Sum the likelihoods:
comp[focal] <- sum(v)
#Divide each likelihood by the sum to obtain probabilities:
liks[focal, ] <- v/comp[focal]
}
#Temporary solution for ensuring an ordered Q with respect to the rate classes. If a simpler model is called this feature is automatically turned off:
if(k.rates >= 2 && k.rates <= 7){
#The ordering constraint generalized across rate classes: the ordered
#parameters sit at positions 3, 9, 15, ..., with the last at 6*(k.rates-1)+2.
#Reject any proposal whose rate-class parameters are out of order; if a
#simpler model is called (indices run past p, giving NAs) this is skipped.
idx <- c(3 + 6*(0:(k.rates - 2)), 6*(k.rates - 1) + 2)
if(isTRUE(any(diff(p[idx]) < 0))){
return(1000000)
}
}
#Specifies the root:
root <- nb.tip + 1L
#If any of the logs have NAs restart search:
if (is.na(sum(log(comp[-TIPS])))){return(1000000)}
else{
equil.root <- NULL
for(i in 1:ncol(Q)){
posrows <- which(Q[,i] >= 0)
rowsum <- sum(Q[posrows,i])
poscols <- which(Q[i,] >= 0)
colsum <- sum(Q[i,poscols])
equil.root <- c(equil.root,rowsum/(rowsum+colsum))
}
if (is.null(root.p)){
#Assumes a flat prior at the root: equal weight on every non-NA state.
#(Renamed from k.rates to avoid shadowing the rate-class count above.)
flat.root = equil.root
flat.prob <- 1/length(which(!is.na(equil.root)))
flat.root[!is.na(flat.root)] = flat.prob
flat.root[is.na(flat.root)] = 0
loglik<- -(sum(log(comp[-TIPS])) + log(sum(flat.root * liks[root,])))
}
else{
if(is.character(root.p)){
# root.p==yang will fix root probabilities based on the inferred rates: q10/(q01+q10)
if(root.p == "yang"){
diag(Q) = 0
equil.root = colSums(Q) / sum(Q)
loglik <- -(sum(log(comp[-TIPS])) + log(sum(exp(log(equil.root)+log(liks[root,])))))
if(is.infinite(loglik)){
return(1000000)
}
}else{
# root.p==maddfitz will fix root probabilities according to FitzJohn et al 2009 Eq. 10:
root.p = liks[root,] / sum(liks[root,])
loglik <- -(sum(log(comp[-TIPS])) + log(sum(exp(log(root.p)+log(liks[root,])))))
}
}
# A non-NULL, non-character root.p fixes the root probabilities to the user-supplied vector:
else{
loglik <- -(sum(log(comp[-TIPS])) + log(sum(exp(log(root.p)+log(liks[root,])))))
if(is.infinite(loglik)){
return(1000000)
}
}
}
}
loglik
}
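#The node loop in dev.corhmm above implements Felsenstein's pruning algorithm:
#each internal node's conditional likelihood is the elementwise product, over
#descendant edges, of expm(Q*t) %*% L(child). A standalone commented sketch of
#one edge's contribution (the two-state Q and edge length below are
#hypothetical, for illustration only):
# library(expm)
# Q <- matrix(c(-0.1, 0.1, 0.2, -0.2), 2, 2, byrow = TRUE)
# L.child <- c(1, 0)                  #tip observed in state 0
# v <- expm(Q * 0.5) %*% L.child      #one descendant edge's contribution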
rate.cat.set.corHMM<-function(phy,data.sort,rate.cat){
k=2
obj <- NULL
nb.tip <- length(phy$tip.label)
nb.node <- phy$Nnode
obj$rate.cat<-rate.cat
rate<-rate.mat.maker(hrm=TRUE,rate.cat=rate.cat)
index.matrix<-rate
rate[is.na(rate)]<-max(rate,na.rm=TRUE)+1
#Makes a matrix of tip states and empty cells corresponding
#to ancestral nodes during the optimization process.
x <- data.sort[,1]
TIPS <- 1:nb.tip
for(i in 1:nb.tip){
if(is.na(x[i])){x[i]=2}
}
#Constructs the tip likelihood matrix for any rate.cat: state 0 fills the odd
#columns (one per rate class), state 1 the even columns, and ambiguous tips
#(coded 2) fill every column.
liks <- matrix(0, nb.tip + nb.node, k*rate.cat)
for(i in 1:nb.tip){
if(x[i]==0){liks[i, seq(1, k*rate.cat, by=2)]=1}
if(x[i]==1){liks[i, seq(2, k*rate.cat, by=2)]=1}
if(x[i]==2){liks[i, 1:(k*rate.cat)]=1}
}
Q <- matrix(0, k*rate.cat, k*rate.cat)
obj$np<-max(rate)-1
obj$rate<-rate
obj$index.matrix<-index.matrix
obj$liks<-liks
obj$Q<-Q
obj
}
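#Example usage (a minimal, hypothetical sketch: the tree and trait data are
#simulated placeholders, shown only to illustrate the expected inputs, where
#column 1 of the data frame holds taxon names matching phy$tip.label and
#column 2 holds the binary state):
# library(ape)
# phy <- rcoal(50)
# dat <- data.frame(taxon = phy$tip.label,
#                   trait = sample(c(0, 1), 50, replace = TRUE))
# fit <- corHMM(phy, dat, rate.cat = 2, node.states = "marginal")
# print(fit)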
#Source: acope3/corHMM, /R/corHMM.R
#HIDDEN RATES MODEL OF BINARY TRAIT EVOLUTION
#written by Jeremy M. Beaulieu
corHMM <- function(phy, data, rate.cat, rate.mat=NULL, node.states=c("joint", "marginal", "scaled", "none"), optim.method=c("subplex"), p=NULL, root.p=NULL, ip=NULL, nstarts=10, n.cores=NULL, sann.its=5000, diagn=FALSE){
# Checks to make sure node.states is not NULL. If it is, just returns a diagnostic message asking for value.
if(is.null(node.states)){
obj <- NULL
obj$loglik <- NULL
obj$diagnostic <- paste("No model for ancestral states selected. Please pass one of the following to corHMM command for parameter \'node.states\': joint, marginal, scaled, or none.")
return(obj)
}
else { # even if node.states is not NULL, need to make sure it's one of the four valid options
valid.models <- c("joint", "marginal", "scaled", "none")
if(!any(valid.models == node.states)){
obj <- NULL
obj$loglik <- NULL
obj$diagnostic <- paste("\'",node.states, "\' is not valid for ancestral state reconstruction method. Please pass one of the following to corHMM command for parameter \'node.states\': joint, marginal, scaled, or none.",sep="")
return(obj)
}
if(length(node.states) > 1){ # User did not enter a value, so just pick marginal.
node.states <- "marginal"
cat("No model selected for \'node.states\'. Will perform marginal ancestral state estimation.\n")
}
}
# Checks to make sure phy & data have same taxa. Fixes conflicts (see match.tree.data function).
matching <- match.tree.data(phy,data)
data <- matching$data
phy <- matching$phy
# Will not perform reconstructions on invariant characters
if(nlevels(as.factor(data[,1])) <= 1){
obj <- NULL
obj$loglik <- NULL
obj$diagnostic <- paste("Character is invariant. Analysis stopped.",sep="")
return(obj)
} else {
# Still need to make sure the second level isn't just an ambiguity code
lvls <- as.factor(data[,1])
if(nlevels(lvls) == 2 && any(lvls == "?")){
obj <- NULL
obj$loglik <- NULL
obj$diagnostic <- paste("Character is invariant. Analysis stopped.",sep="")
return(obj)
}
}
#Creates the data structure and orders the rows to match the tree.
phy$edge.length[phy$edge.length<=1e-5]=1e-5
data.sort <- data.frame(data[,2], data[,2],row.names=data[,1])
data.sort <- data.sort[phy$tip.label,]
counts <- table(data.sort[,1])
levels <- levels(as.factor(data.sort[,1]))
cols <- as.factor(data.sort[,1])
cat("State distribution in data:\n")
cat("States:",levels,"\n",sep="\t")
cat("Counts:",counts,"\n",sep="\t")
#Some initial values for use later
k=2
ub = log(100)
lb = -20
obj <- NULL
nb.tip <- length(phy$tip.label)
nb.node <- phy$Nnode
model.set.final<-rate.cat.set.corHMM(phy=phy,data.sort=data.sort,rate.cat=rate.cat)
if(!is.null(rate.mat)){
rate <- rate.mat
model.set.final$np <- max(rate, na.rm=TRUE)
rate[is.na(rate)]=max(rate, na.rm=TRUE)+1
model.set.final$rate <- rate
model.set.final$index.matrix <- rate.mat
## for precursor type models ##
col.sums <- which(colSums(rate.mat, na.rm=TRUE) == 0)
row.sums <- which(rowSums(rate.mat, na.rm=TRUE) == 0)
drop.states <- col.sums[which(col.sums == row.sums)]
if(length(drop.states) > 0){
model.set.final$liks[,drop.states] <- 0
}
###############################
}
lower = rep(lb, model.set.final$np)
upper = rep(ub, model.set.final$np)
if(optim.method=="twoStep"){
if(!is.null(p)){
cat("Calculating likelihood from a set of fixed parameters", "\n")
out<-NULL
est.pars<-log(p)
out$objective <- dev.corhmm(est.pars,phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
est.pars <- exp(est.pars)
}
else{
cat("Initializing...", "\n")
model.set.init<-rate.cat.set.corHMM(phy=phy,data.sort=data.sort,rate.cat=1)
rate<-rate.mat.maker(hrm=TRUE,rate.cat=1)
rate<-rate.par.eq(rate,eq.par=c(1,2))
model.set.init$index.matrix<-rate
rate[is.na(rate)]<-max(rate,na.rm=TRUE)+1
model.set.init$rate<-rate
dat<-as.matrix(data.sort)
dat<-phyDat(dat,type="USER", levels=c("0","1"))
par.score<-parsimony(phy, dat, method="fitch")
tl <- sum(phy$edge.length)
mean.change = par.score/tl
if(mean.change==0){
ip=exp(lb)+0.01
}else{
ip<-rexp(1, 1/mean.change)
}
if(ip < exp(lb) || ip > exp(ub)){ # initial parameter value is outside bounds
ip <- exp(lb)
}
lower = rep(lb, model.set.final$np)
upper = rep(ub, model.set.final$np)
opts <- list("algorithm"="NLOPT_LN_SBPLX", "maxeval"="1000000", "ftol_rel"=.Machine$double.eps^0.5)
cat("Finished. Beginning simulated annealing Round 1...", "\n")
out.sann <- GenSA(rep(log(ip), model.set.final$np), fn=dev.corhmm, lower=lower, upper=upper, control=list(max.call=sann.its), phy=phy,liks=model.set.final$liks, Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
cat("Finished. Refining using subplex routine...", "\n")
#out = subplex(out.sann$par, fn=dev.corhmm, control=list(.Machine$double.eps^0.25, parscale=rep(0.1, length(out.sann$par))), phy=phy,liks=model.set.init$liks,Q=model.set.init$Q,rate=model.set.init$rate,root.p=root.p)
out = nloptr(x0=out.sann$par, eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
cat("Finished. Beginning simulated annealing Round 2...", "\n")
out.sann <- GenSA(out$solution, fn=dev.corhmm, lower=lower, upper=upper, control=list(max.call=sann.its), phy=phy,liks=model.set.final$liks, Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
cat("Finished. Refining using subplex routine...", "\n")
#out = subplex(out.sann$par, fn=dev.corhmm, control=list(.Machine$double.eps^0.25, parscale=rep(0.1, length(out.sann$par))), phy=phy,liks=model.set.init$liks,Q=model.set.init$Q,rate=model.set.init$rate,root.p=root.p)
out = nloptr(x0=out.sann$par, eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
cat("Finished. Beginning simulated annealing Round 3...", "\n")
out.sann <- GenSA(out$solution, fn=dev.corhmm, lower=lower, upper=upper, control=list(max.call=sann.its), phy=phy,liks=model.set.final$liks, Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
cat("Finished. Refining using subplex routine...", "\n")
#out = subplex(out.sann$par, fn=dev.corhmm, control=list(.Machine$double.eps^0.25, parscale=rep(0.1, length(out.sann$par))), phy=phy,liks=model.set.init$liks,Q=model.set.init$Q,rate=model.set.init$rate,root.p=root.p)
out = nloptr(x0=out.sann$par, eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
loglik <- -out$objective
est.pars <- exp(out$solution)
}
}
if(optim.method=="subplex"){
opts <- list("algorithm"="NLOPT_LN_SBPLX", "maxeval"="1000000", "ftol_rel"=.Machine$double.eps^0.5)
if(!is.null(p)){
cat("Calculating likelihood from a set of fixed parameters", "\n")
out<-NULL
est.pars<-log(p)
out$objective<-dev.corhmm(est.pars,phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
loglik <- -out$objective
est.pars <- exp(est.pars)
}
#If a user-specified starting value(s) is not supplied this begins loop through a set of randomly chosen starting values:
else{
#If a user-specified starting value(s) is supplied:
if(is.null(ip)){
if(is.null(n.cores)){
cat("Beginning thorough optimization search -- performing", nstarts, "random restarts", "\n")
#If the analysis is to be run on a single processor:
if(is.null(n.cores)){
#Sets parameter settings for random restarts by taking the parsimony score and dividing
#by the total length of the tree
dat<-as.matrix(data.sort)
dat<-phyDat(dat,type="USER", levels=c("0","1"))
par.score<-parsimony(phy, dat, method="fitch")/2
tl <- sum(phy$edge.length)
mean.change = par.score/tl
if(mean.change==0){
ip=0.01+exp(lb)
}else{
ip <- rexp(model.set.final$np, 1/mean.change)
}
ip[ip < exp(lb)] = exp(lb)
ip[ip > exp(ub)] = exp(ub)
out = nloptr(x0=rep(log(ip), length.out = model.set.final$np), eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
tmp = matrix(,1,ncol=(1+model.set.final$np))
tmp[,1] = out$objective
tmp[,2:(model.set.final$np+1)] = out$solution
for(i in 2:nstarts){
#Temporary solution for ensuring an ordered Q with respect to the rate classes. If a simpler model is called this feature is automatically turned off:
if(mean.change==0){
starts=rep(0.01+exp(lb), model.set.final$np)
}else{
starts<-rexp(model.set.final$np, 1/mean.change)
}
starts[starts < exp(lb)] = exp(lb)
starts[starts > exp(ub)] = exp(ub)
if(rate.cat >= 2 && rate.cat <= 7){
#Temporary ordering constraint, generalized across rate.cat: the ordered
#parameters sit at positions 3, 9, 15, ..., with the last one at 6*(rate.cat-1)+2.
#If a simpler model is called (indices run past the parameter vector) this is skipped.
idx <- c(3 + 6*(0:(rate.cat - 2)), 6*(rate.cat - 1) + 2)
if(!any(is.na(starts[idx]))){
starts[idx] <- sort(starts[idx])
}
}
out.alt = nloptr(x0=rep(log(starts), length.out = model.set.final$np), eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
tmp[,1] = out.alt$objective
tmp[,2:(model.set.final$np+1)] = starts
if(out.alt$objective < out$objective){
out = out.alt
ip = starts
}
}
loglik <- -out$objective
est.pars <- exp(out$solution)
}
}
#If the analysis is to be run on multiple processors:
else{
#Sets parameter settings for random restarts by taking the parsimony score and dividing
#by the total length of the tree
cat("Beginning thorough optimization search -- performing", nstarts, "random restarts", "\n")
dat<-as.matrix(data.sort)
dat<-phyDat(dat,type="USER", levels=c("0","1"))
par.score<-parsimony(phy, dat, method="fitch")/2
tl <- sum(phy$edge.length)
mean.change = par.score/tl
random.restart<-function(nstarts){
tmp = matrix(,1,ncol=(1+model.set.final$np))
#Temporary solution for ensuring an ordered Q with respect to the rate classes. If a simpler model is called this feature is automatically turned off:
if(mean.change==0){
starts=rep(0.01+exp(lb), model.set.final$np)
}else{
starts<-rexp(model.set.final$np, 1/mean.change)
}
starts[starts < exp(lb)] = exp(lb)
starts[starts > exp(ub)] = exp(ub)
if(rate.cat >= 2 && rate.cat <= 7){
#Temporary ordering constraint, generalized across rate.cat: the ordered
#parameters sit at positions 3, 9, 15, ..., with the last one at 6*(rate.cat-1)+2.
#If a simpler model is called (indices run past the parameter vector) this is skipped.
idx <- c(3 + 6*(0:(rate.cat - 2)), 6*(rate.cat - 1) + 2)
if(!any(is.na(starts[idx]))){
starts[idx] <- sort(starts[idx])
}
}
out = nloptr(x0=log(starts), eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy, liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
tmp[,1] = out$objective
tmp[,2:(model.set.final$np+1)] = out$solution
tmp
}
restart.set<-mclapply(1:nstarts, random.restart, mc.cores=n.cores)
#Finds the best fit within the restart.set list
best.fit<-which.min(sapply(restart.set, function(z) z[1,1]))
#Generates an object to store results from restart algorithm:
out<-NULL
out$objective=unlist(restart.set[[best.fit]][,1])
out$solution=unlist(restart.set[[best.fit]][,2:(model.set.final$np+1)])
loglik <- -out$objective
est.pars <- exp(out$solution)
}
}
else{
cat("Beginning subplex optimization routine -- Starting value(s):", ip, "\n")
out = nloptr(x0=rep(log(ip), length.out = model.set.final$np), eval_f=dev.corhmm, lb=lower, ub=upper, opts=opts, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
loglik <- -out$objective
est.pars <- exp(out$solution)
}
}
}
#Starts the summarization process:
if(node.states != "none") {
cat("Finished. Inferring ancestral states using", node.states, "reconstruction.","\n")
}
TIPS <- 1:nb.tip
if (node.states == "marginal" || node.states == "scaled"){
lik.anc <- ancRECON(phy, data, est.pars, hrm=TRUE, rate.cat, rate.mat=rate.mat, method=node.states, ntraits=NULL, root.p=root.p)
pr<-apply(lik.anc$lik.anc.states,1,which.max)
phy$node.label <- pr
tip.states <- lik.anc$lik.tip.states
row.names(tip.states) <- phy$tip.label
}
if (node.states == "joint"){
lik.anc <- ancRECON(phy, data, est.pars, hrm=TRUE, rate.cat, rate.mat=rate.mat, method=node.states, ntraits=NULL,root.p=root.p)
phy$node.label <- lik.anc$lik.anc.states
tip.states <- lik.anc$lik.tip.states
}
if (node.states == "none") {
lik.anc <- list(lik.tip.states=NA, lik.anc.states=NA)
phy$node.label <- NA
tip.states <- NA
}
cat("Finished. Performing diagnostic tests.", "\n")
#Approximates the Hessian using the numDeriv function
if(diagn==TRUE){
h <- hessian(func=dev.corhmm, x=est.pars, phy=phy,liks=model.set.final$liks,Q=model.set.final$Q,rate=model.set.final$rate,root.p=root.p)
solution <- matrix(est.pars[model.set.final$index.matrix], dim(model.set.final$index.matrix))
solution.se <- matrix(sqrt(diag(pseudoinverse(h)))[model.set.final$index.matrix], dim(model.set.final$index.matrix))
hess.eig <- eigen(h,symmetric=TRUE)
eigval<-signif(hess.eig$values, 2)
eigvect<-round(hess.eig$vectors, 2)
}
else{
solution <- matrix(est.pars[model.set.final$index.matrix], dim(model.set.final$index.matrix))
solution.se <- matrix(0,dim(solution)[1],dim(solution)[1])
eigval<-NULL
eigvect<-NULL
}
if (rate.cat == 1){
rownames(solution) <- rownames(solution.se) <- c("(0)","(1)")
colnames(solution) <- colnames(solution.se) <- c("(0)","(1)")
#Initiates user-specified reconstruction method:
if (is.character(node.states)) {
if (node.states == "marginal" || node.states == "scaled"){
colnames(lik.anc$lik.anc.states) <- c("P(0)","P(1)")
}
}
}
if (rate.cat == 2){
rownames(solution) <- rownames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)")
colnames(solution) <- colnames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)")
if (is.character(node.states)) {
if (node.states == "marginal" || node.states == "scaled"){
colnames(lik.anc$lik.anc.states) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)")
}
}
}
if (rate.cat == 3){
rownames(solution) <- rownames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)")
colnames(solution) <- colnames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)")
if (is.character(node.states)) {
if (node.states == "marginal" || node.states == "scaled"){
colnames(lik.anc$lik.anc.states) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)")
}
}
}
if (rate.cat == 4){
rownames(solution) <- rownames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)")
colnames(solution) <- colnames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)")
if (is.character(node.states)) {
if (node.states == "marginal" || node.states == "scaled"){
colnames(lik.anc$lik.anc.states) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)")
}
}
}
if (rate.cat == 5){
rownames(solution) <- rownames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)","(0,R5)","(1,R5)")
colnames(solution) <- colnames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)","(0,R5)","(1,R5)")
if (is.character(node.states)) {
if (node.states == "marginal" || node.states == "scaled"){
colnames(lik.anc$lik.anc.states) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)","(0,R5)","(1,R5)")
}
}
}
if (rate.cat == 6){
rownames(solution) <- rownames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)","(0,R5)","(1,R5)","(0,R6)","(1,R6)")
colnames(solution) <- colnames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)","(0,R5)","(1,R5)","(0,R6)","(1,R6)")
if (is.character(node.states)) {
if (node.states == "marginal" || node.states == "scaled"){
colnames(lik.anc$lik.anc.states) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)","(0,R5)","(1,R5)","(0,R6)","(1,R6)")
}
}
}
if (rate.cat == 7){
rownames(solution) <- rownames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)","(0,R5)","(1,R5)","(0,R6)","(1,R6)","(0,R7)","(1,R7)")
colnames(solution) <- colnames(solution.se) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)","(0,R5)","(1,R5)","(0,R6)","(1,R6)","(0,R7)","(1,R7)")
if (is.character(node.states)) {
if (node.states == "marginal" || node.states == "scaled"){
colnames(lik.anc$lik.anc.states) <- c("(0,R1)","(1,R1)","(0,R2)","(1,R2)","(0,R3)","(1,R3)","(0,R4)","(1,R4)","(0,R5)","(1,R5)","(0,R6)","(1,R6)","(0,R7)","(1,R7)")
}
}
}
obj = list(loglik = loglik, AIC = -2*loglik+2*model.set.final$np,AICc = -2*loglik+(2*model.set.final$np*(nb.tip/(nb.tip-model.set.final$np-1))),rate.cat=rate.cat,solution=solution, solution.se=solution.se, index.mat=model.set.final$index.matrix, data=data.sort, phy=phy, states=lik.anc$lik.anc.states, tip.states=tip.states, iterations=out$iterations, eigval=eigval, eigvect=eigvect)
class(obj)<-"corhmm"
return(obj)
}
#Print function
print.corhmm<-function(x,...){
ntips=Ntip(x$phy)
output<-data.frame(x$loglik,x$AIC,x$AICc,x$rate.cat,ntips, row.names="")
names(output)<-c("-lnL","AIC","AICc","Rate.cat","ntax")
cat("\nFit\n")
print(output)
cat("\n")
param.est<- x$solution
cat("Rates\n")
print(param.est)
cat("\n")
#If any eigenvalue is less than 0 then the solution is not the maximum likelihood solution
if(any(x$eigval<0)){
cat("The objective function may be at a saddle point", "\n")
}
else{
cat("Arrived at a reliable solution","\n")
}
}
#Generalized ace() function that allows analysis to be carried out when there are polytomies:
dev.corhmm <- function(p,phy,liks,Q,rate,root.p) {
p = exp(p)
nb.tip <- length(phy$tip.label)
nb.node <- phy$Nnode
TIPS <- 1:nb.tip
comp <- numeric(nb.tip + nb.node)
phy <- reorder(phy, "pruningwise")
#Obtain an object of all the unique ancestors
anc <- unique(phy$edge[,1])
k.rates <- dim(Q)[2] / 2
if (any(is.nan(p)) || any(is.infinite(p))) return(1000000)
Q[] <- c(p, 0)[rate]
diag(Q) <- -rowSums(Q)
for (i in seq(from = 1, length.out = nb.node)) {
#the ancestral node at row i is called focal
focal <- anc[i]
#Get descendant information of focal
desRows<-which(phy$edge[,1]==focal)
desNodes<-phy$edge[desRows,2]
v <- 1
#Loops through all descendants of focal (how we deal with polytomies):
for (desIndex in sequence(length(desRows))){
v<-v*expm(Q * phy$edge.length[desRows[desIndex]], method=c("Ward77")) %*% liks[desNodes[desIndex],]
}
#Sum the likelihoods:
comp[focal] <- sum(v)
#Divide each likelihood by the sum to obtain probabilities:
liks[focal, ] <- v/comp[focal]
}
#Temporary solution for ensuring an ordered Q with respect to the rate classes. If a simpler model is called this feature is automatically turned off:
par.order<-NA
if(k.rates == 2){
try(par.order <- p[3] > p[8])
if(!is.na(par.order)){
if(par.order == TRUE){
return(1000000)
}
}
}
if(k.rates == 3){
try(par.order <- p[3] > p[9] | p[9] > p[14])
if(!is.na(par.order)){
if(par.order == TRUE){
return(1000000)
}
}
}
if(k.rates == 4){
try(par.order <- p[3] > p[9] | p[9] > p[15] | p[15] > p[20])
if(!is.na(par.order)){
if(par.order == TRUE){
return(1000000)
}
}
}
if(k.rates == 5){
try(par.order <- p[3] > p[9] | p[9] > p[15] | p[15] > p[21] | p[21] > p[26])
if(!is.na(par.order)){
if(par.order == TRUE){
return(1000000)
}
}
}
if(k.rates == 6){
try(par.order <- p[3] > p[9] | p[9] > p[15] | p[15] > p[21] | p[21] > p[27] | p[27] > p[32])
if(!is.na(par.order)){
if(par.order == TRUE){
return(1000000)
}
}
}
if(k.rates == 7){
try(par.order <- p[3] > p[9] | p[9] > p[15] | p[15] > p[21] | p[21] > p[27] | p[27] > p[33] | p[33] > p[38])
if(!is.na(par.order)){
if(par.order == TRUE){
return(1000000)
}
}
}
#Specifies the root:
root <- nb.tip + 1L
#If any of the logs have NAs restart search:
if (is.na(sum(log(comp[-TIPS])))){return(1000000)}
else{
equil.root <- NULL
for(i in 1:ncol(Q)){
posrows <- which(Q[,i] >= 0)
rowsum <- sum(Q[posrows,i])
poscols <- which(Q[i,] >= 0)
colsum <- sum(Q[i,poscols])
equil.root <- c(equil.root,rowsum/(rowsum+colsum))
}
if (is.null(root.p)){
flat.root = equil.root
#Uniform prior over the states with defined equilibrium values:
p.flat <- 1/length(which(!is.na(equil.root)))
flat.root[!is.na(flat.root)] = p.flat
flat.root[is.na(flat.root)] = 0
loglik<- -(sum(log(comp[-TIPS])) + log(sum(flat.root * liks[root,])))
}
else{
if(is.character(root.p)){
# root.p==yang will fix root probabilities based on the inferred rates: q10/(q01+q10)
if(root.p == "yang"){
diag(Q) = 0
equil.root = colSums(Q) / sum(Q)
loglik <- -(sum(log(comp[-TIPS])) + log(sum(exp(log(equil.root)+log(liks[root,])))))
if(is.infinite(loglik)){
return(1000000)
}
}else{
# root.p==maddfitz will fix root probabilities according to FitzJohn et al 2009 Eq. 10:
root.p = liks[root,] / sum(liks[root,])
loglik <- -(sum(log(comp[-TIPS])) + log(sum(exp(log(root.p)+log(liks[root,])))))
}
}
# root.p!==NULL will fix root probabilities based on user supplied vector:
else{
loglik <- -(sum(log(comp[-TIPS])) + log(sum(exp(log(root.p)+log(liks[root,])))))
if(is.infinite(loglik)){
return(1000000)
}
}
}
}
loglik
}
rate.cat.set.corHMM<-function(phy,data.sort,rate.cat){
k=2
obj <- NULL
nb.tip <- length(phy$tip.label)
nb.node <- phy$Nnode
obj$rate.cat<-rate.cat
rate<-rate.mat.maker(hrm=TRUE,rate.cat=rate.cat)
index.matrix<-rate
rate[is.na(rate)]<-max(rate,na.rm=TRUE)+1
#Makes a matrix of tip states and empty cells corresponding
#to ancestral nodes during the optimization process.
x <- data.sort[,1]
TIPS <- 1:nb.tip
for(i in 1:nb.tip){
if(is.na(x[i])){x[i]=2}
}
if (rate.cat == 1){
liks <- matrix(0, nb.tip + nb.node, k*rate.cat)
TIPS <- 1:nb.tip
for(i in 1:nb.tip){
if(x[i]==0){liks[i,1]=1}
if(x[i]==1){liks[i,2]=1}
if(x[i]==2){liks[i,1:2]=1}
}
Q <- matrix(0, k*rate.cat, k*rate.cat)
}
if (rate.cat == 2){
liks <- matrix(0, nb.tip + nb.node, k*rate.cat)
for(i in 1:nb.tip){
if(x[i]==0){liks[i,c(1,3)]=1}
if(x[i]==1){liks[i,c(2,4)]=1}
if(x[i]==2){liks[i,1:4]=1}
}
Q <- matrix(0, k*rate.cat, k*rate.cat)
}
if (rate.cat == 3){
liks <- matrix(0, nb.tip + nb.node, k*rate.cat)
for(i in 1:nb.tip){
if(x[i]==0){liks[i,c(1,3,5)]=1}
if(x[i]==1){liks[i,c(2,4,6)]=1}
if(x[i]==2){liks[i,1:6]=1}
}
Q <- matrix(0, k*rate.cat, k*rate.cat)
}
if (rate.cat == 4){
liks <- matrix(0, nb.tip + nb.node, k*rate.cat)
for(i in 1:nb.tip){
if(x[i]==0){liks[i,c(1,3,5,7)]=1}
if(x[i]==1){liks[i,c(2,4,6,8)]=1}
if(x[i]==2){liks[i,1:8]=1}
}
Q <- matrix(0, k*rate.cat, k*rate.cat)
}
if (rate.cat == 5){
liks <- matrix(0, nb.tip + nb.node, k*rate.cat)
for(i in 1:nb.tip){
if(x[i]==0){liks[i,c(1,3,5,7,9)]=1}
if(x[i]==1){liks[i,c(2,4,6,8,10)]=1}
if(x[i]==2){liks[i,1:10]=1}
}
Q <- matrix(0, k*rate.cat, k*rate.cat)
}
if (rate.cat == 6){
liks <- matrix(0, nb.tip + nb.node, k*rate.cat)
for(i in 1:nb.tip){
if(x[i]==0){liks[i,c(1,3,5,7,9,11)]=1}
if(x[i]==1){liks[i,c(2,4,6,8,10,12)]=1}
if(x[i]==2){liks[i,1:12]=1}
}
Q <- matrix(0, k*rate.cat, k*rate.cat)
}
if (rate.cat == 7){
liks <- matrix(0, nb.tip + nb.node, k*rate.cat)
for(i in 1:nb.tip){
if(x[i]==0){liks[i,c(1,3,5,7,9,11,13)]=1}
if(x[i]==1){liks[i,c(2,4,6,8,10,12,14)]=1}
if(x[i]==2){liks[i,1:14]=1}
}
Q <- matrix(0, k*rate.cat, k*rate.cat)
}
obj$np<-max(rate)-1
obj$rate<-rate
obj$index.matrix<-index.matrix
obj$liks<-liks
obj$Q<-Q
obj
}
library(elliptic)
### Name: myintegrate
### Title: Complex integration
### Aliases: myintegrate integrate.contour integrate.segments residue
### Keywords: math
### ** Examples
f1 <- function(z){sin(exp(z))}
f2 <- function(z,p){p/z}
myintegrate(f1,2,3) # that is, along the real axis
integrate.segments(f1,c(1,1i,-1,-1i),close=TRUE) # should be zero
# (following should be pi*2i; note secondary argument):
integrate.segments(f2,points=c(1,1i,-1,-1i),close=TRUE,p=1)
# To integrate round the unit circle, we need the contour and its
# derivative:
u <- function(x){exp(pi*2i*x)}
udash <- function(x){pi*2i*exp(pi*2i*x)}
# Some elementary functions, for practice:
# (following should be 2i*pi; note secondary argument 'p'):
integrate.contour(function(z,p){p/z},u,udash,p=1)
integrate.contour(function(z){log(z)},u,udash) # should be -2i*pi
integrate.contour(function(z){sin(z)+1/z^2},u,udash) # should be zero
# residue() is a convenience wrapper integrating f(z)/(z-z0) along a
# circular contour:
residue(function(z){1/z},2,r=0.1) # should be 1/2=0.5
# Now, some elliptic functions:
g <- c(3,2+4i)
Zeta <- function(z){zeta(z,g)}
Sigma <- function(z){sigma(z,g)}
WeierstrassP <- function(z){P(z,g)}
jj <- integrate.contour(Zeta,u,udash)
abs(jj-2*pi*1i) # should be zero
abs(integrate.contour(Sigma,u,udash)) # should be zero
abs(integrate.contour(WeierstrassP,u,udash)) # should be zero
# Now integrate f(x) = exp(1i*x)/(1+x^2) from -Inf to +Inf along the
# real axis, using the Residue Theorem. This tells us that integral of
# f(z) along any closed path is equal to pi*2i times the sum of the
# residues inside it. Take a semicircular path P from -R to +R along
# the real axis, then following a semicircle in the upper half plane, of
# radius R to close the loop. Now consider large R. Then P encloses a
# pole at +1i [there is one at -1i also, but this is outside P, so
# irrelevant here] at which the residue is -1i/2e. Thus the integral of
# f(z) = 2i*pi*(-1i/2e) = pi/e along P; the contribution from the
# semicircle tends to zero as R tends to infinity; thus the integral
# along the real axis is the whole path integral, or pi/e.
# We can now reproduce this result numerically. First, choose an R:
R <- 400
# now define P. First, the semicircle, u1:
u1 <- function(x){R*exp(pi*1i*x)}
u1dash <- function(x){R*pi*1i*exp(pi*1i*x)}
# and now the straight part along the real axis, u2:
u2 <- function(x){R*(2*x-1)}
u2dash <- function(x){R*2}
# Better define the function:
f <- function(z){exp(1i*z)/(1+z^2)}
# OK, now carry out the path integral. I'll do it explicitly, but note
# that the contribution from the first integral should be small:
answer.approximate <-
integrate.contour(f,u1,u1dash) +
integrate.contour(f,u2,u2dash)
# And compare with the analytical value:
answer.exact <- pi/exp(1)
abs(answer.approximate - answer.exact)
# Now try the same thing but integrating over a triangle, using
# integrate.segments(). Use a path P' with base from -R to +R along the
# real axis, closed by two straight segments, one from +R to 1i*R, the
# other from 1i*R to -R:
abs(integrate.segments(f,c(-R,R,1i*R))- answer.exact)
# Observe how much better one can do by integrating over a big square
# instead:
abs(integrate.segments(f,c(-R,R,R+1i*R, -R+1i*R))- answer.exact)
# Now in the interests of search engine findability, here is an
# application of Cauchy's integral formula, or Cauchy's formula. I will
# use it to find sin(0.8):
u <- function(x){exp(pi*2i*x)}
udash <- function(x){pi*2i*exp(pi*2i*x)}
g <- function(z){sin(z)/(z-0.8)}
a <- 1/(2i*pi)*integrate.contour(g,u,udash)
abs(a-sin(0.8))
## Source: /data/genthat_extracted_code/elliptic/examples/myintegrate.Rd.R
## Repo: surayaaramli/typeRrh (R, no license, 3,741 bytes)
## here we go. some house cleaning first
library(dplyr)
setwd("~/learning/coursera-datascience/course3/project")
features = read.table("features.txt")
activity_labels = read.table("activity_labels.txt")
## now read test data
setwd("~/learning/coursera-datascience/course3/project/test")
subject_test = read.table("subject_test.txt")
x_test = read.table("X_test.txt")
y_test = read.table("y_test.txt")
## now read training data
setwd("~/learning/coursera-datascience/course3/project/train")
subject_train = read.table("subject_train.txt")
x_train = read.table("X_train.txt")
y_train = read.table("y_train.txt")
##now convert to data frames to make life easier
xtestdf <- as.data.frame(x_test)
ytestdf <- as.data.frame(y_test)
xtraindf <- as.data.frame(x_train)
ytraindf <- as.data.frame(y_train)
featuresdf <- as.data.frame(features)
labelsdf <- as.data.frame(activity_labels)
## now merge
xcompdf <- rbind(xtestdf, xtraindf)
ycompdf <- rbind(ytestdf, ytraindf)
df <- xcompdf
colnames(df) <- featuresdf$V2
df$y <- ycompdf$V1
##add subject data
subtestdf <- as.data.frame(subject_test)
subtraindf <- as.data.frame(subject_train)
subcompdf <- rbind(subtestdf, subtraindf)
df$Subject <- subcompdf$V1
## now get only mean and std dev data
mergeddf = merge(df,labelsdf, by.x="y", by.y = "V1")
mergedYcomp = merge(ycompdf,labelsdf, by.x="V1", by.y = "V1")
df$Labels <- mergedYcomp$V2
filterdf <- df[ , grepl( "mean|Mean|std|Std|Labels|Subject" , names( df ) ) ]
## now rearrange
ordered_df <- arrange(filterdf,Labels,Subject)
## now split by activity
od1 <- ordered_df[grep("LAYING",ordered_df$Labels),]
od2 <- ordered_df[grep("WALKING",ordered_df$Labels),]
od3 <- ordered_df[grep("WALKING_UPSTAIRS",ordered_df$Labels),]
od4 <- ordered_df[grep("WALKING_DOWNSTAIRS",ordered_df$Labels),]
od5 <- ordered_df[grep("SITTING",ordered_df$Labels),]
od6 <- ordered_df[grep("STANDING",ordered_df$Labels),]
## and split by subject
os1 <- split(od1,od1$Subject)
os2 <- split(od2,od2$Subject)
os3 <- split(od3,od3$Subject)
os4 <- split(od4,od4$Subject)
os5 <- split(od5,od5$Subject)
os6 <- split(od6,od6$Subject)
## now find the averages
tidy1 <- lapply(os1, function(x) colMeans(x[1:86]))
tidy2 <- lapply(os2, function(x) colMeans(x[1:86]))
tidy3 <- lapply(os3, function(x) colMeans(x[1:86]))
tidy4 <- lapply(os4, function(x) colMeans(x[1:86]))
tidy5 <- lapply(os5, function(x) colMeans(x[1:86]))
tidy6 <- lapply(os6, function(x) colMeans(x[1:86]))
##now recombine
joined <- c(tidy1, tidy2, tidy3, tidy4, tidy5, tidy6)
dfinal <- do.call("rbind", joined)
## remove duplicates
dfinalvals <- unique(dfinal)
##and save to disk
write.table(dfinalvals,"finalMeansByLabelAndSubject.txt",row.names = FALSE)
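## The split/lapply/colMeans steps above can be collapsed into one grouped
## summary. A sketch of an equivalent computation, left commented so the script's
## behavior is unchanged (assumes filterdf as built above and dplyr >= 1.0 for
## across()/where()):
## tidy_means <- filterdf %>%
##   group_by(Labels, Subject) %>%
##   summarise(across(where(is.numeric), mean), .groups = "drop")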
## Source: /run_analysis.R
## Repo: jemsbhai/datasciencecourse3project (R, no license, 2,740 bytes)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/grabMzxmlFunctions.R
\name{grabMzxmlBPC}
\alias{grabMzxmlBPC}
\title{Grab the BPC or TIC from a file}
\usage{
grabMzxmlBPC(xml_data, TIC = FALSE)
}
\arguments{
\item{xml_data}{An `xml2` nodeset, usually created by applying `read_xml` to
an mzXML file.}
\item{TIC}{Boolean. If TRUE, the TIC is extracted rather than the BPC.}
}
\value{
A `data.table` with columns for retention time (rt), and intensity (int).
}
\description{
The base peak intensity and total ion current are actually written into the
mzXML files and aren't encoded, making retrieval of BPC and TIC information
blazingly fast if parsed correctly.
}
% Source: /man/grabMzxmlBPC.Rd
% Repo: R-Lionheart/RaMS (Rd, permissive license, 693 bytes)
#' Extract class of variable
#'
#' Some packages append non-base classes to data frame columns, e.g.
#' if data is labeled with the `Hmisc` package the class of a string will
#' be `c("labelled", "character")` rather than `c("character")` only. This
#' simple function extracts the base R class.
#'
#' @param data data frame
#' @param variable string vector of column names from data
#' @keywords internal
#' @noRd
#' @author Daniel D. Sjoberg
assign_class <- function(data, variable, classes_expected) {
# extracting the base R class
classes_return <-
map(variable, ~class(data[[.x]]) %>% intersect(classes_expected))
classes_return
}
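# Minimal illustration of assign_class() on hypothetical data, left commented
# in the style of the examples below (assumes the package imports -- purrr's
# map() and magrittr's %>% -- are available):
# df <- data.frame(x = letters[1:3], stringsAsFactors = FALSE)
# class(df$x) <- c("labelled", "character")
# assign_class(df, "x", classes_expected = c("character", "numeric", "factor"))
# returns list("character"): intersect() keeps only the expected base class,
# dropping the non-base "labelled" class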
#' For dichotomous data, returns that value that will be printed in table.
#'
#' @param data data frame
#' @param variable character variable name in \code{data} that will be tabulated
#' @param summary_type the type of summary statistics that will be calculated
#' @param class class of \code{variable}
#' @return value that will be printed in table for dichotomous data
#' @keywords internal
#' @noRd
#' @author Daniel D. Sjoberg
# wrapper for assign_dichotomous_value_one() function
assign_dichotomous_value <- function(data, variable, summary_type, class, value) {
pmap(
list(variable, summary_type, class),
~ assign_dichotomous_value_one(data, ..1, ..2, ..3, value)
)
}
assign_dichotomous_value_one <- function(data, variable, summary_type, class, value) {
# only assign value for dichotomous data
if (summary_type != "dichotomous") {
return(NULL)
}
# removing all NA values
var_vector <- data[[variable]][!is.na(data[[variable]])]
# if 'value' provided, then dichotomous_value is the provided one
if (!is.null(value[[variable]])) {
return(value[[variable]])
}
# if class is logical, then value will be TRUE
if (class == "logical") {
return(TRUE)
}
# if column provided is a factor with "Yes" and "No" (or "yes" and "no") then
# the value is "Yes" (or "yes")
if (class %in% c("factor", "character")) {
if (setdiff(var_vector, c("Yes", "No")) %>% length() == 0) {
return("Yes")
}
if (setdiff(var_vector, c("yes", "no")) %>% length() == 0) {
return("yes")
}
if (setdiff(var_vector, c("YES", "NO")) %>% length() == 0) {
return("YES")
}
}
# if column provided is all zeros and ones (or exclusively either one), then the value is one
if (setdiff(var_vector, c(0, 1)) %>% length() == 0) {
return(1)
}
# otherwise, the value must be passed from the values argument to tbl_summary
stop(glue(
"'{variable}' is dichotomous, but I was unable to determine the ",
"level to display. Use the 'value = list({variable} = <level>)' argument ",
"to specify level."
), call. = FALSE)
}
# assign_dichotomous_value_one(mtcars, "am", "dichotomous", "double", NULL)
#' Assign type of summary statistic
#'
#' Function that assigns default statistics to display, or if specified,
#' assigns the user-defined statistics for display.
#'
#' @param variable Vector of variable names
#' @param summary_type A list that includes specified summary types
#' @param stat_display List with up to two named elements. Names must be
#' continuous or categorical. Can be \code{NULL}.
#' @return vector of stat_display selections for each variable
#' @keywords internal
#' @noRd
#' @author Daniel D. Sjoberg
assign_stat_display <- function(variable, summary_type, stat_display) {
# dichotomous and categorical are treated in the same fashion here
summary_type <- ifelse(summary_type == "dichotomous", "categorical", summary_type)
# otherwise, return defaults
return(
map2_chr(
variable, summary_type,
~ case_when(
.y == "continuous" ~
stat_display[[.x]] %||%
get_theme_element("tbl_summary-str:continuous_stat") %||%
"{median} ({p25}, {p75})",
.y %in% c("categorical", "dichotomous") ~
stat_display[[.x]] %||%
get_theme_element("tbl_summary-str:categorical_stat") %||%
"{n} ({p}%)"
)
)
)
}
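# The %||% chains above resolve left to right: a user-supplied string wins,
# then a theme element, then the package default. A minimal commented sketch of
# the operator (as defined in rlang, which the package imports):
# `%||%` <- function(x, y) if (is.null(x)) y else x
# NULL %||% "{n} ({p}%)"             # -> "{n} ({p}%)" (default wins)
# "{mean} ({sd})" %||% "{n} ({p}%)"  # -> "{mean} ({sd})" (user string wins)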
#' Assigns summary type (e.g. continuous, categorical, or dichotomous).
#'
#' For variables where the summary type was not specified in the function
#' call of `tbl_summary`, `assign_summary_type` assigns a type based on class and
#' number of unique levels.
#'
#' @param data Data frame.
#' @param variable Vector of column name.
#' @param class Vector of classes (e.g. numeric, character, etc.)
#' corresponding one-to-one with the names in `variable`.
#' @param summary_type list that includes specified summary types,
#' e.g. \code{summary_type = list(age = "continuous")}
#' @return Vector summary types `c("continuous", "categorical", "dichotomous")`.
#' @keywords internal
#' @noRd
#' @author Daniel D. Sjoberg
#' @examples
#' gtsummary:::assign_summary_type(
#' data = mtcars,
#' variable = names(mtcars),
#' class = apply(mtcars, 2, class),
#' summary_type = NULL, value = NULL
#' )
assign_summary_type <- function(data, variable, class, summary_type, value) {
# checking if user requested type = "categorical" for variable that is all missing
if (!is.null(summary_type)) {
summary_type <- purrr::imap(
summary_type,
function(.x, .y) {
categorical_missing <-
.x == "categorical" &&
length(data[[.y]]) == sum(is.na(data[[.y]])) &&
!"factor" %in% class(data[[.y]]) # factor can be summarized with categorical
if(categorical_missing == FALSE) return(.x)
message(glue(
"Variable '{.y}' is `NA` for all observations and cannot be summarized as 'categorical'.\n",
"Using `{.y} ~ \"dichotomous\"` instead."
))
return("dichotomous")
}
)
}
# assigning types ------------------------------------------------------------
type <- map2_chr(
variable, class,
~ summary_type[[.x]] %||%
case_when(
# if a value to display was supplied, then dichotomous
!is.null(value[[.x]]) &
length(intersect(value[[.x]], data[[.x]])) > 0
~ "dichotomous",
# logical variables will be dichotomous
.y == "logical" ~ "dichotomous",
# numeric variables that are 0 and 1 only, will be dichotomous
.y %in% c("integer", "numeric") &
length(setdiff(na.omit(data[[.x]]), c(0, 1))) == 0 &
nrow(data) != sum(is.na(data[[.x]])) ~
"dichotomous",
# factor variables that are "No" and "Yes" only, will be dichotomous
.y %in% c("factor") & setequal(attr(data[[.x]], "levels"), c("No", "Yes")) ~
"dichotomous",
.y %in% c("factor") & setequal(attr(data[[.x]], "levels"), c("no", "yes")) ~
"dichotomous",
.y %in% c("factor") & setequal(attr(data[[.x]], "levels"), c("NO", "YES")) ~
"dichotomous",
# character variables that are "No" and "Yes" only, will be dichotomous
.y %in% c("character") & setequal(na.omit(data[[.x]]), c("No", "Yes")) ~
"dichotomous",
.y %in% c("character") & setequal(na.omit(data[[.x]]), c("no", "yes")) ~
"dichotomous",
.y %in% c("character") & setequal(na.omit(data[[.x]]), c("NO", "YES")) ~
"dichotomous",
        # character variables that are all missing default to dichotomous
        .y == "character" & nrow(data) == sum(is.na(data[[.x]])) ~ "dichotomous",
        # otherwise, factors and characters are categorical
        .y %in% c("factor", "character") ~ "categorical",
# numeric variables with fewer than 10 levels will be categorical
.y %in% c("integer", "numeric", "difftime") &
length(unique(na.omit(data[[.x]]))) < 10 &
nrow(data) != sum(is.na(data[[.x]])) ~
"categorical",
# everything else is assigned to continuous
TRUE ~ "continuous"
)
)
# checking user did not request a factor or character variable be summarized
# as a continuous variable
purrr::pwalk(
list(type, class, variable),
~ if(..1 == "continuous" && ..2 %in% c("factor", "character"))
stop(glue(
"Column '{..3}' is class \"{..2}\" and cannot be summarized as a continuous variable."
), call. = FALSE)
)
type
}
#' Assigns variable label to display.
#'
#' Preference is given to labels specified in `fmt_table1(..., var_label = list())`
#' argument, then to a label attribute attached to the data frame
#' (i.e. attr(data$var, "label")), then to the variable name.
#'
#' @param data Data frame.
#' @param variable Vector of column names.
#' @param var_label list that includes specified variable labels,
#' e.g. `var_label = list(age = "Age, yrs")`
#' @return Vector of variable labels.
#' @keywords internal
#' @noRd
#' @author Daniel D. Sjoberg
#' @examples
#' gtsummary:::assign_var_label(mtcars, names(mtcars), list(hp = "Horsepower"))
assign_var_label <- function(data, variable, var_label) {
map_chr(
variable,
~ var_label[[.x]] %||%
attr(data[[.x]], "label") %||%
.x
)
}
#' Guesses how many digits to use in rounding continuous variables
#' or summary statistics
#'
#' @param x vector containing the values of a continuous variable. This can be
#' raw data values or a vector of summary statistics themselves
#' @return the rounded values
#' @noRd
#' @keywords internal
#' @author Emily Zabor, Daniel D. Sjoberg
# takes as the input a vector of variable and summary types
continuous_digits_guess <- function(data,
variable,
summary_type,
class,
digits = NULL) {
pmap(
list(variable, summary_type, class),
~ continuous_digits_guess_one(data, ..1, ..2, ..3, digits)
)
}
# runs for a single var
continuous_digits_guess_one <- function(data,
variable,
summary_type,
class,
digits = NULL) {
# if all values are NA, returning 0
if (nrow(data) == sum(is.na(data[[variable]]))) {
return(0)
}
# if the variable is not continuous type, return NA
if (summary_type != "continuous") {
return(NA)
}
# if the number of digits is specified for a variable, return specified number
if (!is.null(digits[[variable]])) {
return(digits[[variable]])
}
  # if class is integer, then round everything to nearest integer
if (class == "integer") {
return(0)
}
# calculate the spread of the variable
var_spread <- stats::quantile(data[[variable]], probs = c(0.95), na.rm = TRUE) -
stats::quantile(data[[variable]], probs = c(0.05), na.rm = TRUE)
  # otherwise guess the number of digits to use based on the spread
case_when(
var_spread < 0.01 ~ 4,
var_spread >= 0.01 & var_spread < 0.1 ~ 3,
var_spread >= 0.1 & var_spread < 10 ~ 2,
var_spread >= 10 & var_spread < 20 ~ 1,
var_spread >= 20 ~ 0
)
}
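# Usage sketch (hypothetical call, mirroring how this is invoked internally):
# for mtcars$mpg the 5th-95th percentile spread is roughly 19, so the guess
# should land in the `var_spread >= 10 & var_spread < 20 ~ 1` branch.
# continuous_digits_guess_one(mtcars, "mpg", "continuous", "numeric")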
#' Simple utility function to extract and calculate additional information
#' about the 'by' variable in \code{\link{tbl_summary}}
#'
#' Given a dataset and the name of the 'by' variable, this function returns a
#' data frame with unique levels of the by variable, the by variable ID, a character
#' version of the levels, and the column name for each level in the \code{\link{tbl_summary}}
#' output data frame.
#'
#' @param data data frame
#' @param by character name of the `by` variable found in data
#' @noRd
#' @keywords internal
#' @author Daniel D. Sjoberg
df_by <- function(data, by) {
if (is.null(by)) return(NULL)
if (inherits(data[[by]], "factor"))
result <- tibble(by = attr(data[[by]], "levels") %>%
                      factor(x = ., levels = ., labels = .))
else result <- data %>% select(by) %>% dplyr::distinct() %>% set_names("by")
result <-
result %>%
arrange(!!sym("by")) %>%
mutate(
n = purrr::map_int(.data$by, ~ sum(data[[!!by]] == .x)),
N = sum(.data$n),
p = .data$n / .data$N,
by_id = 1:n(), # 'by' variable ID
by_chr = as.character(.data$by), # Character version of 'by' variable
      by_col = paste0("stat_", .data$by_id) # Column name in tbl_summary output
) %>%
select(starts_with("by"), everything())
attr(result$by, "label") <- NULL
result
}
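# Usage sketch (hypothetical call): df_by(mtcars, "am") should return one row
# per level of 'am' with columns by, by_id, by_chr, by_col ("stat_1", "stat_2"),
# and the counts/proportions n, N, p.
# df_by(mtcars, "am")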
#' Assigns categorical variables sort type ("alphanumeric" or "frequency")
#'
#' @param variable variable name
#' @param summary_type the type of variable ("continuous", "categorical", "dichotomous")
#' @param sort named list indicating the type of sorting to perform. Default is NULL.
#' @noRd
#' @keywords internal
#' @author Daniel D. Sjoberg
# this function assigns categorical variables sort type ("alphanumeric" or "frequency")
assign_sort <- function(variable, summary_type, sort) {
purrr::map2_chr(
variable, summary_type,
function(variable, summary_type) {
      # only assigning sort type for categorical data
if (summary_type == "dichotomous") {
return("alphanumeric")
}
if (summary_type != "categorical") {
return(NA_character_)
}
# if variable was specified, then use that
if (!is.null(sort[[variable]])) {
return(sort[[variable]])
}
# otherwise, return "alphanumeric"
return("alphanumeric")
}
)
}
# function that checks the inputs to \code{\link{tbl_summary}}
# this should include EVERY input of \code{\link{tbl_summary}} in the same order
# copy and paste them from \code{\link{tbl_summary}}
tbl_summary_data_checks <- function(data) {
# data -----------------------------------------------------------------------
# data is a data frame
if (!is.data.frame(data)) {
stop("'data' input must be a data frame.", call. = FALSE)
}
# cannot be empty data frame
if (nrow(data) == 0L) {
stop("Expecting 'data' to have at least 1 row.", call. = FALSE)
}
# must have at least one column
if (ncol(data) == 0L) {
    stop("Expecting 'data' to have at least 1 column.", call. = FALSE)
}
}
tbl_summary_input_checks <- function(data, by, label, type, value, statistic,
digits, missing, missing_text, sort) {
# data -----------------------------------------------------------------------
tbl_summary_data_checks(data)
# by -------------------------------------------------------------------------
if (!is.null(by) && !by %in% names(data)) {
stop(glue(
"`by = '{by}'` is not a column in `data=`. Did you misspell the column name, ",
"or omit the column with the `include=` argument?"), call. = FALSE)
}
# type -----------------------------------------------------------------------
  if (!is.null(type) && is.null(names(type))) { # checking names for deprecated named list input
# checking input type: must be a list of formulas, or one formula
if (!inherits(type, c("list", "formula"))) {
stop(glue(
"'type' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the type specification: ",
"list(c(age, marker) ~ \"continuous\")"
), call. = FALSE)
}
if (inherits(type, "list")) {
if (some(type, negate(rlang::is_bare_formula))) {
stop(glue(
"'type' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the type specification: ",
"list(c(age, marker) ~ \"continuous\")"
), call. = FALSE)
}
}
    # all specified types are continuous, categorical, or dichotomous
if (inherits(type, "formula")) type <- list(type)
    if (!every(type, ~ eval(rlang::f_rhs(.x)) %in% c("continuous", "categorical", "dichotomous")) ||
        !every(type, ~ rlang::is_string(eval(rlang::f_rhs(.x))))) {
      stop(glue(
        "The RHS of the formula in the 'type' argument must be one and only one of ",
        "\"continuous\", \"categorical\", or \"dichotomous\""
      ), call. = FALSE)
}
}
# value ----------------------------------------------------------------------
  if (!is.null(value) && is.null(names(value))) { # checking names for deprecated named list input
# checking input type: must be a list of formulas, or one formula
if (!inherits(value, c("list", "formula"))) {
stop(glue(
"'value' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the value specification: ",
"list(stage ~ \"T1\")"
), call. = FALSE)
}
if (inherits(value, "list")) {
if (some(value, negate(rlang::is_bare_formula))) {
stop(glue(
"'value' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the value specification: ",
"list(stage ~ \"T1\")"
), call. = FALSE)
}
}
# functions all_continuous, all_categorical, and all_dichotomous cannot be used for value
if (some(
value,
      ~ deparse(.x) %>% # converts a formula to a string
        stringr::str_detect(stringr::fixed(c("all_continuous()", "all_categorical()", "all_dichotomous()"))) %>%
any()
)) {
stop(glue(
"Select functions all_continuous(), all_categorical(), all_dichotomous() ",
"cannot be used in the 'value' argument."
), call. = FALSE)
}
}
# label ----------------------------------------------------------------------
  if (!is.null(label) && is.null(names(label))) { # checking names for deprecated named list input
    # all specified labels must be a string of length 1
if (inherits(label, "formula")) label <- list(label)
if (!every(label, ~ rlang::is_string(eval(rlang::f_rhs(.x))))) {
stop(glue(
"The RHS of the formula in the 'label' argument must be a string."
), call. = FALSE)
}
}
# statistic ------------------------------------------------------------------
  if (!is.null(statistic) && is.null(names(statistic))) { # checking names for deprecated named list input
# checking input type: must be a list of formulas, or one formula
if (!inherits(statistic, c("list", "formula"))) {
stop(glue(
"'statistic' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the statistic specification: ",
"list(all_categorical() ~ \"{n} / {N}\")"
), call. = FALSE)
}
if (inherits(statistic, "list")) {
if (some(statistic, negate(rlang::is_bare_formula))) {
stop(glue(
"'statistic' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the statistic specification: ",
"list(all_categorical() ~ \"{n} / {N}\")"
), call. = FALSE)
}
}
    # all specified statistics must be a string of length 1
if (inherits(statistic, "formula")) statistic <- list(statistic)
if (!every(statistic, ~ rlang::is_string(eval(rlang::f_rhs(.x))))) {
stop(glue(
"The RHS of the formula in the 'statistic' argument must be a string."
), call. = FALSE)
}
}
# digits ---------------------------------------------------------------------
  if (!is.null(digits) && is.null(names(digits))) { # checking names for deprecated named list input
# checking input type: must be a list of formulas, or one formula
if (!inherits(digits, c("list", "formula"))) {
stop(glue(
"'digits' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the digits specification: ",
"list(c(age, marker) ~ 1)"
), call. = FALSE)
}
if (inherits(digits, "list")) {
if (some(digits, negate(rlang::is_bare_formula))) {
stop(glue(
"'digits' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the digits specification: ",
"list(c(age, marker) ~ 1)"
), call. = FALSE)
}
}
}
# missing_text ---------------------------------------------------------------
# input must be character
if (!rlang::is_string(missing_text)) {
stop("Argument 'missing_text' must be a character string of length 1.", call. = FALSE)
}
# sort -----------------------------------------------------------------------
  if (!is.null(sort) && is.null(names(sort))) { # checking names for deprecated named list input
# checking input type: must be a list of formulas, or one formula
if (!inherits(sort, c("list", "formula"))) {
stop(glue(
"'sort' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the sort specification: ",
        "list(c(age, marker) ~ \"alphanumeric\")"
), call. = FALSE)
}
if (inherits(sort, "list")) {
if (some(sort, negate(rlang::is_bare_formula))) {
stop(glue(
"'sort' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the sort specification: ",
"list(c(stage, marker) ~ \"frequency\")"
), call. = FALSE)
}
}
    # all specified types are frequency or alphanumeric
if (inherits(sort, "formula")) sort <- list(sort)
    if (!every(sort, ~ eval(rlang::f_rhs(.x)) %in% c("frequency", "alphanumeric")) ||
        !every(sort, ~ rlang::is_string(eval(rlang::f_rhs(.x))))) {
      stop(glue(
        "The RHS of the formula in the 'sort' argument must be one and only one of ",
        "\"frequency\" or \"alphanumeric\""
      ), call. = FALSE)
), call. = FALSE)
}
}
}
# # data
# # not putting in a data frame
# tbl_summary_input_checks(
# data = list(not = "a", proper = "data.frame"), by = NULL, summary_type = NULL, var_label = NULL,
# stat_display = NULL, digits = NULL, pvalue_fun = NULL
# )
#
# # by
# # by var not in dataset
# tbl_summary_input_checks(
# data = mtcars, by = "Petal.Width", summary_type = NULL, var_label = NULL,
# stat_display = NULL, digits = NULL, pvalue_fun = NULL
# )
#
# # summary_type
# # names not a variables name, and input type not valid
# tbl_summary_input_checks(
# data = mtcars, by = NULL, summary_type = list(hp = "continuous", length = "catttegorical"), var_label = NULL,
# stat_display = NULL, digits = NULL, pvalue_fun = NULL
# )
#
# # var_label
# # names not a variables name, and input type not valid
# tbl_summary_input_checks(
# data = mtcars, by = NULL, summary_type = NULL, var_label = list(hp = "Horsepower", wrong_name = "Not a variable"),
# stat_display = NULL, digits = NULL, pvalue_fun = NULL
# )
#
# # stat_display
# tbl_summary_input_checks(
# data = mtcars, by = NULL, summary_type = NULL, var_label = NULL,
# stat_display = list(continuous = "{mean}", dichot_not = "nope"), digits = NULL, pvalue_fun = NULL
# )
#
# # digits
# tbl_summary_input_checks(
# data = mtcars, by = NULL, summary_type = NULL, var_label = NULL,
# stat_display = NULL, digits = list(hp = 2, wrong_name = 2.1, am = 5.4), pvalue_fun = NULL
# )
# tbl_summary_input_checks(
# data = NULL, by = NULL, summary_type = NULL, var_label = NULL,
# stat_display = NULL, digits = NULL, pvalue_fun = NULL
# )
# stat_label_match -------------------------------------------------------------
# provide a vector of stat_display and get labels back i.e. {mean} ({sd}) gives Mean (SD)
stat_label_match <- function(stat_display, iqr = TRUE) {
language <- get_theme_element("pkgwide-str:language", default = "en")
labels <-
tibble::tribble(
~stat, ~label,
"{min}", translate_text("minimum", language),
"{max}", translate_text("maximum", language),
"{median}", translate_text("median", language),
"{mean}", translate_text("mean", language),
"{sd}", translate_text("SD", language),
"{var}", translate_text("variance", language),
"{n}", translate_text("n", language),
"{N}", translate_text("N", language),
"{p}%", translate_text("%", language),
"{p_miss}%", translate_text("% missing", language),
"{p_nonmiss}%", translate_text("% not missing", language),
"{p}", translate_text("%", language),
"{p_miss}", translate_text("% missing", language),
"{p_nonmiss}", translate_text("% not missing", language),
"{N_miss}", translate_text("N missing", language),
"{N_nonmiss}", translate_text("N", language),
"{N_obs}", translate_text("no. obs.", language)
) %>%
# adding in quartiles
bind_rows(
tibble(stat = paste0("{p", 0:100, "}")) %>%
mutate(label = paste0(gsub("[^0-9\\.]", "", .data$stat), "%"))
) %>%
    # if a statistic does not appear in the list above, then print the function name
bind_rows(
tibble(
stat = str_extract_all(stat_display, "\\{.*?\\}") %>%
unlist() %>%
unique(),
label = .data$stat %>%
str_remove_all(pattern = fixed("}")) %>%
str_remove_all(pattern = fixed("{"))
)
)
# adding IQR replacements if indicated
if (iqr == TRUE) {
labels <-
bind_rows(
tibble::tribble(
~stat, ~label,
"{p25}, {p75}", translate_text("IQR", language),
"{p25} - {p75}", translate_text("IQR", language)
),
labels
)
}
# replacing statistics in {}, with their labels
for (i in seq_len(nrow(labels))) {
stat_display <-
stringr::str_replace_all(
stat_display,
stringr::fixed(labels$stat[i]),
labels$label[i]
)
}
stat_display
}
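# Usage sketch (assuming the default English theme): the IQR shortcut is
# substituted before the individual quantile labels, so
# stat_label_match("{median} ({p25}, {p75})") should return "median (IQR)",
# and stat_label_match("{n} ({p}%)") should return "n (%)".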
# footnote_stat_label ----------------------------------------------------------
# stat_label footnote maker
footnote_stat_label <- function(meta_data) {
meta_data %>%
select(c("summary_type", "stat_label")) %>%
mutate(
summary_type = case_when(
summary_type == "dichotomous" ~ "categorical",
TRUE ~ .data$summary_type
),
message = glue("{stat_label}")
) %>%
distinct() %>%
pull("message") %>%
paste(collapse = "; ") %>%
paste0(translate_text("Statistics presented"), ": ", .)
}
# summarize_categorical --------------------------------------------------------
summarize_categorical <- function(data, variable, by, class, dichotomous_value, sort, percent) {
# grabbing percent formatting function
percent_fun <-
get_theme_element("tbl_summary-fn:percent_fun") %||%
getOption("gtsummary.tbl_summary.percent_fun", default = style_percent)
N_fun <-
get_theme_element("tbl_summary-fn:N_fun",
default = function(x) sprintf("%.0f", x))
# stripping attributes/classes that cause issues -----------------------------
# tidyr::complete throws warning `has different attributes on LHS and RHS of join`
# when variable has label. So deleting it.
attr(data[[variable]], "label") <- NULL
if (!is.null(by)) attr(data[[by]], "label") <- NULL
# same thing when the class "labelled" is included when labeled with the Hmisc package
class(data[[variable]]) <- setdiff(class(data[[variable]]), "labelled")
if (!is.null(by)) class(data[[by]]) <- setdiff(class(data[[by]]), "labelled")
# tabulating data ------------------------------------------------------------
df_by <- df_by(data, by)
variable_by_chr <- c("variable", switch(!is.null(by), "by"))
data <- data %>%
select(c(variable, by)) %>%
# renaming variables to c("variable", "by") (if there is a by variable)
set_names(variable_by_chr)
df_tab <-
data %>%
mutate(
# converting to factor, if not already factor
variable = switch(class, factor = .data$variable) %||% factor(.data$variable),
# adding dichotomous level (in case it is unobserved)
variable = forcats::fct_expand(.data$variable, as.character(dichotomous_value)),
      # re-leveling by alphanumeric order or frequency
variable = switch(sort,
"alphanumeric" = .data$variable,
"frequency" = forcats::fct_infreq(.data$variable))
) %>%
{suppressWarnings(count(., !!!syms(variable_by_chr)))} %>%
stats::na.omit() %>%
# if there is a by variable, merging in all levels
{switch(
!is.null(by),
full_join(.,
list(by = df_by$by,
variable = factor(attr(.$variable, "levels"),
levels = attr(.$variable, "levels"))) %>%
purrr::cross_df(),
by = c("by", "variable"))[c("by", "variable", "n")]) %||% .} %>%
tidyr::complete(!!!syms(variable_by_chr), fill = list(n = 0))
# calculating percent
group_by_percent <- switch(
percent,
"cell" = "",
"column" = ifelse(!is.null(by), "by", ""),
"row" = "variable"
)
result <- df_tab %>%
group_by(!!!syms(group_by_percent)) %>%
mutate(
N = sum(.data$n),
# if the Big N is 0, there is no denom so making percent NA
p = ifelse(.data$N == 0, NA, .data$n / .data$N)
) %>%
ungroup() %>%
rename(variable_levels = .data$variable) %>%
mutate(variable = !!variable) %>%
select(c(by, variable, "variable_levels", everything()))
if (!is.null(dichotomous_value)) {
result <- result %>%
filter(.data$variable_levels == !!dichotomous_value) %>%
select(-.data$variable_levels)
}
# adding percent_fun as attr to p column
attr(result$p, "fmt_fun") <- percent_fun
attr(result$N, "fmt_fun") <- N_fun
attr(result$n, "fmt_fun") <- N_fun
result
}
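# Usage sketch (hypothetical call): a column-percent tabulation of cyl by am
# returns a long tibble with by, variable, variable_levels, n, N, and p,
# with formatting functions attached as "fmt_fun" attributes.
# summarize_categorical(
#   data = mtcars, variable = "cyl", by = "am", class = "numeric",
#   dichotomous_value = NULL, sort = "alphanumeric", percent = "column"
# )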
# summarize_continuous ---------------------------------------------------------
summarize_continuous <- function(data, variable, by, stat_display, digits) {
# stripping attributes/classes that cause issues -----------------------------
# tidyr::complete throws warning `has different attributes on LHS and RHS of join`
# when variable has label. So deleting it.
attr(data[[variable]], "label") <- NULL
if (!is.null(by)) attr(data[[by]], "label") <- NULL
# same thing when the class "labelled" is included when labeled with the Hmisc package
class(data[[variable]]) <- setdiff(class(data[[variable]]), "labelled")
if (!is.null(by)) class(data[[by]]) <- setdiff(class(data[[by]]), "labelled")
# extracting function calls
fns_names_chr <-
str_extract_all(stat_display, "\\{.*?\\}") %>%
map(str_remove_all, pattern = fixed("}")) %>%
map(str_remove_all, pattern = fixed("{")) %>%
unlist() %>%
# removing elements protected as other items
setdiff(c("p_miss", "p_nonmiss", "N_miss", "N_nonmiss", "N_obs"))
# defining shortcut quantile functions, if needed
if (any(fns_names_chr %in% paste0("p", 0:100))) {
fns_names_chr[fns_names_chr %in% paste0("p", 0:100)] %>%
set_names(.) %>%
imap(~purrr::partial(
quantile,
probs = as.numeric(stringr::str_replace(.x, pattern = "^p", "")) / 100
)) %>%
list2env(envir = rlang::env_parent())
}
if (length(fns_names_chr) == 0) stop(glue(
"No summary function found in `{stat_display}` for variable '{variable}'.\n",
"Did you wrap the function name in curly brackets?"
), call. = FALSE)
if (any(c("by", "variable") %in% fns_names_chr)) {
    stop(paste(
      "'by' and 'variable' are protected names, and continuous variables",
      "cannot be summarized with functions by these names."), call. = FALSE)
}
# prepping data set
variable_by_chr <- c("variable", switch(!is.null(by), "by"))
df_by <- df_by(data, by)
data <-
data %>%
select(c(variable, by)) %>%
stats::na.omit() %>%
# renaming variables to c("variable", "by") (if there is a by variable)
set_names(variable_by_chr)
# calculating stats for each var and by level
if (!is.null(by)) {
df_stats <-
list(
fn = fns_names_chr,
by = df_by$by
) %>%
cross_df() %>%
mutate(
variable = variable,
value = purrr::map2_dbl(
.data$fn, .data$by,
function(x, y) {
var_vctr <- filter(data, .data$by == y) %>% pull(.data$variable)
if (length(var_vctr) == 0) return(NA)
do.call(what = x, args = list(x = var_vctr))
}
)
) %>%
tidyr::pivot_wider(id_cols = c("by", "variable"), names_from = "fn")
}
else if (is.null(by)) {
df_stats <-
list(fn = fns_names_chr) %>%
cross_df() %>%
mutate(
variable = variable,
value = map_dbl(
.data$fn,
~do.call(what = .x, args = list(x = pull(data, .data$variable)))
)
) %>%
tidyr::pivot_wider(id_cols = c("variable"), names_from = "fn")
}
# adding formatting function as attr to summary statistics columns
fmt_fun <- as.list(rep(digits, length.out = length(fns_names_chr))) %>%
set_names(fns_names_chr)
df_stats <- purrr::imap_dfc(
df_stats,
function(column, colname) {
if(is.null(fmt_fun[[colname]])) return(column)
fmt <- glue("%.{fmt_fun[[colname]]}f")
attr(column, "fmt_fun") <- purrr::partial(sprintf, fmt = !!fmt)
column
}
)
# returning final object
df_stats
}
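# Usage sketch (hypothetical call): returns one row of summary statistics per
# 'by' level (or a single row when by = NULL), one column per statistic in
# stat_display, each carrying a sprintf-based "fmt_fun" attribute.
# summarize_continuous(
#   data = mtcars, variable = "mpg", by = NULL,
#   stat_display = "{mean} ({sd})", digits = 1
# )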
# df_stats_to_tbl --------------------------------------------------------------
df_stats_to_tbl <- function(data, variable, summary_type, by, var_label, stat_display,
df_stats, missing, missing_text) {
# styling the statistics -----------------------------------------------------
for (v in (names(df_stats) %>% setdiff(c("by", "variable", "variable_levels")))) {
df_stats[[v]] <- df_stats[[v]] %>% attr(df_stats[[v]], "fmt_fun")()
}
# calculating the statistic to be displayed in the cell in the table.
tryCatch(
df_stats$statistic <- glue::glue_data(df_stats, stat_display) %>% as.character(),
error = function(e) {
stop(glue(
"There was an error assembling the summary statistics for '{{variable}}'\n",
" with summary type '{{summary_type}}'.\n\n",
"There are 2 common sources for this error.\n",
        "1. You have requested summary statistics meant for continuous\n",
        "   variables for a variable being summarized as categorical.\n",
" To change the summary type to continuous, add the argument\n",
" `type = list({{variable}} ~ 'continuous')`\n",
"2. One of the functions or statistics from the `statistic=` argument is not valid.",
.open = "{{", .close = "}}"
))
})
# reshaping table to wide ----------------------------------------------------
if (!is.null(by)) {
df_stats_wide <-
df_stats %>%
select(any_of(c("by", "variable", "variable_levels", "statistic"))) %>%
# merging in new column header names
left_join(df_by(data, by)[c("by", "by_col")], by = "by") %>%
tidyr::pivot_wider(id_cols = any_of(c("variable", "variable_levels")),
names_from = "by_col",
values_from = "statistic")
}
else {
df_stats_wide <-
df_stats %>%
select(any_of(c("by", "variable", "variable_levels", "statistic"))) %>%
rename(stat_0 = .data$statistic) %>%
select(any_of(c("variable", "variable_levels", "stat_0")))
}
# setting up structure for table_body
if (summary_type == "categorical") {
# adding a label row for categorical variables
result <-
df_stats_wide %>%
mutate(
row_type = "level",
label = as.character(.data$variable_levels)
) %>%
select(-.data$variable_levels)
# adding label row
result <-
tibble(
variable = variable,
row_type = "label",
label = var_label
) %>%
bind_rows(result)
}
else if (summary_type %in% c("continuous", "dichotomous")) {
result <-
df_stats_wide %>%
mutate(
row_type = "label",
label = var_label
)
}
# add rows for missing -------------------------------------------------------
  if (missing == "always" || (missing == "ifany" && sum(is.na(data[[variable]])) > 0)) {
result <-
result %>%
bind_rows(
calculate_missing_row(data = data, variable = variable,
by = by, missing_text = missing_text)
)
}
# returning final object formatted for table_body ----------------------------
# selecting stat_* cols (in the correct order)
stat_vars <- switch(!is.null(by), df_by(data, by)$by_col) %||% "stat_0"
result %>% select(all_of(c("variable", "row_type", "label", stat_vars)))
}
# calculate_missing_row --------------------------------------------------------
calculate_missing_row <- function(data, variable, by, missing_text) {
# converting variable to TRUE/FALSE for missing
data <-
data %>%
select(c(variable, by)) %>%
mutate(
!!variable := is.na(.data[[variable]])
)
# passing the T/F variable through the functions to format as we do in
# the tbl_summary output
summarize_categorical(
data = data, variable = variable, by = by, class = "logical",
dichotomous_value = TRUE, sort = "alphanumeric", percent = "column"
) %>%
{df_stats_to_tbl(
data = data, variable = variable, summary_type = "dichotomous", by = by,
var_label = missing_text, stat_display = "{n}", df_stats = .,
missing = "no", missing_text = "Doesn't Matter -- Text should never appear")} %>%
# changing row_type to missing
mutate(row_type = "missing")
}
# df_stats_fun -----------------------------------------------------------------
# this function creates df_stats in the tbl_summary meta data table
# and includes the number of missing values
df_stats_fun <- function(summary_type, variable, class, dichotomous_value, sort,
stat_display, digits, data, by, percent) {
# first table are the standard stats
t1 <- switch(
summary_type,
"continuous" = summarize_continuous(data = data, variable = variable,
by = by, stat_display = stat_display,
digits = digits),
"categorical" = summarize_categorical(data = data, variable = variable,
by = by, class = class,
dichotomous_value = dichotomous_value,
sort = sort, percent = percent),
"dichotomous" = summarize_categorical(data = data, variable = variable,
by = by, class = class,
dichotomous_value = dichotomous_value,
sort = sort, percent = percent)
)
# adding the N_obs and N_missing, etc
t2 <- summarize_categorical(data = mutate_at(data, vars(all_of(variable)), is.na),
variable = variable,
by = by, class = "logical",
dichotomous_value = TRUE,
sort = "alphanumeric", percent = percent) %>%
rename(p_miss = .data$p, N_obs = .data$N, N_miss = .data$n) %>%
mutate(N_nonmiss = .data$N_obs - .data$N_miss,
p_nonmiss = 1 - .data$p_miss)
  # returning table with all stats
merge_vars <- switch(!is.null(by), c("by", "variable")) %||% "variable"
return <- left_join(t1, t2, by = merge_vars)
# setting fmt_fun for percents and integers
attr(return$p_nonmiss, "fmt_fun") <- attr(return$p_miss, "fmt_fun")
attr(return$N_nonmiss, "fmt_fun") <- attr(return$N_miss, "fmt_fun")
return
}
# translation function ---------------------------------------------------------
translate_text <- function(x, language = get_theme_element("pkgwide-str:language", default = "en")) {
if (language == "en") return(x)
# sub-setting on row of text to translate
df_text <- filter(df_translations, .data$en == x)
# if no rows selected OR translation is not provided return x, otherwise the translated text
ifelse(nrow(df_text) != 1 || is.na(df_text[[language]]), x, df_text[[language]])
}
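# Usage sketch: English text passes through unchanged; for other theme
# languages the string is looked up in the internal df_translations table,
# falling back to the English text when no translation exists.
# translate_text("median", language = "en") # returns "median"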
#' Extract class of variable
#'
#' Some packages append non-base classes to data frame columns, e.g.
#' if data is labeled with the `Hmisc` package the class of a string will
#' be `c("labelled", "character")` rather than `c("character")` only. This
#' simple function extracts the base R class.
#'
#' @param data data frame
#' @param variable string vector of column names from data
#' @keywords internal
#' @noRd
#' @author Daniel D. Sjoberg
assign_class <- function(data, variable, classes_expected) {
# extracting the base R class
classes_return <-
map(variable, ~class(data[[.x]]) %>% intersect(classes_expected))
classes_return
}
#' For dichotomous data, returns that value that will be printed in table.
#'
#' @param data data frame
#' @param variable character variable name in \code{data} that will be tabulated
#' @param summary_type the type of summary statistics that will be calculated
#' @param class class of \code{variable}
#' @return value that will be printed in table for dichotomous data
#' @keywords internal
#' @noRd
#' @author Daniel D. Sjoberg
# wrapper for assign_dichotomous_value_one() function
assign_dichotomous_value <- function(data, variable, summary_type, class, value) {
pmap(
list(variable, summary_type, class),
~ assign_dichotomous_value_one(data, ..1, ..2, ..3, value)
)
}
assign_dichotomous_value_one <- function(data, variable, summary_type, class, value) {
# only assign value for dichotomous data
if (summary_type != "dichotomous") {
return(NULL)
}
# removing all NA values
var_vector <- data[[variable]][!is.na(data[[variable]])]
# if 'value' provided, then dichotomous_value is the provided one
if (!is.null(value[[variable]])) {
return(value[[variable]])
}
# if class is logical, then value will be TRUE
if (class == "logical") {
return(TRUE)
}
# if column provided is a factor with "Yes" and "No" (or "yes" and "no") then
# the value is "Yes" (or "yes")
if (class %in% c("factor", "character")) {
if (setdiff(var_vector, c("Yes", "No")) %>% length() == 0) {
return("Yes")
}
if (setdiff(var_vector, c("yes", "no")) %>% length() == 0) {
return("yes")
}
if (setdiff(var_vector, c("YES", "NO")) %>% length() == 0) {
return("YES")
}
}
# if column provided is all zeros and ones (or exclusively either one), the the value is one
if (setdiff(var_vector, c(0, 1)) %>% length() == 0) {
return(1)
}
# otherwise, the value must be passed from the values argument to tbl_summary
stop(glue(
"'{variable}' is dichotomous, but I was unable to determine the ",
"level to display. Use the 'value = list({variable} = <level>)' argument ",
"to specify level."
), call. = FALSE)
}
# assign_dichotomous_value_one(mtcars, "am", "dichotomous", "double", NULL)
#' Assign type of summary statistic
#'
#' Function that assigns default statistics to display, or if specified,
#' assigns the user-defined statistics for display.
#'
#' @param variable Vector of variable names
#' @param summary_type A list that includes specified summary types
#' @param stat_display List with up to two named elements. Names must be
#' continuous or categorical. Can be \code{NULL}.
#' @return vector of stat_display selections for each variable
#' @keywords internal
#' @noRd
#' @author Daniel D. Sjoberg
assign_stat_display <- function(variable, summary_type, stat_display) {
# dichotomous and categorical are treated in the same fashion here
summary_type <- ifelse(summary_type == "dichotomous", "categorical", summary_type)
# return user-specified stat_display, then theme default, then package default
return(
map2_chr(
variable, summary_type,
~ case_when(
.y == "continuous" ~
stat_display[[.x]] %||%
get_theme_element("tbl_summary-str:continuous_stat") %||%
"{median} ({p25}, {p75})",
.y %in% c("categorical", "dichotomous") ~
stat_display[[.x]] %||%
get_theme_element("tbl_summary-str:categorical_stat") %||%
"{n} ({p}%)"
)
)
)
}
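# illustrative calls showing the fall-through above, assuming no theme
# elements are set (variable names are hypothetical):
# assign_stat_display("age", "continuous", NULL) # "{median} ({p25}, {p75})"
# assign_stat_display("grade", "categorical", NULL) # "{n} ({p}%)"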
#' Assigns summary type (e.g. continuous, categorical, or dichotomous).
#'
#' For variables where the summary type was not specified in the function
#' call of `tbl_summary`, `assign_summary_type` assigns a type based on class and
#' number of unique levels.
#'
#' @param data Data frame.
#' @param variable Vector of column names.
#' @param class Vector of classes (e.g. numeric, character, etc.)
#' corresponding one-to-one with the names in `variable`.
#' @param summary_type list that includes specified summary types,
#' e.g. \code{summary_type = list(age = "continuous")}
#' @return Vector of summary types `c("continuous", "categorical", "dichotomous")`.
#' @keywords internal
#' @noRd
#' @author Daniel D. Sjoberg
#' @examples
#' gtsummary:::assign_summary_type(
#' data = mtcars,
#' variable = names(mtcars),
#' class = sapply(mtcars, class),
#' summary_type = NULL, value = NULL
#' )
assign_summary_type <- function(data, variable, class, summary_type, value) {
# checking if user requested type = "categorical" for variable that is all missing
if (!is.null(summary_type)) {
summary_type <- purrr::imap(
summary_type,
function(.x, .y) {
categorical_missing <-
.x == "categorical" &&
length(data[[.y]]) == sum(is.na(data[[.y]])) &&
!"factor" %in% class(data[[.y]]) # factor can be summarized with categorical
if (!categorical_missing) return(.x)
message(glue(
"Variable '{.y}' is `NA` for all observations and cannot be summarized as 'categorical'.\n",
"Using `{.y} ~ \"dichotomous\"` instead."
))
return("dichotomous")
}
)
}
# assigning types ------------------------------------------------------------
type <- map2_chr(
variable, class,
~ summary_type[[.x]] %||%
case_when(
# if a value to display was supplied and appears in the data, then dichotomous
!is.null(value[[.x]]) &
length(intersect(value[[.x]], data[[.x]])) > 0
~ "dichotomous",
# logical variables will be dichotomous
.y == "logical" ~ "dichotomous",
# numeric variables that are 0 and 1 only, will be dichotomous
.y %in% c("integer", "numeric") &
length(setdiff(na.omit(data[[.x]]), c(0, 1))) == 0 &
nrow(data) != sum(is.na(data[[.x]])) ~
"dichotomous",
# factor variables that are "No" and "Yes" only, will be dichotomous
.y %in% c("factor") & setequal(attr(data[[.x]], "levels"), c("No", "Yes")) ~
"dichotomous",
.y %in% c("factor") & setequal(attr(data[[.x]], "levels"), c("no", "yes")) ~
"dichotomous",
.y %in% c("factor") & setequal(attr(data[[.x]], "levels"), c("NO", "YES")) ~
"dichotomous",
# character variables that are "No" and "Yes" only, will be dichotomous
.y %in% c("character") & setequal(na.omit(data[[.x]]), c("No", "Yes")) ~
"dichotomous",
.y %in% c("character") & setequal(na.omit(data[[.x]]), c("no", "yes")) ~
"dichotomous",
.y %in% c("character") & setequal(na.omit(data[[.x]]), c("NO", "YES")) ~
"dichotomous",
# factors and characters are categorical (except when all missing)
.y == "character" & nrow(data) == sum(is.na(data[[.x]])) ~ "dichotomous",
.y %in% c("factor", "character") ~ "categorical",
# numeric variables with fewer than 10 levels will be categorical
.y %in% c("integer", "numeric", "difftime") &
length(unique(na.omit(data[[.x]]))) < 10 &
nrow(data) != sum(is.na(data[[.x]])) ~
"categorical",
# everything else is assigned to continuous
TRUE ~ "continuous"
)
)
# checking user did not request a factor or character variable be summarized
# as a continuous variable
purrr::pwalk(
list(type, class, variable),
~ if(..1 == "continuous" && ..2 %in% c("factor", "character"))
stop(glue(
"Column '{..3}' is class \"{..2}\" and cannot be summarized as a continuous variable."
), call. = FALSE)
)
type
}
#' Assigns variable label to display.
#'
#' Preference is given to labels specified in the `var_label = list()`
#' argument, then to a label attribute attached to the data frame column
#' (i.e. `attr(data$var, "label")`), then to the variable name.
#'
#' @param data Data frame.
#' @param variable Vector of column names.
#' @param var_label list that includes specified variable labels,
#' e.g. `var_label = list(age = "Age, yrs")`
#' @return Vector of variable labels.
#' @keywords internal
#' @noRd
#' @author Daniel D. Sjoberg
#' @examples
#' gtsummary:::assign_var_label(mtcars, names(mtcars), list(hp = "Horsepower"))
assign_var_label <- function(data, variable, var_label) {
map_chr(
variable,
~ var_label[[.x]] %||%
attr(data[[.x]], "label") %||%
.x
)
}
#' Guesses how many digits to use in rounding continuous variables
#' or summary statistics
#'
#' @param data data frame
#' @param variable vector of variable names
#' @param summary_type vector of summary types
#' @param class vector of classes
#' @param digits optional named list of user-specified digits
#' @return the number of digits to round each continuous variable to
#' (NA for non-continuous variables)
#' @noRd
#' @keywords internal
#' @author Emily Zabor, Daniel D. Sjoberg
# takes as the input a vector of variable and summary types
continuous_digits_guess <- function(data,
variable,
summary_type,
class,
digits = NULL) {
pmap(
list(variable, summary_type, class),
~ continuous_digits_guess_one(data, ..1, ..2, ..3, digits)
)
}
# runs for a single var
continuous_digits_guess_one <- function(data,
variable,
summary_type,
class,
digits = NULL) {
# if all values are NA, returning 0
if (nrow(data) == sum(is.na(data[[variable]]))) {
return(0)
}
# if the variable is not continuous type, return NA
if (summary_type != "continuous") {
return(NA)
}
# if the number of digits is specified for a variable, return specified number
if (!is.null(digits[[variable]])) {
return(digits[[variable]])
}
# if class is integer, then round everything to nearest integer
if (class == "integer") {
return(0)
}
# calculate the spread of the variable
var_spread <- stats::quantile(data[[variable]], probs = c(0.95), na.rm = TRUE) -
stats::quantile(data[[variable]], probs = c(0.05), na.rm = TRUE)
# otherwise guess the number of digits to use based on the spread
case_when(
var_spread < 0.01 ~ 4,
var_spread >= 0.01 & var_spread < 0.1 ~ 3,
var_spread >= 0.1 & var_spread < 10 ~ 2,
var_spread >= 10 & var_spread < 20 ~ 1,
var_spread >= 20 ~ 0
)
}
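# illustrative call: mtcars$wt has a 5th-95th percentile spread of roughly 3.5,
# so the spread-based rules above select 2 decimal places:
# continuous_digits_guess_one(mtcars, "wt", "continuous", "numeric") # 2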
#' Simple utility function to extract and calculate additional information
#' about the 'by' variable in \code{\link{tbl_summary}}
#'
#' Given a dataset and the name of the 'by' variable, this function returns a
#' data frame with unique levels of the by variable, the by variable ID, a character
#' version of the levels, and the column name for each level in the \code{\link{tbl_summary}}
#' output data frame.
#'
#' @param data data frame
#' @param by character name of the `by` variable found in data
#' @noRd
#' @keywords internal
#' @author Daniel D. Sjoberg
df_by <- function(data, by) {
if (is.null(by)) return(NULL)
if (inherits(data[[by]], "factor"))
result <- tibble(by = attr(data[[by]], "levels") %>%
factor(x = ., levels = ., labels = .))
else result <- data %>% select(by) %>% dplyr::distinct() %>% set_names("by")
result <-
result %>%
arrange(!!sym("by")) %>%
mutate(
n = purrr::map_int(.data$by, ~ sum(data[[!!by]] == .x)),
N = sum(.data$n),
p = .data$n / .data$N,
by_id = 1:n(), # 'by' variable ID
by_chr = as.character(.data$by), # Character version of 'by' variable
by_col = paste0("stat_", .data$by_id) # Column name in the tbl_summary output
) %>%
select(starts_with("by"), everything())
attr(result$by, "label") <- NULL
result
}
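# illustrative call: returns one row per level of the 'by' variable, with
# columns by, by_id, by_chr, by_col, n, N, p:
# df_by(mtcars, "cyl")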
#' Assigns categorical variables sort type ("alphanumeric" or "frequency")
#'
#' @param variable variable name
#' @param summary_type the type of variable ("continuous", "categorical", "dichotomous")
#' @param sort named list indicating the type of sorting to perform. Default is NULL.
#' @noRd
#' @keywords internal
#' @author Daniel D. Sjoberg
# this function assigns categorical variables sort type ("alphanumeric" or "frequency")
assign_sort <- function(variable, summary_type, sort) {
purrr::map2_chr(
variable, summary_type,
function(variable, summary_type) {
# only assigning sort type for categorical data
if (summary_type == "dichotomous") {
return("alphanumeric")
}
if (summary_type != "categorical") {
return(NA_character_)
}
# if variable was specified, then use that
if (!is.null(sort[[variable]])) {
return(sort[[variable]])
}
# otherwise, return "alphanumeric"
return("alphanumeric")
}
)
}
# function that checks the inputs to \code{\link{tbl_summary}}
# this should include EVERY input of \code{\link{tbl_summary}} in the same order
# copy and paste them from \code{\link{tbl_summary}}
tbl_summary_data_checks <- function(data) {
# data -----------------------------------------------------------------------
# data is a data frame
if (!is.data.frame(data)) {
stop("'data' input must be a data frame.", call. = FALSE)
}
# cannot be empty data frame
if (nrow(data) == 0L) {
stop("Expecting 'data' to have at least 1 row.", call. = FALSE)
}
# must have at least one column
if (ncol(data) == 0L) {
stop("Expecting 'data' to have at least 1 column", call. = FALSE)
}
}
tbl_summary_input_checks <- function(data, by, label, type, value, statistic,
digits, missing, missing_text, sort) {
# data -----------------------------------------------------------------------
tbl_summary_data_checks(data)
# by -------------------------------------------------------------------------
if (!is.null(by) && !by %in% names(data)) {
stop(glue(
"`by = '{by}'` is not a column in `data=`. Did you misspell the column name, ",
"or omit the column with the `include=` argument?"), call. = FALSE)
}
# type -----------------------------------------------------------------------
if (!is.null(type) && is.null(names(type))) { # checking names for deprecated named list input
# checking input type: must be a list of formulas, or one formula
if (!inherits(type, c("list", "formula"))) {
stop(glue(
"'type' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the type specification: ",
"list(c(age, marker) ~ \"continuous\")"
), call. = FALSE)
}
if (inherits(type, "list")) {
if (some(type, negate(rlang::is_bare_formula))) {
stop(glue(
"'type' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the type specification: ",
"list(c(age, marker) ~ \"continuous\")"
), call. = FALSE)
}
}
# all specified types must be continuous, categorical, or dichotomous
if (inherits(type, "formula")) type <- list(type)
if (!every(type, ~ eval(rlang::f_rhs(.x)) %in% c("continuous", "categorical", "dichotomous")) ||
!every(type, ~ rlang::is_string(eval(rlang::f_rhs(.x))))) {
stop(glue(
"The RHS of the formula in the 'type' argument must be one and only one of ",
"\"continuous\", \"categorical\", or \"dichotomous\""
), call. = FALSE)
}
}
# value ----------------------------------------------------------------------
if (!is.null(value) && is.null(names(value))) { # checking names for deprecated named list input
# checking input type: must be a list of formulas, or one formula
if (!inherits(value, c("list", "formula"))) {
stop(glue(
"'value' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the value specification: ",
"list(stage ~ \"T1\")"
), call. = FALSE)
}
if (inherits(value, "list")) {
if (some(value, negate(rlang::is_bare_formula))) {
stop(glue(
"'value' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the value specification: ",
"list(stage ~ \"T1\")"
), call. = FALSE)
}
}
# functions all_continuous, all_categorical, and all_dichotomous cannot be used for value
if (some(
value,
~ deparse(.x) %>% # converts a formula to a string
stringr::str_detect(c("all_continuous()", "all_categorical()", "all_dichotomous()")) %>%
any()
)) {
stop(glue(
"Select functions all_continuous(), all_categorical(), all_dichotomous() ",
"cannot be used in the 'value' argument."
), call. = FALSE)
}
}
# label ----------------------------------------------------------------------
if (!is.null(label) && is.null(names(label))) { # checking names for deprecated named list input
# all specified labels must be a string of length 1
if (inherits(label, "formula")) label <- list(label)
if (!every(label, ~ rlang::is_string(eval(rlang::f_rhs(.x))))) {
stop(glue(
"The RHS of the formula in the 'label' argument must be a string."
), call. = FALSE)
}
}
# statistic ------------------------------------------------------------------
if (!is.null(statistic) && is.null(names(statistic))) { # checking names for deprecated named list input
# checking input type: must be a list of formulas, or one formula
if (!inherits(statistic, c("list", "formula"))) {
stop(glue(
"'statistic' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the statistic specification: ",
"list(all_categorical() ~ \"{n} / {N}\")"
), call. = FALSE)
}
if (inherits(statistic, "list")) {
if (some(statistic, negate(rlang::is_bare_formula))) {
stop(glue(
"'statistic' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the statistic specification: ",
"list(all_categorical() ~ \"{n} / {N}\")"
), call. = FALSE)
}
}
# all specified statistics must be a string of length 1
if (inherits(statistic, "formula")) statistic <- list(statistic)
if (!every(statistic, ~ rlang::is_string(eval(rlang::f_rhs(.x))))) {
stop(glue(
"The RHS of the formula in the 'statistic' argument must be a string."
), call. = FALSE)
}
}
# digits ---------------------------------------------------------------------
if (!is.null(digits) && is.null(names(digits))) { # checking names for deprecated named list input
# checking input type: must be a list of formulas, or one formula
if (!inherits(digits, c("list", "formula"))) {
stop(glue(
"'digits' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the digits specification: ",
"list(c(age, marker) ~ 1)"
), call. = FALSE)
}
if (inherits(digits, "list")) {
if (some(digits, negate(rlang::is_bare_formula))) {
stop(glue(
"'digits' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the digits specification: ",
"list(c(age, marker) ~ 1)"
), call. = FALSE)
}
}
}
# missing_text ---------------------------------------------------------------
# input must be character
if (!rlang::is_string(missing_text)) {
stop("Argument 'missing_text' must be a character string of length 1.", call. = FALSE)
}
# sort -----------------------------------------------------------------------
if (!is.null(sort) && is.null(names(sort))) { # checking names for deprecated named list input
# checking input type: must be a list of formulas, or one formula
if (!inherits(sort, c("list", "formula"))) {
stop(glue(
"'sort' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the sort specification: ",
"list(c(stage, marker) ~ \"frequency\")"
), call. = FALSE)
}
if (inherits(sort, "list")) {
if (some(sort, negate(rlang::is_bare_formula))) {
stop(glue(
"'sort' argument must be a list of formulas. ",
"LHS of the formula is the variable specification, ",
"and the RHS is the sort specification: ",
"list(c(stage, marker) ~ \"frequency\")"
), call. = FALSE)
}
}
# all specified sort types must be frequency or alphanumeric
if (inherits(sort, "formula")) sort <- list(sort)
if (!every(sort, ~ eval(rlang::f_rhs(.x)) %in% c("frequency", "alphanumeric")) ||
!every(sort, ~ rlang::is_string(eval(rlang::f_rhs(.x))))) {
stop(glue(
"The RHS of the formula in the 'sort' argument must be one and only one of ",
"\"frequency\" or \"alphanumeric\""
), call. = FALSE)
}
}
}
# # data
# # not putting in a data frame
# tbl_summary_input_checks(
# data = list(not = "a", proper = "data.frame"), by = NULL, summary_type = NULL, var_label = NULL,
# stat_display = NULL, digits = NULL, pvalue_fun = NULL
# )
#
# # by
# # by var not in dataset
# tbl_summary_input_checks(
# data = mtcars, by = "Petal.Width", summary_type = NULL, var_label = NULL,
# stat_display = NULL, digits = NULL, pvalue_fun = NULL
# )
#
# # summary_type
# # names not a variables name, and input type not valid
# tbl_summary_input_checks(
# data = mtcars, by = NULL, summary_type = list(hp = "continuous", length = "catttegorical"), var_label = NULL,
# stat_display = NULL, digits = NULL, pvalue_fun = NULL
# )
#
# # var_label
# # names not a variables name, and input type not valid
# tbl_summary_input_checks(
# data = mtcars, by = NULL, summary_type = NULL, var_label = list(hp = "Horsepower", wrong_name = "Not a variable"),
# stat_display = NULL, digits = NULL, pvalue_fun = NULL
# )
#
# # stat_display
# tbl_summary_input_checks(
# data = mtcars, by = NULL, summary_type = NULL, var_label = NULL,
# stat_display = list(continuous = "{mean}", dichot_not = "nope"), digits = NULL, pvalue_fun = NULL
# )
#
# # digits
# tbl_summary_input_checks(
# data = mtcars, by = NULL, summary_type = NULL, var_label = NULL,
# stat_display = NULL, digits = list(hp = 2, wrong_name = 2.1, am = 5.4), pvalue_fun = NULL
# )
# tbl_summary_input_checks(
# data = NULL, by = NULL, summary_type = NULL, var_label = NULL,
# stat_display = NULL, digits = NULL, pvalue_fun = NULL
# )
# stat_label_match -------------------------------------------------------------
# provide a vector of stat_display and get labels back i.e. {mean} ({sd}) gives Mean (SD)
stat_label_match <- function(stat_display, iqr = TRUE) {
language <- get_theme_element("pkgwide-str:language", default = "en")
labels <-
tibble::tribble(
~stat, ~label,
"{min}", translate_text("minimum", language),
"{max}", translate_text("maximum", language),
"{median}", translate_text("median", language),
"{mean}", translate_text("mean", language),
"{sd}", translate_text("SD", language),
"{var}", translate_text("variance", language),
"{n}", translate_text("n", language),
"{N}", translate_text("N", language),
"{p}%", translate_text("%", language),
"{p_miss}%", translate_text("% missing", language),
"{p_nonmiss}%", translate_text("% not missing", language),
"{p}", translate_text("%", language),
"{p_miss}", translate_text("% missing", language),
"{p_nonmiss}", translate_text("% not missing", language),
"{N_miss}", translate_text("N missing", language),
"{N_nonmiss}", translate_text("N", language),
"{N_obs}", translate_text("no. obs.", language)
) %>%
# adding in quartiles
bind_rows(
tibble(stat = paste0("{p", 0:100, "}")) %>%
mutate(label = paste0(gsub("[^0-9\\.]", "", .data$stat), "%"))
) %>%
# if a function does not appear in the above list, then print the function name
bind_rows(
tibble(
stat = str_extract_all(stat_display, "\\{.*?\\}") %>%
unlist() %>%
unique(),
label = .data$stat %>%
str_remove_all(pattern = fixed("}")) %>%
str_remove_all(pattern = fixed("{"))
)
)
# adding IQR replacements if indicated
if (iqr == TRUE) {
labels <-
bind_rows(
tibble::tribble(
~stat, ~label,
"{p25}, {p75}", translate_text("IQR", language),
"{p25} - {p75}", translate_text("IQR", language)
),
labels
)
}
# replacing statistics in {}, with their labels
for (i in seq_len(nrow(labels))) {
stat_display <-
stringr::str_replace_all(
stat_display,
stringr::fixed(labels$stat[i]),
labels$label[i]
)
}
stat_display
}
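# illustrative calls, assuming the default English language theme:
# stat_label_match("{median} ({p25}, {p75})") # "median (IQR)" via the IQR shortcut
# stat_label_match("{n} ({p}%)") # "n (%)"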
# footnote_stat_label ----------------------------------------------------------
# stat_label footnote maker
footnote_stat_label <- function(meta_data) {
meta_data %>%
select(c("summary_type", "stat_label")) %>%
mutate(
summary_type = case_when(
summary_type == "dichotomous" ~ "categorical",
TRUE ~ .data$summary_type
),
message = glue("{stat_label}")
) %>%
distinct() %>%
pull("message") %>%
paste(collapse = "; ") %>%
paste0(translate_text("Statistics presented"), ": ", .)
}
# summarize_categorical --------------------------------------------------------
summarize_categorical <- function(data, variable, by, class, dichotomous_value, sort, percent) {
# grabbing percent formatting function
percent_fun <-
get_theme_element("tbl_summary-fn:percent_fun") %||%
getOption("gtsummary.tbl_summary.percent_fun", default = style_percent)
N_fun <-
get_theme_element("tbl_summary-fn:N_fun",
default = function(x) sprintf("%.0f", x))
# stripping attributes/classes that cause issues -----------------------------
# tidyr::complete throws warning `has different attributes on LHS and RHS of join`
# when variable has label. So deleting it.
attr(data[[variable]], "label") <- NULL
if (!is.null(by)) attr(data[[by]], "label") <- NULL
# same thing when the class "labelled" is included when labeled with the Hmisc package
class(data[[variable]]) <- setdiff(class(data[[variable]]), "labelled")
if (!is.null(by)) class(data[[by]]) <- setdiff(class(data[[by]]), "labelled")
# tabulating data ------------------------------------------------------------
df_by <- df_by(data, by)
variable_by_chr <- c("variable", switch(!is.null(by), "by"))
data <- data %>%
select(c(variable, by)) %>%
# renaming variables to c("variable", "by") (if there is a by variable)
set_names(variable_by_chr)
df_tab <-
data %>%
mutate(
# converting to factor, if not already factor
variable = switch(class, factor = .data$variable) %||% factor(.data$variable),
# adding dichotomous level (in case it is unobserved)
variable = forcats::fct_expand(.data$variable, as.character(dichotomous_value)),
# re-leveling by alphanumeric order or frequency
variable = switch(sort,
"alphanumeric" = .data$variable,
"frequency" = forcats::fct_infreq(.data$variable))
) %>%
{suppressWarnings(count(., !!!syms(variable_by_chr)))} %>%
stats::na.omit() %>%
# if there is a by variable, merging in all levels
{switch(
!is.null(by),
full_join(.,
list(by = df_by$by,
variable = factor(attr(.$variable, "levels"),
levels = attr(.$variable, "levels"))) %>%
purrr::cross_df(),
by = c("by", "variable"))[c("by", "variable", "n")]) %||% .} %>%
tidyr::complete(!!!syms(variable_by_chr), fill = list(n = 0))
# calculating percent
group_by_percent <- switch(
percent,
"cell" = "",
"column" = ifelse(!is.null(by), "by", ""),
"row" = "variable"
)
result <- df_tab %>%
group_by(!!!syms(group_by_percent)) %>%
mutate(
N = sum(.data$n),
# if N is 0, there is no denominator, so the percent is set to NA
p = ifelse(.data$N == 0, NA, .data$n / .data$N)
) %>%
ungroup() %>%
rename(variable_levels = .data$variable) %>%
mutate(variable = !!variable) %>%
select(c(by, variable, "variable_levels", everything()))
if (!is.null(dichotomous_value)) {
result <- result %>%
filter(.data$variable_levels == !!dichotomous_value) %>%
select(-.data$variable_levels)
}
# adding percent_fun as attr to p column
attr(result$p, "fmt_fun") <- percent_fun
attr(result$N, "fmt_fun") <- N_fun
attr(result$n, "fmt_fun") <- N_fun
result
}
# summarize_continuous ---------------------------------------------------------
summarize_continuous <- function(data, variable, by, stat_display, digits) {
# stripping attributes/classes that cause issues -----------------------------
# tidyr::complete throws warning `has different attributes on LHS and RHS of join`
# when variable has label. So deleting it.
attr(data[[variable]], "label") <- NULL
if (!is.null(by)) attr(data[[by]], "label") <- NULL
# same thing when the class "labelled" is included when labeled with the Hmisc package
class(data[[variable]]) <- setdiff(class(data[[variable]]), "labelled")
if (!is.null(by)) class(data[[by]]) <- setdiff(class(data[[by]]), "labelled")
# extracting function calls
fns_names_chr <-
str_extract_all(stat_display, "\\{.*?\\}") %>%
map(str_remove_all, pattern = fixed("}")) %>%
map(str_remove_all, pattern = fixed("{")) %>%
unlist() %>%
# removing elements protected as other items
setdiff(c("p_miss", "p_nonmiss", "N_miss", "N_nonmiss", "N_obs"))
# defining shortcut quantile functions, if needed
if (any(fns_names_chr %in% paste0("p", 0:100))) {
fns_names_chr[fns_names_chr %in% paste0("p", 0:100)] %>%
set_names(.) %>%
imap(~purrr::partial(
quantile,
probs = as.numeric(stringr::str_replace(.x, pattern = "^p", "")) / 100
)) %>%
list2env(envir = rlang::env_parent())
}
if (length(fns_names_chr) == 0) stop(glue(
"No summary function found in `{stat_display}` for variable '{variable}'.\n",
"Did you wrap the function name in curly brackets?"
), call. = FALSE)
if (any(c("by", "variable") %in% fns_names_chr)) {
stop(paste(
"'by' and 'variable' are protected names, and continuous variables",
"cannot be summarized with functions by these names."), call. = FALSE)
}
# prepping data set
variable_by_chr <- c("variable", switch(!is.null(by), "by"))
df_by <- df_by(data, by)
data <-
data %>%
select(c(variable, by)) %>%
stats::na.omit() %>%
# renaming variables to c("variable", "by") (if there is a by variable)
set_names(variable_by_chr)
# calculating stats for each var and by level
if (!is.null(by)) {
df_stats <-
list(
fn = fns_names_chr,
by = df_by$by
) %>%
cross_df() %>%
mutate(
variable = variable,
value = purrr::map2_dbl(
.data$fn, .data$by,
function(x, y) {
var_vctr <- filter(data, .data$by == y) %>% pull(.data$variable)
if (length(var_vctr) == 0) return(NA)
do.call(what = x, args = list(x = var_vctr))
}
)
) %>%
tidyr::pivot_wider(id_cols = c("by", "variable"), names_from = "fn")
}
else if (is.null(by)) {
df_stats <-
list(fn = fns_names_chr) %>%
cross_df() %>%
mutate(
variable = variable,
value = map_dbl(
.data$fn,
~do.call(what = .x, args = list(x = pull(data, .data$variable)))
)
) %>%
tidyr::pivot_wider(id_cols = c("variable"), names_from = "fn")
}
# adding formatting function as attr to summary statistics columns
fmt_fun <- as.list(rep(digits, length.out = length(fns_names_chr))) %>%
set_names(fns_names_chr)
df_stats <- purrr::imap_dfc(
df_stats,
function(column, colname) {
if(is.null(fmt_fun[[colname]])) return(column)
fmt <- glue("%.{fmt_fun[[colname]]}f")
attr(column, "fmt_fun") <- purrr::partial(sprintf, fmt = !!fmt)
column
}
)
# returning final object
df_stats
}
# df_stats_to_tbl --------------------------------------------------------------
df_stats_to_tbl <- function(data, variable, summary_type, by, var_label, stat_display,
df_stats, missing, missing_text) {
# styling the statistics -----------------------------------------------------
for (v in (names(df_stats) %>% setdiff(c("by", "variable", "variable_levels")))) {
df_stats[[v]] <- df_stats[[v]] %>% attr(df_stats[[v]], "fmt_fun")()
}
# calculating the statistic to be displayed in the cell in the table.
tryCatch(
df_stats$statistic <- glue::glue_data(df_stats, stat_display) %>% as.character(),
error = function(e) {
stop(glue(
"There was an error assembling the summary statistics for '{{variable}}'\n",
" with summary type '{{summary_type}}'.\n\n",
"There are 2 common sources for this error.\n",
"1. You have requested summary statistics meant for continuous\n",
" variables for a variable being summarized as categorical.\n",
" To change the summary type to continuous, add the argument\n",
" `type = list({{variable}} ~ 'continuous')`\n",
"2. One of the functions or statistics from the `statistic=` argument is not valid.",
.open = "{{", .close = "}}"
))
})
# reshaping table to wide ----------------------------------------------------
if (!is.null(by)) {
df_stats_wide <-
df_stats %>%
select(any_of(c("by", "variable", "variable_levels", "statistic"))) %>%
# merging in new column header names
left_join(df_by(data, by)[c("by", "by_col")], by = "by") %>%
tidyr::pivot_wider(id_cols = any_of(c("variable", "variable_levels")),
names_from = "by_col",
values_from = "statistic")
}
else {
df_stats_wide <-
df_stats %>%
select(any_of(c("by", "variable", "variable_levels", "statistic"))) %>%
rename(stat_0 = .data$statistic) %>%
select(any_of(c("variable", "variable_levels", "stat_0")))
}
# setting up structure for table_body
if (summary_type == "categorical") {
# adding a label row for categorical variables
result <-
df_stats_wide %>%
mutate(
row_type = "level",
label = as.character(.data$variable_levels)
) %>%
select(-.data$variable_levels)
# adding label row
result <-
tibble(
variable = variable,
row_type = "label",
label = var_label
) %>%
bind_rows(result)
}
else if (summary_type %in% c("continuous", "dichotomous")) {
result <-
df_stats_wide %>%
mutate(
row_type = "label",
label = var_label
)
}
# add rows for missing -------------------------------------------------------
if (missing == "always" || (missing == "ifany" & sum(is.na(data[[variable]])) > 0)) {
result <-
result %>%
bind_rows(
calculate_missing_row(data = data, variable = variable,
by = by, missing_text = missing_text)
)
}
# returning final object formatted for table_body ----------------------------
# selecting stat_* cols (in the correct order)
stat_vars <- switch(!is.null(by), df_by(data, by)$by_col) %||% "stat_0"
result %>% select(all_of(c("variable", "row_type", "label", stat_vars)))
}
# calculate_missing_row --------------------------------------------------------
calculate_missing_row <- function(data, variable, by, missing_text) {
# converting variable to TRUE/FALSE for missing
data <-
data %>%
select(c(variable, by)) %>%
mutate(
!!variable := is.na(.data[[variable]])
)
# passing the T/F variable through the functions to format as we do in
# the tbl_summary output
summarize_categorical(
data = data, variable = variable, by = by, class = "logical",
dichotomous_value = TRUE, sort = "alphanumeric", percent = "column"
) %>%
{df_stats_to_tbl(
data = data, variable = variable, summary_type = "dichotomous", by = by,
var_label = missing_text, stat_display = "{n}", df_stats = .,
missing = "no", missing_text = "Doesn't Matter -- Text should never appear")} %>%
# changing row_type to missing
mutate(row_type = "missing")
}
# df_stats_fun -----------------------------------------------------------------
# this function creates df_stats in the tbl_summary meta data table
# and includes the number of missing values
df_stats_fun <- function(summary_type, variable, class, dichotomous_value, sort,
stat_display, digits, data, by, percent) {
# first table are the standard stats
t1 <- switch(
summary_type,
"continuous" = summarize_continuous(data = data, variable = variable,
by = by, stat_display = stat_display,
digits = digits),
"categorical" = summarize_categorical(data = data, variable = variable,
by = by, class = class,
dichotomous_value = dichotomous_value,
sort = sort, percent = percent),
"dichotomous" = summarize_categorical(data = data, variable = variable,
by = by, class = class,
dichotomous_value = dichotomous_value,
sort = sort, percent = percent)
)
# adding the N_obs and N_missing, etc
t2 <- summarize_categorical(data = mutate_at(data, vars(all_of(variable)), is.na),
variable = variable,
by = by, class = "logical",
dichotomous_value = TRUE,
sort = "alphanumeric", percent = percent) %>%
rename(p_miss = .data$p, N_obs = .data$N, N_miss = .data$n) %>%
mutate(N_nonmiss = .data$N_obs - .data$N_miss,
p_nonmiss = 1 - .data$p_miss)
# returning table with all stats
merge_vars <- switch(!is.null(by), c("by", "variable")) %||% "variable"
return <- left_join(t1, t2, by = merge_vars)
# setting fmt_fun for percents and integers
attr(return$p_nonmiss, "fmt_fun") <- attr(return$p_miss, "fmt_fun")
attr(return$N_nonmiss, "fmt_fun") <- attr(return$N_miss, "fmt_fun")
return
}
# translation function ---------------------------------------------------------
translate_text <- function(x, language = get_theme_element("pkgwide-str:language", default = "en")) {
if (language == "en") return(x)
# sub-setting on row of text to translate
df_text <- filter(df_translations, .data$en == x)
# if no rows selected OR translation is not provided return x, otherwise the translated text
ifelse(nrow(df_text) != 1 || is.na(df_text[[language]]), x, df_text[[language]])
}
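## Example (illustrative; `df_translations` is the package's internal
## lookup table with an `en` column plus one column per target language):
##   translate_text("Unknown", language = "es")
##   # returns the "es" entry for "Unknown" if one exists, otherwise "Unknown"
##   translate_text("Unknown", language = "en")
##   # English is returned unchanged without consulting the table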
|
\name{ch09data}
\docType{data}
\alias{ch09data}
\alias{m.fac9003}
\alias{m.cpice16.dp7503}
\alias{m.barra.9003}
\alias{m.5cln}
%\alias{m.bnd}
\alias{m.apca0103}
\title{Financial time series for Tsay (2005, chapter 9)}
\description{
Financial time series used in examples in chapter 9.
}
\usage{
data(m.fac9003)
data(m.cpice16.dp7503)
data(m.barra.9003)
data(m.5cln)
#data(m.bnd) <- documented with ch08, also used in ch09
data(m.apca0103)
}
\format{
\itemize{
\item{m.fac9003}{
a zoo object of 168 observations giving simple excess returns of
13 stocks and the Standard and Poor's 500 index over the monthly
series of three-month Treasury bill rates of the secondary market
as the risk-free rate from January 1990 to December 2003. (These
numbers are used in Table 9.1.)
\itemize{
\item{AA}{Alcoa}
\item{AGE}{A. G. Edwards}
\item{CAT}{Caterpillar}
\item{F}{Ford Motor}
\item{FDX}{FedEx}
\item{GM}{General Motors}
\item{HPQ}{Hewlett-Packard}
\item{KMB}{Kimberly-Clark}
\item{MEL}{Mellon Financial}
\item{NYT}{New York Times}
      \item{PG}{Procter & Gamble}
\item{TRB}{Chicago Tribune}
\item{TXN}{Texas Instruments}
\item{SP5}{Standard & Poor's 500 index}
}
}
\item{m.cpice16.dp7503}{
      a zoo object of 168 monthly observations on two macroeconomic variables from
January 1975 through December 2002 (p. 412):
\itemize{
\item{CPI}{
consumer price index for all urban consumers: all items and
with index 1982-1984 = 100
}
\item{CE16}{
Civilian employment numbers 16 years and over: measured in
thousands
}
}
}
\item{m.barra.9003}{
a zoo object giving monthly excess returns of ten stocks from
January 1990 through December 2003:
\itemize{
\item{AGE}{A. G. Edwards}
\item{C}{Citigroup}
\item{MWD}{Morgan Stanley}
\item{MER}{Merrill Lynch}
\item{DELL}{Dell, Inc.}
\item{IBM}{International Business Machines}
\item{AA}{Alcoa}
\item{CAT}{Caterpillar}
      \item{PG}{Procter & Gamble}
}
}
\item{m.5cln}{
a zoo object giving monthly log returns in percentages of 5 stocks
from January 1990 through December 1999:
\itemize{
\item{IBM}{International Business Machines}
\item{HPQ}{Hewlett-Packard}
\item{INTC}{Intel}
\item{MER}{Merrill Lynch}
\item{MWD}{Morgan Stanley Dean Witter}
}
}
\item{m.apca0103}{
data.frame of monthly simple returns of 40 stocks from January
2001 through December 2003, discussed in sect. 9.6.2, pp. 437ff.
\itemize{
\item{CompanyID}{5-digit company identification code}
\item{date}{the last workday of the month}
\item{return}{in percent}
}
}
}
}
%\details{
%}
\source{
  \url{http://faculty.chicagogsb.edu/ruey.tsay/teaching/fts2}
}
\references{
Ruey Tsay (2005)
  Analysis of Financial Time Series, 2nd ed. (Wiley, ch. 9)
}
\keyword{datasets}
\seealso{
\code{\link{ch01data}}
\code{\link{ch02data}}
\code{\link{ch03data}}
\code{\link{ch04data}}
\code{\link{ch05data}}
\code{\link{ch06data}}
}
\examples{
data(m.apca0103)
dim(m.apca0103)
# 1440 3; 1440 = 40*36
# Are the dates all the same?
sameDates <- rep(NA, 39)
for(i in 1:39)
sameDates[i] <- with(m.apca0103, all.equal(date[1:36],
date[(i*36)+1:36]))
stopifnot(all(sameDates))
M.apca0103 <- with(m.apca0103, array(return, dim=c(36, 40), dimnames=
list(NULL, paste("Co", CompanyID[seq(1, 1440, 36)], sep=""))))
}
|
/man/ch09data.Rd
|
no_license
|
RahmanMahmudurTokyo/FinTS
|
R
| false
| false
| 3,605
|
rd
|
# Extract eigenvalues from an object that has them
`eigenvals` <-
function(x, ...)
{
UseMethod("eigenvals")
}
`eigenvals.default`<-
function(x, ...)
{
## svd and eigen return unspecified 'list', see if this could be
    ## either of them (as cmdscale() does)
out <- NA
if (is.list(x)) {
## eigen
if (length(x) == 2 && all(names(x) %in% c("values", "vectors")))
out <- x$values
## svd: return squares of singular values
else if (length(x) == 3 && all(names(x) %in% c("d", "u", "v")))
out <- x$d^2
## cmdscale() will return all eigenvalues from R 2.12.1
else if (all(c("points","eig","GOF") %in% names(x)))
out <- x$eig
}
class(out) <- "eigenvals"
out
}
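## Example (illustrative, base R only): the squared singular values of a
## matrix X equal the eigenvalues of crossprod(X), and both list shapes
## dispatch through the default method above.
##   X <- matrix(rnorm(20), 5, 4)
##   eigenvals(eigen(crossprod(X)))   # picked up from $values
##   eigenvals(svd(X))                # picked up as $d^2, same values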
## squares of sdev
`eigenvals.prcomp` <-
function(x, ...)
{
out <- x$sdev^2
names(out) <- colnames(x$rotation)
class(out) <- "eigenvals"
out
}
## squares of sdev
`eigenvals.princomp` <-
function(x, ...)
{
out <- x$sdev^2
class(out) <- "eigenvals"
out
}
## concatenate constrained and unconstrained eigenvalues in cca, rda
## and capscale (vegan) -- ignore pCCA component
`eigenvals.cca` <- function(x, constrained = FALSE, ...)
{
if (constrained)
out <- x$CCA$eig
else
out <- c(x$CCA$eig, x$CA$eig)
if (!is.null(out))
class(out) <- "eigenvals"
out
}
## wcmdscale (in vegan)
`eigenvals.wcmdscale` <-
function(x, ...)
{
out <- x$eig
class(out) <- "eigenvals"
out
}
## pcnm (in vegan)
`eigenvals.pcnm` <-
function(x, ...)
{
out <- x$values
class(out) <- "eigenvals"
out
}
## betadisper (vegan)
`eigenvals.betadisper` <- function(x, ...) {
out <- x$eig
class(out) <- "eigenvals"
out
}
## dudi objects of ade4
`eigenvals.dudi` <-
function(x, ...)
{
out <- x$eig
class(out) <- "eigenvals"
out
}
## labdsv::pco
`eigenvals.pco` <-
function(x, ...)
{
out <- x$eig
class(out) <- "eigenvals"
out
}
## labdsv::pca
`eigenvals.pca` <-
function(x, ...)
{
out <- x$sdev^2
## pca() may return only some first eigenvalues
if ((seig <- sum(out)) < x$totdev) {
names(out) <- paste("PC", seq_along(out), sep="")
out <- c(out, "Rest" = x$totdev - seig)
}
class(out) <- "eigenvals"
out
}
`print.eigenvals` <-
function(x, ...)
{
print(zapsmall(unclass(x), ...))
invisible(x)
}
`summary.eigenvals` <-
function(object, ...)
{
## abs(object) is to handle neg eigenvalues of wcmdscale and
## capscale
vars <- object/sum(abs(object))
importance <- rbind(`Eigenvalue` = object,
`Proportion Explained` = round(abs(vars), 5),
`Cumulative Proportion`= round(cumsum(abs(vars)), 5))
out <- list(importance = importance)
class(out) <- c("summary.eigenvals")
out
}
## before R svn commit 70391 we used print.summary.prcomp, but now we
## need our own version that is similar to pre-70391 R function
`print.summary.eigenvals` <-
function(x, digits = max(3L, getOption("digits") - 3L), ...)
{
cat("Importance of components:\n")
print(x$importance, digits = digits, ...)
invisible(x)
}
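## End-to-end example (illustrative): with standardised data the
## eigenvalues of a principal components fit sum to the number of
## variables, and summary() reports proportions explained.
##   pc <- prcomp(USArrests, scale. = TRUE)
##   ev <- eigenvals(pc)   # squared sdev, named PC1..PC4; sum(ev) is 4
##   summary(ev)           # eigenvalue / proportion / cumulative table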
|
/vegan/R/eigenvals.R
|
no_license
|
ingted/R-Examples
|
R
| false
| false
| 3,236
|
r
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/wta_odds.R
\name{wta_odds}
\alias{wta_odds}
\title{WTA Match Odds}
\format{A data frame with 21,291 rows and 43 variables}
\source{
\url{http://www.tennis-data.co.uk/alldata.php}
}
\description{
This dataset contains outcomes of matches on the WTA Tour from 2007 forward along with betting odds. The variables of the dataset are:
}
\details{
\itemize{
\item WTA. A numeric value for the tournament number
\item Location. A character venue of tournament
\item Tournament. A character name of tournament (including sponsor if relevant)
\item Date. A date object for the date of match
\item Tier. A character name of tier (tournament ranking).
\item Court. A character indicating the type of court (Outdoor or Indoor)
\item Surface. A character indicating the type of surface (Clay, Hard, Carpet or Grass)
\item Round. A character indicating the round of match
\item Best.of. A numeric with the maximum number of sets playable in match
\item Winner. A character (Last First Initial) name of the match winner
\item Loser. A character (Last First Initial) name of the match loser
\item WRank. A character WTA Entry ranking of the match winner as of the start of the tournament
\item LRank. A character WTA Entry ranking of the match loser as of the start of the tournament
\item WPts. A character WTA Entry points of the match winner as of the start of the tournament
\item LPts. A character WTA Entry points of the match loser as of the start of the tournament
\item W1. A numeric Number of games won in 1st set by match winner
\item L1. A numeric Number of games won in 1st set by match loser
\item W2. A numeric Number of games won in 2nd set by match winner
\item L2. A numeric Number of games won in 2nd set by match loser
\item W3. A numeric Number of games won in 3rd set by match winner
\item L3. A numeric Number of games won in 3rd set by match loser
\item Wsets. A numeric Number of sets won by match winner
\item Lsets. A numeric Number of sets won by match loser
\item Comment. A character Comment on the match (Completed, won through retirement of loser, or via Walkover)
\item B365W. A numeric Bet365 odds of match winner
\item B365L. A numeric Bet365 odds of match loser
\item CBW. A numeric Centrebet odds of match winner
\item CBL. A numeric Centrebet odds of match loser
\item EXW. A numeric Expekt odds of match winner
\item EXL. A numeric Expekt odds of match loser
\item PSW. A numeric Pinnacle Sports odds of match winner
\item PSL. A numeric Pinnacle Sports odds of match loser
\item UBW. A numeric Unibet odds of match winner
\item UBL. A numeric Unibet odds of match loser
\item LBW. A numeric Ladbrokes odds of match winner
\item LBL. A numeric Ladbrokes odds of match loser
\item SJW. A numeric Stan James odds of match winner
\item SJL. A numeric Stan James odds of match loser
\item MaxW. A numeric Maximum odds of match winner (as shown by Oddsportal.com)
\item MaxL. A numeric Maximum odds of match loser (as shown by Oddsportal.com)
\item AvgW. A numeric Average odds of match winner (as shown by Oddsportal.com)
\item AvgL. A numeric Average odds of match loser (as shown by Oddsportal.com)
\item url. A character with the url page for the tournament CSV files
}
}
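% Illustrative example (not from the original file): converting the decimal
% odds columns to implied win probabilities, normalising out the bookmaker
% margin (overround).
\examples{
data(wta_odds)
p_w <- 1 / wta_odds$B365W
p_l <- 1 / wta_odds$B365L
## rescale so each pair of probabilities sums to 1
p_win <- p_w / (p_w + p_l)
summary(p_win)
}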
|
/man/wta_odds.Rd
|
no_license
|
dashee87/deuce
|
R
| false
| true
| 3,361
|
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/civis_ml_workflows.R
\name{civis_ml_sparse_logistic}
\alias{civis_ml_sparse_logistic}
\title{CivisML Sparse Logistic}
\usage{
civis_ml_sparse_logistic(x, dependent_variable, primary_key = NULL,
excluded_columns = NULL, penalty = c("l2", "l1"), dual = FALSE,
tol = 1e-08, C = 499999950, fit_intercept = TRUE,
intercept_scaling = 1, class_weight = NULL, random_state = 42,
solver = c("liblinear", "newton-cg", "lbfgs", "sag"), max_iter = 100,
multi_class = c("ovr", "multinomial"), fit_params = NULL,
cross_validation_parameters = NULL, calibration = NULL,
oos_scores_table = NULL, oos_scores_db = NULL,
oos_scores_if_exists = c("fail", "append", "drop", "truncate"),
model_name = NULL, cpu_requested = NULL, memory_requested = NULL,
disk_requested = NULL, notifications = NULL,
polling_interval = NULL, verbose = FALSE)
}
\arguments{
\item{x}{See the Data Sources section below.}
\item{dependent_variable}{The dependent variable of the training dataset.
For a multi-target problem, this should be a vector of column names of
dependent variables. Nulls in a single dependent variable will
automatically be dropped.}
\item{primary_key}{Optional, the unique ID (primary key) of the training
dataset. This will be used to index the out-of-sample scores. In
\code{predict.civis_ml}, the primary_key of the training task is used by
default \code{primary_key = NA}. Use \code{primary_key = NULL} to
explicitly indicate the data have no primary_key.}
\item{excluded_columns}{Optional, a vector of columns which will be
considered ineligible to be independent variables.}
\item{penalty}{Used to specify the norm used in the penalization. The
\code{newton-cg}, \code{sag}, and \code{lbfgs} solvers support only l2
penalties.}
\item{dual}{Dual or primal formulation. Dual formulation is only implemented
for \code{l2} penalty with the \code{liblinear} solver. \code{dual = FALSE}
should be preferred when n_samples > n_features.}
\item{tol}{Tolerance for stopping criteria.}
\item{C}{Inverse of regularization strength, must be a positive float.
Smaller values specify stronger regularization.}
\item{fit_intercept}{Should a constant or intercept term be included in the
model.}
\item{intercept_scaling}{Useful only when the \code{solver = "liblinear"}
and \code{fit_intercept = TRUE}. In this case, a constant term with the
value \code{intercept_scaling} is added to the design matrix.}
\item{class_weight}{A \code{list} with \code{class_label = value} pairs, or
\code{balanced}. When \code{class_weight = "balanced"}, the class weights
will be inversely proportional to the class frequencies in the input data
as:
\deqn{ \frac{n_samples}{n_classes * table(y)} }
Note, the class weights are multiplied with \code{sample_weight}
(passed via \code{fit_params}) if \code{sample_weight} is specified.}
\item{random_state}{The seed of the random number generator to use when
shuffling the data. Used only in \code{solver = "sag"} and
\code{solver = "liblinear"}.}
\item{solver}{Algorithm to use in the optimization problem. For small data
\code{liblinear} is a good choice. \code{sag} is faster for larger
problems. For multiclass problems, only \code{newton-cg}, \code{sag}, and
\code{lbfgs} handle multinomial loss. \code{liblinear} is limited to
one-versus-rest schemes. \code{newton-cg}, \code{lbfgs}, and \code{sag}
only handle the \code{l2} penalty.
Note that \code{sag} fast convergence is only guaranteed on features with
approximately the same scale.}
\item{max_iter}{The maximum number of iterations taken for the solvers to
converge. Useful for the \code{newton-cg}, \code{sag}, and \code{lbfgs}
solvers.}
\item{multi_class}{The scheme for multi-class problems. When \code{ovr}, then
a binary problem is fit for each label. When \code{multinomial}, a single
model is fit minimizing the multinomial loss. Note, \code{multinomial} only
works with the \code{newton-cg}, \code{sag}, and \code{lbfgs} solvers.}
\item{fit_params}{Optional, a mapping from parameter names in the model's
\code{fit} method to the column names which hold the data, e.g.
\code{list(sample_weight = 'survey_weight_column')}.}
\item{cross_validation_parameters}{Optional, parameter grid for learner
parameters, e.g. \code{list(n_estimators = c(100, 200, 500),
learning_rate = c(0.01, 0.1), max_depth = c(2, 3))}
or \code{"hyperband"} for supported models.}
\item{calibration}{Optional, if not \code{NULL}, calibrate output
probabilities with the selected method, \code{sigmoid}, or \code{isotonic}.
Valid only with classification models.}
\item{oos_scores_table}{Optional, if provided, store out-of-sample
predictions on training set data to this Redshift "schema.tablename".}
\item{oos_scores_db}{Optional, the name of the database where the
\code{oos_scores_table} will be created. If not provided, this will default
to \code{database_name}.}
\item{oos_scores_if_exists}{Optional, action to take if
\code{oos_scores_table} already exists. One of \code{"fail"}, \code{"append"}, \code{"drop"}, or \code{"truncate"}.
The default is \code{"fail"}.}
\item{model_name}{Optional, the prefix of the Platform modeling jobs.
It will have \code{" Train"} or \code{" Predict"} added to become the Script title.}
\item{cpu_requested}{Optional, the number of CPU shares requested in the
Civis Platform for training jobs or prediction child jobs.
1024 shares = 1 CPU.}
\item{memory_requested}{Optional, the memory requested from Civis Platform
for training jobs or prediction child jobs, in MiB.}
\item{disk_requested}{Optional, the disk space requested on Civis Platform
for training jobs or prediction child jobs, in GB.}
\item{notifications}{Optional, model status notifications. See
\code{\link{scripts_post_custom}} for further documentation about email
and URL notification.}
\item{polling_interval}{Check for job completion every this number of seconds.}
\item{verbose}{Optional, if \code{TRUE}, supply debug outputs in Platform
logs and make prediction child jobs visible.}
}
\value{
A \code{civis_ml} object, a list containing the following elements:
\item{job}{job metadata from \code{\link{scripts_get_custom}}.}
\item{run}{run metadata from \code{\link{scripts_get_custom_runs}}.}
\item{outputs}{CivisML metadata from \code{\link{scripts_list_custom_runs_outputs}} containing the locations of
files produced by CivisML e.g. files, projects, metrics, model_info, logs, predictions, and estimators.}
\item{metrics}{Parsed CivisML output from \code{metrics.json} containing metadata from validation.
A list containing the following elements:
\itemize{
\item run list, metadata about the run.
\item data list, metadata about the training data.
\item model list, the fitted scikit-learn model with CV results.
\item metrics list, validation metrics (accuracy, confusion, ROC, AUC, etc).
\item warnings list.
\item data_platform list, training data location.
}}
\item{model_info}{Parsed CivisML output from \code{model_info.json} containing metadata from training.
A list containing the following elements:
\itemize{
\item run list, metadata about the run.
\item data list, metadata about the training data.
\item model list, the fitted scikit-learn model.
\item metrics empty list.
\item warnings list.
\item data_platform list, training data location.
}}
}
\description{
CivisML Sparse Logistic
}
\section{Data Sources}{
For building models with \code{civis_ml}, the training data can reside in
four different places: a file in the Civis Platform, a CSV or feather-format file
on the local disk, a \code{data.frame} in the local R environment, or
a table in the Civis Platform. Use the following helpers to specify the
data source when calling \code{civis_ml}:
\describe{
\item{\code{data.frame}}{\code{civis_ml(x = df, ...)}}
\item{local csv file}{\code{civis_ml(x = "path/to/data.csv", ...)}}
\item{file in Civis Platform}{\code{civis_ml(x = civis_file(1234))}}
\item{table in Civis Platform}{\code{civis_ml(x = civis_table(table_name = "schema.table", database_name = "database"))}}
}
}
\examples{
\dontrun{
df <- iris
names(df) <- stringr::str_replace(names(df), "\\\\.", "_")
m <- civis_ml_sparse_logistic(df, "Species")
yhat <- fetch_oos_scores(m)
# Grid Search
cv_params <- list(C = c(.01, 1, 10, 100, 1000))
m <- civis_ml_sparse_logistic(df, "Species",
cross_validation_parameters = cv_params)
# make a prediction job, storing in a redshift table
pred_info <- predict(m, newdata = civis_table("schema.table", "my_database"),
output_table = "schema.scores_table")
}
}
|
/man/civis_ml_sparse_logistic.Rd
|
no_license
|
ajchin/civis-r
|
R
| false
| true
| 8,669
|
rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/civis_ml_workflows.R
\name{civis_ml_sparse_logistic}
\alias{civis_ml_sparse_logistic}
\title{CivisML Sparse Logistic}
\usage{
civis_ml_sparse_logistic(x, dependent_variable, primary_key = NULL,
excluded_columns = NULL, penalty = c("l2", "l1"), dual = FALSE,
tol = 1e-08, C = 499999950, fit_intercept = TRUE,
intercept_scaling = 1, class_weight = NULL, random_state = 42,
solver = c("liblinear", "newton-cg", "lbfgs", "sag"), max_iter = 100,
multi_class = c("ovr", "multinomial"), fit_params = NULL,
cross_validation_parameters = NULL, calibration = NULL,
oos_scores_table = NULL, oos_scores_db = NULL,
oos_scores_if_exists = c("fail", "append", "drop", "truncate"),
model_name = NULL, cpu_requested = NULL, memory_requested = NULL,
disk_requested = NULL, notifications = NULL,
polling_interval = NULL, verbose = FALSE)
}
\arguments{
\item{x}{See the Data Sources section below.}
\item{dependent_variable}{The dependent variable of the training dataset.
For a multi-target problem, this should be a vector of column names of
dependent variables. Nulls in a single dependent variable will
automatically be dropped.}
\item{primary_key}{Optional, the unique ID (primary key) of the training
dataset. This will be used to index the out-of-sample scores. In
\code{predict.civis_ml}, the primary_key of the training task is used by
default \code{primary_key = NA}. Use \code{primary_key = NULL} to
explicitly indicate the data have no primary_key.}
\item{excluded_columns}{Optional, a vector of columns which will be
considered ineligible to be independent variables.}
\item{penalty}{Used to specify the norm used in the penalization. The
\code{newton-cg}, \code{sag}, and \code{lbfgs} solvers support only l2
penalties.}
\item{dual}{Dual or primal formulation. Dual formulation is only implemented
for \code{l2} penalty with the \code{liblinear} solver. \code{dual = FALSE}
should be preferred when n_samples > n_features.}
\item{tol}{Tolerance for stopping criteria.}
\item{C}{Inverse of regularization strength, must be a positive float.
Smaller values specify stronger regularization.}
\item{fit_intercept}{Should a constant or intercept term be included in the
model.}
\item{intercept_scaling}{Useful only when the \code{solver = "liblinear"}
and \code{fit_intercept = TRUE}. In this case, a constant term with the
value \code{intercept_scaling} is added to the design matrix.}
\item{class_weight}{A \code{list} with \code{class_label = value} pairs, or
\code{balanced}. When \code{class_weight = "balanced"}, the class weights
will be inversely proportional to the class frequencies in the input data
as:
\deqn{ \frac{n_samples}{n_classes * table(y)} }
Note, the class weights are multiplied with \code{sample_weight}
(passed via \code{fit_params}) if \code{sample_weight} is specified.}
\item{random_state}{The seed of the random number generator to use when
shuffling the data. Used only in \code{solver = "sag"} and
\code{solver = "liblinear"}.}
\item{solver}{Algorithm to use in the optimization problem. For small data
\code{liblinear} is a good choice. \code{sag} is faster for larger
problems. For multiclass problems, only \code{newton-cg}, \code{sag}, and
\code{lbfgs} handle multinomial loss. \code{liblinear} is limited to
one-versus-rest schemes. \code{newton-cg}, \code{lbfgs}, and \code{sag}
only handle the \code{l2} penalty.
Note that \code{sag} fast convergence is only guaranteed on features with
approximately the same scale.}
\item{max_iter}{The maximum number of iterations taken for the solvers to
converge. Useful for the \code{newton-cg}, \code{sag}, and \code{lbfgs}
solvers.}
\item{multi_class}{The scheme for multi-class problems. When \code{ovr}, then
a binary problem is fit for each label. When \code{multinomial}, a single
model is fit minimizing the multinomial loss. Note, \code{multinomial} only
works with the \code{newton-cg}, \code{sag}, and \code{lbfgs} solvers.}
\item{fit_params}{Optional, a mapping from parameter names in the model's
\code{fit} method to the column names which hold the data, e.g.
\code{list(sample_weight = 'survey_weight_column')}.}
\item{cross_validation_parameters}{Optional, parameter grid for learner
parameters, e.g. \code{list(n_estimators = c(100, 200, 500),
learning_rate = c(0.01, 0.1), max_depth = c(2, 3))}
or \code{"hyperband"} for supported models.}
\item{calibration}{Optional, if not \code{NULL}, calibrate output
probabilities with the selected method, \code{sigmoid}, or \code{isotonic}.
Valid only with classification models.}
\item{oos_scores_table}{Optional, if provided, store out-of-sample
predictions on training set data to this Redshift "schema.tablename".}
\item{oos_scores_db}{Optional, the name of the database where the
\code{oos_scores_table} will be created. If not provided, this will default
to \code{database_name}.}
\item{oos_scores_if_exists}{Optional, action to take if
\code{oos_scores_table} already exists. One of \code{"fail"}, \code{"append"}, \code{"drop"}, or \code{"truncate"}.
The default is \code{"fail"}.}
\item{model_name}{Optional, the prefix of the Platform modeling jobs.
It will have \code{" Train"} or \code{" Predict"} added to become the Script title.}
\item{cpu_requested}{Optional, the number of CPU shares requested in the
Civis Platform for training jobs or prediction child jobs.
1024 shares = 1 CPU.}
\item{memory_requested}{Optional, the memory requested from Civis Platform
for training jobs or prediction child jobs, in MiB.}
\item{disk_requested}{Optional, the disk space requested on Civis Platform
for training jobs or prediction child jobs, in GB.}
\item{notifications}{Optional, model status notifications. See
\code{\link{scripts_post_custom}} for further documentation about email
and URL notification.}
\item{polling_interval}{Check for job completion every this number of seconds.}
\item{verbose}{Optional, If \code{TRUE}, supply debug outputs in Platform
logs and make prediction child jobs visible.}
}
\value{
A \code{civis_ml} object, a list containing the following elements:
\item{job}{job metadata from \code{\link{scripts_get_custom}}.}
\item{run}{run metadata from \code{\link{scripts_get_custom_runs}}.}
\item{outputs}{CivisML metadata from \code{\link{scripts_list_custom_runs_outputs}},
containing the locations of files produced by CivisML, e.g. files, projects,
metrics, model_info, logs, predictions, and estimators.}
\item{metrics}{Parsed CivisML output from \code{metrics.json} containing metadata from validation.
A list containing the following elements:
\itemize{
\item run list, metadata about the run.
\item data list, metadata about the training data.
\item model list, the fitted scikit-learn model with CV results.
\item metrics list, validation metrics (accuracy, confusion, ROC, AUC, etc).
\item warnings list.
\item data_platform list, training data location.
}}
\item{model_info}{Parsed CivisML output from \code{model_info.json} containing metadata from training.
A list containing the following elements:
\itemize{
\item run list, metadata about the run.
\item data list, metadata about the training data.
\item model list, the fitted scikit-learn model.
\item metrics empty list.
\item warnings list.
\item data_platform list, training data location.
}}
}
\description{
CivisML Sparse Logistic
}
\section{Data Sources}{
For building models with \code{civis_ml}, the training data can reside in
four different places: a file in the Civis Platform, a CSV or feather-format file
on the local disk, a \code{data.frame} resident in the local R environment, or
a table in the Civis Platform. Use the following helpers to specify the
data source when calling \code{civis_ml}:
\describe{
\item{\code{data.frame}}{\code{civis_ml(x = df, ...)}}
\item{local csv file}{\code{civis_ml(x = "path/to/data.csv", ...)}}
\item{file in Civis Platform}{\code{civis_ml(x = civis_file(1234))}}
\item{table in Civis Platform}{\code{civis_ml(x = civis_table(table_name = "schema.table", database_name = "database"))}}
}
}
\examples{
\dontrun{
df <- iris
names(df) <- stringr::str_replace(names(df), "\\\\.", "_")
m <- civis_ml_sparse_logistic(df, "Species")
yhat <- fetch_oos_scores(m)
# Grid Search
cv_params <- list(C = c(.01, 1, 10, 100, 1000))
m <- civis_ml_sparse_logistic(df, "Species",
cross_validation_parameters = cv_params)
# make a prediction job, storing in a redshift table
pred_info <- predict(m, newdata = civis_table("schema.table", "my_database"),
output_table = "schema.scores_table")
}
}
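The \code{class_weight = "balanced"} formula documented above, \code{n_samples / (n_classes * table(y))}, weights each class by its inverse frequency. The following snippet is purely illustrative and independent of CivisML; it just evaluates that formula in base R:

```r
# Evaluate the "balanced" class-weight formula:
# w_k = n_samples / (n_classes * table(y))  -- inverse class frequency.
y <- factor(c("a", "a", "a", "b"))           # 3 of class "a", 1 of class "b"
counts <- table(y)
weights <- length(y) / (nlevels(y) * counts)
print(weights)
# The rare class "b" receives three times the weight of "a".
```

When `sample_weight` is also passed via `fit_params`, these class weights are multiplied with it, as noted in the parameter description.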
|
library(glmnet)
mydata = read.table("./TrainingSet/AvgRank/thyroid.csv",head=T,sep=",")
x = as.matrix(mydata[,4:ncol(mydata)])
y = as.matrix(mydata[,1])
set.seed(123)
glm = cv.glmnet(x,y,nfolds=10,type.measure="mae",alpha=0.3,family="gaussian",standardize=TRUE)
sink('./Model/EN/AvgRank/thyroid/thyroid_043.txt',append=TRUE)
print(glm$glmnet.fit)
sink()
|
/Model/EN/AvgRank/thyroid/thyroid_043.R | no_license | leon1003/QSMART | R | 354 bytes
|
# The .Rmd could not be exported, so this is sent as an R file.
# Pages were extracted in batches, so the variables this code produces may differ from the actual data.
# Running this code can take some time.
# Arbitrary page numbers in the for loop do not return proper data,
# so the range below was set after checking the actual page numbers on the site.
library(XML) # load package
library(stringr) # load package
library(rvest) # load package
setwd("c:\\r_temp") # set working directory
getwd()
all.url<-c()
all.text <- c()
urlall <- "http://www.jobkorea.co.kr/Starter/PassAssay/View/" # store the URL prefix in a variable
for(EEE in 144296:145579){ # loop over page numbers; EEE is the page number
url <- paste(urlall,EEE,"?Page=1&OrderBy=0&FavorCo_Stat=0&Pass_An_Stat=0",sep="") # build the full page URL
jasoju <- read_html(url) # fetch the page source
if(str_detect(jasoju,"선택하신 합격자소서 정보가 없습니다.")==T){ # skip if the page does not exist ("no such essay" message)
next
}else{
b<-html_nodes(jasoju, 'title') # grab the <title> node from the page source
}
if(str_detect(b,'현대')==F){ # skip unless the <title> contains 현대 (Hyundai)
next
}else{
tx <- html_nodes(jasoju, "span.tx") # put the text inside <span class="tx"> into tx
text <- html_text(tx[!str_detect(tx, "href")]) # drop nodes whose markup contains <a href (...)
text <- str_replace_all(text,"글자수","") # strip site boilerplate ("character count")
text <- str_replace_all(text,"아쉬운점","") # strip site boilerplate ("weak points")
text <- str_replace_all(text,"좋은점","") # strip site boilerplate ("good points")
text <- str_replace_all(text,"1\r","") # strip site boilerplate
text <- str_replace_all(text,"2\r","") # strip site boilerplate
text <- str_replace_all(text,"\r\n","") # strip site boilerplate
text <- str_replace_all(text,"Byte\r\n","") # strip site boilerplate
text <- str_replace_all(text,"\r","") # strip site boilerplate
text <- str_replace_all(text,"Byte","") # strip site boilerplate
text <- str_replace_all(text,"text","") # strip site boilerplate
}
all.text<-rbind(all.text,text) # append the extracted essay text to all.text
all.url<-c(all.url,url) # append the URL
}
all.text<-str_replace_all(all.text,"text","") # strip leftover "text" tokens
write.csv(all.text,'현대 질문.csv') # save as CSV in the working directory
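The scraper above cleans the extracted essays with a chain of `str_replace_all` calls that delete fixed tokens. A minimal base-R equivalent of that cleanup step, using `gsub` with `fixed = TRUE` so no stringr dependency is needed (the token list mirrors the script's effective behavior; `"Byte\r\n"` is dropped since `"\r\n"` is removed first):

```r
# Sketch of the token-stripping step above in base R. Each token is
# removed literally, in order, from the essay text.
clean_essay <- function(text) {
  for (tok in c("글자수", "아쉬운점", "좋은점", "1\r", "2\r",
                "\r\n", "Byte", "\r", "text")) {
    text <- gsub(tok, "", text, fixed = TRUE)
  }
  text
}
print(clean_essay("abc Byte\r\n"))   # -> "abc "
```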
|
/hyundai/현대 질문 크롤링.R | no_license | parkkuri/-project- | R | 2,850 bytes
|
renv_watchdog_server_start <- function(client) {
# initialize logging
renv_log_init()
# create socket server
server <- renv_socket_server()
dlog("watchdog-server", "Listening on port %i.", server$port)
# communicate information back to client
dlog("watchdog-server", "Waiting for client...")
metadata <- list(port = server$port, pid = server$pid)
conn <- renv_socket_connect(port = client$port, open = "wb")
serialize(metadata, connection = conn)
close(conn)
dlog("watchdog-server", "Synchronized with client.")
# initialize locks
lockenv <- new.env(parent = emptyenv())
# start listening for connections
repeat tryCatch(
renv_watchdog_server_run(server, client, lockenv),
error = function(e) {
dlog("watchdog-server", "Error: %s", conditionMessage(e))
}
)
}
renv_watchdog_server_run <- function(server, client, lockenv) {
# check for parent exit
if (!renv_process_exists(client$pid)) {
dlog("watchdog-server", "Client process has exited; shutting down.")
renv_watchdog_server_exit(server, client, lockenv)
}
# set file time on owned locks, so we can see they're not orphaned
dlog("watchdog-server", "Refreshing lock times.")
locks <- ls(envir = lockenv, all.names = TRUE)
renv_lock_refresh(locks)
# wait for connection
dlog("watchdog-server", "Waiting for connection...")
conn <- renv_socket_accept(server$socket, open = "rb", timeout = 1)
defer(close(conn))
# read the request
dlog("watchdog-server", "Received connection; reading data.")
request <- unserialize(conn)
dlog("watchdog-server", "Received request.")
str(request)
# handle the request
switch(
request$method %||% "<missing>",
ListLocks = {
dlog("watchdog-server", "Executing 'ListLocks' request.")
conn <- renv_socket_connect(port = request$port, open = "wb")
defer(close(conn))
locks <- ls(envir = lockenv, all.names = TRUE)
serialize(locks, connection = conn)
},
LockAcquired = {
dlog("watchdog-server", "Acquired lock on path '%s'.", request$data$path)
assign(request$data$path, TRUE, envir = lockenv)
},
LockReleased = {
dlog("watchdog-server", "Released lock on path '%s'.", request$data$path)
rm(list = request$data$path, envir = lockenv)
},
Shutdown = {
dlog("watchdog-server", "Received shutdown request; shutting down.")
renv_watchdog_server_exit(server, client, lockenv)
},
"<missing>" = {
dlog("watchdog-server", "Received request with no method field available.")
},
{
dlog("watchdog-server", "Unknown method '%s'", request$method)
}
)
}
renv_watchdog_server_exit <- function(server, client, lockenv) {
# remove any existing locks
locks <- ls(envir = lockenv, all.names = TRUE)
unlink(locks, recursive = TRUE, force = TRUE)
# shut down the socket server
close(server$socket)
# quit
quit(status = 0)
}
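The request dispatch above relies on `request$method %||% "<missing>"` to fall back to a default when no `method` field was sent. renv defines `%||%` internally; a minimal equivalent null-default operator looks like this (a sketch, not renv's exact source):

```r
# Null-default ("null coalescing") operator in the spirit of the `%||%`
# used in the dispatch above: return rhs only when lhs is NULL.
`%||%` <- function(lhs, rhs) if (is.null(lhs)) rhs else lhs

request <- list(data = list(path = "/tmp/renv.lock"))  # no $method field
print(request$method %||% "<missing>")                  # -> "<missing>"
```

This lets the `switch()` land on the `"<missing>"` branch instead of erroring on a `NULL` selector.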
|
/R/watchdog-server.R | permissive | rstudio/renv | R | 2,965 bytes
|
library('dplyr')
library('tidyr')
library('ggplot2')
salaries_SF <- read.csv("Salaries.csv")
str(salaries_SF)
ssf <- salaries_SF
dc <- c('Id', 'Notes', 'Agency')
#ssf <- subset(ssf, select = -c(1, 11, 12))
ssf <- ssf %>% select( -one_of(dc))
# Names & Job Title
ssf_names <- select(ssf, c(1:2))
ssf_names <- ssf_names %>%
mutate_all(funs(factor(.))) # assign the result so the factor conversion is kept
# Pay and Benefits
ssf_pay <- select(ssf, c(3:8)) %>%
mutate_if(is.factor, as.character) %>%
mutate_if(is.character, as.double)
ssf <- bind_cols(ssf_names, ssf_pay, select(ssf, c(9:10)))
ssfreg<-ssf[1:500,]
#-----------------------------------------------------
ggplot(ssf,aes(x=TotalPay))+geom_histogram()+facet_wrap(~Year)
varpie <- select(ssfreg,2) %>%
group_by(JobTitle) %>%
summarise(number = n()) %>% arrange(number)
tail(varpie)
pie(tail(varpie$number), main="Jobs", labels =tail(varpie$JobTitle),col= c(1,2,3,4,5,6),cex=0.6)
legend("bottomright",legend= tail(varpie$JobTitle), cex=0.3, fill= c(1,2,3,4,5,6))
#------------------------------------------------------------
dt<-tail(varpie)
pie(dt$number, main="Jobs", labels =paste(round(dt$number*100/sum(dt$number),1),"%",sep=" "),col= c(1,2,3,4,5,6),cex=0.6)
legend("bottomright",legend= tail(varpie$JobTitle), cex=0.6, fill= c(1,2,3,4,5,6))
#------------------------------------------------------------
bp<-ggplot(dt, aes(x=" ", y=number, fill=JobTitle))+ geom_bar(width = 1, stat = "identity")
bp
pie <- bp + coord_polar("y", start=0)
pie
pie + scale_fill_manual(values=c("#999999", "#E69F00", "#56B4E9","#563241","#124356","#897653"))
pie + scale_fill_brewer(palette="Blues")+theme_minimal()+ theme(axis.text.x=element_blank())
#-------------------------------------------
install.packages("VIM",dep=T)
library(VIM)
?sleep
data(sleep, package="VIM")
sleep[complete.cases(sleep),]
sleep[!complete.cases(sleep),]
str(sleep)
install.packages("mice",dep=T)
library(mice)
md.pattern(sleep)
na.omit(sleep)
cor(sleep, use="pairwise.complete.obs")
?mice
imp <- mice(sleep, seed=1234) #method ? parameter of method
imp$imp
imp$m
res <- complete(imp, action=2)
res
md.pattern(res)
#---------------------------------------
library(zoo)
na.approx(sleep,na.rm=F)
na.spline(sleep,na.rm=F)
na.approx(c(1,NA,4))
na.approx(c(1,NA,NA,4))
na.spline(c(1,5,NA,4))
na.approx(c(1,5,NA,4))
#-----------------------------------------------------------------------
# Neural Network
#-----------------------------------------------------------------------
# Read the Data http://lib.stat.cmu.edu/DASL/Datafiles/Cereals.html
data = read.csv("cereals.csv", header=T)
head(data)
# Random sampling
samplesize = 0.60 * nrow(data)
set.seed(80)
index = sample( seq_len(nrow(data)), size = samplesize )
## Scale data for neural network
max = apply(data , 2 , max)
min = apply(data, 2 , min)
scaled = as.data.frame(scale(data, center = min, scale = max - min))
scaled
## Fit neural network
# install library
#install.packages("neuralnet", dep=T)
# load library
library(neuralnet)
# creating training and test set
trainNN = scaled[index , ]
testNN = scaled[-index , ]
# fit neural network
set.seed(2)
f<-as.formula("rating ~ calories + protein + fat + sodium + fiber")
NN = neuralnet(f, trainNN, hidden = c(2,3),linear.output = T,rep=100)
which.min(NN$result.matrix[1,])
# plot neural network
plot(NN,rep=71)
## Prediction using neural network
predict_testNN = compute(NN, testNN[,c(1:5)],rep=71)
predict_testNN$net.result
predict_testNN = (predict_testNN$net.result * (max(data$rating) - min(data$rating))) + min(data$rating)
data[-index , 6]
plot(data[-index,]$rating, predict_testNN, col='blue', pch=16, ylab = "predicted rating NN", xlab = "real rating")
abline(0,1)
# Calculate Root Mean Square Error (RMSE)
RMSE.NN = (sum((data[-index,]$rating - predict_testNN)^2) / nrow(testNN)) ^ 0.5
RMSE.NN/median(data$rating)*100
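The neural-network section above scales every column to [0, 1] with `(x - min) / (max - min)` and later undoes that scaling on the predictions with `pred * (max - min) + min`. This small base-R sketch verifies that the two transforms are exact inverses:

```r
# Min-max scaling and its inverse, as used around the neuralnet fit above.
x <- c(10, 20, 40)
x_min <- min(x); x_max <- max(x)
scaled   <- (x - x_min) / (x_max - x_min)   # forward: map onto [0, 1]
restored <- scaled * (x_max - x_min) + x_min # inverse: back to original units
print(scaled)     # 0.0000000 0.3333333 1.0000000
print(restored)   # 10 20 40
```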
|
/DS_Sat_2020/Day6R.R | no_license | Demolis/Infopulse | R | 3,999 bytes
|
rm(list=ls()) #clears environment
cat("\014") #clears the console in RStudio
#Problem 1
#(a) Define an indicator variable IsB such that IsB=TRUE for B-cell patients
#and IsB=FALSE for T-cell patients.
#(b) Use two genes "39317_at" and "38018_g_at" to fit a classification tree for
#IsB. Print out the confusion matrix. Plot ROC curve for the tree.
library("ALL"); data(ALL); #load package and data
library(ROCR) ##Load library
library(e1071) ##Load library for svm(), used in parts (h)-(i)
allB <- ALL[,which(ALL$BT %in% c("B","B1","B2","B3","B4"))] #select patients
allBT <- ALL[,which(ALL$BT %in% c("B","B1","B2","B3","B4","T","T1","T2","T3","T4"))] #select patients
prob.name <- c("39317_at","38018_g_at")
expr.data <-exprs(allBT)[prob.name,]
IsB <- allBT$BT %in% c("B","B1","B2","B3","B4") #A boolean class indicator (TRUE for B-cell patients)
data.lgr <- data.frame( IsB, t(expr.data)) #A data frame of class indicator and expression data.
fit.lgr <- glm(IsB~., family=binomial(link='logit'), data=data.lgr) #logistic regression of class indicator on other variables (1389_at expression values)
pred.prob <- predict(fit.lgr, data=data.lgr$expr.data, type="response") #predict class probability (response) using the regression fit on expression data.
pred.B<- factor(pred.prob> 0.5, levels=c(TRUE,FALSE), labels=c("B","not")) #classify according to prediction probability, and label TRUE to 'B' and FALSE to 'not'
IsB<-factor(IsB, levels=c(TRUE,FALSE), labels=c("B","not")) #class indicator label to 'B' and 'not'
table(pred.B, IsB) #confusion matrix that compares predicted versus true classes
#Classification Tree on ALL Data
library("hgu95av2.db");
ALLB <- ALL[,ALL$BT %in% c("B","B1","B2","B3","B4")] #patients in B1,B2,B3 stages
pano <- apply(exprs(ALLB), 1, function(x) anova(lm(x ~ ALLB$BT))$Pr[1]) #do ANOVA on every gene
names <- featureNames(ALL)[pano<0.000001] #get probe names for genes passing the ANOVA gene filter
symb <- mget(names, env = hgu95av2SYMBOL) #get the gene names
ALLBTnames <- ALLB[names, ] #keep only the gene passing filter
probedat <- as.matrix(exprs(ALLBTnames)) #save expression values in probedat
row.names(probedat)<-unlist(symb) #change row names to gene names
require(rpart) #load library
B.stage <- factor(ALLBTnames$BT) #The true classes: In all B stages
c.tr <- rpart(B.stage ~ ., data = data.frame(t(probedat))) #Fit the tree on data set 'probedat'. The transpose is needed since we are classifying the patients (rows).
plot(c.tr, branch=0,margin=0.1); #Plot the tree with V-shaped branch (=0), leave margins for later text additions.
text(c.tr, digits=3) #Add text for the decision rules on the tree. use 3 digits for numbers
rpartpred <- predict(c.tr, type="class") #predicted classes from the tree
table(rpartpred, B.stage) #confusion matrix compare predicted versus true classes
#ROC curve for the tree
pred.prob <- predict(fit.lgr, data=data.lgr[,-1], type="response")
pred <- prediction(pred.prob, IsB)
perf <- performance(pred, "tpr", "fpr" )
plot(perf) #add green ROC curve for this classifier
#(c) Find its empirical misclassification rate (mcr), false negative rate (fnr)
#and specificity. Find the area under curve (AUC) for the ROC curve.
#empirical misclassification rate
#Training and Validation (testing)
set.seed(131) #Set an arbitrary random seed.
testID <- sample(1:128, 51, replace = FALSE) #randomly select 31 test (validation) cases out of 78
data.tr<-data.lgr[-testID, ] #training data, remove test cases with "-"
data.test<-data.lgr[testID, ] #test (validation) data
fit.lgr <- glm(IsB~., family=binomial(link='logit'), data=data.tr) #fit logistic regression on training data.
pred.prob <- predict(fit.lgr, data=data.tr, type="response") #training data prediction
pred.B<- factor(pred.prob> 0.5)
mcr.tr<- sum(pred.B!=data.tr$IsB)/length(data.tr$IsB) #mcr on training data. (# of misclassification)/(# total cases)
pred.prob <- predict(fit.lgr, newdata=data.test, type="response") #test data prediction
pred.B<- factor(pred.prob> 0.5)
mcr.test<- sum(pred.B!=data.test$IsB)/length(data.test$IsB) #mcr on test data
data.frame(mcr.tr, mcr.test) #print out mcr
#area under curve (AUC) for the ROC curve
performance(pred,"auc")
#(d) Use 10-fold cross-validation to estimate its real false negative rate (fnr). What is your estimated fnr?
## 10 fold cross-validation
require(caret)
n<-dim(data.lgr)[1] #size of the whole sample
index<-1:n #index for data points
K<-10 #number of folds
flds <- createFolds(index, k=K) #create K folds
mcr.cv.raw<-rep(NA, K) ## A vector to save raw mcr
for (i in 1:K) {
testID<-flds[[i]] #the i-th fold as validation set
data.tr<-data.lgr[-testID,] #remove the test cases from training data
data.test<-data.lgr[testID,] #validation (test) data
fit.lgr <- glm(IsB~., family=binomial(link='logit'), data=data.tr) #train model
pred.prob <- predict(fit.lgr, newdata=data.test, type="response") #prediction probability
pred.B<- (pred.prob> 0.5) #prediction class
mcr.cv.raw[i]<- sum(pred.B!=data.test$IsB)/length(pred.B)#mcr on testing case
}
mcr.cv<-mean(mcr.cv.raw) #average the mcr over K folds.
mcr.cv
#(e) Do a logistic regression, using genes "39317_at" and "38018_g_at" to predict
#IsB. Find an 80% confidence interval for the coefficient of gene "39317_at".
allB <- ALL[,which(ALL$BT %in% c("B","B1","B2","B3","B4"))] #select patients
allBT <- ALL[,which(ALL$BT %in% c("B","B1","B2","B3","B4","T","T1","T2","T3","T4"))] #select patients
prob.name <- c("39317_at","38018_g_at")
expr.data <-exprs(allBT)[prob.name,]
IsB <- allBT$BT %in% c("B","B1","B2","B3","B4") #A boolean class indicator (TRUE for B-cell patients)
data.lgr <- data.frame( IsB, t(expr.data)) #A data frame of class indicator and expression data.
fit.lgr <- glm(IsB~., family=binomial(link='logit'), data=data.lgr) #logistic regression of class indicator on other variables (1389_at expression values)
pred.prob <- predict(fit.lgr, data=data.lgr$expr.data, type="response") #predict class probability (response) using the regression fit on expression data.
pred.B<- factor(pred.prob> 0.5, levels=c(TRUE,FALSE), labels=c("B","not")) #classify according to prediction probability, and label TRUE to 'B' and FALSE to 'not'
IsB<-factor(IsB, levels=c(TRUE,FALSE), labels=c("B","not")) #class indicator label to 'B' and 'not'
table(pred.B, IsB) #confusion matrix that compares predicted versus true classes
confint(fit.lgr, level=0.8) #Find 80% 2-sided CIs for the parameters
#f) Use n-fold cross-validation to estimate misclassification rate (mcr) of the
#logistic regression classifier. What is your estimated mcr?
n<-dim(data.lgr)[1] #size of the whole sample
index<-1:n #index for data points
K<-n #number of folds
flds <- createFolds(index, k=K) #create K folds
mcr.cv.raw<-rep(NA, K) ## A vector to save raw mcr
for (i in 1:K) {
testID<-flds[[i]] #the i-th fold as validation set
data.tr<-data.lgr[-testID,] #remove the test cases from training data
data.test<-data.lgr[testID,] #validation (test) data
fit.lgr <- glm(IsB~., family=binomial(link='logit'), data=data.tr) #train model
pred.prob <- predict(fit.lgr, newdata=data.test, type="response") #prediction probability
pred.B<- (pred.prob> 0.5) #prediction class
mcr.cv.raw[i]<- sum(pred.B!=data.test$IsB)/length(pred.B)#mcr on testing case
}
mcr.cv<-mean(mcr.cv.raw) #average the mcr over K folds.
mcr.cv
#(g) Conduct a PCA on the scaled variables of the whole ALL data set (NOT just
#the two genes used above).
pca.ALL<-prcomp(exprs(ALL), scale=TRUE) #Apply PCA on ALL data, scale each variable first
summary(pca.ALL) #print out summary of the PCA fit
#(h) Do a SVM classifier of IsB using only the first five PCs. (The number K=5
#is fixed so that we all use the same classifier. You do not need to choose this
#number in the previous part (g).) What is the sensitivity of this classifier?
B.stage<-factor(ALLBTnames$BT[1:5]) #the classes
data.pca<-pca.ALL$x[,1:5] #keep only the first five principal components
n<-length(B.stage)
pca.ALL.svm <- svm(data.pca, B.stage, type = "C-classification", kernel = "linear") #train SVM
svmpred <- predict(pca.ALL.svm , data.pca) #get SVM prediction.
mcr.svm<- mean(svmpred!=B.stage) #misclassification rate
### leave-one-out cross validation
mcr.cv.raw<-rep(NA, n) #A vector to save mcr validation
for (i in 1:n) {
svmest <- svm(data.pca[-i,], B.stage[-i], type = "C-classification", kernel = "linear") #train SVM without i-th observation
# Source: /Module13/Conte_ModuleHW13.r (repo: contej/Statistics_for_Bioinformatics, R, no license, 15,341 bytes)
rm(list=ls()) #clears environment
cat("\014") #clears the console in RStudio
#Problem 1
#(a) Define an indicator variable IsB such that IsB=TRUE for B-cell patients
#and IsB=FALSE for T-cell patients.
#(b) Use two genes "39317_at" and "38018_g_at" to fit a classification tree for
#IsB. Print out the confusion matrix. Plot ROC curve for the tree.
library("ALL"); data(ALL); #load package and data
library(ROCR) ##Load library
library(e1071) #for svm(), used in parts (h)-(i)
allB <- ALL[,which(ALL$BT %in% c("B","B1","B2","B3","B4"))] #select patients
allBT <- ALL[,which(ALL$BT %in% c("B","B1","B2","B3","B4","T","T1","T2","T3","T4"))] #select patients
prob.name <- c("39317_at","38018_g_at")
expr.data <-exprs(allBT)[prob.name,]
IsB <- ALL$BT %in% c("B","B1","B2","B3","B4") #A boolean class indicator: TRUE for B-cell patients
data.lgr <- data.frame( IsB, t(expr.data)) #A data frame of class indicator and expression data.
fit.lgr <- glm(IsB~., family=binomial(link='logit'), data=data.lgr) #logistic regression of class indicator on the two probe expression values
pred.prob <- predict(fit.lgr, type="response") #predicted class probabilities; with newdata omitted, predict() uses the data the model was fit on
pred.B<- factor(pred.prob> 0.5, levels=c(TRUE,FALSE), labels=c("B","not")) #classify according to prediction probability, and label TRUE to 'B' and FALSE to 'not'
IsB<-factor(IsB, levels=c(TRUE,FALSE), labels=c("B","not")) #class indicator label to 'B' and 'not'
table(pred.B, IsB) #confusion matrix that compares predicted versus true classes
#Classification Tree on ALL Data
library("hgu95av2.db");
ALLB <- ALL[,ALL$BT %in% c("B","B1","B2","B3","B4")] #patients in B1,B2,B3 stages
pano <- apply(exprs(ALLB), 1, function(x) anova(lm(x ~ ALLB$BT))$Pr[1]) #do ANOVA on every gene
names <- featureNames(ALL)[pano<0.000001] #get probe names for genes passing the ANOVA gene filter
symb <- mget(names, env = hgu95av2SYMBOL) #get the gene names
ALLBTnames <- ALLB[names, ] #keep only the gene passing filter
probedat <- as.matrix(exprs(ALLBTnames)) #save expression values in probedat
row.names(probedat)<-unlist(symb) #change row names to gene names
require(rpart) #load library
B.stage <- factor(ALLBTnames$BT) #The true classes: In all B stages
c.tr <- rpart(B.stage ~ ., data = data.frame(t(probedat))) #Fit the tree on data set 'probedat'. The transpose is needed since we are classifying the patients (rows).
plot(c.tr, branch=0,margin=0.1); #Plot the tree with V-shaped branch (=0), leave margins for later text additions.
text(c.tr, digits=3) #Add text for the decision rules on the tree. use 3 digits for numbers
rpartpred <- predict(c.tr, type="class") #predicted classes from the tree
table(rpartpred, B.stage) #confusion matrix compare predicted versus true classes
#ROC curve based on the logistic fit's predicted probabilities
pred.prob <- predict(fit.lgr, type="response") #fitted probabilities on the training data
pred <- prediction(pred.prob, IsB)
perf <- performance(pred, "tpr", "fpr" )
plot(perf) #add green ROC curve for this classifier
#(c) Find its empirical misclassification rate (mcr), false negative rate (fnr)
#and specificity. Find the area under curve (AUC) for the ROC curve.
#empirical misclassification rate
#Training and Validation (testing)
set.seed(131) #Set an arbitrary random seed.
testID <- sample(1:128, 51, replace = FALSE) #randomly select 51 test (validation) cases out of the 128 patients
data.tr<-data.lgr[-testID, ] #training data, remove test cases with "-"
data.test<-data.lgr[testID, ] #test (validation) data
fit.lgr <- glm(IsB~., family=binomial(link='logit'), data=data.tr) #fit logistic regression on training data.
pred.prob <- predict(fit.lgr, type="response") #training data prediction (newdata omitted, so the fitted training data are used)
pred.B<- factor(pred.prob> 0.5)
mcr.tr<- sum(pred.B!=data.tr$IsB)/length(data.tr$IsB) #mcr on training data. (# of misclassification)/(# total cases)
pred.prob <- predict(fit.lgr, newdata=data.test, type="response") #test data prediction
pred.B<- factor(pred.prob> 0.5)
mcr.test<- sum(pred.B!=data.test$IsB)/length(data.test$IsB) #mcr on test data
data.frame(mcr.tr, mcr.test) #print out mcr
#area under curve (AUC) for the ROC curve
performance(pred,"auc")
#(d) Use 10-fold cross-validation to estimate its real false negative rate (fnr). What is your estimated fnr?
## 10 fold cross-validation
require(caret)
n<-dim(data.lgr)[1] #size of the whole sample
index<-1:n #index for data points
K<-10 #number of folds
flds <- createFolds(index, k=K) #create K folds
mcr.cv.raw<-rep(NA, K) ## A vector to save raw mcr
for (i in 1:K) {
testID<-flds[[i]] #the i-th fold as validation set
data.tr<-data.lgr[-testID,] #remove the test cases from training data
data.test<-data.lgr[testID,] #validation (test) data
fit.lgr <- glm(IsB~., family=binomial(link='logit'), data=data.tr) #train model
pred.prob <- predict(fit.lgr, newdata=data.test, type="response") #prediction probability
pred.B<- (pred.prob> 0.5) #prediction class
mcr.cv.raw[i]<- sum(pred.B!=data.test$IsB)/length(pred.B)#mcr on testing case
}
mcr.cv<-mean(mcr.cv.raw) #average the mcr over K folds.
mcr.cv
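createFolds() above comes from caret; the fold logic itself is small enough to write in base R, which makes explicit what a fold is. A minimal sketch (make_folds is an illustrative name, not caret's implementation):

```r
# Base-R sketch of K-fold assignment: shuffle the indices, then deal
# them into K roughly equal groups (sizes differ by at most one).
make_folds <- function(n, K, seed = 1) {
  set.seed(seed)                              #reproducible shuffle
  shuffled <- sample(n)                       #random permutation of 1:n
  split(shuffled, rep(1:K, length.out = n))   #deal indices into K folds
}

flds <- make_folds(n = 128, K = 10)
length(flds)        #10 folds
sum(lengths(flds))  #128: every index appears in exactly one fold
```

Each list element can then play the role of testID exactly as in the loop above.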
#(e) Do a logistic regression, using genes "39317_at" and "38018_g_at" to predict
#IsB. Find an 80% confidence interval for the coefficient of gene "39317_at".
allB <- ALL[,which(ALL$BT %in% c("B","B1","B2","B3","B4"))] #select patients
allBT <- ALL[,which(ALL$BT %in% c("B","B1","B2","B3","B4","T","T1","T2","T3","T4"))] #select patients
prob.name <- c("39317_at","38018_g_at")
expr.data <-exprs(allBT)[prob.name,]
IsB <- ALL$BT %in% c("B","B1","B2","B3","B4") #A boolean class indicator: TRUE for B-cell patients
data.lgr <- data.frame( IsB, t(expr.data)) #A data frame of class indicator and expression data.
fit.lgr <- glm(IsB~., family=binomial(link='logit'), data=data.lgr) #logistic regression of class indicator on the two probe expression values
pred.prob <- predict(fit.lgr, type="response") #predicted class probabilities on the fitted (training) data
pred.B<- factor(pred.prob> 0.5, levels=c(TRUE,FALSE), labels=c("B","not")) #classify according to prediction probability, and label TRUE to 'B' and FALSE to 'not'
IsB<-factor(IsB, levels=c(TRUE,FALSE), labels=c("B","not")) #class indicator label to 'B' and 'not'
table(pred.B, IsB) #confusion matrix that compares predicted versus true classes
confint(fit.lgr, level=0.8) #Find 80% 2-sided CIs for the parameters
#f) Use n-fold cross-validation to estimate misclassification rate (mcr) of the
#logistic regression classifier. What is your estimated mcr?
n<-dim(data.lgr)[1] #size of the whole sample
index<-1:n #index for data points
K<-n #number of folds
flds <- createFolds(index, k=K) #create K folds
mcr.cv.raw<-rep(NA, K) ## A vector to save raw mcr
for (i in 1:K) {
testID<-flds[[i]] #the i-th fold as validation set
data.tr<-data.lgr[-testID,] #remove the test cases from training data
data.test<-data.lgr[testID,] #validation (test) data
fit.lgr <- glm(IsB~., family=binomial(link='logit'), data=data.tr) #train model
pred.prob <- predict(fit.lgr, newdata=data.test, type="response") #prediction probability
pred.B<- (pred.prob> 0.5) #prediction class
mcr.cv.raw[i]<- sum(pred.B!=data.test$IsB)/length(pred.B)#mcr on testing case
}
mcr.cv<-mean(mcr.cv.raw) #average the mcr over K folds.
mcr.cv
#(g) Conduct a PCA on the scaled variables of the whole ALL data set (NOT just
#the two genes used above).
pca.ALL<-prcomp(t(exprs(ALL)), scale=TRUE) #Apply PCA on ALL data; transpose so patients are the observations and the scaled genes are the variables
summary(pca.ALL) #print out summary of the PCA fit
#(h) Do a SVM classifier of IsB using only the first five PCs. (The number K=5
#is fixed so that we all use the same classifier. You do not need to choose this
#number in the previous part (g).) What is the sensitivity of this classifier?
IsB<-factor(ALL$BT %in% c("B","B1","B2","B3","B4")) #the classes: one B-cell/T-cell indicator per patient, matching the rows of the PCA scores
data.pca<-pca.ALL$x[,1:5] #keep only the first five principal components (one score row per patient)
n<-length(IsB)
pca.ALL.svm <- svm(data.pca, IsB, type = "C-classification", kernel = "linear") #train SVM
svmpred <- predict(pca.ALL.svm, data.pca) #get SVM prediction.
mcr.svm<- mean(svmpred!=IsB) #misclassification rate
### leave-one-out cross validation
mcr.cv.raw<-rep(NA, n) #A vector to save mcr validation
for (i in 1:n) {
svmest <- svm(data.pca[-i,], IsB[-i], type = "C-classification", kernel = "linear") #train SVM without i-th observation
svmpred <- predict(svmest, t(data.pca[i,])) #predict i-th observation. Here transpose t() is used to make the vector back into a 1 by ncol matrix
mcr.cv.raw[i]<- mean(svmpred!=IsB[i]) #misclassification rate
}
mcr.cv<-mean(mcr.cv.raw) #average the mcr over all n rounds.
c(mcr.svm, mcr.cv)
#(i) Use leave-one-out cross-validation to estimate misclassification rate (mcr)
#of the SVM classifier. Report your estimate.
data.pca.df<-data.frame(IsB, data.pca) #combine response with PCA scores under a new name so the prcomp object pca.ALL is not overwritten
fit <- rpart(IsB ~ ., data = data.pca.df, method = "class") #Fit tree on the PCA data
pred.tr<-predict(fit, data.pca.df, type = "class") #predict classes from the tree
mcr.tr <- mean(pred.tr!=IsB) #misclassification rate
### leave-one-out cross validation
mcr.cv.raw<-rep(NA, n) #A vector to save mcr validation
for (i in 1:n) {
fit.tr <- rpart(IsB ~ ., data = data.pca.df[-i,], method = "class") #train the tree without i-th observation
pred <- predict(fit.tr, data.pca.df[i,], type = "class") #use tree to predict i-th observation class
mcr.cv.raw[i]<- mean(pred!=IsB[i]) #check misclassification of the held-out case
}
mcr.cv<-mean(mcr.cv.raw) #average the mcr over all n rounds.
c(mcr.tr, mcr.cv)
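The leave-one-out loop above is repeated nearly verbatim for the logistic, SVM, and tree classifiers. The pattern can be factored into one helper taking a fit/predict pair; a sketch under the assumption that fit_fun returns a fitted model and pred_fun returns a class label (loocv_mcr, fit_fun, and pred_fun are illustrative names, not part of the assignment):

```r
# Generic leave-one-out cross-validation: drop case i, refit, predict
# the held-out case, and average the misclassification indicator.
loocv_mcr <- function(x, y, fit_fun, pred_fun) {
  n <- length(y)
  wrong <- vapply(seq_len(n), function(i) {
    model <- fit_fun(x[-i, , drop = FALSE], y[-i])  #train without case i
    pred  <- pred_fun(model, x[i, , drop = FALSE])  #predict case i
    as.numeric(pred != y[i])                        #1 if misclassified
  }, numeric(1))
  mean(wrong)                                       #LOOCV estimate of the mcr
}

# Check with a trivial majority-class "classifier" that ignores x:
x <- matrix(0, nrow = 10, ncol = 2)
y <- factor(rep(c("a", "b"), c(7, 3)))
loocv_mcr(x, y,
          fit_fun  = function(x, y) names(which.max(table(y))),
          pred_fun = function(model, newx) model)   #0.3: only the three "b" cases are missed
```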
#Problem 2
rm(list=ls()) #clears environment
cat("\014") #clears the console in RStudio
## 10 fold cross-validation
pca.iris<-prcomp(iris[,1:4], scale=TRUE) #Apply PCA on first four variables in iris data, scale each variable first
Species<-iris$Species #response variable with true classes
data.pca<-pca.iris$x[,1:3] #keep only the first three principal components
n<-length(Species) #sample size n
iris2<-data.frame(Species, data.pca) #combine response variable with PCA data
n<-dim(iris2)[1] #size of the whole sample
index<-1:n #index for data points
#K=1 (degenerate: createFolds() puts every case into the single fold, leaving an empty training set, so the glm() fit below fails)
K<-1 #number of folds
flds <- createFolds(index, k=K) #create K folds
mcr.cv.raw<-rep(NA, K) ## A vector to save raw mcr
for (i in 1:K) {
testID<-flds[[i]] #the i-th fold as validation set
data.tr<-iris2[-testID,] #remove the test cases from training data
data.test<-iris2[testID,] #validation (test) data
fit.lgr <- glm(Species ~ ., family=binomial(link='logit'),data = data.tr) #train model
pred.prob <- predict(fit.lgr, newdata=data.test, type="response") #prediction probability
pred.B1<- (pred.prob> 0.5) #prediction class
mcr.cv.raw[i]<- sum(pred.B1 != (data.test$Species != "setosa"))/length(pred.B1) #mcr on test cases: the binomial glm models setosa versus the rest, so compare to that indicator
}
mcr.k<-mean(mcr.cv.raw) #average the mcr over K folds.
### leave-one-out cross validation
mcr.cv.raw<-rep(NA, n) #A vector to save mcr validation
for (i in 1:n) {
svmest <- svm(data.pca[-i,], Species[-i], type = "C-classification", kernel = "linear") #train SVM without i-th observation
svmpred <- predict(svmest, t(data.pca[i,])) #predict i-th observation. Here transpose t() is used to make the vector back into a 1 by ncol matrix
mcr.cv.raw[i]<- mean(svmpred!=Species[i]) #misclassification rate
}
mcr.cv<-mean(mcr.cv.raw) #average the mcr over all n rounds.
c(mcr.k, mcr.cv)
#k=2
K<-2 #number of folds
flds <- createFolds(index, k=K) #create K folds
mcr.cv.raw<-rep(NA, K) ## A vector to save raw mcr
for (i in 1:K) {
testID<-flds[[i]] #the i-th fold as validation set
data.tr<-iris2[-testID,] #remove the test cases from training data
data.test<-iris2[testID,] #validation (test) data
fit.lgr <- glm(Species ~ ., family=binomial(link='logit'),data = data.tr) #train model
pred.prob <- predict(fit.lgr, newdata=data.test, type="response") #prediction probability
pred.B1<- (pred.prob> 0.5) #prediction class
mcr.cv.raw[i]<- sum(pred.B1 != (data.test$Species != "setosa"))/length(pred.B1) #mcr on test cases (setosa-versus-rest indicator)
}
mcr.k<-mean(mcr.cv.raw) #average the mcr over K folds.
### leave-one-out cross validation
mcr.cv.raw<-rep(NA, n) #A vector to save mcr validation
for (i in 1:n) {
svmest <- svm(data.pca[-i,], Species[-i], type = "C-classification", kernel = "linear") #train SVM without i-th observation
svmpred <- predict(svmest, t(data.pca[i,])) #predict i-th observation. Here transpose t() is used to make the vector back into a 1 by ncol matrix
mcr.cv.raw[i]<- mean(svmpred!=Species[i]) #misclassification rate
}
mcr.cv<-mean(mcr.cv.raw) #average the mcr over all n rounds.
c(mcr.k, mcr.cv)
#k=3
K<-3 #number of folds
flds <- createFolds(index, k=K) #create K folds
mcr.cv.raw<-rep(NA, K) ## A vector to save raw mcr
for (i in 1:K) {
testID<-flds[[i]] #the i-th fold as validation set
data.tr<-iris2[-testID,] #remove the test cases from training data
data.test<-iris2[testID,] #validation (test) data
fit.lgr <- glm(Species ~ ., family=binomial(link='logit'),data = data.tr) #train model
pred.prob <- predict(fit.lgr, newdata=data.test, type="response") #prediction probability
pred.B1<- (pred.prob> 0.5) #prediction class
mcr.cv.raw[i]<- sum(pred.B1 != (data.test$Species != "setosa"))/length(pred.B1) #mcr on test cases (setosa-versus-rest indicator)
}
mcr.k<-mean(mcr.cv.raw) #average the mcr over K folds.
### leave-one-out cross validation
mcr.cv.raw<-rep(NA, n) #A vector to save mcr validation
for (i in 1:n) {
svmest <- svm(data.pca[-i,], Species[-i], type = "C-classification", kernel = "linear") #train SVM without i-th observation
svmpred <- predict(svmest, t(data.pca[i,])) #predict i-th observation. Here transpose t() is used to make the vector back into a 1 by ncol matrix
mcr.cv.raw[i]<- mean(svmpred!=Species[i]) #misclassification rate
}
mcr.cv<-mean(mcr.cv.raw) #average the mcr over all n rounds.
c(mcr.k, mcr.cv)
#k=4
K<-4 #number of folds
flds <- createFolds(index, k=K) #create K folds
mcr.cv.raw<-rep(NA, K) ## A vector to save raw mcr
for (i in 1:K) {
testID<-flds[[i]] #the i-th fold as validation set
data.tr<-iris2[-testID,] #remove the test cases from training data
data.test<-iris2[testID,] #validation (test) data
fit.lgr <- glm(Species ~ ., family=binomial(link='logit'),data = data.tr) #train model
pred.prob <- predict(fit.lgr, newdata=data.test, type="response") #prediction probability
pred.B1<- (pred.prob> 0.5) #prediction class
mcr.cv.raw[i]<- sum(pred.B1 != (data.test$Species != "setosa"))/length(pred.B1) #mcr on test cases (setosa-versus-rest indicator)
}
mcr.k<-mean(mcr.cv.raw) #average the mcr over K folds.
### leave-one-out cross validation
mcr.cv.raw<-rep(NA, n) #A vector to save mcr validation
for (i in 1:n) {
svmest <- svm(data.pca[-i,], Species[-i], type = "C-classification", kernel = "linear") #train SVM without i-th observation
svmpred <- predict(svmest, t(data.pca[i,])) #predict i-th observation. Here transpose t() is used to make the vector back into a 1 by ncol matrix
mcr.cv.raw[i]<- mean(svmpred!=Species[i]) #misclassification rate
}
mcr.cv<-mean(mcr.cv.raw) #average the mcr over all n rounds.
c(mcr.k, mcr.cv)
library(ggplot2)
x = get_count_by_month("Jaanika")
df = data.frame(month = factor(names(x), levels=names(x)),
count = as.numeric(x))
ggplot(df, aes(x=month, y=count, label=count)) +
geom_bar(stat="identity", fill="#53cfff") +
geom_text(col="white", vjust=1.25, size=3.5) +
theme_minimal() +
labs(title = expression(paste("People named ", bold("Jaanika"), " per birth month")),
caption = "data source: Statistics Estonia")
ggsave("figures/jaanika.png")
# Source: /figures/build_readme_figure.R (repo: tanelp/nimi, R, no license, 486 bytes)
# set working directory (change this to fit your needs)
setwd('~/Source Code/GitHub/Exploratory-Data-Analysis')
# make sure the plots folder exists
if (!file.exists('plots')) {
dir.create('plots')
}
# load data
source('scripts/get_and_clean_data.R')
# open device
png(filename='plots/plot1.png',width=480,height=480,units='px')
# plot data
hist(power.consumption$GlobalActivePower,main='Global Active Power',xlab='Global Active Power (kilowatts)',col='red')
# Turn off device
x<-dev.off()
# Source: /plot1.R (repo: olenaostasheva/ExData_Plotting1, R, no license, 502 bytes)
#This file contains an unoptimized version of the continuous model.
#It suffers in performance from relying on R's built-in matrix
#operations, which are overkill for a 2x2. By inlining the matrix
#computations we achieve a speedup of ~600%, but perhaps at the cost
#of clarity: the original implementation is therefore included as a
#reference.
oriole.init.state <- c(0,1)
model.old <- function(lambda.r,lambda.y,mu.r,mu.y,q.yr,q.ry){#deprecated
#dr/dt = ar + by
#dy/dt = cr + dy
a <- lambda.r - mu.r - q.ry
b <- q.yr
c <- q.ry
d <- lambda.y - mu.y - q.yr
A <- matrix(c(a,c,b,d),nrow=2)
ry0 <- oriole.init.state
eigen.sol <- eigen(A)
Lambda <- eigen.sol$values
L <- function(t)diag(exp(eigen.sol$values * t))
P <- eigen.sol$vectors
RY <- function(t) Re(P%*%L(t)%*%solve(P)%*%ry0)
RY
}
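model.old solves the linear system d(r,y)/dt = A (r,y) in closed form via the eigendecomposition A = P Lambda P^-1, so the state at time t is P diag(exp(lambda_i t)) P^-1 applied to the initial state. A quick numerical check of that identity against a truncated power series for the matrix exponential (a standalone illustration, not part of the original file):

```r
# Compare the eigendecomposition solution with the series
# exp(At) = I + At + (At)^2/2! + ... on a small 2x2 system.
A  <- matrix(c(-0.3, 0.2, 0.1, -0.4), nrow = 2)  #real, distinct eigenvalues
y0 <- c(0, 1)
t  <- 1.5

es    <- eigen(A)
eig_y <- Re(es$vectors %*% diag(exp(es$values * t)) %*% solve(es$vectors) %*% y0)

expm_series <- function(M, terms = 30) {
  out <- diag(nrow(M)); term <- diag(nrow(M))
  for (k in 1:terms) { term <- term %*% M / k; out <- out + term }
  out
}
ser_y <- expm_series(A * t) %*% y0

max(abs(eig_y - ser_y))  #agrees to numerical precision
```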
percent.yellow <- function(RY,t)RY(t)[2]/sum(RY(t))
results.old <- function(lambda.r,lambda.y,mu.r,mu.y,t,rows,cols,row.max,col.max){
#Return a matrix whose [i,j]th entry contains the proportion of
#yellow species at time t.
row.factor <- row.max/rows
  col.factor <- col.max/cols #scale the column index to the [0, col.max] range
results <- matrix(nrow=rows,ncol=cols)
for(i in seq(rows)){
print(i)
for(j in seq(cols)){
RY <- model.old(lambda.r,lambda.y,mu.r,mu.y,i*row.factor,j*col.factor)
results[i,j] <- percent.yellow(RY,t)
}
}
results
}
# Source: /old_model.R (repo: poneill/demos, R, no license, 1,457 bytes)
##############################################################################################################################
#
# Plot the cumulative probability density function (cdf) for a given number of degrees of freedom and noise distribution function
#
#
#' Plot CDF
#'
#' Plots the Cumulative Probability Density Function
#'
#' @param dof an integer; the number of degrees of freedom
#' @param order a real number; the number of random samples drawn is floor(10^order)
#' @param dist a random number distribution function
#' @param fitmetric a fit metric function such as R2 or rmse (a user-defined metric function may also be supplied)
#' @param ... any argument that functions within this routine might use
#'
#' @return ggplot object
#'
#' @examples
#' plotcdf(5, dist=rnorm, fitmetric=rmse)
#'
#' @export
plotcdf <- function(dof, order=4, dist=rnorm, fitmetric=R2, ...){
dfx <- pcdfs(dof=dof, order=order, dist=dist, fitmetric=fitmetric, ...)
#see http://stackoverflow.com/questions/9439256/how-can-i-handle-r-cmd-check-no-visible-binding-for-global-variable-notes-when. Need this to eliminate a note during R CMD check
cdf <- NULL
Nsam <- floor(10^order)
cdst <- deparse(substitute(dist))
cfit <- deparse(substitute(fitmetric))
dfxcdf <- dfx$cdf
dfxfitval <- dfx$fitval
mxy <- max(dfxcdf)
maxx <- max(dfxfitval)
plot <- ggplot() +
geom_point(aes(dfxfitval, dfxcdf),size=1) +
ylim(0,mxy) +
xlab(cfit) +
ylab("Cumulative Probability") +
ggtitle(paste(cfit,"Cumulative Probability Density Function"))
plot <- plot +
annotate("text",x=0.95*maxx,y=0.3*mxy,label=paste("Noise Distribution:",cdst,
"\nDegrees of Freedom:",dof,
"\nNumber of Samples:",Nsam),
size=3,hjust=1,fontface=2)
return(plot)
}
# Source: /R/plotcdf.R (repo: cran/gofMC, R, no license, 1,669 bytes)
#' @title Mark Straight Path Segments in GPS Track
#'
#' @description This function attempts to mark portions of a GPS track where a
#' ship is travelling in a straight line by comparing the recent average
#' heading with a longer term average heading. If these are different, then the
#' ship should be turning. Note this currently does not take in to account time,
#' only number of points
#'
#' @param gps gps data with columns Heading and UTC (POSIX format)
#' @param nSmall number of points to average to get ship's current heading
#' @param nLarge number of points to average to get ship's longer trend heading
#' @param thresh the amount by which the \code{nSmall} and \code{nLarge} average
#'   headings must differ for the ship to be considered turning
#' @param plot logical flag to plot result, \code{gps} must also have columns
#' Latitude and Longitude
#'
#' @details
#'
#' @author Taiki Sakai \email{taiki.sakai@@noaa.gov}
#'
#' @importFrom RcppRoll roll_meanr roll_sumr
#' @importFrom ggplot2 ggplot aes geom_point scale_color_manual ggtitle
#' @export
#'
straightPath<- function(gps, nSmall = 10, nLarge = 60, thresh = 10, plot = FALSE) {
gps$realHead <- cos(gps$Heading * pi / 180)
gps$imHead <- sin(gps$Heading * pi / 180)
gps$smallLag <- Arg(complex(real=roll_sumr(gps$realHead, n=nSmall, fill=NA),
imaginary=roll_sumr(gps$imHead, n = nSmall, fill = NA))) * 180 / pi
gps$bigLag <- Arg(complex(real=roll_sumr(gps$realHead, n=nLarge, fill=NA),
imaginary=roll_sumr(gps$imHead, n = nLarge, fill = NA))) * 180 / pi
gps$timeDiff <- gps$UTC - c(gps$UTC[1], gps$UTC[1:(nrow(gps)-1)])
  # The time-gap threshold keeps disconnected stretches of the GPS track from being
  # joined: a 10-second threshold broke the track up too much, so any jump of more
  # than 30 seconds starts a new group and those pieces are not connected.
gps$timeGroup <- as.factor(cumsum(gps$timeDiff > 30))
gps$headDiff1 <- (gps$bigLag - gps$smallLag) %% 360
gps$headDiff2 <- ifelse(gps$headDiff1 > 180, gps$headDiff1 - 360, gps$headDiff1)
gps$straight <- abs(gps$headDiff2) < thresh
gps <- gps[-c(1:(nLarge-1)),]
  # timeGroup groups the points by contiguous GPS segment so that the colors in
  # the plot below are drawn per segment rather than across gaps.
if(plot) {
myPlot <- ggplot(gps, aes(x=Longitude, y=Latitude)) +
geom_point(aes(group=timeGroup, col=straight), size = 1.3) +
scale_color_manual(values = c('red', 'darkgreen')) +
ggtitle(paste0('nSmall=', nSmall, ", nLarge=", nLarge))
# scale_size_manual(values = c(2, 1))
print(myPlot)
}
gps
}
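The Arg(complex(real = roll_sumr(cos ...), imaginary = roll_sumr(sin ...))) construction above is a rolling circular mean: each heading is mapped to a unit vector, the vectors are summed, and the angle of the resultant is taken. This handles the 359/1 degree wraparound that a plain arithmetic mean gets wrong. A standalone illustration (circ_mean is an illustrative name, not part of the package):

```r
# Circular mean of compass headings via complex unit vectors.
circ_mean <- function(deg) {
  z <- complex(real = cos(deg * pi / 180), imaginary = sin(deg * pi / 180))
  (Arg(sum(z)) * 180 / pi) %% 360   #angle of the resultant, mapped back to [0, 360)
}

mean(c(350, 10))       #arithmetic mean: 180, the opposite direction
circ_mean(c(350, 10))  #circular mean: 0 degrees (mod 360), straight ahead
```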
# Source: /code/PAMmisc/straightPath_yb.R (repo: ybarkley/SpermWhales, R, no license, 2,706 bytes)
#' @title Mark Straight Path Segments in GPS Track
#'
#' @description This function attempts to mark portions of a GPS track where a
#' ship is travelling in a straight line by comparing the recent average
#' heading with a longer term average heading. If these are different, then the
#' ship should be turning. Note this currently does not take in to account time,
#' only number of points
#'
#' @param gps gps data with columns Heading and UTC (POSIX format)
#' @param nSmall number of points to average to get ship's current heading
#' @param nLarge number of points to average to get ship's longer trend heading
#' @param thresh the amount which \code{nSmall} and \code{nBig} should differ by to
#' call this a turn
#' @param plot logical flag to plot result, \code{gps} must also have columns
#' Latitude and Longitude
#'
#' @details
#'
#' @author Taiki Sakai \email{taiki.sakai@@noaa.gov}
#'
#' @importFrom RcppRoll roll_meanr roll_sumr
#' @importFrom ggplot2 ggplot aes geom_path scale_color_manual
#' @export
#'
straightPath<- function(gps, nSmall = 10, nLarge = 60, thresh = 10, plot = FALSE) {
gps$realHead <- cos(gps$Heading * pi / 180)
gps$imHead <- sin(gps$Heading * pi / 180)
gps$smallLag <- Arg(complex(real=roll_sumr(gps$realHead, n=nSmall, fill=NA),
imaginary=roll_sumr(gps$imHead, n = nSmall, fill = NA))) * 180 / pi
gps$bigLag <- Arg(complex(real=roll_sumr(gps$realHead, n=nLarge, fill=NA),
imaginary=roll_sumr(gps$imHead, n = nLarge, fill = NA))) * 180 / pi
gps$timeDiff <- gps$UTC - c(gps$UTC[1], gps$UTC[1:(nrow(gps)-1)])
# this >10 is so that we dont connect big jumps if there's a disconnect in the gps track
# trying >30 because 10 broke i tup too much
# so if theres ever a jump of more than 30 seconds it wont connect those 30s apart pieces
gps$timeGroup <- as.factor(cumsum(gps$timeDiff > 30))
gps$headDiff1 <- (gps$bigLag - gps$smallLag) %% 360
gps$headDiff2 <- ifelse(gps$headDiff1 > 180, gps$headDiff1 - 360, gps$headDiff1)
gps$straight <- abs(gps$headDiff2) < thresh
gps <- gps[-c(1:(nLarge-1)),]
    # timeGroup exists so that the plot colors each contiguous GPS segment
    # separately rather than connecting points across gaps in the track
if(plot) {
myPlot <- ggplot(gps, aes(x=Longitude, y=Latitude)) +
geom_point(aes(group=timeGroup, col=straight), size = 1.3) +
scale_color_manual(values = c('red', 'darkgreen')) +
ggtitle(paste0('nSmall=', nSmall, ", nLarge=", nLarge))
# scale_size_manual(values = c(2, 1))
print(myPlot)
}
gps
}
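A quick sanity check of the circular-mean trick used above (a sketch; `circMean` is a hypothetical helper mirroring the `Arg`/`complex` computation, not part of the function itself):

```r
# Averaging headings as complex unit vectors handles wraparound at 0/360:
# the mean of 350 and 10 degrees is 0, not the naive arithmetic mean of 180.
circMean <- function(deg) {
    z <- complex(real = cos(deg * pi / 180), imaginary = sin(deg * pi / 180))
    (Arg(sum(z)) * 180 / pi) %% 360
}
circMean(c(350, 10))  # ~0
circMean(c(80, 100))  # 90
```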
## ----part1, include=F, warning=F, message=FALSE, cache=F----------------------
#load the R packages we need.
library("pander")
library("knitr")
library("rmarkdown")
library("data.table")
library("ggplot2")
library("stringr")
#x = fread("master_11212020.csv")
#x = fread("master_11252020.csv")
#x = fread("master_12012020.csv")
#x = fread("master_12022020.csv")
#x = fread("master_12062020.csv")
#x = fread("master_12072020.csv")
#x = fread("master_01102021.csv")
#x = fread("master_01122021.csv")
#x = fread("master_01142021.csv")
#x = fread("master_01152021.csv")
#x = fread("master_01162021.csv")
x = fread("master_01182021.csv")
colnames(x) = make.names(colnames(x))
#cbind(colnames(x))
#TODO: make sure these numbers are correct
#there are two columns labelled diagnosis
which(colnames(x)=="diagnosis")
x[,2]
x[,6]
x[x[[2]] != x[[6]], 1:6]
#get rid of column 2
colnames(x)[2] = "diagnosis.old"
#we need to remove these patients
`%notin%` <- Negate(`%in%`)
x = x[MRN %notin% c(02974444, 03101756,01263443, 05796848),]
#exclude patients that do not have cancer
x = subset(x, Malignancy.category!="")
#There's a category called "GI lower" and another called "GI-lower"
#I'm assuming these should be merged
x[["Malignancy.category"]] = gsub(x[["Malignancy.category"]], pat="-", rep=" ")
table(x[["Malignancy.category"]])
#here we'll fix some of the problem variables
x[["YITZ.AB.INDEX.DATA"]] = as.numeric(x[["YITZ.AB.INDEX.DATA"]])
#Alive/Dead (column AZ)
temp1 = trimws(tolower(x[["alive"]]))
table(temp1)
alive.dead=temp1
#alive.dead = NA
#alive.dead[grep(temp1, pat="a[lvi][vli]")] = "alive"
#alive.dead[grep(temp1, pat="d[ie][aei]")] = "dead"
##cbind(temp1, alive.dead)
x[,alive.dead:=alive.dead]
#Recent cancer tx (defined as in last 12 months) Column AF : yes/no
table(x[["Medical.Cancer.Tx.in.the.past.12.months...Yes.No."]])
recent.cancer.tx = trimws(tolower(x[["Medical.Cancer.Tx.in.the.past.12.months...Yes.No."]]))
#table(recent.cancer.tx)
recent.cancer.tx.yes.no = NA
recent.cancer.tx.yes.no[grep(recent.cancer.tx, pat="^no")]="no"
recent.cancer.tx.yes.no[grep(recent.cancer.tx, pat="^yes")]="yes"
#cbind(recent.cancer.tx.yes.no, recent.cancer.tx)
x[,recent.cancer.tx.yes.no:=recent.cancer.tx.yes.no]
#Active cancer (column AB ) how many w active cancer in symptomatic vs asymptomatic
table(x[["Is.cancer.active."]])
#clean up a bit, only use entries that are yes/no
cancer.active = trimws(tolower(x[["Is.cancer.active."]]))
#table(cancer.active)
cancer.active.yes.no1 = NA
#cancer.active.yes.no[grep(cancer.active, pat="(no$)|(no )", perl=T)]="no"
cancer.active.yes.no1[grep(cancer.active, pat="(no)", perl=T)]="no"
cancer.active.yes.no1[grep(cancer.active, pat="(yes)|(mds)|(mgus)", perl=T)]="yes"
#cbind(cancer.active, cancer.active.yes.no)
x[,cancer.active.yes.no:=cancer.active.yes.no1]
#change the variable name
colnames(x)[which(colnames(x) == "Comorbidities.category.A.0.1.B.2.3.C....3")] = "Comorbidities.category"
#fix entries a bit
x[["COVID.Antibody.Test.Result"]] = tolower(x[["COVID.Antibody.Test.Result"]])
#BMI (column P)- mean/median for both sheets
x[["BMI"]] = as.numeric(x[["BMI"]])
#Age (column G)- mean/median for both sheets
x[["Age.at.Index.Date"]] = as.numeric(x[["Age.at.Index.Date"]])
hemeMalig = NA
hemeMalig[x[["Malignancy.category"]] == "Heme malignancy"] = "heme"
hemeMalig[x[["Malignancy.category"]] != "Heme malignancy"] = "nonheme"
x[,hemeMalig:=hemeMalig]
x[["Treatment.setting..Home.ED..GMF.ICU."]] = trimws(tolower(x[["Treatment.setting..Home.ED..GMF.ICU."]]))
#set the blank entries to "n/a" to be consistent
x[Treatment.setting..Home.ED..GMF.ICU.=="",Treatment.setting..Home.ED..GMF.ICU.:="n/a"]
#fix these new variables
x[["Cancer.Active.Relapsed.Remission.POD"]] = (trimws(tolower(x[["Cancer.Active.Relapsed.Remission.POD"]])))
x[["Cancer.Active.Relapsed.Remission.POD"]] = gsub(x[["Cancer.Active.Relapsed.Remission.POD"]], pat="relapsed", rep="relapse")
table(x[,"Cancer.Active.Relapsed.Remission.POD"])
x[["baseline.steroids"]] = trimws(tolower(x[["baseline.steroids"]]))
table(x[["baseline.steroids"]])
steroid = x[["Steroid.use"]]
steroid[steroid != "Yes"] = "No"
table(steroid)
x[["Steroid.use"]] = steroid
x[["Charlson.Comorbidity.Index"]]
#the categories for the Charlson comorbidity index per Dr. Halmos
#should be 0-1, 2-3 and 4+
CCI = NA
CCI[x[["Charlson.Comorbidity.Index"]] <= 1] = "cci_0.1"
CCI[x[["Charlson.Comorbidity.Index"]] > 1 & x[["Charlson.Comorbidity.Index"]] <= 3] = "cci_2.3"
CCI[x[["Charlson.Comorbidity.Index"]] >= 4] = "cci_4up"
table(CCI)
x[,CCI:=CCI]
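An equivalent, arguably clearer way to produce the CCI bins above is `cut()`; a sketch on made-up index values:

```r
cci_raw <- c(0, 1, 2, 3, 4, 7, NA)  # fabricated Charlson index values
CCI2 <- cut(cci_raw, breaks = c(-Inf, 1, 3, Inf),
            labels = c("cci_0.1", "cci_2.3", "cci_4up"))
table(CCI2)
```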
#new variable for active/cancer after 3 months
act.can = x[["Cancer.active.3.months.from.COVID.PCR.IGG"]]
act.can = trimws(tolower(act.can))
x[["Cancer.active.3.months.from.COVID.PCR.IGG"]] = act.can
## ----echo=T-------------------------------------------------------------------
tab1 = table(alive.dead=x[["alive.dead"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
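For reference, `fisher.test()` on a fabricated 2x2 table, showing the pieces used throughout this report:

```r
# Rows: group; columns: outcome (made-up counts, not study data)
toy <- matrix(c(8, 2, 1, 9), nrow = 2)
ft <- fisher.test(toy)
ft$p.value   # exact two-sided p-value
ft$estimate  # conditional MLE of the odds ratio
```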
## ----echo=T-------------------------------------------------------------------
table(x[["recent.cancer.tx.yes.no"]])
tab1 = table(recent.cancer=x[["recent.cancer.tx.yes.no"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo =T------------------------------------------------------------------
#COVID IgG pos (column L) : numbers yes/no
table(x[["COVID.Antibody.Test.Result"]])
tab1 = table(covid.antibody.test=x[["COVID.Antibody.Test.Result"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo =T------------------------------------------------------------------
table(x[["cancer.active.yes.no"]])
tab1 = table(cancer.active=x[["cancer.active.yes.no"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo =T------------------------------------------------------------------
#Sex (column J) -how many M/F in each group
table(x[["Gender"]])
tab1 = table(gender=x[["Gender"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo =T------------------------------------------------------------------
#Comorbidity category (Column O) - A/B/C- numbers for each group
table(x[["Comorbidities.category"]])
tab1 = table(comorb=x[["Comorbidities.category"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo = T-----------------------------------------------------------------
#Malignancy category (column C)- how many in each group?
table(x[["Malignancy.category"]])
tab1 = table(mal.cat=x[["Malignancy.category"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1, simulate.p.value=T)
## ----echo = T-----------------------------------------------------------------
#Malignancy category (column C)- how many in each group?
table(x[["Dx_Cat"]])
tab1 = table(dx.cat=x[["Dx_Cat"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1, simulate.p.value=T)
## ----echo=T-------------------------------------------------------------------
ggplot(x, aes(x=BMI, color=Asymptomatic.infection.yes.no, fill=Asymptomatic.infection.yes.no)) +
geom_histogram(alpha=0.5)
t.test(BMI~Asymptomatic.infection.yes.no, data=x)
wilcox.test(BMI~Asymptomatic.infection.yes.no, data=x)
## ----echo=T-------------------------------------------------------------------
summary(x[["Age.at.Index.Date"]])
ggplot(x, aes(x=Age.at.Index.Date, color=Asymptomatic.infection.yes.no, fill=Asymptomatic.infection.yes.no)) +
geom_histogram(alpha=0.5)
t.test(Age.at.Index.Date~Asymptomatic.infection.yes.no, data=x)
wilcox.test(Age.at.Index.Date~Asymptomatic.infection.yes.no, data=x)
## ----echo =T------------------------------------------------------------------
table(x[["Treatment.setting..Home.ED..GMF.ICU."]])
## ----echo =T------------------------------------------------------------------
table(x[["Race.Ethnicity.from.epic"]])
## -----------------------------------------------------------------------------
table(x[["Malignancy.category"]])
tab1 = table(malig=x[["hemeMalig"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
res = fisher.test(tab1)
print(res)
## -----------------------------------------------------------------------------
tab1 = table(malig=x[["hemeMalig"]], mortality=x[["alive.dead"]])
print(tab1)
res = fisher.test(tab1)
print(res)
## -----------------------------------------------------------------------------
res = t.test(Age.at.Index.Date~alive.dead, data=x)
print(res)
res = wilcox.test(Age.at.Index.Date~alive.dead, data=x)
print(res)
## -----------------------------------------------------------------------------
tab1 = table(abtest=x[["COVID.Antibody.Test.Result"]], mortality=x[["alive.dead"]])
print(tab1)
res = fisher.test(tab1)
print(res)
## -----------------------------------------------------------------------------
#get the categorical variables
ix.cat = which(sapply(colnames(x), function(var){
#non numeric variables with less than 5 categories
(!is.numeric(x[[var]])) & (length(unique(x[[var]])) < 5) ||
#or variables with 2 distinct values
(length(unique(na.omit(x[[var]]))) == 2)
}))
#get the numeric variables
ix.num = which(sapply(colnames(x), function(var){
is.numeric(x[[var]])
}))
ix.num = setdiff(ix.num, ix.cat)
print(colnames(x)[ix.cat])
print(colnames(x)[ix.num])
## ---- results="asis"----------------------------------------------------------
#examine all cat vars
abtest = x[["COVID.Antibody.Test.Result"]]
for (i in ix.cat){
vals = x[[i]]
res = NA
try({
tab1 = table(abtest, vals)
res = fisher.test(tab1)
if (res$p.value < 0.05){
        cat(paste0("\n### ", colnames(x)[i]), "\n")
#print(colnames(x)[i])
cat(pander(tab1))
cat(pander(res))
cat("<br/><br/>\n")
cat("<br/><br/>\n")
cat("<br/><br/>\n")
}
}, silent=T)
}
#examine numeric variables
#run t.test and wilcox
for (i in ix.num){
vals = x[[i]]
res = NA
res2 = NA
try({
tab1 = table(abtest, vals)
res = t.test(vals~abtest)
res2 = wilcox.test(vals~abtest)
if (res$p.value < 0.05 || res2$p.value < 0.05){
        cat(paste0("\n### ", colnames(x)[i]), "\n")
#print(colnames(x)[i])
cat(pander(res))
cat(pander(res2))
cat("<br/><br/>\n")
cat("<br/><br/>\n")
cat("<br/><br/>\n")
}
}, silent=T)
}
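The two loops above screen many variables at a nominal 0.05 cutoff, so some hits will be false positives; `p.adjust()` can FDR-correct such a batch of screening p-values (fabricated values shown):

```r
pvals <- c(0.001, 0.01, 0.04, 0.20, 0.50)  # made-up screening p-values
p.adjust(pvals, method = "BH")  # Benjamini-Hochberg adjusted values
```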
## -----------------------------------------------------------------------------
abtest.val = x[["COVID.Antibody.Numerical.Value"]]
abtest.result = x[["COVID.Antibody.Test.Result"]]
plot(table(abtest.val))
abtest.result = as.numeric(factor(abtest.result))+runif(length(abtest.result))*.5
plot(abtest.val, abtest.result)
points(abtest.val[abtest.result < 1.75], abtest.result[abtest.result < 1.75], col="blue")
points(abtest.val[abtest.result > 1.75], abtest.result[abtest.result > 1.75], col="red")
pander(table(abtest.val))
#tab1 = table(abtest.result, paste0(x[[3]],"_", x[[4]]))
#write.table(file="malig.abneg.csv", cbind(tab1[1,tab1[1,]>0]), sep=",", col.names=F)
## ---- results="asis"----------------------------------------------------------
abtest.val = x[["COVID.Antibody.Numerical.Value"]]
for (i in c(ix.cat, ix.num)){
vals = x[[i]]
res = NA
try({
mod0 = glm(abtest.val~1)
mod1 = glm(abtest.val~vals)
res = anova(mod0, mod1, test="LRT")
pval = tail(res$Pr, 1)
#pval = coefficients(summary(mod1))["vals","Pr(>|t|)"]
if (pval < 0.10){
        cat(paste0("\n## ", colnames(x)[i]), "\n")
#print(colnames(x)[i])
cat(pander(res))
cat(pander(summary(mod1)))
cat("<br/><br/>\n")
cat("<br/><br/>\n")
cat("<br/><br/>\n")
}
}, silent=T)
}
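The nested-model comparison in the loop above, isolated on simulated data (note the loop extracts `res$Pr` by partial matching; the full anova column name is `Pr(>Chi)`):

```r
set.seed(1)
v <- rnorm(100)
y <- 2 * v + rnorm(100)        # outcome truly depends on v
m0 <- glm(y ~ 1)               # null model
m1 <- glm(y ~ v)               # model with the candidate predictor
lrt <- anova(m0, m1, test = "LRT")
pval <- tail(lrt[["Pr(>Chi)"]], 1)
pval  # tiny: the LRT strongly favors m1
```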
## -----------------------------------------------------------------------------
#categorical variables
print(colnames(x)[ix.cat])
#numeric variables
print(colnames(x)[ix.num])
yitz.val = x[["YITZ.AB.INDEX.DATA"]]
hist(na.omit(yitz.val), breaks=100)
## ----results="asis"-----------------------------------------------------------
for (i in c(ix.cat, ix.num)){
vals = x[[i]]
res = NA
try({
mod0 = glm(yitz.val~1)
mod1 = glm(yitz.val~vals)
res = anova(mod0, mod1, test="LRT")
pval = tail(res$Pr, 1)
#pval = coefficients(summary(mod1))["vals","Pr(>|t|)"]
if (pval < 0.10){
        cat(paste0("\n## ", colnames(x)[i]), "\n")
#print(colnames(x)[i])
cat("\n\n")
df = data.frame(vals, yitz.val)
g = ggplot(df, aes(x=vals, y=yitz.val)) + geom_point()
print(g)
cat("\n\n")
        cat("\n#### LRT\n")
        cat(pander(res))
        cat("\n#### Model Coefficients\n")
cat(pander(summary(mod1)))
cat("<br/><br/>\n")
cat("<br/><br/>\n")
cat("<br/><br/>\n")
}
}, silent=T)
}
## -----------------------------------------------------------------------------
var1 = "Dx_Cat"
var2 = "cancer.active.yes.no"
#a function that takes in 2 categorical variables
#(the 2nd should have just 2 levels)
#Make a table that shows
#for each level of the first var
# the split by levels of the 2nd
# result of fishertest vs remaining observations
makeTable <- function(x, var1, var2){
vals1 = x[[var1]]
vals2 = x[[var2]]
tab1 = table(vals1,vals2)
ns = colSums(tab1)
tab2 = t(sapply(1:nrow(tab1), function(i){
a1 = tab1[i,1]
b1 = tab1[i,2]
c1 = ns[1]-a1
d1 = ns[2]-b1
mat1 = matrix(c(a1,b1,c1,d1), ncol=2)
pval = NA
try({
res = fisher.test(mat1)
pval = res$p.value
}, silent=T)
p1 = format(round(a1/ns[1]*100, 2), nsmall = 2)
p2 = format(round(b1/ns[2]*100, 2), nsmall = 2)
s1 = paste0(a1, "(", p1, "%)")
s2 = paste0(b1, "(", p2, "%)")
s3 = format(round(pval, 2), nsmall=2)
c(s1, s2, s3)
}))
rownames(tab2) = rownames(tab1)
colnames(tab2) = c(colnames(tab1), "pval")
    #add FDR qval to table (the pval column is stored as text, so convert first)
    if (nrow(tab2) > 2){
        tab2 = cbind(tab2, qval=format(round(p.adjust(as.numeric(tab2[,3])), 2), nsmall=2))
    }
#find missing patients
ix.missing = which(apply(cbind(vals1, vals2),1, function(a){any(is.na(a))}))
list(tab2, x$MRN[ix.missing])
}
#tab1 = makeTable(x, var1, var2)
#knitr::kable(tab1)
sepvars = c("Cancer.active.3.months.from.COVID.PCR.IGG", "cancer.active.yes.no", "Asymptomatic.infection.yes.no", "COVID.Antibody.Test.Result", "alive.dead")
catvars = c("Dx_Cat", "Malignancy.category", "Malignancy.category_Version2", "Race.Ethnicity.from.epic", "Ethnicity", "Gender", "Comorbidities.category", "Solid.liquid", "CD20", "endocrine", "CAR.T", "BiTE", "endocrine...hormonal", "transplant.sct", "car.t.cellular.tx", "hemeMalig", "immuno", "Treatment.setting..Home.ED..GMF.ICU.", "Asymptomatic.infection.yes.no", "Cancer.Active.Relapsed.Remission.POD", "baseline.steroids", "Steroid.use", "CCI")
## ---- results='asis', echo=F--------------------------------------------------
#library("kableExtra")
for (var2 in sepvars){
tab0 = table(x[[var2]])
info = paste0(paste0(names(tab0), "(", tab0, ")"), collapse=" ")
    cat(paste0("\n## ", var2, ": ", info, "\n"))
for (var1 in catvars){
        cat(paste0("\n### ", var1, "\n"))
res = makeTable(x, var1, var2)
tab1 = res[[1]]
missing.pats = res[[2]]
print(knitr::kable(tab1))
cat("\n")
if (length(missing.pats) > 0){
pats = missing.pats
cat(paste0("missing: ", paste0(missing.pats, collapse=", "), "\n"))
}
}
}
## -----------------------------------------------------------------------------
getDate <- function(a){
d = NA
try({
d = as.Date(a, format="%m/%d/%Y")
}, silent=T)
d
}
#find all the data with valid dates (first/last)
d1 = sapply(x[["Date.of.first.positive.SARS.CoV.2.PCR."]], getDate)
d2 = sapply(x[["Date.of.last.positive.SARS.CoV.2.PCR."]], getDate)
shedding.time = d2 - d1
table(shedding.time)
x[,shedding.time:=shedding.time]
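Note that `sapply()` simplifies the `Date` results to their numeric form (days since 1970-01-01), which is why `d2 - d1` comes out directly in whole days; a minimal check:

```r
d1 <- sapply("01/05/2021", function(a) as.Date(a, format = "%m/%d/%Y"))
d2 <- sapply("01/15/2021", function(a) as.Date(a, format = "%m/%d/%Y"))
is.numeric(d1)  # TRUE: the Date class was dropped by sapply
d2 - d1         # 10 (days)
```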
## -----------------------------------------------------------------------------
#list all shedding times by cat
x[shedding.time>0,shedding.time, by=Malignancy.category]
#average shedding times by category
x[shedding.time>0,mean(shedding.time), by=Malignancy.category]
#is there a difference in average shedding times by category?
res = aov(shedding.time~Malignancy.category, data=subset(x, shedding.time > 0))
summary(res)
#non-parametric test (more than 2 groups)
res = kruskal.test(shedding.time~Malignancy.category, data=subset(x, shedding.time > 0))
print(res)
## -----------------------------------------------------------------------------
#list all shedding times by cat
x[shedding.time>0,shedding.time, by=Solid.liquid]
#average shedding times by category
x[shedding.time>0,mean(shedding.time), by=Solid.liquid]
#is there a difference in average shedding times by category?
res = aov(shedding.time~Solid.liquid, data=subset(x, shedding.time > 0))
summary(res)
#non-parametric test (can be used for 2 groups)
res = kruskal.test(shedding.time~Solid.liquid, data=subset(x, shedding.time > 0))
print(res)
#non-parametric wilcox 2-group comparison
res = wilcox.test(shedding.time~Solid.liquid, data=subset(x, shedding.time > 0))
print(res)
## ---- echo=F, results=F-------------------------------------------------------
sol.liq = x[["Solid.liquid"]]
ab.test = x[["COVID.Antibody.Test.Result"]]
act.can = x[["Cancer.active.3.months.from.COVID.PCR.IGG"]]
med.treat = x[["Medical.Cancer.treatment.90.days"]]
table(act.can)
table(med.treat)
table(ab.test)
table(sol.liq)
#need to fix the confounding variables
act.can = trimws(tolower(act.can))
table(trimws(tolower(med.treat)))
med.treat = (trimws(tolower(med.treat)))
#if it starts with n set it to no
#if it starts with y set to y
#NA otherwise
med.treat2 = NA
med.treat2[grep(med.treat, pat="^n")] = "no"
med.treat2[grep(med.treat, pat="^y")] = "yes"
#the one unknown patient has been reclassified as yes
med.treat2[grep(med.treat, pat="^unkno")] = "yes"
table(med.treat2)
fisher.test(table(sol.liq, ab.test))
df1 = data.frame(ab.test, sol.liq, act.can, med.treat2)
## -----------------------------------------------------------------------------
by(df1, IND=df1[,"act.can"], FUN=function(a){
tab1 = table(a[,"sol.liq"], a[,"ab.test"])
})
by(df1, IND=df1[,"act.can"], FUN=function(a){
tab1 = table(a[,"sol.liq"], a[,"ab.test"])
fisher.test(tab1)
})
## -----------------------------------------------------------------------------
by(df1, IND=df1[,"med.treat2"], FUN=function(a){
tab1 = table(a[,"sol.liq"], a[,"ab.test"])
})
by(df1, IND=df1[,"med.treat2"], FUN=function(a){
tab1 = table(a[,"sol.liq"], a[,"ab.test"])
fisher.test(tab1)
})
## -----------------------------------------------------------------------------
df1$ab.test = factor(df1$ab.test)
#just solid/liquid
res = glm(df1, formula="ab.test~sol.liq", family="binomial")
summary(res)
#with confounders
res2 = glm(df1, formula="ab.test~med.treat2 + act.can + sol.liq", family="binomial")
summary(res2)
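Logistic-model coefficients like those in `res2` are log odds ratios; a self-contained sketch of putting them back on the OR scale (simulated data, not the study data):

```r
set.seed(2)
grp <- factor(rep(c("a", "b"), each = 50))
out <- rbinom(100, 1, ifelse(grp == "a", 0.3, 0.6))
fit <- glm(out ~ grp, family = "binomial")
exp(coef(fit))             # odds ratios
exp(confint.default(fit))  # Wald 95% CIs on the OR scale
```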
## -----------------------------------------------------------------------------
v1 = x[["Date.of.first.positive.SARS.CoV.2.PCR."]]
v2 = x[["Date.and.Time.of.COVID.Antibody.Test"]]
d1 = as.Date(v1, format="%m/%d/%Y")
#remove clock time stamp
v2 = gsub(v2, pat=" .*", rep="")
d2 = as.Date(v2, format="%m/%d/%Y")
diff.time = as.numeric(d2-d1)
#plot(sort(table(diff.time)))
hist(diff.time, 100)
mean(diff.time, na.rm=T)
median(diff.time, na.rm=T)
## ----exit, echo=T, warning=F, message=FALSE, cache=F--------------------------
knit_exit()
## Source: report.R (repo: kith-pradhan/CovidCancerReport, no license specified)
## ----part1, include=F, warning=F, message=FALSE, cache=F----------------------
#load the R packages we need.
library("pander")
library("knitr")
library("rmarkdown")
library("data.table")
library("ggplot2")
library("stringr")
#x = fread("master_11212020.csv")
#x = fread("master_11252020.csv")
#x = fread("master_12012020.csv")
#x = fread("master_12022020.csv")
#x = fread("master_12062020.csv")
#x = fread("master_12072020.csv")
#x = fread("master_01102021.csv")
#x = fread("master_01122021.csv")
#x = fread("master_01142021.csv")
#x = fread("master_01152021.csv")
#x = fread("master_01162021.csv")
x = fread("master_01182021.csv")
colnames(x) = make.names(colnames(x))
#cbind(colnames(x))
#TODO: make sure these numbers are correct
#there are two columns labelled diagnosis
which(colnames(x)=="diagnosis")
x[,2]
x[,6]
x[x[[2]] != x[[6]], 1:6]
#get rid of column 2
colnames(x)[2] = "diagnosis.old"
#we need to remove these patients
`%notin%` <- Negate(`%in%`)
x = x[MRN %notin% c(02974444, 03101756,01263443, 05796848),]
#exclude patients that do not have cancer
x = subset(x, Malignancy.category!="")
#There's a category call "GI lower" and one dcalled "GI-lower"
#I'm assuming these should be merged
x[["Malignancy.category"]] = gsub(x[["Malignancy.category"]], pat="-", rep=" ")
table(x[["Malignancy.category"]])
#here we'll fix some of the problem variables
x[["YITZ.AB.INDEX.DATA"]] = as.numeric(x[["YITZ.AB.INDEX.DATA"]])
#Alive/Dead (column AZ)
temp1 = trimws(tolower(x[["alive"]]))
table(temp1)
alive.dead=temp1
#alive.dead = NA
#alive.dead[grep(temp1, pat="a[lvi][vli]")] = "alive"
#alive.dead[grep(temp1, pat="d[ie][aei]")] = "dead"
##cbind(temp1, alive.dead)
x[,alive.dead:=alive.dead]
#Recent cancer tx (defined as in last 12 months) Column AF : yes/no
table(x[["Medical.Cancer.Tx.in.the.past.12.months...Yes.No."]])
recent.cancer.tx = trimws(tolower(x[["Medical.Cancer.Tx.in.the.past.12.months...Yes.No."]]))
#table(recent.cancer.tx)
recent.cancer.tx.yes.no = NA
recent.cancer.tx.yes.no[grep(recent.cancer.tx, pat="^no")]="no"
recent.cancer.tx.yes.no[grep(recent.cancer.tx, pat="^yes")]="yes"
#cbind(recent.cancer.tx.yes.no, recent.cancer.tx)
x[,recent.cancer.tx.yes.no:=recent.cancer.tx.yes.no]
#Active cancer (column AB ) how many w active cancer in symptomatic vs asymptomatic
table(x[["Is.cancer.active."]])
#clean up a bit, only use entries that are yes/no
cancer.active = trimws(tolower(x[["Is.cancer.active."]]))
#table(cancer.active)
cancer.active.yes.no1 = NA
#cancer.active.yes.no[grep(cancer.active, pat="(no$)|(no )", perl=T)]="no"
cancer.active.yes.no1[grep(cancer.active, pat="(no)", perl=T)]="no"
cancer.active.yes.no1[grep(cancer.active, pat="(yes)|(mds)|(mgus)", perl=T)]="yes"
#cbind(cancer.active, cancer.active.yes.no)
x[,cancer.active.yes.no:=cancer.active.yes.no1]
#change the variable name
colnames(x)[which(colnames(x) == "Comorbidities.category.A.0.1.B.2.3.C....3")] = "Comorbidities.category"
#fix entries a bit
x[["COVID.Antibody.Test.Result"]] = tolower(x[["COVID.Antibody.Test.Result"]])
#BMI (column P)- mean/median for both sheets
x[["BMI"]] = as.numeric(x[["BMI"]])
#Age (column G)- mean/median for both sheets
x[["Age.at.Index.Date"]] = as.numeric(x[["Age.at.Index.Date"]])
hemeMalig = NA
hemeMalig[x[["Malignancy.category"]] == "Heme malignancy"] = "heme"
hemeMalig[x[["Malignancy.category"]] != "Heme malignancy"] = "nonheme"
x[,hemeMalig:=hemeMalig]
x[["Treatment.setting..Home.ED..GMF.ICU."]] = trimws(tolower(x[["Treatment.setting..Home.ED..GMF.ICU."]]))
#set the blank entires to N/AW to be consistent
x[Treatment.setting..Home.ED..GMF.ICU.=="",Treatment.setting..Home.ED..GMF.ICU.:="n/a"]
#fix these new variables
x[["Cancer.Active.Relapsed.Remission.POD"]] = (trimws(tolower(x[["Cancer.Active.Relapsed.Remission.POD"]])))
x[["Cancer.Active.Relapsed.Remission.POD"]] = gsub(x[["Cancer.Active.Relapsed.Remission.POD"]], pat="relapsed", rep="relapse")
table(x[,"Cancer.Active.Relapsed.Remission.POD"])
x[["baseline.steroids"]] = trimws(tolower(x[["baseline.steroids"]]))
table(x[["baseline.steroids"]])
steroid = x[["Steroid.use"]]
steroid[steroid != "Yes"] = "No"
table(steroid)
x[["Steroid.use"]] = steroid
x[["Charlson.Comorbidity.Index"]]
#the categories for charlson comorbidity index per DR. Halmos
#should be 0-1, 2-3 and 4+
CCI = NA
CCI[x[["Charlson.Comorbidity.Index"]] <= 1] = "cci_0.1"
CCI[x[["Charlson.Comorbidity.Index"]] > 1 & x[["Charlson.Comorbidity.Index"]] <= 3] = "cci_2.3"
CCI[x[["Charlson.Comorbidity.Index"]] >= 4] = "cci_4up"
table(CCI)
x[,CCI:=CCI]
#new variable for active/cancer after 3 months
act.can = x[["Cancer.active.3.months.from.COVID.PCR.IGG"]]
act.can = trimws(tolower(act.can))
x[["Cancer.active.3.months.from.COVID.PCR.IGG"]] = act.can
## ----echo=T-------------------------------------------------------------------
tab1 = table(alive.dead=x[["alive.dead"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo=T-------------------------------------------------------------------
table(x[["recent.cancer.tx.yes.no"]])
tab1 = table(recent.cancer=x[["recent.cancer.tx.yes.no"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo =T------------------------------------------------------------------
#COVID IgG pos (column L) : numbers yes/no
table(x[["COVID.Antibody.Test.Result"]])
tab1 = table(covid.antibody.test=x[["COVID.Antibody.Test.Result"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo =T------------------------------------------------------------------
table(x[["cancer.active.yes.no"]])
tab1 = table(cancer.active=x[["cancer.active.yes.no"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo =T------------------------------------------------------------------
#Sex (column J) -how many M/F in each group
table(x[["Gender"]])
tab1 = table(gender=x[["Gender"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo =T------------------------------------------------------------------
#Comorbidity category (Column O) - A/B/C- numbers for each group
table(x[["Comorbidities.category"]])
tab1 = table(comorb=x[["Comorbidities.category"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1)
## ----echo = T-----------------------------------------------------------------
#Malignancy category (column C)- how many in each group?
table(x[["Malignancy.category"]])
tab1 = table(mal.cat=x[["Malignancy.category"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1, simulate.p.value=T)
## ----echo = T-----------------------------------------------------------------
#Malignancy category (column C)- how many in each group?
table(x[["Dx_Cat"]])
tab1 = table(dx.cat=x[["Dx_Cat"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
fisher.test(tab1, simulate.p.value=T)
## ----echo=T-------------------------------------------------------------------
ggplot(x, aes(x=BMI, color=Asymptomatic.infection.yes.no, fill=Asymptomatic.infection.yes.no)) +
geom_histogram(alpha=0.5)
t.test(BMI~Asymptomatic.infection.yes.no, data=x)
wilcox.test(BMI~Asymptomatic.infection.yes.no, data=x)
## ----echo=T-------------------------------------------------------------------
summary(x[["Age.at.Index.Date"]])
ggplot(x, aes(x=Age.at.Index.Date, color=Asymptomatic.infection.yes.no, fill=Asymptomatic.infection.yes.no)) +
geom_histogram(alpha=0.5)
t.test(Age.at.Index.Date~Asymptomatic.infection.yes.no, data=x)
wilcox.test(Age.at.Index.Date~Asymptomatic.infection.yes.no, data=x)
## ----echo =T------------------------------------------------------------------
table(x[["Treatment.setting..Home.ED..GMF.ICU."]])
## ----echo =T------------------------------------------------------------------
table(x[["Race.Ethnicity.from.epic"]])
## -----------------------------------------------------------------------------
table(x[["Malignancy.category"]])
tab1 = table(malig=x[["hemeMalig"]], asymp.inf=x[["Asymptomatic.infection.yes.no"]])
print(tab1)
res = fisher.test(tab1)
print(res)
## -----------------------------------------------------------------------------
tab1 = table(malig=x[["hemeMalig"]], mortality=x[["alive.dead"]])
print(tab1)
res = fisher.test(tab1)
print(res)
## -----------------------------------------------------------------------------
res = t.test(Age.at.Index.Date~alive.dead, data=x)
print(res)
res = wilcox.test(Age.at.Index.Date~alive.dead, data=x)
print(res)
## -----------------------------------------------------------------------------
tab1 = table(abtest=x[["COVID.Antibody.Test.Result"]], mortality=x[["alive.dead"]])
print(tab1)
res = fisher.test(tab1)
print(res)
## -----------------------------------------------------------------------------
#get the categorical variables
ix.cat = which(sapply(colnames(x), function(var){
#non numeric variables with less than 5 categories
(!is.numeric(x[[var]])) & (length(unique(x[[var]])) < 5) ||
#or variables with 2 distinct values
(length(unique(na.omit(x[[var]]))) == 2)
}))
#get the numeric variables
ix.num = which(sapply(colnames(x), function(var){
is.numeric(x[[var]])
}))
ix.num = setdiff(ix.num, ix.cat)
print(colnames(x)[ix.cat])
print(colnames(x)[ix.num])
## ---- results="asis"----------------------------------------------------------
#examine all cat vars
abtest = x[["COVID.Antibody.Test.Result"]]
for (i in ix.cat){
vals = x[[i]]
res = NA
try({
tab1 = table(abtest, vals)
res = fisher.test(tab1)
if (res$p.value < 0.05){
cat(paste0("\n###", colnames(x)[i]), "\n")
#print(colnames(x)[i])
cat(pander(tab1))
cat(pander(res))
cat("<br/><br/>\n")
cat("<br/><br/>\n")
cat("<br/><br/>\n")
}
}, silent=T)
}
#examine numeric variables
#run t.test and wilcox
for (i in ix.num){
vals = x[[i]]
res = NA
res2 = NA
try({
tab1 = table(abtest, vals)
res = t.test(vals~abtest)
res2 = wilcox.test(vals~abtest)
if (res$p.value < 0.05 || res2$p.value < 0.05){
cat(paste0("\n###", colnames(x)[i]), "\n")
#print(colnames(x)[i])
cat(pander(res))
cat(pander(res2))
cat("<br/><br/>\n")
cat("<br/><br/>\n")
cat("<br/><br/>\n")
}
}, silent=T)
}
## -----------------------------------------------------------------------------
abtest.val = x[["COVID.Antibody.Numerical.Value"]]
abtest.result = x[["COVID.Antibody.Test.Result"]]
plot(table(abtest.val))
abtest.result = as.numeric(factor(abtest.result))+runif(length(abtest.result))*.5
plot(abtest.val, abtest.result)
points(abtest.val[abtest.result < 1.75], abtest.result[abtest.result < 1.75], col="blue")
points(abtest.val[abtest.result > 1.75], abtest.result[abtest.result > 1.75], col="red")
pander(table(abtest.val))
#tab1 = table(abtest.result, paste0(x[[3]],"_", x[[4]]))
#write.table(file="malig.abneg.csv", cbind(tab1[1,tab1[1,]>0]), sep=",", col.names=F)
## ---- results="asis"----------------------------------------------------------
abtest.val = x[["COVID.Antibody.Numerical.Value"]]
for (i in c(ix.cat, ix.num)){
vals = x[[i]]
res = NA
try({
mod0 = glm(abtest.val~1)
mod1 = glm(abtest.val~vals)
res = anova(mod0, mod1, test="LRT")
pval = tail(res$Pr, 1)
#pval = coefficients(summary(mod1))["vals","Pr(>|t|)"]
if (pval < 0.10){
cat(paste0("\n##", colnames(x)[i]), "\n")
#print(colnames(x)[i])
cat(pander(res))
cat(pander(summary(mod1)))
cat("<br/><br/>\n")
cat("<br/><br/>\n")
cat("<br/><br/>\n")
}
}, silent=T)
}
## -----------------------------------------------------------------------------
#categorical variables
print(colnames(x)[ix.cat])
#numeric variables
print(colnames(x)[ix.num])
yitz.val = x[["YITZ.AB.INDEX.DATA"]]
hist(na.omit(yitz.val), breaks=100)
## ----results="asis"-----------------------------------------------------------
for (i in c(ix.cat, ix.num)){
vals = x[[i]]
res = NA
try({
mod0 = glm(yitz.val~1)
mod1 = glm(yitz.val~vals)
res = anova(mod0, mod1, test="LRT")
pval = tail(res$Pr, 1)
#pval = coefficients(summary(mod1))["vals","Pr(>|t|)"]
if (pval < 0.10){
cat(paste0("\n##", colnames(x)[i]), "\n")
#print(colnames(x)[i])
cat("\n\n")
df = data.frame(vals, yitz.val)
g = ggplot(df, aes(x=vals, y=yitz.val)) + geom_point()
print(g)
cat("\n\n")
cat("\n####LRT\n")
cat(pander(res))
cat("\n####Model Coefficients\n")
cat(pander(summary(mod1)))
cat("<br/><br/>\n")
cat("<br/><br/>\n")
cat("<br/><br/>\n")
}
}, silent=T)
}
## -----------------------------------------------------------------------------
var1 = "Dx_Cat"
var2 = "cancer.active.yes.no"
#a function that takes in 2 categorical variables
#(the 2nd should have just 2 levels)
#Make a table that shows
#for each level of the first var
# the split by levels of the 2nd
# result of fishertest vs remaining observations
makeTable <- function(x, var1, var2){
vals1 = x[[var1]]
vals2 = x[[var2]]
tab1 = table(vals1,vals2)
ns = colSums(tab1)
pvals = rep(NA_real_, nrow(tab1))
tab2 = t(sapply(1:nrow(tab1), function(i){
a1 = tab1[i,1]
b1 = tab1[i,2]
c1 = ns[1]-a1
d1 = ns[2]-b1
mat1 = matrix(c(a1,b1,c1,d1), ncol=2)
pval = NA
try({
res = fisher.test(mat1)
pval = res$p.value
}, silent=T)
pvals[i] <<- pval
p1 = format(round(a1/ns[1]*100, 2), nsmall = 2)
p2 = format(round(b1/ns[2]*100, 2), nsmall = 2)
s1 = paste0(a1, "(", p1, "%)")
s2 = paste0(b1, "(", p2, "%)")
s3 = format(round(pval, 2), nsmall=2)
c(s1, s2, s3)
}))
rownames(tab2) = rownames(tab1)
colnames(tab2) = c(colnames(tab1), "pval")
#add FDR qval, computed on the numeric p-values (the table cells are strings)
if (nrow(tab2) > 2){
tab2 = cbind(tab2, qval=format(round(p.adjust(pvals, method="fdr"), 2), nsmall=2))
}
#find missing patients
ix.missing = which(apply(cbind(vals1, vals2),1, function(a){any(is.na(a))}))
list(tab2, x$MRN[ix.missing])
}
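# A minimal sketch of the per-row 2x2 construction inside makeTable(), on
# synthetic counts: one level of var1 versus all remaining observations,
# split by the two levels of var2.

```r
# Toy var1-by-var2 count table (two rows, two columns; made-up labels)
tab1 = matrix(c(8, 2, 12, 18), ncol=2,
              dimnames=list(c("grpA","grpB"), c("neg","pos")))
ns = colSums(tab1)                       # column totals: 10 and 30
# Row 1 ("grpA") vs. everything else, as a 2x2 for Fisher's exact test
a1 = tab1[1,1]; b1 = tab1[1,2]
mat1 = matrix(c(a1, b1, ns[1]-a1, ns[2]-b1), ncol=2)
pval = fisher.test(mat1)$p.value
```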
#tab1 = makeTable(x, var1, var2)
#knitr::kable(tab1)
sepvars = c("Cancer.active.3.months.from.COVID.PCR.IGG", "cancer.active.yes.no", "Asymptomatic.infection.yes.no", "COVID.Antibody.Test.Result", "alive.dead")
catvars = c("Dx_Cat", "Malignancy.category", "Malignancy.category_Version2", "Race.Ethnicity.from.epic", "Ethnicity", "Gender", "Comorbidities.category", "Solid.liquid", "CD20", "endocrine", "CAR.T", "BiTE", "endocrine...hormonal", "transplant.sct", "car.t.cellular.tx", "hemeMalig", "immuno", "Treatment.setting..Home.ED..GMF.ICU.", "Asymptomatic.infection.yes.no", "Cancer.Active.Relapsed.Remission.POD", "baseline.steroids", "Steroid.use", "CCI")
## ---- results='asis', echo=F--------------------------------------------------
#library("kableExtra")
for (var2 in sepvars){
tab0 = table(x[[var2]])
info = paste0(paste0(names(tab0), "(", tab0, ")"), collapse=" ")
cat(paste0("\n##", var2, ": ", info, "\n"))
for (var1 in catvars){
cat(paste0("\n###", var1, "\n"))
res = makeTable(x, var1, var2)
tab1 = res[[1]]
missing.pats = res[[2]]
print(knitr::kable(tab1))
cat("\n")
if (length(missing.pats) > 0){
cat(paste0("missing: ", paste0(missing.pats, collapse=", "), "\n"))
}
}
}
## -----------------------------------------------------------------------------
getDate <- function(a){
d = NA
try({
d = as.Date(a, format="%m/%d/%Y")
}, silent=T)
d
}
#find all the data with valid dates (first/last)
d1 = sapply(x[["Date.of.first.positive.SARS.CoV.2.PCR."]], getDate)
d2 = sapply(x[["Date.of.last.positive.SARS.CoV.2.PCR."]], getDate)
shedding.time = d2 - d1
table(shedding.time)
x[,shedding.time:=shedding.time]
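# One subtlety worth noting: sapply() drops the Date class, so d1/d2 come
# back as plain numerics (days since 1970-01-01) and d2 - d1 is a numeric
# day count. A self-contained illustration with made-up dates:

```r
first = c("3/01/2020", "not a date", "3/05/2020")
last  = c("3/10/2020", "3/02/2020", NA)
d1 = sapply(first, function(a) as.Date(a, format="%m/%d/%Y"))
d2 = sapply(last,  function(a) as.Date(a, format="%m/%d/%Y"))
shed = d2 - d1      # numeric days; NA wherever either date failed to parse
```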
## -----------------------------------------------------------------------------
#list all shedding times by cat
x[shedding.time>0,shedding.time, by=Malignancy.category]
#average shedding times by category
x[shedding.time>0,mean(shedding.time), by=Malignancy.category]
#is there a difference in average shedding times by category?
res = aov(shedding.time~Malignancy.category, data=subset(x, shedding.time > 0))
summary(res)
#non-parametric test (more than 2 groups)
res = kruskal.test(shedding.time~Malignancy.category, data=subset(x, shedding.time > 0))
print(res)
## -----------------------------------------------------------------------------
#list all shedding times by cat
x[shedding.time>0,shedding.time, by=Solid.liquid]
#average shedding times by category
x[shedding.time>0,mean(shedding.time), by=Solid.liquid]
#is there a difference in average shedding times by category?
res = aov(shedding.time~Solid.liquid, data=subset(x, shedding.time > 0))
summary(res)
#non-parametric test (can also be used for 2 groups)
res = kruskal.test(shedding.time~Solid.liquid, data=subset(x, shedding.time > 0))
print(res)
#non-parametric wilcox 2-group comparison
res = wilcox.test(shedding.time~Solid.liquid, data=subset(x, shedding.time > 0))
print(res)
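# For two groups without ties, the Kruskal-Wallis statistic is the square of
# the normal-approximation Wilcoxon Z, so the two tests agree when the
# Wilcoxon test uses the normal approximation without continuity correction.
# A quick check on synthetic data (not the study variables):

```r
set.seed(1)
g = factor(rep(c("a","b"), each=20))
y = rnorm(40) + (g == "b")*0.5         # continuous, so no ties
p.kw = kruskal.test(y ~ g)$p.value
p.w  = wilcox.test(y ~ g, exact=FALSE, correct=FALSE)$p.value
c(p.kw, p.w)                           # equal up to floating point
```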
## ---- echo=F, results=F-------------------------------------------------------
sol.liq = x[["Solid.liquid"]]
ab.test = x[["COVID.Antibody.Test.Result"]]
act.can = x[["Cancer.active.3.months.from.COVID.PCR.IGG"]]
med.treat = x[["Medical.Cancer.treatment.90.days"]]
table(act.can)
table(med.treat)
table(ab.test)
table(sol.liq)
#need to clean up the coding of the confounding variables
act.can = trimws(tolower(act.can))
table(trimws(tolower(med.treat)))
med.treat = (trimws(tolower(med.treat)))
#if it starts with n set it to no
#if it starts with y set to y
#NA otherwise
med.treat2 = rep(NA_character_, length(med.treat))
med.treat2[grep(med.treat, pat="^n")] = "no"
med.treat2[grep(med.treat, pat="^y")] = "yes"
#the one unknown patient has been reclassified as yes!
med.treat2[grep(med.treat, pat="^unkno")] = "yes"
table(med.treat2)
fisher.test(table(sol.liq, ab.test))
df1 = data.frame(ab.test, sol.liq, act.can, med.treat2)
## -----------------------------------------------------------------------------
by(df1, IND=df1[,"act.can"], FUN=function(a){
tab1 = table(a[,"sol.liq"], a[,"ab.test"])
})
by(df1, IND=df1[,"act.can"], FUN=function(a){
tab1 = table(a[,"sol.liq"], a[,"ab.test"])
fisher.test(tab1)
})
## -----------------------------------------------------------------------------
by(df1, IND=df1[,"med.treat2"], FUN=function(a){
tab1 = table(a[,"sol.liq"], a[,"ab.test"])
})
by(df1, IND=df1[,"med.treat2"], FUN=function(a){
tab1 = table(a[,"sol.liq"], a[,"ab.test"])
fisher.test(tab1)
})
## -----------------------------------------------------------------------------
df1$ab.test = factor(df1$ab.test)
#just solid/liquid
res = glm(df1, formula="ab.test~sol.liq", family="binomial")
summary(res)
#with confounders
res2 = glm(df1, formula="ab.test~med.treat2 + act.can + sol.liq", family="binomial")
summary(res2)
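# Since logistic coefficients are on the log-odds scale, exponentiating them
# gives odds ratios; exp(coef(res2)) would do the same for the model above.
# A self-contained illustration on deterministic toy counts (confint.default
# gives Wald intervals):

```r
# Toy outcome: 12/30 positive in one group, 21/30 in the other
grp = factor(rep(c("liquid","solid"), each=30))
out = factor(c(rep(c("neg","pos"), c(18,12)), rep(c("neg","pos"), c(9,21))),
             levels=c("neg","pos"))
fit = glm(out ~ grp, family="binomial")
exp(cbind(OR = coef(fit), confint.default(fit)))  # odds ratios, Wald 95% CIs
```

With a single binary predictor the fitted odds ratio reproduces the empirical one, here (21/9)/(12/18) = 3.5.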
## -----------------------------------------------------------------------------
v1 = x[["Date.of.first.positive.SARS.CoV.2.PCR."]]
v2 = x[["Date.and.Time.of.COVID.Antibody.Test"]]
d1 = as.Date(v1, format="%m/%d/%Y")
#remove clock time stamp
v2 = gsub(v2, pat=" .*", rep="")
d2 = as.Date(v2, format="%m/%d/%Y")
diff.time = as.numeric(d2-d1)
#plot(sort(table(diff.time)))
hist(diff.time, 100)
mean(diff.time, na.rm=T)
median(diff.time, na.rm=T)
## ----exit, echo=T, warning=F, message=FALSE, cache=F--------------------------
knit_exit()