# Source file: R/asciiDefault.r from repo oganm/ascii (branch master, no license detected)
##' @param x An R object of class found among
##' \code{methods(ascii)}. If \code{x} is a list, it should be a list
##' of character strings (it will produce a bulleted list output by
##' default).
##' @param include.rownames logical. If \code{TRUE}, the row names are printed.
##' The default value depends on the class of \code{x}.
##' @param include.colnames logical. If \code{TRUE}, the column names are
##' printed. The default value depends on the class of \code{x}.
##' @param rownames Character vector (replicated or truncated as necessary)
##' indicating rownames of the corresponding rows. If \code{NULL} (default)
##' the row names are not modified.
##' @param colnames Character vector (replicated or truncated as necessary)
##' indicating colnames of the corresponding columns. If \code{NULL}
##' (default) the column names are not modified.
##' @param format Character vector or matrix indicating the format for the
##' corresponding columns. These values are passed to the \code{formatC}
##' function. Use \code{"d"} (for integers), \code{"f"}, \code{"e"},
##' \code{"E"}, \code{"g"}, \code{"G"}, \code{"fg"} (for reals), or
##' \code{"s"} (for strings). \code{"f"} gives numbers in the usual
##' \code{xxx.xxx} format; \code{"e"} and \code{"E"} give \code{n.ddde+nn} or
##' \code{n.dddE+nn} (scientific format); \code{"g"} and \code{"G"} put
##' \code{x[i]} into scientific format only if it saves space to do so.
##' \code{"fg"} uses fixed format like \code{"f"}, but interprets
##' \code{digits} as the number of \emph{significant} digits. Note that this
##' can lead to quite long result strings. Finally, \code{"nice"} is like
##' \code{"f"}, but with 0 digits if \code{x} is an integer. The default
##' depends on the class of \code{x}.
##' @param digits Numeric vector of length equal to the number of columns of
##' the resulting table (otherwise it will be replicated or truncated as
##' necessary) indicating the number of digits to display in the
##' corresponding columns. Default is \code{2}.
##' @param decimal.mark The character to be used to indicate the numeric
##' decimal point. Default is \code{"."}.
##' @param na.print The character string used to represent \code{NA} values.
##' Default is \code{""}.
##' @param caption Character vector of length 1 containing the table's caption
##' or title. Set to \code{""} to suppress the caption. Default value is
##' \code{NULL}.
##' @param caption.level Character or numeric vector of length 1 containing the
##' caption's level. Can take the following values: \code{0} to \code{5},
##' \code{"."} (block titles in asciidoc markup), \code{"s"} (strong),
##' \code{"e"} (emphasis), \code{"m"} (monospaced) or \code{""} (no markup).
##' Default is \code{NULL}.
##' @param width Numeric vector of length one containing the table width
##' relative to the available width (expressed as a percentage value,
##' \code{1}\dots{} \code{99}). Default is \code{0} (all available width).
##' @param frame Character vector of length one. Defines the table border, and
##' can take the following values: \code{"topbot"} (top and bottom),
##' \code{"all"} (all sides), \code{"none"} and \code{"sides"} (left and
##' right). The default value is \code{NULL}.
##' @param grid Character vector of length one. Defines which ruler lines are
##' drawn between table rows and columns, and can take the following values:
##' \code{"all"}, \code{"rows"}, \code{"cols"} and \code{"none"}. Default is
##' \code{NULL}.
##' @param valign Vector or matrix indicating vertical alignment of all cells
##' in table. Can take the following values: \code{"top"}, \code{"bottom"}
##' and \code{"middle"}. Default is \code{""}.
##' @param header logical or numeric. If \code{TRUE} or \code{1}, \code{2},
##' \dots{}, the first line(s) of the table is (are) emphasized. The default
##' value depends on the class of \code{x}.
##' @param footer logical or numeric. If \code{TRUE} or \code{1}, the last
##' line(s) of the table is (are) emphasized. The default value depends on
##' the class of \code{x}.
##' @param align Vector or matrix indicating the alignment of the corresponding
##' columns. Can be composed with \code{"r"} (right), \code{"l"} (left) and
##' \code{"c"} (center). Default value is \code{NULL}.
##' @param col.width Numeric vector of length equal to the number of columns of
##' the resulting table (otherwise it will be replicated or truncated as
##' necessary) indicating width of the corresponding columns (integer
##' proportional values). Default is \code{1}.
##' @param style Character vector or matrix indicating the style of the
##' corresponding columns. Can be composed with \code{"d"} (default),
##' \code{"s"} (strong), \code{"e"} (emphasis), \code{"m"} (monospaced),
##' \code{"h"} (header), \code{"a"} (cells can contain any of the AsciiDoc
##' elements that are allowed inside a document), \code{"l"} (literal),
##' \code{"v"} (verse; all line breaks are retained). Default is
##' \code{NULL}.
##' @param tgroup Character vector or a list of character vectors defining
##' major top column headings. The default is to have none (\code{NULL}).
##' @param n.tgroup A numeric vector or a list of numeric vectors containing
##' the number of columns for which each element in tgroup is a heading. For
##' example, specify \code{tgroup=c("Major 1","Major 2")},
##' \code{n.tgroup=c(3,3)} if \code{"Major 1"} is to span columns 1-3 and
##' \code{"Major 2"} is to span columns 4-6.
##' @param talign Character vector of length one defining alignment of major
##' top column headings.
##' @param tvalign Character vector of length one defining vertical alignment
##' of major top column headings.
##' @param tstyle Character vector of length one indicating the style of major
##' top column headings.
##' @param bgroup Character vector or list of character vectors defining major
##' bottom column headings. The default is to have none (\code{NULL}).
##' @param n.bgroup A numeric vector containing the number of columns for which
##' each element in bgroup is a heading.
##' @param balign Character vector of length one defining alignment of major
##' bottom column headings.
##' @param bvalign Character vector of length one defining vertical alignment
##' of major bottom column headings.
##' @param bstyle Character vector of length one indicating the style of major
##' bottom column headings.
##' @param lgroup Character vector or list of character vectors defining major
##' left row headings. The default is to have none (\code{NULL}).
##' @param n.lgroup A numeric vector containing the number of rows for which
##' each element in lgroup is a heading. Column names count in the row
##' numbers if \code{include.colnames = TRUE}.
##' @param lalign Character vector of length one defining alignment of major
##' left row headings.
##' @param lvalign Character vector of length one defining vertical alignment
##' of major left row headings.
##' @param lstyle Character vector of length one indicating the style of major
##' left row headings.
##' @param rgroup Character vector or list of character vectors defining major
##' right row headings. The default is to have none (\code{NULL}).
##' @param n.rgroup A numeric vector containing the number of rows for which
##' each element in rgroup is a heading. Column names count in the row
##' numbers if \code{include.colnames = TRUE}.
##' @param ralign Character vector of length one defining alignment of major
##' right row headings.
##' @param rvalign Character vector of length one defining vertical alignment
##' of major right row headings.
##' @param rstyle Character vector of length one indicating the style of major
##' right row headings.
##' @param list.type Character vector of length one indicating the list type
##' (\code{"bullet"}, \code{"number"}, \code{"label"} or \code{"none"}). If
##' \code{"label"}, \code{names(list)} is used for labels. Default is
##' \code{"bullet"}.
##' @param ... Additional arguments. (Currently ignored.)
##' @keywords print
##' @rdname ascii
##' @export
##' @method ascii default
ascii.default <- function(x, include.rownames = TRUE, include.colnames = TRUE, rownames = NULL, colnames = NULL, format = "f", digits = 2, decimal.mark = ".", na.print = "", caption = NULL, caption.level = NULL, width = 0, frame = NULL, grid = NULL, valign = NULL, header = TRUE, footer = FALSE, align = NULL, col.width = 1, style = NULL, tgroup = NULL, n.tgroup = NULL, talign = "c", tvalign = "middle", tstyle = "h", bgroup = NULL, n.bgroup = NULL, balign = "c", bvalign = "middle", bstyle = "h", lgroup = NULL, n.lgroup = NULL, lalign = "c", lvalign = "middle", lstyle = "h", rgroup = NULL, n.rgroup = NULL, ralign = "c", rvalign = "middle", rstyle = "h", list.type = "bullet", ...) {
if (is.list(x)) {
x <- lapply(x, as.character)
obj <- asciiList$new(x = x, caption = caption, caption.level = caption.level, list.type = list.type)
}
else {
y <- as.data.frame(x)
obj <- asciiTable$new(x = y, include.rownames = include.rownames, include.colnames = include.colnames, rownames = rownames, colnames = colnames, format = format, digits = digits, decimal.mark = decimal.mark, na.print = na.print, caption = caption, caption.level = caption.level, width = width, frame = frame, grid = grid, valign = valign, header = header, footer = footer, align = align, col.width = col.width, style = style, tgroup = tgroup, n.tgroup = n.tgroup, talign = talign, tvalign = tvalign, tstyle = tstyle, bgroup = bgroup, n.bgroup = n.bgroup, balign = balign, bvalign = bvalign, bstyle = bstyle, lgroup = lgroup, n.lgroup = n.lgroup, lalign = lalign, lvalign = lvalign, lstyle = lstyle, rgroup = rgroup, n.rgroup = n.rgroup, ralign = ralign, rvalign = rvalign, rstyle = rstyle)
}
return(obj)
}
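The `format` codes documented above are passed straight to `formatC()`. A few standalone base-R examples of what each code produces (illustration only, independent of the ascii package):

```r
# The roxygen text above notes that `format` values are handed to formatC().
# Standalone base-R examples of the listed codes:
formatC(1234.5678, format = "f", digits = 2)   # fixed:      "1234.57"
formatC(1234.5678, format = "e", digits = 2)   # scientific: "1.23e+03"
formatC(1234.5678, format = "g", digits = 3)   # scientific only if it saves space
formatC(42L,       format = "d")               # integer:    "42"
formatC("text",    format = "s")               # string:     "text"
```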
# Source file: Acc_Calibration_Mongoose_Behavior_2022.R from repo pvandevuurst/Accelerometer_Callibration_Mongoose_Behavior (branch main, no license detected)
#Begin by setting a working directory (i.e., file where all your data is stored)
setwd("C:/Users/paige/Desktop/Data/Accele_Data_AlexLab")
#install.packages(c("pracma", "rgl"))
#install.packages("rgl")
#install the library for these packages in your session
library(pracma)
library(rgl)
library(ggplot2)
library(xfun)
library(dplyr)
library(tidyr)
library(magrittr)
#install.packages("xfun")
#install.packages("rgl", repos='http://cran.cnr.berkeley.edu/')
#You will need X11 to operate on a mac - download at https://www.xquartz.org/
#rgl does not work on macOS Monterey
## This code provides an example of calibrating the Technosmart Europe AGM magnetometer by fitting an ellipsoid to a set of calibration data points
### and transforming it into a sphere centered at zero
##Theoretical aspects of this script are based on STMicroelectronics' algorithm by Andrea Vitali, which you may find at:
###https://www.st.com/content/ccc/resource/technical/document/design_tip/group0/a2/98/f5/d4/9c/48/4a/d1/DM00286302/files/DM00286302.pdf/jcr:content/translations/en.DM00286302.pdf
#Calibration movement manual for TechnoSmart Model Accelerometer can be found at: http://www.technosmart.eu/gallery.php
#To mitigate formatting errors, open the calibration and movement CSV (i.e., data downloaded from the accelerometer) in a reader first
##Delete any non-numeric rows
#Now you are ready to read your CSV file and source the calibration functions text file
source("C:/Users/paige/Desktop/Data/Accele_Data_AlexLab/AGM_functions.txt")
#Set source to your working directory (where data and AGM_functions text file are)
calibration.data = read.csv("calibration.csv", header=T)
#Plotting the raw data: the three axes should produce an ellipsoid-like shape
##Turning on/off the logger will produce outlying values which should be excluded from analysis.
#Plot the calibration data
doubleplot(calibration.data$Mx,calibration.data$My,calibration.data$Mz,"Raw calibration data")
mel_calibration=calibration.data
df = mel_calibration %>%
mutate(n=row_number()) %>%
subset((between(n,1300,4300) | between(n,7700,9500)) & between(Mx,-50,50) & between(My,-50,50) & between(Mz,-50,50))
#Accelerometer axis values: restrict the working range of the acc field; values above or below 50 are treated as impossible
#calibration.data#[1000:20000,] ## adjust the row index to subset
doubleplot(df$Mx,df$My,df$Mz,"Raw calibration data")
values = ellipsoid_fit(df$Mx,df$My,df$Mz)
## Choose a scaling value according to what you want the sphere radius to measure
## Should be calculated based on animal size
scaling = 1
## Apply correction to the calibration data and plot it
calibrated = correct(df$Mx,df$My,df$Mz, values$offset,values$gain,values$eigenvec,scaling)
doubleplot(calibrated$MagX_c,calibrated$MagY_c,calibrated$MagZ_c,"Corrected calibration data")
calibrated %>%
mutate(n=row_number()) %>%
set_colnames(c("X","Y","Z","n")) %>%
pivot_longer(cols = c("X","Y","Z"), names_to = "axis",values_to = "mag") %>%
ggplot(aes(x = n, y = mag)) + geom_line() + facet_grid(axis~.)
df %>%
select(Mx,My,Mz, n) %>%
#mutate(n=row_number()) %>%
set_colnames(c("X","Y","Z","n")) %>%
subset(between(n,1300,4300) | between(n,7700,9500)) %>%
pivot_longer(cols = c("X","Y","Z"), names_to = "axis",values_to = "mag") %>%
ggplot(aes(x = n, y = mag)) + geom_line() + facet_grid(axis~.)
###################################################################################################
## Finally, read your real magnetometer data and apply the same correction to it.
## If errors are encountered, remember to subset by eliminating data produced by magnet swipes upon turning the logger on/off
test_dat<-read.csv("test_nov5_copy.csv", header=T)
test_dat
#If testing data is error free, continue to calibration
my.data=test_dat
my.data.calibrated = correct(my.data$accX,my.data$accY,my.data$accZ,
values$offset,values$gain,values$eigenvec,scaling)
my.data.calibrated
## You might want to plot your data before and after correction. Use doubleplot function to do so.
## Depending on the size of your dataframe this might take some time
doubleplot(my.data$accX,my.data$accY,my.data$accZ,"Raw data")
doubleplot(my.data.calibrated$MagX_c,my.data.calibrated$MagY_c,my.data.calibrated$MagZ_c,"Corrected data")
my.data %>%
select(accX,accY,accZ) %>%
mutate(n=row_number()) %>%
set_colnames(c("X","Y","Z","n")) %>%
#subset(between(n,1300,4300) | between(n,7700,9500)) %>%
subset(between(n,0,5000)) %>%
pivot_longer(cols = c("X","Y","Z"), names_to = "axis",values_to = "mag") %>%
ggplot(aes(x = n, y = mag)) + geom_line() + facet_grid(axis~.)
my.data.calibrated %>%
select(MagX_c,MagY_c,MagZ_c) %>%
mutate(n=row_number()) %>%
set_colnames(c("X","Y","Z","n")) %>%
#subset(between(n,1300,4300) | between(n,7700,9500)) %>%
subset(between(n,0,5000)) %>%
pivot_longer(cols = c("X","Y","Z"), names_to = "axis",values_to = "mag") %>%
ggplot(aes(x = n, y = mag)) + geom_line() + facet_grid(axis~.)
################################
#Add your time stamps back in after calibration
my.data.calibrated$t<-test_dat$Time
#check data
my.data.calibrated
time<-my.data.calibrated$t
time
#Read in time as character
t2<-as.character(time)
t2
my.data.calibrated$t2<-t2
#plot three axes of movement (x,y,and z)
whole_run<-ggplot(my.data.calibrated, aes(x=t)) +
geom_line(aes(y = MagX_c), color = "blue") +
geom_line(aes(y = MagY_c), color="green", linetype="twodash") +
geom_line(aes(y= MagZ_c), color="red")+scale_x_discrete(guide = guide_axis(check.overlap=T))
whole_run
#Save Calibrated Data
write.csv(my.data.calibrated, "calibrated_data_nov5.csv")
#Read in subset of specific activities based on timestamps
alert<-read.csv("Alert_timestamp1_nov5.csv", header=T)
alert
#convert to Data frame
alert<-as.data.frame(alert)
plot(alert)
# Plot specific activity via three axes of movement
alert_plot1<-ggplot(alert, aes(x=t)) + geom_line(aes(y = MagX_c), color = "blue")+
geom_line(aes(y = MagY_c), color="green") +
geom_line(aes(y= MagZ_c), color="red")+ theme_classic()+
scale_x_discrete(guide = guide_axis(check.overlap=T))
alert_plot1
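The `correct()` function used above comes from AGM_functions.txt, which is not included here. As a hedged sketch of the idea only (hard-iron offset removal plus per-axis gain scaling; the real function also applies the fitted eigenvector rotation, omitted here), with hypothetical `offset` and `gain` vectors:

```r
# Hypothetical sketch of the ellipsoid-to-sphere correction idea; the real
# correct() lives in AGM_functions.txt and additionally rotates by the fitted
# eigenvectors, which this sketch omits.
correct_sketch <- function(x, y, z, offset, gain, scaling = 1) {
  data.frame(
    MagX_c = (x - offset[1]) / gain[1] * scaling,
    MagY_c = (y - offset[2]) / gain[2] * scaling,
    MagZ_c = (z - offset[3]) / gain[3] * scaling
  )
}
# A point at the fitted ellipsoid center maps to the origin:
correct_sketch(3, -1, 2, offset = c(3, -1, 2), gain = c(2, 2, 2))
```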
# Source file: aijContext.R from repo Dordt-Statistics-Research/Bacterial-Code (branch master, no license detected)
# Methods for aijContexts
# Gives a vector of gene names; the same as the rownames of the expression data, and guaranteed to be in the same order
get.genenames <- function(context) UseMethod("get.genenames")
# Gives a vector of experiment names; the same as the colnames of the expression data, and guaranteed to be in the same order
get.expnames <- function(context) UseMethod("get.expnames")
# Two methods giving the expression data matrix. For both, each row is a gene and each column is an experiment, named appropriately.
# The difference is in how outliers are reported.
# Which data points are outliers is defined when the context is first formed, by the outlierHandling argument passed to get.aij.context()
# The first method reports these outliers as NAs in the expression data matrix
get.expression.data.outliersNA <- function(context) UseMethod("get.expression.data.outliersNA")
# The second method reports these outliers as Inf (if they were high outliers) or -Inf (if they were low outliers)
get.expression.data.outliersInf <- function(context) UseMethod("get.expression.data.outliersInf")
# Gives a list of character vectors, where each character vector represents one operon, and
# each element of each character vector is a gene name. All genes in the context will be present in exactly one operon.
get.operons <- function(context) UseMethod("get.operons")
# Gives a filename containing R code required for the proper use of the context
# (for aijContext, that's this file; it will presumably be a different file for subclasses)
get.sourcefile <- function(context) UseMethod("get.sourcefile")
# Gives the univariate or multivariate aijs assuming for the moment that all genes/operons are 2-component
get.uni.raw.aijs <- function(context) UseMethod("get.uni.raw.aijs")
get.multi.raw.aijs <- function(context) UseMethod("get.multi.raw.aijs")
# Gives a named vector with the fitted mu0 parameter for each gene/operon respectively
get.uni.mu0 <- function(context) UseMethod("get.uni.mu0")
get.multi.mu0 <- function(context) UseMethod("get.multi.mu0")
# Gives a named vector with the fitted mu1 parameter for each gene/operon respectively
get.uni.mu1 <- function(context) UseMethod("get.uni.mu1")
get.multi.mu1 <- function(context) UseMethod("get.multi.mu1")
# Gives a named vector with the fitted variance for each gene
get.uni.var <- function(context) UseMethod("get.uni.var")
# Gives a list containing the fitted covariance matrix for each operon
get.multi.var <- function(context) UseMethod("get.multi.var")
# Gives a named vector with the fitted mixing parameter for each gene/operon respectively
get.uni.pi <- function(context) UseMethod("get.uni.pi")
get.multi.pi <- function(context) UseMethod("get.multi.pi")
# Gives a named vector with a value for each gene/operon respectively
# Positive BICdiff indicates the 1-component model was a better fit
get.BICdiffs.by.gene <- function(context) UseMethod("get.BICdiffs.by.gene")
get.BICdiffs.by.operon <- function(context) UseMethod("get.BICdiffs.by.operon")
# Univariate. Gives gene names from the expression data in whatever context it is passed.
# BIC.bonus: A bonus applied to the 2-component BIC before comparing it to the 1-component BIC
# Higher values of BIC.bonus will give fewer 1-component genes and more 2-component genes.
get.1comp.genenames <- function(context, BIC.bonus) UseMethod("get.1comp.genenames")
# Multivariate. Gives indices in the operons list from whatever context it is passed.
# BIC.bonus: A bonus applied to the 2-component BIC before comparing it to the 1-component BIC
# Higher values of BIC.bonus will give fewer 1-component operons and more 2-component operons.
get.1comp.operons <- function(context, BIC.bonus) UseMethod("get.1comp.operons")
# Univariate. Gives the results of Mclust on each gene's expression data, assuming the gene is 1-component
get.1comp.fits <- function(context) UseMethod("get.1comp.fits")
# Gives the natural log of the Bayes Factor (calculated by gene/operon respectively)
# Higher means more likely to be 1-component
get.bayesfactors.by.gene <- function(context) UseMethod("get.bayesfactors.by.gene")
get.bayesfactors.by.operon <- function(context) UseMethod("get.bayesfactors.by.operon")
# Gives C-values calculated by gene/operon respectively (% confidences each gene/operon is 2-component)
# B is the NBF value at which C reaches 0
# The interpretation of A differs dramatically based on whether exponential=TRUE/FALSE
# when exponential=TRUE, A is a horizontal scaling coefficient
# when exponential=FALSE, A is the NBF value at which C reaches 1
# For get.Cvalues.by.gene or get.Cvalues.by.operon.oldmethod, A and B must be single values
# For get.Cvalues.by.operon, A and B can be either single values,
# or expressions, quoted using quote() or bquote(), involving 'p' (operon size)
get.Cvalues.by.gene <- function(context, A, B, exponential) UseMethod("get.Cvalues.by.gene")
get.Cvalues.by.operon <- function(context, A, B, exponential) UseMethod("get.Cvalues.by.operon")
get.Cvalues.by.operon.oldmethod <- function(context, A, B, exponential) UseMethod("get.Cvalues.by.operon.oldmethod")
# oldmethod uses the simple Ln(K)/N normalization which does not account for differences in operon size
# Gives a named vector with the C-value for each gene, but using the calculated-by-operon C-values
# Arguments are as to get.Cvalues.by.operon() or get.Cvalues.by.operon.oldmethod() respectively
get.operon.Cvalues.by.gene <- function(context, A, B, exponential) UseMethod("get.operon.Cvalues.by.gene")
get.operon.Cvalues.by.gene.oldmethod <- function(context, A, B, exponential) UseMethod("get.operon.Cvalues.by.gene.oldmethod")
###############################################################
# Wraps up expression data, operons, and the results of some initial calculations
# into an "aijContext" which can be passed to other methods to generate aijs
get.aij.context <- function(expression.data, operons=list(), outlierHandling="None", numGibbsSamplerRuns=500, numBurnedRuns=50, mu0.mean=8, mu1.mean=9, ...) {
# For explanations of the format of expression.data and operons, see get.aijs.all.methods() in Aij_generation.R
# outlierHandling: One of
# "None": Don't detect outliers or do anything special with them
# "Pct x" (where x is a number between 0 and 100 inclusive): Remove the top and bottom x% of expression values from each gene
# "Num x" (where x is a number between 0 and ncol(expression.data) inclusive): Remove the top and bottom x expression values from each gene
# "SD x" (where x is a nonnegative number): For each gene, remove all expression values outside x standard deviations from the mean
# "Wins x" (where x is a nonnegative number): For each gene, winsorize all expression values outside x standard deviations from the mean
# Importantly, outliers are always computed by gene. However, methods which operate on an operon level (notably MultiMM
# and its friends) require complete operons; so for those methods, any experiment which is dropped from any gene will be
# dropped from all genes in that operon, not just the gene(s) it was an outlier for.
# This may lead to much more data being dropped than it would seem at first glance.
# remove genes referenced in operons that are not in rownames(expression.data)
operons <- lapply(operons, function(operon) operon[operon %in% rownames(expression.data)])
# remove any resulting zero-length operons
operons <- operons[sapply(operons, length)!=0]
# add single-gene operons for any genes not already appearing in operons
genesInOperons <- unlist(operons)
operons <- c(operons, as.list(rownames(expression.data)[!(rownames(expression.data) %in% genesInOperons)]))
# check for duplicates
if(anyDuplicated(genesInOperons)) {
dups <- genesInOperons[duplicated(genesInOperons)]
stop(paste("The following gene(s) appear more than once in operons:", paste(dups, collapse=", ")))
}
expression.data <- handleOutliers(expression.data, outlierHandling)
multiAijResults <- get.aijResults(expression.data, operons, numGibbsSamplerRuns, numBurnedRuns, mu0.mean, mu1.mean, ..., multivariate=TRUE)
uniAijResults <- get.aijResults(expression.data, operons, numGibbsSamplerRuns, numBurnedRuns, mu0.mean, mu1.mean, ..., multivariate=FALSE)
expression.data <- expression.data[names(multiAijResults$mu0),] # make sure rows are in the same order
returnme <- list(multiAijResults=multiAijResults, uniAijResults=uniAijResults,
expression.data=expression.data, operons=operons)
class(returnme) <- c("aijContext", "list") # inherit from list
return(returnme)
}
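Each generic at the top of this file dispatches via `UseMethod()` to a method named `<generic>.<class>`, and the class vector set on `returnme` above is what makes that dispatch work. A self-contained miniature of the pattern (the generic and class names here are invented for illustration, not part of this file):

```r
# Miniature of the S3 dispatch pattern used throughout this file.
# `describe` and the "aijContextDemo" class are invented for illustration.
describe <- function(context) UseMethod("describe")
describe.aijContextDemo <- function(context) {
  paste("genes:", nrow(context$expression.data))
}
demo <- list(expression.data = matrix(0, nrow = 3, ncol = 2))
class(demo) <- c("aijContextDemo", "list")   # same idiom as in get.aij.context
describe(demo)                               # dispatches to describe.aijContextDemo
```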
### 'Public' S3 methods for class 'aijContext'
get.genenames.aijContext <- function(context) rownames(context$expression.data)
get.expnames.aijContext <- function(context) colnames(context$expression.data)
get.expression.data.outliersNA.aijContext <- function(context) {x <- context$expression.data; x[is.infinite(x)] <- NA; x}
get.expression.data.outliersInf.aijContext <- function(context) context$expression.data
get.operons.aijContext <- function(context) context$operons
get.sourcefile.aijContext <- function(context) "aijContext.R"
get.uni.raw.aijs.aijContext <- function(context) context$uniAijResults$aijs
get.multi.raw.aijs.aijContext <- function(context) context$multiAijResults$aijs
get.uni.mu0.aijContext <- function(context) context$uniAijResults$mu0
get.multi.mu0.aijContext <- function(context) context$multiAijResults$mu0
get.uni.mu1.aijContext <- function(context) context$uniAijResults$mu1
get.multi.mu1.aijContext <- function(context) context$multiAijResults$mu1
get.uni.var.aijContext <- function(context) context$uniAijResults$var
get.multi.var.aijContext <- function(context) context$multiAijResults$var
get.uni.pi.aijContext <- function(context) context$uniAijResults$pi
get.multi.pi.aijContext <- function(context) context$multiAijResults$pi
get.BICdiffs.by.gene.aijContext <- function(context) {
bics <- apply(get.expression.data.outliersNA(context), 1, function(genedata) mclustBIC(genedata[!is.na(genedata)], G=c(1:2), modelNames="E"))
return(bics[1,]-bics[2,])
}
get.BICdiffs.by.operon.aijContext <- function(context) {
bics <- sapply(get.operons(context), function(operon) {
operon.data <- get.expression.data.outliersNA(context)[unlist(operon),,drop=FALSE]
operon.data <- operon.data[,!apply(operon.data, 2, function(col) any(is.na(col))),drop=FALSE] # remove any experiment where any of this operon's genes were dropped due to outlier
return(mclustBIC(t(operon.data), G=c(1:2), modelNames=ifelse(length(operon)==1,"E","EEE")))
})
return(bics[1,]-bics[2,])
}
get.1comp.genenames.aijContext <- function(context, BIC.bonus) {
return(get.genenames(context)[get.BICdiffs.by.gene(context) > BIC.bonus])
}
get.1comp.operons.aijContext <- function(context, BIC.bonus) {
return(which(get.BICdiffs.by.operon(context) > BIC.bonus))
}
get.1comp.fits.aijContext <- function(context) {
apply(get.expression.data.outliersNA(context), 1, function(genedata) Mclust(genedata[!is.na(genedata)], G=1, modelNames="X"))
}
get.bayesfactors.by.gene.aijContext <- function(context) {
returnme <- apply(get.expression.data.outliersNA(context), 1, function(genedata) {
genedata <- genedata[!is.na(genedata)]
log.likelihood.1 <- Mclust(data=genedata, G=1, modelNames="E")$loglik
log.likelihood.2 <- Mclust(data=genedata, G=2, modelNames="E")$loglik
return(log.likelihood.1-log.likelihood.2)
})
names(returnme) <- get.genenames(context)
return(returnme)
}
get.bayesfactors.by.operon.aijContext <- function(context) {
sapply(get.operons(context), function(operon) {
operon.data <- get.expression.data.outliersNA(context)[unlist(operon),,drop=FALSE]
operon.data <- operon.data[,!apply(operon.data, 2, function(col) any(is.na(col))), drop=FALSE] # remove any experiment where any of this operon's genes were dropped due to outlier
log.likelihood.1 <- Mclust(data=t(operon.data), G=1, modelNames=ifelse(length(operon)==1,"E","EEE"))$loglik
log.likelihood.2 <- Mclust(data=t(operon.data), G=2, modelNames=ifelse(length(operon)==1,"E","EEE"))$loglik
return(log.likelihood.1-log.likelihood.2)
})
}
get.Cvalues.by.gene.aijContext <- function(context, A, B, exponential) {
if(exponential && A <= 0) stop("Exponential method requires A > 0")
if(!exponential && A >= B) stop("Piecewise-linear method requires A < B")
nbf <- get.bayesfactors.by.gene(context)/length(get.expnames(context))
if(exponential) C <- -exp((nbf-B)*A)+1
else C <- (nbf-B)/(A-B)
C <- pmax(0, pmin(1, C)) # bound C between 0 and 1 inclusive
names(C) <- names(nbf)
return(C)
}
get.Cvalues.by.operon.aijContext <- function(context, A, B, exponential) {
bf <- get.bayesfactors.by.operon(context)
operons <- get.operons(context)
N <- length(get.expnames(context))
C <- sapply(1:length(operons), function(opNum) {
p <- length(operons[[opNum]])
nbf <- -exp(-2*bf[opNum]/N)/p
actual.A <- eval(A, envir=list(p=length(operons[[opNum]]))) # Get the actual value for A, given this value of p
actual.B <- eval(B, envir=list(p=length(operons[[opNum]]))) # Get the actual value for B, given this value of p
if(exponential && actual.A <= 0) stop("Exponential method requires A > 0 for all positive integer values of p")
if(!exponential && actual.A >= actual.B) stop("Piecewise-linear method requires A < B for all positive integer values of p")
if(exponential) return(-exp((nbf-actual.B)*actual.A)+1)
else return((nbf-actual.B)/(actual.A-actual.B))
})
C <- pmax(0, pmin(1, C)) # bound C between 0 and 1 inclusive
names(C) <- names(bf)
return(C)
}
get.Cvalues.by.operon.oldmethod.aijContext <- function(context, A, B, exponential) {
if(exponential && A <= 0) stop("Exponential method requires A > 0")
if(!exponential && A >= B) stop("Piecewise-linear method requires A < B")
nbf <- get.bayesfactors.by.operon(context)/length(get.expnames(context))
if(exponential) C <- -exp((nbf-B)*A)+1
else C <- (nbf-B)/(A-B)
C <- pmax(0, pmin(1, C)) # bound C between 0 and 1 inclusive
names(C) <- names(nbf)
return(C)
}
get.operon.Cvalues.by.gene.aijContext <- function(context, A, B, exponential) {
get.values.by.gene(context, get.Cvalues.by.operon(context, A, B, exponential))
}
get.operon.Cvalues.by.gene.oldmethod.aijContext <- function(context, A, B, exponential) {
get.values.by.gene(context, get.Cvalues.by.operon.oldmethod(context, A, B, exponential))
}
### Private methods to be called only in this file ###
# Given a vector of values corresponding to operons, return a named vector of the same values where names are genes
# and each gene has the value that was assigned to its operon
get.values.by.gene <- function(context, values.by.operon) {
values.by.gene <- rep(NA, length(get.genenames(context)))
names(values.by.gene) <- get.genenames(context)
operons <- get.operons(context)
for(opNum in 1:length(operons)) values.by.gene[operons[[opNum]]] <- values.by.operon[opNum]
if(anyNA(values.by.gene, recursive=TRUE)) stop("NAs found in values.by.gene")
return(values.by.gene)
}
handleOutliers <- function(expression.data, method) {
# method: See comments on "outlierHandling" argument to get.aij.context; its options and behavior are identical
# "None": Don't detect outliers or do anything special with them
# "Pct x" (where x is a number between 0 and 100): Remove the top and bottom x% of expression values from each gene
# "Num x" (where x is a number): Remove the top and bottom x expression values from each gene
# "SD x" (where x is a number): For each gene, remove all expression values outside x standard deviations from the mean
# "Wins x" (where x is a number): For each gene, winsorize all expression values outside x standard deviations from the mean
# Importantly, outliers are always computed by gene. However, methods which operate on an operon level (notably MultiMM
# and its friends) require complete operons; so for those methods, any experiment which is dropped from any gene will be
# dropped from all genes in that operon, not just the gene(s) it was an outlier for.
# This may lead to much more data being dropped than it would seem at first glance.
if(method=="None") return(expression.data)
if(any(is.na(expression.data))) stop("expression.data cannot contain NAs prior to outlier handling")
first.space <- regexpr(" ", method)[1]
method.name <- substr(method, 1, first.space-1)
method.parameter <- as.numeric(substr(method, first.space+1, nchar(method)))
if(is.na(method.parameter)) stop(paste("Invalid method:", method))
handled <- t(apply(expression.data, 1, function(gene) {
if(method.name=="Pct") {
cutoffRank.low <- round((method.parameter/100)*length(gene))
cutoffRank.high <- round(((100-method.parameter)/100)*length(gene))+1 # e.g. if method.parameter is 100*(1/length(gene)), then we want to remove only the highest rank, so we want cutoffRank to be length(gene); but (100-method.parameter)/100 is 1-1/length(gene); multiplying by length(gene) gives length(gene)-1; but we actually desire length(gene), so we see why the +1 is necessary
ranks <- rank(gene)
gene[ranks<=cutoffRank.low] <- -Inf
gene[ranks>=cutoffRank.high] <- Inf
return(gene)
} else if(method.name=="Num") {
ranks <- rank(gene)
gene[ranks<=method.parameter] <- -Inf
gene[ranks>=length(gene)-method.parameter+1] <- Inf # e.g. if method.parameter is 1, this is gene[ranks>=length(gene)] <- Inf which turns only the top rank into Inf
return(gene)
} else if(method.name=="SD") {
gene.mean <- mean(gene)
gene.sd <- sd(gene)
gene[gene > gene.mean+method.parameter*gene.sd] <- Inf
gene[gene < gene.mean-method.parameter*gene.sd] <- -Inf
return(gene)
} else if(method.name=="Wins") {
gene.mean <- mean(gene)
gene.sd <- sd(gene)
upperBound <- gene.mean+method.parameter*gene.sd
lowerBound <- gene.mean-method.parameter*gene.sd
gene[gene>upperBound] <- upperBound
gene[gene<lowerBound] <- lowerBound
return(gene)
} else {
stop(paste("Unrecognized method:", method))
}
}))
rownames(handled) <- rownames(expression.data)
colnames(handled) <- colnames(expression.data)
return(handled)
}
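Two of the rules above can be illustrated in isolation on a tiny fabricated vector: "Num 1" (flag the single lowest/highest value with -Inf/Inf sentinels) and "Wins 1" (clamp values beyond one SD from the mean).

```r
# Toy expression values for one gene across 5 experiments (fabricated)
gene <- c(e1 = 2, e2 = 8, e3 = 9, e4 = 10, e5 = 25)

# "Num 1": sentinel-flag the single lowest and highest values,
# mirroring what handleOutliers() does before downstream code drops them
ranks <- rank(gene)
num1 <- gene
num1[ranks <= 1] <- -Inf
num1[ranks >= length(gene)] <- Inf

# "Wins 1": winsorize anything outside mean +/- 1 SD to the boundary
m <- mean(gene); s <- sd(gene)
wins1 <- pmin(pmax(gene, m - s), m + s)

print(num1)
print(wins1)  # e1 raised to m - s, e5 lowered to m + s
```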
# Gets aijs and associated fitting parameters, assuming for the moment that all genes are 2-component
get.aijResults <- function(expression.data, operons, numGibbsSamplerRuns=500, numBurnedRuns=50, mu0.mean=8, mu1.mean=9, ..., multivariate=TRUE) {
# expression.data: should be in its final form (post-outlier-handling) as would be returned by get.expression.data.outliersInf() later
cluster <- getCluster()
# Spread all needed data to the other processes
clusterExport(cluster, c("expression.data", "operons", "numGibbsSamplerRuns", "numBurnedRuns", "multivariate"), envir = environment())
clusterCall(cluster, source, "gibbs_sampler_multivariate.R")
resultsList <- parLapplyLB(cluster, operons, function(operon) {
if(multivariate) {
operon.data <- expression.data[operon,,drop=FALSE]
operon.data <- operon.data[,!apply(operon.data, 2, function(col) any(is.infinite(col))), drop=FALSE] # remove any experiment where any of this operon's genes were dropped due to outlier
if(anyNA(operon.data, recursive=T)) {
cat("operon.data was:\n"); print(operon.data)
stop("NA found in operon.data")
}
MGSresults <- find.posterior(operon.data, numGibbsSamplerRuns, mu0.mean, mu1.mean, ...)
operon.indicators <- rbind(MGSresults$ind, 0, 1) # The 'Greco correction': average in a 0 and a 1 in addition to the indicators. Suppose we're doing 100 iterations and they all come up 0, we really shouldn't report aij=0, we should report aij<0.01 instead. Same logic applies for other aijs; let's not overstate our confidence.
operon.aijs <- apply(operon.indicators[-(1:numBurnedRuns),],2,mean)
names(operon.aijs) <- colnames(operon.data)
aijs <- matrix(NA, nr=nrow(operon.data), nc=ncol(expression.data), dimnames=list(rownames(operon.data),colnames(expression.data)))
aijs[,colnames(operon.data)] <- t(as.matrix(sapply(operon, function(gene) operon.aijs))) # leaves the excluded experiments as NA for now
for(gene in operon) {
aijs[,expression.data[gene,]==-Inf] <- min(operon.aijs) # low outlier in any gene -> all genes in the operon get the lowest generated aij for the operon
aijs[,expression.data[gene,]==Inf] <- max(operon.aijs) # high outlier in any gene -> all genes in the operon get the highest generated aij for the operon
# Note: the above assumes that two different genes in the same operon won't have different-direction outliers in the same experiment. This seems a reasonable assumption to me.
}
if(anyNA(aijs, recursive=T)) {
cat("aijs was:\n"); print(aijs)
cat("operon.data was:\n"); print(operon.data)
cat("operon.aijs was:\n"); print(operon.aijs)
stop("NA found in aijs")
}
mu0mat <- do.call(rbind, MGSresults$mu0[-(1:numBurnedRuns)])
mu1mat <- do.call(rbind, MGSresults$mu1[-(1:numBurnedRuns)])
mu0 <- colMeans(mu0mat)
mu1 <- colMeans(mu1mat)
varsums <- matrix(0, nr=length(operon), nc=length(operon))
nonBurnedIndices <- (numBurnedRuns+1):(length(MGSresults$var))
lapply(MGSresults$var[-(1:numBurnedRuns)], function(covmat) {
varsums <<- varsums + covmat
})
var <- varsums / length(nonBurnedIndices)
rownames(var) <- operon
colnames(var) <- operon
pi <- rep(1-mean(as.vector(unlist(MGSresults$pi[-(1:numBurnedRuns)]))), length(operon)) # The 1- is because the Gibbs sampler's pi is the probability of being inactive
} else {
rows <- lapply(operon, function(gene) {
gene.data <- expression.data[gene,]
gene.data.nonOutliers <- gene.data[!is.infinite(gene.data)] # remove any experiments which were dropped for this gene due to outlier
if(anyNA(gene.data.nonOutliers, recursive=T)) {
cat("gene.data.nonOutliers was:\n"); print(gene.data.nonOutliers)
stop("NA found in gene.data.nonOutliers")
}
MGSresults <- find.posterior(gene.data.nonOutliers, numGibbsSamplerRuns, mu0.mean, mu1.mean, ...)
indicators <- rbind(MGSresults$ind, 0, 1) # The 'Greco correction': average in a 0 and a 1 in addition to the indicators. Suppose we're doing 100 iterations and they all come up 0, we really shouldn't report aij=0, we should report aij<0.01 instead. Same logic applies for other aijs; let's not overstate our confidence.
aijs <- rep(NA, length(gene.data))
names(aijs) <- names(gene.data)
generated.aijs <- apply(indicators[-(1:numBurnedRuns),],2,mean)
aijs[names(gene.data.nonOutliers)] <- generated.aijs # leaves the excluded experiments as NA for now
aijs[gene.data==-Inf] <- min(generated.aijs) # low outliers get the lowest generated aij for that gene
aijs[gene.data==Inf] <- max(generated.aijs) # high outliers get the highest generated aij for that gene
if(anyNA(aijs, recursive=T)) {
cat("aijs was:\n"); print(aijs)
cat("gene.data was:\n"); print(gene.data)
cat("generated.aijs was:\n"); print(generated.aijs)
stop("NA found in aijs")
}
mu0 <- mean(as.vector(unlist(MGSresults$mu0[-(1:numBurnedRuns)])))
mu1 <- mean(as.vector(unlist(MGSresults$mu1[-(1:numBurnedRuns)])))
var <- mean(as.vector(unlist(MGSresults$var[-(1:numBurnedRuns)])))
names(mu0) <- gene
names(mu1) <- gene
names(var) <- gene
pi <- 1-mean(as.vector(unlist(MGSresults$pi[-(1:numBurnedRuns)]))) # The 1- is because the Gibbs sampler's pi is the probability of being inactive
names(pi) <- gene
return(list(aijs=aijs, mu0=mu0, mu1=mu1, var=var, pi=pi))
})
aijs <- do.call(rbind, lapply(rows, function(row) row$aijs))
rownames(aijs) <- unlist(operon)
mu0 <- do.call(c, lapply(rows, function(row) row$mu0))
mu1 <- do.call(c, lapply(rows, function(row) row$mu1))
var <- do.call(c, lapply(rows, function(row) row$var))
pi <- do.call(c, lapply(rows, function(row) row$pi))
names(var) <- operon
}
names(mu0) <- operon
names(mu1) <- operon
names(pi) <- operon
return(list(aijs=aijs, mu0=mu0, mu1=mu1, var=var, pi=pi))
})
aijs <- do.call(rbind, lapply(resultsList, function(operon) operon$aijs))
mu0 <- do.call(c, lapply(resultsList, function(operon) operon$mu0))
mu1 <- do.call(c, lapply(resultsList, function(operon) operon$mu1))
if(multivariate) {
var <- do.call(list, lapply(resultsList, function(operon) operon$var))
} else {
var <- do.call(c, lapply(resultsList, function(operon) operon$var))
}
pi <- do.call(c, lapply(resultsList, function(operon) operon$pi))
# Clean up parallel resources
stopCluster(cluster)
if(anyNA(aijs, recursive=T)) stop("Ended up with NAs in the final aijs for get.aijResults")
return(list(aijs=aijs, mu0=mu0, mu1=mu1, var=var, pi=pi))
}
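The 'Greco correction' commented inside get.aijResults() (averaging a fixed 0 and 1 in with the sampled indicators) can be shown in isolation; the indicator draws below are fabricated.

```r
# Suppose all 100 post-burn-in Gibbs indicator draws for one experiment
# came up 0 (inactive).
draws <- rep(0, 100)

naive.aij     <- mean(draws)          # exactly 0 -- overstates confidence
corrected.aij <- mean(c(draws, 0, 1)) # 1/102, i.e. "aij < 0.01", never exactly 0

print(naive.aij)
print(corrected.aij)
```

The correction guarantees every reported aij lies strictly between 0 and 1, matching the comment's rationale about not overstating confidence.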
# This function will utilize the Rmpi package (if available) for multi-node cluster computation.
# Else no worries, still works with built-in R packages (e.g. "parallel") on a normal computer, just slower
getCluster <- function(useParallel=TRUE) {
# useParallel: if FALSE, don't try to do anything in parallel at all, even multicore on a normal computer
# (this is even slower than with useParallel=TRUE on a normal computer)
if(useParallel) {
library(parallel)
haveRMPI <- require("Rmpi", quietly=T)
if(haveRMPI) {
# if Rmpi is available, we will run on as many cluster nodes as we've been assigned, as long as that is more than one.
universeSize <- mpi.universe.size()
if(universeSize>1) {useMPI <- TRUE; numSlaves=universeSize-1} # one process is the master, so we can spawn (n-1) slaves
else if(universeSize==1) useMPI <- FALSE # We have only one node, so we won't use MPI for this run
else if(universeSize==0) {
# Using a version of MPI which doesn't support the mpi.universe.size() call
universeSize <- as.integer(Sys.getenv("NUMPROCS", unset = NA)) # user must set this environment variable to the total number of processes being used for this run
if(is.na(universeSize)) useMPI <- FALSE # user didn't set the environment variable
else if(universeSize>1) {useMPI <- TRUE; numSlaves=universeSize-1}
else useMPI <- FALSE # We have only one node, so we won't use MPI for this run
}
} else {
useMPI <- FALSE # Rmpi not available
}
if(useMPI) {
# Initialize parallel resources
cluster <- makeCluster(numSlaves, type="MPI", outfile="")
# Test that parallel resources are up and running - i.e. we
# actually are running on all the nodes we think we are.
getInfo <- function() { return(paste("Process", mpi.comm.rank(), "running on node", Sys.info()["nodename"])) }
cat(sep="", paste(clusterCall(cluster, getInfo), "\n"))
} else {
# Else, we will run using multicore computation on a single machine.
cat("Rmpi not found and/or only one node detected. Running on single node / single machine.\n")
cluster <- makePSOCKcluster(getOption("mc.cores", 2L))
}
return(cluster)
} else {
# Define stubs for the functions in library(parallel) that we'd actually use
clusterExport <<- function(...) NULL # do nothing
clusterCall <<- function(cluster, func, ...) func(...) # ignore the 'cluster' argument and execute locally
stopCluster <<- function(...) NULL # do nothing
parLapply <<- function(cluster, ...) lapply(...) # ignore the 'cluster' argument and execute locally
parLapplyLB <<- function(cluster, ...) lapply(...) # ignore the 'cluster' argument and execute locally
parSapply <<- function(cluster, ...) sapply(...) # ignore the 'cluster' argument and execute locally
parSapplyLB <<- function(cluster, ...) sapply(...) # ignore the 'cluster' argument and execute locally
return(NULL) # no cluster to return
}
}
#----------------------------------------------------Methylation_Test.R (repo jshkrob/BDSI-Project)----------------------------------------------------
# packages:
library(tidyverse)
library(caret)
library(rpart)
library(randomForest)
library(naivebayes)
library(MASS)
library(glmnet)
library(rrlda)
library("class")
# loading data
screening <- readRDS("C:/Users/yasha/Desktop/BDSI/Data Mining/screening.rds")
meth <- readRDS("C:/Users/yasha/Desktop/BDSI/Data Mining/methylation.rds")
# EDA:
screening %>%
group_by(DRUG_ID_lib) %>%
summarise(size = n(), effective = sum(EFFECT), ineffective = sum(!EFFECT))
# packages for missing data: MICE, Amerlia, missForest, Hmisc, mi
### For now, examine drug 1014:
effective.1014 <- subset(screening, DRUG_ID_lib == "1014")[,c("CL","EFFECT", "CELL_LINE_NAME")]
meth.1014 <- meth[as.character(effective.1014$CELL_LINE_NAME), ]
# remove columns with NA values
meth.1014_na <- meth.1014[ , !is.na(colSums(meth.1014))]
meth.1014 <- meth.1014_na
# feature selection:
get.p <- function(dat, labs){
# split the data into effective and ineffective
effect <- dat[labs]
ineffect <- dat[!labs]
# calculate the two sample means
effect.bar <- mean(effect)
ineffect.bar <- mean(ineffect)
# calculate the two sample variances
v.effect <- var(effect)
v.ineffect <- var(ineffect)
# calculate the sample sizes
n.effect <- length(effect)
n.ineffect <- length(ineffect)
# calculate the sd
s_pooled = ((n.effect-1)*v.effect + (n.ineffect-1)*v.ineffect)/(n.effect+n.ineffect-2) # standard pooled variance weights the sample variances by (n-1)
s <- sqrt(s_pooled*(1/n.effect + 1/n.ineffect))
#s <- sqrt((v.effect/n.effect) + (v.ineffect/n.ineffect))
# calculate the test statistic
T_abs <- abs((effect.bar - ineffect.bar)/s)
pval = 2*(1-pt(T_abs,df = n.effect+n.ineffect - 2))
return(pval)
}
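The pooled two-sample t-test that get.p() implements can be cross-checked against base R's t.test(var.equal = TRUE); the data below are fabricated, and the standard (n - 1)-weighted pooled variance is used.

```r
set.seed(1)
effect   <- rnorm(12, mean = 9)  # fabricated "effective" expression values
ineffect <- rnorm(15, mean = 8)  # fabricated "ineffective" values

n1 <- length(effect); n2 <- length(ineffect)
# Standard pooled variance weights the sample variances by (n - 1)
s2.pooled <- ((n1 - 1) * var(effect) + (n2 - 1) * var(ineffect)) / (n1 + n2 - 2)
se    <- sqrt(s2.pooled * (1 / n1 + 1 / n2))
T.abs <- abs((mean(effect) - mean(ineffect)) / se)
pval  <- 2 * (1 - pt(T.abs, df = n1 + n2 - 2))

# Should match the equal-variance two-sample t-test exactly
pval.ref <- t.test(effect, ineffect, var.equal = TRUE)$p.value
print(all.equal(pval, pval.ref))  # TRUE
```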
pvals <- data.frame(METHSITE = colnames(meth.1014))
pvals$p = apply(meth.1014,2,get.p, effective.1014$EFFECT)
pvals_sel = pvals[pvals$p<=.05,]
pvals_best <- pvals %>% arrange(p) %>% slice(1:10000)
meth.1014 <- meth.1014[,pvals_best$METHSITE]
# pvals = pvals[complete.cases(pvals), ]
# Error data frame: repeat N (10) times test error and average for each method
ERROR <- data.frame(nb = numeric(10), lda = numeric(10), lasso = numeric(10), ridge = numeric(10), elas = numeric(10))
for(i in 1:10) {
# choosing training and test
n_obs <- nrow(meth.1014)
train.ix = sample(1:n_obs, round(n_obs* 0.6))
ytrain <- effective.1014$EFFECT[train.ix]
xtrain <- meth.1014[train.ix,]
ytest <- effective.1014$EFFECT[-train.ix]
xtest <- meth.1014[-train.ix, ]
# naive bayes:
model_nb = naive_bayes(x = xtrain, y = as.factor(ytrain))
ytrainhat = predict(model_nb, xtrain)
ytesthat = predict(model_nb, xtest)
#train_error = 1 - mean(ytrain == ytrainhat)
testerror = 1 - mean(ytest == ytesthat)
ERROR[i,1] <- testerror
# lda:
model_lda = lda(xtrain, ytrain)
ytrainhat = predict(model_lda, xtrain)$class
ytesthat = predict(model_lda, xtest)$class
#trainingerror = 1 - mean(ytrain==ytrainhat)
testerror = 1 - mean(ytest==ytesthat)
ERROR[i,2] <- testerror
# # qda: using 50 features...
# model_qda = qda(xtrain[,1:50], ytrain)
# ytrainhat = predict(model_qda,xtrain[,1:50])$class
# ytesthat = predict(model_qda,xtest[,1:50])$class
# #trainingerror = 1 - mean(ytrain == ytrainhat)
# testerror = 1 - mean(ytest == ytesthat)
# ERROR[i,3] <- testerror
# # Logistic Regression: 25 features (10,000 is too much overfitting)
# train <- data.frame(ytrain, xtrain[1:25])
# model_glm <- glm(ytrain ~ ., family = binomial(link = "logit"), data = train, control = list(maxit = 100)) # increasing allowed iterations beyond default of 25...
# ytrainhat <- ifelse(model_glm$fitted.values >= 0.5, 1, 0)
# ytesthat <- ifelse(predict(model_glm, xtest) >= 0.5, TRUE, FALSE)
# #trainingerror = 1 - mean(ytrain==ytrainhat)
# testerror = 1 - mean(ytest==ytesthat)
# ERROR[i,4] <- testerror
# Lasso: cv.glmnet selects lambda; predict on the response (probability) scale
model_lasso <- cv.glmnet(x = as.matrix(xtrain), y = ytrain, family = "binomial", alpha = 1)
ytesthat <- ifelse(predict(model_lasso, as.matrix(xtest), s = "lambda.min", type = "response") >= 0.5, TRUE, FALSE)
testerror <- mean(ytesthat != ytest)
ERROR[i,3] <- testerror
# Ridge
model_ridge <- cv.glmnet(x = as.matrix(xtrain), y = ytrain, family = "binomial", alpha = 0)
ytesthat <- ifelse(predict(model_ridge, as.matrix(xtest), s = "lambda.min", type = "response") >= 0.5, TRUE, FALSE)
testerror <- mean(ytesthat != ytest)
ERROR[i,4] <- testerror
# Elastic Net:
model_elas <- cv.glmnet(x = as.matrix(xtrain), y = ytrain, family = "binomial", alpha = 0.5)
ytesthat <- ifelse(predict(model_elas, as.matrix(xtest), s = "lambda.min", type = "response") >= 0.5, TRUE, FALSE)
testerror <- mean(ytesthat != ytest)
ERROR[i,5] <- testerror
}
summary(ERROR)
#----------------------------------------------------PC Scores---------------------------------------
# first, we need to center the expression data by column means
meth <- meth.1014
center.meth = scale(meth, scale = F) #scale = F centers the data
# now use svd to perform PCA
pca <- svd(center.meth, nu = 56, nv = 0)
hist(log(pca$d)) ## plots log of the singular value distribution (variance of the singular components)
m = sum(cumsum(pca$d^2)/sum(pca$d^2)<.8) # number of components needed to explain 80% of the variation (cumulative variance ratio)
pc.scores <- as.data.frame(pca$u %*% diag(pca$d[1:56])) # PC scores are U %*% D (note: not D %*% U)
pc.scores <- cbind(effective.1014, pc.scores) # matches to labels of data
m
dim(pc.scores)
head(pc.scores)
pca_comp = prcomp(center.meth)
pc.scores <- pca_comp$x[,1:60]
pc.scores <- data.frame(pc.scores)
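The two routes to PC scores used in this section (U %*% D from the SVD of the column-centered matrix, and prcomp()$x) agree up to arbitrary per-component sign flips; a quick check on a fabricated matrix:

```r
set.seed(2)
X  <- matrix(rnorm(20 * 6), nrow = 20)  # fabricated data: 20 obs, 6 features
Xc <- scale(X, scale = FALSE)           # center columns only, as above

s <- svd(Xc)
scores.svd    <- s$u %*% diag(s$d)  # PC scores are U %*% D (not D %*% U)
scores.prcomp <- prcomp(X)$x        # prcomp centers internally

# Signs of individual components are arbitrary, so compare magnitudes
print(max(abs(abs(scores.svd) - abs(scores.prcomp))))  # ~ machine epsilon
```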
#----------------------------------------------------Cross Validation----------------------------------------------------------
# feature number to use: 50-500 via p-values
Q <- seq(50,500,50)
alpha <- seq(0,1,0.2)
n = nrow(meth.1014)
### Naive Bayes CV w/ feature number:
CV_NB <- data.frame(featurenum = numeric(10), error = numeric(10))
for (i in 1:10) {
# training & test:
train.ix = sample(1:n, round(n*0.60)) ## consider a approx. 60-40 split
methylation.train = meth.1014[train.ix,]
effective.train = effective.1014$EFFECT[train.ix]
methylation.test = meth.1014[-train.ix,]
effective.test = effective.1014$EFFECT[-train.ix]
best_q <- 0
CV_error <- vector(mode = "numeric")
for(q in Q) {
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(methylation.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
methylation.train2 = methylation.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
methylation.validate = methylation.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# Fit naive Bayes classifier on folds, compute CV error using validation set
model_nb <- naive_bayes(x = methylation.train2[, 1:q], y = effective.train2)
ytesthat <- predict(model_nb, methylation.validate[, 1:q])
cv_error[j] <- 1 - mean(ytesthat == effective.validate)
}
# CV error for each feature number
CV_error <- append(CV_error, mean(cv_error))
}
# Find q with the smallest error
idx <- which.min(CV_error)
best_q <- Q[idx]
# Fit training data on selected feature number
model_nb <- naive_bayes(x = methylation.train[, 1:best_q], y = effective.train)
ytesthat <- predict(model_nb, methylation.test[, 1:best_q]) # use the selected best_q, not the loop's final q
test_error <- 1 - mean(ytesthat == effective.test)
CV_NB[i, ] <- c(best_q, test_error)
}
### Lasso CV
CV_LASSO <- data.frame(lambda = numeric(10), featurenum = numeric(10), error = numeric(10))
l_min <- 1
for(i in 1:10) {
# alternate a few iterations so the selected feature number and lambda converge:
for(k in 1:3) {
# FIRST, create TEST and TRAINING sets:
n = nrow(meth.1014)
train.ix = sample(1:n, round(n*0.60)) ## consider a approx. 60-40 split
methylation.train = meth.1014[train.ix,]
effective.train = effective.1014$EFFECT[train.ix]
methylation.test = meth.1014[-train.ix,]
effective.test = effective.1014$EFFECT[-train.ix]
# SECOND, select best # of features (q) given lambda via CV error
best_q <- 0
best_error <- 1
for(q in Q) {
# CV step:
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(methylation.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
methylation.train2 = methylation.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
methylation.validate = methylation.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# train glm with regul. using q features w/ l_min choice & 19 training folds
lasso.fit <- glmnet(x = as.matrix(methylation.train2[,1:q]), y = effective.train2, family = "binomial", lambda = l_min)
test_predic <- ifelse(predict(lasso.fit, as.matrix(methylation.validate[,1:q]),type = "response") >= 0.5, TRUE, FALSE)
error_rate <- mean(test_predic != effective.validate)
cv_error[j] <- error_rate
}
CV_error <- mean(cv_error)
if (CV_error < best_error) {
best_q <- q
best_error <- CV_error
}
}
# THIRD, use feature number q to find best lambda for that q:
lasso.fit <- cv.glmnet(x = as.matrix(methylation.train[,1:best_q]), y = effective.train, family = "binomial", nfolds = 10, alpha = 1)
l_min <- lasso.fit$lambda.min
}
# use selected q and lambda on test set to estimate error:
lasso.fit <- glmnet(x = as.matrix(methylation.train[,1:best_q]),
y = effective.train,
family = "binomial",
lambda = l_min)
error <- mean((ifelse(predict(lasso.fit, as.matrix(methylation.test[,1:best_q]), type = "response") >= 0.5, TRUE, FALSE)) != effective.test)
CV_LASSO[i,1] <- l_min
CV_LASSO[i,2] <- best_q
CV_LASSO[i,3] <- error
}
### Ridge CV
CV_RIDGE <- data.frame(lambda = numeric(10), featurenum = numeric(10), error = numeric(10))
l_min <- 1
for(i in 1:10) {
# alternate a few iterations so the selected feature number and lambda converge:
for(k in 1:3) {
# FIRST, create TEST and TRAINING sets:
n = nrow(meth.1014)
train.ix = sample(1:n, round(n*0.60)) ## consider a approx. 60-40 split
methylation.train = meth.1014[train.ix,]
effective.train = effective.1014$EFFECT[train.ix]
methylation.test = meth.1014[-train.ix,]
effective.test = effective.1014$EFFECT[-train.ix]
# SECOND, select best # of features (q) given lambda via CV error
best_q <- 0
best_error <- 1
for(q in Q) {
# CV step:
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(methylation.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
methylation.train2 = methylation.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
methylation.validate = methylation.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# train glm with regul. using q features w/ l_min choice & 19 training folds
ridge.fit <- glmnet(x = as.matrix(methylation.train2[,1:q]), y = effective.train2, family = "binomial", lambda = l_min, alpha = 0)
test_predic <- ifelse(predict(ridge.fit, as.matrix(methylation.validate[,1:q]), type = "response") >= 0.5, TRUE, FALSE)
error_rate <- mean(test_predic != effective.validate)
cv_error[j] <- error_rate
}
CV_error <- mean(cv_error)
if (CV_error < best_error) {
best_q <- q
best_error <- CV_error
}
}
# THIRD, use feature number q to find best lambda for that q:
ridge.fit <- cv.glmnet(x = as.matrix(methylation.train[,1:best_q]), y = effective.train, family = "binomial", nfolds = 10, alpha = 0)
l_min <- ridge.fit$lambda.min
}
# use selected q and lambda on test set to estimate error (note alpha = 0 for ridge):
ridge.fit <- glmnet(x = as.matrix(methylation.train[,1:best_q]),
y = effective.train,
family = "binomial",
lambda = l_min, alpha = 0)
error <- mean((ifelse(predict(ridge.fit, as.matrix(methylation.test[,1:best_q]), type = "response") >= 0.5, TRUE, FALSE)) != effective.test)
CV_RIDGE[i,1] <- l_min
CV_RIDGE[i,2] <- best_q
CV_RIDGE[i,3] <- error
}
### Elastic CV
CV_ELAS <- data.frame(alpha = numeric(10), lambda = numeric(10), featurenum = numeric(10), error = numeric(10))
l_min <- 1
for(i in 1:10) {
# FIRST, create TEST and TRAINING sets:
n = nrow(meth.1014)
train.ix = sample(1:n, round(n*0.60)) ## consider a approx. 60-40 split
methylation.train = meth.1014[train.ix,]
effective.train = effective.1014$EFFECT[train.ix]
methylation.test = meth.1014[-train.ix,]
effective.test = effective.1014$EFFECT[-train.ix]
MCV <- vector(mode = "numeric")
LAMBDA <- vector(mode = "numeric")
# SECOND, select alpha and lambda based on CV:
for(a in alpha) {
# 10 fold cross validation for alpha = 0.0, 0.1, ..., 0.9, 1
model_elas <- cv.glmnet(x = as.matrix(methylation.train),
y = effective.train, family = "binomial",
type.measure = "class",
alpha = a)
# smallest cv error and corresponding lambda
min_lambda <- model_elas$lambda.min
idx <- which(model_elas$lambda == min_lambda)
cv_error <- model_elas$cvm[idx]
# cv_error <- model_elas$cvm[which.min(model_elas$cvm)]
# min_lambda <- model_elas$lambda[which.min(model_elas$cvm)]
MCV <- append(MCV, cv_error)
LAMBDA <- append(LAMBDA, min_lambda)
}
# find alpha that has lowest mcv error:
idx = which.min(MCV)
alpha_best <- alpha[idx]
lambda_best <- LAMBDA[idx]
# THIRD, select best # of features (q) given lambda and alpha via CV error
best_q <- 0
best_error <- 1
for(q in Q) {
# CV step:
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(methylation.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
methylation.train2 = methylation.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
methylation.validate = methylation.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# Train using q features w/ l_min choice & 19 training folds
elas.fit <- glmnet(x = as.matrix(methylation.train2[,1:q]), y = effective.train2,
family = "binomial", lambda = lambda_best, alpha = alpha_best)
test_predic <- ifelse(predict(elas.fit, as.matrix(methylation.validate[,1:q]),type = "response") >= 0.5, TRUE, FALSE)
error_rate <- mean(test_predic != effective.validate)
cv_error[j] <- error_rate
}
CV_error <- mean(cv_error)
if (CV_error < best_error) {
best_q <- q
best_error <- CV_error
}
}
# Use selected q, alpha, and lambda: fit on the training set and estimate error on the test set:
elas.fit <- glmnet(x = as.matrix(methylation.train[,1:best_q]), y = effective.train,
family = "binomial", lambda = lambda_best, alpha = alpha_best)
error <- mean((ifelse(predict(elas.fit, as.matrix(methylation.test[,1:best_q]), type = "response") >= 0.5, TRUE, FALSE)) != effective.test)
CV_ELAS[i,1] <- alpha_best
CV_ELAS[i,2] <- lambda_best
CV_ELAS[i,3] <- best_q
CV_ELAS[i,4] <- error
}
#----------------------------------------------------------------------Non-PCA Functions-------------------------------------------
## functions:
Q <- seq(50, 1000, 50)
cv_naive_bayes <- function(df, effect) {
### Naive Bayes CV w/ feature number:
CV_NB <- data.frame(featurenum = numeric(10), error = numeric(10))
for (i in 1:10) {
n <- nrow(df)
# training & test:
train.ix = sample(1:n, round(n*0.60)) ## consider a approx. 60-40 split
df.train = df[train.ix,]
effective.train = effect[train.ix]
df.test = df[-train.ix,]
effective.test = effect[-train.ix]
best_q <- 0
CV_error <- vector(mode = "numeric")
for(q in Q) {
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(df.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
df.train2 = df.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
df.validate = df.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# Fit naive Bayes classifier on folds, compute CV error using validation set
model_nb <- naive_bayes(x = df.train2[, 1:q], y = effective.train2)
ytesthat <- predict(model_nb, df.validate[, 1:q])
cv_error[j] <- 1 - mean(ytesthat == effective.validate)
}
# CV error for each feature number
CV_error <- append(CV_error, mean(cv_error))
}
# Find q with the smallest error
idx <- which.min(CV_error)
best_q <- Q[idx]
# Fit training data on selected feature number
model_nb <- naive_bayes(x = df.train[, 1:best_q], y = effective.train)
ytesthat <- predict(model_nb, df.test[, 1:best_q]) # use the selected best_q, not the loop's final q
test_error <- 1 - mean(ytesthat == effective.test)
CV_NB[i, ] <- c(best_q, test_error)
}
# Out of 10 results, use the one w/ lowest cv error
BEST_ERROR <- min(CV_NB$error)
BEST_Q <- CV_NB$featurenum[which.min(CV_NB$error)]
return(BEST_ERROR)
}
cv_lasso <- function(df, effect) {
### Lasso CV
CV_LASSO <- data.frame(lambda = numeric(10), featurenum = numeric(10), error = numeric(10))
l_min <- 1
for(i in 1:10) {
# alternate a couple of iterations so the selected feature number and lambda converge:
for(k in 1:2) {
# FIRST, create TEST and TRAINING sets:
n = nrow(df)
train.ix = sample(1:n, round(n*0.60)) ## consider a approx. 60-40 split
df.train = df[train.ix,]
effective.train = effect[train.ix]
df.test = df[-train.ix,]
effective.test = effect[-train.ix]
# SECOND, select best # of features (q) given lambda via CV error
best_q <- 0
best_error <- 1
for(q in Q) {
# CV step:
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(df.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
df.train2 = df.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
df.validate = df.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# train glm with regul. using q features w/ l_min choice & 19 training folds
lasso.fit <- glmnet(x = as.matrix(df.train2[,1:q]), y = effective.train2, family = "binomial", lambda = l_min)
test_predic <- ifelse(predict(lasso.fit, as.matrix(df.validate[,1:q]),type = "response") >= 0.5, TRUE, FALSE)
error_rate <- mean(test_predic != effective.validate)
cv_error[j] <- error_rate
}
CV_error <- mean(cv_error)
if (CV_error < best_error) {
best_q <- q
best_error <- CV_error
}
}
# THIRD, use feature number q to find best lambda for that q:
lasso.fit <- cv.glmnet(x = as.matrix(df.train[,1:best_q]), y = effective.train, family = "binomial", nfolds = 10, alpha = 1)
l_min <- lasso.fit$lambda.min
}
# use selected q and lambda on test set to estimate error:
lasso.fit <- glmnet(x = as.matrix(df.train[,1:best_q]),
y = effective.train,
family = "binomial",
lambda = l_min)
error <- mean((ifelse(predict(lasso.fit, as.matrix(df.test[,1:best_q]), type = "response") >= 0.5, TRUE, FALSE)) != effective.test)
CV_LASSO[i,1] <- l_min
CV_LASSO[i,2] <- best_q
CV_LASSO[i,3] <- error
}
# return smallest error found among 10 examples:
BEST_ERROR <- min(CV_LASSO$error)
print(CV_LASSO)
return(BEST_ERROR)
}
cv_ridge <- function(df, effect) {
### Ridge CV
CV_RIDGE <- data.frame(lambda = numeric(10), featurenum = numeric(10), error = numeric(10))
l_min <- 1
for(i in 1:10) {
# alternate a few times so the feature count and lambda converge:
for(k in 1:2) {
# FIRST, create TEST and TRAINING sets:
n = nrow(df)
train.ix = sample(1:n, round(n*0.60)) ## approx. 60-40 split
df.train = df[train.ix,]
effective.train = effect[train.ix]
df.test = df[-train.ix,]
effective.test = effect[-train.ix]
# SECOND, select best # of features (q) given lambda via CV error
best_q <- 0
best_error <- 1
for(q in Q) {
# CV step:
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(df.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
df.train2 = df.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
df.validate = df.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# train glm with regul. using q features w/ l_min choice & 19 training folds
lasso.fit <- glmnet(x = as.matrix(df.train2[,1:q]), y = effective.train2, family = "binomial", lambda = l_min, alpha = 0)
test_predic <- ifelse(predict(lasso.fit, as.matrix(df.validate[,1:q]),type = "response") >= 0.5, TRUE, FALSE)
error_rate <- mean(test_predic != effective.validate)
cv_error[j] <- error_rate
}
CV_error <- mean(cv_error)
if (CV_error < best_error) {
best_q <- q
best_error <- CV_error
}
}
# THIRD, use feature number best_q to find the best lambda for that q:
lasso.fit <- cv.glmnet(x = as.matrix(df.train[,1:best_q]), y = effective.train, family = "binomial", nfolds = 10, alpha = 0)
l_min <- lasso.fit$lambda.min
}
# use selected q and lambda on the test set to estimate error
# (alpha = 0 so the final fit is ridge, matching the CV above):
lasso.fit <- glmnet(x = as.matrix(df.train[,1:best_q]),
y = effective.train,
family = "binomial",
lambda = l_min,
alpha = 0)
error <- mean((ifelse(predict(lasso.fit, as.matrix(df.test[,1:best_q]), type = "response") >= 0.5, TRUE, FALSE)) != effective.test)
CV_RIDGE[i,1] <- l_min
CV_RIDGE[i,2] <- best_q
CV_RIDGE[i,3] <- error
}
# return smallest error found among 10 examples:
BEST_ERROR <- min(CV_RIDGE$error)
print(CV_RIDGE)
return(BEST_ERROR)
}
cv_elas <- function(df, effect) {
### Elastic CV
CV_ELAS <- data.frame(alpha = numeric(10), lambda = numeric(10), featurenum = numeric(10), error = numeric(10))
l_min <- 1
for(i in 1:10) {
# FIRST, create TEST and TRAINING sets:
n = nrow(df)
train.ix = sample(1:n, round(n*0.60)) ## approx. 60-40 split
df.train = df[train.ix,]
effective.train = effect[train.ix]
df.test = df[-train.ix,]
effective.test = effect[-train.ix]
MCV <- vector(mode = "numeric")
LAMBDA <- vector(mode = "numeric")
# SECOND, select alpha and lambda based on CV:
for(a in alpha) {
# 10 fold cross validation for alpha = 0.0, 0.1, ..., 0.9, 1
model_elas <- cv.glmnet(x = as.matrix(df.train),
y = effective.train, family = "binomial",
type.measure = "class",
alpha = a)
# smallest cv error and corresponding lambda
min_lambda <- model_elas$lambda.min
idx <- which(model_elas$lambda == min_lambda)
cv_error <- model_elas$cvm[idx]
# cv_error <- model_elas$cvm[which.min(model_elas$cvm)]
# min_lambda <- model_elas$lambda[which.min(model_elas$cvm)]
MCV <- append(MCV, cv_error)
LAMBDA <- append(LAMBDA, min_lambda)
}
# find alpha that has lowest mcv error:
idx = which.min(MCV)
alpha_best <- alpha[idx]
lambda_best <- LAMBDA[idx]
# THIRD, select best # of features (q) given lambda and alpha via CV error
best_q <- 0
best_error <- 1
for(q in Q) {
# CV step:
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(df.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
df.train2 = df.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
df.validate = df.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# Train using q features w/ l_min choice & 19 training folds
elas.fit <- glmnet(x = as.matrix(df.train2[,1:q]), y = effective.train2,
family = "binomial", lambda = lambda_best, alpha = alpha_best)
test_predic <- ifelse(predict(elas.fit, as.matrix(df.validate[,1:q]),type = "response") >= 0.5, TRUE, FALSE)
error_rate <- mean(test_predic != effective.validate)
cv_error[j] <- error_rate
}
CV_error <- mean(cv_error)
if (CV_error < best_error) {
best_q <- q
best_error <- CV_error
}
}
# Use selected q, alpha, and lambda on the test set to estimate error:
elas.fit <- glmnet(x = as.matrix(df.train[,1:best_q]), y = effective.train,
family = "binomial", lambda = lambda_best, alpha = alpha_best)
error <- mean((ifelse(predict(elas.fit, as.matrix(df.test[,1:best_q]), type = "response") >= 0.5, TRUE, FALSE)) != effective.test)
CV_ELAS[i,1] <- alpha_best
CV_ELAS[i,2] <- lambda_best
CV_ELAS[i,3] <- best_q
CV_ELAS[i,4] <- error
}
BEST_ERROR <- min(CV_ELAS$error)
print(CV_ELAS)
return(BEST_ERROR)
}
cv_naive_bayes(meth.1014, effective.1014$EFFECT)
cv_lasso(meth.1014, effective.1014$EFFECT)
cv_ridge(meth.1014, effective.1014$EFFECT)
cv_elas(meth.1014, effective.1014$EFFECT)
pc.scores <- readRDS("C:/Users/yasha/Desktop/BDSI/Data Mining/Project/Methylation/meth_pcscores.rds")
Q <- seq(5,60,5)
cv_naive_bayes(pc.scores, effective.1014$EFFECT)
cv_lasso(pc.scores, effective.1014$EFFECT)
cv_ridge(pc.scores, effective.1014$EFFECT)
cv_elas(pc.scores, effective.1014$EFFECT)
#-------------------------------------------------- Using PCA/PCR --------------------------------------------------
# These functions do not search for an optimal number of features; instead,
# they use the PC scores directly as regression inputs.
# Returns: CV error (TRAINING data) and test error (TEST data)
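The comment above assumes PC scores are computed up front. A minimal, self-contained sketch of how such scores could be produced with base R's `prcomp` (the random matrix here is a stand-in for the real methylation data, which is an assumption of this example):

```r
set.seed(1)
X <- matrix(rnorm(100 * 20), nrow = 100)       # stand-in: samples-by-features matrix
pca <- prcomp(X, center = TRUE, scale. = TRUE) # center/scale, then rotate
pc.scores <- as.data.frame(pca$x[, 1:10])      # keep the first 10 PC scores
```

A data frame like `pc.scores` can then be passed to the CV helpers below in place of the raw feature matrix.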
cv.naive_bayes <- function(df, effect) {
n = nrow(df)
# training & test:
train.ix = sample(1:n, round(n*0.60)) ## approx. 60-40 split
df.train = df[train.ix,]
effective.train = effect[train.ix]
df.test = df[-train.ix,]
effective.test = effect[-train.ix]
# cross validation error
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(df.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
df.train2 = df.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
df.validate = df.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# Fit naive Bayes classifier on folds, compute CV error using validation set
model_nb <- naive_bayes(x = df.train2, y = as.factor(effective.train2))
ytesthat <- predict(model_nb, df.validate)
cv_error[j] <- 1 - mean(ytesthat == effective.validate)
}
# mean CV error across folds
CV_error <- mean(cv_error)
# fit training data and calculate TEST error
model_nb <- naive_bayes(x = df.train, y = as.factor(effective.train))
ytesthat <- predict(model_nb, df.test)
test_error <- 1 - mean(ytesthat == effective.test)
# Return CV and test error for df
return(c(CV_error,test_error))
}
cv.lasso <- function(df, effect) {
CV_LASSO <- data.frame(lambda = numeric(10), error = numeric(10))
l_min <- 1
# 10 estimates of error
for(i in 1:10) {
# FIRST, create TEST and TRAINING sets:
n = nrow(df)
train.ix = sample(1:n, round(n*0.60)) ## approx. 60-40 split
df.train = df[train.ix,]
effective.train = effect[train.ix]
df.test = df[-train.ix,]
effective.test = effect[-train.ix]
# Find the best lambda via 10-fold CV:
lasso.fit <- cv.glmnet(x = as.matrix(df.train), y = effective.train, family = "binomial", nfolds = 10, alpha = 1)
l_min <- lasso.fit$lambda.min
# Use the selected lambda on the test set to estimate error:
lasso.fit <- glmnet(x = as.matrix(df.train), y = effective.train, family = "binomial", lambda = l_min)
error <- mean((ifelse(predict(lasso.fit, as.matrix(df.test), type = "response") >= 0.5, TRUE, FALSE)) != effective.test)
CV_LASSO[i,1] <- l_min
CV_LASSO[i,2] <- error
}
# return smallest error found among 10 examples:
BEST_ERROR <- min(CV_LASSO$error)
print(CV_LASSO)
return(BEST_ERROR)
}
cv.ridge <- function(df, effect) {
### Ridge CV
CV_RIDGE <- data.frame(lambda = numeric(10), featurenum = numeric(10), error = numeric(10))
l_min <- 1
for(i in 1:10) {
# alternate a few times so the feature count and lambda converge:
for(k in 1:3) {
# FIRST, create TEST and TRAINING sets (from the function arguments,
# not the meth.1014 global):
n = nrow(df)
train.ix = sample(1:n, round(n*0.60)) ## approx. 60-40 split
methylation.train = df[train.ix,]
effective.train = effect[train.ix]
methylation.test = df[-train.ix,]
effective.test = effect[-train.ix]
# SECOND, select best # of features (q) given lambda via CV error
best_q <- 0
best_error <- 1
for(q in Q) {
# CV step:
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(methylation.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
methylation.train2 = methylation.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
methylation.validate = methylation.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# train glm with regul. using q features w/ l_min choice & 19 training folds
lasso.fit <- glmnet(x = as.matrix(methylation.train2[,1:q]), y = effective.train2, family = "binomial", lambda = l_min, alpha = 0)
test_predic <- ifelse(predict(lasso.fit, as.matrix(methylation.validate[,1:q]),type = "response") >= 0.5, TRUE, FALSE)
error_rate <- mean(test_predic != effective.validate)
cv_error[j] <- error_rate
}
CV_error <- mean(cv_error)
if (CV_error < best_error) {
best_q <- q
best_error <- CV_error
}
}
# THIRD, use feature number best_q to find the best lambda for that q
# (alpha = 0 for ridge):
lasso.fit <- cv.glmnet(x = as.matrix(methylation.train[,1:best_q]), y = effective.train, family = "binomial", nfolds = 10, alpha = 0)
l_min <- lasso.fit$lambda.min
}
# use selected q and lambda on the test set to estimate error:
lasso.fit <- glmnet(x = as.matrix(methylation.train[,1:best_q]),
y = effective.train,
family = "binomial",
lambda = l_min,
alpha = 0)
error <- mean((ifelse(predict(lasso.fit, as.matrix(methylation.test[,1:best_q]), type = "response") >= 0.5, TRUE, FALSE)) != effective.test)
CV_RIDGE[i,1] <- l_min
CV_RIDGE[i,2] <- best_q
CV_RIDGE[i,3] <- error
}
# return smallest error found among 10 examples:
BEST_ERROR <- min(CV_RIDGE$error)
return(BEST_ERROR)
}
cv.elastic <- function(df, effect) {
CV_ELAS <- data.frame(alpha = numeric(10), lambda = numeric(10), featurenum = numeric(10), error = numeric(10))
l_min <- 1
for(i in 1:10) {
# FIRST, create TEST and TRAINING sets (from the function arguments):
n = nrow(df)
train.ix = sample(1:n, round(n*0.60)) ## approx. 60-40 split
df.train = df[train.ix,]
effective.train = effect[train.ix]
df.test = df[-train.ix,]
effective.test = effect[-train.ix]
MCV <- vector(mode = "numeric")
LAMBDA <- vector(mode = "numeric")
# SECOND, select alpha and lambda based on CV:
for(a in alpha) {
# 10 fold cross validation for alpha = 0.0, 0.1, ..., 0.9, 1
model_elas <- cv.glmnet(x = as.matrix(df.train),
y = effective.train, family = "binomial",
type.measure = "class",
alpha = a)
# smallest cv error and corresponding lambda
min_lambda <- model_elas$lambda.min
idx <- which(model_elas$lambda == min_lambda)
cv_error <- model_elas$cvm[idx]
# cv_error <- model_elas$cvm[which.min(model_elas$cvm)]
# min_lambda <- model_elas$lambda[which.min(model_elas$cvm)]
MCV <- append(MCV, cv_error)
LAMBDA <- append(LAMBDA, min_lambda)
}
# find alpha that has lowest mcv error:
idx = which.min(MCV)
alpha_best <- alpha[idx]
lambda_best <- LAMBDA[idx]
# THIRD, select best # of features (q) given lambda and alpha via CV error
best_q <- 0
best_error <- 1
for(q in Q) {
# CV step:
cv_error <- vector(mode = "numeric", length = 10)
for(j in 1:10) {
# splitting data into k folds:
n2 <- nrow(df.train)
train2.ix <- sample(1:n2, (19/20)*n2)
# 1 cv fold, 19 training folds:
df.train2 = df.train[train2.ix,]
effective.train2 = effective.train[train2.ix]
df.validate = df.train[-train2.ix,]
effective.validate = effective.train[-train2.ix]
# Train using q features w/ l_min choice & 19 training folds
elas.fit <- glmnet(x = as.matrix(df.train2[,1:q]), y = effective.train2,
family = "binomial", lambda = lambda_best, alpha = alpha_best)
test_predic <- ifelse(predict(elas.fit, as.matrix(df.validate[,1:q]),type = "response") >= 0.5, TRUE, FALSE)
error_rate <- mean(test_predic != effective.validate)
cv_error[j] <- error_rate
}
CV_error <- mean(cv_error)
if (CV_error < best_error) {
best_q <- q
best_error <- CV_error
}
}
# Use selected q, alpha, and lambda on the test set to estimate error:
elas.fit <- glmnet(x = as.matrix(df.train[,1:best_q]), y = effective.train,
family = "binomial", lambda = lambda_best, alpha = alpha_best)
error <- mean((ifelse(predict(elas.fit, as.matrix(df.test[,1:best_q]), type = "response") >= 0.5, TRUE, FALSE)) != effective.test)
CV_ELAS[i,1] <- alpha_best
CV_ELAS[i,2] <- lambda_best
CV_ELAS[i,3] <- best_q
CV_ELAS[i,4] <- error
}
BEST_ERROR <- min(CV_ELAS$error)
return(BEST_ERROR)
}
#############################################################################################
# PC Scores:
#####
# ==== File: plot2.R — repo: jgoetsc2/Course-Project-1 (no license) ====
library(dplyr)
# filename <- "EPC.txt"
# url<-"https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2Fhousehold_power_consumption.zip"
# f <- file.path(getwd(), filename)
# download.file(url, f)
# unzip(filename)
EPC <- read.table('household_power_consumption.txt',header = TRUE,sep = ";",na.strings = "?")
#Format date to Type date
EPC$Date <- as.Date(EPC$Date,"%d/%m/%Y")
head(EPC)
# filter data from 2/1/2007 to 2/2/2007
EPC <- EPC %>% filter(Date >= as.Date("2007-2-1") & Date<= as.Date("2007-2-2"))
head(EPC)
EPC$Global_active_power <- as.numeric(EPC$Global_active_power)
EPC$Global_reactive_power <- as.numeric(EPC$Global_reactive_power)
EPC$Voltage <- as.numeric(EPC$Voltage)
EPC$Sub_metering_1 <- as.numeric(EPC$Sub_metering_1)
EPC$Sub_metering_2 <- as.numeric(EPC$Sub_metering_2)
EPC$Sub_metering_3 <- as.numeric(EPC$Sub_metering_3)
#Plot 2
#create DateTime column
EPC$DateTime <-paste(EPC$Date,EPC$Time)
EPC$DateTime <- as.POSIXct(EPC$DateTime)
plot(EPC$Global_active_power~EPC$DateTime, type="l", ylab="Global Active Power (kilowatts)", xlab="")
#Saving file
dev.copy(png,"plot2.png", width=480, height=480)
dev.off()
# ==== File: R/parse_metadata.R — repo: CharlesJB/rnaseq (no license) ====
#' Parse metadata LO
#'
#' Parse metadata and generate a draft for pca_info, volcano_info, report_info
#' files.
#'
#' @param metadata (dataframe)
#' @param pca_subset (character) column of metadata, filter for automated PCA (ex.: Cell type)
#' @param pca_batch_metadata (character) extra columns for pca coloring (biological or batch effects)
#' @param extra_count_matrix (character)
#' @param report_title (character)
#'
#' @return a list of data.frame
#'
#' @importFrom dplyr left_join mutate case_when select pull
#' @importFrom purrr imap_dfr
#'
parse_metadata_for_LO_report <- function(metadata,
pca_subset = "Cell",
pca_batch_metadata = c("Cell", "Compound"),
extra_count_matrix = NULL,
report_title = "title"){
# TODO: export and documentation
stopifnot(is(metadata, "data.frame"))
stopifnot(all(c("ID", "Compound", "Cell", "Dose", "Time", "Vehicule") %in%
colnames(metadata)))
stopifnot(pca_subset %in% colnames(metadata))
stopifnot(pca_batch_metadata %in% colnames(metadata))
checkmate::assert_character(report_title)
## start report info
########################
counter_report <- 1
report_info_df = list()
report_info_df[[counter_report]] <- list(id = counter_report,
add = "text",
value = paste0(
"---\ntitle: '", report_title, "'\ndate: \"`r Sys.Date()`\"\noutput: html_document\n---\n\n",
"```{r, echo = FALSE}\n knitr::opts_chunk$set( echo = FALSE, warning = FALSE, message = FALSE, fig.align = 'center')\n```"))
counter_report <- counter_report + 1
report_info_df[[counter_report]] <- list(id = counter_report,
add = "text",
value = "# PCA")
## PCA
########################
# report section: General PCA
counter_report <- counter_report + 1
report_info_df[[counter_report]] <- list(id = counter_report,
add = "text",
value = "## General")
# general PCA
# colored by metadata
counter_obj <- 0
pca_info_df <- list()
for(i in pca_batch_metadata){
# report section: General PCA
counter_report <- counter_report + 1
report_info_df[[counter_report]] <- list(id = counter_report,
add = "text",
value = paste0("### General - ", i))
counter_obj <- counter_obj + 1
id_pca <- paste(c("pca",counter_obj,i), collapse = '_')
pca_info_df[[id_pca]] <- list(id_plot=id_pca, id_metadata = "ID",
group = NA, group_val = NA,
use_normalisation = "none",
min_counts = 5,
size = 3,
shape = NA,
color = i,
title = paste0("General ",i),
legend.position = "right",
legend.box = "vertical",
show_names = TRUE)
# report add figure
counter_report <- counter_report + 1
report_info_df[[counter_report]] <- list(id = counter_report,
add = "plot",
value = id_pca)
}
sub <- unique(metadata[,pca_subset, drop = TRUE])
if(length(sub)>1){ # if ==1 -> same as general
for(i in sub){
# report section:
counter_report <- counter_report + 1
report_info_df[[counter_report]] <- list(id = counter_report,
add = "text",
value = paste0("## ", i))
for(j in pca_batch_metadata){
# report section:
counter_report <- counter_report + 1
report_info_df[[counter_report]] <- list(id = counter_report,
add = "text",
value = paste0("### ", j))
counter_obj <- counter_obj + 1
id_pca <- paste(c("pca",counter_obj, pca_subset, i, j), collapse = "_")
pca_info_df[[id_pca]] <- list(id_plot=id_pca, id_metadata = "ID",
group = pca_subset, group_val = i,
use_normalisation = "none",
min_counts = 5,
size = 3,
shape = NA,
color = j,
title = paste(pca_subset, i, j),
legend.position = "right",
legend.box = "vertical",
show_names = TRUE)
# report add figure
counter_report <- counter_report + 1
report_info_df[[counter_report]] <- list(id = counter_report,
add = "plot",
value = id_pca)
}
}
}
pca_info_df <- purrr::imap_dfr(pca_info_df, ~.x)
# report section: Volcano
counter_report <- counter_report + 1
report_info_df[[counter_report]] <- list(id = counter_report,
add = "text",
value = "# Differential Expression")
## DE and volcano
########################
de_info_df <- list()
volcano_info_df <- list()
counter_obj <- 0
metadata_design <- metadata %>%
dplyr::select(Cell, Compound, Time, Dose, Vehicule) %>%
unique
design_df <- list("sample" = metadata$ID)
for(line in 1:nrow(metadata_design)){
# DE: Compound vs Control (Vehicule), split by Cell, Time, Dose
current_contrast_1 <- metadata_design[line, "Compound", drop = TRUE]
current_contrast_2 <- metadata_design[line, "Vehicule", drop = TRUE]
if(current_contrast_1 == current_contrast_2){next}
# increment counter
counter_obj <- counter_obj + 1
# DE
id_de <- paste(c("de",counter_obj, current_contrast_1, current_contrast_2), collapse = "_")
de_info_df[[id_de]] <- list(id_de = id_de,
group = id_de, # always Compound vs Control
contrast_1 = current_contrast_1,
contrast_2 = current_contrast_2,
formula = paste("~", id_de),
filter = 2)
if(!is.null(extra_count_matrix)){
de_info_df[[id_de]][["count_matrix"]] <- extra_count_matrix
}
samples_contrast_1 <- metadata_design[line,] %>%
dplyr::left_join(metadata, by = c("Cell", "Compound", "Time", "Dose", "Vehicule")) %>%
dplyr::pull(ID)
samples_contrast_2 <- metadata_design[line,] %>%
dplyr::mutate(Compound = metadata_design[line, "Vehicule", drop = TRUE]) %>%
dplyr::mutate(Dose = "Control") %>%
dplyr::left_join(metadata, by = c("Cell", "Compound", "Time", "Dose", "Vehicule")) %>%
dplyr::pull(ID)
design_df[[id_de]] <- dplyr::case_when(
design_df$sample %in% samples_contrast_1 ~ current_contrast_1,
design_df$sample %in% samples_contrast_2 ~ current_contrast_2,
TRUE ~ "-")
# volcano
id_volcano <- paste(c("volcano",counter_obj, current_contrast_1, current_contrast_2), collapse = "_")
volcano_info_df[[id_volcano]] <- list(id_plot = id_volcano,
id_de = id_de,
y_axis = "padj",
p_threshold = 0.05,
fc_threshold = 1.5,
show_signif_counts = TRUE,
show_signif_lines = "vertical",
show_signif_color = TRUE,
col_up = "#E73426",
col_down = "#0020F5",
size = 3)
# report add figure: volcano
counter_report <- counter_report + 1
report_info_df[[counter_report]] <- list(id = counter_report,
add = "plot",
value = id_volcano)
}
de_info_df <- purrr::imap_dfr(de_info_df, ~.x)
design_df <- as.data.frame(design_df)
volcano_info_df <- purrr::imap_dfr(volcano_info_df, ~.x)
report_info_df <- purrr::imap_dfr(report_info_df, ~.x)
return(list(pca_info = pca_info_df,
de_info = de_info_df,
design_info = design_df,
volcano_info = volcano_info_df,
report_info = report_info_df))
}
#' wrapper report LO
#'
#' From metadata and txi, this wrapper parses the metadata and performs all the PCA and volcano (DE) analyses.
#' It produces an Rmd file which can be rendered.
#'
#'
#' @param metadata (data.frame) sample sheet with at least the columns: ID, Compound, Time, Vehicule, Dose, Cell
#' @param txi (list) txi object
#' @param outdir (character) output directory for plots, csv files, r objects and the Rmd report
#' @param pca_subset (character) column of metadata, filter for automated PCA (ex.: Cell type)
#' @param pca_batch_metadata (character) extra columns for pca coloring (biological or batch effects)
#' @param do_pca (logical) do PCA and include the results in the report (default = TRUE)
#' @param do_DE (logical) do DE and include the results in the report (default = TRUE)
#' @param produce_rmd_report (logical) write the Rmd report file (default = TRUE)
#' @param render_repport (logical) render the Rmd report
#' @param custom_parsed_metadata (list of data.frame) when automatic parsing can be problematic
#' @param report_title (character), title in the report
#'
#' @return a list containing pca, volcano and report
#'
#' @importFrom checkmate checkPathForOutput assert_logical assert_character
#' @importFrom dplyr mutate filter
#' @importFrom stringr str_detect
#' @importFrom rmarkdown render
#'
#' @export
wrapper_report_LO <- function(metadata, txi, outdir, pca_subset, pca_batch_metadata,
do_pca = TRUE, do_DE = TRUE,
produce_rmd_report = TRUE,
render_repport = TRUE,
extra_count_matrix = NULL,
volcano_split = NULL,
volcano_facet_x = NULL,
volcano_facet_y = NULL,
custom_parsed_metadata = NULL){
# check metadata
stopifnot(is(metadata, "data.frame"))
stopifnot(all(c("ID", "Compound", "Cell", "Dose", "Time", "Vehicule") %in%
colnames(metadata)))
stopifnot(pca_subset %in% colnames(metadata))
stopifnot(pca_batch_metadata %in% colnames(metadata))
checkmate::assert_character(extra_count_matrix, null.ok = TRUE)
checkmate::assert_logical(do_pca)
checkmate::assert_logical(do_DE)
checkmate::assert_logical(render_repport)
checkmate::assert_character(volcano_split, null.ok = TRUE, len = 1)
checkmate::assert_character(volcano_facet_x, null.ok = TRUE, len = 1)
checkmate::assert_character(volcano_facet_y, null.ok = TRUE, len = 1)
# check txi
validate_txi(txi)
# check outdir and create out folders
checkmate::checkPathForOutput(outdir, overwrite = TRUE)
r_objects <- paste0(outdir,"/r_objects/")
path_png <- paste0(outdir,"/pdf/")
path_pca <- paste0(path_png,"/pca/")
path_csv <- paste0(outdir,"/de_csv/")
path_volcano <- paste0(path_png,"/volcano/")
rmd_out_filepath <- paste0(outdir, "/rapport.Rmd")
checkmate::checkPathForOutput(rmd_out_filepath, overwrite = TRUE)
dir.create(r_objects, showWarnings = TRUE, recursive = TRUE)
dir.create(path_png, showWarnings = TRUE, recursive = TRUE)
dir.create(path_pca, showWarnings = TRUE, recursive = TRUE)
dir.create(path_volcano, showWarnings = TRUE, recursive = TRUE)
dir.create(path_csv, showWarnings = TRUE, recursive = TRUE)
results <- list()
# 1) parse metadata
if(!is.null(custom_parsed_metadata)){
stopifnot(all(sapply(custom_parsed_metadata, function(x) is(x, "data.frame"))))
stopifnot(all(names(custom_parsed_metadata) %in% c("pca_info", "de_info", "design_info", "volcano_info", "report_info")))
parse_res <- custom_parsed_metadata
} else {
parse_res <- parse_metadata_for_LO_report(metadata, pca_subset = pca_subset, pca_batch_metadata = pca_batch_metadata,
extra_count_matrix = extra_count_matrix)
}
report_info <- parse_res$report_info
# check infos sheets: only pca and de
# volcano needs DE files
# report need pca, volcano files
tmp_pca_info <- complete_pca_infos(parse_res$pca_info)
stopifnot(length(validate_pca_infos(tmp_pca_info, metadata, txi)) == 0)
tmp_de_infos <- complete_de_infos(parse_res$de_info)
stopifnot(length(validate_de_infos(tmp_de_infos, parse_res$design_info, txi)) == 0)
if(do_pca){
# 2) from metadata, do batch pca
results[["pca"]] <- batch_pca(pca_infos = parse_res$pca_info,
txi = txi, metadata = metadata,
r_objects = r_objects, outdir = path_pca)
results[["parse_metadata"]] <- parse_res
} else {
# remove pca from report
report_info <- report_info %>%
dplyr::filter(!stringr::str_detect(value, "(?i)PCA"))
}
if(do_DE){
# 3) from metadata, do batch de
results[["de"]] <- batch_de(de_infos = parse_res$de_info,
txi = txi,
design = parse_res$design_info,
r_objects = r_objects, # DDS
outdir = path_csv) # DE res as .csv file
# 4) from metadata and batch_de results, do batch volcano
results[["volcano"]] <- batch_volcano(volcano_infos = parse_res$volcano_info,
de_results = path_csv, # unique ids
r_objects = r_objects, # unique ids
outdir = path_volcano)
# plot volcano object
if(!(is.null(volcano_split) | is.null(volcano_facet_x) | is.null(volcano_facet_y))){
volcanos_rds <- parse_res$volcano_info$id_plot
names(volcanos_rds) <- volcanos_rds
all_volcano <- imap_dfr(volcanos_rds, ~{
readRDS(paste0(r_objects, .x, ".rds")) %>%
.$data %>% mutate(id_volcano = .y)}) %>%
left_join(parse_res$volcano_info %>%
dplyr::select(-"y_axis"), by = c("id_volcano" = "id_plot"))
}
} else {
# remove volcano from report info
report_info <- report_info %>% dplyr::filter(!stringr::str_detect(value, "volcano")) %>%
dplyr::filter(!str_detect(value, "# Differential Expression"))
}
# 5) produce report anyway
# TODO: need to change parse_res$report_info to include path of object
# path should be relative to rmd file
results[["parse_metadata"]][["report_info"]] <- report_info <- report_info %>%
dplyr::mutate(value = ifelse(add == "plot",
# paste0(r_objects, value, ".rds"),
paste0("./r_objects/", value, ".rds"),
value))
#issues with filepath
if(produce_rmd_report){
results[["report"]] <- produce_report(report_infos = report_info,
report_filename = rmd_out_filepath)
if(render_repport){
## rmarkdown::render(...)
rmarkdown::render(rmd_out_filepath) # to htmls
}
}
return(invisible(results))
}
# produce_full_volcano <- function(de_res, fc_threshold = 3, p_threshold = 0.05,
# show_signif_counts = TRUE,
# show_signif_lines = "vertical",
# show_signif_color = TRUE, col_up = "#E73426",
# col_down = "#0020F5", size = 3, graph = TRUE,
# title = NULL) {
# stopifnot(is.numeric(fc_threshold))
# stopifnot(fc_threshold > 0)
# stopifnot(is.numeric(p_threshold))
# stopifnot(p_threshold >= 0 & p_threshold <= 1)
# stopifnot(is(show_signif_counts, "logical"))
# stopifnot(is(show_signif_lines, "character"))
# stopifnot(is(title, "character") | is.null(title))
# expected_values <- c("none", "both", "vertical", "horizontal")
# stopifnot(show_signif_lines %in% expected_values)
# stopifnot(is(show_signif_color, "logical"))
# if (show_signif_color) {
# stopifnot(is(col_up, "character"))
# stopifnot(is_color(col_up))
# stopifnot(is(col_down, "character"))
# stopifnot(is_color(col_down))
# }
# stopifnot(is(graph, "logical"))
#
# stopifnot(y_axis %in% colnames(de_res))
#
# if (show_signif_color) {
# red <- col_up
# blue <- col_down
# grey <- "#7C7C7C"
# } else {
# grey <- "#7C7C7C"
# red <- "#E73426"
# blue <- "#0020F5"
# }
#
#
# # counts
#
#
#
# count_blue <- dplyr::filter(de_res, color == blue) %>% nrow
# count_red <- dplyr::filter(de_res, color == red) %>% nrow
# lbl <- c(count_blue, count_red) %>% as.character
# count_y <- round(max(-log10(de_res$padj), na.rm = TRUE))
# count_y <- count_y * 0.925
# min_x <- round(min(de_res$log2FoldChange, na.rm = TRUE))
# min_x <- min_x * 0.925
# max_x <- round(max(de_res$log2FoldChange, na.rm = TRUE))
# max_x <- max_x * 0.925
#
# if (!show_signif_color) {
# de_res <- mutate(de_res, color = grey)
# p <- ggplot2::ggplot(de_res, ggplot2::aes(x = log2FoldChange,
# y = -log10(y_axis))) +
# ggplot2::geom_point(size = size, alpha = 0.8)
# } else {
# p <- ggplot2::ggplot(de_res, ggplot2::aes(x = log2FoldChange,
# y = -log10(y_axis),
# color = color)) +
# ggplot2::geom_point(size = size, alpha = 0.8) +
# ggplot2::scale_colour_identity()
# }
# if (show_signif_lines %in% c("both", "vertical")) {
# p <- p + ggplot2::geom_vline(xintercept = c(-log2(fc_threshold),
# log2(fc_threshold)),
# linetype = "dashed")
# }
# if (show_signif_lines %in% c("both", "horizontal")) {
# p <- p + ggplot2::geom_hline(yintercept = -log10(p_threshold),
# linetype = "dashed")
# }
# if (show_signif_counts) {
# if (show_signif_color) {
# p <- p + ggplot2::annotate("text",
# x = c(min_x, max_x),
# y = count_y,
# label = lbl,
# size = 8,
# fontface = 2,
# color = c(blue, red))
# } else {
# p <- p + ggplot2::annotate("text",
# x = c(min_x, max_x),
# y = count_y,
# label = lbl,
# size = 8,
# fontface = 2,
# color = c(grey, grey))
# }
# }
# if(!is.null(title)){
# p <- p + ggplot2::ggtitle(title)
# }
# p <- p + ggplot2::theme_minimal() +
# ggplot2::ylab(y_axis)
#
# if (isTRUE(graph)) {
# print(p)
# }
# invisible(list(p = p, df = de_res))
# }
# ==== File: 15th May Factors.R — repo: VerdeNotte/EstioTraining (no license) ====
DataEstio <- factor (c("Richard", "Sue", "Katy", "Liz","Tom"),
labels = c("Richard", "Sue", "Katy", "Liz","Tom"), ordered = F)
#Drop down menu, second bit is telling us the option
View (DataEstio)
#Look at the data
#All we want to do is add in another value, but there's an issue:
#"Rebecca" is not on the fictional drop down menu (the levels), so the assignment yields NA
DataEstio[6] <- "Rebecca"
DataEstio[7] <- "Brenda"
labels(DataEstio)
#gives the number of the label
labels(DataEstio) <- c(labels(DataEstio), "Rebecca")
#Gives output and also assign something to it
install.packages("arules")
library (arules)
labels(DataEstio) <- c(labels(DataEstio), c("Rebecca"))
View (DataEstio)
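A minimal sketch of the safe idiom for the problem above: register the new level first with `levels()`, then assign the value.

```r
f <- factor(c("Richard", "Sue", "Katy", "Liz", "Tom"))
levels(f) <- c(levels(f), "Rebecca")  # add the level to the "drop down menu" first
f[6] <- "Rebecca"                     # assignment now succeeds instead of producing NA
```

Note that `levels<-`, not `labels<-`, is what controls the allowed values of a factor.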
% ==== File: man/parsimony_Crossover.Rd — repo: jpison/GAparsimony (no license) ====
\name{parsimony_crossover}
\alias{parsimony_Crossover}
%
\alias{parsimony_crossover}
\title{Crossover operators in GA-PARSIMONY}
\description{Functions implementing a particular crossover genetic operator for GA-PARSIMONY. The method uses heuristic blending for the model parameters and random swapping for the binary selected features.}
\usage{
parsimony_crossover(object, parents, alpha=0.1, perc_to_swap=0.5, \dots)
}
\arguments{
\item{object}{An object of class \code{"ga_parsimony"}, usually resulting from a call to function \code{\link{ga_parsimony}}.}
\item{parents}{A two-rows matrix of values indexing the parents from the current population.}
\item{alpha}{A tuning parameter for the Heuristic Blending outer bounds [Michalewicz, 1991]. Typical and default value is 0.1.}
\item{perc_to_swap}{Percentage of features to swap during the crossover process.}
\item{\dots}{Further arguments passed to or from other methods.}
}
%\details{}
\value{
Returns a list with four elements:
\item{children}{Matrix of dimension 2 times the number of decision variables containing the generated offspring;}
\item{fitnessval}{Vector of length 2 containing the fitness validation values for the offspring. A value of \code{NA} is returned if an offspring differs from both parents (which is usually the case).}
\item{fitnesstst}{Vector of length 2 containing the fitness of the offspring on the test database (if one was supplied). A value of \code{NA} is returned if an offspring differs from both parents (which is usually the case).}
\item{complexity}{Vector of length 2 containing the model complexity of the offspring. A value of \code{NA} is returned if an offspring differs from both parents (which is usually the case).}
}
%\references{}
\author{Francisco Javier Martinez de Pison. \email{fjmartin@unirioja.es}. EDMANS Group. \url{https://edmans.webs.com/}}
%\note{}
\seealso{\code{\link{ga_parsimony}}}
%\examples{}
%\keyword{ ~kwd1 }
%\keyword{ ~kwd2 }% __ONLY ONE__ keyword per line
|
7c6dce086cfa6dc94e572f241cd09f7f81e08379
|
f8509a0de4c57931f644a10bec3b783dc21722da
|
/analysis/archive/ggw-mod2_ALL_inprogress.R
|
c07ab4dd273353d9bdb3ce20aa0baa6d8ad3ee78
|
[] |
no_license
|
kgweisman/ggw-mod2
|
ee30ef8315ad7e3f501f696dec808baa832f79d0
|
acb87bd0e7f271e9f5f0cb34f1bab9f8c17f5614
|
refs/heads/master
| 2020-04-15T14:28:50.292381
| 2017-10-03T01:09:50
| 2017-10-03T01:09:50
| 48,076,480
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 28,622
|
r
|
ggw-mod2_ALL_inprogress.R
|
#################################################### WORKSPACE SETUP ###########
# set up workspace -------------------------------------------------------------
# load libraries
library(devtools)
library(psych)
library(stats)
library(nFactors)
library(ggplot2)
library(knitr)
library(tidyr)
library(dplyr)
# clear workspace
rm(list = ls(all = T))
graphics.off()
# choose datasource (manually) -------------------------------------------------
datasource <- "study 1" # 2015-12-15 (between)
# datasource <- "study 2" # 2016-01-12 (between rep)
# datasource <- "studies 1 & 2" # combine
# datasource <- "study 3" # 2016-01-10 (within)
# datasource <- "study 4" # 2016-01-14 (between, 21 characters)
########################################################## DATA PREP ###########
# load dataset -----------------------------------------------------------------
if(datasource == "study 1") {
d_raw <- read.csv("/Users/kweisman/Documents/Research (Stanford)/Projects/GGW-mod/ggw-mod2/mturk/v1 (2 conditions between)/GGWmod2_v1_clean.csv")
d <- d_raw %>%
mutate(study = "study 1")
rm(d_raw)
}
if(datasource == "study 2") {
d_raw <- read.csv("/Users/kweisman/Documents/Research (Stanford)/Projects/GGW-mod/ggw-mod2/mturk/v1 (replication)/GGWmod2_v1_replication_clean.csv")
d <- d_raw %>%
mutate(study = "study 2")
rm(d_raw)
}
if(datasource == "studies 1 & 2") {
d_raw1 <- read.csv("/Users/kweisman/Documents/Research (Stanford)/Projects/GGW-mod/ggw-mod2/mturk/v1 (2 conditions between)/GGWmod2_v1_clean.csv")
d_raw2 <- read.csv("/Users/kweisman/Documents/Research (Stanford)/Projects/GGW-mod/ggw-mod2/mturk/v1 (replication)/GGWmod2_v1_replication_clean.csv")
d1 <- d_raw1 %>%
mutate(yob = as.integer(as.character(yob)),
study = "study 1")
d2 <- d_raw2 %>%
mutate(yob = as.integer(as.character(yob)),
study = "study 2")
d <- full_join(d1, d2)
rm(d_raw1, d_raw2, d1, d2)
}
if(datasource == "study 3") {
d_raw <- read.csv("/Users/kweisman/Documents/Research (Stanford)/Projects/GGW-mod/ggw-mod2/mturk/v2 (2 conditions within)/GGWmod2_v2_withinsubjects_clean.csv")
d <- d_raw %>%
mutate(study = "study 3")
rm(d_raw)
}
if(datasource == "study 4") {
d_raw <- read.csv("/Users/kweisman/Documents/Research (Stanford)/Projects/GGW-mod/ggw-mod2/mturk/v3 (21 conditions between)/GGWmod2_v3_many_characters_clean.csv")
d <- d_raw %>%
mutate(study = "study 4")
rm(d_raw)
}
# clean up dataset -------------------------------------------------------------
if(datasource %in% c("study 1", "study 2", "studies 1 & 2")) {
# enact exclusionary criteria
d_clean_1 <- d %>%
mutate(finished_mod = ifelse(is.na(CATCH), 0,
ifelse(finished == 1, 1,
0.5))) %>%
filter(CATCH == 1, # exclude Ps who fail catch trials
finished_mod != 0) %>% # exclude Ps who did not complete task
mutate(yob_correct = as.numeric(
ifelse(as.numeric(as.character(yob)) > 1900 &
as.numeric(as.character(yob)) < 2000,
as.numeric(as.character(yob)), NA)), # correct formatting in yob
age_approx = 2015 - yob_correct) %>% # calculate approximate age
mutate(gender = factor(gender, levels = c(1, 2, 0),
labels = c("m", "f", "other"))) %>%
filter(age_approx >= 18) # exclude Ps who are younger than 18 years
# recode background and demographic variables
d_clean <- d_clean_1 %>%
mutate( # deal with study number
study = factor(study)) %>%
mutate( # deal with study duration
duration = strptime(end_time, "%I:%M:%S") -
strptime(start_time, "%I:%M:%S")) %>%
mutate( # deal with race
race_asian_east =
factor(ifelse(is.na(race_asian_east), "", "asian_east ")),
race_asian_south =
factor(ifelse(is.na(race_asian_south), "", "asian_south ")),
race_asian_other =
factor(ifelse(is.na(race_asian_other), "", "asian_other ")),
race_black =
factor(ifelse(is.na(race_black), "", "black ")),
race_hispanic =
factor(ifelse(is.na(race_hispanic), "", "hispanic ")),
race_middle_eastern =
factor(ifelse(is.na(race_middle_eastern), "", "middle_eastern ")),
race_native_american =
factor(ifelse(is.na(race_native_american), "", "native_american ")),
race_pac_islander =
factor(ifelse(is.na(race_pac_islander), "", "pac_islander ")),
race_white =
factor(ifelse(is.na(race_white), "", "white ")),
race_other_prefno =
factor(ifelse(is.na(race_other_prefno), "", "other_prefno ")),
race_cat = paste0(race_asian_east, race_asian_south, race_asian_other,
race_black, race_hispanic, race_middle_eastern,
race_native_american, race_pac_islander, race_white,
race_other_prefno),
race_cat2 = factor(sub(" +$", "", race_cat)),
race_cat3 = factor(ifelse(grepl(" ", race_cat2) == T, "multiracial",
as.character(race_cat2)))) %>%
select(study, subid:end_time, duration, finished:gender,
religion_buddhism:age_approx, race_cat3) %>%
rename(race_cat = race_cat3) %>%
mutate( # deal with religion
religion_buddhism =
factor(ifelse(is.na(religion_buddhism), "", "buddhism ")),
religion_christianity =
factor(ifelse(is.na(religion_christianity), "", "christianity ")),
religion_hinduism =
factor(ifelse(is.na(religion_hinduism), "", "hinduism ")),
religion_islam =
factor(ifelse(is.na(religion_islam), "", "islam ")),
religion_jainism =
factor(ifelse(is.na(religion_jainism), "", "jainism ")),
religion_judaism =
factor(ifelse(is.na(religion_judaism), "", "judaism ")),
religion_sikhism =
factor(ifelse(is.na(religion_sikhism), "", "sikhism ")),
religion_other =
factor(ifelse(is.na(religion_other), "", "other ")),
religion_none =
factor(ifelse(is.na(religion_none), "", "none ")),
religion_prefno =
factor(ifelse(is.na(religion_prefno), "", "other_prefno ")),
religion_cat = paste0(religion_buddhism, religion_christianity,
religion_hinduism, religion_islam, religion_jainism,
religion_judaism, religion_sikhism, religion_other,
religion_none, religion_prefno),
religion_cat2 = factor(sub(" +$", "", religion_cat)),
religion_cat3 = factor(ifelse(grepl(" ", religion_cat2) == T,
"multireligious",
as.character(religion_cat2)))) %>%
select(study:gender, feedback:race_cat, religion_cat3) %>%
rename(religion_cat = religion_cat3)
# remove extraneous dfs and variables
rm(d_clean_1)
}
if(datasource == "study 3") {
# enact exclusionary criteria
d_clean_1 <- d %>%
mutate(finished_mod = ifelse(is.na(CATCH..characterL) |
is.na(CATCH..characterR), 0,
ifelse(finished == 1, 1,
0.5))) %>%
filter(CATCH..characterL == 5, # exclude Ps who fail catch trials
CATCH..characterR == 5,
finished_mod != 0) %>% # exclude Ps who did not complete task
mutate(yob_correct = as.numeric(
ifelse(as.numeric(as.character(yob)) > 1900 &
as.numeric(as.character(yob)) < 2000,
as.numeric(as.character(yob)), NA)), # correct formatting in yob
age_approx = 2015 - yob_correct) %>% # calculate approximate age
mutate(gender = factor(gender, levels = c(1, 2, 0),
labels = c("m", "f", "other"))) %>%
filter(age_approx >= 18) # exclude Ps who are younger than 18 years
# recode background and demographic variables
d_clean_2 <- d_clean_1 %>%
mutate( # deal with study number
study = factor(study)) %>%
mutate( # deal with study duration
duration = strptime(end_time, "%I:%M:%S") -
strptime(start_time, "%I:%M:%S")) %>%
mutate( # deal with race
race_asian_east =
factor(ifelse(is.na(race_asian_east), "", "asian_east ")),
race_asian_south =
factor(ifelse(is.na(race_asian_south), "", "asian_south ")),
race_asian_other =
factor(ifelse(is.na(race_asian_other), "", "asian_other ")),
race_black =
factor(ifelse(is.na(race_black), "", "black ")),
race_hispanic =
factor(ifelse(is.na(race_hispanic), "", "hispanic ")),
race_middle_eastern =
factor(ifelse(is.na(race_middle_eastern), "", "middle_eastern ")),
race_native_american =
factor(ifelse(is.na(race_native_american), "", "native_american ")),
race_pac_islander =
factor(ifelse(is.na(race_pac_islander), "", "pac_islander ")),
race_white =
factor(ifelse(is.na(race_white), "", "white ")),
race_other_prefno =
factor(ifelse(is.na(race_other_prefno), "", "other_prefno ")),
race_cat = paste0(race_asian_east, race_asian_south, race_asian_other,
race_black, race_hispanic, race_middle_eastern,
race_native_american, race_pac_islander, race_white,
race_other_prefno),
race_cat2 = factor(sub(" +$", "", race_cat)),
race_cat3 = factor(ifelse(grepl(" ", race_cat2) == T, "multiracial",
as.character(race_cat2)))) %>%
select(study, subid:end_time, duration, finished:gender,
religion_buddhism:age_approx, race_cat3) %>%
rename(race_cat = race_cat3) %>%
mutate( # deal with religion
religion_buddhism =
factor(ifelse(is.na(religion_buddhism), "", "buddhism ")),
religion_christianity =
factor(ifelse(is.na(religion_christianity), "", "christianity ")),
religion_hinduism =
factor(ifelse(is.na(religion_hinduism), "", "hinduism ")),
religion_islam =
factor(ifelse(is.na(religion_islam), "", "islam ")),
religion_jainism =
factor(ifelse(is.na(religion_jainism), "", "jainism ")),
religion_judaism =
factor(ifelse(is.na(religion_judaism), "", "judaism ")),
religion_sikhism =
factor(ifelse(is.na(religion_sikhism), "", "sikhism ")),
religion_other =
factor(ifelse(is.na(religion_other), "", "other ")),
religion_none =
factor(ifelse(is.na(religion_none), "", "none ")),
religion_prefno =
factor(ifelse(is.na(religion_prefno), "", "other_prefno ")),
religion_cat = paste0(religion_buddhism, religion_christianity,
religion_hinduism, religion_islam, religion_jainism,
religion_judaism, religion_sikhism, religion_other,
religion_none, religion_prefno),
religion_cat2 = factor(sub(" +$", "", religion_cat)),
religion_cat3 = factor(ifelse(grepl(" ", religion_cat2) == T,
"multireligious",
as.character(religion_cat2)))) %>%
select(study:gender, feedback:race_cat, religion_cat3) %>%
rename(religion_cat = religion_cat3)
# rename response variables
d_clean_3 <- d_clean_2
names(d_clean_3) <- gsub("get", "", names(d_clean_3))
names(d_clean_3) <- gsub("\\.", "", names(d_clean_3))
names(d_clean_3) <- gsub("char", "_char", names(d_clean_3))
names(d_clean_3)[names(d_clean_3) %in% c("_characterL", "_characterR")] <- c("characterL", "characterR")
# recode response variables (center)
d_clean_4 <- d_clean_3
for(i in 11:92) {
d_clean_4[,i] <- d_clean_4[,i] - 4 # transform from 1 to 7 --> -3 to 3
}
# recode characterL vs. characterR as beetle vs. robot
d_clean_5_demo <- d_clean_4 %>%
select(study:condition, yob:religion_cat)
d_clean_5_characterL <- d_clean_4 %>%
mutate(target = characterL) %>%
select(study, subid, target, happy_characterL:CATCH_characterL)
names(d_clean_5_characterL) <- gsub("_characterL", "", names(d_clean_5_characterL))
d_clean_5_characterR <- d_clean_4 %>%
mutate(target = characterR) %>%
select(study, subid, target, happy_characterR:CATCH_characterR)
names(d_clean_5_characterR) <- gsub("_characterR", "", names(d_clean_5_characterR))
d_clean <- d_clean_5_characterL %>%
full_join(d_clean_5_characterR) %>%
full_join(d_clean_5_demo) %>%
select(study, subid, date:religion_cat, target:CATCH)
# remove extraneous dfs and variables
rm(d_clean_1, d_clean_2, d_clean_3, d_clean_4, d_clean_5_characterL,
d_clean_5_characterR, d_clean_5_demo, i)
}
if(datasource == "study 4") {
# enact exclusionary criteria
d_clean_1 <- d %>%
mutate(finished_mod = ifelse(is.na(CATCH), 0,
ifelse(finished == 1, 1,
0.5))) %>%
filter(CATCH == 1, # exclude Ps who fail catch trials
finished_mod != 0) %>% # exclude Ps who did not complete task
mutate(yob_correct = as.numeric(
ifelse(as.numeric(as.character(yob)) > 1900 &
as.numeric(as.character(yob)) < 2000,
as.numeric(as.character(yob)), NA)), # correct formatting in yob
age_approx = 2015 - yob_correct) %>% # calculate approximate age
mutate(gender = factor(gender, levels = c(1, 2, 0),
labels = c("m", "f", "other"))) %>%
filter(age_approx >= 18) # exclude Ps who are younger than 18 years
# recode one character
d_clean_2 <- d_clean_1 %>%
mutate(condition = factor(ifelse(
grepl("vegetative", as.character(condition)), "pvs",
ifelse(grepl("blue", as.character(condition)), "bluejay",
as.character(condition)))))
# recode background and demographic variables
d_clean <- d_clean_2 %>%
mutate( # deal with study number
study = factor(study)) %>%
mutate( # deal with study duration
duration = strptime(end_time, "%I:%M:%S") -
strptime(start_time, "%I:%M:%S")) %>%
mutate( # deal with race
race_asian_east =
factor(ifelse(is.na(race_asian_east), "", "asian_east ")),
race_asian_south =
factor(ifelse(is.na(race_asian_south), "", "asian_south ")),
race_asian_other =
factor(ifelse(is.na(race_asian_other), "", "asian_other ")),
race_black =
factor(ifelse(is.na(race_black), "", "black ")),
race_hispanic =
factor(ifelse(is.na(race_hispanic), "", "hispanic ")),
race_middle_eastern =
factor(ifelse(is.na(race_middle_eastern), "", "middle_eastern ")),
race_native_american =
factor(ifelse(is.na(race_native_american), "", "native_american ")),
race_pac_islander =
factor(ifelse(is.na(race_pac_islander), "", "pac_islander ")),
race_white =
factor(ifelse(is.na(race_white), "", "white ")),
race_other_prefno =
factor(ifelse(is.na(race_other_prefno), "", "other_prefno ")),
race_cat = paste0(race_asian_east, race_asian_south, race_asian_other,
race_black, race_hispanic, race_middle_eastern,
race_native_american, race_pac_islander, race_white,
race_other_prefno),
race_cat2 = factor(sub(" +$", "", race_cat)),
race_cat3 = factor(ifelse(grepl(" ", race_cat2) == T, "multiracial",
as.character(race_cat2)))) %>%
select(study, subid:end_time, duration, finished:gender,
education:age_approx, race_cat3) %>%
rename(race_cat = race_cat3)
# remove extraneous dfs and variables
rm(d_clean_1, d_clean_2)
}
# prepare datasets for dimension reduction analyses ----------------------------
if(datasource %in% c("study 1", "study 2", "studies 1 & 2")) {
# beetle condition
d_beetle <- d_clean %>%
filter(condition == "beetle") %>% # filter by condition
select(subid, happy:pride) # NOTE: make sure responses are scored as -3:3!
d_beetle <- data.frame(d_beetle[,-1], row.names = d_beetle[,1])
# robot condition
d_robot <- d_clean %>%
filter(condition == "robot") %>% # filter by condition
select(subid, happy:pride) # NOTE: make sure responses are scored as -3:3!
d_robot <- data.frame(d_robot[,-1], row.names = d_robot[,1])
# collapse across conditions
d_all <- d_clean %>%
select(subid, happy:pride) # NOTE: make sure responses are scored as -3:3!
d_all <- data.frame(d_all[,-1], row.names = d_all[,1])
}
if(datasource == "study 3") {
# beetle target
d_beetle <- d_clean %>%
filter(target == "beetle") %>% # filter by target
select(subid, happy:pride) # NOTE: make sure responses are scored as -3:3!
d_beetle <- data.frame(d_beetle[,-1], row.names = d_beetle[,1])
# robot target
d_robot <- d_clean %>%
filter(target == "robot") %>% # filter by target
select(subid, happy:pride) # NOTE: make sure responses are scored as -3:3!
d_robot <- data.frame(d_robot[,-1], row.names = d_robot[,1])
# collapse across targets
d_all <- d_clean %>%
mutate(subid = paste(target, subid, sep = "_")) %>%
select(subid, happy:pride) # NOTE: make sure responses are scored as -3:3!
d_all <- data.frame(d_all[,-1], row.names = d_all[,1])
}
if(datasource == "study 4") {
# collapse across conditions
d_all <- d_clean %>%
select(subid, happy:pride) # NOTE: make sure responses are scored as -3:3!
d_all <- data.frame(d_all[,-1], row.names = d_all[,1])
}
####################################################### DEMOGRAPHICS ###########
# examine demographic variables ------------------------------------------------
# sample size
sample_size <- with(d_clean %>% select(condition, subid) %>% unique(),
table(condition))
kable(d_clean %>% select(condition, subid) %>% unique() %>% count(condition))
# duration
duration <- d_clean %>%
group_by(condition) %>%
summarise(min_duration = min(duration, na.rm = T),
max_duration = max(duration, na.rm = T),
median_duration = median(duration, na.rm = T),
mean_duration = mean(duration, na.rm = T),
sd_duration = sd(duration, na.rm = T))
if(datasource %in% c("study 1", "study 2", "studies 1 & 2")) {
# test for differences in duration across conditions
duration_diff <- t.test(duration ~ condition,
data = d_clean %>%
select(condition, subid, duration) %>%
unique())
}
# approximate age
age_approx <- d_clean %>%
group_by(condition) %>%
summarise(min_age = min(age_approx, na.rm = T),
max_age = max(age_approx, na.rm = T),
median_age = median(age_approx, na.rm = T),
mean_age = mean(age_approx, na.rm = T),
sd_age = sd(age_approx, na.rm = T))
if(datasource %in% c("study 1", "study 2", "studies 1 & 2")) {
# test for differences in age across conditions
age_diff <- t.test(age_approx ~ condition,
data = d_clean %>%
select(condition, subid, age_approx) %>%
unique())
}
# gender
if(datasource %in% c("study 1", "study 2", "studies 1 & 2")) {
gender <- with(d_clean %>% select(subid, condition, gender) %>% unique(),
table(condition, gender))
kable(addmargins(gender))
summary(gender) # test for difference in gender distribution across conditions
} else {
gender <- with(d_clean %>% select(subid, gender) %>% unique(),
table(gender))
gender_diff <- chisq.test(gender)
}
# racial/ethnic background
if(datasource %in% c("study 1", "study 2", "studies 1 & 2")) {
race <- with(d_clean %>% select(subid, condition, race_cat) %>% unique(),
table(condition, race_cat))
kable(addmargins(race))
summary(race) # test for difference in race distribution across conditions
} else {
race <- with(d_clean %>% select(subid, race_cat) %>% unique(),
table(race_cat))
race_diff <- chisq.test(race)
}
# religious background
if(datasource %in% c("study 1", "study 2", "studies 1 & 2")) {
religion <- with(d_clean %>% select(subid, condition, religion_cat) %>% unique(),
table(condition, religion_cat))
kable(addmargins(religion))
summary(religion) # test for difference in religion distribution across conditions
} else if(datasource == "study 3") {
religion <- with(d_clean %>% select(subid, religion_cat) %>% unique(),
table(religion_cat))
religion_diff <- chisq.test(religion)
}
# STILL NEED TO DEAL WITH EDUCATION FOR STUDY 4
######################################## EXPLORATORY FACTOR ANALYSIS ###########
# make function for determining how many factors to extract
howManyFactors <- function(data) {
# scree plot: look for elbow
scree(data)
# eigenvalues: count how many > 1
eigen_efa <- length(eigen(cor(data))$values[eigen(cor(data))$values > 1])
# very simple structure: look at different indices
  vss_unrotated <- nfactors(data, n = 15, rotate = "none",
                            title = "VSS (no rotation)") # without rotation
  vss_rotated <- nfactors(data, n = 15, rotate = "varimax",
                          title = "VSS (varimax rotation)") # with rotation
  # return
  return(list("Count of eigenvalues > 1" = eigen_efa,
              "VSS (no rotation)" = vss_unrotated,
              "VSS (varimax rotation)" = vss_rotated))
}
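# Illustrative sketch (added; not part of the original analysis): the helper
# above can be smoke-tested on simulated data before running it on the real
# responses. `fake_data` is a hypothetical example object.
# fake_data <- as.data.frame(matrix(rnorm(200 * 20), nrow = 200))
# howManyFactors(fake_data)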
# exploratory factor analysis by condition (studies 1-3): BEETLE ---------------
if(datasource %in% c("study 1", "study 2", "studies 1 & 2", "study 3")) {
# step 1: determine how many dimensions to extract ---------------------------
# howMany_beetle <- howManyFactors(d_beetle)
nfactors_beetle <- 3
# step 2: run FA without rotation with N factors -----------------------------
efa_beetle_unrotatedN <- factanal(d_beetle, nfactors_beetle, r = "none")
# examine loadings for each factor
efa_beetle_unrotatedN_F1 <-
data.frame(F1 = sort(efa_beetle_unrotatedN$loadings[,1],
decreasing = TRUE))
if(nfactors_beetle > 1) {
efa_beetle_unrotatedN_F2 <-
data.frame(F2 = sort(efa_beetle_unrotatedN$loadings[,2],
decreasing = TRUE))
if(nfactors_beetle > 2) {
efa_beetle_unrotatedN_F3 <-
data.frame(F3 = sort(efa_beetle_unrotatedN$loadings[,3],
decreasing = TRUE))
if(nfactors_beetle > 3) {
efa_beetle_unrotatedN_F4 <-
data.frame(F4 = sort(efa_beetle_unrotatedN$loadings[,4],
decreasing = TRUE))
}
}
}
# step 3: run FA with rotation with N factors --------------------------------
efa_beetle_rotatedN <- factanal(d_beetle, nfactors_beetle, r = "varimax")
# examine loadings for each factor
efa_beetle_rotatedN_F1 <-
data.frame(F1 = sort(efa_beetle_rotatedN$loadings[,1],
decreasing = TRUE))
if(nfactors_beetle > 1) {
efa_beetle_rotatedN_F2 <-
data.frame(F2 = sort(efa_beetle_rotatedN$loadings[,2],
decreasing = TRUE))
if(nfactors_beetle > 2) {
efa_beetle_rotatedN_F3 <-
data.frame(F3 = sort(efa_beetle_rotatedN$loadings[,3],
decreasing = TRUE))
if(nfactors_beetle > 3) {
efa_beetle_rotatedN_F4 <-
data.frame(F4 = sort(efa_beetle_rotatedN$loadings[,4],
decreasing = TRUE))
}
}
}
}
# exploratory factor analysis by condition (studies 1-3): ROBOT ----------------
if(datasource %in% c("study 1", "study 2", "studies 1 & 2", "study 3")) {
# step 1: determine how many dimensions to extract ---------------------------
# howMany_robot <- howManyFactors(d_robot)
nfactors_robot <- 3
# step 2: run FA without rotation with N factors -----------------------------
efa_robot_unrotatedN <- factanal(d_robot, nfactors_robot, r = "none")
# examine loadings for each factor
efa_robot_unrotatedN_F1 <-
data.frame(F1 = sort(efa_robot_unrotatedN$loadings[,1],
decreasing = TRUE))
if(nfactors_robot > 1) {
efa_robot_unrotatedN_F2 <-
data.frame(F2 = sort(efa_robot_unrotatedN$loadings[,2],
decreasing = TRUE))
if(nfactors_robot > 2) {
efa_robot_unrotatedN_F3 <-
data.frame(F3 = sort(efa_robot_unrotatedN$loadings[,3],
decreasing = TRUE))
if(nfactors_robot > 3) {
efa_robot_unrotatedN_F4 <-
data.frame(F4 = sort(efa_robot_unrotatedN$loadings[,4],
decreasing = TRUE))
}
}
}
# step 3: run FA with rotation with N factors --------------------------------
efa_robot_rotatedN <- factanal(d_robot, nfactors_robot, r = "varimax")
# examine loadings for each factor
efa_robot_rotatedN_F1 <-
data.frame(F1 = sort(efa_robot_rotatedN$loadings[,1],
decreasing = TRUE))
if(nfactors_robot > 1) {
efa_robot_rotatedN_F2 <-
data.frame(F2 = sort(efa_robot_rotatedN$loadings[,2],
decreasing = TRUE))
if(nfactors_robot > 2) {
efa_robot_rotatedN_F3 <-
data.frame(F3 = sort(efa_robot_rotatedN$loadings[,3],
decreasing = TRUE))
if(nfactors_robot > 3) {
efa_robot_rotatedN_F4 <-
data.frame(F4 = sort(efa_robot_rotatedN$loadings[,4],
decreasing = TRUE))
}
}
}
}
# exploratory factor analysis across conditions (studies 1-4) ------------------
# step 1: determine how many dimensions to extract -----------------------------
# howMany_all <- howManyFactors(d_all)
nfactors_all <- 3
# step 2: run FA without rotation with N factors -------------------------------
efa_all_unrotatedN <- factanal(d_all, nfactors_all, r = "none")
# examine loadings for each factor
efa_all_unrotatedN_F1 <-
data.frame(F1 = sort(efa_all_unrotatedN$loadings[,1],
decreasing = TRUE))
if(nfactors_all > 1) {
efa_all_unrotatedN_F2 <-
data.frame(F2 = sort(efa_all_unrotatedN$loadings[,2],
decreasing = TRUE))
if(nfactors_all > 2) {
efa_all_unrotatedN_F3 <-
data.frame(F3 = sort(efa_all_unrotatedN$loadings[,3],
decreasing = TRUE))
if(nfactors_all > 3) {
efa_all_unrotatedN_F4 <-
data.frame(F4 = sort(efa_all_unrotatedN$loadings[,4],
decreasing = TRUE))
}
}
}
# step 3: run FA with rotation with N factors ----------------------------------
efa_all_rotatedN <- factanal(d_all, nfactors_all, r = "varimax")
# examine loadings for each factor
efa_all_rotatedN_F1 <-
data.frame(F1 = sort(efa_all_rotatedN$loadings[,1],
decreasing = TRUE))
if(nfactors_all > 1) {
efa_all_rotatedN_F2 <-
data.frame(F2 = sort(efa_all_rotatedN$loadings[,2],
decreasing = TRUE))
if(nfactors_all > 2) {
efa_all_rotatedN_F3 <-
data.frame(F3 = sort(efa_all_rotatedN$loadings[,3],
decreasing = TRUE))
if(nfactors_all > 3) {
efa_all_rotatedN_F4 <-
data.frame(F4 = sort(efa_all_rotatedN$loadings[,4],
decreasing = TRUE))
}
}
}
# print ------------------------------------------------------------------------
if(datasource %in% c("study 1", "study 2", "studies 1 & 2", "study 3")) {
# BEETLE condition
efa_beetle_unrotatedN
efa_beetle_rotatedN
efa_beetle_rotatedN_F1
if(nfactors_beetle > 1) {efa_beetle_rotatedN_F2}
if(nfactors_beetle > 2) {efa_beetle_rotatedN_F3}
if(nfactors_beetle > 3) {efa_beetle_rotatedN_F4}
# ROBOT condition
efa_robot_unrotatedN
efa_robot_rotatedN
efa_robot_rotatedN_F1
if(nfactors_robot > 1) {efa_robot_rotatedN_F2}
if(nfactors_robot > 2) {efa_robot_rotatedN_F3}
if(nfactors_robot > 3) {efa_robot_rotatedN_F4}
}
# ALL conditions
efa_all_unrotatedN
efa_all_rotatedN
efa_all_rotatedN_F1
if(nfactors_all > 1) {efa_all_rotatedN_F2}
if(nfactors_all > 2) {efa_all_rotatedN_F3}
if(nfactors_all > 3) {efa_all_rotatedN_F4}
####################################### CONFIRMATORY FACTOR ANALYSIS ###########
# visual =~ x1 + x2 + x3
# textual =~ x4 + x5 + x6
# speed =~ x7 + x8 + x9
|
8ff0eb13fadf91a978e299a774320237556d9e05
|
6c5bd91e0383af2a4134a07e121a3ea6ed2119af
|
/man/shattered.regions.cnv.Rd
|
0730ed0f56c4e5f84dd20917d6e4adb8aaa784c6
|
[] |
no_license
|
cansavvy/svcnvplus
|
d41f3503c42f400acb0e606d7ece209d8d01132f
|
ea43a5dbd7ae38f1b26520a31af1daddadbaf5dd
|
refs/heads/master
| 2020-12-04T04:21:20.444872
| 2019-12-16T19:42:28
| 2019-12-16T19:42:28
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 1,580
|
rd
|
shattered.regions.cnv.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/shattered.regions.cnv.r
\name{shattered.regions.cnv}
\alias{shattered.regions.cnv}
\title{Caller for shattered genomic regions based on breakpoint densities}
\usage{
shattered.regions.cnv(
seg,
fc.pct = 0.2,
min.seg.size = 0,
min.num.probes = 0,
low.cov = NULL,
clean.brk = clean.brk,
window.size = 10,
slide.size = 2,
num.breaks = 10,
num.sd = 5,
dist.iqm.cut = 1e+05,
verbose = FALSE
)
}
\arguments{
\item{seg}{(data.frame) segmentation data with 6 columns: sample, chromosome, start, end, probes, segment_mean}
\item{fc.pct}{(numeric) copy number change between 2 consecutive segments: i.e (default) cutoff = 0.2 represents a fold change of 0.8 or 1.2}
\item{min.seg.size}{(numeric) The minimum segment size (in base pairs) to include in the analysis}
\item{min.num.probes}{(numeric) The minimum number of probes per segment to include in the analysis}
\item{low.cov}{(data.frame) a data.frame (chr, start, end) indicating low coverage regions to exclude from the analysis}
\item{window.size}{(numeric) size in megabases of the genome bin used to compute breakpoint density}
\item{slide.size}{(numeric) size in megabases of the sliding genome window}
\item{num.breaks}{(numeric) the minimum number of breakpoints per genome window required to call a candidate shattered region}
\item{num.sd}{(numeric) the number of standard deviations above the average breakpoint density required to call a candidate shattered region}
}
\description{
Caller for shattered genomic regions based on breakpoint densities
}
\examples{
shattered.regions.cnv()
}
\keyword{CNV,}
\keyword{segmentation}
|
075d6bbe755e88785d2f4dfa94acd860fe380d77
|
acb2b3bebdbcb1d3e686d51b87c100353dc4a54f
|
/R/plotlis_somtc.R
|
802f90a348df347a6fdfb74601f800e96a9edcba
|
[] |
no_license
|
wangdata/soyface_daycent
|
ce5b3ce38d7abc0cb0898941de85ada1441b5cf2
|
21f459cbbf7b5f2c002940eb1ba6c1bf5e616f47
|
refs/heads/master
| 2023-07-21T06:51:08.221378
| 2018-06-01T06:51:27
| 2018-06-01T06:51:27
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,593
|
r
|
plotlis_somtc.R
|
#!/usr/bin/env Rscript
# A specialized variant of plotlis.R, created mostly to present annual values instead of monthly fluctuations for the publication version of total SOM C predictions.
library(ggplot2)
library(grid)
library(dplyr)
library(ggplotTicks)
library(DeLuciatoR)
# If argv exists already, we're being sourced from inside another script.
# If not, we're running standalone and taking arguments from the command line.
if(!exists("argv")){
argv = commandArgs(trailingOnly = TRUE)
}
lis = read.csv(argv[1], colClasses=c("character", rep("numeric", 30)))
lis = lis[lis$time >=1950,]
lis$run = relevel(factor(lis$run), ref="ctrl")
lis = (lis
%>% mutate(year=floor(time))
%>% group_by(run, year)
%>% select(somtc)
%>% summarise_each(funs(mean)))
scale_labels = c(
ctrl="Control",
heat="Heat",
co2=expression(CO[2]),
heatco2=expression(paste("Heat+", CO[2])))
scale_colors = c(ctrl="grey", heat="black", co2="grey", heatco2="black")
scale_linetypes = c(ctrl=1, heat=1, co2=2, heatco2=2)
plt = (ggplot(data=lis, aes(x=year, y=somtc, color=run, lty=run))
+geom_line()
+scale_color_manual(labels=scale_labels, values=scale_colors)
+scale_linetype_manual(labels=scale_labels, values=scale_linetypes)
+ylab(expression(paste("Daycent SOM, g C ", m^2)))
+theme_ggEHD(16)
+theme(
legend.title=element_blank(),
legend.key=element_blank(),
legend.position=c(0.15,0.85),
legend.background=element_blank(),
legend.text.align=0))
ggsave_fitmax(
plot=mirror_ticks(plt),
filename=paste0(argv[1], "_somtc_fancy.png"),
maxheight=9,
maxwidth=6.5,
units="in",
dpi=300)
|
13705f3def92f2bf9abe40fadc3c960674c81485
|
8f7320c10f2c5fc8475753dc5256d1a66067e15c
|
/rkeops/man/extract.Rd
|
964aa445e59e3eeae94771d4a2f0361c52a8717c
|
[
"MIT"
] |
permissive
|
getkeops/keops
|
947a5409710379893c6c7a46d0a256133a6d8aff
|
52ed22a7fbbcf4bd02dbdf5dc2b00bf79cceddf5
|
refs/heads/main
| 2023-08-25T12:44:22.092925
| 2023-08-09T13:33:58
| 2023-08-09T13:33:58
| 182,054,091
| 910
| 69
|
MIT
| 2023-09-03T20:35:44
| 2019-04-18T09:04:07
|
Python
|
UTF-8
|
R
| false
| true
| 2,269
|
rd
|
extract.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lazytensor_operations.R
\name{extract}
\alias{extract}
\title{Extract.}
\usage{
extract(x, m, d)
}
\arguments{
\item{x}{A \code{LazyTensor} or a \code{ComplexLazyTensor}.}
\item{m}{An \code{integer} corresponding to the starting index.}
\item{d}{An \code{integer} corresponding to the output dimension.}
}
\value{
A \code{LazyTensor}.
}
\description{
Symbolic sub-element extraction. A unary operation.
}
\details{
\code{extract(x_i, m, d)} encodes, symbolically, the extraction
of a range of values \code{x[m:m+d]} in the \code{LazyTensor} \code{x}; (\code{m} is the
starting index, and \code{d} is the dimension of the extracted sub-vector).
\strong{IMPORTANT}
IN THIS CASE, INDICES START AT ZERO, therefore, \code{m} should be in \verb{[0, n)},
where \code{n} is the inner dimension of \code{x}. And \code{d} should be in \verb{[0, n-m]}.
\strong{Note}
See the examples below for a more concrete explanation of the use of \code{extract()}.
}
\examples{
\dontrun{
# Two very rudimentary examples
# -----------------------------
# Let's say that you have a matrix `g` looking like this:
# [,1] [,2] [,3] [,4]
# [1,] 1 8 1 3
# [2,] 2 1 2 7
# [3,] 3 7 4 5
# [4,] 1 3 3 0
# [5,] 5 4 9 4
# Convert it to LazyTensor:
g_i <- LazyTensor(g, index = 'i')
# Then extract some elements:
ext_g <- extract(g_i, 1, 3)
# In this case, `ext_g` is a LazyTensor encoding, symbolically,
# the following part of g:
# [,1] [,2] [,3]
# [1,] 8 1 3
# [2,] 1 2 7
# [3,] 7 4 5
# [4,] 3 3 0
# [5,] 4 9 4
# Same principle with a LazyTensor encoding a vector:
v <- c(1, 2, 3, 1, 5)
Pm_v <- LazyTensor(v)
ext_Pm_v <- extract(Pm_v, 2, 3)
# In this case, `ext_Pm_v` is a LazyTensor encoding, symbolically,
# the following part of v:
# [,1] [,2] [,3]
# [1,] 3 1 5
# A more general example
# ----------------------
x <- matrix(runif(150 * 5), 150, 5) # arbitrary R matrix, 150 rows, 5 columns
x_i <- LazyTensor(x, index = 'i') # LazyTensor from matrix x, indexed by 'i'
m <- 2
d <- 2
extract_x <- extract(x_i, m, d) # symbolic matrix
}
}
\author{
Chloe Serre-Combe, Amelie Vernay
}
|
17d6a3fda732624fa3246a0d8341fe0864802984
|
897425b5ca880e106aae6483c2967a4e4540b81c
|
/deseq2/test.R
|
16679d0e81d5e6262aa867e6eb11bffa56fd93c5
|
[] |
no_license
|
MingChen0919/RShinyApps
|
ff4ab5398a27b500a198342cfcf59d225c3cef4c
|
74026c2a8f2a4f8a83d384d7fd361e440f41e6f5
|
refs/heads/master
| 2020-05-21T08:06:36.635508
| 2018-07-12T03:44:29
| 2018-07-12T03:44:29
| 84,600,724
| 2
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 4,351
|
r
|
test.R
|
library(DESeq2)
library("pasilla")
pasCts <- system.file("extdata", "pasilla_gene_counts.tsv",
package="pasilla", mustWork=TRUE)
# pasAnno <- system.file("extdata", "pasilla_sample_annotation.csv",
# package="pasilla", mustWork=TRUE)
countData <- as.matrix(read.csv(pasCts,sep="\t",row.names="gene_id"))
write.csv(countData, file='countData.csv', row.names = TRUE)
condition = c("treated", "treated", "treated", "untreated", "untreated", "untreated", "untreated")
type = c("single","paired","paired","single","single","paired","paired")
colData = data.frame(condition, type)
rownames(colData) = c("treated1","treated2","treated3",
"untreated1","untreated2","untreated3","untreated4")
## make sure rows in colData and columns in countData are in the same order
all(rownames(colData) %in% colnames(countData))
countData = countData[, rownames(colData)]
write.csv(colData, file='colData.csv', row.names = TRUE)
# construct a DESeqDataSet
dds = DESeqDataSetFromMatrix(countData = countData,
colData = colData,
design = ~ condition)
# pre-filtering (remove rows that have only 0 or 1 read)
dds
dds = dds[rowSums(counts(dds)) > 1, ]
dds
# set reference level
dds$condition = relevel(dds$condition, ref="untreated")
# differential expression analysis
dds = DESeq(dds)
res = results(dds)
res
summary(res)
sum(res$padj < 0.1, na.rm=TRUE)
res05 = results(dds, alpha = 0.05)
summary(res05)
sum(res05$padj < 0.05, na.rm=TRUE)
## MA-plot
plotMA(res, main="DESeq2", ylim=c(-2,2))
df = data.frame(mean=res$baseMean,
lfc = res$log2FoldChange,
sig = ifelse(res$padj < 0.1, TRUE, FALSE))
plotMA(df, ylim=c(-2,2))
## plot counts
plotCounts(dds, gene=which.min(res$padj), intgroup = "condition")
countData['FBgn0039155', ]
##===== MA plot with plotly ========
library(plotly)
df = data.frame(gene_id = rownames(res),
mean = res$baseMean,
lfc = res$log2FoldChange,
sig = ifelse(res$padj < 0.1, TRUE, FALSE),
col = ifelse(res$padj < 0.1, "padj<0.1", "padj>=0.1"))
df$col = factor(df$col, ordered = FALSE)
df = subset(df, mean != 0)
df = subset(df, !is.na(sig))
# df = subset(df, sig == TRUE)
plot_ly(
df, x = ~log(mean), y = ~lfc,
# hover text:
text = ~paste('Gene ID: ', gene_id, '<br>Normalized mean: ', mean, '<br>log fold change: ', lfc),
color = ~col, colors = c("red", "blue")
) %>%
layout(title = 'MA plot',
xaxis = list(title="Log mean"),
yaxis = list(title="log fold change"))
#option 2: ggplotly
p = ggplot(data = df, aes(x = log(mean), y = lfc, col = col)) +
geom_point(aes(text = paste('Gene ID: ', gene_id, '<br>Normalized mean: ', mean, '<br>log fold change: ', lfc))) +
xlab('Log base mean') +
ylab('Log fold change')
p = ggplotly(p)
p
##======== end of MA plot with plotly ===============
#========= plot count ===============
gene = 'FBgn0025111'
gene_counts = counts(dds[gene, ])
gene_counts
count = counts(dds[gene, ])[1,]
sample = colnames(gene_counts)
group = factor(colData[sample, 'condition'])
df_gene = data.frame(count, sample, group)
df_gene
plot_ly(
type = 'scatter',
df_gene, x = ~group, y = ~count,
color = ~group,
text = ~paste('Sample: ', sample)
) %>%
layout(
title = gene
)
## option 2: ggplotly
p = ggplot(data=df_gene, aes(x=group, y=count, col=group)) +
geom_jitter(aes(text=paste('Sample: ', sample)))
p = ggplotly(p)
p
#========= end of plot count ============
## heatmap of the count matrix
library("pheatmap")
select <- order(rowMeans(counts(dds,normalized=TRUE)),
decreasing=TRUE)[1:20]
nt <- normTransform(dds) # defaults to log2(x+1)
log2.norm.counts <- assay(nt)[select,]
df <- as.data.frame(colData(dds)[,c("condition","type")])
pheatmap(log2.norm.counts, cluster_rows=FALSE, show_rownames=FALSE,
cluster_cols=FALSE, annotation_col=df)
# data
df
log2.norm.counts = as.data.frame(log2.norm.counts)
plot_ly(x = colnames(log2.norm.counts), y = rownames(log2.norm.counts), z = as.matrix(log2.norm.counts),
#key = as.matrix(log2.norm.counts),
type = "heatmap", source = "heatplot") %>%
layout(xaxis = list(title = ""),
yaxis = list(title = ""))
library(DT)
dt_res = as.matrix(res)
datatable(dt_res, filter='top')
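A quick aside on the `ifelse(res$padj < 0.1, ...)` labelling used above: comparing `NA` adjusted p-values yields `NA`, which is why the script later drops rows with `is.na(sig)`. A minimal base-R sketch with hypothetical padj values:

```r
# ifelse() propagates NA from the comparison, so NA padj rows must be
# removed (or na.rm'ed) before counting significant genes, as the
# subset(df, !is.na(sig)) step above does.
padj <- c(0.001, 0.2, NA, 0.05)        # hypothetical adjusted p-values
sig <- ifelse(padj < 0.1, TRUE, FALSE)
print(sig)                              # TRUE FALSE NA TRUE
n_sig <- sum(sig, na.rm = TRUE)
print(n_sig)                            # 2
```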
|
40b77779469bfde10da199d6428879104c9ea223
|
95754178320398752703338517402d7cbf1781b9
|
/KCompetition_GettingStarted.R
|
336331a860cbf1e6e67a1d6dd0960a2653e04c6a
|
[] |
no_license
|
emredjan/15.071x_Voting_Outcomes
|
8566156dac5ec8469707db030084396ced577ea8
|
1cf65273868420ee2d12f2a7d6d139508a1d0508
|
refs/heads/master
| 2021-01-09T20:41:18.127917
| 2016-06-14T17:56:33
| 2016-06-14T17:56:33
| 61,073,681
| 1
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,568
|
r
|
KCompetition_GettingStarted.R
|
# KAGGLE COMPETITION - GETTING STARTED
# This script file is intended to help you get started on the Kaggle platform, and to show you how to make a submission to the competition.
# Let's start by reading the data into R
# Make sure you have downloaded these files from the Kaggle website, and have navigated to the directory where you saved the files on your computer
train = read.csv("train2016.csv")
test = read.csv("test2016.csv")
# We will just create a simple logistic regression model, to predict Party using all other variables in the dataset, except for the user ID:
SimpleMod = glm(Party ~ . -USER_ID, data=train, family=binomial)
# And then make predictions on the test set:
PredTest = predict(SimpleMod, newdata=test, type="response")
threshold = 0.5
PredTestLabels = as.factor(ifelse(PredTest<threshold, "Democrat", "Republican"))
# However, you can submit the file on Kaggle to see how well the model performs. You can make up to 5 submissions per day, so don't hesitate to just upload a solution to see how you did.
# Let's prepare a submission file for Kaggle (for more about this, see the "Evaluation" page on the competition site):
MySubmission = data.frame(USER_ID = test$USER_ID, PREDICTION = PredTestLabels)
write.csv(MySubmission, "SubmissionSimpleLog.csv", row.names=FALSE)
# You should upload the submission "SubmissionSimpleLog.csv" on the Kaggle website to use this as a submission to the competition
# This model was just designed to help you get started - to do well in the competition, you will need to build better models!
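The threshold step above can be illustrated on its own; note that a probability exactly equal to the threshold falls on the "Republican" side because the test is a strict less-than. A small sketch with made-up probabilities:

```r
# Made-up predicted probabilities; 0.5 itself maps to "Republican"
# because the condition is strictly less-than the threshold.
PredTest <- c(0.2, 0.7, 0.5, 0.49)
threshold <- 0.5
labels <- ifelse(PredTest < threshold, "Democrat", "Republican")
print(labels)   # "Democrat" "Republican" "Republican" "Democrat"
```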
|
d08238aace42279afce8dc11dcb6a7d0b14b1373
|
57ad5699cd427042bbf93ae9cd7d54af39788f73
|
/R/prox.isotonic.R
|
5d864a457e402dd2b627c8fa1d388a59132edc84
|
[] |
no_license
|
AlePasquini/apg
|
94f9eef7f233d721c663a4956581963d04d537df
|
8900984a8e7b1a469d2b775d56bf07bd192e252b
|
refs/heads/master
| 2020-04-12T04:44:58.238828
| 2016-06-04T01:44:00
| 2016-06-04T01:44:00
| 162,304,614
| 3
| 0
| null | 2018-12-18T14:59:41
| 2018-12-18T14:59:41
| null |
UTF-8
|
R
| false
| false
| 453
|
r
|
prox.isotonic.R
|
#' Proximal operator of the isotonic constraint
#'
#' Computes the proximal operator of the isotonic constraint, i.e., the
#' projection onto the set of nondecreasing vectors.
#'
#' @param x The input vector
#'
#' @return The projection of \code{x} onto the set of nondecreasing vectors,
#' obtained by solving an isotonic regression.
#'
#' @export
#' @examples prox.isotonic(c(1,3,-2,4,5))
prox.isotonic <- function(x, ...) {
return(pava(x))
}
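`pava()` above is not defined in this file; it presumably comes from an external package such as `Iso`. For illustration only, a minimal self-contained pool-adjacent-violators sketch (not the package implementation):

```r
# Hedged sketch of isotonic regression via pool-adjacent-violators:
# whenever an adjacent pair violates monotonicity, pool the two blocks
# into their weighted mean and step back to re-check.
pava_sketch <- function(y) {
  vals <- y
  wts <- rep(1, length(y))
  i <- 1
  while (i < length(vals)) {
    if (vals[i] > vals[i + 1]) {
      w <- wts[i] + wts[i + 1]
      v <- (wts[i] * vals[i] + wts[i + 1] * vals[i + 1]) / w
      vals <- c(vals[seq_len(i - 1)], v, vals[-seq_len(i + 1)])
      wts  <- c(wts[seq_len(i - 1)], w, wts[-seq_len(i + 1)])
      i <- max(i - 1, 1)
    } else {
      i <- i + 1
    }
  }
  rep(vals, times = wts)
}
print(pava_sketch(c(1, 3, -2, 4, 5)))   # 0.667 0.667 0.667 4 5
```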
|
2d509d14fdc2b510a0cf43cf8b7e94c006d72a4a
|
e5dcc8a6a80a5785d98ff7be57de24aa9690d982
|
/learning_function.R
|
15ea97997ffad5265f4b2d8fad7ecb574934355e
|
[] |
no_license
|
lja9702/No_namedTeam
|
182c13fe41569e79a50b1602b0e83adc4ccb947d
|
71387a10fb93cf5daea4f7e421d79eef1127df9f
|
refs/heads/master
| 2020-03-24T04:16:26.415124
| 2018-08-23T14:03:10
| 2018-08-23T14:03:10
| 142,450,049
| 0
| 0
| null | 2018-08-01T14:42:04
| 2018-07-26T14:10:13
|
R
|
UTF-8
|
R
| false
| false
| 22,069
|
r
|
learning_function.R
|
source("./setup_lib.R", encoding="utf-8")
source("./make_x.R", encoding="utf-8")
########################################################################################## Register functions
# Day/night (주야)
# day_night(path, 0.01, 30, 20, round, 2000)
day_night <- function(path, learning_rate, out_node, hidden_node, round, seed)
{
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
accident <- read.csv(paste(path, file, sep=""))
test <- accident[1:5000, ]
train <- accident[5001:nrow(accident), ]
train.x <- day_night_x(train)
train.y <- as.numeric(train$주야)
test.x <- day_night_x(test)
test.y <- as.numeric(test$주야)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, out_activation="softmax",
num.round=round, array.batch.size=50, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy, eval.data = list(data = test.x, label = test.y))
preds = predict(model, test.x)
pred.label = max.col(t(preds))-1
table(pred.label, test.y)
return(model)
}
# Day of week (요일)
# week(path, 0.01, 30, 20, round, 2000)
week <- function(path, learning_rate, out_node, hidden_node, round, seed)
{
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
accident <- read.csv(paste(path, file, sep=""))
test <- accident[1:5000, ]
train <- accident[5001:nrow(accident), ]
train.x <- week_x(train)
train.y <- as.numeric(train$요일)
test.x <- week_x(test)
test.y <- as.numeric(test$요일)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, out_activation="softmax",
num.round=round, array.batch.size=50, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy, eval.data = list(data = test.x, label = test.y))
preds = predict(model, test.x)
pred.label = max.col(t(preds))-1
table(pred.label, test.y)
return (model)
}
# Law violation (법규위반)
# violation(path, 0.01, 30, 20, round, 2000)
violation <- function(path, learning_rate, out_node, hidden_node, round, seed)
{
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
accident <- read.csv(paste(path, file, sep=""))
test <- accident[1:5000, ]
train <- accident[5001:nrow(accident), ]
train.x <- violation_x(train)
train.y <- as.numeric(train$법규위반)
test.x <- violation_x(test)
test.y <- as.numeric(test$법규위반)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, out_activation="softmax",
num.round=round, array.batch.size=50, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy, eval.data = list(data = test.x, label = test.y))
return (model)
}
# Casualty count (사상자수)
injury_cnt <- function(path, learning_rate, out_node, hidden_node, round, seed)
{
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
accident <- read.csv(paste(path, file, sep=""))
sample <- accident %>% filter(사상자수 != 1) # data excluding rows where the casualty count is 1
sample_one <- accident %>% filter(사상자수 == 1)
sample_one <- sample_one[sample(1:nrow(sample_one),2500),]
sample <- rbind(sample, sample_one)
train.x <- injury_count_x(sample)
train.y <- sample$사상자수
set.seed(seed)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, out_activation="softmax",
num.round=round, array.batch.size=50, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy)
return (model)
}
# Death count (사망자수)
injury_dead_cnt <- function(path, learning_rate, out_node, hidden_node, round, seed)
{
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
accident <- read.csv(paste(path, file, sep=""))
train.x <- injury_dead_cnt_x(accident)
train.y <- accident$사망자수
set.seed(seed)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, out_activation="softmax",
num.round=round, array.batch.size=50, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy)
return (model)
}
# Serious-injury count (중상자수)
injury_mid_cnt <- function(path, learning_rate, out_node, hidden_node, round, seed)
{
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
accident <- read.csv(paste(path, file, sep=""))
train.x <- injury_mid_cnt_x(accident)
train.y <- accident$중상자수
set.seed(seed)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, out_activation="softmax",
num.round=round, array.batch.size=50, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy)
return (model)
}
# Minor-injury count (경상자수)
injury_weak_cnt <- function(path, learning_rate, out_node, hidden_node, round, seed)
{
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
accident <- read.csv(paste(path, file, sep=""))
train.x <- injury_weak_cnt_x(accident)
train.y <- accident$경상자수
set.seed(seed)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, out_activation="softmax",
num.round=round, array.batch.size=50, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy)
return (model)
}
# Reported-injury count (부상신고자수)
injury_call_cnt <- function(path, learning_rate, out_node, hidden_node, round, seed)
{
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
accident <- read.csv(paste(path, file, sep=""))
train.x <- injury_call_cnt_x(accident)
train.y <- accident$부상신고자수
set.seed(seed)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, out_activation="softmax",
num.round=round, array.batch.size=50, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy)
return (model)
}
# Accident type, mid-level category (사고유형_중분류)
# accident_type(path="", learning_rate=0.07, out_node=22, hidden_node=10, round=2000, seed=0)
accident_type <- function(path, learning_rate, out_node, hidden_node, round, seed) {
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
accident <- read.csv(paste(path, file, sep=""))
# Drop the party-1 / party-2 sub-category columns (not needed)
accident <- accident[,-21]
accident <- accident[,-22]
# Jina_ deep learning on death / casualty / serious / minor / reported-injury counts by accident type
accident.temp <- cbind(accident$사망자수, accident$중상자수, accident$경상자수, accident$부상신고자수, accident$당사자종별_1당, accident$당사자종별_2당, accident$법규위반)
colnames(accident.temp) <- c("사망자수", "중상자수", "경상자수", "부상신고자수", "당사자종별_1당", "당사자종별_2당", "법규위반")
accident.temp <- as.data.frame(accident.temp)
accident.temp$사망자수 <- as.numeric(accident.temp$사망자수)
accident.temp$중상자수 <- as.numeric(accident.temp$중상자수)
accident.temp$경상자수 <- as.numeric(accident.temp$경상자수)
accident.temp$부상신고자수 <- as.numeric(accident.temp$부상신고자수)
sagou.temp <- as.data.frame(accident$사고유형_중분류)
colnames(sagou.temp) <- c("사고유형_중분류")
accident.scale <- cbind(accident.temp, sagou.temp)
accident.scale[, 8] <- as.numeric(accident.scale[, 8])
acsample <- sample(1:nrow(accident.scale), size = round(0.2 * nrow(accident.scale)))
test.x <- data.matrix(accident.scale[acsample, 1: 7])
test.y <- accident.scale[acsample, 8]
train.x <- data.matrix(accident.scale[-acsample, 1:7])
train.y <- accident.scale[-acsample, 8]
# ~41% accuracy
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, out_activation="softmax", num.round=round, array.batch.size=200, learning.rate=learning_rate, momentum=0.75, eval.metric=mx.metric.accuracy)
return(model)
}
# 발생시도
# sido(path, 0.01, 17, 100, 400, 4444)
sido <- function(path, learning_rate, out_node, hidden_node, round, seed) {
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
acc <- read.csv(paste(path, file, sep=""))
acc$발생지시군구 <- as.factor(acc$발생지시군구)
sample <- acc[1:20000, ]
test <- acc[20001:nrow(acc), ]
train.x <- sido_x(sample)
train.y <- as.numeric(sample$발생지시도)
test.x <- sido_x(test)
test.y <- as.numeric(test$발생지시도)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, activation="relu", out_activation="softmax",
num.round=round, array.batch.size=100, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy, eval.data=list(data = test.x, label = test.y))
return (model)
}
# City/county of occurrence (발생지시군구)
# bsigungu(path, 0.01, 209, 1000, 400, 4444)
bsigungu <- function(path, learning_rate, out_node, hidden_node, round, seed) {
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
acc <- read.csv(paste(path, file, sep=""))
acc$발생지시군구 <- as.factor(acc$발생지시군구)
sample <- acc[1:20000, ]
test <- acc[20001:nrow(acc), ]
train.x <- sigungu_x(sample)
train.y <- as.numeric(sample$발생지시군구)
test.x <- sigungu_x(test)
test.y <- as.numeric(test$발생지시군구)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, activation="relu", out_activation="softmax",
num.round=round, array.batch.size=100, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy, eval.data=list(data = test.x, label = test.y))
return (model)
}
# Road type, major category (도로형태_대분류)
# main_road_type(path, 0.01, 9, 1000, 400, 4444)
main_road_type <- function(path, learning_rate, out_node, hidden_node, round, seed) {
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
acc <- read.csv(paste(path, file, sep=""))
acc$발생지시군구 <- as.factor(acc$발생지시군구)
sample <- acc[1:20000, ]
test <- acc[20001:nrow(acc), ]
train.x <- main_road_type_x(sample)
train.y <- as.numeric(sample$도로형태_대분류)
test.x <- main_road_type_x(test)
test.y <- as.numeric(test$도로형태_대분류)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, activation="relu", out_activation="softmax",
num.round=round, array.batch.size=100, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy, eval.data=list(data = test.x, label = test.y))
return (model)
}
# Road type (도로형태)
# detail_road_type(path, 0.01, 16, 1000, 400, 4444)
detail_road_type <- function(path, learning_rate, out_node, hidden_node, round, seed) {
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
acc <- read.csv(paste(path, file, sep=""))
acc$발생지시군구 <- as.factor(acc$발생지시군구)
sample <- acc[1:20000, ]
test <- acc[20001:nrow(acc), ]
train.x <- detail_road_type_x(sample)
train.y <- as.numeric(sample$도로형태)
test.x <- detail_road_type_x(test)
test.y <- as.numeric(test$도로형태)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, activation="relu", out_activation="softmax",
num.round=round, array.batch.size=100, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy, eval.data=list(data = test.x, label = test.y))
return (model)
}
# Party type 1, major category (당사자종별_1당_대분류)
attacker <- function(path, learning_rate, out_node, hidden_node, round, seed) {
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
acc <- read.csv(paste(path, file, sep=""))
acc$발생지시군구 <- as.factor(acc$발생지시군구)
sample <- acc[1:20000, ]
test <- acc[20001:nrow(acc), ]
train.x <- attacker_x(sample)
train.y <- as.numeric(sample$당사자종별_1당_대분류)
test.x <- attacker_x(test)
test.y <- as.numeric(test$당사자종별_1당_대분류)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, activation="relu", out_activation="softmax",
num.round=round, array.batch.size=100, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy, eval.data=list(data = test.x, label = test.y))
return (model)
}
# Party type 2, major category (당사자종별_2당_대분류)
victim <- function(path, learning_rate, out_node, hidden_node, round, seed) {
file <- "Kor_Train_교통사망사고정보(12.1~17.6).csv"
acc <- read.csv(paste(path, file, sep=""))
acc$발생지시군구 <- as.factor(acc$발생지시군구)
sample <- acc[1:20000, ]
test <- acc[20001:nrow(acc), ]
train.x <- victim_x(sample)
train.y <- as.numeric(sample$당사자종별_2당_대분류)
test.x <- victim_x(test)
test.y <- as.numeric(test$당사자종별_2당_대분류)
mx.set.seed(seed)
model <- mx.mlp(train.x, train.y, hidden_node=hidden_node, out_node=out_node, activation="relu", out_activation="softmax",
num.round=round, array.batch.size=100, learning.rate=learning_rate, momentum=0.9,
eval.metric=mx.metric.accuracy, eval.data=list(data = test.x, label = test.y))
return (model)
}
# Average traffic speed by Seoul district
speed_subset_data <- function(path) {
accident <- read.csv(paste(path, '교통사망사고정보/Kor_Train_교통사망사고정보(12.1~17.6).csv', sep=""))
road <- readxl::read_xlsx(paste(path, '보조데이터/03.서울시 도로 링크별 교통 사고발생 수/서울시 도로링크별 교통사고(2015~2017).xlsx', sep=""))
speed_path <- paste(path, "보조데이터/01.서울시 차량 통행 속도", sep="")
# Jingi's code /
file_csv_01 <- list.files(paste(speed_path, "/", sep=""), pattern="*.csv")
file_CSV_01 <- list.files(paste(speed_path, "/", sep=""), pattern="*.CSV")
file_csv_cnt_01 <- length(file_csv_01)
file_CSV_cnt_01 <- length(file_CSV_01)
i=1
for(j in 1:file_csv_cnt_01){
tmp = paste("tmp",i,sep="")
assign(tmp, read.csv(paste(speed_path,"/",file_csv_01[j],sep="")))
i=i+1
}
for(j in 1:file_CSV_cnt_01){
tmp = paste("tmp",i,sep="")
assign(tmp, read.csv(paste(speed_path,"/",file_CSV_01[j],sep="")))
i=i+1
}
tmp25 <- tmp25[,-33]
tmp26 <- tmp26[,-33]
tmp27 <- tmp27[,-33]
tmp29 <- tmp29[,-33]
tmp35 <- tmp35[,-33]
speed <- rbind(tmp1,tmp2,tmp3,tmp4,tmp5,tmp6,tmp7,tmp8,tmp9,tmp10,
tmp11,tmp12,tmp13,tmp14,tmp15,tmp16,tmp17,tmp18,tmp19,tmp20,
tmp21,tmp22,tmp23,tmp24,tmp25,tmp26,tmp27,tmp28,tmp29,tmp30,
tmp31,tmp32,tmp33,tmp34,tmp35,tmp36,tmp37,tmp38,tmp39,tmp40,
tmp41)
rm(tmp1,tmp2,tmp3,tmp4,tmp5,tmp6,tmp7,tmp8,tmp9,tmp10,
tmp11,tmp12,tmp13,tmp14,tmp15,tmp16,tmp17,tmp18,tmp19,tmp20,
tmp21,tmp22,tmp23,tmp24,tmp25,tmp26,tmp27,tmp28,tmp29,tmp30,
tmp31,tmp32,tmp33,tmp34,tmp35,tmp36,tmp37,tmp38,tmp39,tmp40,
tmp41)
rm(file_csv_01)
rm(file_CSV_01)
rm(file_csv_cnt_01)
rm(file_CSV_cnt_01)
rm(tmp)
rm(i)
rm(j)
# /end of Jingi's code
# Preprocessing
speed <- rename(speed, "링크ID"="링크아이디")
speed$링크ID <- as.character(speed$링크ID)
speed$연도 <- substr(speed$일자, 1, 4)
speed$X01시 <- as.numeric(speed$X01시)
# Handle missing values (impute with the overall mean)
for (i in c(1:24))
{
colname <- sprintf("X%02d시", i)
tmp <- speed %>% filter(!is.na(speed[,colname]))
speed[,colname] <- ifelse(is.na(speed[,colname]), mean(tmp[,colname]), speed[,colname])
}
rm(tmp)
# Traffic speed
ss<-speed %>% group_by(링크ID) %>% summarise(X01시=mean(X01시),X02시=mean(X02시),X03시=mean(X03시),X04시=mean(X04시),X05시=mean(X05시),X06시=mean(X06시),
X07시=mean(X07시),X08시=mean(X08시),X09시=mean(X09시),X10시=mean(X10시),X11시=mean(X11시),X12시=mean(X12시),
X13시=mean(X13시),X14시=mean(X14시),X15시=mean(X15시),X16시=mean(X16시),X17시=mean(X17시),X18시=mean(X18시),
X19시=mean(X19시),X20시=mean(X20시),X21시=mean(X21시),X22시=mean(X22시),X23시=mean(X23시),X24시=mean(X24시))
#View(ss)
r <- road
r$사고건수 = as.numeric(r$사고건수)
r$사망자수 = as.numeric(r$사망자수)
r$중상자수 = as.numeric(r$중상자수)
r$경상자수 = as.numeric(r$경상자수)
r$부상신고자수 = as.numeric(r$부상신고자수)
ss_j <- left_join(ss, r, by=c("링크ID"))
ss_j <- ss_j %>% filter(!is.na(시군구))
ss_j$평균통행속도 <- (ss_j$X01시+ss_j$X02시+ss_j$X03시+ss_j$X04시+ss_j$X05시+ss_j$X06시+ss_j$X07시+ss_j$X08시+ss_j$X09시+ss_j$X10시+ss_j$X11시+ss_j$X12시+ss_j$X13시+ss_j$X14시+ss_j$X15시+ss_j$X16시+
ss_j$X17시+ss_j$X18시+ss_j$X19시+ss_j$X20시+ss_j$X21시+ss_j$X22시+ss_j$X23시+ss_j$X24시)/24
#View(ss_j)
#View(head(traffic, 20))
#View(head(speed, 20))
#View(head(road, 20))
# Average speed and accident count by city/county
avg_by_sigungu <- ss_j %>% group_by(시군구) %>% summarise(평균통행속도=mean(평균통행속도),사고건수=mean(사고건수), 사망자수=mean(사망자수), 중상자수=mean(중상자수), 경상자수=mean(경상자수), 부상신고자수=mean(부상신고자수))
#View(avg_by_sigungu)
avg_by_sigungu$사상자수 <- avg_by_sigungu$사망자수+avg_by_sigungu$중상자수+avg_by_sigungu$경상자수+avg_by_sigungu$부상신고자수
avg_by_sigungu$통행속도대비사상자수 <- avg_by_sigungu$사상자수/avg_by_sigungu$평균통행속도
# Join back onto the original data
avg_speed <- mean(avg_by_sigungu$평균통행속도)
avg_cnt <- mean(avg_by_sigungu$사고건수)
avg_hurt <- mean(avg_by_sigungu$사상자수)
avg_by_sigungu <- rename(avg_by_sigungu, "발생지시군구"="시군구")
avg_by_sigungu <- rename(avg_by_sigungu, "시군구사상자수"="사상자수")
avg_by_sigungu <- avg_by_sigungu %>% select(발생지시군구, 평균통행속도, 사고건수, 시군구사상자수)
return (avg_by_sigungu)
#accident <- left_join(accident, avg_by_sigungu, by=c("발생지시군구"))
# Just fill NAs with the mean
#accident$평균통행속도 <- ifelse(is.na(accident$평균통행속도), avg_speed, accident$평균통행속도)
#accident$사고건수 <- ifelse(is.na(accident$사고건수), avg_cnt, accident$사고건수)
#accident$시군구사상자수 <- ifelse(is.na(accident$시군구사상자수), avg_hurt, accident$시군구사상자수)
}
learning_all_models <- function(path) {
# Generate auxiliary data
#speed_subset <- speed_subset_data(path)
# Make model
acc_path <- paste(path, "교통사망사고정보/", sep="")
day_night_model <- day_night(acc_path,learning_rate = 0.01,out_node = 30,hidden_node = 20,round = 1000,seed = 2000)
week_model <- week(acc_path,learning_rate = 0.01,out_node = 30,hidden_node = 20,round = 1000,seed = 2000)
violation_model <- violation(acc_path,learning_rate = 0.01,out_node = 30,hidden_node = 20,round = 1000,seed = 2000)
injury_cnt_model <- injury_cnt(acc_path,learning_rate = 0.001,out_node = 50,hidden_node = 100,round = 1000,seed = 4444)
injury_dead_cnt_model <- injury_dead_cnt(acc_path,learning_rate = 0.001,out_node = 50,hidden_node = 100,round = 1000,seed = 4444)
injury_mid_cnt_model <- injury_mid_cnt(acc_path,learning_rate = 0.001,out_node = 50,hidden_node = 100,round = 1000,seed = 4444)
injury_weak_cnt_model <- injury_weak_cnt(acc_path,learning_rate = 0.001,out_node = 50,hidden_node = 100,round = 1000,seed = 4444)
injury_call_cnt_model <- injury_call_cnt(acc_path,learning_rate = 0.001,out_node = 50,hidden_node = 100,round = 1000,seed = 4444)
accident_type_model <- accident_type(acc_path,learning_rate = 0.07,out_node = 22,hidden_node = 100,round = 1000,seed = 4444)
sido_model <- sido(acc_path,learning_rate = 0.01,out_node = 17,hidden_node = 100,round = 1000,seed = 4444)
sigungu_model <- bsigungu(acc_path,learning_rate = 0.01,out_node = 209,hidden_node = 1000,round = 1000,seed = 4444)
main_road_type_model <- main_road_type(acc_path,learning_rate = 0.01,out_node = 9,hidden_node = 1000,round = 1000,seed = 4444)
detail_road_type_model <- detail_road_type(acc_path,learning_rate = 0.01,out_node = 16,hidden_node = 1000,round = 1000,seed = 4444)
attacker_model <- attacker(acc_path,learning_rate = 0.01,out_node = 30,hidden_node = 20,round = 1000,seed = 2000)
victim_model <- victim(acc_path,learning_rate = 0.01,out_node = 30,hidden_node = 20,round = 1000,seed = 2000)
# Save models
dir.create("./models", showWarnings=FALSE)
saveRDS(day_night_model, "./models/day_night_model.rds")
saveRDS(week_model, "./models/week_model.rds")
saveRDS(violation_model, "./models/violation_model.rds")
saveRDS(injury_cnt_model, "./models/injury_cnt_model.rds")
saveRDS(injury_dead_cnt_model, "./models/injury_dead_cnt_model.rds")
saveRDS(injury_mid_cnt_model, "./models/injury_mid_cnt_model.rds")
saveRDS(injury_weak_cnt_model, "./models/injury_weak_cnt_model.rds")
saveRDS(injury_call_cnt_model, "./models/injury_call_cnt_model.rds")
saveRDS(accident_type_model, "./models/accident_type_model.rds")
saveRDS(sido_model, "./models/sido_model.rds")
saveRDS(sigungu_model, "./models/sigungu_model.rds")
saveRDS(main_road_type_model, "./models/main_road_type_model.rds")
saveRDS(detail_road_type_model, "./models/detail_road_type_model.rds")
saveRDS(attacker_model, "./models/attacker_model.rds")
saveRDS(victim_model, "./models/victim_model.rds")
}
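The per-column imputation loop inside `speed_subset_data()` reduces to a one-column sketch (hypothetical data): missing speeds are replaced by the mean of the observed values in that column.

```r
# Hypothetical one-column version of the NA handling in speed_subset_data():
# compute the mean over non-missing rows, then fill the NAs with it.
speed <- data.frame(X01 = c(10, NA, 30, NA, 50))
observed <- speed$X01[!is.na(speed$X01)]
speed$X01 <- ifelse(is.na(speed$X01), mean(observed), speed$X01)
print(speed$X01)   # 10 30 30 30 50
```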
|
ffa908219427441d590b76580cfcbca07c76d803
|
7a3e07ba0e54293cd2e06bdce8d4d9f497a07b10
|
/code/make_1h_stripchart.R
|
0af70ae7f27fa186d6de953ba085ef563bebb38e
|
[] |
no_license
|
BAAQMD/SFBA-COVID-air-quality
|
9742ebd76f6261eb8378ae998fd227921be8a606
|
46e395646c62a0db5afd42e3d5da4726fa78127e
|
refs/heads/master
| 2022-07-16T13:34:58.552026
| 2020-05-18T18:42:52
| 2020-05-18T18:42:52
| 263,976,229
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,071
|
r
|
make_1h_stripchart.R
|
source(here::here("code", "chart-helpers.R"))
source(here::here("code", "tidy_1h_data.R"))
source(here::here("code", "with_epoch.R"))
make_1h_stripchart <- function (
input_data,
value_var,
value_limits,
...
) {
chart_data <-
input_data %>%
tidy_1h_data(
value_vars = value_var,
na.rm = TRUE) %>%
with_epoch(
na.rm = TRUE)
#'
#' NOTE: `value_limits` is used in `coord_cartesian(ylim = .)`, instead
  #' of `scale_y_continuous(limits = ., ...)`. That will ensure that,
#' even though we are clipping the viewport, we aren't dropping the
#' data --- so things like `geom_smooth()` will be using the full
#' `chart_data` as the basis for smoothing.
#'
#'
#' Let's show `dttm_tz` (the timezone) on the x-axis.
#'
chart_x_axis_title <-
stringr::str_remove(
lubridate::tz(pull(chart_data, dttm)),
fixed("Etc/"))
#'
#' TODO: show vertical lines on Sundays instead of Mondays?
#'
chart_x_scale <-
scale_x_datetime(
name = chart_x_axis_title,
expand = expansion(0, 0),
date_labels = "%d\n%b",
date_breaks = "2 weeks",
date_minor_breaks = "1 week")
chart_y_scale <-
scale_y_continuous(
name = NULL,
expand = expansion(mult = 0, add = 0))
chart_object <-
ggplot(chart_data) +
aes(x = dttm, y = value, group = SiteName) +
aes(color = epoch) +
geom_hline(
size = I(0.5),
yintercept = 0) +
geom_point(
position = position_jitter(height = 0.5),
show.legend = FALSE,
size = I(0.3),
alpha = I(0.2)) +
chart_color_scale +
chart_faceting +
chart_x_scale +
chart_y_scale +
chart_guides +
chart_theme +
chart_description +
geom_smooth(
aes(group = str_c(epoch, SiteName)),
method = "lm",
se = FALSE,
size = I(0.75),
show.legend = FALSE,
formula = y ~ x + 0) +
coord_cartesian(
ylim = value_limits,
clip = FALSE) # don't actually drop any data, as `limits` would do
return(chart_object)
}
|
bf40a9d0b7f42e5dce11294065b3157a8fea412b
|
309ae3e447336af016b6df7b3b34845280e4fe9d
|
/eric.R
|
e045d3e9f09a64b83698345d76d6667b1e795043
|
[] |
no_license
|
Vexeff/family-cohort
|
91322bd09af94d2979b6e9f6d8df4cdd283855a7
|
e2282bffcd7ca7578b043e1b40a56f468ef58e03
|
refs/heads/master
| 2022-09-27T09:18:13.713717
| 2018-11-11T18:28:01
| 2018-11-11T18:28:01
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 59
|
r
|
eric.R
|
# HI MY NAME IS ERIC
# THIS IS MY FIRST R SCRIPT
3 + 4
|
549b6ed9c444207aa296eb985217e8e4cb37cafd
|
ff6b811e0e352859468cb0e2098b88024fcfc630
|
/ProgrammingAssignment1/corr.R
|
efba5606ce2bff00607fc0620548bf1e2b20ce47
|
[] |
no_license
|
sagospe/R-Programming
|
99f8eede50bbb616abe5133e1933df43a4aa812e
|
6321eac74e04286b5bfaee5384ba8ef87e7393d9
|
refs/heads/master
| 2021-01-11T01:04:21.796582
| 2016-10-13T21:00:44
| 2016-10-13T21:00:44
| 69,615,226
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 971
|
r
|
corr.R
|
corr <- function(directory, threshold = 0) {
## 'directory' is a character vector of length 1 indicating
## the location of the CSV files
## 'threshold' is a numeric vector of length 1 indicating the
## number of completely observed observations (on all
## variables) required to compute the correlation between
## nitrate and sulfate; the default is 0
## Return a numeric vector of correlations
## NOTE: Do not round the result!
list_files <- list.files(directory, full.names = TRUE)
df <- data.frame()
df_completed <- complete(directory)
df_filtered <- df_completed[which(df_completed$nobs > threshold), ]
result <- vector()
for (i in df_filtered$id) {
df <- read.csv(list_files[i], header = TRUE)
result <- c(result, cor(df$nitrate, df$sulfate, use = "complete.obs"))
}
result
}
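The `use = "complete.obs"` argument above restricts the correlation to rows where both pollutants are observed. A tiny sketch with hypothetical readings:

```r
# Only rows complete on both variables enter the correlation:
# here rows 1, 2 and 5 remain, and sulfate = 2 * nitrate on those rows,
# so the correlation is exactly 1.
nitrate <- c(1, 2, NA, 4, 5)
sulfate <- c(2, 4, 6, NA, 10)
r <- cor(nitrate, sulfate, use = "complete.obs")
print(r)   # 1
```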
|
db50c8d290175dd6f2af06644bebb628dfdfe5ba
|
7469bb7562c649c9799202ac5be0e0857d05a506
|
/man/PlotFreq.Rd
|
c0f1c99c38f9113fa759c5007f09ef50c4f68fd7
|
[] |
no_license
|
gilles-guillot/Geneland
|
8798447ec1936a26a86d5be24fe9ec4668935ba8
|
8e5873cc130ef5874c98b1d5766f54230e4c547a
|
refs/heads/master
| 2021-06-03T16:18:25.611534
| 2020-02-20T11:57:35
| 2020-02-20T11:57:35
| 234,557,841
| 6
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 1,033
|
rd
|
PlotFreq.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/PlotFreq.R
\name{PlotFreq}
\alias{PlotFreq}
\title{PlotFreq}
\usage{
PlotFreq(path.mcmc, ipop, iloc, iall, printit = FALSE, path)
}
\arguments{
\item{path.mcmc}{Path to output files directory}
\item{ipop}{Integer number: index of population}
\item{iloc}{Integer number: index of locus}
\item{iall}{Integer number: index of allele. If \code{MCMC}
was launched with option \code{filter.null.alleles=TRUE}, an extra
fictive allele standing for putative null alleles is created. Its
estimated frequency can also be plotted. If there are, say, 5 alleles
at a locus, the estimated frequency of null alleles can be seen by
invoking \code{PlotFreq} with \code{iall=6}.}
\item{printit}{Logical: if TRUE, figures are also printed}
\item{path}{Character: path to the directory where figures
should be printed}
}
\description{
Plot the trace of allele frequencies in the present-time population
for allele \code{iall} of locus \code{iloc},
and optionally print it.
}
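A minimal usage sketch based on the documentation above; the `path.mcmc` and `path` values are placeholders, not real paths:

```r
# Plot (and print) the estimated null-allele frequency at a 5-allele locus:
# with filter.null.alleles=TRUE the fictive null allele is index 6.
PlotFreq(path.mcmc = "./mcmc/", ipop = 1, iloc = 3, iall = 6,
         printit = TRUE, path = "./figures/")
```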
|
773cfbda110a61121ba7ddf651984716cb5eecf0
|
27003deb72c65e565c084d1f077db8a6c04b0e45
|
/ferramentas_paraRevisar/log2_transf.r
|
179d479f431f0ca45a7ea684cf2c809777d629a7
|
[] |
no_license
|
LABIS-SYS/GALAXY_OLD
|
57e46436ed16daf747be1a90477d177c4f87be1d
|
7315e239c198117ac126b0ab15e0afad9e542b34
|
refs/heads/master
| 2021-05-15T11:21:59.498310
| 2017-10-25T19:36:30
| 2017-10-25T19:36:30
| 108,316,021
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,116
|
r
|
log2_transf.r
|
#!/usr/bin/Rscript
# Import some required libraries
library('getopt');
library('scatterplot3d');
# Make an option specification, with the following arguments:
# 1. long flag
# 2. short flag
# 3. argument type: 0: No argument, 1: required, 2: optional
# 4. target data type
option_specification = matrix(c(
'infilename', 'i', 1, 'character',
'outfilename', 'o', 2, 'character'
), byrow=TRUE, ncol=4);
# Parse options
options = getopt(option_specification);
normalization = function(infilename, head) {
## Read the data table and create a vector with the columns of interest
table = read.table(infilename, header=TRUE, sep="\t", dec=".")
head_input = colnames(table)
coluns = grep(head, head_input, perl=TRUE, value=TRUE)
data = as.matrix(table[,coluns])
## Makes the log2 transformation
l2 = log2(data)
l2 = signif(l2, 7)
col_names = c("Intensity.txt", "Intensity_log2.txt") # second file name is illustrative
nor_list = list(data, l2)
# Create one output file per data set (raw and log2-transformed columns)
for (i in seq_along(nor_list)) {
outfilename = col_names[i]
write.table(nor_list[[i]], outfilename, dec=".", sep="\t", row.names=FALSE, col.names=TRUE)
}
}
|
067a38282256eeaa04d725c932edc8fb9f3df402
|
998bdef38eeb195434b444c275eb875932304019
|
/dec11 index testing dataset.R
|
e0ad02917bc037b0247b5fd9b7bf9c8a0709e01a
|
[] |
no_license
|
shazadahmed10/code
|
35a4697bd5824c0e21c866c89271766cb319ad4e
|
ef8b488cd05547be1b951147254bfedd5f53b2ad
|
refs/heads/master
| 2021-11-24T17:46:32.661522
| 2021-10-29T21:20:45
| 2021-10-29T21:20:45
| 123,955,897
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,723
|
r
|
dec11 index testing dataset.R
|
library ("tidyverse")
library ("openxlsx")
master <- read_tsv ("C:/Users/lqa9/Desktop/19Q4/Clean/MER_Structured_Datasets_PSNU_IM_FY17-20_20191220_v2_2.txt")
master2 <- master %>%
select(-c(region, regionuid, operatingunituid, countryname, mech_code, mech_name, pre_rgnlztn_hq_mech_code,
prime_partner_duns, award_number, categoryoptioncomboname, snu1uid, psnuuid)) %>%
filter(indicator %in% c("HTS_TST_POS", "HTS_INDEX", "HTS_INDEX_FAC", "HTS_INDEX_COM", "HTS_INDEX_KNOWNPOS",
"HTS_INDEX_NEWNEG", "HTS_INDEX_NEWPOS", "HTS_SELF", "TX_NEW", "HTS_TST")) %>%
mutate(ou_type =
if_else(operatingunit %in% c("Democratic Republic of the Congo","Lesotho", "Malawi", "Nigeria",
"South Sudan", "Uganda", "Ukraine", "Vietnam", "Zambia"), "ScaleFidelity",
if_else(operatingunit %in% c("Burundi", "Eswatini", "Ethiopia", "Kenya", "Namibia",
"Rwanda", "Zimbabwe"), "Sustain",
if_else(operatingunit %in% c("Angola", "Botswana", "Cameroon", "Dominican Republic",
"Haiti", "Mozambique", "South Africa", "Tanzania"), "Reboot",
"N/A"))))
tst <- filter(master2, indicator == "HTS_TST" & modality %in% c("Index", "IndexMod"))
master3 <- filter(master2, indicator != "HTS_TST")
master4 <- bind_rows(master3, tst)
View(count(master4, ou_type, indicator, modality))
#write_excel_csv(master2, "C:/Users/lqa9/Desktop/19Q4/IndexCooP.csv")
write_tsv(master4, "C:/Users/lqa9/Desktop/CoOP/Index Testing/IndexCooP_010820.txt")
|
1000c4f1397c16c31d1da9789f48ab17591cdd88
|
3f41dcde4498fcf47a5f8314de6086ffec3dd082
|
/man/makeplot.pseudo.ess.Rd
|
77f8c8ee66a802775aad51201bee11749a74e93e
|
[] |
no_license
|
arborworkflows/RWTY
|
6f961f76b69776d9c752be0483416dec7448661b
|
fbf5b695a1c8d16f7532123fb86715152f430b06
|
refs/heads/master
| 2020-12-31T02:01:50.609590
| 2016-08-17T22:16:18
| 2016-08-17T22:16:18
| 65,751,392
| 0
| 1
| null | 2016-08-15T17:31:51
| 2016-08-15T17:31:51
| null |
UTF-8
|
R
| false
| true
| 1,417
|
rd
|
makeplot.pseudo.ess.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/makeplot.pseudo.ess.R
\name{makeplot.pseudo.ess}
\alias{makeplot.pseudo.ess}
\title{Plot the pseudo ESS of tree topologies from MCMC chains.}
\usage{
makeplot.pseudo.ess(chains, burnin = 0, n = 20)
}
\arguments{
\item{chains}{A list of rwty.trees objects.}
\item{burnin}{The number of trees to eliminate as burnin}
\item{n}{The number of replicate analyses to do}
}
\value{
pseudo.ess.plot A ggplot2 plot object, in which each chain is represented by a point
showing the median pseudo ESS from the n replicates, with
whiskers representing the upper and lower 95\% intervals of the n replicates.
}
\description{
This function takes a list of rwty.trees objects, and plots the
pseudo ESS of the tree topologies from each chain, after removing burnin.
Each calculation is repeated n times, where in each replicate a random
tree from the chain is chosen as a 'focal' tree. The calculation works
by calculating the path distance of each tree in the chain
from the focal tree, and calculating the ESS of the resulting vector
of phylogenetic distances using the effectiveSize function from the
coda package. NB this function requires the calculation of many
tree distances, so can take some time.
}
\examples{
\dontrun{
data(fungus)
makeplot.pseudo.ess(fungus, burnin = 20, n = 10)
}
}
\keyword{ESS,}
\keyword{distance}
\keyword{path}
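The description above can be sketched in a few lines of R. This is an illustrative reimplementation of the idea (random focal tree, path distances, ESS via coda), not the package's actual code; `trees` is assumed to be an `ape`-style multiPhylo object with burnin already removed:

```r
library(phangorn)  # path.dist() for topological path distances
library(coda)      # effectiveSize() for the ESS estimate

# One pseudo-ESS replicate: pick a random focal tree, compute the path
# distance from every tree in the chain to it, and estimate the ESS of
# that vector of distances.
pseudo_ess_once <- function(trees) {
  focal <- trees[[sample(length(trees), 1)]]
  dists <- vapply(trees, function(tr) path.dist(tr, focal), numeric(1))
  effectiveSize(mcmc(dists))
}

# Median over n replicates, mirroring makeplot.pseudo.ess(..., n = 20)
pseudo_ess <- function(trees, n = 20) {
  median(replicate(n, pseudo_ess_once(trees)))
}
```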
|
2e91835be7d6262a70b348bcc661fe7e3c06d77a
|
77572ab0628f675213204e505599a068375da215
|
/R/germline_gwas_server.R
|
86e5cc6dc3ab3dbf5a583bb36fafeb331808996e
|
[
"MIT"
] |
permissive
|
CRI-iAtlas/iatlas-app
|
a61d408e504b00126796b9b18132462c5da28355
|
500c31d11dd60110ca70bdc019b599286f695ed5
|
refs/heads/staging
| 2023-08-23T11:09:16.183823
| 2023-03-20T21:57:41
| 2023-03-20T21:57:41
| 236,083,844
| 10
| 3
|
NOASSERTION
| 2023-03-21T21:27:26
| 2020-01-24T21:07:54
|
R
|
UTF-8
|
R
| false
| false
| 8,793
|
r
|
germline_gwas_server.R
|
germline_gwas_server <- function(id, cohort_obj){
shiny::moduleServer(
id,
function(input, output, session) {
ns <- session$ns
if(!dir.exists("tracks"))
dir.create("tracks")
addResourcePath("tracks", "tracks")
gwas_data <- reactive({
iatlasGraphQLClient::query_germline_gwas_results(datasets = "TCGA")
})
immune_feat <- reactive({
iatlas.modules::create_nested_named_list(
gwas_data(),
names_col1 = "feature_germline_category",
names_col2 = "feature_display",
values_col = "feature_name"
)
})
shiny::updateSelectizeInput(session, 'immunefeature',
choices = immune_feat(),
selected = c("Wolf_MHC2_21978456", "Bindea_Th17_cells"), #default to exclude that, as suggested by manuscript authors
server = TRUE)
snp_options <- reactive({
shiny::req(gwas_data())
gwas_data() %>% dplyr::filter(!is.na(snp_rsid)) %>% dplyr::pull(snp_rsid)
})
shiny::updateSelectizeInput(session, 'snp_int',
choices = c("", snp_options()),
selected = "",
server = TRUE)
subset_gwas <- shiny::reactive({
if(is.null(input$immunefeature)){
gwas_data() %>% format_gwas_df()
}else{
if(input$feature_action == "Exclude") gwas_data() %>% dplyr::filter(!(feature_name %in% input$immunefeature)) %>% format_gwas_df()
else gwas_data() %>% dplyr::filter(feature_name %in% input$immunefeature) %>% format_gwas_df()
}
})
trackname <- shiny::reactive({
if(is.null(input$immunefeature)) "GWAS"
else paste("GWAS -",
input$feature_action,
paste(gwas_data() %>% dplyr::filter(feature_name %in% input$immunefeature) %>% purrr::pluck("feature_display") %>% unique(),
collapse = ", "))
})
yaxis <- shiny::reactive({
c(min(-log10(subset_gwas()$P.VALUE)-1),
max(-log10(subset_gwas()$P.VALUE)+1)
)
})
output$igv_plot <- igvShiny::renderIgvShiny({
genomeOptions <- igvShiny::parseAndValidateGenomeSpec(genomeName="hg19", initialLocus="all")
igvShiny::igvShiny(genomeOptions,
displayMode="SQUISHED")
})
shiny::observeEvent(input$igvReady, {
containerID <- input$igvReady
igvShiny::loadGwasTrack(session, id=session$ns("igv_plot"),
trackName=trackname(),
tbl=subset_gwas(),
ymin = yaxis()[1],
ymax = yaxis()[2],
deleteTracksOfSameName=FALSE)
})
shiny::observeEvent(input$addGwasTrackButton, {
igvShiny::loadGwasTrack(session, id=session$ns("igv_plot"),
trackName=trackname(),
tbl=subset_gwas(),
ymin = yaxis()[1],
ymax = yaxis()[2],
deleteTracksOfSameName=FALSE)
})
#adding interactivity to select a SNP from the plot or from the dropdown menu
clicked_snp <- shiny::reactiveValues(ev=NULL)
shiny::observeEvent(input$trackClick,{
x <- input$trackClick
#we need to discover which track was clicked
if(x[1] == "SNP"){
clicked_snp$ev <- x[2]
shiny::showModal(shiny::modalDialog(create_snp_popup_tbl(x), easyClose=TRUE))
}
})
snp_of_int <- shiny::reactiveValues(ev="")
shiny::observeEvent(clicked_snp$ev,{
snp_of_int$ev <- clicked_snp$ev
})
shiny::observeEvent(input$snp_int,{ #search for selected SNP
igvShiny::showGenomicRegion(session, id=session$ns("igv_plot"), input$snp_int) #update region on IGV plot
snp_of_int$ev <- input$snp_int
})
selected_snp <- shiny::reactive({
shiny::validate(
shiny::need(!is.null(snp_of_int$ev),
"Click manhattan plot to select a SNP."))
gwas_data() %>%
dplyr::filter(snp_rsid == snp_of_int$ev) %>%
dplyr::mutate(nlog = round(-log10(p_value), 2)) %>%
dplyr::select(snp_rsid, snp_name, snp_chr, snp_bp, feature_display, nlog)
})
output$links <- renderUI({
shiny::validate(
shiny::need(selected_snp()$snp_rsid %in% gwas_data()$snp_rsid, "Select SNP")
)
get_snp_links(selected_snp()$snp_rsid[1],
selected_snp()$snp_name[1])
})
output$snp_tbl <- DT::renderDT({
shiny::req(gwas_data())
shiny::validate(
shiny::need(selected_snp()$snp_rsid %in% gwas_data()$snp_rsid, "")
)
DT::datatable(
selected_snp() %>%
dplyr::select(
Trait = feature_display,
`-log10(p)` = nlog),
rownames = FALSE,
caption = paste("GWAS hits"),
options = list(dom = 't')
)
})
#COLOCALIZATION
# coloc_label <- reactive({
# switch(
# input$selection,
# "See all chromosomes" = "for all chromosomes",
# "Select a region" = paste("for chromosome", selected_chr())
# )
# })
##TCGA
col_tcga <- reactive({
iatlasGraphQLClient::query_colocalizations(coloc_datasets = "TCGA") %>%
dplyr::filter(coloc_dataset_name == "TCGA") %>%
dplyr::select("Plot" = plot_type, "SNP" = snp_rsid, Trait = feature_display, QTL = qtl_type, Gene = gene_hgnc, `Causal SNPs` = ecaviar_pp, Splice = splice_loc, CHR = snp_chr, plot_link)
})
output$colocalization_tcga <- DT::renderDT({
DT::datatable(
col_tcga() %>% dplyr::select(!plot_link),
escape = FALSE,
rownames = FALSE,
caption = "TCGA colocalization plots available", #paste("TCGA colocalization plots available ", coloc_label()),
selection = 'single',
options = list(
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
)
)
})
output$tcga_colocalization_plot <- shiny::renderUI({
shiny::req(selected_snp(), input$colocalization_tcga_rows_selected)
shiny::validate(
shiny::need(
!is.null(input$colocalization_tcga_rows_selected), "Click on table to see plot"))
link_plot <- as.character(col_tcga()[input$colocalization_tcga_rows_selected, "plot_link"])
tags$div(
tags$hr(),
tags$img(src = link_plot,
width = "100%")
)
})
##GTEX
col_gtex <- reactive({
iatlasGraphQLClient::query_colocalizations(coloc_datasets = "GTEX") %>%
dplyr::filter(coloc_dataset_name == "GTEX") %>%
dplyr::select("SNP" = snp_rsid, Trait = feature_display, QTL = qtl_type, Gene = gene_hgnc, Tissue = tissue, Splice = splice_loc, CHR = snp_chr, plot_link)
})
output$colocalization_gtex <- DT::renderDT({
DT::datatable(
col_gtex()%>% dplyr::select(!c(plot_link, Splice)),
escape = FALSE,
rownames = FALSE,
caption = "GTEX colocalization plots available ", #paste("GTEX colocalization plots available ", coloc_label()),
selection = 'single',
options = list(
pageLength = 5,
lengthMenu = c(5, 10, 15, 20)
)
)
})
output$gtex_colocalization_plot <- shiny::renderUI({
shiny::req(input$colocalization_gtex_rows_selected)
shiny::validate(
shiny::need(
!is.null(input$colocalization_gtex_rows_selected), "Click on table to see plot"))
link_plot <- as.character(col_gtex()[input$colocalization_gtex_rows_selected, "plot_link"])
tags$div(
tags$hr(),
tags$p(paste("GTEX Splice ID: ", as.character(col_gtex()[input$colocalization_gtex_rows_selected, "Splice"]))),
tags$img(src = link_plot,
width = "100%")
)
})
observeEvent(input$method_link_gwas,{
shiny::showModal(modalDialog(
title = "Method",
includeMarkdown("inst/markdown/methods/germline-gwas.md"),
easyClose = TRUE,
footer = NULL
))
})
observeEvent(input$method_link_colocalization,{
shiny::showModal(modalDialog(
title = "Method",
includeMarkdown("inst/markdown/methods/germline-colocalization.md"),
easyClose = TRUE,
footer = NULL
))
})
}
)
}
|
58847032386e54a0075e0beefc27899384724ed7
|
f9f9476e77b025583654ab42f0f0e5611f598a75
|
/Capstone Project/step6_packagedata.R
|
c192b8c62e7b420bd57cbc0f1b5d0369a4990a3d
|
[] |
no_license
|
rsizem2/jhu-data-science
|
a56d1467f1d6cac1d000e3fcace99e68c105d08a
|
4ad9a319ed83d0f8c607ac63f05499102d865075
|
refs/heads/master
| 2023-01-03T03:04:25.929474
| 2020-10-21T18:38:07
| 2020-10-21T18:38:07
| 275,019,731
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,060
|
r
|
step6_packagedata.R
|
# Load Libraries
library(tidytext)
library(tidyr)
library(data.table)
library(dplyr)
library(dtplyr)
## Set project directories
main_directory <- "~/textpredict/"
#main_directory <- dirname(rstudioapi::getSourceEditorContext()$path)
loaddir <- paste(main_directory, "scores/", sep = "")
savedir <- paste(main_directory, "final_data/", sep = "")
if(!dir.exists(savedir)){
dir.create(savedir)
}
## Set cutoff, delete N-grams with fewer than x counts
cutoff = 2
## Load Unigrams
filepath <- paste(loaddir, "unigrams_score.RData", sep = "")
load(file = filepath)
print("Loaded Unigrams")
unigrams <- lazy_dt(unigrams) %>%
filter(count > cutoff) %>%
mutate(stopwords = (word %in% stop_words$word)) %>%
arrange(desc(score)) %>%
as_tibble()
print("Converted Unigrams")
## Load Bigrams
filepath <- paste(loaddir, "bigrams_score.RData", sep = "")
load(file = filepath)
print("Loaded Bigrams")
bigrams <- lazy_dt(bigrams) %>%
filter(count > cutoff) %>%
mutate(stopwords = (word2 %in% stop_words$word)) %>%
arrange(desc(score)) %>%
as_tibble()
print("Converted Bigrams")
## Load Trigrams
filepath <- paste(loaddir, "trigrams_score.RData", sep = "")
load(file = filepath)
print("Loaded Trigrams")
trigrams <- lazy_dt(trigrams) %>%
filter(count > cutoff)%>%
mutate(stopwords = (word3 %in% stop_words$word)) %>%
arrange(desc(score)) %>%
as_tibble()
print("Converted Trigrams")
## Load Fourgrams
filepath <- paste(loaddir, "fourgrams_score.RData", sep = "")
load(file = filepath)
print("Loaded Fourgrams")
fourgrams <- lazy_dt(fourgrams) %>%
filter(count > cutoff)%>%
mutate(stopwords = (word4 %in% stop_words$word)) %>%
arrange(desc(score)) %>%
as_tibble()
## Save Data
if(cutoff > 0){
filepath <- paste(savedir, "data", as.character(cutoff), ".RData", sep = "")
save(unigrams, bigrams, trigrams, fourgrams, file = filepath)
print("Data Saved")
} else {
filepath <- paste(savedir, "data.RData", sep = "")
save(unigrams, bigrams, trigrams, fourgrams, file = filepath)
print("Data Saved")
}
rm(list = ls())
|
7b0e92285277cc84a93b5bcb235052274b60c2f9
|
577bfd9610409231e81e090a73d11f7b9c4f07cb
|
/NMFP/Example1/Simplesimulation.R
|
56fbcb6aa93b662bc5e2ee841c26a9528e85b537
|
[] |
no_license
|
Elric2718/NMFP
|
b6c7376bb0b3ca435816e684d651853936f8d992
|
c7657a31690ec85a25adc490078fd2804fd4306d
|
refs/heads/master
| 2021-06-04T21:04:51.814880
| 2016-09-04T23:03:12
| 2016-09-04T23:03:12
| 64,001,623
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 7,058
|
r
|
Simplesimulation.R
|
###############################################################################
######################### NMFP for lowly expressed gene #######################
###############################################################################
###### load package
library(dplyr)
library(compiler)
library(data.table)
library(reshape2)
library(ggplot2)
library(grid)
###load codes
########### please first set the path to the 'NMFP_code/NMFP' dir ############
code_file <- "~/Desktop/NMFP_code/NMFP"
source(paste(code_file,"/ExonBoundary.R",sep=""))
source(paste(code_file,"/FindTypes2.R",sep=""))
source(paste(code_file,"/PreSelection.R",sep=""))
source(paste(code_file,"/VoteforIsoform.R",sep=""))
source(paste(code_file,"/ncNMF2.R",sep=""))
source(paste(code_file,"/VisualizeG5.R",sep=""))
source(paste(code_file,"/VisualCheck.R",sep=""))
source(paste(code_file,"/VisualCheck4.R",sep=""))
source(paste(code_file,"/RankDetermine2.R",sep=""))
source(paste(code_file,"/Write_gtf.R",sep=""))
source(paste(code_file,"/ContradictTable.R",sep=""))
source(paste(code_file,"/Normalization.R",sep=""))
source(paste(code_file,"/PairReadLength.R",sep=""))
source(paste(code_file,"/AssignMatrix4.R",sep=""))
source(paste(code_file,"/ReadCount3.R",sep=""))
cmp_ncNMF2<-cmpfun(ncNMF2)
################## construct the gene ##################
nExon <- 8
nIsof <- 3
LenRead <- 76
exon_length <- seq(100,500, by = 50)
GenerateExon <- function(nExon, exon_length){
##### Genereate ExonPool (generating exons)
nchoice <- length(exon_length)
ExonPool <- rbind(rep(1, nExon), sample(exon_length, nExon, replace = TRUE))
for(i in 1 : (nExon-1)){
ExonPool[,(i+1)] <- ExonPool[,(i+1)] + ExonPool[2,i]
}
return(ExonPool)
}
ExonPool <- GenerateExon(nExon, exon_length)
ELen <- ExonPool[2,] - ExonPool[1,] + 1
## types of bins, like (1,1), (1,2)
TypesBox <- FindTypes2(nExon)
## get the effective length of each bin
effective_len <- sapply(TypesBox, function(type)
PairReadLength(ExonPool, LenRead, Type = type))
## generate a gene with three isoforms; the gene is 8 exons long
ISOF_exon <- matrix(0, nrow = nExon , ncol = nIsof)
ISOF_exon[, 1] <- c(1, 1, 0, rep(1, 5))
ISOF_exon[, 2] <- c(0, 1, 1, rep(1, 5))
ISOF_exon[, 3] <- c(1, 0, 0, rep(1, 5))
## The profile for true isoforms
AnnoTrans <- apply(ISOF_exon, 2, function(isof){
paste(isof, collapse="")
})
## adding junction bins
ISOF <- apply(ISOF_exon, 2, function(isof){
nExon <- length(isof)
bin_level1 <- isof
bin_level2 <- isof[-nExon] * isof[-1]
bin_level3 <- isof[-c(nExon-1, nExon)] * isof[-c(1, 2)]
bin_level4 <- isof[-seq(nExon-2, nExon)] * isof[-seq(1, 3)]
return(c(bin_level1, bin_level2, bin_level3, bin_level4))
})
rownames(ISOF) <- TypesBox
## The probablity of reads falling into one bin given the isoform
length_ratio <- apply(ISOF, 2, function(isof){
isof * effective_len/sum(isof * effective_len)
})
GenerateReps <- function(ISOF, nrep = 10, length_ratio, depth){
##### Generate nrep replicates of genes given ISOF
stopifnot(nrep == length(depth))
nbin <- nrow(ISOF)
nIsof <- ncol(ISOF)
sapply(seq(nrep), function(i){
### simulate gene expression
isof_ratio <- runif(nIsof)
isof_ratio <- isof_ratio/sum(isof_ratio)
express_level <- ISOF %*% diag(isof_ratio) * depth[i]
### add noise
bin_noise <- matrix(rnorm(n = nbin * nIsof , mean = 0, sd = 1),
nrow = nbin) %*%
diag(isof_ratio)
noise_express <- express_level + bin_noise
noise_express[noise_express < 0] <- 0
### simulate sequencing
reads_level <- rowSums(noise_express * length_ratio)
return(reads_level)
})
}
## generate the expression level
DepthSelect <- function(ntotal = 10, high_level = 5){
depth <- c(3, rep(high_level, (ntotal-1)))
return(depth)
}
### simulation part
RECALL <- NULL
nCand <- NULL
for(k in c(0.5,seq(10))){
recall <- NULL
nCandidates<- NULL
for(j in 1:50){
depth <- DepthSelect(ntotal = 10, high_level = k)
print(depth)
REPs <- GenerateReps(ISOF, 10, length_ratio, depth)
NormMatrix<-Normalization(REPs, ExonPool, LenRead, TypesBox)
BIsoform<-try(VoteforIsoform(rank=3,nrun=100,alpha0=0.1,
NormMatrix=NormMatrix,TypesBox=TypesBox,ExonPool=ExonPool,
LenRead=LenRead,Gcutoff=0.4,mode=1,
input_file=code_file))
if(inherits(BIsoform, "try-error"))
{
# error handling: just skip this iteration with `next`
next
}
IsoformSet<-BIsoform$IsoformSet
#rank<-BIsoform$rank
Candidates <- names(IsoformSet[1:30])
#Candidates <- PreSelection(Candidates=names(IsoformSet[which(IsoformSet>=70)]),SumRead=apply(NormMatrix,1,mean),ExonPool=ExonPool,LenRead=LenRead)
candidates <- NULL
for(i in 1:length(AnnoTrans)){
candidates<-c(candidates,Candidates[which(is.element(Candidates,AnnoTrans[i]))])
}
nCandidates <- c(nCandidates, max(which(is.element(names(IsoformSet),AnnoTrans))))
recall <- c(recall,length(candidates)/nIsof)
print(j)
}
nCand[[as.character(k)]] <- nCandidates
RECALL[[as.character(k)]] <- recall
}
candmat <-
apply(t(c(0.5,seq(10))),2,function(i){
vec <- rep(NA,50)
vec[1:length(nCand[[as.character(i)]])] <- nCand[[as.character(i)]]
return(vec)
}) %>%
data.table() %>%
setnames(paste(c(0.5,seq(10)))) %>%
melt() %>%
setnames(c("exp_level", "n_candidates"))
ggplot(candmat, aes(x= exp_level, y = n_candidates)) +
geom_boxplot() +
xlab("Expression Level") +
ylab("Number of Isoform Candidates") +
theme_bw() +
theme(
axis.text = element_text(size = rel(0.9), colour = "grey50"),
axis.title = element_text(size = rel(1.2), colour = "black"),
axis.ticks = element_line(size = rel(1), colour = "black"),
axis.ticks.length = unit(0.15, "cm"),
axis.ticks.margin = unit(0.1, "cm"),
legend.title = element_text(colour="black", size=12, face="bold"),
legend.text = element_text(colour="black", size=12, face="bold"),
plot.title = element_text(size = rel(1.2), colour = "grey50"))
recallmat <-
apply(t(c(0.5,seq(10))),2,function(i){
vec <- rep(NA,50)
vec[1:length(RECALL[[as.character(i)]])] <- RECALL[[as.character(i)]]
return(vec)
}) %>%
data.table() %>%
setnames(paste(c(0.5,seq(10)))) %>%
melt() %>%
setnames(c("exp_level", "recall"))
ggplot(recallmat, aes(x= exp_level, y = recall)) +
geom_boxplot() +
xlab("Expression Level") +
ylab("Recall") +
theme_bw() +
theme(
axis.text = element_text(size = rel(0.9), colour = "grey50"),
axis.title = element_text(size = rel(1.5), colour = "black"),
axis.ticks = element_line(size = rel(1), colour = "black"),
axis.ticks.length = unit(0.15, "cm"),
axis.ticks.margin = unit(0.1, "cm"),
legend.title = element_text(colour="black", size=12, face="bold"),
legend.text = element_text(colour="black", size=12, face="bold"),
plot.title = element_text(size = rel(1.5), colour = "grey50"))
|
6d3c2b653124e640f3719e255873157697fc30c3
|
9aec8515144a368b070875f20f7027ffbaf03563
|
/Rtwitter.R
|
fdee63dc9dfa754431e1361c44f47f5391f021f8
|
[] |
no_license
|
AlessandroPTSN/Analizando-minha-conta-com-Rtweet
|
497b6a7b7a984b3447f8cdb76a77ea125ae6237f
|
83fa112a79089db75c6fa48f2dd24e57d924cdb5
|
refs/heads/master
| 2020-12-10T20:32:59.378233
| 2020-01-15T20:07:47
| 2020-01-15T20:07:47
| 233,703,595
| 0
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 4,905
|
r
|
Rtwitter.R
|
library(rtweet)
# Change consumer_key, consumer_secret, access_token, and
# access_secret to your own keys
token <- create_token(
app = "Hi",
consumer_key = consumer_key,
consumer_secret = consumer_secret,
access_token = access_token,
access_secret = access_secret)
h=get_timeline("Alessandroptsn",n=500)
names(h)
####################################################################################################################
# creating the pie chart
source_table=h$source[h$source!="SumAll"]
source_table=source_table[source_table!="Twitter Web Client" ]
source_table=table(source_table)
# previewing the chart
pie(source_table)
library(dplyr)
# tidying the table
source_table_2 <-
dplyr::tibble(
Fonte = names(source_table),
size = source_table
) %>%
dplyr::arrange(desc(size)) %>%
dplyr::slice(1:10)
# improving the table
source_table_2
source_table_2$percent = round((source_table_2$size)/sum(source_table_2$size),2)
source_table_2
library(ggplot2)
# adding the position of the % labels
source_table_2 <- source_table_2%>%
arrange(desc(Fonte)) %>%
mutate(position = cumsum(size) - 0.5*size)
source_table_2
# pie chart
ggplot( source_table_2, aes(x = "", y = size, fill = Fonte)) +
geom_bar(width = 1, stat = "identity", color = "white") +
ggtitle(" Por onde eu mais faço tweets")+
coord_polar("y", start = 0)+
geom_text(aes(y = position, label = percent), color = "Black")+
scale_fill_brewer(palette = "Blues") +
#scale_fill_manual(values = c("#1F65CC","#3686D3")) +
theme_void()
####################################################################################################################
# creating the line chart
library(chron)
library(stringr)
names(h)
h$created_at
h$created_at=str_sub(h$created_at, end = 10)
h$created_at=as.Date(h$created_at)
ggplot(h, aes(x = created_at, y = favorite_count))+geom_line(size = 2,colour = "red")+
xlab("Tempo")+
ylab("Quantidade de Likes")+
ggtitle("Quantidade de Likes a medida do tempo, em média 2.26 Likes")
mean(h$favorite_count)
####################################################################################################################
# word-cloud chart
library(readtext)
library(tm)
library(ggplot2)
library(reshape2)
# Efficient data manipulation
library(tidyverse)
# Efficient text manipulation
library(tidytext)
# Reading PDF into text
library(textreadr)
# Text-mining package with stopwords
library(tm)
# Word-cloud plot
library(wordcloud)
library("tidytext")
library("stopwords")
texto <- scan("stopwords.txt", what="char", sep="\n", encoding = "UTF-8")
stop <- tolower(texto)
H=unlist(h$text)
stopwords_regex = paste(stop, collapse = '\\b|\\b')
stopwords_regex = paste0('\\b', stopwords_regex, '\\b')
documents = stringr::str_replace_all(H, stopwords_regex, '')
H=documents
NormalizaParaTextMining <- function(texto){
# Normaliza texto
texto %>%
chartr(
old = "(),´`^~¨:.!?&$@#0123456789",
new = " ",
x = .) %>% # Elimina acentos e caracteres desnecessarios
str_squish() %>% # Elimina espacos excedentes
tolower() %>% # Converte para minusculo
return() # Retorno da funcao
}
H=NormalizaParaTextMining(H)
H
texto <- tolower(H)
lista_palavras <- strsplit(texto, "\\W+")
vetor_palavras <- unlist(lista_palavras)
frequencia_palavras <- table(vetor_palavras)
frequencia_ordenada_palavras <- sort(frequencia_palavras, decreasing=TRUE)
palavras <- paste(names(frequencia_ordenada_palavras), frequencia_ordenada_palavras, sep=";")
cat("Palavra;Frequencia", palavras, file="frequencias.csv", sep="\n")
palavras = read.csv("frequencias.csv", sep=";")
# library
library(wordcloud2)
# Word cloud
wordcloud2(palavras, size = 0.7, color = "#1F65CC",backgroundColor = "grey")
####################################################################################################################
# Bar chart showing retweets
isretweet=table(h$is_retweet)
# tidying the table
retweet <-
dplyr::tibble(
Fonte = names(isretweet),
size = isretweet
) %>%
dplyr::arrange(desc(size)) %>%
dplyr::slice(1:10)
# improving the table
retweet$percent = round((retweet$size)/sum(retweet$size),2)
# showing the table
retweet
# making the chart
ggplot(data=retweet, aes(x=Fonte,y=percent,fill=factor(size))) +
geom_bar(position="dodge",stat="identity")+
scale_fill_manual(values = c("dodgerblue4","red3"))+
xlab("")+
ylab("Porcentagem")+
ggtitle("Meus Retwittes")
####################################################################################################################
|
3890530c393158d338913ca6916a19c75b0d9396
|
0b56be1b7df75e97165707c90f772f33b0623743
|
/peakPantheR/R scripts/Dementia_Urine_PP_RPOS.R
|
9195a324cd43a5e3a3dbb8fb2c305011a189561e
|
[
"MIT"
] |
permissive
|
phenomecentre/metabotyping-dementia-urine
|
68d5b7b249b8c8401c0615453d7039c45a1d8ad7
|
e3f7be5ab84cec005e8362683d7823f2a22461ac
|
refs/heads/master
| 2023-05-31T03:48:26.204967
| 2021-06-25T12:42:51
| 2021-06-25T12:42:51
| 279,675,819
| 2
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 7,741
|
r
|
Dementia_Urine_PP_RPOS.R
|
## ---------------------------------------------------------------------------------------------------------------------------------------------
## Dementia cohort - urine - peakPantheR RPOS assay
## ---------------------------------------------------------------------------------------------------------------------------------------------
# Import peakPantheR package
library(peakPantheR)
# Load function to parse the National Phenome Centre filenames and extract sample types
source('parse_IPC_project_folder.R')
# path to mzML files - Must be downloaded from Metabolights
rawData_folder <- './path_to_mzML'
# reference csv file
reference_ROI <- '../LC-MS Annotations/ROI Files/RPOS_ROI.csv'
# project name
project_name <- 'Dementia'
# parse the file names
project_files <- parse_IPC_MS_project_names(rawData_folder, 'U')
files_all <- project_files[[1]]
metadata_all <- project_files[[2]]
# Select only quality control samples
which_QC <- metadata_all$sampleType %in% c('LTR', 'SR')
## ROI (regions of interest)
tmp_ROI <- read.csv(file=reference_ROI, stringsAsFactors=FALSE, encoding='utf8')
ROI <- tmp_ROI[,c("cpdID", "cpdName", "rtMin", "rt", "rtMax", "mzMin", "mz", "mzMax")]
# working directory
work_dir <- '../LC-MS Annotations/peakPantheR Urine RPOS/'
#---------------------------------------------------------------------------------------------------------
# Example peakPantheR Workflow
#---------------------------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------------------------
# Step 1. Run peakPantheR with the default ROI files.
# For faster run-time and to facilitate review, only quality control samples (study reference/SR and Long
# term reference/LTR) will be analysed.
# These have wide (+- 15 seconds) retention time intervals, and will be saved as "wideWindows_annotation"
#---------------------------------------------------------------------------------------------------------
# set up the peakPantheRAnnotation object
data_annotation_wideWindows <- peakPantheRAnnotation(spectraPaths=files_all[which_QC], targetFeatTable=ROI, spectraMetadata=metadata_all[which_QC, ])
# run the annotation
data_result_wideWindows <- peakPantheR_parallelAnnotation(data_annotation_wideWindows, ncores=18, resetWorkers=30, verbose=TRUE )
## Save results in .Rdata file
save(data_result_wideWindows, file=file.path(work_dir, 'wideWindows_annotation_result.RData'), compress=TRUE)
# run the automated diagnostic method
data_annotation_wideWindows <- data_result_wideWindows$annotation
updated_annotation <- annotationParamsDiagnostic(data_annotation_wideWindows, verbose=TRUE)
# Save to disk ROI parameters (as csv) and diagnostic plot for each compound
uniq_sType <- c('SR', 'LTR', 'SRD', 'SS', 'Mix')
uniq_colours <- c('green', 'orange', 'red', 'dodgerblue3', 'darkgreen')
col_sType <- unname( setNames(c(uniq_colours),c(uniq_sType))[spectraMetadata(updated_annotation)$sampleType] )
outputAnnotationDiagnostic(updated_annotation, saveFolder=file.path(work_dir, 'wideWindows_annotation_SR_LTR/'), savePlots=TRUE, sampleColour=col_sType, verbose=TRUE, ncores=8)
outputAnnotationResult(updated_annotation, saveFolder=file.path(work_dir, 'wideWindows_annotation_SR_LTR'), annotationName=paste(project_name, '_wideWindows'), verbose=TRUE)
#---------------------------------------------------------------------------------------------------------
# 2. Repeat the process using the automatic uROI suggestion calculated using the
# annotationParamsDiagnostic method.
# For faster run-time and to facilitate review, only quality control samples (study reference/SR and Long
# term reference/LTR) will be analysed
# This run will be saved as "narrowWindows_annotation"
#---------------------------------------------------------------------------------------------------------
# load the csv exported from the "wideWindows" run
update_csv_path <- file.path(work_dir, './wideWindows_annotation_SR_LTR/annotationParameters_summary.csv')
# set up the peakPantheRAnnotation object
narrowWindows_annotation <- peakPantheR_loadAnnotationParamsCSV(update_csv_path)
# add samples
narrowWindows_annotation <- resetAnnotation(narrowWindows_annotation, spectraPaths=files_all[which_QC], spectraMetadata=metadata_all[which_QC,], useUROI=TRUE, useFIR=TRUE)
# run the annotation
narrowWindows_annotation_results <- peakPantheR_parallelAnnotation(narrowWindows_annotation, ncores=18, resetWorkers=30, verbose=TRUE)
## Save results in .Rdata file
save(narrowWindows_annotation_results, file=file.path(work_dir, 'narrowWindows.RData'), compress=TRUE)
# run the automated diagnostic method
narrowWindows_annotation <- narrowWindows_annotation_results$annotation
narrowWindows_annotation <- annotationParamsDiagnostic(narrowWindows_annotation, verbose=TRUE)
# Save to disk ROI parameters (as csv) and diagnostic plot for each compound
uniq_sType <- c('SR', 'LTR', 'SRD', 'SS')
uniq_colours <- c('green', 'orange', 'red', 'dodgerblue3')
col_sType <- unname( setNames(c(uniq_colours),c(uniq_sType))[spectraMetadata(narrowWindows_annotation)$sampleType] )
outputAnnotationDiagnostic(narrowWindows_annotation, saveFolder=file.path(work_dir, 'narrowWindows_annotation_SR_LTR/'), savePlots=TRUE, sampleColour=col_sType, verbose=TRUE, ncores=8)
outputAnnotationResult(narrowWindows_annotation, saveFolder=file.path(work_dir, 'narrowWindows_annotation_SR_LTR'), annotationName=paste(project_name, '_narrowWindows'), verbose=TRUE)
#--------------------------------------------------------------------------------------------------
# Final run
# After reviewing the uROI windows from the previous run, integration of all
# samples is performed to generate the final dataset
# This or any of the previous steps can be repeated as desired.
#--------------------------------------------------------------------------------------------------
# load the csv exported from the "narrowWindows" run, either as exported or after manual adjustments.
# In this example, manual modifications were made to the file, which was then
# saved as "annotationParameters_summary_final_ALL.csv"
update_csv_path <- file.path('./PP Urine RPOS ppR paper/annotationParameters_summary_final_ALL.csv')
final_annotation <- peakPantheR_loadAnnotationParamsCSV(update_csv_path)
# add samples
final_annotation <- resetAnnotation(final_annotation, spectraPaths=files_all, spectraMetadata=metadata_all, useUROI=TRUE, useFIR=TRUE)
# run the annotation
final_annotation_results <- peakPantheR_parallelAnnotation(final_annotation, ncores=12, resetWorkers=30, verbose=TRUE)
## Save results in .Rdata file
save(final_annotation_results, file=file.path(work_dir, 'final_bySampleType_ALL.RData'), compress=TRUE)
# run the automated diagnostic method
final_annotation <- final_annotation_results$annotation
final_annotation <- annotationParamsDiagnostic(final_annotation, verbose=TRUE)
# Save to disk ROI parameters (as csv) and diagnostic plot for each compound
uniq_sType <- c('SR', 'LTR', 'SRD', 'SS')
uniq_colours <- c('green', 'orange', 'red', 'dodgerblue3')
col_sType <- unname( setNames(c(uniq_colours),c(uniq_sType))[spectraMetadata(final_annotation)$sampleType] )
outputAnnotationDiagnostic(final_annotation, saveFolder=file.path(work_dir, 'final_annotation_bySampleType_ALL'), savePlots=TRUE, sampleColour=col_sType, verbose=TRUE, ncores=12)
outputAnnotationResult(final_annotation, saveFolder=file.path(work_dir, 'final_results_bySampleType_ALL'), annotationName=paste(project_name, 'final'), verbose=TRUE)
|
ea5010e55e7289ec85b7c972d5a464110ef0690e
|
26cd41d81e252b397e4e997773cc3c02574e32d9
|
/Spiderman_Review.R
|
098af390676b09939b19f63cafd71fa39b728952
|
[] |
no_license
|
joeychoi12/R_Crawling
|
c6420505b2199308f9a9d590c3c3ee709ef0ee0d
|
e5b4c25ace8a8f5ba4fd12eb8fec4f028bb2cecb
|
refs/heads/master
| 2020-06-14T07:44:30.289482
| 2019-08-13T08:03:57
| 2019-08-13T08:03:57
| 194,950,923
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,538
|
r
|
Spiderman_Review.R
|
# Crawl user reviews of the movie 'Spiderman' from NAVER Movies
library(rvest)
library(stringr)
library(dplyr)
trim <- function (x) gsub("^\\s+|\\s+$", "", x)
url_base <- 'https://movie.naver.com'
start_url <- "/movie/bi/mi/point.nhn?code=173123#tab"
url <- paste0(url_base, start_url)
html <- read_html(url, encoding = "euc-kr")
html %>%
html_node('iframe.ifr') %>%
html_attr('src') -> url2
url2
page <- "&page="
score <- c()
review <- c()
writer <- c()
time <- c()
for(i in 1:250){
  ifr_url <- paste0(url_base, url2, page, i)
html2 <- read_html(ifr_url)
html2 %>%
html_node('div.score_result') %>%
html_nodes('li') -> lis
for (li in lis) {
score <- c(score, html_node(li, '.star_score') %>% html_text('em') %>% trim())
li %>%
html_node('.score_reple') %>%
html_text('p') %>%
trim() -> tmp
idx <- str_locate(tmp, "\r")
review <- c(review, str_sub(tmp, 1, idx[1]-1))
tmp <- trim(str_sub(tmp, idx[1], -1))
idx <- str_locate(tmp, "\r")
writer <- c(writer, str_sub(tmp, 1, idx[1]-1))
tmp <- trim(str_sub(tmp, idx[1], -1))
idx <- str_locate(tmp, "\r")
time <- c(time, str_sub(tmp, 1, idx[1]-1))
#print(time)
}
}
review = data.frame(score=score, review=review, writer=writer, time=time)
View(review)
class(review$score)
review$score_asnum <- as.numeric(as.character(review$score))
View(review$score_asnum)
View(review$score_asnum)
mean(review$score_asnum)
setwd('d:/workspace/R_Crawling/')
write.csv(review,"spiderman_review.csv")
|
21cf2c33153caad9ea568ae1e96078f07d33c6d1
|
8d4dfa8b6c11e319fb44e578f756f0fa6aef4051
|
/man/isStrippedACs.Rd
|
e3797e3c54b56957e56bf22bbe6179b7973f8ae3
|
[] |
no_license
|
eahrne/SafeQuant
|
ce2ace309936b5fc2b076b3daf5d17b3168227db
|
01d8e2912864f73606feeea15d01ffe1a4a9812e
|
refs/heads/master
| 2021-06-13T02:10:58.866232
| 2020-04-14T10:01:43
| 2020-04-14T10:01:43
| 4,616,125
| 4
| 4
| null | 2015-11-03T20:12:03
| 2012-06-10T15:35:25
|
R
|
UTF-8
|
R
| false
| true
| 564
|
rd
|
isStrippedACs.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/IdentificationAnalysis.R
\name{isStrippedACs}
\alias{isStrippedACs}
\title{Check if ACs are in "non-stripped" uniprot format e.g. "sp|Q8CHJ2|AQP12_MOUSE"}
\usage{
isStrippedACs(acs)
}
\arguments{
\item{acs}{accession numbers}
}
\value{
boolean TRUE/FALSE
}
\description{
Check if ACs are in "non-stripped" uniprot format e.g. "sp|Q8CHJ2|AQP12_MOUSE"
}
\details{
TRUE if less than 10% of ACs contain a "|" character
}
\note{
No note
}
\examples{
print("No examples")
}
\references{
NA
}
|
a7ab2e0aef391b9c38b864297cbef765aa3b161d
|
9a4c2b70f32e380ad0df3413b607b73117ff4fcd
|
/man/rescaleVariance.Rd
|
795789f559c5be4531bb0ae0086feff5d6a565d3
|
[] |
no_license
|
DPCscience/PhenotypeSimulator
|
717d617eab3d5feb541152fe7547b908f0fed51b
|
79df7d1c2d29296c4636fac24746926249dc4972
|
refs/heads/master
| 2021-01-20T12:29:22.459120
| 2017-08-05T16:29:28
| 2017-08-05T16:29:28
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 818
|
rd
|
rescaleVariance.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/createphenotypeFunctions.R
\name{rescaleVariance}
\alias{rescaleVariance}
\title{Scale phenotype component.}
\usage{
rescaleVariance(component, propvar)
}
\arguments{
\item{component}{numeric [N x P] phenotype matrix where N are the number of
observations and P numer of phenotypes}
\item{propvar}{numeric specifying the proportion of variance that should be
explained by this phenotype component}
}
\value{
component_scaled numeric [N x P] phenotype matrix if propvar != 0 or
NULL
}
\description{
The function scales the specified phenotype component such that the average
column variance is equal
to the user-specified proportion of variance.
}
\examples{
x <- matrix(rnorm(100), nc=10)
x_scaled <- rescaleVariance(x, propvar=0.4)
}
|
fe78dd013fc7d9ec80fbdc53ccb762eef6c9bf3f
|
6923f79f1eaaba0ab28b25337ba6cb56be97d32d
|
/Gelman_BDA_ARM/arm/3.1.R
|
d0bb9e195e0eee00157760338ba1027ecb934988
|
[] |
no_license
|
burakbayramli/books
|
9fe7ba0cabf06e113eb125d62fe16d4946f4a4f0
|
5e9a0e03aa7ddf5e5ddf89943ccc68d94b539e95
|
refs/heads/master
| 2023-08-17T05:31:08.885134
| 2023-08-14T10:05:37
| 2023-08-14T10:05:37
| 72,460,321
| 223
| 174
| null | 2022-10-24T12:15:06
| 2016-10-31T17:24:00
|
Jupyter Notebook
|
UTF-8
|
R
| false
| false
| 280
|
r
|
3.1.R
|
library ("foreign")
iq.data <- read.dta ("../doc/gelman/ARM_Data/child.iq/kidiq.dta")
attach(iq.data)
fit.2 <- lm (kid_score ~ mom_hs )
print (fit.2)
fit.3 <- lm (kid_score ~ mom_hs + mom_iq)
print (fit.3)
fit.4 <- lm (kid_score ~ mom_hs + mom_iq + mom_hs*mom_iq)
print (fit.4)
|
478289896261fa5e439766ef8e1a7e7e9ae6e498
|
c9865b8080591efce5b3844bb2d915dc692ce7e5
|
/Classification and Regression models.r
|
2d751e258ecadff3969931c34161fc6d03cb7353
|
[] |
no_license
|
tydra33/Artificial_Intelligence-R_Project
|
727f7a327e8947d61eaf02e2b2aec7d941859845
|
7237c8dd6bfff47f57adc7d0e4848423af8232e7
|
refs/heads/main
| 2023-05-12T19:18:07.291036
| 2021-06-02T20:33:52
| 2021-06-02T20:33:52
| 350,842,019
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 11,374
|
r
|
Classification and Regression models.r
|
# So we are building a classification model based on the variable "poraba"
modelEval <- function( data, formula, modelFun, evalFun, fType, modelType,
toPrune=F) {
localTrain <- rep(F, times = nrow(goodInfo))
outPut <- vector()
for(i in 1:11) {
cat("ITERATION: ", i, "\n")
flush.console()
localTrain <- localTrain | goodInfo$month == i
localTest <- goodInfo$month == i + 1
model <- modelFun(formula, data[localTrain,])
if(toPrune) {
model <- modelFun(formula, data=data[localTrain,], cp=0)
tab <- printcp(model)
row <- which.min(tab[,"xerror"])
th <- mean(c(tab[row, "CP"], tab[row-1, "CP"]))
model <- prune(model, cp=th)
}
if(modelType == "class") {
observed <- data$norm_poraba[localTest]
obsMat <- class.ind(data$norm_poraba[localTest])
predicted <- predict(model, data[localTest,], type="class")
predMat <- predict(model, data[localTest,], type = fType)
if(evalFun == "CA") {
outPut[i] <- CA(observed, predicted)
}
else if(evalFun == "brier") {
outPut[i] <- brier.score(obsMat, predMat)
}
}
else if (modelType == "reg") {
predicted <- predict(model, data[localTest,])
observed <- data$poraba[localTest]
if (evalFun == "rmse") {
outPut[i] <- rmse(observed, predicted, mean(data$poraba[localTrain]))
}
else if (evalFun == "rmae") {
outPut[i] <- rmae(observed, predicted, mean(data$poraba[localTrain]))
}
else if (evalFun == "mae") {
outPut[i] <- mae(observed, predicted)
}
else if (evalFun == "mse") {
outPut[i] <- mse(observed, predicted)
}
}
}
outPut
}
modelEvalKNN <- function(data, formula, evalFun) {
localTrain <- rep(F, times = nrow(goodInfo))
outPut <- vector()
for(i in 1:11) {
cat("ITERATION: ", i, "\n")
flush.console()
localTrain <- localTrain | goodInfo$month == i
localTest <- goodInfo$month == i + 1
model <- kknn(formula, data[localTrain,], data[localTest,], k = 5)
predicted <- fitted(model)
observed <- data$poraba[localTest]
if (evalFun == "rmse") {
outPut[i] <- rmse(observed, predicted, mean(data$poraba[localTrain]))
}
else if (evalFun == "rmae") {
outPut[i] <- rmae(observed, predicted, mean(data$poraba[localTrain]))
}
else if (evalFun == "mae") {
outPut[i] <- mae(observed, predicted)
}
else if (evalFun == "mse") {
outPut[i] <- mse(observed, predicted)
}
}
outPut
}
porabat <- read.csv("data.csv", sep=",", stringsAsFactors = T)
porabat <- na.omit(porabat)
porabat$datum <- NULL
porabat$poraba <- NULL
porabat$avgPoraba<-NULL
porabat$maxPoraba<-NULL
porabat$minPoraba<-NULL
porabat$sumPoraba<-NULL
porabat$month<-NULL
porabat$X <- NULL
porabat$ura <- as.factor(porabat$ura)
set.seed(0)
samplec<- sample(1:nrow(porabat), size = as.integer(nrow(porabat) * 0.7), replace = F)
train<- porabat[samplec,]
test<- porabat[-samplec,]
table(train$norm_poraba)
table(test$norm_poraba)
library(nnet)
obsMat <- class.ind(test$norm_poraba)
library(caret)
observed <- test$norm_poraba
CA <- function(observed, predicted)
{
mean(observed == predicted)
}
brier.score <- function(observedMatrix, predictedMatrix)
{
sum((observedMatrix - predictedMatrix) ^ 2) / nrow(predictedMatrix)
}
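As a quick sanity check, here is a toy illustration of the two evaluation functions defined above, on made-up vectors (the `*_toy` names and values are hypothetical, not part of the dataset):

```r
# Toy check of CA() and brier.score() defined above (hypothetical values)
obs_toy  <- c("low", "low", "high")
pred_toy <- c("low", "high", "high")
CA(obs_toy, pred_toy)                            # 2 of 3 labels agree -> 0.667
obsMat_toy  <- rbind(c(1, 0), c(0, 1))           # one-hot observed classes
predMat_toy <- rbind(c(0.8, 0.2), c(0.4, 0.6))   # predicted class probabilities
brier.score(obsMat_toy, predMat_toy)             # (0.08 + 0.32) / 2 = 0.2
```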
#classification model 1 decision tree
library(rpart)
dt <- rpart(norm_poraba ~ ., data = train, cp =0)
printcp(dt)
tab <- printcp(dt)
row <- which.min(tab[,"xerror"])
th <- mean(c(tab[row, "CP"], tab[row-1, "CP"]))
th
dt <- prune(dt, cp=th)
predicted <- predict(dt, test, type="class")
CA(observed, predicted) #0.815472
predMat <- predict(dt, test, type = "prob")
brier.score(obsMat, predMat) #0.2818836
###
dt1 <- rpart(norm_poraba ~ namembnost + leto_izgradnje + povrsina, data = train, cp = 0)
printcp(dt1)
tab1 <- printcp(dt1)
row1 <- which.min(tab1[,"xerror"])
th1 <- mean(c(tab1[row1, "CP"], tab1[row1-1, "CP"]))
th1
dt1 <- prune(dt1, cp=th1)
predicted1 <- predict(dt1, test, type="class")
CA(observed, predicted1) #0.6408792
predMat1 <- predict(dt1, test, type = "prob")
brier.score(obsMat, predMat1) #0.4768113
modelEval(data = porabat, formula = as.formula(norm_poraba ~ .), modelFun = rpart,
evalFun = "brier", fType = "prob", modelType = "class", toPrune = T)
modelEval(data = porabat, formula = as.formula(norm_poraba ~ namembnost + leto_izgradnje + povrsina),
modelFun = rpart, evalFun = "brier", fType = "prob", modelType = "class", toPrune = T)
#Classification model 2 with random forest
library(CORElearn)
rf <- CoreModel(norm_poraba ~ ., data = train, model="rf")
predicted <- predict(rf, test, type="class")
CA(observed, predicted) #0.8312868
predMat <- predict(rf, test, type = "prob")
brier.score(obsMat, predMat) #0.2446596
###
rf1 <- CoreModel(norm_poraba ~ namembnost + leto_izgradnje + povrsina, data = train, model="rf")
predicted1 <- predict(rf1, test, type="class")
CA(observed, predicted1) #0.6408792
predMat1 <- predict(rf1, test, type = "prob")
brier.score(obsMat, predMat1) #0.4557486
modelEval(data = porabat, formula = as.formula(norm_poraba ~ .), modelFun = CoreModel,
evalFun = "brier", fType = "prob", modelType = "class")
modelEval(data = porabat, formula = as.formula(norm_poraba ~ namembnost + leto_izgradnje + povrsina), modelFun = CoreModel,
evalFun = "brier", fType = "prob", modelType = "class")
#Classification model 3 with naive Bayes
library(e1071)
nb <- naiveBayes(norm_poraba ~ ., data = train)
predicted <- predict(nb, test, type="class")
CA(observed, predicted) #0.3799943
predMat <- predict(nb, test, type = "raw")
brier.score(obsMat, predMat) #0.7411347
###
nb1 <- naiveBayes(norm_poraba ~ namembnost + leto_izgradnje + povrsina, data = train)
predicted1 <- predict(nb1, test, type="class")
CA(observed, predicted1) #0.3734106
predMat1 <- predict(nb1, test, type = "raw")
brier.score(obsMat, predMat1) #0.7360255
modelEval(data = porabat, formula = as.formula(norm_poraba ~ namembnost + leto_izgradnje + povrsina), modelFun = naiveBayes,
evalFun = "brier", fType = "raw", modelType = "class")
modelEval(data = porabat, formula = as.formula(norm_poraba ~ .), modelFun = naiveBayes,
evalFun = "brier", fType = "raw", modelType = "class")
##############################################Regresija############################################################################
# linearna regresija
porabac <- read.csv("data.csv", sep=",", stringsAsFactors = T)
porabac <- na.omit(porabac)
porabac$datum <- NULL
porabac$norm_poraba<-NULL
porabac$month<-NULL
porabac$season<-NULL
porabac$X <- NULL
porabac$weather <- NULL
porabac$ura <- as.factor(porabac$ura)
set.seed(0)
samplec<- sample(1:nrow(porabac), size = as.integer(nrow(porabac) * 0.7), replace = F)
train<- porabac[samplec,]
test<- porabac[-samplec,]
rmae <- function(obs, pred, mean.val)
{
sum(abs(obs - pred)) / sum(abs(obs - mean.val))
}
rmse <- function(obs, pred, mean.val)
{
sum((obs - pred)^2)/sum((obs - mean.val)^2)
}
mae <- function(obs, pred)
{
mean(abs(obs - pred))
}
mse <- function(obs, pred)
{
mean((obs - pred)^2)
}
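A toy illustration of the four error metrics defined above, on hypothetical numbers (`obs_toy`/`pred_toy` are made up). Note that `rmse`/`rmae` here are relative errors: values below 1 mean the model beats trivially predicting the mean.

```r
# Toy check of the error metrics above (hypothetical values)
obs_toy  <- c(1, 2, 3)
pred_toy <- c(1.5, 2, 2.5)
mae(obs_toy, pred_toy)                   # mean(0.5, 0, 0.5)   = 1/3
mse(obs_toy, pred_toy)                   # mean(0.25, 0, 0.25) = 1/6
rmse(obs_toy, pred_toy, mean(obs_toy))   # 0.5 / 2 = 0.25
rmae(obs_toy, pred_toy, mean(obs_toy))   # 1   / 2 = 0.5
```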
##########################################
meanVal <- mean(train$poraba)
meanVal
predTrivial <- rep(meanVal, nrow(test))
#Linear regression model 1
model <- lm(poraba ~ ., train)
predicted <- predict(model, test)
observed <- test$poraba
rmse(observed, predicted, mean(train$poraba)) #0.06430346
rmae(observed, predicted, mean(train$poraba)) #0.1989883
###
model1 <- lm(poraba ~ maxPoraba + avgPoraba + minPoraba + sumPoraba, train)
predicted <- predict(model1, test)
observed <- test$poraba
rmse(observed, predicted, mean(train$poraba)) #0.0713572
rmae(observed, predicted, mean(train$poraba)) #0.1962335
modelEval(data = porabac, formula = as.formula(poraba ~ .), modelFun = lm,
evalFun = "rmse", fType = "", modelType = "reg")
modelEval(data = porabac, formula = as.formula(poraba ~ maxPoraba + avgPoraba + minPoraba + sumPoraba),
modelFun = lm, evalFun = "rmse", fType = "", modelType = "reg")
#################################################################################################
#Regression model 2 regression tree
library(rpart)
library(rpart.plot)
rt.model <- rpart(poraba ~ ., data=train)
predicted <- predict(rt.model, test)
rmse(test$poraba, predicted, mean(train$poraba)) #0.09481967
rmae(test$poraba, predicted, mean(train$poraba)) #0.2785586
###
rt.model1 <- rpart(poraba ~ maxPoraba + avgPoraba + minPoraba + sumPoraba, data=train)
predicted <- predict(rt.model1, test)
rmse(test$poraba, predicted, mean(train$poraba)) #0.09481967
rmae(test$poraba, predicted, mean(train$poraba)) #0.2785586
modelEval(data = porabac, formula = as.formula(poraba ~ .), modelFun = rpart,
evalFun = "rmse", fType = "", modelType = "reg")
modelEval(data = porabac, formula = as.formula(poraba ~ maxPoraba + avgPoraba + minPoraba + sumPoraba),
modelFun = rpart, evalFun = "rmse", fType = "", modelType = "reg")
#Regression model 3 nevronska mreza
library(nnet)
set.seed(0)
min_vals <- apply(train[c(3, 5:14, 18:21)], 2, min)
max_vals <- apply(train[c(3, 5:14, 18:21)], 2, max)
normTrain <- as.data.frame(scale(train[c(3, 5:14, 18:21)], center = min_vals, scale = max_vals - min_vals))
normTrain$poraba <- train$poraba
normTest <- as.data.frame(scale(test[c(3, 5:14, 18:21)], center = min_vals, scale = max_vals - min_vals))
normTest$poraba <- test$poraba
nn.model <- nnet(poraba ~ ., normTrain, size = 5, decay = 0.0001, maxit = 10000, linout = T)
predicted <- predict(nn.model, normTest)
rmse(test$poraba, predicted, mean(normTrain$poraba)) #0.06995541
rmae(test$poraba, predicted, mean(normTrain$poraba)) #0.1985665
###
nn.model1 <- nnet(poraba ~ maxPoraba + avgPoraba + minPoraba + sumPoraba,
normTrain, size = 5, decay = 0.0001, maxit = 10000, linout = T)
predicted <- predict(nn.model1, normTest)
rmse(test$poraba, predicted, mean(normTrain$poraba)) #0.07099577
rmae(test$poraba, predicted, mean(normTrain$poraba)) #0.195312
##############k nearest neighbor######################################################################
library(kknn)
knn.model <- kknn(poraba ~ ., train, test, k = 5)
predicted <- fitted(knn.model)
rmse(test$poraba, predicted, mean(train$poraba)) #0.05474145
rmae(test$poraba, predicted, mean(train$poraba)) #0.1970113
###
knn.model1 <- kknn(poraba ~ maxPoraba + avgPoraba + minPoraba + sumPoraba, train, test, k = 5)
predicted <- fitted(knn.model1)
rmse(test$poraba, predicted, mean(train$poraba)) #0.08148342
rmae(test$poraba, predicted, mean(train$poraba)) #0.211432
modelEvalKNN(data = porabac, formula = as.formula(poraba ~ .), evalFun = "rmse")
modelEvalKNN(data = porabac,
formula = as.formula(poraba ~ maxPoraba + avgPoraba + minPoraba + sumPoraba),
evalFun = "rmse")
|
fb00d915113818573db1aca160f76b9ccf62f3e8
|
edf7137cb8ddcb0df058fed72a267459baa75c30
|
/Clustering/Dataset2/part3.R
|
daf38d42268d61058962cd265f997649c970d3d9
|
[] |
no_license
|
pwalawal/DataMining
|
8f18f5790df70c48796e34570ab7c8e40241d54b
|
3a009705e8508fa7c7892c1a18d0b4f58e5d1c45
|
refs/heads/master
| 2021-04-26T16:04:37.260657
| 2016-10-21T14:32:47
| 2016-10-21T14:32:47
| 71,570,587
| 0
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,771
|
r
|
part3.R
|
install.packages("ggplot2")
library(ggplot2)
install.packages("gdata")
library(gdata)
install.packages("gplots")
library(gplots)
install.packages("class")
library(class)
install.packages("datasets")
library(datasets)
install.packages("Matrix")
library(Matrix)
install.packages("MatrixModels")
library(MatrixModels)
install.packages("cluster")
library(cluster)
install.packages("fpc")
library(fpc)
install.packages("e1071")
library(e1071)
install.packages("scatterplot3d")
library(scatterplot3d)
install.packages("MASS")
library(MASS)
datasetCsv=read.csv("dataset2.csv")
dataset <- datasetCsv[,2:6]
# Excluded the cluster label, as it is only given to check the performance of our method
#datasetbind<-rbind(dataset$x, dataset$y, dataset$z, dataset$w, dataset$class)
#largecluster<-cmeans(datasetbind,2,20,verbose=FALSE,dist = "euclidean",method="cmeans")
#Cluster found after binding the 4 dimensions given in dataset
largecluster<-cmeans(dataset,2,20,verbose=FALSE,dist = "euclidean",method="cmeans")
print(largecluster)
print(largecluster$size)
#print(largecluster$cluster)
plot(dataset, col=largecluster$cluster)
#2d plot of cluster.
#scatterplot3d(largecluster$membership,color=largecluster$cluster,angle=60, scale.y=0.75)
#This gives 3d plot of cmeans clustering from scatterplot3d package.
#Detaching packages
detach("package:ggplot2", unload=TRUE)
detach("package:gdata", unload=TRUE)
detach("package:cluster", unload=TRUE)
detach("package:gplots", unload=TRUE)
detach("package:Matrix", unload=TRUE)
detach("package:MatrixModels", unload=TRUE)
detach("package:datasets", unload=TRUE)
detach("package:fpc", unload=TRUE)
detach("package:scatterplot3d", unload=TRUE)
|
934cff57d723016a92c9a3eb2f4deca9ed9a296b
|
0500ba15e741ce1c84bfd397f0f3b43af8cb5ffb
|
/cran/paws.application.integration/man/swf_deprecate_domain.Rd
|
b1310ad4aa2da899fddf18b30bb516f73e03d0db
|
[
"Apache-2.0"
] |
permissive
|
paws-r/paws
|
196d42a2b9aca0e551a51ea5e6f34daca739591b
|
a689da2aee079391e100060524f6b973130f4e40
|
refs/heads/main
| 2023-08-18T00:33:48.538539
| 2023-08-09T09:31:24
| 2023-08-09T09:31:24
| 154,419,943
| 293
| 45
|
NOASSERTION
| 2023-09-14T15:31:32
| 2018-10-24T01:28:47
|
R
|
UTF-8
|
R
| false
| true
| 808
|
rd
|
swf_deprecate_domain.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/swf_operations.R
\name{swf_deprecate_domain}
\alias{swf_deprecate_domain}
\title{Deprecates the specified domain}
\usage{
swf_deprecate_domain(name)
}
\arguments{
\item{name}{[required] The name of the domain to deprecate.}
}
\description{
Deprecates the specified domain. After a domain has been deprecated it cannot be used to create new workflow executions or register new types. However, you can still use visibility actions on this domain. Deprecating a domain also deprecates all activity and workflow types registered in the domain. Executions that were started before the domain was deprecated continues to run.
See \url{https://www.paws-r-sdk.com/docs/swf_deprecate_domain/} for full documentation.
}
\keyword{internal}
|
e43f3185be686cacd0116ffa784fc4de44957654
|
99b6b013dbfcff93cca1c7bbee22b7f58eec8dc1
|
/setpars-bh-joint.R
|
b13dbd1ecea14c894ab62be5f24d04c34fe6c79f
|
[] |
no_license
|
spencerwoody/safab-code
|
46c5d25349322740ca6e9380c39d625f85c54036
|
be9272456f571617229fd13fdf68c283af637c25
|
refs/heads/master
| 2021-01-05T23:35:53.965902
| 2020-02-17T17:46:52
| 2020-02-17T17:46:52
| 241,168,420
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 584
|
r
|
setpars-bh-joint.R
|
## Number of iterations
nMC <- 1000
# Target FDR
qstar <- 0.2
# confidence level
alpha <- 0.10
p <- 0.2
tau <- sqrt(3)
## Number of subjects
Nj <- 2000
## Number of observations per subject
Ni <- 3
## sigma <- 1 * sqrt(Ni - 1)
sigma <- 1 * sqrt(Ni)
## SD for ybar used for confidence intervals
## sigma_mean <- sigma / sqrt(Ni - 1)
sigma_mean <- sigma / sqrt(Ni)
# Acceptance region
tstar <- 2
## t <- tstar * sigma / sqrt(Ni - 1)
t <- tstar * sigma / sqrt(Ni)
## Number of folds
numFolds <- 5
foldid <- rep(1:numFolds, each = Nj / numFolds)
## Clipping
clip <- 0.1
|
77c61b6b8c9d555a6656a0a9ea9ee7fdcaa1ef9e
|
0575a2c951639cfe77812dd33f8f024898b4a932
|
/man/make_labels_directions.Rd
|
f242426c2e320b033af2b9b499900a952eb91bb4
|
[
"MIT"
] |
permissive
|
bayesiandemography/demprep
|
8f400672fbbee9d92f852056d4ff7cb31a7fc87a
|
3aa270ff261ab13570f3ba261629031d38773713
|
refs/heads/master
| 2021-12-29T04:00:24.929536
| 2021-12-16T22:15:01
| 2021-12-16T22:15:01
| 204,109,024
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 1,082
|
rd
|
make_labels_directions.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/make_labels.R
\name{make_labels_directions}
\alias{make_labels_directions}
\title{Make Directions labels}
\usage{
make_labels_directions(x)
}
\arguments{
\item{x}{A character vector.}
}
\value{
A character vector
}
\description{
Make labels with the format expected for an
object of class \code{"Directions"}
(as defined in the demarray package).
This function would not normally be called directly
by end users.
}
\details{
\code{x} is a character vector, with no duplicates.
The elements of \code{x} must belong
to \code{c("In", "Out", NA)}.
}
\examples{
make_labels_directions(x = c("In", NA, "Out"))
}
\seealso{
\code{\link{make_labels_categories}},
\code{\link{make_labels_triangles}},
\code{\link{make_labels_quantiles}},
\code{\link{make_labels_integers}},
\code{\link{make_labels_intervals}},
\code{\link{make_labels_quantities}},
\code{\link{make_labels_quarters}},
\code{\link{make_labels_months}},
\code{\link{make_labels_dateranges}},
\code{\link{make_labels_datepoints}}
}
\keyword{internal}
|
d033e35181ab55fb576768c68ea152df72f397b1
|
432a02b2af0afa93557ee16176e905ca00b653e5
|
/Misc/estimate_overlap_bases.R
|
17f59c6e9176c5f02a5c8da4c79125e39014794f
|
[] |
no_license
|
obigriffith/analysis-projects-R
|
403d47d61c26f180e3b5073ac4827c70aeb9aa6b
|
12452f9fc12c6823823702cd4ec4b1ca0b979672
|
refs/heads/master
| 2016-09-10T19:03:53.720129
| 2015-01-31T19:45:05
| 2015-01-31T19:45:05
| 25,434,074
| 0
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 396
|
r
|
estimate_overlap_bases.R
|
estimate_loss=function(n,mean,sd){
data=rnorm(n=n,mean=mean,sd=sd)
data_round=round(data)
data_round_lt200=data_round[data_round<200]
data_round_lt200_min50=data_round_lt200
data_round_lt200_min50[data_round_lt200_min50<50]=50
lost_bases=data_round_lt200_min50-200
perc_lost=((sum(lost_bases)*-1)/(1000000*200))*100
return(perc_lost)
}
n=1000000
mean=370
sd=76
estimate_loss(n,mean,sd)
|
d788d5474ffadfe7b6649f9333d0d1f5bf1daef4
|
0b72c2e836e7ba8590dd9cfd3f8dd67798c5eedc
|
/script/update/gwas/log/2015-05-17_UpdateDbgap.r
|
649c34889a6cbabf46ebb8f023c91a0fee738a87
|
[] |
no_license
|
leipzig/rchive
|
1fdc5b2b56009e93778556c5b11442f3dbece42c
|
8814456b218137eafe57bfb19adda19c0d0d625b
|
refs/heads/master
| 2020-12-24T12:06:15.051730
| 2015-08-06T11:59:08
| 2015-08-06T11:59:08
| 40,312,197
| 2
| 0
| null | 2015-08-06T15:24:31
| 2015-08-06T15:24:31
| null |
UTF-8
|
R
| false
| false
| 633
|
r
|
2015-05-17_UpdateDbgap.r
|
library(devtools);
install_github("zhezhangsh/rchive");
library(rchive);
cat('Downloading dbGaP analyses\n');
meta<-DownloadDbGap();
cat('Retrieve dbGaP p values\n');
RetrieveDbGapStat(rownames(meta), meta[,'study'], stat.name='p value');
cat('Summarize dbGaP metadata\n');
SummarizeDbGap(meta);
cat('Update log\n');
UpdateLog(meta, paste(Sys.getenv("RCHIVE_HOME"), 'data/gwas/public/dbgap', sep='/'));
tm<-strsplit(as.character(Sys.time()), ' ')[[1]][1];
fn0<-paste(Sys.getenv("RCHIVE_HOME"), 'source/update/gwas/UpdateDbgap.r', sep='/');
fn1<-paste(Sys.getenv("RCHIVE_HOME"), '/source/update/gwas/log/', tm, '_UpdateDbgap.r', sep='');
file.copy(fn0, fn1)
|
516556e5d088b1d3231fabe93091dc74411d5d93
|
75d662fac4958ce9f117465ec6203e6cc3fcbaf7
|
/dplyr toy script.R
|
1905d748cf9db4332194db5cd9279c3795f1a2c1
|
[] |
no_license
|
chrisqiqiu/dplyr_toy
|
951af4dca9c26f80c855e77bc646d75607c709e3
|
331cdc74c7569c0b4338ba71a76d4dce5d0ae1bb
|
refs/heads/master
| 2021-05-02T10:35:11.568701
| 2019-01-20T10:30:22
| 2019-01-20T10:30:22
| 120,759,795
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 6,342
|
r
|
dplyr toy script.R
|
#install.packages('R.utils')
library(R.utils)
library(tidyverse)
# I created a folder called Test in the workspace just to separate these files from my other files
# Create a function that downloads a file from a URL, unzips it, reads it into memory, and returns it as a data frame
loadFile<- function(url) {
fileName<- paste0("./Test/",gsub("https.*NetworkImpression_(.*)\\.log\\.gz",'\\1', url))
download.file(url,paste0(fileName,".gz"))
gunzip(paste0(fileName,".gz"))
read_delim(fileName,delim="\xfe")
}
# put all the urls in a list
list_of_files <- c("https://s3-ap-southeast-2.amazonaws.com/bohemia-test-tasks/CLD/NetworkImpression_246609_03-26-2016.log.gz",
"https://s3-ap-southeast-2.amazonaws.com/bohemia-test-tasks/CLD/NetworkImpression_246609_03-27-2016.log.gz",
"https://s3-ap-southeast-2.amazonaws.com/bohemia-test-tasks/CLD/NetworkImpression_246609_03-28-2016.log.gz")
# i use lapply here as i think it's neater than for loop
list_of_frames<- lapply(list_of_files,function(url) loadFile(url) )
# rbind all the dataframes in the list into one dataframe
df<- bind_rows(list_of_frames)
df
# observe the structure of the dataframe
str(df)
head(df,100)
df %>% summarise(distinctUserID=n_distinct(`User-ID`,na.rm=TRUE),distinctTime=n_distinct(Time),numberOfRow=n())
#984207 251781 3101915
# check if there's any na in user-id column
df %>%filter(is.na(`User-ID`) == TRUE) # return nothing
# observe the pattern of the User-ID, found there're quite a few records with only 1 character long
df %>%group_by(nchar(`User-ID`)) %>% summarise(n())
# nchar(`User-ID`) n()
# (int) (int)
# 1 1 550448
# 2 28 2551467
df %>%filter(nchar(`User-ID`) == 1) %>% print(n=50)
df %>%filter( nchar(`User-ID`) == 1 ) %>% group_by(`User-ID`) %>% summarise(n()) # they are all 0
# there're quite a lot of User-id which is 0 which doesn't look right.
# so i filter it out before i calculate the Average number of touch points per user
## Task 2 (1) Average number of touch points per user (User-ID column represents ID of user)
# get the total number of touch points and total distinct users. and then divide the total number of touch points by total distinct users
df %>%filter( nchar(`User-ID`) != 1 )%>% summarise(distinctUserID=n_distinct(`User-ID`,na.rm=TRUE),distinctTime=n_distinct(Time),numberOfRow=n())
# distinctUserID distinctTime numberOfRow
# (int) (int) (int)
# 1 984206 246329 2551467
2551467/984206
# 2.592412
# or we can get the same result using mean function . I filtered the user-ID not equal to 0 out as I think it's not a valid user id
df %>%filter( nchar(`User-ID`) != 1 )%>%group_by(`User-ID`) %>% summarise(totalNumberEachGroup=n())%>% summarise(totalNumberEachGroup=mean(totalNumberEachGroup))
# Source: local data frame [1 x 1]
#
# totalNumberEachGroup
# (dbl)
# 1 2.592412
## Task 2 (2) Top 5 the most frequently used creative size (e.g. 300x250)
# group the dataframe by create size and then sort it in descending order
df %>% group_by(`Creative-Size-ID`) %>% summarise(n=n()) %>% arrange(desc(n))
# Creative-Size-ID n
# (chr) (int)
# 1 0x0 1405085
# 2 300x250 1176420
# 3 728x90 263758
# 4 300x50 162879
# 5 160x600 60851
# 6 300x600 31958
# 7 320x50 711
# 8 120x600 246
# 9 970x250 7
# top 5 : 300x250, 728x90, 300x50,160x600 ,300x600
# check if the counts make sense
df %>% summarise(distinctSiteID=n_distinct(`Site-ID`),distinctADID=n_distinct(`Ad-ID`),distinctCreativeID=n_distinct(`Creative-ID`),distinctSize=n_distinct(`Creative-Size-ID`),numberOfRow=n())
# distinctSiteID distinctADID distinctCreativeID distinctSize numberOfRow
# (int) (int) (int) (int) (int)
# 1 17 107 77 9 3101915
# Task 2 (3) Average time before (I think you mean between?) 1st and 2nd touch (ignoring same time)
# observe the difference between 'Time' and 'Time-UTC-Sec'
df[,c('Time','Time-UTC-Sec')]
df %>%group_by(nchar(`Time-UTC-Sec`)) %>% summarise(n())
df %>% filter( `Time-UTC-Sec` == 1458922867 ) %>% select (everything())
df %>% sample_n(100) %>% select(Time,`Time-UTC-Sec`) %>% print(n=100)
df %>% filter(nchar(`User-ID`) != 1) %>% # filter out the 0 IDs, which don't look like real IDs
group_by(`User-ID`,`Time-UTC-Sec`) %>% arrange(`User-ID`,`Time-UTC-Sec` ) %>% filter(row_number(`Time-UTC-Sec`)==1 ) %>% # keep one row per (User-ID, time) so identical touch times are ignored
group_by(`User-ID`) %>% arrange(`Time-UTC-Sec`) %>% filter(row_number(`Time-UTC-Sec`)<=2 ) %>% # keep the first two touch times
mutate(x=last(`Time-UTC-Sec`)-first(`Time-UTC-Sec`)) %>% # add a temporary column with the difference between the first and second touch points
filter(row_number(`User-ID`) ==1 ) %>% # keep one row per user, since each group currently holds two rows (first and second touch)
ungroup %>% summarise(AverageTimeofFirstTwoTouchPoints=mean(as.numeric(x)))
#8332.629 secs
# Source: local data frame [1 x 1]
#
# AverageTimeofFirstTwoTouchPoints
# (dbl)
# 8332.629
## Task3 advanced data transformation
# grab a random sample to observe
df %>% sample_n(100) %>% arrange(`Time-UTC-Sec`) %>% select(Time,`Time-UTC-Sec`) %>% print(n=100)
# split the data frame by User-ID and sort by time so each user's touch points run from earliest to latest,
# then for each user concatenate all the rows of the Site-ID column using " > " as the delimiter,
# and finally write the data frame out to csv
df %>% filter(nchar(`User-ID`) != 1) %>%
group_by(`User-ID`) %>% arrange(`Time-UTC-Sec`) %>%
summarise( path=paste0(as.character(`Site-ID`) , collapse = " > ") ) %>% select(`User-ID`,path) %>%
write.csv( file = "./Test/Task3.csv", row.names = FALSE)
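The path-building step above is easiest to see on a tiny example. The tibble `toy` and its user/site names are invented for illustration:

```r
library(dplyr)

toy <- tibble(
  `User-ID`      = c("u1", "u1", "u2"),
  `Site-ID`      = c("s3", "s7", "s2"),
  `Time-UTC-Sec` = c(20, 10, 5)
)

toy %>%
  group_by(`User-ID`) %>%
  arrange(`Time-UTC-Sec`) %>%                              # order every touch by time
  summarise(path = paste0(`Site-ID`, collapse = " > "))
# u1 -> "s7 > s3" (s7 happened first), u2 -> "s2"
```

`paste0(..., collapse = " > ")` collapses each group's rows, in their current (time-sorted) order, into a single path string.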
## Task 4
task4Frame <- read_delim("./Test/tempFile",delim="\t")
read_delim("./Test/unin.txt",delim="\t")
read.delim("./Test/unin.txt", fileEncoding="UCS-2LE")
read.delim("./Test/ESD_Conversionreport_05182016.csv", fileEncoding="UCS-2LE")
===== file: /man/importance_sampling.Rd (repo: codatmo/stanIncrementalImportanceSampling) =====
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/importance_sampling.R
\name{importance_sampling}
\alias{importance_sampling}
\title{Calculate Importance Weights}
\usage{
importance_sampling(
originalFit,
oldData,
newData,
model_code = originalFit@stanmodel@model_code[1]
)
}
\arguments{
\item{originalFit}{stanfit containing MCMC samples}
\item{oldData}{list() containing data used to fit originalFit}
\item{newData}{list() containing the new input data}
\item{model_code}{string containing Stan model code for originalFit (optional)}
}
\value{
list containing weights, neff, lpOriginal and lpNew
}
\description{
This function calculates importance weights for a previously fitted Stan
model given a new set of input data. Given a stanfit object with MCMC samples
and a new set of input data for the stan model, calculate importance sampling
weights. For example, a time series model predicting into the future can be
updated with new data post-hoc when data becomes available.
}
===== file: /rmd-setup.R (repo: Gedevan-Aleksizde/tokyor-91-rmd) =====
##### (1) About the RStudio version ####
# Version 1.4.1103 throws an error on Windows when using Python.
# If you want to use Python, either wait for a newer release or
# install one of the daily builds below:
# https://dailies.rstudio.com/rstudio/oss/windows/
##### (2) Installing packages ####
# Even if they are already installed, update them to the latest versions.
# If a dialog box asks you anything, answer NO!
install.packages(
  c("tidyverse",
    "remotes",
    "rmarkdown",
    "bookdown",
    "officedown",
    "citr",
    "ymlthis",
    "svglite",
    "kableExtra"
  )
)
##### (3) Installing ragg ####
install.packages("ragg")
# When installing ragg on Mac/Linux you may be asked to install additional external libraries.
# In that case, install them manually. The commands will look roughly like the following.
#
# For example, on Ubuntu:
# sudo apt install libharfbuzz-dev libfribidi-dev
#
# On Mac, install them with homebrew:
# brew install harfbuzz fribidi
##### (4) Preparing for PDF figures ####
# If you also want to output figures as PDF, X11 and Cairo are required.
# On Windows, check that the following is TRUE:
capabilities()[c("cairo")]
# On Mac or Linux, check that both of these are TRUE:
capabilities()[c("X11", "cairo")]
# Windows and most Linux distributions need no extra care, but
# recent Macs apparently do not ship the required programs by default.
# On a Mac, installing the following two makes it work (homebrew is required).
# Note, however, that xquartz has been reported to fail this way;
# also try downloading the dmg from https://www.xquartz.org/ and installing it manually.
#
# brew install cairo
# brew cask install xquartz
##### (5) Installing the rmdja package ####
# PDF setup is complicated, so I recommend using my rmdja package.
# As of this session the latest version is v0.4.5
remotes::install_github('Gedevan-Aleksizde/rmdja', upgrade = "never")
##### (6) Installing TeX ####
# Skip this if you already have TeX installed or do not plan to create PDF documents.
# Note that it takes a fair amount of time.
tinytex::install_tinytex()
tinytex::tlmgr_install("texlive-msg-translations")
##### (7) Installing common fonts (Linux only) ####
# To simplify the rest of the instructions, we standardize the fonts on Linux.
# This step is only needed on Linux.
# On a Linux OS, the Noto fonts are recommended.
# For example, on Ubuntu (RStudio Cloud is also Ubuntu) you can install them with:
# sudo apt install fonts-noto-cjk fonts-noto-cjk-extra
# ---- restart RStudio here, just in case ----
# okular is convenient for viewing PDFs. It is available from the MS Store:
# https://www.microsoft.com/ja-jp/p/okular/9n41msq1wnm8?rtc=1&activetab=pivot:overviewtab
#
# Sumatra is lightweight, but you cannot check font embedding with it.
# ----- The packages below are mostly outside the basic tutorial, but are useful extensions
# For those who want to use Python
install.packages("reticulate")
# For those who want to use Julia
install.packages("JuliaCall")
# If ragg is unavailable, or you want to avoid garbled text in PDF figures outside Linux, try:
remotes::install_github("Gedevan-Aleksizde/fontregisterer", upgrade = "never")
install.packages(c(
  "xaringan",
  "bookdownplus",
  "blogdown",
  "pagedown"
))
# "Word"-related packages
install.packages(
  c("officer", "rvg", "openxlsx",
    "ggplot2", "flextable", "xtable", "rgl", "stargazer",
    "tikzDevice", "xml2", "broom")
)
# Export plots to PowerPoint or Word
remotes::install_github("tomwenseleers/export")
# Edit Word documents while respecting tracked changes (development currently paused)
remotes::install_github("noamross/redoc")
===== file: /man/vivid_adj.Rd (repo: scchess/VividR) =====
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/vivid_adj.R
\name{vivid_adj}
\alias{vivid_adj}
\title{Adjust variables in a VIVID object}
\usage{
vivid_adj(vividObj, minFinalFeatures)
}
\arguments{
\item{vividObj}{An object passed from the vivid() funciton.}
\item{minFinalFeatures}{Integer value for the minimum number of final features selected.}
}
\value{
A list of results:
\itemize{
\item{optFeatures}: A character vector containing all features in the final vivid model, adjusted for minFinalFeatures.
\item{n}: A count of the new number of optimum features.
}
}
\description{
Adjust variables in a VIVID object
}
\examples{
library('ropls')
data("sacurine") #Load sacurine dataset from the 'ropls' package
dat = sacurine$dataMatrix
outcomes = sacurine$sampleMetadata$gender
vividResults = vivid(x = dat,
y = outcomes)
vivid_adj(vividResults,
minFinalFeatures = 10)
}
===== file: /BEP/simulation/number_of_retained_components.R (repo: lucasnijder/LS_BEP_2021) =====
# packages used in this script (loaded explicitly; the original relied on them being attached already)
library(dplyr)
library(ggplot2)
library(ggpubr)    # ggarrange()
library(reshape2)  # melt()
library(glue)
library(ggtext)    # element_markdown()
highlight = function(x, pat, color="black", family="") {
  x <- substr(x, 3, 3)
  ifelse(grepl(pat, x), glue("<b style='font-family:{family}; color:{color}'>{x}</b>"), x)
}
Plot_per_error <- function(total_res, error){
total_res <- total_res %>% arrange(index)
total_res <- total_res[,-11]
if(error == 0.01){
i <- 1
}
if(error == 0.1){
i <- 13
}
if(error == 0.15){
i <- 25
}
p_list <- vector('list', 6)
for(a in seq(0,12,2)){
bar_dat <- melt(total_res[(i+a):(i+a+1),])
bar_dat <- bar_dat %>% mutate(value = value*100/num_mc)
p <- ggplot(data=bar_dat, aes(x=variable, y=value, fill=type)) +
geom_bar(stat="identity", position = 'dodge')+
scale_fill_manual(values = c('coral1', 'cyan3')) +
scale_x_discrete(labels= function(x) highlight(x, "3", "black")) +
xlab("number of PCs") + ylab("% of outcomes") +
ylim(0,100) +
theme(axis.title.x = element_text(size=8),
axis.title.y = element_text(size=8),
plot.margin = unit(c(1,1,1,1), "lines"),
legend.position = 'none',
axis.text.x=element_markdown())
p_list[[(a/2)+1]] <- p
}
return(ggarrange(p_list[[1]],
p_list[[2]],
p_list[[3]],
p_list[[4]],
p_list[[5]],
p_list[[6]],
labels = c(" A. n = 25 and p = 10",
" B. n = 50 and p = 10",
" C. n = 100 and p = 10",
" D. n = 25 and p = 20",
" E. n = 50 and p = 20",
" F. n = 100 and p = 20"),
ncol = 3, nrow = 2,
font.label = list(size = 10,
color = "black"),
hjust = 0,
vjust = 1,
common.legend = T))
}
Plot_but_not_per_n <- function(total_res){
total_res <- total_res[,-11]
p <- rep(c(10,10,10,20,20,20),3)
n <- rep(c(25,50,100),6)
error <- c(rep(0.01,6), rep(0.1,6), rep(0.15,6))
total_res <- cbind(total_res, p)
total_res <- cbind(total_res, n)
total_res <- cbind(total_res, error)
total_res <- total_res %>% group_by(error, p, type) %>% summarise(PC0=sum(PC0),PC1=sum(PC1),PC2=sum(PC2),
PC3=sum(PC3),PC4=sum(PC4),PC5=sum(PC5),
PC6=sum(PC6),PC7=sum(PC7),PC8=sum(PC8))
p_list <- vector('list', 6)
for(a in seq(1,12,2)){
bar_dat <- melt(total_res[a:(a+1),], id.vars = c('type','error','p'))
bar_dat <- bar_dat %>% mutate(value = round((value*100)/(num_mc*3),2))
p <- ggplot(data=bar_dat, aes(x=variable, y=value, fill=type)) +
geom_bar(stat="identity", position = 'dodge')+
scale_fill_manual(values = c('coral1', 'cyan3')) +
scale_x_discrete(labels= function(x) highlight(x, "3", "black")) +
xlab("number of PCs") + ylab("% of outcomes") +
ylim(0,100) +
theme(axis.title.x = element_text(size=8),
axis.title.y = element_text(size=8),
plot.margin = unit(c(1,1,1,1), "lines"),
legend.position = 'none',
axis.text.x=element_markdown())
p_list[[(a+1)/2]] <- p
}
return(ggarrange(p_list[[1]],
p_list[[4]],
p_list[[5]],
p_list[[2]],
p_list[[3]],
p_list[[6]],
labels = c(" A. noise = 1% and p = 10",
" B. noise = 10% and p = 10",
" C. noise = 15% and p = 10",
" D. noise = 1% and p = 20",
" E. noise = 10% and p = 20",
" F. noise = 15% and p = 20"),
ncol = 3, nrow = 2,
font.label = list(size = 10,
color = "black"),
hjust = 0,
vjust = 1,
common.legend = T))
}
Plot_PCA_SPCA_res <- function(){
for(type_str in list('PA', 'OC', 'AF', 'KG', 'CV')){
pca_res <- read.csv(file = sprintf('pca_%s_pred_comps.csv',type_str))
spca_res <- read.csv(file = sprintf('spca_%s_pred_comps.csv',type_str))
pca_res <- cbind(pca_res, type = rep('pca',nrow(pca_res)))
pca_res <- cbind(pca_res, index=seq(1:18))
spca_res <- cbind(spca_res, type = rep('spca',nrow(pca_res)))
spca_res <- cbind(spca_res, index=seq(1:18))
total_res <- rbind(pca_res, spca_res)
for(error in list(0.01,0.1,0.15)){
Plot_per_error(total_res, error)
ggsave(
sprintf(sprintf("%s_%s.png", type_str, error)),
path = "C:/Users/20175878/Documents/bep_with_version_control/figs",
scale = 1,
width = 14,
height = 6,
units = "in",
dpi = 600,
limitsize = TRUE)
}
Plot_but_not_per_n(total_res)
ggsave(
sprintf(sprintf("%s_overall.png", type_str)),
path = "C:/Users/20175878/Documents/bep_with_version_control/figs",
scale = 1,
width = 14,
height = 8,
units = "in",
dpi = 600,
limitsize = TRUE)
}
}
num_mc <- 50
Plot_PCA_SPCA_res()
===== file: /subpixel-accuracy/analysis.R (repo: alessandro-gentilini/alessandro-gentilini.github.io) =====
pdf("R_plot_%d.pdf")
df<-readr::read_csv('data.csv')
for(i in seq(1,nrow(df))){
if(is.na(df$value[i])){
df$value[i]=df$num[i]/df$den[i]
}
}
library(ggplot2)
plot(ggplot(data=df)
+geom_point(aes(x=year,y=value))
+scale_y_continuous(trans="log10",breaks=c(0.001,df$value))
+ylab('Accuracy in pixel (log scale)'))
===== file: /Vuln_Index/eck3.sensitivityGraphs.R (repo: mczapanskiy-usgs/WERC-SC) =====
## this script is used to graph the summary scores for population, collision and displacement sensitivity
## box and whisker plots
scores <- read.csv("VulnIndexFinalSensitivityScores.csv") ## matrix of final PS, CS, and DS
library(ggplot2)
# Population Vulnerability
scores$CommonName <- factor(scores$CommonName, levels = scores$CommonName[order(scores$Order)]) # will keep original order of species
pop <- ggplot(scores, aes(x=CommonName, y=PopBest, color=Groups))
pop <- pop + geom_errorbar(aes(ymin=PopBest-PopLower, ymax=PopBest+PopUpper, width=.5)) + geom_point(size=1, color="black") # establish graph and uncertainty lines (geom_point takes size, not width)
pop <- pop + facet_grid(~Groups, scales="free_x", space="free", labeller=labeller(CommonName=label_both)) # facets = species groupings
pop <- pop + theme_bw() + theme(axis.text.x=element_text(angle=90), axis.text.y=element_text(angle=90), axis.text=element_text(size=10), axis.title=element_text(size=16,face="bold")) # axis labels orientation and size
pop <- pop + theme(legend.position="none") + ylab("Population Vulnerability") + xlab("Species Name") + scale_y_continuous(breaks=c(1,2,3,4,5), limits=(c(0,5.5))) # axis labels size and tick marks
# Collision Vulnerability
scores$CommonName <- factor(scores$CommonName, levels = scores$CommonName[order(scores$Order)]) # will keep original order of species
CV <- ggplot(scores, aes(x=CommonName, y=ColBest, color=Groups))
CV <- CV + geom_errorbar(aes(ymin=ColBest-ColLower, ymax=ColBest+ColUpper, width=.5)) + geom_point(size=1, color="black") # establish graph and uncertainty lines (geom_point takes size, not width)
CV <- CV + facet_grid(~Groups, scales="free_x", space="free", labeller=labeller(CommonName=label_both)) # facets = species groupings
CV <- CV + theme_bw() + theme(axis.text.x=element_text(angle=90), axis.text.y=element_text(angle=90), axis.text=element_text(size=10), axis.title=element_text(size=16,face="bold")) # axis labels orientation and size
CV <- CV + theme(legend.position="none") + ylab("Population Collision Vulnerability") + xlab("Species Name") + scale_y_continuous(breaks=c(1,2,3,4,5,6,7,8,9,10), limits=(c(0,10))) # axis labels size and tick marks
# Displacement Vulnerability
scores$CommonName <- factor(scores$CommonName, levels = scores$CommonName[order(scores$Order)]) # will keep original order of species
DV <- ggplot(scores, aes(x=CommonName, y=DispBest, color=Groups))
DV <- DV + geom_errorbar(aes(ymin=DispBest-DispLower, ymax=DispBest+DispUpper, width=.5)) + geom_point(size=1, color="black") # establish graph and uncertainty lines (geom_point takes size, not width)
DV <- DV + facet_grid(~Groups, scales="free_x", space="free", labeller=labeller(CommonName=label_both)) # facets = species groupings
DV <- DV + theme_bw() + theme(axis.text.x=element_text(angle=90), axis.text.y=element_text(angle=90), axis.text=element_text(size=10), axis.title=element_text(size=16,face="bold")) # axis labels orientation and size
DV <- DV + theme(legend.position="none") + ylab("Population Displacement Vulnerability") + xlab("Species Name") + scale_y_continuous(breaks=c(1,2,3,4,5,6,7,8,9,10), limits=(c(0,10))) # axis labels size and tick marks
===== file: /code/dcnf-ankit-optimized/Results/QBFLIB-2018/A1/Database/Basler/terminator/stmt21_79_304/stmt21_79_304.R (repo: arey0pushpa/dcnf-autarky) =====
5433401a7397c5b7bd04551097cc9e65 stmt21_79_304.qdimacs 3123 10273
===== file: /R/expTable.R (repo: rpolicastro/kevin_scRNAseq_shiny) =====
#' Expression Table UI
#'
#' @inheritParams metadataPlotUI
#'
#' @export
expTableUI <- function(
id,
ident = "orig.ident",
clusters = "seurat_clusters"
) {
## Namespace.
ns <- NS(id)
## Get sample choices.
sample_sheet <- con %>%
tbl("samples") %>%
collect
experiments <- unique(sample_sheet$experiment)
sidebarLayout(
## Expression table UI.
sidebarPanel(width = 2,
selectInput(
inputId = ns("experiment"), label = "Experiment",
choices = experiments,
selected = experiments[1]
),
uiOutput(ns("samples")),
uiOutput(ns("clusters")),
textAreaInput(
inputId = ns("genes"), label = "Genes",
value = "tdTomato\ntdTomatoStop", rows = 3
),
checkboxInput(
inputId = ns("log2t"), label = "Log2+1 Transform",
value = FALSE
)
),
mainPanel(width = 10, DT::dataTableOutput(ns("table")))
)
}
#' Expression Table Server
#'
#' @inheritParams metadataPlotServer
#'
#' @export
expTableServer <- function(
id,
ident = "orig.ident",
clusters = "seurat_clusters"
) {
moduleServer(id, function(input, output, session) {
## Get sample table.
samps <- con %>%
tbl("samples") %>%
collect
samps <- as.data.table(samps)
## Get clusters for each experiment.
clusts <- reactive({
clusters <- con %>%
tbl(str_c(input$experiment, "_metadata")) %>%
distinct_at(clusters) %>%
pull(clusters)
return(clusters)
})
## Render the samples based on experiment.
output$samples <- renderUI({
ns <- session$ns
choices <- samps[experiment == input$experiment]$samples
pickerInput(
inputId = ns("samples"), label = "Samples",
choices = choices, selected = choices,
multiple = TRUE,
options = list(
`actions-box` = TRUE,
`selected-text-format` = "count > 1"
)
)
})
## Render the clusters based on experiment.
output$clusters <- renderUI({
ns <- session$ns
pickerInput(
inputId = ns("clusters"), label = "Clusters",
choices = clusts(), selected = clusts(),
multiple = TRUE,
options = list(
`actions-box` = TRUE,
`selected-text-format` = "count > 1"
)
)
})
## Get the metadata.
md <- reactive({
metadata <- con %>%
tbl(str_c(input$experiment, "_metadata")) %>%
filter_at(ident, all_vars(. %in% !!input$samples)) %>%
filter_at(clusters, all_vars(. %in% !!input$clusters)) %>%
select_at(c("cell_id", ident, clusters)) %>%
collect()
setDT(metadata, key = "cell_id")
return(metadata)
})
## Get the gene counts.
cn <- reactive({
genes <- str_split(input$genes, "\\s", simplify = TRUE)[1, ]
validate(
need(length(genes) <= 10, "Can only display 10 genes or less.")
)
counts <- con %>%
tbl(str_c(input$experiment, "_counts")) %>%
filter(gene %in% genes) %>%
collect()
setDT(counts, key = "cell_id")
counts <- counts[cell_id %in% md()[["cell_id"]]]
if (input$log2t) {
counts[, exp := log2(exp + 1)]
}
return(counts)
})
exp_table <- reactive({
## Make genes columns, and cells rows.
counts <- dcast(cn(), cell_id ~ gene, value.var = "exp")
## Merge in the meta-data with the counts.
counts <- merge(md(), counts)
return(counts)
})
output$table <- DT::renderDataTable(
{exp_table()},
extensions = "Buttons",
options = list(
order = list(list(2, "desc")),
dom = "Bfrtpli",
buttons = c('copy', 'csv', 'excel', 'print')
)
)
})
}
===== file: /TP1/Tp1.R (repo: cristianhernan/austral-mcd-aid) =====
|
#deshabilitar la notacion cientifica
options(scipen=999)
rm(list=ls())
gc()
library(dplyr)
library(tidyr)
reca_ene_raw <- read.csv("../../datasets/aid/tp1/RECA_CHAN_01_NEW.csv",header=TRUE, sep = ",")
reca_ene_raw <- select(reca_ene_raw,-RUNID)
reca_ene_filter <- filter(reca_ene_raw,PURCHASEAMOUNT > 0)
reca_feb_raw <- read.csv("../../datasets/aid/tp1/RECA_CHAN_02_NEW.csv",header=TRUE, sep = ",")
reca_feb_raw <- select(reca_feb_raw,-RUNID)
reca_feb_filter <- filter(reca_feb_raw,PURCHASEAMOUNT > 0)
reca_mar_raw <- read.csv("../../datasets/aid/tp1/RECA_CHAN_03_NEW.csv",header=TRUE, sep = ",")
reca_mar_raw <- select(reca_mar_raw,-RUNID)
reca_mar_filter <- filter(reca_mar_raw,PURCHASEAMOUNT > 0)
aux_data <- rbind(reca_ene_filter,reca_feb_filter,reca_mar_filter)
aux_data <- select(aux_data,-PURCHASETIME)
rm(reca_ene_raw,reca_feb_raw,reca_mar_raw,reca_ene_filter,reca_feb_filter,reca_mar_filter)
aux_data <-mutate(aux_data, CHANNELIDENTIFIER = ifelse(grepl("EMG",CHANNELIDENTIFIER,fixed = TRUE),"Tecno", "Manual"))
group_data <- aux_data %>% group_by(CUSTOMERID,CHANNELIDENTIFIER) %>%
summarize(cantidad = n(), monto = sum(PURCHASEAMOUNT))
cant_data <- group_data %>% select(-monto) %>% # drop monto before spreading so each customer collapses to a single row
  spread(key = CHANNELIDENTIFIER, value = cantidad)
cant_data[is.na(cant_data)] <- 0
colnames(cant_data) <- c("ACCS_MTHD_CD","CANT_RMAN","CANT_RTEC")
cant_data <- mutate(cant_data,CANT_RECARGAS=CANT_RMAN+CANT_RTEC)
monto_data <- group_data %>% select(-cantidad) %>% # drop cantidad before spreading, for the same reason as above
  spread(key = CHANNELIDENTIFIER, value = monto)
monto_data[is.na(monto_data)] <- 0
colnames(monto_data) <- c("ACCS_MTHD_CD","MONTO_MANUAL","MONTO_TECNO") # spread orders the channel columns alphabetically: Manual before Tecno
monto_data <- mutate(monto_data,MONTO_TOTAL=MONTO_TECNO+MONTO_MANUAL)
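The `spread()` pivot used above is easiest to see on a tiny frame. The tibble `toy` and its column names are invented for illustration:

```r
library(dplyr)
library(tidyr)

# long format: one row per (customer, channel)
toy <- tibble(id    = c(1, 1, 2),
              canal = c("Manual", "Tecno", "Manual"),
              n     = c(2, 5, 1))

spread(toy, key = canal, value = n)
# wide format: one row per customer, one column per channel
#    id Manual Tecno
#     1      2     5
#     2      1    NA
```

Customers without a value for some channel get `NA` in that column, which is why the script replaces NAs with 0 right after pivoting.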
nrow(monto_data)
#setnames(cant_data , new = c("ACCS_MTHD_CD","CANT_RECARGAS","CANT_RTEC"))
#monto_data <- spread(data = group_data, key = CHANNELIDENTIFIER, value = monto)
#spread(data = group_data, key = CHANNELIDENTIFIER, value = monto)
#tail(cant_data)
#group_data <- aux_data %>% group_by(CUSTOMERID,CHANNELIDENTIFIER) %>%
#summarize(cantidad = n(),monto = sum(PURCHASEAMOUNT))
#gather(data = group_data, key = "cantidad", value = "monto", 2:4)
#head(group2)
#df_2$weight, by = list(df_2$feed, df_2$cat_var), FUN = sum
#group_data <- aggregate(aux_data$PURCHASEAMOUNT, by = list(aux_data$CUSTOMERID, aux_data$CHANNELIDENTIFIER), FUN=sum)
#reca_grupo <- aux_data %>%
# group_by(CUSTOMERID) %>%
# summarise(MONTO_TOTAL = sum(PURCHASEAMOUNT, na.rm=TRUE),
# CANT_RECARGAS = n())
#head(reca_grupo)
#reca_clientes <- read.csv("../../datasets/aid/tp1/DNA_03_NEW.csv",header=TRUE, sep = ",")
#reca_clientes <- filter(reca_clientes,BASE_STAT_03 == "REJOINNER" | grepl("ACTIVE",BASE_STAT_03,fixed = TRUE))
#head(reca_clientes)
===== file: /data/genthat_extracted_code/AnaCoDa/examples/getTrace.Rd.R (repo: surayaaramli/typeRrh) =====
library(AnaCoDa)
### Name: getTrace
### Title: extracts an object of traces from a parameter object.
### Aliases: getTrace
### ** Examples
genome_file <- system.file("extdata", "genome.fasta", package = "AnaCoDa")
genome <- initializeGenomeObject(file = genome_file)
sphi_init <- c(1,1)
numMixtures <- 2
geneAssignment <- sample(1:2, length(genome), replace = TRUE) # random assignment to mixtures
parameter <- initializeParameterObject(genome = genome, sphi = sphi_init,
num.mixtures = numMixtures,
gene.assignment = geneAssignment,
mixture.definition = "allUnique")
trace <- getTrace(parameter) # empty trace object since no MCMC was performed
===== file: /assignment_04.02_PonisserilRadhakrishnan.R (repo: prrajeev/DSC520_Assignments) =====
|
# Assignment: ASSIGNMENT 4.2.2
# Name: Ponisseril, Radhakrishnan
# Date: 2021-07-04
## Load the tidyverse package
install.packages("tidyverse")
library(tidyverse) # loads ggplot2, which is used for the histogram below
library(readxl)
library(lubridate)
## Set the working directory to the root of your DSC 520 directory
setwd("/Users/RajeevP/dsc520")
## Load the housing dataset
housing_df <- read_excel("data/week-7-housing.xlsx")
housing_df
# Use the apply function on a variable in your dataset
data <- matrix(housing_df$building_grade)
apply(data,1,mean)
# Use the aggregate function on a variable in your dataset
aggregate(building_grade ~ year_built, housing_df, mean)
#3 Use the plyr function on a variable in your dataset.
# split some data, perform a modification to the data,
# and then bring it back together
year_2010 <- housing_df[housing_df$year_built == "2010", ]
year_2010$sale_warning[is.na(year_2010$sale_warning)] <- "Not Available"
other_years <- housing_df[housing_df$year_built != "2010", ]
merged_data <- rbind(year_2010,other_years)
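The split/modify/recombine pattern above can be sketched on a toy frame (the data frame `toy` and its values are made up for illustration):

```r
# three rows, two years, two missing notes
toy <- data.frame(year = c(2010, 2010, 2011),
                  note = c(NA, "ok", NA))

part <- toy[toy$year == 2010, ]                 # split off one group
part$note[is.na(part$note)] <- "Not Available"  # modify only that group
rest <- toy[toy$year != 2010, ]                 # the untouched remainder
rbind(part, rest)                               # recombine
# the 2010 NA becomes "Not Available"; the 2011 NA is left as-is
```

Only the rows in the split-off group are changed, which is the point of splitting before modifying.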
#4 Check distributions of the data
ggplot(data=merged_data, aes(x=year_built)) +
geom_histogram()
#5 Identify if there are any outliers
### There are outliers in year 1900
#6 Create at least 2 new variables
merged_data$SaleYear <- year(merged_data$`Sale Date`)
merged_data$SaleMonth<- month(merged_data$`Sale Date`)
merged_data
===== file: /R/Trial.Simulation.R (repo: vivienjyin/phase1RMD) =====
|
###################################################################################
###########Simulate Data from the Matrix and Do Stage 1-3##########################
###################################################################################
###model: yij = beta0 + beta1xi + beta2tj + ui + epiij#############################
######### ei = alpha0 + alpha1xi + alpha2xi^2 + gamma0ui + epi2i###################
######### epi ~ N(0, sigma_e); ui ~ N(0, sigma_u); epi2 ~ N(0, sigma_f)############
###################################################################################
#library(rjags)
#library(R2WinBUGS)
#library(BiocParallel)
# SimRMDEff <- function(seed=2441, tox_matrix, strDose = 1, chSize = 3, trlSize = 36, sdose = 1:6, MaxCycle = 6,
# tox.target = 0.28, eff.structure = c(0.1, 0.2, 0.3, 0.4, 0.7, 0.9)){
# # SimRMDEff <- function(seed=2441, tox_matrix, StrDose = 1, chSize = 3, trlSize = 36, sdose = 1:6, MaxCycle = 6,
# # tox.target = 0.28, eff.structure = c(0.1, 0.2, 0.3, 0.4, 0.7, 0.9),
# # eff.sd = 0.2, p1 = 0.2, p2 = 0.2,
# # ps1 = 0.2,
# # proxy.thrd = 0.1, thrd1 = 0.28, thrd2 = 0.28,
# # wm = matrix(c(0, 0.5, 0.75, 1 , 1.5,
# # 0, 0.5, 0.75, 1 , 1.5,
# # 0, 0 , 0 , 0.5, 1 ),
# # byrow = T, ncol = 5),
# # toxmax = 2.5, toxtype = NULL, intercept.alpha = NULL,
# # coef.beta = NULL, cycle.gamma = NULL){
# #revision031917
# #dummy variable defination for winbugs parameters
# beta_other <- 0;
# beta_dose <- 0;
# N1 <- 0;
# inprod <- 0;
# u <- 0;
# N2 <- 0;
# Nsub <- 0;
# alpha <- 0;
# gamma0 <- 0;
# #revision031917
# n.iter <- 3000;
# eff.sd <- 0.2; p1 <- 0.2; p2 <- 0.2;
# ps1 <- 0.2; proxy.thrd <- 0.1; thrd1 <- 0.28; thrd2 <- 0.28;
# wm <- matrix(c(0, 0.5, 0.75, 1 , 1.5,
# 0, 0.5, 0.75, 1 , 1.5,
# 0, 0 , 0 , 0.5, 1 ),
# byrow = T, ncol = 5);
# toxmax <- 2.5; toxtype = NULL; intercept.alpha = NULL;
# coef.beta <- NULL; cycle.gamma <- NULL;
# set.seed(seed)
# trialSize <- trlSize;
# doses <- sdose;
# cycles <- 1:MaxCycle;
# listdo <- paste0("D", doses)
# if(length(eff.structure) != length(doses))
# stop("Check if you have specified the efficacy mean structure for the correct number of doses.")
# if(nrow(wm) != 3)
# stop("Make sure that you specified the clinical weight matrix for three types of toxicities.")
# MaxCycle <- length(cycles)
# Numdose <- length(doses)
# flag.tox.matrix.null <- 0
# if(is.null(tox_matrix)){
# flag.tox.matrix.null <- 1
# tox_matrix <- array(NA, dim = c(Numdose, MaxCycle, 3, 5))
# if(length(toxtype) != 3)
# stop("Right now we are only considering three toxicity types, but we will relax this constraint in the future work.")
# if(length(intercept.alpha) != 4){
# stop("Exactly four intercepts alpha are needed for grade 0--4 in simulation!")
# }
# if(min(diff(intercept.alpha, lag = 1, differences = 1)) < 0){
# stop("Intercepts alpha for simulation must be in a monotonic increasing order!")
# }
# if(length(coef.beta) != 3){
# stop("Exactly three betas need to be specified for three types of toxicities!")
# }
# }
# #############################################
# #########Determin Scenarios##################
# #############################################
# w1 <- wm[1, ]
# w2 <- wm[2, ]
# w3 <- wm[3, ]
# ####Array of the normalized scores########
# nTTP.array <- function(w1,w2,w3){
# nTTP <- array(NA,c(5,5,5))
# for (t1 in 1:5){
# for (t2 in 1:5){
# for (t3 in 1:5){
# nTTP[t1,t2,t3] <-
# sqrt(w1[t1]^2 + w2[t2]^2 + w3[t3]^2)/toxmax
# }
# }
# }
# return(nTTP)
# }
# nTTP.all <- nTTP.array(w1,w2,w3)
# ####compute pdlt given probability array####
# pDLT <- function(proba){
# return(sum(proba[c(4,5), , ])
# + sum(proba[ ,c(4,5), ])
# + sum(proba[ , ,5])
# - sum(proba[c(4,5),,])*sum(proba[,c(4,5),])
# - sum(proba[c(4,5),,])*sum(proba[,,5])
# - sum(proba[,c(4,5),])*sum(proba[,,5])
# + sum(proba[c(4,5),,])*sum(proba[,c(4,5),])*sum(proba[,,5]))
# }
# ####compute mean nTTP given probability array######
# mnTTP <- function(proba){
# return(sum(proba * nTTP.all))
# }
# sc.mat <- matrix(NA, MaxCycle, Numdose)
# pdlt <- matrix(NA, MaxCycle, Numdose)
# for(i in 1:MaxCycle){ ##loop through cycle
# for(j in 1:Numdose){ ##loop through dose
# if(flag.tox.matrix.null == 0){
# nTTP.prob <- tox_matrix[j, i, ,]
# }else{
# # initialize storing matrix
# cump <- matrix(NA, nrow = length(toxtype), ncol = 5) # cumulative probability
# celp <- matrix(NA, nrow = length(toxtype), ncol = 5) # cell probability
# for(k in 1:length(toxtype)){
# # proportional odds model
# logitcp <- intercept.alpha + coef.beta[k] * j + cycle.gamma * (i - 1)
# # cumulative probabilities
# cump[k, 1:4] <- exp(logitcp) / (1 + exp(logitcp))
# cump[k, 5] <- 1
# # cell probabilities
# celp[k, ] <- c(cump[k,1], diff(cump[k, ], lag = 1, differences = 1))
# }
# nTTP.prob <- celp
# tox_matrix[j, i, ,] <- celp
# }
# proba <- outer(outer(nTTP.prob[1, ], nTTP.prob[2, ]), nTTP.prob[3, ])
# sc.mat[i, j] <- round(mnTTP(proba), 4)
# pdlt[i, j] <- round(pDLT(proba), 4)
# }
# }
# mEFF <- eff.structure
# sc <- rbind(sc.mat, mEFF, pdlt[1, ])
# colnames(sc) <- listdo
# rownames(sc) <- c("mnTTP.1st", "mnTTP.2nd", "mnTTP.3rd",
# "mnTTP.4th", "mnTTP.5th", "mnTTP.6th",
# "mEFF", "pDLT")
# ####################################################################################
# k <- 1 ####Let us use k to denote the column of altMat####
# doseA <- strDose ####Let us use doseA to denote the current dose assigned####
# cmPat <- chSize ####current patient size####
# masterData <- list()
# rec_doseA <- NULL
# stage1.allow <- NULL
# altMat <- matrix(0, Numdose, k)
# altMat[doseA, k] <- chSize
# ###############################################################
# ########################Stage 1################################
# ###############################################################
# #print('Stage 1')
# while(cmPat <= trialSize/2){
# rec_doseA <- c(rec_doseA, doseA)
# PatData <- NULL
# n <- chSize
# K <- MaxCycle
# ##############################################################
# #############design matrix for toxicity#######################
# ##############################################################
# dose.pat <- rep(doseA, n)
# xx <- expand.grid(dose.pat, 1:K)
# X <- as.matrix(cbind(rep(1, n * K), xx))
# X <- cbind(X, ifelse(X[, 3] == 1, 0, 1))
# colnames(X) <- c("intercept", "dose", "cycle", "icycle")
# ##############################################################
# #######design matrix for efficacy#############################
# ##############################################################
# Xe <- cbind(1, rep(doseA, n), rep(doseA^2, n))
# colnames(Xe) <- c("intercept", "dose", "squared dose")
# #####simulate nTTP and Efficacy for the cohort########
# outcome <- apply(X, 1, function(a){
# tox.by.grade <- tox_matrix[a[2], a[3], ,]
# nttp.indices <- apply(tox.by.grade, 1, function(o){
# sample(1:5, 1, prob = o)
# })
# if(max(wm[cbind(1:nrow(wm), nttp.indices)]) >= 1){
# y.dlt <- 1
# }else{
# y.dlt <- 0
# }
# y.nTTP <- nTTP.all[nttp.indices[1],
# nttp.indices[2],
# nttp.indices[3]]
# return(c(y.dlt = y.dlt,
# y = y.nTTP))
# })
# y <- outcome[2, ]
# y.dlt <- outcome[1, ]
# beta.mean <- eff.structure[Xe[1, 2]]
# beta.sd <- eff.sd
# beta.shape1 <- ((1 - beta.mean)/beta.sd^2 - 1/beta.mean) * beta.mean^2
# beta.shape2 <- beta.shape1 * (1/beta.mean - 1)
# e <- matrix(rbeta(chSize, beta.shape1, beta.shape2), ncol = 1)
# #####construct master Dataset and extract from this dataset for each interim#########
# #####PatData stores the dataset used to estimate the model at each interim###########
# temp.mtdata <- data.frame(cbind(y.dlt, y, X, rep(cmPat/chSize, n * K), 1:n))
# temp.mtdata$subID <- paste0("cohort", temp.mtdata[, 7], "subject", temp.mtdata[, 8])
# masterData$toxicity <- data.frame(rbind(masterData$toxicity, temp.mtdata))
# temp.eff.mtdata <- data.frame(cbind(e, Xe, cmPat/chSize))
# masterData$efficacy <- data.frame(rbind(masterData$efficacy, temp.eff.mtdata))
# current_cohort <- cmPat/chSize
# PatData.index <- data.frame(cohort = 1:current_cohort, cycles = current_cohort:1)
# PatData.index <- within(PatData.index, cycles <- ifelse(cycles >= MaxCycle, MaxCycle, cycles))
# for(i in 1:nrow(PatData.index)){
# PatData <- rbind(PatData, masterData$toxicity[masterData$toxicity[, 7] == PatData.index[i, "cohort"] &
# masterData$toxicity[, 5] <= PatData.index[i, "cycles"],
# c(1, 2, 3, 4, 5, 6, 9)])
# }
# #####dropout.dlt == 0 means keep the observation; 1 means drop it due to a DLT#######
# dlt.drop <- sapply(by(PatData, PatData$subID, function(a){a[, 1]}), function(o){
# if((length(o) > 1 & all(o[1: (length(o) - 1)] == 0)) | length(o) == 1){
# return(rep(0, length(o)))
# }else{
# o.temp <- rep(0, length(o))
# o.temp[(min(which(o == 1)) + 1) : length(o.temp)] <- 1
# return(o.temp)
# }
# })
# dropout.dlt <- NULL
# for(i in unique(PatData$subID)){
# dropout.dlt[PatData$subID == i] <- dlt.drop[[i]]
# }
# PatData <- PatData[dropout.dlt == 0, ]
# ###################################################################################
# ################Model Estimation given PatData#####################################
# ################Stage 1 only considers the toxicity model##########################
# ###################################################################################
# nTTP <- PatData[, 2]
# X_y <- as.matrix(PatData[, c("intercept", "dose", "icycle")])
# n.subj <- length(unique(PatData[, "subID"]))
# W_y <- matrix(0, nrow(PatData), n.subj)
# W_y[cbind(1:nrow(W_y), as.numeric(sapply(PatData$subID, function(a){
# which(a == unique(PatData$subID))
# })))] <- 1
# model.file <- function()
# {
# beta <- c(beta_other[1], beta_dose, beta_other[2])
# for(i in 1:N1){
# y[i] ~ dnorm(mu[i], tau_e)
# mu[i] <- inprod(X_y[i, ], beta) + inprod(W_y[i, ], u)
# }
# for(j in 1:N2){
# u[j] ~ dnorm(0, tau_u)
# }
# beta_other ~ dmnorm(p1_beta_other[], p2_beta_other[, ])
# beta_dose ~ dunif(p1_beta_dose, p2_beta_dose)
# tau_e ~ dunif(p1_tau_e, p2_tau_e)
# tau_u ~ dunif(p1_tau_u, p2_tau_u)
# }
# mydata <- list(N1 = length(nTTP), N2 = n.subj, y = nTTP, X_y = X_y,
# W_y = W_y, p1_beta_other = c(0, 0),
# p2_beta_other = diag(rep(0.001, 2)),
# p1_beta_dose = 0, p2_beta_dose = 1000,
# p1_tau_e = 0, p2_tau_e = 1000,
# p1_tau_u = 0, p2_tau_u = 1000)
# path.model <- file.path(tempdir(), "model.file.txt")
# R2WinBUGS::write.model(model.file, path.model)
# inits.list <- list(list(beta_other = c(0.1, 0.1),
# beta_dose = 0.1,
# tau_e = 0.1,
# tau_u = 0.1,
# .RNG.seed = sample(1:1e+06, size = 1),
# .RNG.name = "base::Wichmann-Hill"))
# jagsobj <- rjags::jags.model(path.model, data = mydata, n.chains = 1,
# quiet = TRUE, inits = inits.list)
# update(jagsobj, n.iter = n.iter, progress.bar = "none")
# post.samples <- rjags::jags.samples(jagsobj, c("beta_dose", "beta_other"),
# n.iter = n.iter,
# progress.bar = "none")
# if(cmPat == trialSize/2){
# ######################################################################
# #############define allowable doses###################################
# ######################################################################
# sim.betas <- as.matrix(rbind(post.samples$beta_other[1,,1], post.samples$beta_dose[,,1],
# post.samples$beta_other[2,,1]))
# ####condition 1####
# prob1.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# as.numeric(o[1] + o[2] * a <= thrd1)
# }))
# })
# ####condition 2####
# prob2.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# as.numeric(o[1] + o[2] * a + o[3] * 1 <= thrd2)
# }))
# })
# allow.doses <- which(prob1.doses >= p1 & prob2.doses >= p2)
# if(length(allow.doses) == 0){
# stage1.allow <- 0
# return(results = list(sc = sc, opt.dose = 0,
# altMat = altMat,
# masterData = masterData,
# stage1.allow = stage1.allow,
# dose_trace = rec_doseA,
# cmPat = cmPat))
# }
# stage1.allow <- allow.doses
# }else{
# sim.betas <- as.matrix(rbind(post.samples$beta_other[1,,1], post.samples$beta_dose[,,1]))
# loss.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# abs(o[1] + o[2] * a - tox.target)
# }))
# })
# nxtdose <- doses[which.min(loss.doses)]
# if(as.numeric(nxtdose) > (doseA + 1)){
# doseA <- doseA + 1;
# }else{
# doseA <- as.numeric(nxtdose);
# }
# if (cmPat == chSize){
# dlt <- with(masterData$toxicity, y.dlt[cycle == 1 & V7 == cmPat/chSize])
# if (sum(dlt) == 0){
# doseA <- 2
# dose_flag <- 0
# }else{
# doseA <- 1
# dose_flag <- 1
# }
# }
# if ((cmPat >= chSize*2) & (dose_flag == 1)){
# dlt <- with(masterData$toxicity, y.dlt[cycle == 1 & V7 == cmPat/chSize])
# if (sum(dlt) == 0){
# doseA <- 2
# dose_flag <- 0
# }else{
# doseA <- 1
# dose_flag <- 1
# }
# }
# if(cmPat >= chSize*3){
# sim.betas <- as.matrix(rbind(post.samples$beta_other[1,,1], post.samples$beta_dose[,,1],
# post.samples$beta_other[2,,1]))
# ####condition 1####
# prob1.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# as.numeric(o[1] + o[2] * a <= thrd1)
# }))
# })
# ####condition 2####
# prob2.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# as.numeric(o[1] + o[2] * a + o[3] * 1 <= thrd2)
# }))
# })
# allow.doses <- which(prob1.doses >= ps1 & prob2.doses >= ps1)
# if(length(allow.doses) == 0){
# stage1.allow <- 0
# return(results = list(sc = sc, opt.dose = 0,
# altMat = altMat,
# masterData = masterData,
# stage1.allow = stage1.allow,
# dose_trace = rec_doseA,
# cmPat = cmPat))
# }
# }
# altMat[doseA, k] <- altMat[doseA, k] + chSize
# }
# cmPat <- cmPat + chSize # cmPat for the next cohort (loop)
# }
# #print('Stage 2')
# ###############################################################
# ########################Stage 2################################
# ###############################################################
# #######let us wait until the efficacy data for stage 1 is available######
# #########################################################################
# cmPat <- cmPat - chSize
# PatData <- list()
# PatData.index <- data.frame(cohort = current_cohort:1, cycles = 3:(3 + current_cohort - 1))
# PatData.index <- within(PatData.index, cycles <- ifelse(cycles >= MaxCycle, MaxCycle, cycles))
# for(i in 1:nrow(PatData.index)){
# PatData$toxicity <- rbind(PatData$toxicity, masterData$toxicity[masterData$toxicity[, 7] == PatData.index[i, "cohort"] &
# masterData$toxicity[, 5] <= PatData.index[i, "cycles"],
# c(1, 2, 3, 4, 5, 6, 9)])
# }
# #####change the ordering of the subjects--being consistent######
# temp.toxicity <- NULL
# for(i in 1:current_cohort){
# temp.toxicity <- rbind(temp.toxicity, PatData$toxicity[grepl(paste0("cohort",i,"subject"), PatData$toxicity$subID), ])
# }
# PatData$toxicity <- temp.toxicity
# rm(temp.toxicity)
# #####toxicity dropout########
# #####dropout.dlt == 0 means keep the observation; 1 means drop it due to a DLT
# dlt.drop <- sapply(by(PatData$toxicity, PatData$toxicity$subID, function(a){a[, 1]}), function(o){
# if((length(o) > 1 & all(o[1: (length(o) - 1)] == 0)) | length(o) == 1){
# return(rep(0, length(o)))
# }else{
# o.temp <- rep(0, length(o))
# o.temp[(min(which(o == 1)) + 1) : length(o.temp)] <- 1
# return(o.temp)
# }
# })
# dropout.dlt <- NULL
# for(i in unique(PatData$toxicity$subID)){
# dropout.dlt[PatData$toxicity$subID == i] <- dlt.drop[[i]]
# }
# PatData$toxicity <- PatData$toxicity[dropout.dlt == 0, ]
# #####efficacy dropout########
# dropout.eff <- sapply(dlt.drop, function(a){
# if(sum(a) == 0){
# return(0)
# }else if(min(which(a == 1)) >= 4){
# return(0)
# }else{
# return(1)
# }
# })
# dropout.eff <- as.numeric(dropout.eff)
# PatData$efficacy <- masterData$efficacy[dropout.eff == 0, ]
# ############Model Estimation to randomize among allowable doses##########
# nTTP <- PatData$toxicity[, 2]
# X_y <- as.matrix(PatData$toxicity[, c("intercept", "dose", "icycle")])
# n.subj <- length(unique(PatData$toxicity[, "subID"]))
# W_y <- matrix(0, nrow(PatData$toxicity), n.subj)
# W_y[cbind(1:nrow(W_y), as.numeric(sapply(PatData$toxicity$subID, function(a){
# which(a == unique(PatData$toxicity$subID))
# })))] <- 1
# EFF <- PatData$efficacy[, 1]
# X_e <- as.matrix(PatData$efficacy[, c("intercept", "dose", "squared.dose")])
# keepeff.ind <- which(dropout.eff == 0)
# model.file <- function()
# {
# beta <- c(beta_other[1], beta_dose, beta_other[2])
# for(i in 1:N1){
# y[i] ~ dnorm(mu[i], tau_e)
# mu[i] <- inprod(X_y[i, ], beta) + inprod(W_y[i, ], u)
# }
# for(k in 1:Nsub){
# u[k] ~ dnorm(0, tau_u)
# }
# for(j in 1:N2){
# e[j] ~ dnorm(mu_e[j], tau_f)
# mu_e[j] <- inprod(X_e[j, ], alpha) + gamma0 * u[keepeff.ind[j]]
# }
# beta_other ~ dmnorm(p1_beta_other[], p2_beta_other[, ])
# beta_dose ~ dunif(p1_beta_dose, p2_beta_dose)
# alpha ~ dmnorm(p1_alpha[], p2_alpha[, ])
# gamma0 ~ dnorm(p1_gamma0, p2_gamma0)
# tau_e ~ dunif(p1_tau_e, p2_tau_e)
# tau_u ~ dunif(p1_tau_u, p2_tau_u)
# tau_f ~ dunif(p1_tau_f, p2_tau_f)
# }
# mydata <- list(N1 = length(nTTP), N2 = length(EFF), y = nTTP,
# Nsub = n.subj, X_y = X_y, e = EFF, keepeff.ind = keepeff.ind,
# W_y = W_y, X_e = X_e, p1_beta_other = c(0, 0),
# p2_beta_other = diag(rep(0.001, 2)),
# p1_beta_dose = 0, p2_beta_dose = 1000,
# p1_alpha = c(0, 0, 0), p2_alpha = diag(rep(0.001, 3)),
# p1_gamma0 = 0, p2_gamma0 = 0.001,
# p1_tau_e = 0, p2_tau_e = 1000,
# p1_tau_u = 0, p2_tau_u = 1000,
# p1_tau_f = 0, p2_tau_f = 1000)
# path.model <- file.path(tempdir(), "model.file.txt")
# R2WinBUGS::write.model(model.file, path.model)
# inits.list <- list(list(beta_other = c(0.1, 0.1),
# beta_dose = 0.1,
# alpha = c(0.1, 0.1, 0.1),
# gamma0 = 0.1,
# tau_e = 0.1,
# tau_u = 0.1,
# tau_f = 0.1,
# .RNG.seed = sample(1:1e+06, size = 1),
# .RNG.name = "base::Wichmann-Hill"))
# jagsobj <- rjags::jags.model(path.model, data = mydata, n.chains = 1,
# quiet = TRUE, inits = inits.list)
# update(jagsobj, n.iter = n.iter, progress.bar = "none")
# post.samples <- rjags::jags.samples(jagsobj, c("beta_dose", "beta_other", "alpha",
# "gamma0"),
# n.iter = n.iter,
# progress.bar = "none")
# ######################################################################
# #############update allowable doses###################################
# ######################################################################
# sim.betas <- as.matrix(rbind(post.samples$beta_other[1,,1], post.samples$beta_dose[,,1],
# post.samples$beta_other[2,,1]))
# ####condition 1####
# prob1.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# as.numeric(o[1] + o[2] * a <= thrd1)
# }))
# })
# ####condition 2####
# prob2.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# as.numeric(o[1] + o[2] * a + o[3] * 1 <= thrd2)
# }))
# })
# allow.doses <- which(prob1.doses >= p1 & prob2.doses >= p2)
# if(length(allow.doses) == 0){
# return(results = list(sc = sc, opt.dose = 0,
# altMat = altMat,
# masterData = masterData,
# stage1.allow = stage1.allow,
# dose_trace = rec_doseA,
# cmPat = cmPat))
# }
# sim.alphas <- as.matrix(rbind(post.samples$alpha[,,1]))
# RAND.EFF <- sapply(allow.doses, function(a){
# mean(apply(sim.alphas, 2, function(o){
# as.numeric(o[1] + o[2] * a + o[3] * a^2)
# }))
# })
# RAND.EFF <- exp(RAND.EFF)/sum(exp(RAND.EFF))
# nxtdose <- sample(allow.doses, 1, prob = RAND.EFF)
# ####a condition for an untried higher dose level that is randomized####
# if(nxtdose > max(rec_doseA) + 1 & !nxtdose %in% rec_doseA){
# nxtdose <- max(rec_doseA) + 1
# }
# #################################################################
# ########generate data for the new enrolled cohort in Stage 2#####
# #################################################################
# while(cmPat < trialSize - chSize){
# doseA <- nxtdose
# rec_doseA <- c(rec_doseA, doseA)
# altMat[doseA, k] <- altMat[doseA, k] + chSize
# cmPat <- cmPat + chSize
# PatData <- list()
# n <- chSize
# K <- MaxCycle
# #######################################################
# #################design matrix for toxicity############
# #######################################################
# dose.pat <- rep(doseA, n)
# xx <- expand.grid(dose.pat, 1:K)
# X <- as.matrix(cbind(rep(1, n*K), xx))
# X <- cbind(X, ifelse(X[, 3] == 1, 0, 1))
# colnames(X) <- c("intercept", "dose", "cycle", "icycle")
# #########################################################
# #######design matrix for efficacy########################
# #########################################################
# Xe <- cbind(1, rep(doseA, n), rep(doseA^2, n))
# colnames(Xe) <- c("intercept", "dose", "squared dose")
# #####simulate nTTP and Efficacy for the cohort########
# outcome <- apply(X, 1, function(a){
# tox.by.grade <- tox_matrix[a[2], a[3], ,]
# nttp.indices <- apply(tox.by.grade, 1, function(o){
# sample(1:5, 1, prob = o)
# })
# if(max(wm[cbind(1:nrow(wm), nttp.indices)]) >= 1){
# y.dlt <- 1
# }else{
# y.dlt <- 0
# }
# y.nTTP <- nTTP.all[nttp.indices[1],
# nttp.indices[2],
# nttp.indices[3]]
# return(c(y.dlt = y.dlt,
# y = y.nTTP))
# })
# y <- outcome[2, ]
# y.dlt <- outcome[1, ]
# beta.mean <- eff.structure[Xe[1, 2]]
# beta.sd <- eff.sd
# beta.shape1 <- ((1 - beta.mean)/beta.sd^2 - 1/beta.mean) * beta.mean^2
# beta.shape2 <- beta.shape1 * (1/beta.mean - 1)
# e <- matrix(rbeta(chSize, beta.shape1, beta.shape2), ncol = 1)
# #################################################################
# #################################################################
# #####add to the master Dataset and extract from this dataset for each interim#########
# #####PatData stores the dataset used to estimate the model at each interim############
# temp.mtdata <- data.frame(cbind(y.dlt, y, X, rep(cmPat/chSize, n*K), 1:n))
# temp.mtdata$subID <- paste0("cohort", temp.mtdata[, 7], "subject", temp.mtdata[, 8])
# masterData$toxicity <- data.frame(rbind(masterData$toxicity, temp.mtdata))
# temp.eff.mtdata <- data.frame(cbind(e, Xe, cmPat/chSize))
# masterData$efficacy <- data.frame(rbind(masterData$efficacy, temp.eff.mtdata))
# current_cohort <- cmPat/chSize
# PatData.index <- data.frame(cohort = 1:current_cohort, cycles = current_cohort:1)
# cycles.adj <- c(rep(2, trialSize/(2 * chSize)), rep(0, current_cohort - trialSize/(2 * chSize)))
# PatData.index <- within(PatData.index, {cycles <- cycles + cycles.adj
# cycles <- ifelse(cycles >= MaxCycle, MaxCycle, cycles)})
# for(i in 1:nrow(PatData.index)){
# PatData$toxicity <- rbind(PatData$toxicity, masterData$toxicity[masterData$toxicity[, 7] == PatData.index[i, "cohort"] &
# masterData$toxicity[, 5] <= PatData.index[i, "cycles"],
# c(1, 2, 3, 4, 5, 6, 9)])
# }
# #####toxicity dropout########
# #####dropout.dlt == 0 means keep the observation; 1 means drop it due to a DLT
# dlt.drop <- sapply(by(PatData$toxicity, PatData$toxicity$subID, function(a){a[, 1]}), function(o){
# if((length(o) > 1 & all(o[1: (length(o) - 1)] == 0)) | length(o) == 1){
# return(rep(0, length(o)))
# }else{
# o.temp <- rep(0, length(o))
# o.temp[(min(which(o == 1)) + 1) : length(o.temp)] <- 1
# return(o.temp)
# }
# })
# dropout.dlt <- NULL
# for(i in unique(PatData$toxicity$subID)){
# dropout.dlt[PatData$toxicity$subID == i] <- dlt.drop[[i]]
# }
# PatData$toxicity <- PatData$toxicity[dropout.dlt == 0, ]
# cohort.eff.index <- with(PatData.index, which(cycles >= 3))
# #####efficacy dropout########
# dropout.eff <- sapply(dlt.drop, function(a){
# if(sum(a) == 0){
# return(0)
# }else if(min(which(a == 1)) >= 4){
# return(0)
# }else{
# return(1)
# }
# })
# dropout.eff <- as.numeric(dropout.eff)
# PatData$efficacy <- subset(masterData$efficacy, masterData$efficacy[, 5] %in% cohort.eff.index & dropout.eff == 0)
# #############################################################################################################################
# ######################Model Estimation to update toxicity information and randomization probs################################
# #############################################################################################################################
# nTTP <- PatData$toxicity[, 2]
# X_y <- as.matrix(PatData$toxicity[, c("intercept", "dose", "icycle")])
# n.subj <- length(unique(PatData$toxicity[, "subID"]))
# W_y <- matrix(0, nrow(PatData$toxicity), n.subj)
# W_y[cbind(1:nrow(W_y), as.numeric(sapply(PatData$toxicity$subID, function(a){
# which(a == unique(PatData$toxicity$subID))
# })))] <- 1
# EFF <- PatData$efficacy[, 1]
# X_e <- as.matrix(PatData$efficacy[, c("intercept", "dose", "squared.dose")])
# keepeff.ind <- which(dropout.eff == 0 & masterData$efficacy[, 5] %in% cohort.eff.index)
# model.file <- function()
# {
# beta <- c(beta_other[1], beta_dose, beta_other[2])
# for(k in 1:Nsub){
# u[k] ~ dnorm(0, tau_u)
# }
# for(i in 1:N1){
# y[i] ~ dnorm(mu[i], tau_e)
# mu[i] <- inprod(X_y[i, ], beta) + inprod(W_y[i, ], u)
# }
# for(j in 1:N2){
# e[j] ~ dnorm(mu_e[j], tau_f)
# mu_e[j] <- inprod(X_e[j, ], alpha) + gamma0 * u[keepeff.ind[j]]
# }
# beta_other ~ dmnorm(p1_beta_other[], p2_beta_other[, ])
# beta_dose ~ dunif(p1_beta_dose, p2_beta_dose)
# alpha ~ dmnorm(p1_alpha[], p2_alpha[, ])
# gamma0 ~ dnorm(p1_gamma0, p2_gamma0)
# tau_e ~ dunif(p1_tau_e, p2_tau_e)
# tau_u ~ dunif(p1_tau_u, p2_tau_u)
# tau_f ~ dunif(p1_tau_f, p2_tau_f)
# }
# mydata <- list(N1 = length(nTTP), N2 = length(EFF), Nsub = n.subj,
# y = nTTP, X_y = X_y, e = EFF, keepeff.ind = keepeff.ind,
# W_y = W_y, X_e = X_e, p1_beta_other = c(0, 0),
# p2_beta_other = diag(rep(0.001, 2)),
# p1_beta_dose = 0, p2_beta_dose = 1000,
# p1_alpha = c(0, 0, 0), p2_alpha = diag(rep(0.001, 3)),
# p1_gamma0 = 0, p2_gamma0 = 0.001,
# p1_tau_e = 0, p2_tau_e = 1000,
# p1_tau_u = 0, p2_tau_u = 1000,
# p1_tau_f = 0, p2_tau_f = 1000)
# path.model <- file.path(tempdir(), "model.file.txt")
# R2WinBUGS::write.model(model.file, path.model)
# inits.list <- list(list(beta_other = c(0.1, 0.1),
# beta_dose = 0.1,
# alpha = c(0.1, 0.1, 0.1),
# gamma0 = 0.1,
# tau_e = 0.1,
# tau_u = 0.1,
# tau_f = 0.1,
# .RNG.seed = sample(1:1e+06, size = 1),
# .RNG.name = "base::Wichmann-Hill"))
# jagsobj <- rjags::jags.model(path.model, data = mydata, n.chains = 1,
# quiet = TRUE, inits = inits.list)
# update(jagsobj, n.iter = n.iter, progress.bar = "none")
# post.samples <- rjags::jags.samples(jagsobj, c("beta_dose", "beta_other", "alpha",
# "gamma0"),
# n.iter = n.iter,
# progress.bar = "none")
# sim.betas <- as.matrix(rbind(post.samples$beta_other[1,,1], post.samples$beta_dose[,,1],
# post.samples$beta_other[2,,1]))
# sim.alphas <- as.matrix(rbind(post.samples$alpha[,,1]))
# ############redefine allowable doses#############
# ####condition 1####
# prob1.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# as.numeric(o[1] + o[2] * a <= thrd1)
# }))
# })
# ####condition 2####
# prob2.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# as.numeric(o[1] + o[2] * a + o[3] * 1 <= thrd2)
# }))
# })
# allow.doses <- which(prob1.doses >= p1 & prob2.doses >= p2)
# if(length(allow.doses) == 0){
# return(results = list(sc = sc, opt.dose = 0,
# altMat = altMat,
# masterData = masterData,
# stage1.allow = stage1.allow,
# dose_trace = rec_doseA,
# cmPat = cmPat))
# }
# RAND.EFF <- sapply(allow.doses, function(a){
# mean(apply(sim.alphas, 2, function(o){
# as.numeric(o[1] + o[2] * a + o[3] * a^2)
# }))
# })
# RAND.EFF <- exp(RAND.EFF)/sum(exp(RAND.EFF))
# nxtdose <- sample(allow.doses, 1, prob = RAND.EFF)
# ####a condition for untried higher dose level that is predicted to be efficacious####
# if(nxtdose > max(rec_doseA) + 1 & !nxtdose %in% rec_doseA){
# nxtdose <- max(rec_doseA) + 1
# }
# }
# #print('Stage 3');
# ############################################################################
# ###############################Stage 3######################################
# ############################################################################
# ###################when all the data are available##########################
# ############################################################################
# doseA <- nxtdose
# rec_doseA <- c(rec_doseA, doseA)
# altMat[doseA, k] <- altMat[doseA, k] + chSize
# cmPat <- cmPat + chSize
# PatData <- list()
# n <- chSize
# K <- MaxCycle
# ##################################################
# #######design matrix for toxicity#################
# ##################################################
# dose.pat <- rep(doseA, n)
# xx <- expand.grid(dose.pat, 1:K)
# X <- as.matrix(cbind(rep(1, n*K), xx))
# X <- cbind(X, ifelse(X[, 3] == 1, 0, 1))
# colnames(X) <- c("intercept", "dose", "cycle", "icycle")
# ##################################################
# #######design matrix for efficacy#################
# ##################################################
# Xe <- cbind(1, rep(doseA, n), rep(doseA^2, n))
# colnames(Xe) <- c("intercept", "dose", "squared dose")
# #####simulate nTTP and Efficacy for the cohort########
# outcome <- apply(X, 1, function(a){
# tox.by.grade <- tox_matrix[a[2], a[3], ,]
# nttp.indices <- apply(tox.by.grade, 1, function(o){
# sample(1:5, 1, prob = o)
# })
# if(max(wm[cbind(1:nrow(wm), nttp.indices)]) >= 1){
# y.dlt <- 1
# }else{
# y.dlt <- 0
# }
# y.nTTP <- nTTP.all[nttp.indices[1],
# nttp.indices[2],
# nttp.indices[3]]
# return(c(y.dlt = y.dlt,
# y = y.nTTP))
# })
# y <- outcome[2, ]
# y.dlt <- outcome[1, ]
# beta.mean <- eff.structure[Xe[1, 2]]
# beta.sd <- eff.sd
# beta.shape1 <- ((1 - beta.mean)/beta.sd^2 - 1/beta.mean) * beta.mean^2
# beta.shape2 <- beta.shape1 * (1/beta.mean - 1)
# e <- matrix(rbeta(chSize, beta.shape1, beta.shape2), ncol = 1)
# #################################################################
# #################################################################
# #####add to the master Dataset and extract from this dataset for final analysis###
# #####PatData stores the dataset used to estimate the model########################
# temp.mtdata <- data.frame(cbind(y.dlt, y, X, rep(cmPat/chSize, n*K), 1:n))
# temp.mtdata$subID <- paste0("cohort", temp.mtdata[, 7], "subject", temp.mtdata[, 8])
# masterData$toxicity <- data.frame(rbind(masterData$toxicity, temp.mtdata))
# temp.eff.mtdata <- data.frame(cbind(e, Xe, cmPat/chSize))
# masterData$efficacy <- data.frame(rbind(masterData$efficacy, temp.eff.mtdata))
# PatData$toxicity <- masterData$toxicity[, c(1, 2, 3, 4, 5, 6, 9)]
# #####toxicity dropout########
# #####dropout.dlt == 0 means keep the observation; 1 means drop it due to a DLT
# dlt.drop <- sapply(by(PatData$toxicity, PatData$toxicity$subID, function(a){a[, 1]}), function(o){
# if((length(o) > 1 & all(o[1: (length(o) - 1)] == 0)) | length(o) == 1){
# return(rep(0, length(o)))
# }else{
# o.temp <- rep(0, length(o))
# o.temp[(min(which(o == 1)) + 1) : length(o.temp)] <- 1
# return(o.temp)
# }
# }, simplify = FALSE)
# dropout.dlt <- NULL
# for(i in unique(PatData$toxicity$subID)){
# dropout.dlt[PatData$toxicity$subID == i] <- dlt.drop[[i]]
# }
# PatData$toxicity <- PatData$toxicity[dropout.dlt == 0, ]
# #####efficacy dropout########
# dropout.eff <- sapply(dlt.drop, function(a){
# if(sum(a) == 0){
# return(0)
# }else if(min(which(a == 1)) >= 4){
# return(0)
# }else{
# return(1)
# }
# })
# dropout.eff <- as.numeric(dropout.eff)
# PatData$efficacy <- masterData$efficacy[dropout.eff == 0, ]
# #################################################################
# #################Model Estimation################################
# #################################################################
# nTTP <- PatData$toxicity[, 2]
# X_y <- as.matrix(PatData$toxicity[, c("intercept", "dose", "cycle")])
# n.subj <- length(unique(PatData$toxicity[, "subID"]))
# W_y <- matrix(0, nrow(PatData$toxicity), n.subj)
# W_y[cbind(1:nrow(W_y), as.numeric(sapply(PatData$toxicity$subID, function(a){
# which(a == unique(PatData$toxicity$subID))
# })))] <- 1
# EFF <- PatData$efficacy[, 1]
# X_e <- as.matrix(PatData$efficacy[, c("intercept", "dose", "squared.dose")])
# keepeff.ind <- which(dropout.eff == 0)
# model.file <- function()
# {
# beta <- c(beta_other[1], beta_dose, beta_other[2])
# for(k in 1:Nsub){
# u[k] ~ dnorm(0, tau_u)
# }
# for(i in 1:N1){
# y[i] ~ dnorm(mu[i], tau_e)
# mu[i] <- inprod(X_y[i, ], beta) + inprod(W_y[i, ], u)
# }
# for(j in 1:N2){
# e[j] ~ dnorm(mu_e[j], tau_f)
# mu_e[j] <- inprod(X_e[j, ], alpha) + gamma0 * u[keepeff.ind[j]]
# }
# beta_other ~ dmnorm(p1_beta_other[], p2_beta_other[, ])
# beta_dose ~ dunif(p1_beta_dose, p2_beta_dose)
# alpha ~ dmnorm(p1_alpha[], p2_alpha[, ])
# gamma0 ~ dnorm(p1_gamma0, p2_gamma0)
# tau_e ~ dunif(p1_tau_e, p2_tau_e)
# tau_u ~ dunif(p1_tau_u, p2_tau_u)
# tau_f ~ dunif(p1_tau_f, p2_tau_f)
# }
# mydata <- list(N1 = length(nTTP), N2 = length(EFF), Nsub = n.subj,
# y = nTTP, X_y = X_y, e = EFF, keepeff.ind = keepeff.ind,
# W_y = W_y, X_e = X_e, p1_beta_other = c(0, 0),
# p2_beta_other = diag(rep(0.001, 2)),
# p1_beta_dose = 0, p2_beta_dose = 1000,
# p1_alpha = c(0, 0, 0), p2_alpha = diag(rep(0.001, 3)),
# p1_gamma0 = 0, p2_gamma0 = 0.001,
# p1_tau_e = 0, p2_tau_e = 1000,
# p1_tau_u = 0, p2_tau_u = 1000,
# p1_tau_f = 0, p2_tau_f = 1000)
# path.model <- file.path(tempdir(), "model.file.txt")
# R2WinBUGS::write.model(model.file, path.model)
# inits.list <- list(list(beta_other = c(0.1, 0.1),
# beta_dose = 0.1,
# alpha = c(0.1, 0.1, 0.1),
# gamma0 = 0.1,
# tau_e = 0.1,
# tau_u = 0.1,
# tau_f = 0.1,
# .RNG.seed = sample(1:1e+06, size = 1),
# .RNG.name = "base::Wichmann-Hill"))
# jagsobj <- rjags::jags.model(path.model, data = mydata, n.chains = 1,
# quiet = TRUE, inits = inits.list)
# update(jagsobj, n.iter = n.iter, progress.bar = "none")
# post.samples <- rjags::jags.samples(jagsobj, c("beta_dose", "beta_other", "alpha",
# "gamma0"),
# n.iter = n.iter,
# progress.bar = "none")
# sim.betas <- as.matrix(rbind(post.samples$beta_other[1,,1], post.samples$beta_dose[,,1],
# post.samples$beta_other[2,,1]))
# sim.alphas <- as.matrix(rbind(post.samples$alpha[,,1]))
# ############redefine allowable doses#############
# ####condition 1####
# prob1.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# as.numeric(o[1] + o[2] * a + o[3] * 1 <= thrd1)
# }))
# })
# ####condition 2####
# prob2.doses <- sapply(doses, function(a){
# mean(apply(sim.betas, 2, function(o){
# as.numeric(mean(sapply(2:K, function(m){
# as.numeric(o[1] + o[2] * a + o[3] * m)
# })) <= thrd2)
# }))
# })
# allow.doses <- which(prob1.doses >= p1 & prob2.doses >= p2)
# if(length(allow.doses) == 0){
# return(results = list(sc = sc, opt.dose = 0,
# altMat = altMat,
# masterData = masterData,
# stage1.allow = stage1.allow,
# dose_trace = rec_doseA,
# cmPat = cmPat))
# }
# effcy.doses <- sapply(allow.doses, function(a){
# mean(apply(sim.alphas, 2, function(o){
# o[1] + o[2] * a + o[3] * a^2}))
# })
# proxy.eff.doses <- allow.doses[which(sapply(effcy.doses, function(a){
# abs(a - max(effcy.doses)) <= proxy.thrd
# }))]
# ###recommend the lowest dose that is efficacious####
# recom.doses <- min(proxy.eff.doses)
# #print('output')
# return(results = list(sc = sc, opt.dose = recom.doses,
# altMat = altMat, masterData = masterData,
# stage1.allow = stage1.allow,
# effy.doses = proxy.eff.doses,
# dose_trace = rec_doseA,
# cmPat = cmPat))
# }
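The commented-out design code above scores each patient's cycle-level toxicity with a normalized total toxicity profile (nTTP): the observed grade for each of the three toxicity types is mapped to a clinical weight via the weight matrix `wm`, and the weights are combined as a scaled Euclidean norm. A standalone sketch of that calculation, using the same `wm` and `toxmax = 2.5` that the code defines:

```r
# nTTP for one patient, as in the code above:
#   nTTP = sqrt(w1[g1]^2 + w2[g2]^2 + w3[g3]^2) / toxmax,
# where g1..g3 index the observed grade (1 = grade 0, ..., 5 = grade 4).
wm <- matrix(c(0, 0.5, 0.75, 1,   1.5,
               0, 0.5, 0.75, 1,   1.5,
               0, 0,   0,    0.5, 1),
             byrow = TRUE, ncol = 5)
toxmax <- 2.5
grades <- c(3, 2, 5)  # grade 2, grade 1, grade 4 for toxicity types 1-3
nTTP <- sqrt(sum(wm[cbind(1:3, grades)]^2)) / toxmax
round(nTTP, 4)  # sqrt(0.75^2 + 0.5^2 + 1^2) / 2.5 = 0.5385
```

Note that under the code's DLT rule the same grade vector is checked against the weights: any selected weight of 1 or more (here `wm[3, 5] == 1`) flags a dose-limiting toxicity.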
/MATH4473carlos2021/man/mycip.Rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mycip.R
\name{mycip}
\alias{mycip}
\title{Confidence interval for p}
\usage{
mycip(x, n, alpha)
}
\arguments{
\item{x}{number of successes (frequency of the category of interest)}
\item{n}{total number of trials (total frequency)}
\item{alpha}{error rate, giving a \code{1 - alpha} confidence level}
}
\value{
list containing ci
}
\description{
Confidence interval for p
}
\examples{
mycip(10,20,0.05)
}
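The Rd file above documents only the interface; a minimal sketch of a matching implementation is below, assuming a standard Wald (normal-approximation) interval for a proportion — the actual `mycip` body in the package may use a different method.

```r
# Hypothetical sketch of mycip(); the packaged implementation may differ.
mycip <- function(x, n, alpha) {
  phat <- x / n                       # sample proportion
  z <- qnorm(1 - alpha / 2)           # two-sided critical value
  se <- sqrt(phat * (1 - phat) / n)   # large-sample standard error
  list(ci = c(phat - z * se, phat + z * se))
}

mycip(10, 20, 0.05)  # interval around phat = 0.5
```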
/man/augment.Rd | 123rugby/multiridge | no_license
\name{augment}
\alias{augment}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Augment data with zeros.
}
\description{
This function augments data with zeros to allow pairing of data on the same variables, but from \emph{different} samples.}
\usage{
augment(Xdata1, Xdata2)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{Xdata1}{
Data frame or data matrix of dimension \code{n_1 x p}.
}
\item{Xdata2}{
Data frame or data matrix of dimension \code{n_2 x p}
}
}
\details{
\code{Xdata1} and \code{Xdata2} should have the same number of columns, representing the same variables. Both data matrices are augmented with zeros
so that the resulting matrices can be paired using \code{\link{createXXblocks}} on the output of this function.
}
\value{
List
\item{Xaug1}{Augmented data matrix 1}
\item{Xaug2}{Augmented data matrix 2}
}
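A short usage sketch follows; the zero-padding behaviour noted in the comments is an assumption based on the details section, and the `createXXblocks` call mirrors the cross-reference in the documentation.

```r
library(multiridge)

X1 <- matrix(rnorm(3 * 4), nrow = 3)  # n_1 = 3 samples, p = 4 variables
X2 <- matrix(rnorm(5 * 4), nrow = 5)  # n_2 = 5 samples, same p = 4 variables

aug <- augment(X1, X2)
# Xaug1 and Xaug2 now have matching dimensions ((n_1 + n_2) x p),
# so the pair can be fed to createXXblocks():
blocks <- createXXblocks(list(aug$Xaug1, aug$Xaug2))
```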
/data/genthat_extracted_code/specklestar/examples/speckle_ps.Rd.R | surayaaramli/typeRrh | no_license
library(specklestar)
### Name: speckle_ps
### Title: Power spectrum calculation
### Aliases: speckle_ps
### ** Examples
obj_filename <- system.file("extdata", "ads15182_550_2_frames.dat", package = "specklestar")
midd_dark <- matrix(0, 512, 512)
midd_flat <- matrix(1, 512, 512)
pow_spec <- speckle_ps(obj_filename, dark = midd_dark, flat = midd_flat)
/code/supportingFuns.R | MFEh2o/limnoEntryTool_TEMPLATE | no_license
# Supporting functions for limnoEntry.R
# These helpers rely on checkmate (assert*), dplyr, tidyr, and stringr:
library(checkmate)
library(dplyr)
library(tidyr)
library(stringr)
# MetadataID for staff gauge samples --------------------------------------
staffGaugeMetadataID <- "Staff.Gauge.Sample.20140319"
# List of retired projectIDs ----------------------------------------------
retiredProjectIDs <- c(1:2, 4:5, 7:12, 14, 16, 18:25, 27:33, 39:40)
# count NA's --------------------------------------------------------------
count_na <- function(x) sum(is.na(x))
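A quick check of the helper:

```r
count_na(c(1, NA, 3, NA))  # 2
count_na(letters)          # 0
```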
# initial one-row data frames for each log file ---------------------------
bpInit <- data.frame(bpID = "BP0",
projectID = 99999,
lakeID = "ZZ",
site = "xxxx",
dateSample = "2222-22-22",
timeSample = "99:99",
depthClass = "surface",
depthTop = 0,
depthBottom = 0,
volFiltered = 0,
replicate = 1,
comments = "aaa",
sampleID = "ZZ_xxxx_NA_9999_surface_0_fishScapesLimno.20190319",
metadataID = NA)
chlInit <- data.frame(chlID = "C0",
projectID = 99999,
lakeID = "ZZ",
site = "zzz",
dateSample = "2222-22-22",
timeSample = "99:99",
depthClass = "surface",
depthTop = 0,
depthBottom = 0,
volFiltered = 0,
replicate = 1,
comments = "aaa",
sampleID = "ZZ_zzz_NA_9999_surface_0_fishScapesLimno.20190319",
metadataID = NA)
colorInit <- data.frame(colorID = "C0",
projectID = 99999,
lakeID = "ZZ",
site = "zzz",
dateSample = "2222-22-22",
timeSample = "99:99",
depthClass = "surface",
depthTop = 0,
depthBottom = 0,
abs440 = 0,
g440 = 0,
comments = "aaa",
sampleID = "ZZ_zzz_NA_9999_surface_0_fishScapesLimno.20190319")
docInit <- data.frame(docID = "D0",
projectID = 99999,
lakeID = "ZZ",
site = "zzz",
dateSample = "2222-22-22",
timeSample = "99:99",
depthClass = "surface",
depthTop = 0,
depthBottom = 0,
replicate = 1,
comments = "aaa",
sampleID = "ZZ_zzz_NA_9999_surface_0_fishScapesLimno.20190319",
metadataID = NA)
fdInit <- data.frame(filteredID = "F0",
projectID = 99999,
lakeID = "ZZ",
site = "zzz",
dateSample = "2222-22-22",
timeSample = "99:99",
depthClass = "surface",
depthTop = 0,
depthBottom = 0,
comments = "aaa",
sampleID = "ZZ_zzz_NA_9999_surface_0_fishScapesLimno.20190319",
metadataID = NA)
ionInit <- data.frame(ionsID = "I0",
projectID = 99999,
lakeID = "ZZ",
site = "zzz",
dateSample = "2222-22-22",
timeSample = "99:99",
depthClass = "surface",
depthTop = 0,
depthBottom = 0,
replicate=1,
comments = "aaa",
sampleID = "ZZ_zzz_NA_9999_surface_0_fishScapesLimno.20190319",
metadataID = NA)
pocInit <- data.frame(pocID = "P0",
projectID = 99999,
lakeID = "ZZ",
site = "zzz",
dateSample = "2222-22-22",
timeSample = "99:99",
depthClass = "surface",
depthTop = 0,
depthBottom = 0,
volFiltered = 0,
replicate = "aaaa",
comments = "aaa",
sampleID = "ZZ_zzz_NA_9999_surface_0_fishScapesLimno.20190319",
metadataID = NA)
ufdInit <- data.frame(unfilteredID = "U0",
projectID = 99999,
lakeID = "ZZ",
site = "zzz",
dateSample = "2222-22-22",
timeSample = "99:99",
depthClass = "surface",
depthTop = 0,
depthBottom = 0,
comments = "aaa",
sampleID = "ZZ_zzz_NA_9999_surface_0_fishScapesLimno.20190319",
metadataID = NA)
# Custom function for reading csv's and removing NA rows ------------------
# ARGUMENTS:
## `filepath`: path to the file.
## `header`: passed to read.csv. Whether or not the csv has a header row included. Should be T most of the time.
## `stringsAsFactors`: passed to read.csv. Whether to treat strings as factors, or leave them as character.
## `na.strings`: passed to read.csv. Strings to treat as NA on file read-in.
customReadCSV <- function(filepath, header = T, stringsAsFactors = F,
na.strings = c("", " ", NA)){
assertCharacter(filepath, len = 1)
assertLogical(header, len = 1)
assertLogical(stringsAsFactors, len = 1)
assertCharacter(na.strings, any.missing = T, all.missing = T)
library(dplyr) # for the pipe
obj <- read.csv(filepath, # read in the file with all the arguments set
header = header,
stringsAsFactors = stringsAsFactors,
na.strings = na.strings) %>%
filter(rowSums(is.na(.)) != ncol(.))
return(obj) # return the data frame read from the csv, with NA rows removed.
}
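A self-contained usage example for `customReadCSV` (a throwaway CSV is written to a temp file so the call is reproducible anywhere):

```r
library(checkmate)  # provides the assert* helpers used above
library(dplyr)

tmp <- tempfile(fileext = ".csv")
writeLines(c("a,b", "1,x", ",", "2,y"), tmp)  # third line is an all-blank row

df <- customReadCSV(tmp)
nrow(df)  # 2 -- the all-NA row has been filtered out
```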
# getHeaderInfo -----------------------------------------------------------
getHeader <- function(d = cur){
  assertDataFrame(d, ncols = 11, min.rows = 10)  # d[1:10, 2] below needs 10 rows
header <- d[1:10, 2] %>%
setNames(c("lakeID", "siteName", "dateSample", "timeSample",
"projectID", "weather", "crew", "metadataID",
"zoopDepth", "comments")) %>%
as.list()
# Add siteID, since we'll use that a lot
header$siteID <- paste(header$lakeID, header$siteName, sep = "_")
# Convert zoopDepth to numeric
header$zoopDepth <- as.numeric(header$zoopDepth)
# Return the header list
return(header)
}
# Get profile data from the data sheet ------------------------------------
getProfData <- function(d = cur){
assertDataFrame(d, min.rows = 17)
profData <- d[17:nrow(d),]
names(profData) <- profData[1,]
profData <- profData[-1,]
profData <- profData %>%
mutate(across(everything(), as.numeric))
# remove any data that's all NA except for the depth, unless there is no profile data (all NAs), then return a single-row data frame filled with NAs
if(any(!is.na(profData[,-1]))){
profData <- profData %>%
filter(!(rowSums(is.na(select(., -depth))) ==
ncol(.) - 1))
}else{
    profData <- data.frame(depth = 0, temp = NA, DOmgL = NA, DOsat = NA, SpC = NA,
                           pH = NA, ORP = NA, PAR = NA, PML = NA, hypo = NA, point = NA)
}
return(profData)
}
# Get gauge data from the data sheet -------------------------------------
getGauges <- function(d = cur){
assertDataFrame(d, min.rows = 16, ncols = 11)
gauges <- d[11:16, 1:3]
names(gauges) <- gauges[1,]
gauges <- gauges[-1,]
row.names(gauges) <- gauges[,1]
names(gauges) <- c("NA", "height", "unit")
gauges <- gauges[, -1]
gauges$height <- as.numeric(gauges$height)
return(gauges)
}
# Get moieties data from the data sheet -----------------------------------
getMoieties <- function(d = cur){
  assertDataFrame(d, min.rows = 9, ncols = 11)  # d[2:9, 5:11] below needs 9 rows
  moieties <- d[2:9, 5:11]
row.names(moieties) <- d[2:9, 4]
names(moieties) <- d[1, 5:11]
moieties <- moieties %>%
mutate(across(everything(), as.numeric))
return(moieties)
}
# Get volumes data from the data sheet ------------------------------------
getVolumes <- function(d = cur){
  assertDataFrame(d, min.rows = 13, ncols = 11)  # d[11:13, 5:11] below needs 13 rows
volumes <- d[11:13, 5:11]
row.names(volumes) <- d[11:13, 4]
names(volumes) <- d[10, 5:11]
volumes <- volumes %>%
mutate(across(everything(), as.numeric))
return(volumes)
}
# Format timeSample ------------------------------------------------------
formatTimeSample <- function(x){
assertCharacter(x, len = 1, any.missing = F, min.chars = 3)
if(nchar(x) == 3){
x <- paste0("0", x)
message("header$timeSample has 3 characters; appending a leading 0.")
}
  if(!grepl("^([01][0-9]|2[0-3])[0-5][0-9]$", x)){  # restrict hours to 00-23
stop("timeSample is not a valid time.")
}
return(x)
}
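Behaviour of the time formatter:

```r
formatTimeSample("930")   # "0930", plus a message about the appended 0
formatTimeSample("1430")  # "1430"
formatTimeSample("9999")  # error: "timeSample is not a valid time."
```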
# profileSamplesRows ----------------------------------------------
profileSamplesRows <- function(d = profData){
assertDataFrame(d)
assertChoice("depth", names(d))
profSamples <- d %>%
select(depth) %>%
mutate(depthClass = "point",
depthTop = depth,
depthBottom = depth) %>%
select(-depth)
return(profSamples)
}
# pointSamplesRows --------------------------------------------------------
pointSamplesRows <- function(d = profData, ind = whichRows){
assertDataFrame(d)
assertNumeric(ind, min.len = 1, max.len = nrow(d))
assertChoice("depth", names(d))
pointSamples <- d[ind,] %>%
select(depth) %>%
mutate(depthClass = "point",
depthTop = depth,
depthBottom = depth) %>%
select(-depth)
return(pointSamples)
}
# profileDataRows ---------------------------------------------------------
profileDataRows <- function(d = profData, h = header, dss = dateSampleString,
tss = timeSampleString){
assertDataFrame(d)
assertList(h)
assertSubset(c("PML", "hypo", "point", "depth"), names(d))
assertSubset(c("projectID", "lakeID", "siteName", "dateSample", "dateTimeSample", "metadataID", "comments"), names(h))
assertCharacter(dss, len = 1, pattern = "[0-9]{8}")
assertCharacter(tss, len = 1, pattern = "[0-9]{4}")
profilesNEW <- d %>%
select(-c("PML", "hypo", "point")) %>%
rename("depthBottom" = depth) %>%
mutate(projectID = h$projectID,
lakeID = h$lakeID,
siteName = h$siteName,
dateSample = h$dateSample,
dateTimeSample = h$dateTimeSample,
depthClass = "point",
depthTop = depthBottom,
metadataID = h$metadataID,
comments = h$comments,
updateID = NA) %>%
mutate(sampleID = paste(lakeID, siteName, dss, tss, depthClass, depthBottom, metadataID, sep = "_")) %>%
# put the columns in the right order
select(c("projectID", "sampleID", "lakeID", "siteName", "dateSample",
"dateTimeSample", "depthClass", "depthTop", "depthBottom", "temp",
"DOmgL", "DOsat", "SpC", "pH", "ORP", "PAR", "metadataID", "comments",
"updateID"))
return(profilesNEW)
}
# staffgageDataRows ---------------------------------------------------------
staffgageDataRows <- function(d = gauges, samps=gaugeSamples, h = header, dss = dateSampleString,
tss = timeSampleString){
assertDataFrame(d)
assertDataFrame(samps)
assertList(h)
assertSubset(c("projectID", "lakeID", "siteName", "dateSample", "dateTimeSample", "metadataID", "comments"), names(h))
assertCharacter(dss, len = 1, pattern = "[0-9]{8}")
assertCharacter(tss, len = 1, pattern = "[0-9]{4}")
# subset d to match samps
d=d[!is.na(d$height),]
gagesNEW <- data.frame(projectID = h$projectID,
lakeID = samps$lakeID,
siteName = unlist(lapply(strsplit(samps$siteID,"_"),function(x){return(x[2])})),
dateSample = h$dateSample,
dateTimeSample = h$dateTimeSample,
depthClass = "staff",
depthTop=0,
depthBottom=0,
waterHeight=d$height,
waterHeightUnits=d$unit,
waterHeight_m=NA,
metadataID=h$metadataID,
comments=h$comments,
updateID=NA)
# calculate waterHeight_m depending on waterHeightUnits
for(i in 1:nrow(gagesNEW)){
if(gagesNEW$waterHeightUnits[i]=="ft"){
gagesNEW$waterHeight_m[i]=gagesNEW$waterHeight[i]*0.3048
}else if(gagesNEW$waterHeightUnits[i]=="cm"){
gagesNEW$waterHeight_m[i]=gagesNEW$waterHeight[i]*0.01
}else if(gagesNEW$waterHeightUnits[i]=="m"){
gagesNEW$waterHeight_m[i]=gagesNEW$waterHeight[i]*1
}else if(gagesNEW$waterHeightUnits[i]=="in"){
gagesNEW$waterHeight_m[i]=gagesNEW$waterHeight[i]*0.0254
}
}
#generate sampleID
gagesNEW$sampleID = paste(gagesNEW$lakeID, gagesNEW$siteName, dss, tss, gagesNEW$depthClass, gagesNEW$depthBottom, gagesNEW$metadataID, sep = "_")
# put the columns in the right order
gagesNEW <- gagesNEW[,c("projectID", "sampleID", "lakeID", "siteName", "dateSample",
"dateTimeSample", "depthClass", "depthTop", "depthBottom", "waterHeight",
"waterHeightUnits", "waterHeight_m","metadataID", "comments",
"updateID")]
return(gagesNEW)
}
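The per-row unit-conversion loop above could also be written with a named lookup vector — a sketch using the same conversion factors; unknown units then yield `NA` automatically instead of being skipped silently:

```r
# Vectorized alternative to the if/else chain inside staffgageDataRows():
toMeters <- c(ft = 0.3048, cm = 0.01, m = 1, "in" = 0.0254)
gagesNEW$waterHeight_m <- unname(gagesNEW$waterHeight *
                                   toMeters[gagesNEW$waterHeightUnits])
```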
# General dataRows function -----------------------------------------------
dataRows <- function(idName, idPrefix, idStart = curID, rowName,
addReplicates = F, addVolumes = F, templateDF,
v = volumes, m = moieties, h = header, pt = pml_depthTop,
pb = pml_depthBottom, ht = hypo_depthTop,
hb = hypo_depthBottom,
dss = dateSampleString, tss = timeSampleString){
assertCharacter(idName, len = 1)
assertCharacter(idPrefix, len = 1)
assertNumeric(idStart, len = 1)
assertCharacter(rowName, len = 1)
assertLogical(addReplicates, len = 1)
assertLogical(addVolumes, len = 1)
assertDataFrame(templateDF)
assertDataFrame(v)
assertDataFrame(m)
assertList(h)
assertNumeric(pt, len = 1)
assertNumeric(pb, len = 1)
assertNumeric(ht, len = 1)
assertNumeric(hb, len = 1)
assertCharacter(dss, len = 1, pattern = "[0-9]{8}")
assertCharacter(tss, len = 1, pattern = "[0-9]{4}")
assertChoice("point", names(m))
assertSubset(c("siteName", "projectID", "lakeID",
"timeSample", "dateSample", "metadataID"),
names(h))
# If we'll be joining volumes, prepare the volumes data for the join
if(addVolumes == TRUE){
vols <- v[rowName, ] %>%
pivot_longer(cols = everything(),
names_to = "type",
values_to = "volFiltered") %>%
filter(volFiltered > 0)
}
# First, take the moieties data...
rows <- m[rowName, ] %>%
# remove the point samples
select(-point) %>%
# pivot to long format so it's easier to work with.
pivot_longer(cols = everything(),
names_to = "type",
values_to = "nSamples") %>%
# Remove any rows where nSamples is 0.
filter(nSamples > 0) %>%
# If volumes = T, join volume info (prepared above)
{if(addVolumes == TRUE) left_join(., vols, by = "type") else .} %>%
# Expand nSamples to give us the right number of rows
uncount(nSamples) %>%
# The "type" column technically contains both depthClasses and sites.
# That's confusing. Let's separate them.
## 1. For the actual depth classes, add the site name from `header`.
## Else, use `type` as site.
mutate(site = case_when(type %in% c("PML", "hypo") ~ h$siteName,
TRUE ~ word(type,2,2,sep="_")),
## 2. For the sites, assign depthClass "surface".
## Else, use `type` as depthClass
depthClass = case_when(!type %in% c("PML", "hypo") ~ "surface",
TRUE ~ type)) %>%
# Define depths conditionally based on depth classes
mutate(depthTop = case_when(depthClass == "surface" ~ 0,
depthClass == "PML" ~ pt,
depthClass == "hypo" ~ ht),
depthBottom = case_when(depthClass == "surface" ~ 0,
depthClass == "PML" ~ pb,
depthClass == "hypo" ~ hb)) %>%
# Add the rest of the information from the header:
mutate(!!idName := paste0(idPrefix,
seq(from = idStart,
length.out = nrow(.))),
projectID = h$projectID,
dateSample = h$dateSample,
timeSample = h$timeSample,
metadataID = h$metadataID,
comments = NA) %>%
# If these are color rows, add abs440 and g440
{if(rowName == "color") mutate(., abs440 = NA, g440 = NA) else .} %>%
# Add a `replicate` column only if replicates = T
{if(addReplicates == TRUE) {group_by(., site, depthClass) %>%
mutate(replicate = 1:n()) %>%
ungroup()} else .}
# deal with lakeID (lakes v. streams)
rows$lakeID = unlist(lapply(strsplit(rows$type,"_"),function(x){return(x[1])}))
rows$lakeID[rows$site==h$siteName] = h$lakeID
rows$type <- NULL
# Create sampleID's
rows$sampleID = ifelse(grepl("_",rows$site),
yes=paste(rows$site, dss, tss,rows$depthClass, rows$depthBottom, rows$metadataID, sep = "_"),
no=paste(rows$lakeID,rows$site, dss, tss,rows$depthClass, rows$depthBottom, rows$metadataID, sep = "_"))
rows <- rows %>%
# Put the columns in the right order
select(names(templateDF)) %>%
as.data.frame()
# Return the finished data
return(rows)
}
# General dataRowsPoint function ------------------------------------------
dataRowsPoint <- function(idName, idPrefix, idStart = curID, addReplicates = F,
addVolumes = F, volumesRowName = NULL, templateDF,
color = F, dss = dateSampleString,
tss = timeSampleString,
h = header, v = volumes, p = profData){
assertCharacter(idName, len = 1)
assertCharacter(idPrefix, len = 1)
assertNumeric(idStart, len = 1)
assertLogical(addReplicates, len = 1)
assertLogical(addVolumes, len = 1)
assertLogical(color, len = 1)
assertCharacter(volumesRowName, len = 1, null.ok = T)
assertDataFrame(templateDF)
assertDataFrame(v)
assertList(h)
assertCharacter(dss, len = 1, pattern = "[0-9]{8}")
assertCharacter(tss, len = 1, pattern = "[0-9]{4}")
  assertChoice("point", names(v))
  assertDataFrame(p)
  assertSubset(c("point", "depth"), names(p))
assertSubset(c("siteName", "projectID", "lakeID",
"timeSample", "dateSample", "metadataID"),
names(h))
# If we'll be joining volumes, prepare the volumes data for the join
if(addVolumes == TRUE){
vol <- v[volumesRowName, "point"] # this will be a scalar
}
# First, take the profile data...
r <- p %>%
filter(!is.na(point)) %>%
select("depthTop" = depth) %>%
# Assign ID's that count up from the idStart
mutate(!!idName := paste0(idPrefix, seq(from = idStart,
length.out = nrow(.)))) %>%
# Add other header info
mutate(projectID = h$projectID,
lakeID = h$lakeID,
site = h$siteName,
dateSample = h$dateSample,
timeSample = h$timeSample,
depthClass = "point", # because these are point samples
depthBottom = depthTop,
comments = NA,
metadataID = h$metadataID) %>%
mutate(sampleID = paste(lakeID, site, dss, tss, "point",
depthBottom, metadataID, sep = "_")) %>%
# Add volumes only if addVolumes is TRUE
{if(addVolumes == TRUE) mutate(., volFiltered = vol) else .} %>%
# If these are color rows, add abs440 and g440
{if(color == TRUE) mutate(., abs440 = NA, g440 = NA) else .} %>%
{if(addReplicates == TRUE) {group_by(., sampleID) %>%
mutate(replicate = 1:n()) %>%
ungroup()} else .} %>%
# Transform to data frame
as.data.frame()
# Return the finished rows
return(r)
}
# zoopDataRows ------------------------------------------------------------
zoopDataRows <- function(c = curID, h = header, dss = dateSampleString, tss = timeSampleString, df = zoopIS){
assertNumeric(c, len = 1)
assertList(h)
assertCharacter(dss, len = 1, pattern = "[0-9]{8}")
assertCharacter(tss, len = 1, pattern = "[0-9]{4}")
assertDataFrame(df)
assertSubset(c("projectID", "lakeID", "siteName", "dateSample", "timeSample", "zoopDepth", "metadataID"),
names(h))
rows <- data.frame(
zoopID = paste0("Z", c), # XXX asked Stuart if this is right
projectID = h$projectID,
lakeID = h$lakeID,
site = h$siteName,
dateSample = h$dateSample,
timeSample = h$timeSample,
depthClass = "tow",
depthTop = 0,
depthBottom = h$zoopDepth,
comments = "",
metadataID = h$metadataID,
stringsAsFactors = F
) %>%
mutate(sampleID = paste(lakeID, site, dss, tss,
"tow", depthBottom, metadataID, sep = "_")) %>%
select(names(df)) %>%
as.data.frame()
return(rows)
}
# tochar ------------------------------------------------------------------
# shortcut for converting all columns in a data frame to character
tochar <- function(df){
assertDataFrame(df)
df2 <- df %>%
mutate(across(everything(),
as.character))
return(df2)
}
/R/pow.R | cran/sirt | no_license
## File Name: pow.R
## File Version: 0.05
pow <- function(x, a)
{
return( x^a )
}
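Because it is a thin wrapper around `^`, `pow` vectorizes over both arguments:

```r
pow(2, 3)          # 8
pow(1:4, 2)        # 1 4 9 16
pow(9, c(0.5, 1))  # 3 9
```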
/week4lib.R | mbac/courseraweek4 | no_license
# We define a function which takes a string as argument, builds file names
# from it, reads in data and sets it up for further evaluation.
# Requires: readr (read_table), data.table (setDT), stringr (str_to_lower).
library(readr)
library(data.table)
library(stringr)
loadData <- function(suffix, extension = 'txt') {
  if (!is.character(suffix) || suffix == "")
    # Throw a fit
    stop("Invalid argument; must be a string prefix for file names")
# Set the name of the directory containing data and prepare file path
data_dir <- file.path(getwd(), DATA_DIRECTORY)
# Common path/filename for all data files: measurements, activity type,
# subject id
base_datafile <- file.path(
data_dir,
paste0("X_", suffix, ".", extension)
)
base_activityfile <- file.path(
data_dir,
# Add file extension
paste0("y_", suffix, ".", extension)
)
base_subjectfile <- file.path(
data_dir,
# Add file extension
paste0("subject_", suffix, ".", extension)
)
# Load training data set, by pasting the generic file names and the suffix, supplied
# as argument.
data <-
read_table(base_datafile, col_names = FALSE, col_types = cols())
# a separate file contains the activity ID variable
activity <-
read_table(base_activityfile,
col_names = FALSE,
col_types = cols())
# Subject IDs
subjects <-
read_table(base_subjectfile,
col_names = FALSE,
col_types = cols())
# Convert large tibbles/dfs to data.table's for faster operation
setDT(data)
setDT(activity)
setDT(subjects)
# Apply variable names to the main data set. Make sure you refer to the column
# contents… Also note that we've added a variable at the beginning: activity.
# This has to be labeled manually.
# Var names are to be set to lowercase for better looks
names(data) <- str_to_lower(variable_names[[2]])
# Combine DTs: add activity IDs's to the data
data[, activity := ..activity[[1]]]
data[, subjects := ..subjects[[1]]]
# activity type should be a factor. Its labels are stored in the label variable.
# Again using data.table approach for possibly faster computing.
data[, activity := factor(activity, labels = activity_labels[[2]])]
  # return the assembled data.table
  data
}
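A hypothetical call of `loadData`; `DATA_DIRECTORY`, `variable_names`, and `activity_labels` are assumed to be defined elsewhere in the project (as the comments imply), and the directory name below is only an example value:

```r
library(readr); library(data.table); library(stringr)

DATA_DIRECTORY <- "UCI HAR Dataset"  # example value; set to the real data folder
train <- loadData("train")  # reads X_train.txt, y_train.txt, subject_train.txt
test  <- loadData("test")
full  <- rbindlist(list(train, test))  # stack both sets for later summaries
```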
/tests/testthat/test-nltt_diff.R | cran/nLTT | no_license
context("nltt_diff")
test_that("nltt_diff must signal it cannot handle polytomies, #23", {
phylogeny_1 <- ape::read.tree(text = "(a:1,b:1):1;")
phylogeny_2 <- ape::read.tree(
text = "((d:0.0000001,c:0.0000001):1,b:1,a:1):1;")
phylogeny_3 <- ape::read.tree(
text = "(((d:0.000000001,c:0.000000001):1,b:1):0.000000001,a:1.000000001):1;") # nolint
expect_equal(nLTT::nltt_diff(phylogeny_1, phylogeny_1), 0.00,
tolerance = 0.0001)
expect_equal(nLTT::nltt_diff(phylogeny_1, phylogeny_3), 0.25,
tolerance = 0.0001)
expect_equal(nLTT::nltt_diff(phylogeny_3, phylogeny_3), 0.00,
tolerance = 0.0001)
expect_error(nLTT::nltt_diff(phylogeny_1, phylogeny_2),
"phylogenies must both be binary")
})
/scripts/cognitive/GEE_GAMM/gee_gam_example.R | PennBBL/pncLongitudinalPsychosis | no_license
# Example using GAM + GEE to test significance on example vertices and in simulated data
library(mgcv)
library(geepack)
# install.packages("doBy")
library(doBy)
library(MASS)
setwd("/home/smweinst/test_vertices_gee_gam/")
## Example vertices ----
# load in example vertices
Long_Motor<-readRDS('LongFormat_MotorVert.rds')
Long_Visual<-readRDS('LongFormat_VisualVert.rds')
Long_PFC<-readRDS('LongFormat_PFCVert.rds')
# load in mean vertex values at each scale for each subj. as pseudo example vertex
Long_Avg<-readRDS('LongFormat_AvgVert.rds')
# is "value" column of Long_PFC the same as BW_FC in the other datasets? going to assume yes (need consistent variable names for the loop below)
names(Long_PFC)[which(names(Long_PFC)=="value")] = "BW_FC"
# list different vertex datasets
list_vertices = list(motor = Long_Motor, visual = Long_Visual, pfc = Long_PFC, avg = Long_Avg)
n_vertex = length(list_vertices)
# L_contrast:
# 0's correspond to coefficients where variable is included in both the full and reduced model
# 1's correspond to coefficients that we want to test (jointly) if = 0
L_contrast = c(rep(0,3),rep(1,8))
pvalX2 = vector(mode = "numeric", length = n_vertex)
# p-value for every vertex
names(pvalX2) = names(list_vertices)
for (v in 1:n_vertex){ # loop over the vertices
vertex_dat = list_vertices[[v]]
# gam with smooth terms for scale, age, and interaction between scale and age
# doesn't account for within-subject correlation, but running this model so that we can extract the model.matrix
fit_gam = mgcv::gam(BW_FC~Motion+Sex+s(Scale,k=3,fx=T)+s(Age,k=3,fx=T)+ti(Scale,Age,k=3,fx=T),
data=vertex_dat)
# extract model matrix from GAM and then input the smooth terms to a gee (assuming exchangeable correlation structure, might be wrong)
gam.model.matrix = cbind(model.matrix(fit_gam), bblid = vertex_dat$bblid,
BW_FC = vertex_dat$BW_FC, Scale = vertex_dat$Scale)
gam.model.df = data.frame(gam.model.matrix)
gam.model.df = gam.model.df[order(gam.model.df$bblid),] # sort by ID for geeglm
# doing the rest with a geeglm will give robust variance estimators, accounting for within-subject
# correlation which was not the case in the gam part of the output from GAMM when we were doing bootstrapping
fit_gee = geepack::geeglm(BW_FC~Motion + Sex2 + s.Scale..1 +
s.Scale..2 + s.Age..1 + s.Age..2 +
ti.Scale.Age..1 + ti.Scale.Age..2 + ti.Scale.Age..3 +
ti.Scale.Age..4, id = gam.model.df$bblid,
data = gam.model.df, corstr = "exchangeable")
# joint test of whether the coefficients corresponding to a `1` in L_contrast differ from 0
joint_test = doBy::esticon(obj = fit_gee,L = L_contrast,joint.test = T)
# p-value based on chi-square with 1 df
pvalX2[v] = pchisq(joint_test$X2.stat, df = 1, lower.tail = F)
}
# p-value for each example vertex:
# print(pvalX2)
# motor visual pfc avg
# 3.615059e-53 1.487835e-17 4.321208e-05 9.509597e-184
# example testing a single coefficient instead of doing a joint test:
vertex_dat = list_vertices[[3]]
fit_gam = mgcv::gam(BW_FC~Motion+Sex+s(Scale,k=3,fx=T)+s(Age,k=3,fx=T)+ti(Scale,Age,k=3,fx=T),
data=vertex_dat)
gam.model.matrix = cbind(model.matrix(fit_gam), bblid = vertex_dat$bblid,
BW_FC = vertex_dat$BW_FC)
gam.model.df = data.frame(gam.model.matrix)
gam.model.df = gam.model.df[order(gam.model.df$bblid),] # sort by ID for geeglm
# doing the rest with a geeglm will give robust variance estimators, accounting for within-subject
# correlation which was not the case in the gam part of the output from GAMM when we were doing bootstrapping
fit_gee = geepack::geeglm(BW_FC~Motion + Sex2 + s.Scale..1 +
s.Scale..2 + s.Age..1 + s.Age..2 +
ti.Scale.Age..1 + ti.Scale.Age..2 + ti.Scale.Age..3 +
ti.Scale.Age..4, id = as.factor(gam.model.df$bblid),
data = gam.model.df, corstr = "exchangeable")
# joint test of whether the coefficients corresponding to a `1` in L_contrast differ from 0
joint_test_TRUE = doBy::esticon(obj = fit_gee,L = c(0,1,rep(0,9)),joint.test = T)
print(joint_test_TRUE)
joint_test_FALSE = doBy::esticon(obj = fit_gee,L = c(0,1,rep(0,9)),joint.test = F)
print(joint_test_FALSE)
## Simulations ----
# X1 ~ N(0,1) for subjects 1,...,n (one per subject)
# (v1, v2, v3) ~ MVN((0,0,0), Sigma) Sigma is a 3x3 covariance matrix
## transform data from wide to long format so that the column "v" in the long data includes v1, v2, and v3 for each subject
## (this is supposed to be analogous to the "scale" variable in the vertex examples)
# correlation structure scenarios:
## (1) correlation structure is exchangeable (correct specification of the GEE)
## and (2) correlation structure is not exchangeable (incorrect specification of the GEE)
# outcome variable:
# Y = a*(X1) + b*(v^3 + (v^3)*X1) + rnorm(n) -- add N(0,1) noise to each observation
# a and b are coefficients specified in the simulations; we want to test whether b = 0 (joint test of whether the coefficient(s) on the smooth terms from the GEE are 0)
# covariance matrices:
## exchangeable correlation/covariance structure:
cov_empty = matrix(nrow=3,ncol=3)
diag(cov_empty) = 1
cov1 = cov_empty; cov1[which(is.na(cov1))] = 0.9
cov2 = cov_empty; cov2[which(is.na(cov2))] = 0.6
# one example with some other correlation structure (gee correlation misspecified):
cov3 = cov_empty; cov3[upper.tri(cov3)] = c(0.2,0.6,0.1); cov3[lower.tri(cov3)] = cov3[upper.tri(cov3)]
par.vals = expand.grid(a = 1, # coefficient for linear terms in true model for Y
b = seq(0,0.1,by = 0.02), # coefficient for smooth terms in true model for Y
# null hypothesis of interest is true when b = 0
covar = paste0("cov", 1:3)
)
nsim = 1000
n = 500 # number of subjects; `n` is used below but never set in this script -- value assumed here
set.seed(917)
out = list()
pb <- txtProgressBar(min = 0, max = nrow(par.vals), style = 3)
system.time(
for (j in 1:nrow(par.vals)){
out[[j]] = vector(mode = "numeric", length = nsim)
for (sim in 1:nsim){
dat.sim = MASS::mvrnorm(n = n, mu = rep(0,3), Sigma = get(as.character(par.vals$covar[j])))
colnames(dat.sim) = paste0("v",1:3)
dat.sim = data.frame(cbind(id = 1:n, dat.sim))
dat.sim$X1 = rnorm(n) # covariate X1
# dat.sim$X2 = rnorm(n)
# convert data from wide to long format (multiple rows per subject)
dat.sim_long = reshape(dat.sim, idvar = "id", varying = list(c("v1","v2","v3")),v.names = c("v"),direction = "long")
dat.sim_long = dat.sim_long[order(dat.sim_long$id),]
    dat.sim_long$Y = par.vals$a[j]*(dat.sim_long$X1) + par.vals$b[j]*(dat.sim_long$v^3 + (dat.sim_long$v^3)*dat.sim_long$X1) + rnorm(nrow(dat.sim_long)) # N(0,1) noise for every observation (3 rows per subject)
# fit gam to get model.matrix
fit_gam = mgcv::gam(Y~X1+s(v,k=3,fx=T)+ ti(X1,v,k=3,fx=T),
data=dat.sim_long)
gam.model.matrix = cbind(model.matrix(fit_gam), id = dat.sim_long$id,
Y = dat.sim_long$Y)
gam.model.df = data.frame(gam.model.matrix)
# gee using terms from model.matrix from the gam above; assuming exchangeable correlation structure (which is true in the simulated data...)
fit_gee = geepack::geeglm(Y~X1 + s.v..1 + s.v..2 + ti.X1.v..1 + ti.X1.v..2 + ti.X1.v..3 + ti.X1.v..4, id = gam.model.df$id,
data = gam.model.df, corstr = "exchangeable")
# joint test of the smooth terms
joint_test = doBy::esticon(obj = fit_gee,L = c(0,0,1,1,1,1,1,1),joint.test = T)
out[[j]][sim] = pchisq(joint_test$X2.stat, df = 1, lower.tail = F)
}
par.vals$p_reject[j] = length(which(out[[j]]<0.05))/nsim # for each set of parameters, get the proportion of simulations in which H_0 is rejected
setTxtProgressBar(pb, j)
}
)
close(pb)
# plot power and type I error
par(mfrow = c(1,1))
plot(x = par.vals$b, y=par.vals$p_reject, type = 'n',
xlab = "b", ylab = "pr(reject joint H0)",ylim = c(0,1), xlim = c(min(par.vals$b), max(par.vals$b)))
abline(h = c(0.05,0.1), col = c("blue","red"), lwd = c(2,2))
# specs for plot by a and covariance
legend_specs = unique(par.vals[,c("a","covar")])
legend_specs$lty = as.numeric(substr(legend_specs$covar,4,4))
for (c in unique(par.vals$covar)){
points(par.vals$b[which(par.vals$covar==c)],
par.vals$p_reject[which(par.vals$covar==c)],
type = 'b',
lty = legend_specs$lty[which(legend_specs$covar==c)],
lwd = 1.5, pch = 16, cex =0.8)
}
legend("topleft",lty = c(legend_specs$lty,1,1),
legend = c("exchangeable (rho = 0.2)", "exchangeable (rho = 0.6)", "misspecified correlation structure", "p = 0.05", "p = 0.1"),
col = c(rep("black",3),"blue","red"),lwd = 1.5, bty = 'n',
cex = 0.8)
# cov1 - exchangeable
# [,1] [,2] [,3]
# [1,] 1.0 0.2 0.2
# [2,] 0.2 1.0 0.2
# [3,] 0.2 0.2 1.0
# cov2 - exchangeable
# [,1] [,2] [,3]
# [1,] 1.0 0.6 0.6
# [2,] 0.6 1.0 0.6
# [3,] 0.6 0.6 1.0
# cov3 - misspecified
# [,1] [,2] [,3]
# [1,] 1.0 0.2 0.6
# [2,] 0.2 1.0 0.1
# [3,] 0.6 0.1 1.0
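The wide-to-long `reshape()` call used inside the simulation loop can be illustrated on a tiny stand-alone example (the column names v1..v3 mirror the simulated data; the values are made up):

```r
# Minimal sketch of reshape() from wide (one row per subject) to long
# (one row per repeated measure); toy values for illustration only
toy = data.frame(id = 1:2, v1 = c(10, 20), v2 = c(11, 21), v3 = c(12, 22))
toy_long = reshape(toy, idvar = "id",
                   varying = list(c("v1", "v2", "v3")),
                   v.names = "v", direction = "long")
toy_long = toy_long[order(toy_long$id), ]
# toy_long has 6 rows with columns id, time (1..3) and the stacked values v
```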
|
bd4c883e9a00184744a732cf570aec70f778097c
|
88c04d94a33b19d0c537e81ffae62018d2a18d34
|
/R/ordprob.pair.R
|
b57dd8645ecf8b009ae82aab30664c19ed54106a
|
[] |
no_license
|
cran/PLordprob
|
4512c18ed79b947878196050e0823e8c252b873c
|
be3d8d83587847ee4c846a4515141054ded0573e
|
refs/heads/master
| 2020-05-19T11:03:36.748185
| 2018-06-05T15:34:18
| 2018-06-05T15:34:18
| 25,037,091
| 0
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,273
|
r
|
ordprob.pair.R
|
ordprob.pair <-
function(param,K,x,ss,data,same.means=FALSE){
if(!is.null(x)){
x = as.matrix(x)
p <- dim(x)[2]
}
else p = 0
data = as.matrix(data)
n = nrow(data)
q = ncol(data)
delta = param[1:(K-2)]
a = delta2a(delta)
beta = if (p > 0) param[(K-2) + seq_len(p)] else numeric(0) # seq_len avoids the 1:0 pitfall when p = 0
if(same.means){
xi = rep(param[K-2+p+1], q)
corr = matrix(1,q,q)
corr[upper.tri(corr)] = param[-c(1:(K-2+p+1))]
corr[lower.tri(corr)]=t(corr)[lower.tri(t(corr))]
}
else{
xi = param[(1:(q))+(K-2+p)]
corr = matrix(1,q,q)
corr[upper.tri(corr)] = param[-c(1:(K-2+p+q))]
corr[lower.tri(corr)]=t(corr)[lower.tri(t(corr))]
}
if(!is.null(x)){
mu = x %*% matrix(beta,p,1)
}
else mu = rep(0, n)
res = 0.0
out = .C("cat_pair_llik_real2",
res = as.double(res),
data = as.double(data),
corr = as.double(corr),
mu = as.double(mu),
xi = as.double(xi),
ss = as.double(ss),
tresh = as.double(a),
n = as.integer(n),
q = as.integer(q),
nlev = as.integer(K),
two = as.integer(2), PACKAGE= "PLordprob")
-out$res
}
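The triangle assignment used above to rebuild a symmetric correlation matrix from a vector of upper-triangle parameters can be checked in isolation (q = 3 and the values are illustrative):

```r
# Sketch: rebuild a symmetric q x q correlation matrix from its
# upper-triangle entries (column-major order)
q = 3
corr = matrix(1, q, q)
corr[upper.tri(corr)] = c(0.2, 0.6, 0.1)
corr[lower.tri(corr)] = t(corr)[lower.tri(t(corr))]
isSymmetric(corr)  # TRUE
```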
|
0a39b816690f63be8a2bcbf3388e0677d5dc0037
|
2f87fc27b2d6e10b736cf07f7f991fa4d37873f4
|
/tests/testthat/test-commutative-addition.R
|
24dd781a6dc8d4262898244424d931abd728cbb9
|
[] |
no_license
|
algebraic-graphs/R
|
37f7ac5fee5bf8eb49e02a36cac15d1df039bc93
|
8316776a21024b81b2300d1c037a9328a2b92cf9
|
refs/heads/master
| 2023-08-01T05:48:36.588424
| 2021-09-24T13:51:57
| 2021-09-24T13:51:57
| 409,034,609
| 2
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 156
|
r
|
test-commutative-addition.R
|
test_that("Addition is commutative", {
library(tidyverse)
library(tidygraph)
library(ralget)
expect_true(v("a") + v("b") == v("b") + v("a")
)
})
|
3ff68d3d3999192a97dff32d40ad50b84ace61ea
|
8455fc20fed9641f65ed8a5b2e065c7e8075e730
|
/man/which.max.Rd
|
208e77686aa8180024a32487435a52a5d66b78f0
|
[] |
no_license
|
andreas50/uts
|
0cfb629448886bcee992e6ae8ab453d15fd366ff
|
f7cea0d2ba074d332a4eb9b5498451fe0bc9a94f
|
refs/heads/master
| 2021-07-24T13:41:29.982215
| 2021-04-05T14:41:04
| 2021-04-05T14:41:04
| 35,902,127
| 16
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 818
|
rd
|
which.max.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/which.R
\name{which.max}
\alias{which.max}
\alias{which.max.default}
\title{Generic which.max function}
\usage{
which.max(x, ...)
\method{which.max}{default}(x, ...)
}
\arguments{
\item{x}{an \R object.}
\item{\dots}{further arguments passed to or from methods.}
}
\description{
The function is needed, because \code{\link[base:which.max]{which.max}} of base \R is not generic.
}
\section{Methods (by class)}{
\itemize{
\item \code{default}: simply calls the default implementation of base \R
}}
\note{
As recommended in Section 7.1 ("Adding new generics") of "Writing R Extensions", the implementation of \code{\link{which.max.default}} has been made a wrapper around \code{\link[base:which.max]{base::which.max}}.
}
\keyword{internal}
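The pattern described in the note — turning a non-generic base function into an S3 generic whose default method falls back to base R — can be sketched as follows (a minimal illustration, not the package's actual source):

```r
# Sketch of the "adding new generics" pattern from Writing R Extensions 7.1
which.max <- function(x, ...) UseMethod("which.max")
which.max.default <- function(x, ...) base::which.max(x)
```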
|
78b33bbb031b9c9d2771170d76f13370dc444642
|
cd6aa9ddbfc6d63d7f66cbc3d4576fb89abfd611
|
/building_long-form_tables.R
|
05ac1947c61f0564d37fab349bc826b0df456c2b
|
[] |
no_license
|
genevievekathleensmith/shakespeare-chronordination
|
315c8ceb1250c72ecdefed6314154813e75ff7e2
|
5eb69a16c52716c8c3b9e56e5981eccfe04321c7
|
refs/heads/master
| 2021-01-25T10:39:18.378317
| 2014-04-29T15:53:35
| 2014-04-29T15:53:35
| 17,147,583
| 1
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 3,087
|
r
|
building_long-form_tables.R
|
# For the up to date counts, including Arden of Faversham:
#counts = read.csv('~/Documents/shakespeare-chronordination/SH DATA all counts A minus C plus ARD.csv')
#verse = read.csv('~/Documents/shakespeare-chronordination/verse_line_counts plus ARD.csv')
# For the revised version of 1H6:
#counts = read.csv('~/Documents/shakespeare-chronordination/SH DATA red 1H6.csv')
#verse = read.csv('~/Documents/shakespeare-chronordination/verse_line_counts red 1H6.csv')
# For the revised version of Ed3:
#counts = read.csv('~/Documents/shakespeare-chronordination/SH DATA exp Ed3.csv')
#verse = read.csv('~/Documents/shakespeare-chronordination/verse_line_counts exp Ed3.csv')
# For both revisions, i.e. FULLY UP TO DATE FINAL COUNTS:
counts = read.csv('~/Documents/shakespeare-chronordination/SH DATA.csv')
verse = read.csv('~/Documents/shakespeare-chronordination/verse_line_counts.csv')
merged = merge(counts,verse)
data = merged[,c(11,2:10)]
first = as.data.frame(lapply(data[,1:2], function(x) rep(x, data$first)))
first$pause = rep('first',length(first$abbrv))
second = as.data.frame(lapply(data[,1:2], function(x) rep(x, data$second)))
second$pause = rep('second',length(second$abbrv))
third = as.data.frame(lapply(data[,1:2], function(x) rep(x, data$third)))
third$pause = rep('third',length(third$abbrv))
fourth = as.data.frame(lapply(data[,1:2], function(x) rep(x, data$fourth)))
fourth$pause = rep('fourth',length(fourth$abbrv))
fifth = as.data.frame(lapply(data[,1:2], function(x) rep(x, data$fifth)))
fifth$pause = rep('fifth',length(fifth$abbrv))
sixth = as.data.frame(lapply(data[,1:2], function(x) rep(x, data$sixth)))
sixth$pause = rep('sixth',length(sixth$abbrv))
seventh = as.data.frame(lapply(data[,1:2], function(x) rep(x, data$seventh)))
seventh$pause = rep('seventh',length(seventh$abbrv))
eighth = as.data.frame(lapply(data[,1:2], function(x) rep(x, data$eighth)))
eighth$pause = rep('eighth',length(eighth$abbrv))
ninth = as.data.frame(lapply(data[,1:2], function(x) rep(x, data$ninth)))
ninth$pause = rep('ninth',length(ninth$abbrv))
fulldata = rbind(first[,c(1,3)],second[,c(1,3)],third[,c(1,3)],fourth[,c(1,3)],fifth[,c(1,3)],sixth[,c(1,3)],seventh[,c(1,3)],eighth[,c(1,3)],ninth[,c(1,3)])
#table(fulldata)
#write.csv(fulldata,'~/Documents/shakespeare-chronordination/fulldata.csv',row.names=F)
#write.csv(table(fulldata)[,c(3,6,9,4,2,8,7,1,5)],'~/Documents/shakespeare-chronordination/tabledata.csv',row.names=F)
#write.csv(levels(fulldata$abbrv),'~/Documents/shakespeare-chronordination/playnames.csv',row.names=F)
#install.packages('R.matlab')
library(R.matlab)  # needed for the writeMat() calls below
#plays = as.character(fulldata$abbrv)
#pauses = fulldata$pause
#writeMat('~/Documents/shakespeare-chronordination/MATLAB/plays.mat',plays=plays)
#writeMat('~/Documents/shakespeare-chronordination/MATLAB/pauses.mat',pauses=pauses)
A = as.character(fulldata$abbrv)
P = fulldata$pause
writeMat('~/Documents/shakespeare-chronordination/MATLAB/abbrvs_and_pauses.mat', A=A, P=P)
V = merged[,c(11,13)]
writeMat('~/Documents/shakespeare-chronordination/MATLAB/verse_lines.mat', V=V)
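The nine near-identical rep() blocks above could be collapsed into a single loop over the pause names; this is a sketch that assumes the same `data` layout (play abbreviation in column 1, one count column per pause):

```r
# Sketch: build the long-form table in one loop instead of nine copied blocks
pause_names = c("first","second","third","fourth","fifth",
                "sixth","seventh","eighth","ninth")
fulldata2 = do.call(rbind, lapply(pause_names, function(p) {
  d = as.data.frame(lapply(data[, 1:2], function(x) rep(x, data[[p]])))
  d$pause = rep(p, nrow(d))
  d[, c(1, 3)]
}))
```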
|
660e9362f0f2d47c7c8ff75c431e4f557c806a60
|
360df3c6d013b7a9423b65d1fac0172bbbcf73ca
|
/FDA_Pesticide_Glossary/(_-chloroethyl)trime.R
|
a803729c5e7b8322d3ce32b96de456684471e900
|
[
"MIT"
] |
permissive
|
andrewdefries/andrewdefries.github.io
|
026aad7bd35d29d60d9746039dd7a516ad6c215f
|
d84f2c21f06c40b7ec49512a4fb13b4246f92209
|
refs/heads/master
| 2016-09-06T01:44:48.290950
| 2015-05-01T17:19:42
| 2015-05-01T17:19:42
| 17,783,203
| 0
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 276
|
r
|
(_-chloroethyl)trime.R
|
library("knitr")
library("rgl")
#knit("(_-chloroethyl)trime.Rmd")
#markdownToHTML('(_-chloroethyl)trime.md', '(_-chloroethyl)trime.html', options=c("use_xhml"))
#system("pandoc -s (_-chloroethyl)trime.html -o (_-chloroethyl)trime.pdf")
knit2html('(_-chloroethyl)trime.Rmd')
|
734630240d22730a7b865b8b59c380002bdad435
|
ae5c61d8ffd1c03eb7ef503c3bd660451df31122
|
/run_analysis.R
|
0bcd99e664c0e2ffa7fe93e6e24cb468bc376bc3
|
[] |
no_license
|
ramithaJHU/WearableHMR
|
d2655557c4d6180acec27690e62f96bfea9a0bd0
|
40e3e65433ad30b7cfb73673e0aee49a4879ca81
|
refs/heads/master
| 2020-06-13T16:32:38.412334
| 2019-07-02T02:54:44
| 2019-07-02T02:54:44
| 194,712,300
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 4,279
|
r
|
run_analysis.R
|
# This file contains the R-code for generating tidy data out of row data collected from
# Samsung Galaxy S smartphone and the script for performing the analysis
# A group of 30 volunteers within an age bracket of 19-48 years were asked to
# perform six different activities. Their activities were recorded using the smartphone's
# embedded accelerometer and gyroscope. The data set and a complete description of the experiment
# is available at:
# http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones
#OS: Microsoft Windows 10 Pro
# OS Version: 10.0.16299 build 16299
# R version 3.5.2 (2018-12-20)
# RStudio Version: 1.1.456
library(dplyr)
## Download and unzip the dataset:
fileName <- "UCI HAR Dataset.zip"
fileURL <- "https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
if (!file.exists(fileName)){
download.file(fileURL, fileName)
}
if (!file.exists("UCI HAR Dataset")) {
unzip(fileName)
}
# Read Feature descriptions and Activity lables
features<-read.table("./UCI HAR Dataset/features.txt", col.names = c("num:", "feature"))
activity_labels<-read.table("./UCI HAR Dataset/activity_labels.txt", col.names = c("num:", "activity_label"))
# Check for duplicated features
duplicated_features <- features[duplicated(features$feature), "feature"]
# Check if there are any measurements with word mean or sd among duplicated items
grep("[Mm][Ee][Aa][Nn]|[Ss][Tt][Dd]", duplicated_features, value = TRUE) # There are none
# So even though there are duplicate columns, as we will not be using them,
# we do not need to combine/filter them out
# Remove "()" , replace "(" with "_", replace ")" with "", replace "," with "_"
# in feature names to make it meaningfull as DF column names can not contain () and ,
features$feature <- gsub("[(][)]", "", features$feature)
features$feature <- gsub("[)]", "", features$feature)
features$feature <- gsub("[(]", "_", features$feature)
features$feature <- gsub(",", "_", features$feature)
features$feature <- gsub("-", "_", features$feature)
# Read the Test and Training data into variables with the same file names
# from the subfolders within project folder
subject_test<-read.table("./UCI HAR Dataset/test/subject_test.txt", col.names = c("subject"))
X_test<-read.table("./UCI HAR Dataset/test/X_test.txt", col.names = features$feature)
Y_test<-read.table("./UCI HAR Dataset/test/Y_test.txt", col.names = c("activity"))
subject_train<-read.table("./UCI HAR Dataset/train/subject_train.txt", col.names = c("subject"))
X_train<-read.table("./UCI HAR Dataset/train/X_train.txt", col.names = features$feature)
Y_train<-read.table("./UCI HAR Dataset/train/Y_train.txt", col.names = c("activity"))
# Merge Train and Test data into single data frames of each type
X_merged <- rbind(X_test, X_train)
Y_merged <- rbind(Y_test, Y_train)
subject_merged <- rbind(subject_test, subject_train)
# Remove original data frames from memory to save memory
rm("subject_test", "X_test", "Y_test", "subject_train", "X_train", "Y_train" )
## Replace activity numbers in Y_merge with activity names
Y_merged$activity <- factor(Y_merged$activity, labels = activity_labels$activity_label)
#names(X_merged) <- grep(".", "_", names(X_merged))
# Find the column indexes, where word mean or std appear in any letter combination
features_meanStd_index <- grep("[Mm][Ee][Aa][Nn]|[Ss][Tt][Dd]", features$feature)
X_Mean_Std <- X_merged[ , features_meanStd_index]
# Create a single DF with measurements on the mean and standard deviation, subject and activity
tidyData <- cbind(X_Mean_Std, Y_merged, subject_merged)
# Create a DF that contains summary of values, which are first grouped by subject and then grouped by activity
tidyData_summarized <- tidyData %>% group_by(subject, activity) %>% summarise_all(mean) # funs() is deprecated in current dplyr
# Change Variable names to represent these are Summarized by adding "AVG" prefix
colnames(tidyData_summarized) <- c("subject", "activity", paste("AVG",
colnames(tidyData_summarized[3:ncol(tidyData_summarized)]), sep = "_"))
# Write tidy and Summarized data to a file
write.table(tidyData, "Tidy_Data.txt")
write.table(tidyData_summarized, "Tidy_Data_Summary.txt", row.name=FALSE)
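The character-class regex used above to find mean/std columns can equivalently be written with `ignore.case = TRUE`; a small illustration on made-up feature names:

```r
# Sketch: case-insensitive match for mean/std feature names
nms <- c("tBodyAcc_mean_X", "tBodyAcc_std_Y", "angle_Z", "fBodyGyro_MEAN")
grep("mean|std", nms, ignore.case = TRUE)  # matches positions 1, 2 and 4
```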
|
e3cbf51daafdbaf47959c76b8299f0d380615e07
|
1e42b9829b85bc37d112ec5b8efa1682264297b2
|
/man/trace_length.Rd
|
80ff01401827429f46f4a4ddb987a2896b1db006
|
[] |
no_license
|
strategist922/edeaR
|
ca83bf91f58e685bc9333f4db3bfea3d8c019343
|
ad96118cccfdc90a7bed94f5aef2ee0cfab3aac8
|
refs/heads/master
| 2021-07-05T04:30:35.286640
| 2017-09-27T12:25:04
| 2017-09-27T12:25:04
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 719
|
rd
|
trace_length.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/trace_length.R
\name{trace_length}
\alias{trace_length}
\title{Metric: Trace length}
\usage{
trace_length(eventlog, level_of_analysis = c("log", "trace", "case"))
}
\arguments{
\item{eventlog}{The event log to be used. An object of class
\code{eventlog}.}
\item{level_of_analysis}{At which level the analysis of trace_length should be performed: log, case or trace.}
}
\description{
Computes the length of each trace, in terms of the number of events, at the level of the eventlog or the level of a trace.
The relative numbers at trace level measure trace length compared to the average trace length of the top 80% cases, approximately.
}
|
526c452ff890f63415211d4ae5e54c33d08b918b
|
ccbed05be4d46205ef49ba1cc472521395c69ad4
|
/shiny/wildebeest/ga4_network.R
|
08ad7ccd15810a441d4ad86fed6dd712817ad517
|
[] |
no_license
|
govau/GAlileo
|
4936abc28f2f2e8f0282e8c8e86712723a421b7e
|
eba52024ca3a930571c53184b5275d80de3e04a9
|
refs/heads/develop
| 2022-09-18T18:51:46.153133
| 2022-09-05T05:18:18
| 2022-09-05T05:18:18
| 183,550,388
| 9
| 3
| null | 2021-12-03T12:15:00
| 2019-04-26T03:28:49
|
Jupyter Notebook
|
UTF-8
|
R
| false
| false
| 3,205
|
r
|
ga4_network.R
|
# network theory in R
raw_data <- read.csv('~/Documents/network_diagram_ga4/nodes_data.csv')
library(stringr)
# clean up a bit
raw_data$from_hostname <- gsub( "www.", "", raw_data$from_hostname, ignore.case = T)
raw_data$from_hostname <- gsub( "m.facebook.com", "facebook.com", raw_data$from_hostname, ignore.case = T)
raw_data$from_hostname <- gsub( "l.facebook.com", "facebook.com", raw_data$from_hostname, ignore.case = T)
raw_data$from_hostname <- gsub( "lm.facebook.com", "facebook.com", raw_data$from_hostname, ignore.case = T)
raw_data$from_hostname <- gsub( "pinterest", "pinterest.com.mx", raw_data$from_hostname, ignore.case = T)
library(shiny)
library(networkD3)
library(DT)
library(dplyr)
# Define UI ----
# User interface ----
ui <- shinyUI(fluidPage(
titlePanel(h1("DTA")),
sidebarLayout(
position = "left",
sidebarPanel(width = 2,
h2("Controls"),
#selectInput("var",
# label = "Choose a user journey",
# choices = c("all journeys", "carers journey",
# "start a business journey", "enrol in uni journey"),
# selected = "all journeys"),
sliderInput("opacity", "Node Opacity", 0.6, min = 0.1,
max = 1, step = .1),
sliderInput("events", "minimum events", 1500, min = 0,
max = 20000, step = 50)
),
mainPanel(
tabsetPanel(
tabPanel("Network Graph", simpleNetworkOutput("simple"), width = 13),
tabPanel("Dyad table", DT::dataTableOutput("table")),
tabPanel("Referring domain user count", DT::dataTableOutput("refer")),
tabPanel("Receiving domain user count", DT::dataTableOutput("receive"))
)
))))
# Define server logic ----
server <-
function(input, output) {
networkData <- reactive(raw_data %>%
filter(events > input$events) %>%
select(from_hostname, to_hostname))
refer_domain <- raw_data %>%
group_by(from_hostname) %>%
summarise(total = sum(events)) %>%
arrange(desc(total))
receive_domain <- raw_data %>%
group_by(to_hostname) %>%
summarise(total = sum(events)) %>%
arrange(desc(total))
output$simple <- renderSimpleNetwork({
simpleNetwork(networkData(), opacity = input$opacity, zoom = T)
})
output$table <- DT::renderDataTable({
DT::datatable(raw_data[,c("from_hostname", "to_hostname", "events")],
options = list(lengthMenu = c(10, 25, 50, 100), pageLength = 10))
})
output$refer <- DT::renderDataTable({
    DT::datatable(refer_domain,
                  options = list(lengthMenu = c(10, 25, 50, 100), pageLength = 10))
  })
  output$receive <- DT::renderDataTable({
    DT::datatable(receive_domain,
options = list(lengthMenu = c(10, 25, 50, 100), pageLength = 10))
})
}
# Run the app ----
app <- shinyApp(ui = ui, server = server)
if (Sys.getenv("PORT") != "") {
  runApp(app, host = "0.0.0.0", port = strtoi(Sys.getenv("PORT")))
} else {
  runApp(app)
}
|
20e6982604716f2d8bc1a6671e676f2bf114510b
|
9753d94f00a9db2bb5dea5a711bf3a976fa19fb7
|
/3. Probability/Sapply_2.R
|
43555dd0bb59a7c9cb86bb7294699e18cf819bc7
|
[] |
no_license
|
praveen556/R_Practice
|
a3131068011685fd1a945bf75758297a993357a7
|
0fc21f69c0027b16e0bce17ad074eeaa182170bf
|
refs/heads/master
| 2023-02-18T09:11:49.982980
| 2021-01-17T16:07:28
| 2021-01-17T16:07:28
| 293,621,569
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,028
|
r
|
Sapply_2.R
|
#Validating normal operations on Vectors
num_samp <- seq(1,10)
num_samp2 <- seq(101,110)
num_samp*num_samp2
num_samp*2
sqrt(num_samp)
sapply(num_samp,sqrt)
#sapply on Birthday Problem
n<-60
bth <- sample(1:365,n,replace = TRUE)
any(duplicated(bth))
#Function to calculate probability of shared bdays across n people
compute_prob <- function(n,B=10000){
same_day <- replicate(B, {
bth <- sample(1:365,n,replace = TRUE)
any(duplicated(bth))
})
mean(same_day)
}
n <-seq(1,60)
#Probability based on Monte Carlo
prob <- sapply(n,compute_prob)
plot(n,prob)
#Calculate Exact Probability
exact_prob <- function(n){
prob_uniq <- seq(365,365-n+1)/365 # vector of fractions for mult. rule
1-prod(prob_uniq) # calculate prob of no shared birthdays and subtract from 1
}
#Probability based on the exact mathematical formula
eprob <- sapply(n,exact_prob)
# plotting Monte Carlo results and exact probabilities on same graph
plot(n, prob) # plot Monte Carlo results
lines(n, eprob, col = "red") # add line for exact prob
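As a sanity check on the exact formula, the classic birthday-problem result is that 23 people already give a shared-birthday probability just over one half:

```r
# Sanity check: with n = 23 the exact probability is just over 0.5
exact_prob <- function(n){
  prob_uniq <- seq(365, 365 - n + 1)/365
  1 - prod(prob_uniq)
}
exact_prob(23)  # approximately 0.5073
```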
|
859cc540798f5cc896d3eed028b24c051b5b4125
|
a47ce30f5112b01d5ab3e790a1b51c910f3cf1c3
|
/A_github/sources/authors/2866/SSDforR/SD2legend.R
|
7d6083236406757ad8b765ff16361d8a91c7f8ad
|
[] |
no_license
|
Irbis3/crantasticScrapper
|
6b6d7596344115343cfd934d3902b85fbfdd7295
|
7ec91721565ae7c9e2d0e098598ed86e29375567
|
refs/heads/master
| 2020-03-09T04:03:51.955742
| 2018-04-16T09:41:39
| 2018-04-16T09:41:39
| 128,578,890
| 5
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 188
|
r
|
SD2legend.R
|
SD2legend <-
function(){
par(mar=c(1, 1, 1, 1))
plot.new()
legend("center", c("behavior","+2sd","mean","-2sd"), col = c("red","black", "green","black"),lwd = 3,ncol=4,bty ="n")
}
|
3ec139a392cd93fc071b86f8422e351c0855674b
|
2fb6b59645427f1e05564f15f8badc09b812b45f
|
/R/TuneMultiCritControlMBO.R
|
ab7ff80efe97bc7825546782ca4f576971fcedeb
|
[] |
no_license
|
NamwooPark/mlr
|
18cf6023e7bc4b743d40bd8df1070a098b267751
|
c08bb0090ec1999b78779a73d0242b45b2dcee42
|
refs/heads/master
| 2021-05-02T05:26:38.010558
| 2018-02-08T12:58:31
| 2018-02-08T12:58:31
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,779
|
r
|
TuneMultiCritControlMBO.R
|
#' @export
#' @inheritParams makeTuneControlMBO
#' @param n.objectives [\code{integer(1)}]\cr
#' Number of objectives, i.e. number of \code{\link{Measure}}s to optimize.
#' @rdname TuneMultiCritControl
makeTuneMultiCritControlMBO = function(n.objectives = mbo.control$n.objectives,
same.resampling.instance = TRUE, impute.val = NULL,
learner = NULL, mbo.control = NULL, tune.threshold = FALSE, tune.threshold.args = list(),
continue = FALSE, log.fun = "default", final.dw.perc = NULL, budget = NULL,
mbo.design = NULL) {
assertInt(n.objectives, lower = 2L)
if (!is.null(learner)) {
learner = checkLearner(learner, type = "regr")
learner = setPredictType(learner, "se")
}
if (is.null(mbo.control)) {
mbo.control = mlrMBO::makeMBOControl(n.objectives = n.objectives)
mbo.control = mlrMBO::setMBOControlInfill(mbo.control, crit = mlrMBO::makeMBOInfillCritDIB())
mbo.control = mlrMBO::setMBOControlMultiObj(mbo.control)
}
assertClass(mbo.control, "MBOControl")
assertFlag(continue)
if (!is.null(budget) && !is.null(mbo.design) && nrow(mbo.design) > budget)
stopf("The size of the initial design (init.design.points = %i) exceeds the given budget (%i).",
nrow(mbo.design), budget)
else if (!is.null(budget)) {
mbo.control = mlrMBO::setMBOControlTermination(mbo.control, max.evals = budget)
}
x = makeTuneMultiCritControl(same.resampling.instance = same.resampling.instance, impute.val = impute.val,
start = NULL, tune.threshold = tune.threshold, tune.threshold.args = tune.threshold.args,
cl = "TuneMultiCritControlMBO", log.fun = log.fun, final.dw.perc = final.dw.perc, budget = budget)
x$learner = learner
x$mbo.control = mbo.control
x$continue = continue
x$mbo.design = mbo.design
return(x)
}
|
9b5c89aac5ec3dd6675a5aa8736a562687100ad1
|
175fd850d274f7c47baefb7de0440e14c8d53856
|
/T3/R_sintaxes.R
|
2217bb6c8d33dafa29cb9e09ab7ff4070be4dd32
|
[] |
no_license
|
taisbellini/intro-pacotes-estatisticos
|
e597dc4ff3aad5a2b7e3277691173ee93dc979fd
|
3a64c7f12668781c962f3cccbedc0a60dfe020ef
|
refs/heads/master
| 2020-04-28T22:50:20.844451
| 2019-06-30T13:09:34
| 2019-06-30T13:09:34
| 175,631,674
| 3
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 4,929
|
r
|
R_sintaxes.R
|
#### Slide 12 ####
2 * 4
2 *
4
#### Slide 14 ####
sqrt(2)
log(,3)
#### Slide 17 ####
valor <- 2*4
valor
x = 25
x2 = x^2
x2
#### Slide 18 ####
valor
Valor
T
t
#### Slide 19 ####
valor = 2 * 4
valor
nome = "Sexo Feminino"
nome
nome = "Sexo Feminino"
nome
#### Slide 20 ####
valor >= 5 & valor <= 10
#### Slide 25 ####
# Set the working directory
# Paste the syntax
# Tip: once the directory is set, the full file path is no longer needed
library(readxl)
honolulu <- read_excel("honolulu.xls")
#### Slide 26 ####
summary(honolulu)
honolulu$ID
table(honolulu$ID)
table(honolulu$EDUCA) # table = frequency table
honolulu$EDUCA <- as.factor(honolulu$EDUCA) # as.factor marks the variable as nominal
summary(honolulu)
#### Slide 27 ####
library(haven)
milsa <- read_sav("milsa_R.sav")
View(milsa)
# Viewing the data set with value labels
milsa2 = as_factor(milsa) # as_factor (with underscore) is from haven -> converts labelled variables to factors
milsa
milsa2
summary(milsa2)
table(milsa2)
#### Slide 29 ####
honolulu$FUMO = as.factor(honolulu$FUMO) # converts the numeric codes in increasing order (0 -> 1; 1 -> 2; ...)
levels(honolulu$FUMO) = c("Nao fumante", "Fumante") # levels assigns labels in increasing order
summary(honolulu)
#### Slide 30 ####
honolulu2 = transform(honolulu, alturam = ALTURA/100)
summary(honolulu2)
#### Slide 31 ####
honolulu2 = transform(honolulu2, pesopad = (PESO-mean(PESO))/sd(PESO) ) # transform = SPSS's COMPUTE
#### Slide 32 ####
honolulu2 = transform(honolulu2, idade3cat = cut(IDADE, breaks=c(min(IDADE), 50, 60, max(IDADE)), include.lowest = T))
summary(honolulu2)
# For intervals closed on the left and open on the right, use the argument right=F
# transform(honolulu2, idade3cat = cut(IDADE, breaks=c(min(IDADE), 50, 60, max(IDADE)), include.lowest = T, right=F))
#### Slide 33 ####
honolulu2 = transform(honolulu2, sist_quart = cut(SISTOLICA, breaks=quantile(SISTOLICA), include.lowest = T))
summary(honolulu2)
#### Slide 34 ####
honolulu2 = transform(honolulu2,
educa3cat = ifelse(EDUCA == 1, 1,
ifelse(EDUCA ==2 | EDUCA ==3, 2, 3)))
honolulu2$educa3cat = as.factor(honolulu2$educa3cat)
levels(honolulu2$educa3cat) = c("Nenhuma", "1o incomp ou compl", "2o ou tecnico")
table(honolulu2$educa3cat)
#### Slide 38 ####
table(milsa2$civil)
milsa2 = transform(milsa2,
civil = ifelse(civil == 4, 2, civil) )
table(milsa2$civil)
# Note that the value labels must be assigned again
#### Slide 39 ####
table(milsa2$regiao)
milsa2 = transform(milsa2,
regiao = ifelse(regiao == 8, NA, regiao) )
table(milsa2$regiao)
#### Slide 40 ####
library(readxl)
Ex_Datas <- read_excel("Ex_Datas.xlsx")
# Checking that it is a date
class(Ex_Datas$Data_nasc)
library(haven)
Ex_Datas2 <- read_sav("Ex_Datas.sav")
#### Slide 41 ####
exdatas = transform(Ex_Datas, idade = as.numeric(difftime( "2019-01-01", Data_nasc, units = "days"))/365.25 )
#### Slide 43 ####
milsa.10 = subset(milsa2, salario < 10)
#### Slide 44 ####
library(haven)
merge_casos1 = read_sav("MERGE_GSS93_p1_casos.sav")
merge_casos2 = read_sav("MERGE_GSS93_p2_casos.sav")
merge_1500 = rbind(merge_casos1, merge_casos2)
# The variables do not need to be in the same order; however,
# it will not work if one of the variables is present in only one of the data sets.
# Either delete the extra variable or create it in the other data set filled with missings.
# deleting the variable wrkstat from the first data set
merge_casos1.1 = subset(merge_casos1, select=-c(wrkstat))
# rbind will throw an error
rbind(merge_casos1.1, merge_casos2)
# the same variable can be deleted from the second data set as well
merge_casos2.1 = subset(merge_casos2, select=-c(wrkstat))
merge_1500.1 = rbind(merge_casos1.1, merge_casos2.1)
# or the variable can be filled with missings in the data set where it does not exist
merge_casos1.1$wrkstat = rep(NA, 1000)
merge_1500.2 = rbind(merge_casos1.1, merge_casos2)
#### Slide 45 ####
library(haven)
merge_var1 = read_sav("MERGE_GSS93_p1_var.sav")
merge_var2 = read_sav("MERGE_GSS93_p2_var.sav")
merge_79 = merge(merge_var1, merge_var2, by="id", all = T)
# The argument all = T keeps cases present in only one data set in the final result; otherwise they are excluded
#### Slide 46 ####
library(haven)
dietstudy = read_sav("dietstudy.sav")
diet.reest = reshape(as.data.frame(dietstudy), idvar = "patid",
varying = list(c("tg0","tg1","tg2","tg3","tg4"), c("wgt0","wgt1","wgt2","wgt3","wgt4")),
v.names=c("tg", "wgt"), direction = "long")
# idvar takes the case identifier
# varying separates the blocks of columns that represent the same variable
# v.names names the columns that will hold all the data from those blocks
# direction="long" means the data will be "stacked"
# ordering by patid
diet.reest = diet.reest[order(diet.reest$patid), ]
|
31e994e8ab923b8afa0b65f86d79a7304c5307a0
|
169874bae167edf16577f6c48a4d976e085935aa
|
/codes/medical_no_shows.R
|
f3643dc6db2c3bb632d19128bcacc7be5f734918
|
[] |
no_license
|
ArunSudhakaran/Data-Mining-and-Machine-Learning
|
818a3c355d6d8d3c8732f44584258fab208597e6
|
f88627297723299cdca79c67e5dc4b14ab9cd3a7
|
refs/heads/master
| 2022-12-19T13:11:24.282894
| 2020-09-22T23:20:23
| 2020-09-22T23:20:23
| 297,795,359
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 4,780
|
r
|
medical_no_shows.R
|
####predicting medical appointment no shows
data<- read.csv("data_mining/no_show_appointments_data.csv")
str(data)
head(data)
summary(data)
###converting to factors
data$Gender <- factor(data$Gender, levels = c("M", "F"))
data$Scholarship<-as.factor(data$Scholarship)
data$Hipertension<-as.factor(data$Hipertension)
data$Diabetes<-as.factor(data$Diabetes)
data$Alcoholism<-as.factor(data$Alcoholism)
data$Handcap<-as.factor(data$Handcap)
data$No.show<-as.factor(data$No.show)
str(data)
####removing unwanted columns
names(data)
new<-data[1]
names(new)
ncol(data)
new_data<-data[c(T,T,T,F,F,T,F,T,T,T,T,T,T,T)]
names(new_data)
#partitioning data
set.seed(123)
ind <- sample(2,nrow(new_data),replace=TRUE,prob=c(0.8,0.2))
train <-new_data[ind==1,]
test <-new_data[ind==2,]
###performing random forest###
str(data)
library(randomForest)
set.seed(222)
#training the data
rf <- randomForest(No.show~.,data=train)
print(rf)
rf$confusion
#prediction and confusion matrix
library(caret)
p1<-predict(rf,train)
p1# to view predicted values
head(train$No.show)#to view actual values
confusionMatrix(p1,train$No.show)
#prediction with test data
p2<-predict(rf,test)
confusionMatrix(p2,test$No.show)
#error-rate
library(ggplot2)
plot(rf)
#tuning the model
#t<-tuneRF(train[,100],train[,200],
#stepFactor = 1,
#plot = TRUE,
#ntreeTry = 300,
#trace = TRUE,
#improve = 0.05)
#no of nodes for trees
#no of nodes for the trees
hist(treesize(rf),main="no of nodes for the trees",col = "green")
#variable importance
varImpPlot(rf)
varUsed(rf)#to show the number of times a variable is used
#partial dependence plot
partialPlot(rf,train,Alcoholism,"No")
#extract single tree
getTree(rf,1,labelVar = TRUE)#information about first tree
#MDS plot - can be slow or error out on larger data; use if needed
MDSplot(rf,train$No.show)
################################################################################
##logisticregression
library(caTools)#explore the package
set.seed(123)
split<-sample.split(new_data,SplitRatio = 0.8)
split
training<-subset(new_data,split== "TRUE")
testing<-subset(new_data,split== "FALSE")
model<-glm(No.show ~.,training,family = "binomial")
summary(model)
#optimising the model by removing variables which are not statistically significant
#optimising model by removing gender
model<-glm(No.show ~.-Gender,training,family = "binomial")
summary(model) #shows high residual deviance and low aic but only 1 point below
#optimising model by removing hypertension
model<-glm(No.show ~.-Hipertension,training,family = "binomial")
summary(model) #shows same residual deviance and low aic
#optimising model by removing handicap
model<-glm(No.show ~.-Handcap,training,family = "binomial")
summary(model) #shows high residual deviance and low aic
#since hypertension shows same resid deviance and low aic-removing hypertension
model<-glm(No.show ~.-Hipertension,training,family = "binomial")
summary(model)
#predict the values of the test dataset and categorise them according to a threshold which is 0.5
res<-predict(model,testing,type = "response") #response means we want a probability from the model
print(res)#gives the probabilities of "no shows" in the testing dataset
print(testing)#check if the model prediction is right
#creating a confusion matrix to test for accuracy with a threshold value of 0.5
(table(ActualValue=testing$No.show,PredictedValue=res>0.5))
(23953+67)/(23953+67+115+6008)##accuracy of the model is ~79.7%
#finding out the exact threshold value using roc curve
res<-predict(model,training,type = "response") #changing the roc value to training
library(ROCR)
#defining rocr prediction and performance
ROCRPred<-prediction(res,training$No.show)
ROCRPref<-performance(ROCRPred,"tpr","fpr")
plot(ROCRPref,colorize=TRUE,print.cutoffs.at=seq(0.1,by=0.1)) #analyse the graph
#analysing that 0.3/0.2 has better TP rate as compared to 0.5 from the roc , changing threshold value to 0.3
#comparing the thresholds 0.5 vs 0.3 for testing
(table(ActualValue=testing$No.show,PredictedValue=res>0.5))
(table(ActualValue=testing$No.show,PredictedValue=res>0.3))#false positive has decreased
(table(ActualValue=testing$No.show,PredictedValue=res>0.2))#checking with 0.2 threshold value
#checking for accuracy
(1249+21573)/(2495+4826+21573+1249) #accuracy reduced to 75.71 for threshold of 0.3
(3509+15386)/(15386+3509+8682+2566) #accuracy further decreased to 62.68
#0.5 gives the highest accuracy but a low true-positive rate: 115 people who do show up are predicted by the model as no-shows
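The manual threshold comparison above can be automated by sweeping a grid of cutoffs and reporting the accuracy of each. A minimal sketch, assuming `res` holds predicted probabilities for `testing` and that `No.show` is coded as a factor whose positive level comparison below matches your data (adjust the `== "Yes"` test to however the outcome is actually coded):

```r
# Hedged sketch: accuracy at several cutoffs, using res and testing$No.show
# from the script above.
accuracy_at <- function(prob, actual, cutoff) {
  pred <- prob > cutoff
  mean(pred == (actual == "Yes"))  # assumption: positive class is "Yes"
}
cutoffs <- seq(0.1, 0.9, by = 0.1)
round(sapply(cutoffs, function(k) accuracy_at(res, testing$No.show, k)), 4)
```

This makes the 0.2 / 0.3 / 0.5 trade-off visible in one call instead of three hand-built confusion matrices.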
## ==== plot4.R (joancarter2000/ExData_Plotting1) ====
##course project 1
##read in the entire dataset
epc<-read.table("household_power_consumption.txt", sep=";", header=TRUE, colClasses="character")
head(epc)
str(epc)
##subset 1/2/2007 and 2/2/2007 data from the original dataframe
subepc<-epc[epc$Date=="1/2/2007"|epc$Date=="2/2/2007",]
##change the "global active power" data from character into numeric
subepc$Global_active_power<-as.numeric(subepc$Global_active_power)
##using lubridate package which is very helpful in dealing with date/time
library(lubridate)
##combine Date and Time to make a new variable "Day"
subepc$Day<-paste(subepc$Date, subepc$Time, sep=" ")
##convert "Day" into POSIXct
subepc$Day<-dmy_hms(subepc$Day)
##convert related variable to numeric
subepc$Sub_metering_1<-as.numeric(subepc$Sub_metering_1)
subepc$Sub_metering_2<-as.numeric(subepc$Sub_metering_2)
subepc$Sub_metering_3<-as.numeric(subepc$Sub_metering_3)
##make plot4
##convert related varibale to numeric
subepc$Voltage<-as.numeric(subepc$Voltage)
subepc$Global_reactive_power<-as.numeric(subepc$Global_reactive_power)
png(filename="plot4.png", width=480, height=480, units="px")
par(mfrow=c(2,2), cex.axis=0.7, mar=c(5,4,4,2))
##draw top left graph
plot(subepc$Day, subepc$Global_active_power, type="l", xlab="", ylab="Global Active Power")
##draw top right graph
plot(subepc$Day, subepc$Voltage, type="l", xlab="datetime", ylab="Voltage")
##draw lower left graph
plot(subepc$Day, subepc$Sub_metering_1, type="l", xlab="", ylab="Energy sub metering")
lines(subepc$Day, subepc$Sub_metering_2, col="red")
lines(subepc$Day, subepc$Sub_metering_3, col="blue")
legend("topright", lty=1, col=c("black","red","blue"),legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"), cex=0.6)
##draw lower right graph
plot(subepc$Day, subepc$Global_reactive_power, type="l", xlab="datetime", ylab="Global_Reactive_Power")
dev.off()
## ==== alcohol/paf/04_2_assault.R (zhusui/ihme-modeling) ====
############# This program will compute the Alcohol Attributable Fractions (AAF) for assault injuries
library(foreign)
library(haven)
library(data.table)
if (Sys.info()["sysname"] == "Windows") {
root <- "J:/"
arg <- c(1995, "J:/WORK/05_risk/risks/drugs_alcohol/data/exp/summary_inputs", "/share/gbd/WORK/05_risk/02_models/02_results/drugs_alcohol/paf/temp", 1000)
} else {
root <- "/home/j/"
print(paste(commandArgs(),sep=" "))
arg <- commandArgs()[-(1:3)] # First args are for unix use only
}
### pass in arguments (arg was already set above for each platform)
yyy <- as.numeric(arg[1]) # Year for current analysis
data.dir <- arg[2] # Data directory
out.dir <- arg[3] # Directory to put temporary draws in
save.draws <- as.numeric(arg[4]) # Number of draws to save
## ages <- 1:3 # Input files only have ages 1-3, but output will have 0-3 because 0-14 year olds get hit by others
ages <- seq(from=0,to=80,by=5) ## adding these because of our new analysis
age_group_ids <- c(2:21)
sexes <- 1:2
sims <- 1:save.draws
##### SELECTING THE SIMULATIONS FILE ####
## Get population (from input data files)
## data.file <- paste0(data.dir, "/alc_data_", yyy, ".csv")
## pop <- read.csv(data.file, stringsAsFactors=F)
## pop <- pop[, c("REGION", "SEX", "AGE_CATEGORY", "POPULATION")]
## Read pop file
pop <- as.data.frame(read_dta(paste0(root,"/WORK/05_risk/risks/drugs_alcohol/data/exp/temp/population.dta")))
## get age from age_group_id
agemap <- read.csv(paste0(root,"/WORK/05_risk/risks/drugs_alcohol/data/exp/temp/agemap.csv"))
pop <- merge(pop,agemap,by=c("age_group_id"))
pop <- pop[pop$year_id == yyy & pop$sex_id %in% c(1,2),c("location_id","year_id","age","age_group_id","sex_id","pop_scaled")]
## Get percentage of region's population that is in a given age-, sex-group. This will be used to
## Population weight the PAFs.
totalpop <- aggregate(pop_scaled ~ location_id, data=pop, FUN=sum)
names(totalpop)[2] <- "TOTALPOPULATION"
pop <- merge(pop, totalpop, by="location_id")
pop$popfrac <- pop$pop_scaled / pop$TOTALPOPULATION
pop <- pop[, c("location_id", "sex_id", "age", "popfrac")]
names(pop) <- c("REGION","SEX","age","popfrac")
## Bring in injuries data and format it
data <- list()
for (aaa in age_group_ids[7:20]) {
for (sss in sexes) {
cat(paste0(aaa," ",sss,"\n")); flush.console()
data[[paste0(aaa,sss)]] <- fread(paste0(out.dir, "/AAF_", yyy, "_a", aaa, "_s", sss, "_inj_self.csv"))
}
}
data <- as.data.frame(rbindlist(data))
## Get a mortality variable, rather than the DISEASE one which varies by sex.
data$MORTALITY <- grepl("Mortality", data$DISEASE)
# Only use non-motor vehicle accidents and get rid of extraneous variables
data <- data[grep("^NON-Motor Vehicle Accidents", data$DISEASE), c("REGION", "SEX", "age", "MORTALITY", paste0("draw", sims))]
data[, paste0("draw", sims)][data[, paste0("draw", sims)] == 1] <- 0.9999 # This fails if we have PAFs equal to 1, so round it down so that it will work...
## Get the population weighted average RR and then convert back to the PAF.
data <- merge(data, pop, by=c("REGION", "SEX", "age"),all.x=T)
data[, paste0("draw", sims)] <- (1 / (1 - data[, paste0("draw", sims)])) * data$popfrac
data <- aggregate(data[, paste0("draw", sims)], list(REGION=data$REGION, MORTALITY=data$MORTALITY), FUN="sum")
data[, paste0("draw", sims)] <- (data[, paste0("draw", sims)] - 1) / data[, paste0("draw", sims)]
# Loop through the ages and sexes, and multiply the population weighted PAF by the Australian scalar
# of what a total population PAF is to age-,sex- PAF.
AGE_CATEGORIES <- 0:3
scalar <- data.frame(age=AGE_CATEGORIES, scalar=c(0.610859729, 1.75084178, 1.014666593, 0.456692913))
out <- data.frame()
for (aaa in AGE_CATEGORIES) {
for (sss in sexes) {
temp <- data[, c("REGION", "MORTALITY")]
temp$AGE_CATEGORY <- aaa
temp$SEX <- sss
## this was previously PAF*scalar...but the scalar should be for RR, so do in RR space, in steps to troubleshoot
## make RR: RR = 1/(1-PAF)
temp[, paste0("draw", sims)] <- 1/(1 - data[, paste0("draw", sims)])
## multiply (RR-1) by scalar, add to 1: RR_adj = (1 + (RR_orig -1)*scalar)
temp[, paste0("draw", sims)] <- 1 + (temp[, paste0("draw", sims)]-1)*scalar[scalar$age == aaa, "scalar"]
## convert back to PAF: PAF = 1 - (1/RR_adj)
temp[, paste0("draw", sims)] <- 1 - (1/(temp[, paste0("draw", sims)]))
out <- rbind(temp, out)
}
}
age <- ages
AGE_CATEGORY <- c(0,0,0,1,1,1,1,2,2,2,2,2,3,3,3,3,3)
expand <- cbind(age,AGE_CATEGORY)
out <- merge(out,expand,by="AGE_CATEGORY",all=T)
# Fix names of DISEASE values.
out$DISEASE <- paste("Assault Injuries", ifelse(out$MORTALITY, "Mortality", "Morbidity"), ifelse(out$SEX==1, "MEN", "WOMEN"), sep=" - ")
out$MORTALITY <- NULL
write.csv(out, file = paste0(out.dir, "/AAF_", yyy, "_inj_aslt", ".csv"))
## ==== Correlation.R (jihadrashid/UsedRcodeForMyWork) ====
data=as.data.frame(Dhaka_21H_1col)
rain=as.data.frame(dhaka)
final_data=cbind.data.frame(data,rain)
View(final_data)
windowsFonts(A = windowsFont("Times New Roman"))
library(ggpubr)
library(devtools)
library(ggplot2)
ggscatter(final_data, x="...2", y="dhaka",col="blue",
conf.int = TRUE,cor.coef = TRUE,
cor.method = "pearson",xlab = "Temperature °C",
ylab = "Rainfall")+
geom_abline(col="black")
ggscatter(final_data, x="...2", y="dhaka",col="blue",
xlab = "Temperature °C",
ylab = "Rainfall")+
geom_smooth(method = "lm", se=FALSE,col="black", formula = y ~ x)+
stat_regline_equation(formula = y ~ x,
position = "identity", na.rm = FALSE, show.legend = NA)
x =final_data$...2
y =final_data$dhaka
# Plot with main and axis titles
# Change point shape (pch = 19) and remove frame.
plot(x, y,
xlab = "Temperature", ylab = "Rainfall",
pch = 19, frame = FALSE)
# Add regression line
plot(x, y, col = "blue",
xlab = "Temperature °C", ylab = "Rainfall",
pch = 19, frame = FALSE)
abline(lm(y ~ x), col = "black") # fit on the plotted vectors, not on mtcars
## ==== info_3010/Homeworks/practice_w2/practice_w2_3.R (Silicon-beep/UNT_coursework) ====
#course.grades things
course.grades<-c(92,75,46,26,0,100,89,76)
n<-1
while (n<=length(course.grades)) {
print(course.grades[n])
n=n+1
}
n<-1
while (n<=length(course.grades)) {
if (course.grades[n]<60) {
print("failed")
} else {
print("pass")
}
n=n+1
}
#function to compute the area of a rectangle
#using length and width
Compute_area_rect<- function(l,w) {
area=l*w
return(area)
}
Compute_area_rect(10,10)
## ==== ui.R (eharason/ally_AccessiCity) ====
library(shiny)
library(leaflet)
library(sp)
# Define UI for AccessiCity that draws a map
shinyUI(fluidPage(
# Application title
titlePanel("AccessiCity for Mexico City's General Transit Network"),
# Sidebar with a checkbox and drop-down menu
sidebarLayout(
sidebarPanel(
selectInput("RouteType", label = h3("Select Transit Type"),
choices = list("Subway" = "Subway", "Bus" = "Bus",
"Trolleybus" = "Trolleybus", "Train" = "Train"),
selected = "Subway"),
checkboxInput("WheelchairAccessible", label = "Wheelchair Accessible",
value = FALSE)
),
# View map on right side of the page
mainPanel(
leafletOutput("mymap")
)
)
))
## ==== Example_Data/jupyter_lab/analyze_health_and_income.R (nickeubank/data-science-in-julia) ====
######################
#
# Import World Development Indicators
# and look at the relationship between income
# and health outcomes across countries
#
######################
# Download World Development Indicators
wdi = read.csv("https://media.githubusercontent.com/media/nickeubank/MIDS_Data/master/World_Development_Indicators/wdi_small_tidy_2015.csv")
# Get Mortality and GDP per capita for 2015
wdi$loggdppercap = log(wdi[['GDP.per.capita..constant.2010.US..']])
# Plot
library(ggplot2)
ggplot(wdi,
aes(x=loggdppercap,
y=Mortality.rate..under.5..per.1.000.live.births.)
) + geom_point() +
geom_label(aes(label=Country.Name)) +
geom_smooth()
## ==== 9. SA - 1Q-Ntotin.R (maricorsi17/University-Projects, Master Thesis / S. Ambrogio) ====
# Install packages
#install.packages("xts")
library("xts")
#install.packages("TTR")
library("TTR")
library("plotrix")
library("hydroTSM")
library("zoo")
# load the data file (CSV)
SAmbrogio<-read.csv2(file="c:/Users/Marianna/Documents/Universita/Tesi/R - S. Ambrogio/S.Ambrogio.CSV", header=TRUE, sep=";")
# Create time series
## Create a vector of dates (all years)
datestotal<-seq(as.Date("2015-01-01"), length=1277, by="days")
## Create the time series (all years)
ts1QOUTtotal<-1/(xts(x=SAmbrogio[,8], order.by=datestotal))
MAts1QOUTtotal<-na.omit(rollapply(ts1QOUTtotal, width=31, FUN=function(x) mean(x, na.rm=TRUE), by=1, by.column=TRUE, fill=NA, align="center"))
tsNtotINtotal<-na.omit(xts(x=SAmbrogio[,31], order.by=datestotal))
MAtsNtotINtotal<-na.omit(rollapply(xts(x=SAmbrogio[,31], order.by=datestotal), width=15, FUN=function(x) mean(x, na.rm=TRUE), by=1, by.column=TRUE, fill=NA, align="center"))
tsNtotINtotalNA<-(xts(x=SAmbrogio[,31], order.by=datestotal))
## Subset the time series (2015)
tsQOUT2015<-ts1QOUTtotal["2015"]
## Subset the time series (2016)
tsQOUT016<-ts1QOUTtotal["2016"]
## Subset the time series (2017)
tsQOUT2017<-ts1QOUTtotal["2017"]
## Subset the time series (2018)
tsQOUT2018<-ts1QOUTtotal["2018"]
# Plot the time series
windows(width = 16,height = 9)
par(mar=c(6,6,4,6),mgp=c(4,1,0)) #margins and axis-label spacing
options(scipen=5) #suppress scientific notation (increase the value if it still appears)
plot(as.zoo(ts1QOUTtotal),type="n", xlab="Mesi",ylab=expression(paste("1/Q [d/m"^"3","]")),yaxt="n",xaxt="n",yaxs="i",xaxs="i",cex.lab=1.2,ylim=c(0,0.0008), col="grey")
drawTimeAxis(as.zoo(ts1QOUTtotal), tick.tstep ="months", lab.tstep ="months",las=2,lab.fmt="%m")
axis(side=2,at=seq(from = 0,to = 0.0008,by = 0.0001),las=2,format(seq(from = 0,to = 0.0008,by = 0.0001), big.mark = ".", decimal.mark = ","))
grid(nx=NA,ny=8,col="grey")
lines(as.zoo(ts1QOUTtotal),col="darkslategrey")
lines(as.zoo(MAts1QOUTtotal))
par(new = TRUE) #overlay a second plot with its own y-axis
plot(as.zoo(tsNtotINtotal), type = "l", yaxt="n",xaxt="n",xaxs="i",yaxs="i", bty="n", xlab="", ylab="", ylim=c(0,160), col="orange")
lines(as.zoo(MAtsNtotINtotal[index(tsNtotINtotal)]),col="red")
axis(side=4, at=seq(from=0, to=160, by=20),las=2)
mtext(expression(paste("N"[tot-IN]," [mg/L]")), side=4, line=3,cex=1.2)
abline(v=index(tsQOUT016[1,]),lwd=2)
abline(v=index(tsQOUT2017[1,]),lwd=2)
abline(v=index(tsQOUT2018[1,]),lwd=2)
text(x=index(tsQOUT2015[182,]),y=150,label="2015")
text(x=index(tsQOUT016[182,]),y=150,label="2016")
text(x=index(tsQOUT2017[182,]),y=150,label="2017")
text(x=index(tsQOUT2018[90,]),y=150,label="2018")
legend(x=index(tsQOUT2015[30,]),y=145, c("1/Q",expression(paste("N"[tot-IN])),"MA 1/Q (mensile)",expression(paste("MA N"[tot-IN]," (mensile)"))),col=c("darkslategrey","orange","black","red"),lty=c(1,1,1,1),lwd=c(1,1,1,1),bg="white")
# SCATTERPLOT
corr<-cor.test(coredata(tsNtotINtotalNA),coredata(ts1QOUTtotal),method = "spearman", use="complete.obs",exact = F)
windows(width = 16,height = 9)
par(mar=c(6,6,4,4),mgp=c(4,1,0))
plot(as.zoo(tsNtotINtotal),as.zoo(ts1QOUTtotal),xaxs="i",yaxt="n",xaxt="n",yaxs="i",xlab=expression(paste("N"[tot-IN]," [mg/L]")),ylab=expression(paste("1/Q [d/m"^"3","]")),cex.lab="1.2",xlim=c(0,160),ylim=c(0,0.0008),pch=16)
axis(side=2,at=seq(from = 0,to = 0.0008,by = 0.0002),las=2,format(seq(from = 0,to = 0.0008,by = 0.0002), big.mark = ".", decimal.mark = ","))
axis(side=1,at=seq(from = 0,to = 160,by = 40),las=1,format(seq(from = 0,to = 160,by = 40), big.mark = ".", decimal.mark = ","))
grid(nx=4,ny=4,col="grey")
abline(lm(ts1QOUTtotal~tsNtotINtotalNA),lwd=2)
text(x=20,y=0.0007, label=paste("r = ",format(round(corr$estimate,digits=2),decimal.mark=",")))
## ==== man/mod_donneessup.Rd (forestys54/inventairexy, MIT license) ====
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mod_donneessup.R
\name{mod_donneessup_ui}
\alias{mod_donneessup_ui}
\alias{mod_donneessup_server}
\title{mod_donneessup_ui and mod_donneessup_server}
\usage{
mod_donneessup_ui(id)
mod_donneessup_server(input, output, session, rv)
}
\arguments{
\item{id}{shiny id}
\item{input}{internal}
\item{output}{internal}
\item{session}{internal}
}
\description{
A shiny Module.
}
\keyword{internal}
## ==== tests/testthat/test-twin_to_yule_tree.R (thijsjanzen/pirouette) ====
context("test-twin_to_yule_tree")
test_that("use", {
phylogeny <- load_tree(tree_model = "mbd", seed = 1)
twinning_params <- create_twinning_params()
twinning_params$rng_seed <- 1
yule_tree <- twin_to_yule_tree(
phylogeny = phylogeny,
twinning_params = twinning_params
)
expect_equal(class(yule_tree), "phylo")
expect_equal(
max(ape::branching.times(yule_tree)),
max(ape::branching.times(phylogeny))
)
})
## ==== processing/company_data_product_script.R (ioanna-toufexi/companies-stream) ====
if(!require("pacman")) install.packages("pacman")
pacman::p_load(readr, htmlwidgets, geojsonio, tigris)
source("data_grouper.R")
source("plotter.R")
source("sic_mappings.R")
source("mapper.R")
# This script has a series of steps to analyse and visualise data from
# the Companies House `Company data product` CSV file.
#
# In particular,
# 1) It analyses how the number of new companies in the hospitality sector
# was affected by COVID-19 in the first months of 2020.
# 2) Visualises findings with plots and maps
# The results are demonstrated in a Shiny web app (see server.R and ui.R in same package)
# Sourcing Company data product CSV
company_data_product_path = "C:/Users/ioanna/Downloads/BasicCompanyDataAsOneFile-2020-05-01/BasicCompanyDataAsOneFile-2020-05-01.csv"
all_companies <- read_csv(company_data_product_path)
companies <- all_companies %>% filter_variables()
########## Faceted plot ###########
# Filtering and grouping data per month and SIC code
most_affected_2020 <- get_new_per_month_and_siccode(companies,
str_c(names(most_affected), collapse = "|"),
start_date = "01/01/2020")
least_affected_or_too_small_2020 <- get_new_per_month_and_siccode(companies,
str_c(names(least_affected_or_too_small), collapse = "|"),
start_date = "01/01/2020")
# Creating faceted plots
plot_interactive(most_affected_2020,
"New hospitality companies plummeted in April 2020",
"more.png", per_facet_col = 3, img_width = 10, img_height = 7)
plot_interactive(least_affected_or_too_small_2020,
"Economic activities less affected or with small numbers",
"less.png", per_facet_col = 4, img_width = 10, img_height = 3)
#saveWidget(g, "faceted.html", selfcontained = F, libdir = "lib")
########### Map ###########
# Creating two london borough maps,
# one for March 2020 and one for April 2020
# to show the change in the number of new companies just before and after COVID-19 spread in
# Filtering and grouping data per postcode for Mar 2020
mar_20 <- get_new_per_postcode(companies,
str_c(names(hospitality_all), collapse = "|"),
start_date = "01/03/2020",
end_date= "31/03/2020")
# Filtering and grouping data per postcode for Apr 2020
apr_20 <- get_new_per_postcode(companies,
str_c(names(hospitality_all), collapse = "|"),
start_date = "01/04/2020",
end_date= "30/04/2020")
# Joining in one dataframe
joined <- full_join(mar_20,apr_20, by = "RegAddress.PostCode", suffix = c(".mar_20", ".apr_20")) %>%
mutate_all(~replace(., is.na(.), 0))
#summarise(joined, sum(count.mar_20))
# Using https://geoportal.statistics.gov.uk/datasets/postcode-to-output-area-to-lower-layer-super-output-area-to-middle-layer-super-output-area-to-local-authority-district-february-2020-lookup-in-the-uk
# to convert postcodes to Local Authority Districts
postcode_to_lad_lookup <- read_csv("C:/Users/ioanna/Downloads/PCD_OA_LSOA_MSOA_LAD_FEB20_UK_LU/PCD_OA_LSOA_MSOA_LAD_FEB20_UK_LU.csv") %>%
select(pcds, ladcd)
# New companies per Local Authority District
new_by_lad <- left_join(joined, postcode_to_lad_lookup,by = c("RegAddress.PostCode" = "pcds")) %>%
group_by(ladcd) %>%
summarise(mar = sum(count.mar_20),apr = sum(count.apr_20))
# Using https://geoportal.statistics.gov.uk/datasets/local-authority-districts-december-2019-boundaries-uk-bfc
# as a lookup to get the names of the LADs
names <- read_csv("C:/Users/ioanna/Downloads/Local_Authority_Districts__December_2019__Boundaries_UK_BFC.csv") %>%
select(lad19cd,lad19nm)
# Adding names
new_by_lad <- left_join(new_by_lad, names, by = c("ladcd"="lad19cd"))
# Using https://geoportal.statistics.gov.uk/datasets/local-authority-districts-december-2019-boundaries-uk-bfc
# to get the geojson
url <- "https://opendata.arcgis.com/datasets/1d78d47c87df4212b79fe2323aae8e08_0.geojson?where=UPPER(lad19cd)%20like%20'%25E090000%25'"
london_mapp <- geojson_read(url, what = 'sp')
# Merging data and geojson
merged_map <- geo_join(london_mapp, new_by_lad, "lad19cd", "ladcd")
#Creating the maps
get_mar_html_map()
get_apr_html_map()
|
c83c6de99050725215c0f9817820f1b327399792
|
e1f70ac6d5604c3f5bf7dbe02345333dda964642
|
/analysis/UniprotAnnotations_NetworkAnalysis/Map2SRlabGOslimsTerms.R
|
782f345104accecedda08a285a29f5ddcf5855e2
|
[] |
no_license
|
shellywanamaker/OysterSeedProject
|
461b90d2dee9a9e2283a3af2fbe91af6a70c05ac
|
a761b8bd589e80454b83c5099b8dbc65a4e497c2
|
refs/heads/master
| 2023-01-23T18:30:14.929947
| 2019-10-23T19:34:33
| 2019-10-23T19:34:33
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,991
|
r
|
Map2SRlabGOslimsTerms.R
|
#load libraries
library(plyr)
library(tidyr)
#install.packages("ontologyIndex")
#install.packages("ontologySimilarity")
library(ontologyIndex)
library(ontologySimilarity)
library(GSEABase)
library(reshape2)
#####WITH SR LAB GO SLIMS#####
#load Robert's lab go slim terms
srlabGOSLIM <- read.csv("~/Documents/GitHub/OysterSeedProject/raw_data/background/GOSlim_terms.csv", stringsAsFactors = FALSE)
colnames(srlabGOSLIM)[1]<- "GO"
#load protein/GOid data
STACKED_sig0.1_pro_GOid_term <- read.csv("~/Documents/GitHub/OysterSeedProject/analysis/UniprotAnnotations_NetworkAnalysis/SameDayFCtoTemp/ChiSqPval0.1_ASCA/intermediate_files/STACKED_sig0.1_pro_GOid_term.csv", stringsAsFactors = FALSE)
STACKED_sig0.1_pro_GOid_term_srlab <- merge(STACKED_sig0.1_pro_GOid_term,srlabGOSLIM[,c(1,3)], by = "GO",all.x = TRUE)
##############################
uniqsrlabGOSLIMs <- unique(STACKED_sig0.1_pro_GOid_term_srlab$GOSlim_bin)
length(uniqsrlabGOSLIMs)
#37
length(unique(srlabGOSLIM$GOSlim_bin))
#38; only 38 unique GO slim terms exist in the Roberts lab GO slim list and these include CC, MP and BP
#I can't map srlab GOslim terms to GOIDs; they aren't the same terms as in the .obo file
#need to get GO IDs for Roberts Lab slim terms
#Use GSEA to generate list of all GO Slim BP, MP, and CC
#DF will have "term", "GOid", "GOcategory"
#BP first
#goslims with GSEA
myCollection <- GOCollection(sig0.1_sig_GOids)
#I downloaded goslim_generic.obo from http://geneontology.org/docs/go-subset-guide/
#then i moved it to the R library for GSEABase in the extdata folder
fl <- system.file("extdata", "goslim_generic.obo", package="GSEABase")
slim <- getOBOCollection(fl)
slims <- data.frame(goSlim(myCollection, slim, "BP"))
slims$GOid <- rownames(slims)
slims$Term <- as.character(slims$Term)
rownames(slims) <- NULL
GSEA_bp <- slims[,c("Term", "GOid")]
GSEA_bp$GOcategory <- "BP"
#MF next
slims <- data.frame(goSlim(myCollection, slim, "MF"))
slims$GOid <- rownames(slims)
slims$Term <- as.character(slims$Term)
rownames(slims) <- NULL
GSEA_MF <- slims[,c("Term", "GOid")]
GSEA_MF$GOcategory <- "MF"
#CC next
slims <- data.frame(goSlim(myCollection, slim, "CC"))
slims$GOid <- rownames(slims)
slims$Term <- as.character(slims$Term)
rownames(slims) <- NULL
GSEA_CC <- slims[,c("Term", "GOid")]
GSEA_CC$GOcategory <- "CC"
GSEA_BP_MF_CC <- rbind(GSEA_bp, GSEA_MF, GSEA_CC)
colnames(GSEA_BP_MF_CC) <- c("Term", "GOslim", "GOcategory")
#semantic sim with SR lab GO slim terms
beach2 <- list()
for(i in 1:length(uniqsrlabGOSLIMs)){
temp_beach2 <- try(go$name[[uniqsrlabGOSLIMs[i]]], TRUE)
if(isTRUE(class(temp_beach2)=="try-error")) {next} else {beach2[[i]] = temp_beach2}
}
test <- get_term_info_content(go, term_sets = uniqsrlabGOSLIMs)
####
##CAN'T MAP SR LAB GO SLIM TERMS TO TERMS IN goslim_generic.OBO BECAUSE THE TERMS DON'T MATCH!!!!!
###WOULD NEED THE GO SLIM IDs THAT MATCH THE SR LAB GO SLIM TERMS IN ORDER TO GET THE SEMANTIC SIMILARITIES TO MAKE THE GO SLIM TERM-TERM RELATIONSHIPS
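The three near-identical `goSlim()` blocks above (BP, MF, CC) differ only in the ontology string, so they can be collapsed into one helper. A sketch that reuses the script's own objects (`myCollection` and `slim` from GSEABase) and reproduces the same `GSEA_BP_MF_CC` data frame:

```r
# Hedged refactor of the repeated BP/MF/CC goSlim() blocks above;
# assumes myCollection and slim are defined as in this script.
slim_for <- function(ontology) {
  s <- data.frame(goSlim(myCollection, slim, ontology))
  s$GOid <- rownames(s)        # GO slim IDs are stored as row names
  s$Term <- as.character(s$Term)
  rownames(s) <- NULL
  out <- s[, c("Term", "GOid")]
  out$GOcategory <- ontology
  out
}
GSEA_BP_MF_CC <- do.call(rbind, lapply(c("BP", "MF", "CC"), slim_for))
colnames(GSEA_BP_MF_CC) <- c("Term", "GOslim", "GOcategory")
```

The loop form also makes it harder to introduce copy-paste slips such as labelling the MF block "MP".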
## ==== workspace/splicing/lung/rmsk/scatterplot.r.2018050611 (ijayden-lung/hpc) ====
#!/usr/bin/env Rscript
library(ggplot2)
args<-commandArgs(T)
pdf(args[2])
data = read.table(args[1],header=TRUE,row.names=1)
#y = log2(data$high_1+data$high_2)
#x = log2(data$low_1+data$low_2)
#reg<-lm(log2(high_1+high_2) ~ log2(low_1+low_2),data,na.action = na.exclude)
#summary(reg)
ggplot(data, aes(x=log2(low_1+low_2), y=log2(high_1+high_2),colour=Tag))+geom_point()+xlim(0,17)+ylim(0,17)
#geom_smooth(method = "lm", se=FALSE, color="black", formula = y ~ x)
## ==== run_analysis.R (nofacetou/Gettingdata) ====
#set the filepath for the input and output files
filepath_X_train <- "D:/Document/My Copy/Copy/Portable Python 2.7.3.2/App/Scripts/getdata-002/UCI HAR Dataset/train/X_train.txt"
filepath_y_train <- "D:/Document/My Copy/Copy/Portable Python 2.7.3.2/App/Scripts/getdata-002/UCI HAR Dataset/train/y_train.txt"
filepath_subject_train <- "D:/Document/My Copy/Copy/Portable Python 2.7.3.2/App/Scripts/getdata-002/UCI HAR Dataset/train/subject_train.txt"
filepath_X_test <- "D:/Document/My Copy/Copy/Portable Python 2.7.3.2/App/Scripts/getdata-002/UCI HAR Dataset/test/X_test.txt"
filepath_y_test <- "D:/Document/My Copy/Copy/Portable Python 2.7.3.2/App/Scripts/getdata-002/UCI HAR Dataset/test/y_test.txt"
filepath_subject_test <- "D:/Document/My Copy/Copy/Portable Python 2.7.3.2/App/Scripts/getdata-002/UCI HAR Dataset/test/subject_test.txt"
filepath_feature <- "D:/Document/My Copy/Copy/Portable Python 2.7.3.2/App/Scripts/getdata-002/UCI HAR Dataset/features.txt"
activity_labels <- "D:/Document/My Copy/Copy/Portable Python 2.7.3.2/App/Scripts/getdata-002/UCI HAR Dataset/activity_labels.txt"
tidy_csv <- "D:/Document/My Copy/Copy/Portable Python 2.7.3.2/App/Scripts/getdata-002/Gettingdata/tidy.csv"
#get the activity labels
activity <- read.table(activity_labels)
#get the variable list
var <- read.table(filepath_feature)
#only read the columns that contain "mean()" and "std()"
mean_col <-grepl("mean()", var$V2, fixed=TRUE)
std_col <-grepl("std()", var$V2, fixed = TRUE)
all <- mean_col + std_col
var.sub <- subset(var, all[var$V1]==TRUE)
col <- var.sub$V1
mycols <- rep("NULL", 561)
mycols[col]<-NA
#read all the datasets, and change the colume names using the variable list
X_train <- read.table(filepath_X_train, colClasses=mycols)
colnames(X_train) <- var.sub$V2
X_test <- read.table(filepath_X_test, colClasses=mycols)
colnames(X_test) <- var.sub$V2
y_train <- read.table(filepath_y_train)
colnames(y_train) <- "activity_code"
y_test <- read.table(filepath_y_test)
colnames(y_test) <- "activity_code"
subject_train <- read.table(filepath_subject_train)
colnames(subject_train) <- "ID"
subject_test <- read.table(filepath_subject_test)
colnames(subject_test) <- "ID"
#merge the above 6 datasets into 2 separate datasets: train and test
test <- cbind(ID=subject_test$ID, activity_code=y_test$activity_code,X_test)
train <- cbind(ID=subject_train$ID, activity_code=y_train$activity_code,X_train)
#merge train and test into one dataset
all <- rbind(train,test)
#label the activity code
all$activity_code<-factor(all$activity_code,levels=activity$V1, labels=activity$V2)
#get the average of each variable for each activity and each subject
#will install the reshape2 library if you don't have it
if(!is.element('reshape2', installed.packages()[,1]))
{install.packages('reshape2')
}
library("reshape2")
all_molten <- melt(all, id=c("ID","activity_code"))
tidy <- dcast(all_molten, formula= ID + activity_code ~ variable, mean)
#create new tidy dataset csv file
write.csv(tidy, file = tidy_csv , row.names=FALSE)
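The final melt/dcast step is easiest to see on a toy frame. A self-contained sketch of the same reshape2 pattern (toy data invented here for illustration, not from the HAR dataset):

```r
library(reshape2)

# Toy data standing in for the merged HAR table: two subjects, two activities
toy <- data.frame(ID = c(1, 1, 2, 2),
                  activity_code = c("WALKING", "SITTING", "WALKING", "SITTING"),
                  meanX = c(0.2, 0.1, 0.4, 0.3),
                  meanY = c(1.0, 0.8, 1.2, 0.9))

# melt: one row per (ID, activity, variable) observation
molten <- melt(toy, id = c("ID", "activity_code"))

# dcast: back to wide, one row per ID/activity, cell = mean of each variable
dcast(molten, ID + activity_code ~ variable, mean)
```

With the real data the only differences are the number of measurement columns and the fact that the mean collapses many rows per subject/activity pair instead of one.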
## ==== man/output.Rd (aumath-advancedr2019/Sampling) ====
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/permutation.R
\name{output}
\alias{output}
\title{output}
\usage{
output(data = data.frame(), null.value = c(),
alternative = "two-sided", method = "Two-sided permutation test",
estimate = c(), data.name = "", statistic = c(),
parameters = c(), perm_statistics = 0, p.value = 0,
fun = "No function specified", Warnings = "None")
}
\value{
a list of class object "permutation".
}
\description{
output
}
|
be7fb3baa986058652d82074e4114562bb53c35c
|
625c4159be5b9b4cc2d57f438228b5424423e38a
|
/R/show.R
|
ae511bb406fb3ed3e034cc62902da5a11bb1262e
|
[] |
no_license
|
tintinthong/chessR
|
e6d936e6cd51b2159a9d28c8b6683602367fd7bb
|
60f8e254f30e1cce77d177a558ae26467144841a
|
refs/heads/master
| 2020-04-20T21:52:30.082501
| 2019-02-08T06:32:01
| 2019-02-08T06:32:01
| 169,121,594
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 648
|
r
|
show.R
|
#show method for displaying S4 objects
#setMethod("show", "board",
# function(object)print(rbind(x = object@x, y=object@y))
#)
#setMethod("show", "game",
# function(object)print(rbind(x = object@x, y=object@y))
#)
#args(show) #args of method has to have same arguments as generic
#show
#setMethod("show",signature = "MArray",definition = function(object) {
# cat("An object of class ", class(object), "\n", sep = "")
# cat(" ", nrow(object@marray), " features by ",ncol(object@marray), " samples.\n", sep = "")
# invisible(NULL)
# })
#use ... for function set Generic
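#A minimal working sketch, using a hypothetical "board" class with numeric
#x/y slots (the class and slot names here are assumptions for illustration,
#not the package's real definition):
setClass("board", slots = c(x = "numeric", y = "numeric"))
setMethod("show", "board", function(object) {
  cat("An object of class ", class(object), " with ",
      length(object@x), " positions.\n", sep = "")
  print(rbind(x = object@x, y = object@y))
})
new("board", x = c(1, 2), y = c(3, 4))  #auto-printing now dispatches to show()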
|
d2ebdd90f71bc7a2efef41d67465d3ffaf1b3d73
|
751570c81bda2218f9a06273613144c9715a8a69
|
/grafi.R
|
fe4724b64395aeb793191c1428bc44de68496be3
|
[] |
no_license
|
TinaJ/ProjektMZR
|
1b7dbb5a06490c611549288e525893ed30dea91d
|
8fe6c74cfb9207b93f413ca92902561178c88332
|
refs/heads/master
| 2016-09-05T18:03:37.828885
| 2014-01-30T10:05:34
| 2014-01-30T10:05:34
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 4,851
|
r
|
grafi.R
|
setwd("C:/Users/Tina/Documents/faks/2. letnik magisterija/matematika z računalnikom/ProjektMZR")
library(timeSeries)
## Plot the equity for all strategies on one graph:
load("./data/equityVseStrategije.rda")
# determine the maximum and minimum values for the y axis:
# y.max = max(equityVseStrategije)
# y.min = min(equityVseStrategije)
#
# plot(1:nrow(equityVseStrategije), equityVseStrategije[, 1], "l", ylim = c(y.min, y.max),
# xlab = "dnevi", ylab = "equity", main = "equity za posamezno strategijo")
# lines(1:nrow(equityVseStrategije), equityVseStrategije[, 2], "l", col = "aquamarine")
# lines(1:nrow(equityVseStrategije), equityVseStrategije[, 3], "l", col = "blue")
# lines(1:nrow(equityVseStrategije), equityVseStrategije[, 4], "l", col = "green")
# lines(1:nrow(equityVseStrategije), equityVseStrategije[, 5], "l", col = "pink")
# lines(1:nrow(equityVseStrategije), equityVseStrategije[, 6], "l", col = "brown")
# lines(1:nrow(equityVseStrategije), equityVseStrategije[, 7], "l", col = "orange")
# lines(1:nrow(equityVseStrategije), equityVseStrategije[, 8], "l", col = "grey")
# lines(1:nrow(equityVseStrategije), equityVseStrategije[, 9], "l", col = "purple")
# lines(1:nrow(equityVseStrategije), equityVseStrategije[, 10], "l", col = "red")
# legend(x = "topleft", col = c("black", "aquamarine", "blue", "green", "pink", "brown", "orange", "grey", "purple", "red"),
# legend = colnames(equityVseStrategije), lty = 1)
# ## the two Bollinger variants are identical, so one is omitted :)
## plot with dates :)
equity = as.timeSeries(equityVseStrategije)
plot(equity, plot.type = "single", col = c("black", "aquamarine", "blue", "green", "pink",
"brown", "orange", "purple", "red"),
at = "auto", xlab = "Čas", ylab = "Kapital", main = "Kapital za posamezno strategijo")
legend(x = "topleft", col = c("black", "aquamarine", "blue", "green", "pink", "brown", "orange", "purple", "red"),
legend = colnames(equityVseStrategije), lty = 1)
# plot for RSI14:
load("./data/podatki.rda")
data = podatki
library(TTR)
RSI2 = apply(data, 2, RSI, n = 2)
RSI14 = apply(data, 2, RSI, n = 14)
plot(1:3481, RSI14[, 1], "l")
lines(1:3481, c(1:3481)*0+30, col = "red")
plot(1:3481, RSI14[, 2], "l")
lines(1:3481, c(1:3481)*0+30, col = "red")
# how many trades RSI14 executes across all stocks
sum(RSI14 < 35, na.rm = TRUE)
sum(RSI2 <30, na.rm = TRUE)
########### PLOT SP FOR THE LAST YEAR AND ITS INDICATORS
library(TTR)
load("./data/SP1leto.rda")
SP.SMA5 = SMA(SP1leto, n = 5)
SP.SMA25 = SMA(SP1leto, n = 25)
SP.SMA50 = SMA(SP1leto, n = 50)
SP.SMA150 = SMA(SP1leto, n = 150)
plot(SP1leto, xlab = "Čas", ylab = "Vrednost", main = "SP500 & SMA25")
lines(SP.SMA25, col = "blue")
SP.RSI2 = RSI(SP1leto, n = 2)
SP.RSI14 = RSI(SP1leto, n = 14)
plot(SP.RSI2, xlab = "Čas", ylab = "Vrednost", main = "RSI2")
abline(h = 30, col = "red")
SP.Bollinger = as.timeSeries(BBands(SP1leto, n = 20, sd = 1))
plot(SP1leto, xlab = "Čas", ylab = "Vrednost", main = "SP500 & Bollingerjevi pasovi")
lines(SP.Bollinger[,1], col = "blue")
lines(SP.Bollinger[,2], col = "green")
lines(SP.Bollinger[,3], col = "red")
############################
## PLOTS FOR LARGE AND SMALL PBO
setwd("C:/Users/Tina/Documents/faks/2. letnik magisterija/matematika z računalnikom/ProjektMZR")
load("./data/equityVseStrategije-5let.rda")
load("./data/dnevniDonosiPortfelja-5let.rda")
#small overfitting
plot(as.timeSeries(equityVseStrategije[, c(5,7,9)]), plot.type = "single",
col = c("black", "blue", "green"),
at = "auto", xlab = "Čas", ylab = "Kapital", main = "Kapital za posamezno strategijo")
legend(x = "topleft", col = c("black", "blue", "green"),
legend = colnames(equityVseStrategije[, c(5,7,9)]), lty = 1)
plot(as.timeSeries(equityVseStrategije[, c(2,5,6,8,9)]), plot.type = "single",
col = c("black", "blue", "green", "purple", "red"),
at = "auto", xlab = "Čas", ylab = "Kapital", main = "Kapital za posamezno strategijo")
legend(x = "topleft", col = c("black", "blue", "green", "purple", "red"),
legend = colnames(equityVseStrategije[, c(2,5,6,8,9)]), lty = 1)
#large overfitting
plot(as.timeSeries(equityVseStrategije[, c(4,6,7)]), plot.type = "single",
col = c("black", "blue", "green"),
at = "auto", xlab = "Čas", ylab = "Kapital", main = "Kapital za posamezno strategijo")
legend(x = "bottomright", col = c("black", "blue", "green"),
legend = colnames(equityVseStrategije[, c(4,6,7)]), lty = 1)
plot(as.timeSeries(equityVseStrategije[, c(3,4,6,7,9)]), plot.type = "single",
col = c("black", "blue", "green", "orange", "red"),
at = "auto", xlab = "Čas", ylab = "Kapital", main = "Kapital za posamezno strategijo")
legend(x = "bottomright", col = c("black", "blue", "green", "orange", "red"),
legend = colnames(equityVseStrategije[, c(3,4,6,7,9)]), lty = 1)
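# A sketch of how the number of trades implied by an SMA crossover could be
# counted from the series above (`signal` and `n.trades` are names introduced
# here for illustration; they are not part of the original analysis):
signal <- as.numeric(as.vector(SP.SMA5) > as.vector(SP.SMA25))  # 1 when the fast SMA is above the slow one
n.trades <- sum(abs(diff(signal)) == 1, na.rm = TRUE)           # each flip of the signal is one crossing
n.trades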
|
05df8c64713a26075fdb7afa96b90de4bc038394
|
de947d882fa63e5a64f71d58037cc7fcca7f033d
|
/exploratory.r
|
47a1bbbabfcca428bd37665b1285b5322412e6a0
|
[] |
no_license
|
krashr-ds/old-r-models
|
30e71dcbbc3fbc04d58169bacd4e509f1ec94851
|
179a9367a0faceff5dee22ec4b19cbd0dc7e1510
|
refs/heads/master
| 2023-02-01T22:32:43.022610
| 2020-12-15T14:20:15
| 2020-12-15T14:20:15
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 21,414
|
r
|
exploratory.r
|
# Exploratory data analysis; Framingham data
library(dplyr)
library(ggplot2)
library(tidyr)
library(purrr)
library(broom)
# USE frmgham data where each person has 3 entries / periods (9618 rows / 3 = 3206 individuals)
fr_ex <- frmgham %>%
group_by(RANDID) %>%
filter(n()==3)
# GATHER AND RECODE #
#####################
# Gather data on existing diagnoses
fr_ex <- fr_ex %>% gather(DIAGNOSIS, HAS_DIAGNOSIS, c(PREVAP, PREVCHD, PREVMI, PREVSTRK, PREVHYP, DIABETES))
# Recode to readable
fr_ex <- fr_ex %>% mutate(DIAGNOSIS = recode(DIAGNOSIS,
PREVAP = "Angina (Diagnosis)",
PREVCHD = "Angina or MI (Diagnosis)",
PREVMI = "MI (Diagnosis)",
PREVSTRK = "Stroke (Diagnosis)",
PREVHYP = "Hypertension (Diagnosis)",
DIABETES = "Diabetes"))
# Gather data on flags
fr_ex <- fr_ex %>% gather(FLAG, FLAG_VAL, c(ANGINA, HOSPMI, MI_FCHD, ANYCHD, STROKE, CVD))
# Recode to readable
fr_ex <- fr_ex %>% mutate(FLAG = recode(FLAG,
ANGINA = "ANGINA",
HOSPMI = "MI",
MI_FCHD = "MI/FCHD",
ANYCHD = "ANG/MI/FCHD",
STROKE = "STROKE",
CVD = "ANYCHD/STROKE"))
# Gather data on events (Leave out DEATH)
fr_ex <- fr_ex %>% gather(EVENT, DAYS_TO_EVENT, c(TIMEAP, TIMEMI, TIMEMIFC, TIMECHD, TIMESTRK, TIMECVD, TIMEHYP))
# Recode to readable
fr_ex <- fr_ex %>% mutate(EVENT = recode(EVENT,
TIMEAP = "Angina",
TIMEMI = "MI",
TIMEMIFC = "MI/Fatal CHD",
TIMECHD = "Angina/MI/Fatal CHD",
TIMESTRK = "Any Stroke",
TIMECVD = "MI/Fatal CHD/Stroke (Any)",
TIMEHYP = "Hypertension"))
# Code BMI groups
fr_ex <- fr_ex %>% mutate(BMIGROUP=cut(BMI, breaks=c(0, 18.5, 25, 30, Inf), labels=c("Underweight",
"Normal",
"Overweight",
"Obese+")))
# Code HTN clinical groups
# Diastolic first, since most people with high diastolic pressure also have high systolic pressure.
fr_ex <- fr_ex %>% mutate(HTNGROUP=cut(DIABP, breaks=c(0, 60, 80, 90, 120, Inf), labels=c("Low",
"Normal",
"Hypertension Stage 1",
"Hypertension Stage 2",
"Hypertensive Crisis")))
# For rows not already hypertensive by diastolic pressure, reclassify by systolic;
# ifelse() (rather than filter()) keeps the DIABP >= 90 rows and their
# diastolic-based classification instead of dropping them from fr_ex.
fr_ex <- fr_ex %>% mutate(HTNGROUP = ifelse(DIABP < 90,
                                            as.character(cut(SYSBP, breaks = c(0, 91, 121, 131, 141, 181, Inf),
                                                             labels = c("Low", "Normal", "Prehypertension",
                                                                        "Hypertension Stage 1", "Hypertension Stage 2",
                                                                        "Hypertensive Crisis"))),
                                            as.character(HTNGROUP)))
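# How cut() maps readings onto these labels can be checked on a few
# hypothetical diastolic values (intervals are right-closed by default,
# so a reading exactly on a break falls into the lower bin):
cut(c(55, 70, 85, 100, 130),
    breaks = c(0, 60, 80, 90, 120, Inf),
    labels = c("Low", "Normal", "Hypertension Stage 1",
               "Hypertension Stage 2", "Hypertensive Crisis"))
# -> Low, Normal, Hypertension Stage 1, Hypertension Stage 2, Hypertensive Crisis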
# Recode SEX
fr_ex <- fr_ex %>% mutate(SEXGROUP=cut(SEX, breaks=c(-Inf, 1, 2), labels=c("Male", "Female")))
# Remove rows with no information (NOTE: This removed more than a million rows.)
fr_ex <- fr_ex %>% filter(HAS_DIAGNOSIS!=0 | FLAG_VAL!=0 | DAYS_TO_EVENT!=0)
# FLATTENED, CATEGORIZED DATA: 1,785,340 rows and 28 cols #
###########################################################
# SUMMARIZE
fr_sum <- fr_ex %>% group_by(RANDID) %>% summarize(T_TIME = max(TIME), M_AGE = mean(AGE), NUM_DX = mean(HAS_DIAGNOSIS), NUM_EVENTS = mean(FLAG_VAL), MEAN_DAYS = mean(DAYS_TO_EVENT), DDEATH = mean(TIMEDTH), CPD = mean(CIGPDAY))
fr_sum <- fr_sum %>% filter(T_TIME!=0)
# Primary Summary Graph: Mean Days to Event x Number of Diagnoses
ggplot(fr_sum, aes(x = NUM_DX, y = MEAN_DAYS)) +
geom_point(position = "jitter", alpha = 0.3) +
geom_smooth(method = "lm") +
labs(title = "Summary: Days to Event Explained by Number of Diagnoses", x = "Mean Diagnoses / Prev Conditions", y = "Mean Days to Event")
# Summary Graph 2: Days to Death x Number of Diagnoses
ggplot(fr_sum, aes(x = NUM_DX, y = DDEATH)) +
geom_point(position = "jitter", alpha = 0.3) +
geom_smooth(method = "lm") +
labs(title = "Summary: Days to Death Explained by Number of Diagnoses", x = "Mean Diagnoses / Prev Conditions", y = "Days to Death")
# Summary Graph 3: Days to Death x Number of Events
ggplot(fr_sum, aes(x = NUM_EVENTS, y = DDEATH)) +
geom_point(position = "jitter", alpha = 0.3) +
geom_smooth(method = "lm") +
labs(title = "Summary: Days to Death Explained by Number of Events", x = "Mean Events", y = "Days to Death")
# Summary Graph 4: Days to Event x Age
ggplot(fr_sum, aes(x = M_AGE, y = MEAN_DAYS)) +
geom_point(position = "jitter", alpha = 0.3) +
geom_smooth(method = "lm") +
labs(title = "Summary: Days to Event Explained by Age", x = "Mean Age", y = "Days to Event")
# Summary Graph 5: Days to Death x Age
ggplot(fr_sum, aes(x = M_AGE, y = DDEATH)) +
geom_point(position = "jitter", alpha = 0.3) +
geom_smooth(method = "lm") +
labs(title = "Summary: Days to Death Explained by Age", x = "Mean Age", y = "Days to Death")
# Summary Graph 6: Days to Event x CPD (Smokers only)
frs <- fr_sum %>% filter(CPD>0)
ggplot(frs, aes(x = CPD, y = MEAN_DAYS)) +
geom_point(position = "jitter", alpha = 0.3) +
geom_smooth(method = "lm") +
labs(title = "Summary: Days to Event Explained by Cigarettes per Day > 0", x = "Mean CPD", y = "Days to Event")
# Separate into periods
fr_p1 <- fr_ex %>% filter(PERIOD == 1) %>% select(RANDID, TIME, AGE, SEXGROUP, HTNGROUP, HYPERTEN, BPMEDS, BMIGROUP, CURSMOKE, CIGPDAY, DIAGNOSIS, HAS_DIAGNOSIS, FLAG, FLAG_VAL, EVENT, DAYS_TO_EVENT, TIMEDTH)
fr_p2 <- fr_ex %>% filter(PERIOD == 2) %>% select(RANDID, TIME, AGE, SEXGROUP, HTNGROUP, HYPERTEN, BPMEDS, BMIGROUP, CURSMOKE, CIGPDAY, DIAGNOSIS, HAS_DIAGNOSIS, FLAG, FLAG_VAL, EVENT, DAYS_TO_EVENT, TIMEDTH)
fr_p3 <- fr_ex %>% filter(PERIOD == 3) %>% select(RANDID, TIME, AGE, SEXGROUP, HTNGROUP, HYPERTEN, BPMEDS, BMIGROUP, GLUCOSE, CURSMOKE, CIGPDAY, DIAGNOSIS, HAS_DIAGNOSIS, FLAG, FLAG_VAL, EVENT, DAYS_TO_EVENT, TIMEDTH, TOTCHOL, HDLC, LDLC)
# Period 1: 616704, Period 2: 570335, Period 3: 598301 rows
# Strongest Correlation: Diagnoses & Events
# Let's explore this in more detail among the PERIODs
# PERIOD 1
fr_p1 <- fr_p1 %>% filter(BPMEDS %in% c(0,1))
# No Diagnoses v. Events by Age, Sex, BMI
# Males
fr_p1z <- fr_p1 %>% filter(FLAG_VAL==1 & HAS_DIAGNOSIS==0 & SEXGROUP=="Male")
ggplot(fr_p1z, aes(x = AGE, y = log(DAYS_TO_EVENT), color = EVENT)) +
geom_point(position = "jitter", alpha=0.5) +
geom_smooth(method = "lm") +
labs(title = "P1 Males with No Prior Diagnosis, Time to Event", x = "AGE", y = "LOG(DAYS_TO_EVENT)")
# Females
fr_p1y <- fr_p1 %>% filter(FLAG_VAL==1 & HAS_DIAGNOSIS==0 & SEXGROUP=="Female")
ggplot(fr_p1y, aes(x = AGE, y = log(DAYS_TO_EVENT), color = EVENT)) +
geom_point(position = "jitter", alpha=0.5) +
geom_smooth(method = "lm") +
labs(title = "P1 Females with No Prior Diagnosis, Time to Event", x = "AGE", y = "LOG(DAYS_TO_EVENT)")
# DIAGNOSIS, EVENT & TIME TO DEATH
fr_p1a <- fr_p1 %>% select(FLAG, FLAG_VAL, TIMEDTH, DIAGNOSIS, HAS_DIAGNOSIS) %>% filter(HAS_DIAGNOSIS==1)
ggplot(fr_p1a, aes(x = log(TIMEDTH), y = FLAG, color = DIAGNOSIS)) +
geom_col() +
facet_grid(~FLAG_VAL) +
labs(title = "P1 Time to Death with a prior Diagnosis by Event (1) v. No Event(0)", y = "Event Flag", x = "LOG(Days to Death)")
# HYPERTENSION and EVENTS
# NOTE: I graphed people on BP Meds separately, because there are so few of them.
fr_p1b <- fr_p1 %>% filter(BPMEDS==0 & EVENT!="Hypertension")
ggplot(fr_p1b, aes(x = EVENT, y = log(DAYS_TO_EVENT), color=HTNGROUP)) +
geom_col() +
labs(title = "P1 Time to Event by Hypertension (Unmedicated)", x="Event", y = "LOG(Days to Event)")
fr_p1b2 <- fr_p1 %>% filter(BPMEDS==1 & EVENT!="Hypertension")
ggplot(fr_p1b2, aes(x = EVENT, y = log(DAYS_TO_EVENT), color=HTNGROUP)) +
geom_col() +
labs(title = "P1 Time to Event by Hypertension (Medicated)", x="Event", y = "LOG(Days to Event)")
# OTHER DIAGNOSES and EVENTS
# Angina: Days to Event by BMI, Sex
fr_p1c <- fr_p1 %>% filter(FLAG=="ANGINA" & FLAG_VAL==1 | (DIAGNOSIS=="Angina (Diagnosis)" & HAS_DIAGNOSIS==1))
ggplot(fr_p1c, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~SEXGROUP) +
labs(title = "P1 Persons with Angina: Time to an Event by BMI Group / Sex", x = "Days to Event", y = "Event")
# MI:
fr_p1d <- fr_p1 %>% filter(FLAG=="MI" & FLAG_VAL==1 | (DIAGNOSIS=="MI (Diagnosis)" & HAS_DIAGNOSIS==1))
ggplot(fr_p1d, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~SEXGROUP) +
labs(title = "P1 Persons with MI: Time to an Event by BMI Group / Sex", x = "Days to Event", y = "Event")
# Stroke:
fr_p1e <- fr_p1 %>% filter(FLAG=="STROKE" & FLAG_VAL==1 | (DIAGNOSIS=="Stroke (Diagnosis)" & HAS_DIAGNOSIS==1))
ggplot(fr_p1e, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~CURSMOKE) +
labs(title = "P1 Persons with Stroke: Time to an Event by BMI Group / Smoker", x = "Days to Event", y = "Event")
# Diabetes
fr_p1h <- fr_p1 %>% filter(DIAGNOSIS=="Diabetes" & HAS_DIAGNOSIS==1)
ggplot(fr_p1h, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~SEXGROUP) +
labs(title = "P1 Persons with Diabetes: Time to an Event by BMI Group / Sex", x = "Days to Event", y = "Event")
ggplot(fr_p1h, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~CURSMOKE) +
labs(title = "P1 Persons with Diabetes: Time to an Event by BMI Group / Smoker", x = "Days to Event", y = "Event")
# Cigs per day x Days to Event (Among smokers with at least 1 event)
fr_p1f <- fr_p1 %>% filter(CIGPDAY>0 & FLAG_VAL>0)
ggplot(fr_p1f, aes(x = CIGPDAY, y = log(DAYS_TO_EVENT), color=EVENT)) +
geom_point(position = "jitter", alpha = 0.4) +
geom_smooth(method = "lm") +
labs(title = "P1 Days to Events x Cigarettes per Day", x = "Cigarettes per Day", y = "LOG(Days to Events)")
# Cigs per day x Age @ Diagnoses (Among smokers with at least 1 diagnosis)
fr_p1g <- fr_p1 %>% filter(CIGPDAY>0 & HAS_DIAGNOSIS>0)
ggplot(fr_p1g, aes(x = CIGPDAY, y = AGE, color=DIAGNOSIS)) +
geom_point(position = "jitter", alpha = 0.4) +
geom_smooth(method = "lm") +
labs(title = "P1 Age @ Diagnosis x Cigarettes per Day", x = "Cigarettes per Day", y = "Age")
#####################
# PERIOD 2
fr_p2 <- fr_p2 %>% filter(BPMEDS %in% c(0,1))
# No Diagnoses v. Events by Age, Sex, BMI
# Males
fr_p2z <- fr_p2 %>% filter(FLAG_VAL==1 & HAS_DIAGNOSIS==0 & SEXGROUP=="Male")
ggplot(fr_p2z, aes(x = AGE, y = log(DAYS_TO_EVENT), color = EVENT)) +
geom_point(position = "jitter", alpha=0.5) +
geom_smooth(method = "lm") +
labs(title = "P2 Males with No Prior Diagnosis, Time to Event", x = "AGE", y = "LOG(DAYS_TO_EVENT)")
# Females
fr_p2y <- fr_p2 %>% filter(FLAG_VAL==1 & HAS_DIAGNOSIS==0 & SEXGROUP=="Female")
ggplot(fr_p2y, aes(x = AGE, y = log(DAYS_TO_EVENT), color = EVENT)) +
geom_point(position = "jitter", alpha=0.5) +
geom_smooth(method = "lm") +
labs(title = "P2 Females with No Prior Diagnosis, Time to Event", x = "AGE", y = "LOG(DAYS_TO_EVENT)")
# DIAGNOSIS, EVENT & TIME TO DEATH
fr_p2a <- fr_p2 %>% select(FLAG, FLAG_VAL, TIMEDTH, DIAGNOSIS, HAS_DIAGNOSIS) %>% filter(HAS_DIAGNOSIS==1)
ggplot(fr_p2a, aes(x = log(TIMEDTH), y = FLAG, color = DIAGNOSIS)) +
geom_col() +
facet_grid(~FLAG_VAL) +
labs(title = "P2 Time to Death with a prior Diagnosis by Event (1) v. No Event(0)", y = "Event Flag", x = "LOG(Days to Death)")
# HYPERTENSION and EVENTS
# NOTE: I graphed people on BP Meds separately, because there are so few of them.
fr_p2b <- fr_p2 %>% filter(BPMEDS==0 & EVENT!="Hypertension")
ggplot(fr_p2b, aes(x = EVENT, y = log(DAYS_TO_EVENT), color=HTNGROUP)) +
geom_col() +
labs(title = "P2 Time to Event by Hypertension (Unmedicated)", x="Event", y = "LOG(Days to Event)")
fr_p2b2 <- fr_p2 %>% filter(BPMEDS==1 & EVENT!="Hypertension")
ggplot(fr_p2b2, aes(x = EVENT, y = log(DAYS_TO_EVENT), color=HTNGROUP)) +
geom_col() +
labs(title = "P2 Time to Event by Hypertension (Medicated)", x="Event", y = "LOG(Days to Event)")
# OTHER DIAGNOSES and EVENTS
# Angina: Days to Event by BMI, Sex
fr_p2c <- fr_p2 %>% filter(FLAG=="ANGINA" & FLAG_VAL==1 | (DIAGNOSIS=="Angina (Diagnosis)" & HAS_DIAGNOSIS==1))
ggplot(fr_p2c, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~SEXGROUP) +
labs(title = "P2 Persons with Angina: Time to an Event by BMI Group / Sex", x = "Days to Event", y = "Event")
# MI:
fr_p2d <- fr_p2 %>% filter(FLAG=="MI" & FLAG_VAL==1 | (DIAGNOSIS=="MI (Diagnosis)" & HAS_DIAGNOSIS==1))
ggplot(fr_p2d, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~SEXGROUP) +
labs(title = "P2 Persons with MI: Time to an Event by BMI Group / Sex", x = "Days to Event", y = "Event")
# Stroke:
fr_p2e <- fr_p2 %>% filter(FLAG=="STROKE" & FLAG_VAL==1 | (DIAGNOSIS=="Stroke (Diagnosis)" & HAS_DIAGNOSIS==1))
ggplot(fr_p2e, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~CURSMOKE) +
labs(title = "P2 Persons with Stroke: Time to an Event by BMI Group / Smoker", x = "Days to Event", y = "Event")
# Diabetes
fr_p2h <- fr_p2 %>% filter(DIAGNOSIS=="Diabetes" & HAS_DIAGNOSIS==1)
ggplot(fr_p2h, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~SEXGROUP) +
labs(title = "P2 Persons with Diabetes: Time to an Event by BMI Group / Sex", x = "Days to Event", y = "Event")
ggplot(fr_p2h, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~CURSMOKE) +
labs(title = "P2 Persons with Diabetes: Time to an Event by BMI Group / Smoker", x = "Days to Event", y = "Event")
# Cigs per day x Days to Event (Among smokers with at least 1 event)
fr_p2f <- fr_p2 %>% filter(CIGPDAY>0 & FLAG_VAL>0)
ggplot(fr_p2f, aes(x = CIGPDAY, y = log(DAYS_TO_EVENT), color=EVENT)) +
geom_point(position = "jitter", alpha = 0.4) +
geom_smooth(method = "lm") +
labs(title = "P2 Days to Events x Cigarettes per Day", x = "Cigarettes per Day", y = "LOG(Days to Events)")
# Cigs per day x Age @ Diagnoses (Among smokers with at least 1 diagnosis)
fr_p2g <- fr_p2 %>% filter(CIGPDAY>0 & HAS_DIAGNOSIS>0)
ggplot(fr_p2g, aes(x = CIGPDAY, y = AGE, color=DIAGNOSIS)) +
geom_point(position = "jitter", alpha = 0.4) +
geom_smooth(method = "lm") +
labs(title = "P2 Age @ Diagnosis x Cigarettes per Day", x = "Cigarettes per Day", y = "Age")
#####################
# PERIOD 3
fr_p3 <- fr_p3 %>% filter(BPMEDS %in% c(0,1) & !is.na(BMIGROUP))
# No Diagnoses v. Events by Age, Sex, BMI
# Males
fr_p3z <- fr_p3 %>% filter(FLAG_VAL==1 & HAS_DIAGNOSIS==0 & SEXGROUP=="Male")
ggplot(fr_p3z, aes(x = AGE, y = log(DAYS_TO_EVENT), color = EVENT)) +
geom_point(position = "jitter", alpha=0.5) +
geom_smooth(method = "lm") +
labs(title = "P3 Males with No Prior Diagnosis, Time to Event", x = "AGE", y = "LOG(DAYS_TO_EVENT)")
# Females
fr_p3y <- fr_p3 %>% filter(FLAG_VAL==1 & HAS_DIAGNOSIS==0 & SEXGROUP=="Female")
ggplot(fr_p3y, aes(x = AGE, y = log(DAYS_TO_EVENT), color = EVENT)) +
geom_point(position = "jitter", alpha=0.5) +
geom_smooth(method = "lm") +
labs(title = "P3 Females with No Prior Diagnosis, Time to Event", x = "AGE", y = "LOG(DAYS_TO_EVENT)")
# DIAGNOSIS, EVENT & TIME TO DEATH
fr_p3a <- fr_p3 %>% select(FLAG, FLAG_VAL, TIMEDTH, DIAGNOSIS, HAS_DIAGNOSIS) %>% filter(HAS_DIAGNOSIS==1)
ggplot(fr_p3a, aes(x = log(TIMEDTH), y = FLAG, color = DIAGNOSIS)) +
geom_col() +
facet_grid(~FLAG_VAL) +
labs(title = "P3 Time to Death with a prior Diagnosis by Event (1) v. No Event(0)", y = "Event Flag", x = "LOG(Days to Death)")
# HYPERTENSION and EVENTS
# NOTE: I graphed people on BP Meds separately, because there are so few of them.
fr_p3b <- fr_p3 %>% filter(BPMEDS==0 & EVENT!="Hypertension")
ggplot(fr_p3b, aes(x = EVENT, y = log(DAYS_TO_EVENT), color=HTNGROUP)) +
geom_col() +
labs(title = "P3 Time to Event by Hypertension (Unmedicated)", x="Event", y = "LOG(Days to Event)")
fr_p3b2 <- fr_p3 %>% filter(BPMEDS==1 & EVENT!="Hypertension")
ggplot(fr_p3b2, aes(x = EVENT, y = log(DAYS_TO_EVENT), color=HTNGROUP)) +
geom_col() +
labs(title = "P3 Time to Event by Hypertension (Medicated)", x="Event", y = "LOG(Days to Event)")
# OTHER DIAGNOSES and EVENTS
# Angina: Days to Event by BMI, Sex
fr_p3c <- fr_p3 %>% filter(FLAG=="ANGINA" & FLAG_VAL==1 | (DIAGNOSIS=="Angina (Diagnosis)" & HAS_DIAGNOSIS==1))
ggplot(fr_p3c, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~SEXGROUP) +
labs(title = "P3 Persons with Angina: Time to an Event by BMI Group / Sex", x = "Days to Event", y = "Event")
# MI:
fr_p3d <- fr_p3 %>% filter(FLAG=="MI" & FLAG_VAL==1 | (DIAGNOSIS=="MI (Diagnosis)" & HAS_DIAGNOSIS==1))
ggplot(fr_p3d, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~SEXGROUP) +
labs(title = "P3 Persons with MI: Time to an Event by BMI Group / Sex", x = "Days to Event", y = "Event")
# Stroke:
fr_p3e <- fr_p3 %>% filter(FLAG=="STROKE" & FLAG_VAL==1 | (DIAGNOSIS=="Stroke (Diagnosis)" & HAS_DIAGNOSIS==1))
ggplot(fr_p3e, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~CURSMOKE) +
labs(title = "P3 Persons with Stroke: Time to an Event by BMI Group / Smoker", x = "Days to Event", y = "Event")
# Diabetes
fr_p3h <- fr_p3 %>% filter(DIAGNOSIS=="Diabetes" & HAS_DIAGNOSIS==1)
ggplot(fr_p3h, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~SEXGROUP) +
labs(title = "P3 Persons with Diabetes: Time to an Event by BMI Group / Sex", x = "Days to Event", y = "Event")
ggplot(fr_p3h, aes(x = DAYS_TO_EVENT, y = EVENT, color = BMIGROUP)) +
geom_col() +
facet_grid(~CURSMOKE) +
labs(title = "P3 Persons with Diabetes: Time to an Event by BMI Group / Smoker", x = "Days to Event", y = "Event")
# Cigs per day x Days to Event (Among smokers with at least 1 event)
fr_p3f <- fr_p3 %>% filter(CIGPDAY>0 & FLAG_VAL>0)
ggplot(fr_p3f, aes(x = CIGPDAY, y = log(DAYS_TO_EVENT), color=EVENT)) +
geom_point(position = "jitter", alpha = 0.4) +
geom_smooth(method = "lm") +
labs(title = "P3 Days to Events x Cigarettes per Day", x = "Cigarettes per Day", y = "LOG(Days to Events)")
# Cigs per day x Age @ Diagnoses (Among smokers with at least 1 diagnosis)
fr_p3g <- fr_p3 %>% filter(CIGPDAY>0 & HAS_DIAGNOSIS>0)
ggplot(fr_p3g, aes(x = CIGPDAY, y = AGE, color=DIAGNOSIS)) +
geom_point(position = "jitter", alpha = 0.4) +
geom_smooth(method = "lm") +
labs(title = "P3 Age @ Diagnosis x Cigarettes per Day", x = "Cigarettes per Day", y = "Age")
#
# ADDITIONAL FOR PERIOD 3
#
# Time to Hypertension Explained by Glucose Level, BMI Group
fr_p3xx <- fr_p3 %>% filter(EVENT=="Hypertension" & !is.na(GLUCOSE))
ggplot(fr_p3xx, aes(x = GLUCOSE, y = DAYS_TO_EVENT, color = BMIGROUP)) +
geom_point(position = "jitter", alpha=0.4) +
geom_smooth(method = "lm") +
labs(title = "Time to Hypertension Explained by Glucose level, BMI.", x = "Random Glucose mg/dl", y = "Days to Hypertension")
# Time to Hypertension Explained by Age, BMI Group
ggplot(fr_p3xx, aes(x = AGE, y = DAYS_TO_EVENT, color = BMIGROUP)) +
geom_point(position = "jitter", alpha=0.4) +
geom_smooth(method = "lm") +
labs(title = "Time to Hypertension Explained by Age, BMI.", x = "Age", y = "Days to Hypertension")
# Influence of Cholesterol (TOTCHOL graph was not significant, so removed.)
fr_pxxx <- fr_p3 %>% filter(!is.na(LDLC) & !is.na(HDLC))
ggplot(fr_pxxx, aes(x = LDLC, y = DAYS_TO_EVENT, color=EVENT)) +
geom_point(position = "jitter", alpha=0.4) +
geom_smooth(method = "lm") +
labs(title = "Time to Event Explained by LDLC", x = "LDLC", y = "Days to Event")
ggplot(fr_pxxx, aes(x = HDLC, y = DAYS_TO_EVENT, color=EVENT)) +
geom_point(position = "jitter", alpha=0.4) +
geom_smooth(method = "lm") +
labs(title = "Time to Event Explained by HDLC", x = "HDLC", y = "Days to Event")
|
cf75cf5557569c45fda34873fb6e7818d8599056
|
97932fb906650536ff644f4b57e1b05a74695e1d
|
/man/sq_list_locations.Rd
|
7b683a5f8294918734ed3bef0ca4dff9abe86c89
|
[] |
no_license
|
muschellij2/squareupr
|
350ed186d711182abfb6ad5dde0c068e5325ee29
|
37bf28750127235c09f7f57278faf484b04aac0d
|
refs/heads/master
| 2021-05-26T23:23:03.404648
| 2019-07-11T20:50:36
| 2019-07-11T20:50:36
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 699
|
rd
|
sq_list_locations.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/locations.R
\name{sq_list_locations}
\alias{sq_list_locations}
\title{List Locations}
\usage{
sq_list_locations(verbose = FALSE)
}
\arguments{
\item{verbose}{logical; do you want informative messages?}
}
\value{
\code{tbl_df} of locations
}
\description{
Provides the details for all of a business's locations.
}
\details{
Most other Connect API endpoints have a required location_id path parameter.
The id field of the Location objects returned by this endpoint correspond to
that location_id parameter. Required permissions: \code{MERCHANT_PROFILE_READ}
}
\examples{
\dontrun{
my_locations <- sq_list_locations()
}
}
|
ae7e4005570ae7a943d3986733187e6377e061f8
|
edc6dad2b241a1d0f4a7eef6378bc5e0fe1f57a0
|
/man/RR.Rd
|
c4b3e2a8dd7a472d105cf5f7af4cc7686c4bcbc0
|
[] |
no_license
|
ejanalysis/ejanalysis
|
5060b1ee3cb65dd5302cc6a42821492501aa334a
|
47f6a0fa33ddb4be04da2264eb6e9c7e1ccf99f8
|
refs/heads/master
| 2023-06-10T18:17:30.239436
| 2023-05-25T23:13:56
| 2023-05-25T23:13:56
| 32,804,966
| 4
| 2
| null | null | null | null |
UTF-8
|
R
| false
| true
| 8,030
|
rd
|
RR.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RR.R
\name{RR}
\alias{RR}
\title{Relative Risk (RR) by demographic group by indicator based on Census data}
\usage{
RR(e, d, pop, dref, na.rm = TRUE)
}
\arguments{
\item{e}{Vector or data.frame or matrix with 1 or more environmental indicator(s) or health risk level
(e.g., PM2.5 concentration to which this person or place is exposed),
one row per Census unit and one column per indicator.}
\item{d}{Vector or data.frame or matrix with 1 or more demog groups percentage
(as fraction of 1, not 0-100!)
of place that is selected demog group (e.g. percent Hispanic)
(or d=1 or 0 per row if this is a vector of individuals)}
\item{pop}{Vector of one row per location providing population count of place
(or pop=1 if this is a vector of individuals),
to convert d into a count since d is a fraction}
\item{dref}{Optional vector specifying a reference group for RR calculation
by providing what percentage (as fraction of 1, not 0-100!) of place that is individuals in the reference group
(or dref= vector of ones and zeroes if this is a vector of individuals)}
\item{na.rm}{Optional, logical, TRUE by default. Specify if NA values should be removed first.}
}
\value{
numeric results as vector or data.frame
}
\description{
Finds the ratio of mean indicator value in one demographic subgroup to mean in everyone else,
based on data for each spatial unit such as for block groups or tracts.
}
\details{
This function requires, for each Census unit, demographic data on total population and percent in each demographic group,
and some indicator(s) for each Census unit, such as health status, exposure estimates, or environmental health risk.
For example, given population count, percent Hispanic, and ppm of ozone for each tract, this calculates the ratio
of the population mean tract-level ozone concentration among Hispanics to the same value among all non-Hispanics.
The result is a ratio of means for two demographic groups, or for each of several groups and indicators.
Each e (an environmental indicator) or d (a demographic percentage) is specified as a vector over small places such as Census blocks or block groups,
or (ideally) over individuals, in which case d would be a dummy variable equal to 1 for the selected group and 0 for everyone else.
Note: this currently does not use rrf() and rrfv(); it might be faster if it did, but rrfv() has not been tested for multiple demographic groups. \cr
NEED TO VERIFY/TEST THIS: places with NA in any one or more of the values used (e, d, pop, dref) are removed from both numerators and denominators. \cr
Note also that NA values are removed for one e factor and not for another,
so results can be based on different places and people for different e factors.
}
\examples{
bg <- structure(list(state = structure(c(1L, 2L, 3L, 4L, 5L, 6L, 7L,
8L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L,
22L, 23L, 24L, 25L, 26L, 27L, 28L, 29L, 30L, 31L, 32L, 33L, 34L,
35L, 36L, 37L, 38L, 39L, 41L, 42L, 43L, 44L, 45L, 46L, 47L, 48L,
49L, 50L, 51L, 52L), .Label = c("Alabama", "Alaska", "Arizona",
"Arkansas", "California", "Colorado", "Connecticut", "Delaware",
"District of Columbia", "Florida", "Georgia", "Hawaii", "Idaho",
"Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
"Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
"Mississippi", "Missouri", "Montana", "Nebraska", "Nevada", "New Hampshire",
"New Jersey", "New Mexico", "New York", "North Carolina", "North Dakota",
"Ohio", "Oklahoma", "Oregon", "Pennsylvania", "Puerto Rico",
"Rhode Island", "South Carolina", "South Dakota", "Tennessee",
"Texas", "Utah", "Vermont", "Virginia", "Washington", "West Virginia",
"Wisconsin", "Wyoming"), class = "factor"), pcthisp = c(0.0381527239296627,
0.056769492321473, 0.296826116572835, 0.0635169313105461, 0.375728960493789,
0.206327251656949, 0.134422275491411, 0.0813548250199138, 0.2249082771481,
0.0878682317249484, 0.0901734019211436, 0.111947738331921, 0.158094676641822,
0.0599941716405598, 0.0495552961203499, 0.104741084665558, 0.0301921562004411,
0.0425162900517816, 0.0129720920573869, 0.0816325860392955, 0.0960897601513277,
0.0442948677533508, 0.0470583828855611, 0.0264973278249911, 0.0354626134972627,
0.0292535716628734, 0.0914761950105784, 0.265451497002445, 0.0283535007142456,
0.177132117215957, 0.463472498001496, 0.176607017430808, 0.0834317084560556,
0.0209906647364226, 0.0307719359181436, 0.0883052970054721, 0.117261303415395,
0.0568442805511265, 0.124769233546578, 0.0503041778042313, 0.0279186292931113,
0.0454229079840698, 0.376044616311455, 0.129379195461843, 0.0151111594281676,
0.0788112971314249, 0.111945098129999, 0.0119028512046327, 0.0589405823830593,
0.0893971780534219), pop = c(3615, 365, 2212, 2110, 21198, 2541,
3100, 579, 8277, 4931, 868, 813, 11197, 5313, 2861, 2280, 3387,
3806, 1058, 4122, 5814, 9111, 3921, 2341, 4767, 746, 1544, 590,
812, 7333, 1144, 18076, 5441, 637, 10735, 2715, 2284, 11860,
931, 2816, 681, 4173, 12237, 1203, 472, 4981, 3559, 1799, 4589,
376), murder = c(15.1, 11.3, 7.8, 10.1, 10.3, 6.8, 3.1, 6.2,
10.7, 13.9, 6.2, 5.3, 10.3, 7.1, 2.3, 4.5, 10.6, 13.2, 2.7, 8.5,
3.3, 11.1, 2.3, 12.5, 9.3, 5, 2.9, 11.5, 3.3, 5.2, 9.7, 10.9,
11.1, 1.4, 7.4, 6.4, 4.2, 6.1, 2.4, 11.6, 1.7, 11, 12.2, 4.5,
5.5, 9.5, 4.3, 6.7, 3, 6.9), area = c(50708, 566432, 113417,
51945, 156361, 103766, 4862, 1982, 54090, 58073, 6425, 82677,
55748, 36097, 55941, 81787, 39650, 44930, 30920, 9891, 7826,
56817, 79289, 47296, 68995, 145587, 76483, 109889, 9027, 7521,
121412, 47831, 48798, 69273, 40975, 68782, 96184, 44966, 1049,
30225, 75955, 41328, 262134, 82096, 9267, 39780, 66570, 24070,
54464, 97203), temp = c(62.8, 26.6, 60.3, 60.4, 59.4, 45.1, 49,
55.3, 70.7, 63.5, 70, 44.4, 51.8, 51.7, 47.8, 54.3, 55.6, 66.4,
41, 54.2, 47.9, 44.4, 41.2, 63.4, 54.5, 42.7, 48.8, 49.9, 43.8,
52.7, 53.4, 45.4, 59, 40.4, 50.7, 59.6, 48.4, 48.8, 50.1, 62.4,
45.2, 57.6, 64.8, 48.6, 42.9, 55.1, 48.3, 51.8, 43.1, 42)), .Names = c("state",
"pcthisp", "pop", "murder", "area", "temp"), class = "data.frame", row.names = c(NA,
-50L))
RR(bg$area, bg$pcthisp, bg$pop)
# Avg Hispanic lives in a State that is 69 percent larger than
# that of avg. non-Hispanic
RR(bg$pcthisp, bg$pcthisp, bg$pop)
# The avg. Hispanic person's local percent Hispanic (their block group) is 4x everyone else's on avg.,
# but the avg. low-income person's local percent low income is only 1.8x as high as everyone else's.
# cbind(RR=RR(e=data.frame(local_pct_hispanic=bg$pcthisp, local_pct_lowincome=bg$pctlowinc),
# d= cbind(Ratio_of_avg_among_hispanics_to_avg_among_nonhispanics=bg$pcthisp,
# avg_among_lowinc_vs_rest_of_pop=bg$pctlowinc), bg$pop))
# RR(bg[ , names.e], bg$pctlowinc, bg$pop)
# sapply(bg[ , names.d], function(z) RR(bg[ , names.e], z, bg$pop) )
}
\seealso{
\itemize{
\item \code{\link{ej.indexes}} for local contribution to a variety of overall disparity metrics such as excess risk
\item \code{\link{RR}} to calculate overall disparity metric as relative risk (RR), ratio of mean environmental indicator values across demographic groups
\item \code{\link{RR.table}} to create 3-D table of RR values, by demographic group by environmental indicator by zone
\item \code{\link{RR.table.sort}} to sort existing RR table
\item \code{\link{RR.table.add}} to add zone(s) to existing RR table
\item \code{\link{write.RR.tables}} to write a file with a table or RR by indicator by group
\item \code{\link{ej.added}} to find EJ Index as local contribution to sum of EJ Indexes
\item \code{\link{RR.cut.if.gone}} to find local contribution to RR
\item \code{\link{RR.if.address.top.x}} to find how much RR would change if top-ranked places had different conditions
\item \code{\link{rrfv}} for obsolete attempt to vectorize rrf
\item \code{\link{rrf}} for older simpler function for RR of one indicator for one demographic group
\item \code{\link{pop.ecdf}} to compare plots of cumulative frequency distribution of indicator values by group
}
}
## ---- File: R/ParallelKNNCrossValidation_functions.R (repo: ArdernHB/KnnDist) ----
#' K-Nearest Neighbour correct cross-validation with distance input using parallel processing
#'
#' This function takes a square matrix of distances among specimens of known group membership
#' and returns the results of a leave-one-out correct cross validation identification for each
#' specimen to provide a correct cross-validation percentage.
#'
#' The function is primarily for use with resampling unequal groups to equal sample size a set
#' number of times. This process is carried out with parallel processing.
#'
#' This function applies both a weighted approach and an unweighted approach and returns both results.
#'
#' Note that this function is faster when datasets are large and/or when greater numbers of resampling
#' iterations are used. For small samples and few resampling iterations the function is unlikely to be
#' much faster; this is because, in addition to the time taken by the calculations themselves, the
#' parallel processing needs to compile the results at the end, which adds overhead.
#'
#'
#' @inheritParams KnnDistCV
#' @return Returns a matrix of the leave-one-out classifications for all the specimens along with their known classification.
#'
#'
#' @author Ardern Hulme-Beaman
#' @import doParallel
#' @import parallel
#' @import foreach
#' @export
KnnDistCVPar <- function(DistMat, GroupMembership, K, EqualIter=100, SampleSize=NA, TieBreaker=c('Random', 'Remove', 'Report'), Verbose=FALSE){
#K=2
#DistMat=ProcDTableRes; GroupMembership=Groups; K=10; Equal=TRUE; EqualIter=100
#Weighted=TRUE; TieBreaker='Report'
chr2nu <- function(X){
as.numeric(as.character(X))
}
ParOutput <- function(PreviousResults, ResultList){
NewResults <- PreviousResults
for (i in 1:length(ResultList)){
NewResults[[i]] <- rbind(PreviousResults[[i]], ResultList[[i]])
}
return(NewResults)
}
BalancedGrps <- function(GroupMembership, GroupSize=SampleSize){
#Data=PairwiseShapeDistMat; GroupMembership=chr(Groups[GrpPos])
if (is.na(GroupSize)){
minSampSize <- min(table(as.character(GroupMembership)))
} else {
minSampSize <- GroupSize
}
sampleindex <- 1:length(GroupMembership)
RandSamp <- stats::aggregate(x = sampleindex, by=list(factor(GroupMembership)), sample, size=minSampSize)
Index <- c(t(RandSamp$x))
GroupMem <- c(t(stats::aggregate(RandSamp$Group.1, list(RandSamp$Group.1), rep, minSampSize)$x))
return(list(IndexedLocations=Index, Newfactors=GroupMem))
}
KVote <- function(X, K, Weighting=FALSE, TieBreaker=c('Random', 'Remove', 'Report')){
#VotingData=KIDmat[,1]
#VotingData=c(rep('East', 5), rep('West', 5))
#Weighting=Weighted
#VotingData=cbind(KIDmat[,1], Kweightmat[,1])
VotingData <- X
if (Weighting==FALSE){
if (is.vector(VotingData)){
MajorityVote <- names(which(table(VotingData[1:K])==max(table(VotingData[1:K]))))
} else {
stop('Error: VotingData is not a vector, KVote function expects VotingData to be a vector when Weighting=FALSE')
}
} else {
if (dim(VotingData)[2]<2){
stop('Error: VotingData must be a matrix of 2 columns: the first column must be the nearest neighbour classifications, the second column must be the weighting values')
}
WeightingScores <- stats::aggregate(x = 1/chr2nu(VotingData[1:K,2]), by = list(as.factor(VotingData[1:K,1])), FUN = sum)
MajorityVote <- as.character(WeightingScores$Group.1[which(WeightingScores$x==max(WeightingScores$x))])
}
if (length(MajorityVote)>1){
if (length(TieBreaker)==1 && TieBreaker=='Random'){
ReturnedVote <- sample(MajorityVote, size = 1)
} else if (length(TieBreaker)==1 && TieBreaker=='Remove'){
ReturnedVote <- 'UnIDed'
} else if (length(TieBreaker)==1 && TieBreaker=='Report'){
ReturnedVote <- paste(MajorityVote, collapse = '_')
} else {
warning('Warning: TieBreaker not set or not set to a recognisable method. Function will revert to Report for the TieBreaker argument. If you have supplied a TieBreaker argument please check capitalisation and spelling.')
ReturnedVote <- paste(MajorityVote, collapse = '_')
}
} else {
ReturnedVote <- MajorityVote
}
return(ReturnedVote)
}
ParEqualIter <- function(DistData, GrpMem, ParK=K, ParTieBreaker=TieBreaker, ParVerbose=Verbose){
#DistData=DistMat; GrpMem=GroupMembership; ParK=K; ParTieBreaker='Report'; ParVerbose=FALSE
BalancingGrps <- BalancedGrps(GrpMem, SampleSize)
BalancedDistMat <- DistData[BalancingGrps$IndexedLocations,BalancingGrps$IndexedLocations]
#this sorts each row of the distance matrix individually into an order of smallest to greatest distance
#column order is maintained
AdjSortedDistMat <- SortedDistMat <- BalancedDistMat
rownames(AdjSortedDistMat) <- rownames(SortedDistMat) <- 1:dim(SortedDistMat)[1]
for (i in 1:dim(BalancedDistMat)[1]){
#i <- 2
AdjSortedDistMat[,i] <- sort(BalancedDistMat[,i])
SortedDistMat[,i] <- as.character(BalancingGrps$Newfactors[sort(SortedDistMat[,i], index.return=TRUE)$ix])
}
#Removing the first row because it represents distance 0, i.e. the distance from the column specimen to itself
KIDmat <- SortedDistMat[-1,]
Kweightmat <- AdjSortedDistMat[-1,]
KArray <- array(data = NA, dim = c(dim(KIDmat), 2))
KArray[,,1] <- KIDmat
KArray[,,2] <- Kweightmat
dimnames(KArray) <- list(dimnames(Kweightmat)[[1]], dimnames(Kweightmat)[[2]], c('Call', 'Weight'))
WeightedRes <- apply(X = KArray, MARGIN = 2, FUN = KVote, K=ParK, Weighting=TRUE, TieBreaker=ParTieBreaker)
UnweightedRes <- apply(X = KIDmat, MARGIN = 2, FUN = KVote, K=ParK, Weighting=FALSE, TieBreaker=ParTieBreaker)
WeightedCCVPercent <- sum(BalancingGrps$Newfactors==WeightedRes)/length(BalancingGrps$Newfactors)
UnweightedCCVPercent <- sum(BalancingGrps$Newfactors==UnweightedRes)/length(BalancingGrps$Newfactors)
ResultsSummaryTable <- c(WeightedCCVPercent, UnweightedCCVPercent)
if (ParVerbose==TRUE){
ResTable <- cbind(rownames(BalancedDistMat), BalancingGrps$Newfactors, WeightedRes, UnweightedRes)
colnames(ResTable) <- c('ID', 'True.Classification', 'Weighted.Classification', 'Unweighted.Classification')
return(list(ResultsSummaryTable, ResTable))
} else {
return(list(ResultsSummaryTable, NA))
}
}
cores <- parallel::detectCores()
clust <- parallel::makeCluster(cores[1]-1)
doParallel::registerDoParallel(clust)
a <- 1
ParResults <- foreach::foreach(a = 1:EqualIter, .combine = ParOutput) %dopar% {
ParEqualIter(DistData = DistMat, GrpMem = GroupMembership, ParK = K, ParTieBreaker = TieBreaker, ParVerbose = Verbose)
}
parallel::stopCluster(clust)
colnames(ParResults[[1]]) <- c('Weighted', 'Unweighted')
if (Verbose==TRUE){
names(ParResults) <- c('Iteration.Summaries', 'Verbose.Output')
return(ParResults)
} else {
return(ParResults[[1]])
}
}
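# Usage sketch (hypothetical data, not part of the package): build a
# Euclidean distance matrix with dist() for two simulated groups and run the
# balanced, parallelised leave-one-out correct cross-validation.
# set.seed(42)
# Coords <- rbind(matrix(rnorm(60, mean = 0), ncol = 2),
#                 matrix(rnorm(40, mean = 2), ncol = 2))
# Grps <- c(rep('A', 30), rep('B', 20))
# DMat <- as.matrix(dist(Coords))
# Res <- KnnDistCVPar(DistMat = DMat, GroupMembership = Grps,
#                     K = 5, EqualIter = 10, TieBreaker = 'Report')
# colMeans(Res)  # mean weighted and unweighted CCV proportions across iterations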
#' Stepwise K-Nearest Neighbour correct cross-validation with distance input using parallel processing
#'
#' This function takes a square matrix of distances among specimens of known group membership
#' and returns the results of a leave-one-out correct cross validation identification exercise for
#' each incremental increase in k. The results of the analyses can be plotted to visualise the
#' change in correct identification given changes in k.
#'
#' The function is primarily for use with resampling unequal groups to equal sample size a set
#' number of times. This process is carried out with parallel processing.
#'
#' This function applies both a weighted approach and an unweighted approach and returns both results.
#'
#' Note that this function is faster when datasets are large and/or when greater numbers of resampling
#' iterations are used. For small samples and few resampling iterations the function is unlikely to be
#' much faster; this is because, in addition to the time taken by the calculations themselves, the
#' parallel processing needs to compile the results at the end, which adds overhead.
#'
#'
#' @inheritParams KnnDistCVStepwise
#' @return Returns a matrix of the leave-one-out classifications for all the specimens along with their known classification.
#' @details When the \code{PrintProg} is set to TRUE, the \code{\link[svMisc]{progress}} function of the \code{svMisc} package is used.
#'
#'
#' @keywords shape distances
#' @keywords Geometric morphometric distances
#' @author Ardern Hulme-Beaman
#' @import shapes
#' @import svMisc
#' @export
KnnDistCVStepwisePar <- function(DistMat, GroupMembership, Kmax, EqualIter=100, SampleSize=NA, TieBreaker=c('Random', 'Remove', 'Report'), PlotResults=TRUE){
#DistMat = VoleDistMat; GroupMembership = VoleGrps; Kmax = 10; TieBreaker = 'Remove'; PlotResults = TRUE; EqualIter = 100
#Kmax=20
#DistMat=ProcDTableRes; GroupMembership=Groups; K=10; Equal=TRUE; EqualIter=100
#Weighted=TRUE; TieBreaker='Report'
#PrintProg=TRUE
#Verbose=TRUE; Equal=TRUE
MinSamp <- min(table(as.character(GroupMembership)))
if (Kmax>MinSamp){
warning('Kmax is set higher than the smallest sample size.')
}
chr2nu <- function(X){
as.numeric(as.character(X))
}
ParOutput <- function(PreviousResults, ResultList){
NewResults <- PreviousResults
for (i in 1:length(ResultList)){
NewResults[[i]] <- rbind(PreviousResults[[i]], ResultList[[i]])
}
return(NewResults)
}
BalancedGrps <- function(GroupMembership, GroupSize=SampleSize){
#Data=PairwiseShapeDistMat; GroupMembership=chr(Groups[GrpPos])
if (is.na(GroupSize)){
minSampSize <- min(table(as.character(GroupMembership)))
} else {
minSampSize <- GroupSize
}
sampleindex <- 1:length(GroupMembership)
RandSamp <- stats::aggregate(x = sampleindex, by=list(factor(GroupMembership)), sample, size=minSampSize)
Index <- c(t(RandSamp$x))
GroupMem <- c(t(stats::aggregate(RandSamp$Group.1, list(RandSamp$Group.1), rep, minSampSize)$x))
return(list(IndexedLocations=Index, Newfactors=GroupMem))
}
KVote <- function(X, K, Weighting=FALSE, TieBreaker=c('Random', 'Remove', 'Report')){
#VotingData=KIDmat[,1]
#VotingData=c(rep('East', 5), rep('West', 5))
#Weighting=Weighted
#VotingData=cbind(KIDmat[,1], Kweightmat[,1])
VotingData <- X
if (Weighting==FALSE){
if (is.vector(VotingData)){
MajorityVote <- names(which(table(VotingData[1:K])==max(table(VotingData[1:K]))))
} else {
stop('Error: VotingData is not a vector, KVote function expects VotingData to be a vector when Weighting=FALSE')
}
} else {
if (dim(VotingData)[2]<2){
stop('Error: VotingData must be a matrix of 2 columns: the first column must be the nearest neighbour classifications, the second column must be the weighting values')
}
WeightingScores <- stats::aggregate(x = 1/chr2nu(VotingData[1:K,2]), by = list(as.factor(VotingData[1:K,1])), FUN = sum)
MajorityVote <- as.character(WeightingScores$Group.1[which(WeightingScores$x==max(WeightingScores$x))])
}
if (length(MajorityVote)>1){
if (length(TieBreaker)==1 && TieBreaker=='Random'){
ReturnedVote <- sample(MajorityVote, size = 1)
} else if (length(TieBreaker)==1 && TieBreaker=='Remove'){
ReturnedVote <- 'UnIDed'
} else if (length(TieBreaker)==1 && TieBreaker=='Report'){
ReturnedVote <- paste(MajorityVote, collapse = '_')
} else {
warning('Warning: TieBreaker not set or not set to a recognisable method. Function will revert to Report for the TieBreaker argument. If you have supplied a TieBreaker argument please check capitalisation and spelling.')
ReturnedVote <- paste(MajorityVote, collapse = '_')
}
} else {
ReturnedVote <- MajorityVote
}
return(ReturnedVote)
}
ParEqualIterStepwise <- function(DistData, GrpMem, ParKmax=Kmax, ParTieBreaker=TieBreaker, ParSampleSize=SampleSize){
#DistData=DistMat; GrpMem=GroupMembership; ParKmax=Kmax; ParTieBreaker='Report'; ParVerbose=FALSE; ParSampleSize=NA
BalancingGrps <- BalancedGrps(GrpMem, ParSampleSize)
BalancedDistMat <- DistData[BalancingGrps$IndexedLocations,BalancingGrps$IndexedLocations]
#this sorts each row of the distance matrix individually into an order of smallest to greatest distance
#column order is maintained
AdjSortedDistMat <- SortedDistMat <- BalancedDistMat
rownames(AdjSortedDistMat) <- rownames(SortedDistMat) <- 1:dim(SortedDistMat)[1]
for (i in 1:dim(BalancedDistMat)[1]){
#i <- 2
AdjSortedDistMat[,i] <- sort(BalancedDistMat[,i])
SortedDistMat[,i] <- as.character(BalancingGrps$Newfactors[sort(SortedDistMat[,i], index.return=TRUE)$ix])
}
#Removing the first row because it represents distance 0, i.e. the distance from the column specimen to itself
KArray <- array(data = NA, dim = c(dim(SortedDistMat[-1,]), 2))
KArray[,,1] <- as.matrix(SortedDistMat[-1,])
KArray[,,2] <- as.matrix(AdjSortedDistMat[-1,])
dimnames(KArray) <- list(dimnames(AdjSortedDistMat[-1,])[[1]], dimnames(AdjSortedDistMat[-1,])[[2]], c('Call', 'Weight'))
ResultsTable <- list(Unweighted.Results=matrix(NA, nrow = 1, ncol = ParKmax), Weighted.Results=matrix(NA, nrow = 1, ncol = ParKmax))
for (K in 1:ParKmax){
WeightedRes <- apply(X = KArray, MARGIN = 2, FUN = KVote, K=K, Weighting=TRUE, TieBreaker=TieBreaker)
UnweightedRes <- apply(X = KArray[,,1], MARGIN = 2, FUN = KVote, K=K, Weighting=FALSE, TieBreaker=TieBreaker)
WeightedCCVPercent <- sum(BalancingGrps$Newfactors==WeightedRes)/length(BalancingGrps$Newfactors)
UnweightedCCVPercent <- sum(BalancingGrps$Newfactors==UnweightedRes)/length(BalancingGrps$Newfactors)
ResultsTable$Unweighted.Results[1, K] <- UnweightedCCVPercent
ResultsTable$Weighted.Results[1, K] <- WeightedCCVPercent
}
return(ResultsTable)
}
cores <- parallel::detectCores()
clust <- parallel::makeCluster(cores[1]-1)
doParallel::registerDoParallel(clust)
a <- 1
ParResults <- foreach::foreach(a = 1:EqualIter, .combine = ParOutput) %dopar% {
ParEqualIterStepwise(DistData = DistMat, GrpMem = GroupMembership, ParKmax = Kmax, ParTieBreaker = TieBreaker, ParSampleSize = SampleSize)
}
parallel::stopCluster(clust)
ResultsTable <- ParResults
if (PlotResults==TRUE){
graphics::plot(y = colMeans(ResultsTable$Unweighted.Results), x = 1:Kmax, type = 'n', ylim = c(10,105), ylab = 'CCV %', xlab = 'K')
graphics::abline(h = seq(from = 20, to = 100, by = 10), v = seq(from = 2, to = Kmax, by =2), lty = '1919')
WeightRange <- apply(ResultsTable$Weighted.Results, MARGIN = 2, FUN = stats::quantile, probs = c(.05, .95))
graphics::polygon(x = c(1:Kmax, Kmax:1),
y = c(WeightRange[1,], WeightRange[2,Kmax:1])*100,
col = transpar('darkblue', alpha = 25),
border = NA)
graphics::lines(y = colMeans(ResultsTable$Weighted.Results*100), x = 1:Kmax, col='darkblue', lwd=3)
UnweightRange <- apply(ResultsTable$Unweighted.Results, MARGIN = 2, FUN = stats::quantile, probs = c(.05, .95))
graphics::polygon(x = c(1:Kmax, Kmax:1),
y = c(UnweightRange[1,], UnweightRange[2,Kmax:1])*100,
col = transpar('lightblue', alpha = 95),
border = NA)
graphics::lines(y = colMeans(ResultsTable$Unweighted.Results*100), x = 1:Kmax, col='lightblue', lwd=3)
graphics::legend('bottomright', legend = c('Weighted', 'Unweighted'), col = c('darkblue', 'lightblue'), lty=1, lwd=3, bty = 'o')
}
return(ResultsTable)
}
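# Usage sketch (the objects below mirror the author's commented test line
# above and are not defined in this file):
# StepRes <- KnnDistCVStepwisePar(DistMat = VoleDistMat, GroupMembership = VoleGrps,
#                                 Kmax = 10, EqualIter = 100, TieBreaker = 'Remove',
#                                 PlotResults = TRUE)
# The resulting plot shows mean weighted (dark blue) and unweighted (light blue)
# CCV with 5th-95th percentile bands across resampling iterations for each K.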
## ---- File: inst/scripts/collect.nomisma.R (repo: sfsheath/cawd) ----
# first go at script to load nomisma.org data into cawd.
library(devtools)
library(sp)
library(SPARQL)
url <- "http://nomisma.org/query"
ns = c('geo','<http://www.w3.org/2003/01/geo/wgs84_pos#>',
'nmo','<http://nomisma.org/ontology#>',
'pleiades','<http://pleiades.stoa.org/places/',
'skos','<http://www.w3.org/2004/02/skos/core#>',
'spatial','<http://jena.apache.org/spatial#>',
'xsd','<http://www.w3.org/2001/XMLSchema#>')
# greek (turn parts of these chunks into functions)
sparql.response <- SPARQL(url = url, ns = ns,
query = '
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX nm: <http://nomisma.org/id/>
PREFIX nmo: <http://nomisma.org/ontology#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX spatial: <http://jena.apache.org/spatial#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?title ?uri ?latitude ?longitude ?pleiades WHERE {
?uri a nmo:Mint;
dcterms:isPartOf nm:greek_numismatics ;
skos:prefLabel ?title;
geo:location ?loc .
OPTIONAL {?uri skos:closeMatch ?pleiades}
?loc geo:lat ?latitude ;
geo:long ?longitude .
FILTER (langMatches(lang(?title), "en") && regex(str(?pleiades), "pleiades.stoa.org" ))
}'
)
nomisma.greek.mints <- sparql.response$results
# remove host and path from Pleiades ID
nomisma.greek.mints$pleiades <- sub('pleiades:/','',nomisma.greek.mints$pleiades, fixed = T)
nomisma.greek.mints$title <- gsub('"','',nomisma.greek.mints$title, fixed = T)
nomisma.greek.mints$title <- gsub('@en','',nomisma.greek.mints$title, fixed = T)
nomisma.greek.mints$uri <- gsub('<','',nomisma.greek.mints$uri, fixed = T)
nomisma.greek.mints$uri <- gsub('>','',nomisma.greek.mints$uri, fixed = T)
nomisma.greek.mints.sp <- nomisma.greek.mints
coordinates(nomisma.greek.mints.sp) <- ~ longitude + latitude
proj4string(nomisma.greek.mints.sp) <- CRS("+proj=longlat +datum=WGS84")
use_data(nomisma.greek.mints, overwrite = T)
use_data(nomisma.greek.mints.sp, overwrite = T)
# roman
sparql.response <- SPARQL(url = url, ns = ns,
query = '
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX nm: <http://nomisma.org/id/>
PREFIX nmo: <http://nomisma.org/ontology#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX spatial: <http://jena.apache.org/spatial#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?title ?uri ?latitude ?longitude ?pleiades WHERE {
?uri a nmo:Mint;
dcterms:isPartOf nm:roman_numismatics ;
skos:prefLabel ?title;
geo:location ?loc .
OPTIONAL {?uri skos:closeMatch ?pleiades}
?loc geo:lat ?latitude ;
geo:long ?longitude .
FILTER (langMatches(lang(?title), "en") && regex(str(?pleiades), "pleiades.stoa.org" ))
}'
)
nomisma.roman.mints <- sparql.response$results
# remove host and path from Pleiades ID
nomisma.roman.mints$pleiades <- sub('pleiades:/','',nomisma.roman.mints$pleiades, fixed = T)
nomisma.roman.mints$title <- gsub('"','',nomisma.roman.mints$title, fixed = T)
nomisma.roman.mints$title <- gsub('@en','',nomisma.roman.mints$title, fixed = T)
nomisma.roman.mints$uri <- gsub('<','',nomisma.roman.mints$uri, fixed = T)
nomisma.roman.mints$uri <- gsub('>','',nomisma.roman.mints$uri, fixed = T)
nomisma.roman.mints.sp <- nomisma.roman.mints
coordinates(nomisma.roman.mints.sp) <- ~ longitude + latitude
proj4string(nomisma.roman.mints.sp) <- CRS("+proj=longlat +datum=WGS84")
use_data(nomisma.roman.mints, overwrite = T)
use_data(nomisma.roman.mints.sp, overwrite = T)
# roman provincial. the nomisma data is clearly incomplete as only 4 mints are listed. needs work on that end.
sparql.response <- SPARQL(url = url, ns = ns,
query = '
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX nm: <http://nomisma.org/id/>
PREFIX nmo: <http://nomisma.org/ontology#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX spatial: <http://jena.apache.org/spatial#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?title ?uri ?latitude ?longitude ?pleiades WHERE {
?uri a nmo:Mint;
dcterms:isPartOf nm:roman_provincial_numismatics ;
skos:prefLabel ?title;
geo:location ?loc .
OPTIONAL {?uri skos:closeMatch ?pleiades}
?loc geo:lat ?latitude ;
geo:long ?longitude .
FILTER (langMatches(lang(?title), "en") && regex(str(?pleiades), "pleiades.stoa.org" ))
}'
)
nomisma.roman.provincial.mints <- sparql.response$results
# remove host and path from Pleiades ID
nomisma.roman.provincial.mints$pleiades <- sub('pleiades:/','',nomisma.roman.provincial.mints$pleiades, fixed = T)
nomisma.roman.provincial.mints$title <- gsub('"','',nomisma.roman.provincial.mints$title, fixed = T)
nomisma.roman.provincial.mints$title <- gsub('@en','',nomisma.roman.provincial.mints$title, fixed = T)
nomisma.roman.provincial.mints$uri <- gsub('<','',nomisma.roman.provincial.mints$uri, fixed = T)
nomisma.roman.provincial.mints$uri <- gsub('>','',nomisma.roman.provincial.mints$uri, fixed = T)
nomisma.roman.provincial.mints.sp <- nomisma.roman.provincial.mints
coordinates(nomisma.roman.provincial.mints.sp) <- ~ longitude + latitude
proj4string(nomisma.roman.provincial.mints.sp) <- CRS("+proj=longlat +datum=WGS84")
use_data(nomisma.roman.provincial.mints, overwrite = T)
use_data(nomisma.roman.provincial.mints.sp, overwrite = T)
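# As the note at the top suggests, the three near-identical chunks could be
# turned into a function. A possible (untested) sketch that parameterises the
# corpus id and wraps the repeated SPARQL call and string clean-up:
# fetch.mints <- function(corpus) {
#   q <- sprintf('
#     PREFIX dcterms: <http://purl.org/dc/terms/>
#     PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
#     PREFIX nm: <http://nomisma.org/id/>
#     PREFIX nmo: <http://nomisma.org/ontology#>
#     PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
#     SELECT ?title ?uri ?latitude ?longitude ?pleiades WHERE {
#       ?uri a nmo:Mint;
#         dcterms:isPartOf nm:%s ;
#         skos:prefLabel ?title;
#         geo:location ?loc .
#       OPTIONAL {?uri skos:closeMatch ?pleiades}
#       ?loc geo:lat ?latitude ;
#         geo:long ?longitude .
#       FILTER (langMatches(lang(?title), "en") && regex(str(?pleiades), "pleiades.stoa.org"))
#     }', corpus)
#   res <- SPARQL(url = url, ns = ns, query = q)$results
#   res$pleiades <- sub('pleiades:/', '', res$pleiades, fixed = TRUE)
#   res$title <- gsub('@en', '', gsub('"', '', res$title, fixed = TRUE), fixed = TRUE)
#   res$uri <- gsub('<', '', res$uri, fixed = TRUE)
#   res$uri <- gsub('>', '', res$uri, fixed = TRUE)
#   res
# }
# nomisma.greek.mints <- fetch.mints('greek_numismatics')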
## ---- File: AmesHousing/AmesHousing.R (repo: khushbup7/AmesHousing-Dataset-Analysis) ----
myData=as.data.frame(unclass(AmesHousing))
summary(myData)
myData = myData[,-1]
dim(myData)
myData = myData[,c("MS.SubClass","MS.Zoning", "Lot.Frontage","Lot.Area","Lot.Shape"
,"Land.Contour","Lot.Config","Neighborhood","Bldg.Type","House.Style",
"Overall.Qual","Overall.Cond","Roof.Style",
"BsmtFin.Type.1","Bsmt.Unf.SF","X1st.Flr.SF","Gr.Liv.Area",
"Full.Bath","TotRms.AbvGrd","SalePrice")]
myData = na.omit(myData)
dim(myData)
train = sample(2374,1500)
lr.model = lm(SalePrice~., data=myData, subset=train)
summary(lr.model)
lr.predict = predict(lr.model, myData[-train,-20])
MSE.lr = mean((lr.predict-myData[-train, 20])^2)
MSE.lr
sqrt(MSE.lr) #32449.07
dim(myData)
library(pls)
pcr.fit = pcr(SalePrice~., data=myData, scale = FALSE, subset=train, validation="CV")
validationplot(pcr.fit, val.type="MSEP")
summary(pcr.fit)
pcr.pred = predict(pcr.fit, myData[-train,-20], ncomp=1)
MSE.pcr = mean((pcr.pred - myData[-train,20])^2)
sqrt(MSE.lr)
MSE.pcr
library(glmnet)
x=model.matrix(SalePrice~.,myData)[,-1]
y=myData$SalePrice
# We create a sequence of lambdas to test
grid=10^seq(10,-2,length=100)
# We now do Ridge reqression for each lambda
ridge.mod=glmnet(x,y,alpha=0,lambda=grid)
dim(coef(ridge.mod))
names(ridge.mod)
# We can access the lambda values, but the coeff we need to access
# separately
ridge.mod$lambda [50]
coef(ridge.mod)[,50]
# The predict function allows us to calculate the coefficients
# for lambdas that were not in our original grid
# Here are the coefficients for lambda = 50 ...
predict(ridge.mod,s=50,type="coefficients")[1:20,]
# We would like to choose an optimal lambda - we'll demonstrate
# a simple CV method here (K-fold can be used with more work).
set.seed(1)
#train=sample(1:nrow(x), nrow(x)/2)
test=(-train)
y.test=y[test]
ridge.mod=glmnet(x[train,],y[train],alpha=0,lambda=grid, thresh=1e-12)
# We need to find the optimal lambda - we can use CV
set.seed(1)
cv.out=cv.glmnet(x[train,],y[train],alpha=0)
# The cv function does 10-fold CV by default
plot(cv.out)
bestlam=cv.out$lambda.min
bestlam
ridge.pred=predict(ridge.mod,s=bestlam,newx=x[test,])
sqrt(mean((ridge.pred-y.test)^2)) #31854.3
# Test RMSE for sale price
# Once we do CV to find the best lambda, we can use all of the data
# to build our model using this lambda ...
ridge.model=glmnet(x,y,alpha=0)
predict(ridge.model,type="coefficients",s=bestlam)[1:20,]
# Note all coef are included, but some a weighted heavier than others
# Now we apply the Lasso - note all we do is change the alpha option
lasso.mod =glmnet (x[train ,],y[train],alpha =1, lambda =grid)
plot(lasso.mod)
# Note that Lasso takes certain coeff to zero for large enough lambda
# We'll do CV to find the best lambda again ...
set.seed (1)
cv.out =cv.glmnet (x[train ,],y[train],alpha =1)
plot(cv.out)
# lambda small - too much variance | too large - too much bias
bestlam =cv.out$lambda.min
lasso.pred=predict (lasso.mod,s=bestlam,newx=x[test,])
sqrt(mean(( lasso.pred -y.test)^2)) #32219.42
# The MSE is similar to what we saw for Ridge
# Let's find the coefficients ...
out=glmnet (x,y,alpha =1, lambda =grid)
lasso.coef=predict(out,type ="coefficients",s=bestlam )[1:20,]
lasso.coef
lasso.coef[lasso.coef!=0]
# Rough relative "accuracy" of the ridge model: 1 minus test RMSE as a fraction of 182200
accuracy = 1 - 31854.3/182200
accuracy
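# To compare the four models side by side, the test RMSEs computed above can
# be collected into one table (a quick summary sketch using the objects
# defined earlier in this script):
rmse.summary = data.frame(
  model = c('linear', 'pcr', 'ridge', 'lasso'),
  test.RMSE = c(sqrt(MSE.lr), sqrt(MSE.pcr),
                sqrt(mean((ridge.pred - y.test)^2)),
                sqrt(mean((lasso.pred - y.test)^2))))
rmse.summary[order(rmse.summary$test.RMSE), ]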
## ---- File: R/simulation.R (repo: richfitz/diversitree) ----
|
## I need to tidy this up a little bit to allow for different tree
## types. I also cannot use functions beginning with simulate(), as
## this is a standard R generic function.
##
## It might be useful to have the simulations use something like the
## equilibrium distribution for characters at the bottom of the tree.
##
## It is also worth noting that Luke Harmon has a birthdeath.tree
## function that simulates a tree under a birth death process in
## geiger.
## There is also some missing logic with single branch trees, that I
## need to work through.
## Main interface. In the hope that I will make this generic over a
## 'model' object, I will design the calling structure in a way that
## is similar to S3 generics/methods.
trees <- function(pars,
type=c("bisse", "bisseness", "bd", "classe", "geosse",
"musse", "quasse", "yule"), n=1,
max.taxa=Inf, max.t=Inf, include.extinct=FALSE,
...) {
if ( is.infinite(max.taxa) && is.infinite(max.t) )
stop("At least one of max.taxa and max.t must be finite")
type <- match.arg(type)
f <- switch(type,
bisse=tree.bisse,
bisseness=tree.bisseness,
bd=tree.bd,
classe=tree.classe,
geosse=tree.geosse,
musse=tree.musse,
yule=tree.yule)
trees <- vector("list", n)
i <- 1
while ( i <= n ) {
trees[[i]] <- phy <-
f(pars, max.taxa, max.t, include.extinct, ...)
if ( include.extinct || !is.null(phy) )
i <- i + 1
}
trees
}
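## For example (illustrative only, not run here): simulating a set of BiSSE
## trees, where pars is c(lambda0, lambda1, mu0, mu1, q01, q10):
##   pars <- c(0.1, 0.2, 0.03, 0.03, 0.01, 0.01)
##   phy <- trees(pars, type="bisse", n=5, max.taxa=50)[[1]]
##   phy$tip.state  # simulated character states at the tips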
## My dodgy tree format to ape's tree format. The 'bisse' suffix is
## only here for historical reasons.
me.to.ape.bisse <- function(x, root.state) {
if ( nrow(x) == 0 )
return(NULL)
Nnode <- sum(!x$split) - 1
n.tips <- sum(!x$split)
x$idx2 <- NA
x$idx2[!x$split] <- 1:n.tips
x$idx2[ x$split] <- order(x$idx[x$split]) + n.tips + 1
i <- match(x$parent, x$idx)
x$parent2 <- x$idx2[i]
x$parent2[is.na(x$parent2)] <- n.tips + 1
tip.label <- ifelse(subset(x, !split)$extinct,
sprintf("ex%d", 1:n.tips),
sprintf("sp%d", 1:n.tips))
node.label <- sprintf("nd%d", 1:Nnode)
x$name <- NA
x$name[!x$split] <- tip.label
## More useful, but I don't want to clobber anything...
x$name2 <- c(tip.label, node.label)[x$idx2]
tip.state <- x$state[match(1:n.tips, x$idx2)]
names(tip.state) <- tip.label
node.state <- x$state[match(1:Nnode + n.tips, x$idx2)]
names(node.state) <- node.label
node.state["nd1"] <- root.state
hist <- attr(x, "hist")
if ( !is.null(hist) ) {
hist$idx2 <- x$idx2[match(hist$idx, x$idx)]
hist$name2 <- x$name2[match(hist$idx, x$idx)]
if ( nrow(hist) > 0 )
hist <- hist[order(hist$idx2),]
}
phy <- reorder(structure(list(edge=cbind(x$parent2, x$idx2),
Nnode=Nnode,
tip.label=tip.label,
tip.state=tip.state,
node.label=node.label,
node.state=node.state,
edge.length=x$len,
orig=x,
hist=hist),
class="phylo"))
phy$edge.state <- x$state[match(phy$edge[,2], x$idx2)]
phy
}
## Adapted from prune.extinct.taxa() in geiger
prune <- function(phy, to.drop=NULL) {
if ( is.null(to.drop) )
to.drop <- subset(phy$orig, !split)$extinct
if ( sum(!to.drop) < 2 ) {
NULL
} else if ( any(to.drop) ) {
phy2 <- drop_tip_fixed(phy, phy$tip.label[to.drop])
## phy2$orig <- subset(phy2$orig, !extinct) # Check NOTE
phy2$orig <- phy2$orig[!phy2$orig$extinct,]
phy2$tip.state <- phy2$tip.state[!to.drop]
phy2$node.state <- phy2$node.state[phy2$node.label]
phy2$hist <- prune.hist(phy, phy2)
phy2
} else {
phy
}
}
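A minimal sketch of how `prune` fits together with the simulator above (assumptions: a BiSSE tree simulated with extinct lineages retained; parameters are illustrative):

```r
## Hypothetical usage: simulate keeping extinct taxa, then prune them out.
set.seed(2)
pars <- c(0.1, 0.2, 0.03, 0.03, 0.01, 0.01)
phy  <- trees(pars, "bisse", n = 1, max.t = 30, include.extinct = TRUE)[[1]]
phy2 <- prune(phy)  # drop extinct ("ex*") tips; NULL if fewer than 2 survive
if (!is.null(phy2))
  setdiff(phy$tip.label, phy2$tip.label)  # the extinct tips that were removed
```

Note that pruning also rewrites the character-change history via `prune.hist` below, since dropping extinct tips leaves unbranched nodes whose branch histories must be merged.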
## This function aims to convert the "hist" object. This is fairly
## complicated and possibly can be streamlined a bit. The big issue
## here is that when extinct species are removed from the tree, it
## leaves unbranched nodes - the history along a branch with such a
## node needs to be joined.
prune.hist <- function(phy, phy2) {
hist <- phy$hist
if ( is.null(hist) || nrow(hist) == 0 )
return(hist)
## More interesting is to collect up all of the names and look at the
## branches that terminate
phy.names <- c(phy$tip.label, phy$node.label)
phy2.names <- c(phy2$tip.label, phy2$node.label)
## Next, check what the parent of the nodes is in the new tree, using
## the standard names (parent-offspring)
## First, for phy2
po.phy <- cbind(from=phy.names[phy$edge[,1]],
to=phy.names[phy$edge[,2]])
po.phy2 <- cbind(from=phy2.names[phy2$edge[,1]],
to=phy2.names[phy2$edge[,2]])
## Then find out where the parent/offspring relationship changed:
## i <- match(po.phy2[,2], po.phy[,2])
j <- which(po.phy[match(po.phy2[,2], po.phy[,2]),1] != po.phy2[,1])
for ( idx in j ) {
to <- po.phy2[idx,2]
from <- po.phy2[idx,1]
ans <- to
offset <- 0
repeat {
to <- po.phy[po.phy[,2] == to,1]
ans <- c(to, ans)
if ( is.na(to) )
stop("Horrible error")
if ( to == from )
break
}
if ( any(ans[-1] %in% hist$name2) ) {
k <- hist$name2 %in% ans[-1]
offset <- cumsum(phy$edge.length[match(ans[-1], po.phy[,2])])
offset <- c(0, offset[-length(offset)])
hist$x0[k] <- hist$x0[k] - offset[match(hist$name2[k], ans[-1])]
hist$tc[k] <- hist$t[k] - hist$x0[k]
hist$name2[k] <- ans[length(ans)]
}
}
## Prune out the extinct species and nodes that lead to them. Note
## that the root must be excluded as history objects that lead to
## the new root (if it has changed) should not be allowed.
phy2.names.noroot <- phy2.names[phy2.names != phy2$node.label[1]]
hist <- hist[hist$name2 %in% phy2.names.noroot,]
## Remake idx2 to point at the new tree.
hist$idx2 <- match(hist$name2, phy2.names)
hist[order(hist$idx2, hist$t),]
}
% File: man/SDMXTimeDimension.Rd (repo: cran/rsdmx, no license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Class-SDMXTimeDimension.R,
% R/SDMXTimeDimension-methods.R
\docType{class}
\name{SDMXTimeDimension}
\alias{SDMXTimeDimension}
\alias{SDMXTimeDimension-class}
\alias{SDMXTimeDimension,SDMXTimeDimension-method}
\title{Class "SDMXTimeDimension"}
\usage{
SDMXTimeDimension(xmlObj, namespaces)
}
\arguments{
\item{xmlObj}{object of class "XMLInternalDocument" derived from the XML package}
\item{namespaces}{object of class "data.frame" giving the list of namespace URIs}
}
\value{
an object of class "SDMXTimeDimension"
}
\description{
A basic class to handle a SDMX TimeDimension
}
\section{Slots}{
\describe{
\item{\code{conceptRef}}{Object of class "character" giving the dimension conceptRef (required)}
\item{\code{conceptVersion}}{Object of class "character" giving the dimension concept version}
\item{\code{conceptAgency}}{Object of class "character" giving the dimension concept agency}
\item{\code{conceptSchemeRef}}{Object of class "character" giving the dimension conceptScheme ref}
\item{\code{conceptSchemeAgency}}{Object of class "character" giving the dimension conceptScheme agency}
\item{\code{codelist}}{Object of class "character" giving the codelist ref name}
\item{\code{codelistVersion}}{Object of class "character" giving the codelist ref version}
\item{\code{codelistAgency}}{Object of class "character" giving the codelist ref agency}
\item{\code{crossSectionalAttachDataset}}{Object of class "logical"}
\item{\code{crossSectionalAttachGroup}}{Object of class "logical"}
\item{\code{crossSectionalAttachSection}}{Object of class "logical"}
\item{\code{crossSectionalAttachObservation}}{Object of class "logical"}
}}
\section{Warning}{
This class is not useful in itself, but non-abstract classes will encapsulate
it as a slot when parsing an SDMX-ML document (Concepts, or DataStructureDefinition)
}
\seealso{
\link{readSDMX}
}
\author{
Emmanuel Blondel, \email{emmanuel.blondel1@gmail.com}
}
## File: maininHit.R (repo: biddyweb/fraudDetectionR, no license)
library("HMM", lib.loc="D:/Program Files (x86)/R-3.1.1/library")
# The following source files are in the same folder
source('C:/Users/User/Desktop/Bachlorarbeit/R-program/createData.R')
source('C:/Users/User/Desktop/Bachlorarbeit/R-program/Cluster.R')
source('C:/Users/User/Desktop/Bachlorarbeit/R-program/Modell.R')
source('C:/Users/User/Desktop/Bachlorarbeit/R-program/Detektion.R')
source('C:/Users/User/Desktop/Bachlorarbeit/R-program/main.R')
erg=main(5:10,1:5 *5,c(0.3,0.4,0.5),500)
save(erg, file = "C:/Users/User/Desktop/Bachlorarbeit/grosseDatei.RData")
% File: man/elbow_detection.Rd (repo: JinmiaoChenLab/uSORT, no license)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/uSORT_GeneSelection.R
\name{elbow_detection}
\alias{elbow_detection}
\title{An elbow detection function}
\usage{
elbow_detection(scores, if_plot = FALSE)
}
\arguments{
\item{scores}{A vector of numeric scores.}
\item{if_plot}{Boolean determining whether to plot the results.}
}
\value{
a vector of selected elements IDs
}
\description{
An elbow detection function that detects the elbow/knee of a given vector of values.
Values are sorted in descending order before detection, and the IDs of the values
above the elbow are returned.
}
\examples{
scores <- c(10, 9 ,8, 6, 3, 2, 1, 0.1)
elbow_detection(scores, if_plot = TRUE)
}
## File: R/stylo2.R (repo: zozlak/styloWorkshop, MIT license)
|
#' runs the stylo() function
#' @description Allows to read configuration from file without using GUI.
#' @param file path to the saved configuration
#' @param ... any other parameters to be passed to the
#' \code{\link{stylo}} function
#' @export
#' @import stylo
stylo2 = function(file = 'stylo_config.txt', ...){
  # do.call() needs a function object or a plain function name;
  # the string 'stylo::stylo' would fail to resolve.
  do.call(stylo::stylo, read.config(file, ...))
}