\name{move}
\alias{move}
%\alias{move,character,ANY,ANY,ANY,ANY,ANY-method}
%\alias{move,character,ANY,ANY,ANY,ANY-method}
\alias{move,character,missing,missing,missing,missing-method}
\alias{move,connection,missing,missing,missing,missing-method}
\alias{move,numeric,numeric,POSIXct,data.frame,character-method}
\alias{move,numeric,numeric,POSIXct,data.frame,CRS-method}
\alias{move,numeric,numeric,POSIXct,data.frame,missing-method}
\alias{move,numeric,numeric,POSIXct,missing,ANY-method}
%\alias{move,character-method}
%\alias{move,numeric,numeric,POSIXct,data.frame,CRS-method}
\docType{methods}
\title{
Create a Move object
}
\description{
The move method creates a Move or MoveStack object from Movebank or other (compressed) csv files; zip files from the environmental annotation tool can also be loaded. If you use your own data, you need to set the projection with the 'proj' argument and specify which columns of your data contain the locations and timestamps.
}
\usage{
\S4method{move}{connection,missing,missing,missing,missing}(x, removeDuplicatedTimestamps=FALSE, ...)
\S4method{move}{numeric,numeric,POSIXct,data.frame,CRS}(x, y, time, data, proj, sensor='unknown',animal='unnamed',...)
}
\arguments{
\item{x}{Full path to the file location, OR a vector with x coordinates if non-Movebank data are provided (e.g. \code{data$x})}
\item{y}{vector of y coordinates if non-Movebank data are provided}
\item{time}{vector of timestamps for non-Movebank data; must be of class POSIXct, e.g. \code{as.POSIXct(data$timestamp, format="\%Y-\%m-\%d \%H:\%M:\%S", tz="UTC")}}
\item{data}{Optional extra data associated with the relocation, if empty it is filled with the coordinates and timestamps}
\item{proj}{projection method for non-Movebank data; requires a valid CRS (see \code{\link{CRS-class}}) object, like CRS("+proj=longlat +ellps=WGS84"); default is NA}
\item{sensor}{sensor name, either single character or a vector with length of the number of coordinates}
\item{animal}{animal ID or name, either single character or a vector with length of the number of coordinates}
\item{removeDuplicatedTimestamps}{Logical; if set to TRUE, records with duplicated timestamps are deleted. It is strongly advised not to use this option because there is no control over which records are removed; it is better to edit the records in Movebank.}
\item{...}{Additional arguments}
}
\details{
The easiest way to import data is to download the study you are interested in from \url{www.movebank.org} and set the file path as the x argument of the move function. The function detects whether there are single or multiple individuals in this file and automatically creates either a Move or a MoveStack object.
Another way is to read in your data using \code{\link{read.csv}}. Then the columns with the x and y coordinates and the timestamp, as well as the whole data.frame of the imported data, are passed to the \code{\link{move}} function. Again the function detects whether to return a Move or a MoveStack object.
}
\note{
The imported data set is checked for the Movebank format. If it is not in that format, you have to use the alternative import for non-Movebank data (see above).
Because the SpatialPointsDataFrame function that creates the spatial data frame (\code{sdf}) of the \code{Move} object cannot process NA location values, all rows with NA locations are omitted. A list of the omitted timestamps is stored in the \code{timesMissedFixes} slot of the \code{Move} object. \cr
If the data include duplicated timestamps, check your data for validity. You may want to remove the duplicated records, e.g.: \cr \code{data <- data[which(!duplicated(data$timestamp)), ]}
By convention, all names are converted to syntactically valid R names (see \code{make.names}), i.e. without spaces ('Ricky T' becomes 'Ricky.T').
}
\author{Marco Smolla}
\examples{
## create a move object from a Movebank csv file
filePath<-system.file("extdata","leroy.csv.gz",package="move")
data <- move(filePath)
## create a move object from non-Movebank data
file <- read.table(filePath,
header=TRUE, sep=",", dec=".")
data <- move(x=file$location.long, y=file$location.lat,
time=as.POSIXct(file$timestamp,
format="\%Y-\%m-\%d \%H:\%M:\%S", tz="UTC"),
data=file, proj=CRS("+proj=longlat +ellps=WGS84"), animal="Leroy")
\dontshow{
move(x=file$location.long, y=file$location.lat, time=as.POSIXct(file$timestamp, format="\%Y-\%m-\%d \%H:\%M:\%S", tz="UTC"), data=file, proj=CRS("+proj=longlat +ellps=WGS84"))
move(x=1:10,y=1:10,time=as.POSIXct(1:10, origin='1970-1-1'),proj=CRS('+proj=longlat +ellps=WGS84'))
move(x=1:10,y=1:10,time=as.POSIXct(c(1:5,1:5), origin='1970-1-1'),proj=CRS('+proj=longlat +ellps=WGS84'), animal=c(rep('a',5),rep('b',5)))}
}
% Source of the above: /move/man/move.Rd (repo JohnPalmer/gMove, no license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/datadoc.R
\docType{data}
\name{Innsjo}
\alias{Innsjo}
\title{Innsjo}
\format{
\if{html}{\out{<div class="sourceCode">}}\preformatted{Simple feature collection with 3713 features and 2 fields
Geometry type: POLYGON
Dimension: XY
Bounding box: xmin: -54837 ymin: 6454936 xmax: 1111698 ymax: 7932555
Projected CRS: ETRS89 / UTM zone 33N
# A tibble: 3,713 × 3
oppdateringsdato høyde geometry
* <date> <int> <POLYGON [m]>
1 2017-01-05 282 ((868487 7886343, 868517 7887251, 868282 7887203, 868271 7886…
2 2017-01-05 179 ((887228 7885357, 887562 7886180, 887294 7885958, 887043 7885…
3 2017-01-05 229 ((1021888 7884770, 1021363 7885298, 1021085 7885349, 1020992 …
4 2017-01-05 149 ((883473 7885051, 882667 7884835, 882478 7884396, 882075 7884…
5 2017-01-05 284 ((876886 7881714, 875825 7879857, 875771 7879510, 875930 7879…
6 2017-01-05 6 ((854109 7881033, 853344 7881456, 852552 7881293, 852468 7881…
7 2017-01-05 96 ((882241 7879134, 882049 7878943, 882483 7878282, 882994 7878…
8 2017-01-05 42 ((965535 7930419, 965640 7930538, 965617 7930723, 965977 7931…
9 2017-01-05 2 ((896794 7926270, 896585 7926639, 896356 7926618, 895547 7925…
10 2017-01-05 193 ((967381 7919988, 967543 7920785, 967290 7920898, 967235 7921…
# ℹ 3,703 more rows
# ℹ Use `print(n = ...)` to see more rows
}\if{html}{\out{</div>}}
}
\source{
\code{Basisdata_0000_Norge_25833_N1000Arealdekke_GML.gml}
}
\usage{
Innsjo
}
\description{
Innsjø (Norwegian for "lake")
}
\author{
© \href{https://kartverket.no/}{Kartverket}
}
\keyword{datasets}
% Source of the above: /man/Innsjo.Rd (repo hmalmedal/N1000, permissive license)
library(GEOmap)
### Name: rectPERIM
### Title: Extract a rectangular perimeter
### Aliases: rectPERIM
### Keywords: misc
### ** Examples
fx = rnorm(20)
fy = rnorm(20)
plot(fx, fy, xlim=c(-4, 4), ylim=c(-4,4))
rp = rectPERIM(fx, fy)
polygon(rp)
text(rp, labels=1:4, pos=c(1,1,3,3), font=2, cex=2)
fx2 = rnorm(20, m=-1)
fy2 = rnorm(20, m=-1)
Fx = list(x=fx2, y=fy2)
points(Fx$x, Fx$y, col='red')
rp = rectPERIM(Fx)
polygon(rp, border='red')
######## try expanding the perim:
plot(fx, fy, xlim=c(-4, 4), ylim=c(-4,4), asp=1)
rp = rectPERIM(fx, fy, pct=0.1)
polygon(rp)
rp = rectPERIM(fx, fy, pct=0.2)
polygon(rp)
# Source of the above: /data/genthat_extracted_code/GEOmap/examples/rectPERIM.Rd.R (repo surayaaramli/typeRrh, no license)
# --------------------------------------------------------------------------------
# atlas_cnv.R v0. called by main atlas_cnv.pl v0, Sep 11, 2018. Ted Chiang
# Copyright 2016-2018, Baylor College of Medicine Human Genome Sequencing Center.
# All rights reserved.
# --------------------------------------------------------------------------------
library(reshape2)
library("optparse")
library(plotrix)
option_list = list(
make_option(c("--rpkm_file"), type="character", default=NULL,
help="RPKM matrix data filename", metavar="character"),
make_option(c("--panel_file"), type="character", default="HG19_CAREseqv2_eMERGE3_LD_panel_design_3565_exons.tab",
help="panel design filename. 3 tab-delimited columns: 'Exon_Target', 'Gene_Exon', 'Call_CNV'
[default = %default]", metavar="character"),
make_option(c("--threshold_del"), type="numeric", default="-0.6",
help="log2 ratio cutoff to call a CNV deletion. [default= %default]", metavar="numeric"),
make_option(c("--threshold_dup"), type="numeric", default="0.4",
help="log2 ratio cutoff to call a CNV duplication. [default= %default]", metavar="numeric"),
# ???????#make_option(c("--threshold_exonQC_Zscore"), type="numeric", default="0.2",
# help="ExonQC cutoff to fail an exon target, ie. stddev > this value are flagged as failed. [default= %default]", metavar="numeric"),
make_option(c("--threshold_sampleQC"), type="numeric", default="0.2",
help="SampleQC cutoff to fail a sample, i.e. samples with stddev > this value are flagged as failed. [default= %default]", metavar="numeric"),
make_option(c("--threshold_sample_anova"), type="numeric", default="0.05",
help="In the coefficients of one way ANOVA of sample means, the Pr(>|t|) cutoff to flag a sample. [default= %default]", metavar="numeric")
);
opt_parser = OptionParser(option_list=option_list);
opt = parse_args(opt_parser);
if (is.null(opt$rpkm_file)) {
print_help(opt_parser)
stop("RPKM matrix datafile must be supplied (--rpkm_file).", call.=FALSE)
}
if (!file.exists(opt$panel_file)) {
print_help(opt_parser)
stop("Panel design file is not supplied, or default cannot be found. (--panel_file).", call.=FALSE)
}
# ccc: df of rpkms for all samples, all targets, ie. RPKM_matrix.IRC_MB-MID0075.CLEMRG_1pA (already 2066 columns, ie. only Call_CNV=Y)
rpkm_matrix_file = opt$rpkm_file
ccc = read.table(file=rpkm_matrix_file, header=T, check.names=F, row.names=1)
# Midpool summary result outputfile & midpool directory
midpool_summary_results = gsub('RPKM_matrix', 'atlas_cnv_summary', rpkm_matrix_file)
midpool_dir = gsub('/.*', '/', rpkm_matrix_file)
# ppp: df of the panel design file with 3 columns: ppp$Exon_Target, ppp$Gene_Exon, ppp$Call_CNV, ppp$RefSeq
panel_design_file = opt$panel_file
ppp = read.table(file=panel_design_file, header=T, check.names=F)
# =============================================================================
# CNV default cutoffs:
# 2^(0.5) = 1.414214.... so if RPKM ratio is >= 1.41 then, it's a DUP call.
# 2^(0.4692898) = 1.3844278
# 1-2^(-0.7) = 0.3844278
# 2^(-0.7) = 0.615572.... so if RPKM ratio is <= 0.61 then, it's a DEL call. <--OLD
# 2^(-0.415) = 0.75.... so if RPKM ratio is <= 0.75 then, it's a DEL call.
# =============================================================================
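The cutoff arithmetic in the comment block above can be sanity-checked with a standalone sketch (not part of the pipeline; the helper name is made up for illustration):

```r
# A log2-ratio cutoff maps back to a sample/median RPKM ratio via 2^cutoff.
ratio_from_log2 <- function(cutoff) 2^cutoff

# del: log2R <= -0.415 means the sample has <= 0.75x the median RPKM
stopifnot(abs(ratio_from_log2(-0.415) - 0.75) < 1e-3)
# dup: log2R >= 0.5 means the sample has >= ~1.41x the median RPKM
stopifnot(abs(ratio_from_log2(0.5) - 1.414214) < 1e-6)
```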
threshold_del = opt$threshold_del
threshold_dup = opt$threshold_dup
#threshold_exonQC = opt$threshold_exonQC
threshold_sampleQC = opt$threshold_sampleQC
threshold_sample_anova = opt$threshold_sample_anova
# some in-house functions:
find_median_sample = function (x) {
median_rpkm = mmm[x]
targ_coor = names(mmm[x]) # "1:1220087-1220186"
# ccc$"1:1220087-1220186" works but not as a variable ccc$"targ_coor".
# ccc[targ_coor] <-- column slice
# ccc[[targ_coor]] <-- column vector, same as ccc$"1:1220087-1220186"
median_rpkm = max(ccc[[targ_coor]][ which(ccc[[targ_coor]] <= median_rpkm) ]);
sample_idx = which(ccc[[targ_coor]] == median_rpkm)
sample_idx = sample_idx[1] # in case there are duplicate samples of 0 rpkm, then just pick one.
sample_id = rownames(ccc)[sample_idx]
med = c(targ_coor, median_rpkm, sample_id, sample_idx)
return (med)
}
my_row_sd = function(x) {
aaa = DDD[x,][FFF[x,]]
mysd = sd( aaa )
return (mysd)
}
my_row_count = function(x) {
count = sum(FFF[x,])
# cat ('n(for stddev cal)=', count, "\n")
return (count)
}
my_col_sd = function(x) {
aaa = DDD[,x][FFF[,x]]
mysd = sd( aaa )
return (mysd)
}
my_col_count = function(x) {
count = sum(FFF[,x])
# cat ('n(for stddev cal)=', count, "\n")
return (count)
}
my_col_sd_wo_outliers = function(x) {
aaa = DDD_wo_outliers[,x][FFF_wo_outliers[,x]]
mysd = sd( aaa )
return (mysd)
}
my_col_x_based_on_zscore_neg3_wo_outliers = function(x) {
aaa = DDD_wo_outliers[,x][FFF_wo_outliers[,x]]
mymean = mean( aaa )
mysd = sd( aaa )
#myx = (-4)*mysd + mymean
myx = (-2.576)*mysd + mymean
return (myx)
}
my_col_x_based_on_zscore_pos3_wo_outliers = function(x) {
aaa = DDD_wo_outliers[,x][FFF_wo_outliers[,x]]
mymean = mean( aaa )
mysd = sd( aaa )
#myx = (4)*mysd + mymean
myx = (2.576)*mysd + mymean
return (myx)
}
pass_fail = function(x) {
if (x==TRUE) { return ("Fail") }
else { return ("Pass") }
}
call_cnvs = function (x) { # calling cnv per sample via idx of ccc df
sample_id = rownames(fff)[x]
# --------------------------------------------------------------------------------------------------------------------------
# check to see if sample failed sample QC or ANOVA. If so, name the file: .cnv.FAILED_sampleQC or .cnv.FAILED_sampleANOVA
# --------------------------------------------------------------------------------------------------------------------------
anova_sample_id = paste('sample_id', sample_id, sep='')
if (is.element(sample_id, failed_samples) & !is.element(anova_sample_id, rownames(failed_samples_by_anova_pval_lt_5pct) )) {
outfile = paste(midpool_dir, sample_id, '.cnv.FAILED_sampleQC', sep='')
}
else if (!is.element(sample_id, failed_samples) & is.element(anova_sample_id, rownames(failed_samples_by_anova_pval_lt_5pct) )) {
outfile = paste(midpool_dir, sample_id, '.cnv.FAILED_sampleANOVA', sep='')
}
else if (is.element(sample_id, failed_samples) & is.element(anova_sample_id, rownames(failed_samples_by_anova_pval_lt_5pct) )) {
outfile = paste(midpool_dir, sample_id, '.cnv.FAILED_sampleQC_and_sampleANOVA', sep='')
}
else {
outfile = paste(midpool_dir, sample_id, '.cnv', sep='')
}
header = rbind(c('Exon_Target', 'Gene_Exon', 'cnv', 'log2R', 'rpkm', 'median_rpkm', 'Exon_Status', 'E_StDev', 'c_Score', 'RefSeq'))
write.table(header, file=outfile, append=T, quote=F, row.names=F, col.names=F, sep="\t")
# call dels
cat ("Calling dels on:", sample_id)
sample_dels=NULL
#dels = which(fff[x,1:(ncol(fff)-2)]<=threshold_del)
dels = which(fff[x,1:(ncol(fff)-2)]<=threshold_del & fff[x,1:(ncol(fff)-2)]<=threshold_del_soft)
if (length(dels)==0) { cat (" has no cnv dels.\n") }
else {
cat (" has cnv dels.\n")
Gene_Exon = ppp$Gene_Exon[ppp$Exon_Target %in% colnames(fff)[dels]]
RefSeq = ppp$RefSeq[ppp$Exon_Target %in% colnames(fff)[dels]]
cnv = rep('del', length(dels))
log2R = as.data.frame( t(fff[x, dels]) )
rpkm = as.data.frame( t( ccc[x, colnames(ccc) %in% colnames(fff)[dels]] ) )
median_rpkm = mmm$median_rpkm[mmm$targ_coor %in% colnames(fff)[dels]]
ExonQC_PF = is.element(colnames(fff)[dels], rownames(failed_exons_by_ExonQC))
ExonQC_PF = sapply(ExonQC_PF, pass_fail)
ExonQC_value = as.numeric(fff["ExonQC", dels])
Confidence_Score = round(log2R/ExonQC_value, 2)
sample_dels = data.frame( as.character(Gene_Exon), cnv, log2R, rpkm, median_rpkm, ExonQC_PF, ExonQC_value, Confidence_Score, RefSeq, stringsAsFactors=FALSE )
colnames(sample_dels) = c('Gene_Exon', 'cnv', 'log2R', 'rpkm', 'median_rpkm', 'Exon_Status', 'E_StDev', 'c_Score', 'RefSeq')
rownames(sample_dels) = colnames(fff)[dels]
write.table(sample_dels, file=outfile, append=T, quote=F, row.names=T, col.names=F, sep="\t")
}
# call dups
cat ("Calling dups on:", sample_id)
sample_dups=NULL
#dups = which(fff[x,1:(ncol(fff)-2)]>=threshold_dup)
dups = which(fff[x,1:(ncol(fff)-2)]>=threshold_dup & fff[x,1:(ncol(fff)-2)]>=threshold_dup_soft)
if (length(dups)==0) { cat (" has no cnv dups.\n") }
else {
cat (" has cnv dups.\n")
Gene_Exon = ppp$Gene_Exon[ppp$Exon_Target %in% colnames(fff)[dups]]
RefSeq = ppp$RefSeq[ppp$Exon_Target %in% colnames(fff)[dups]]
cnv = rep('dup', length(dups))
log2R = as.data.frame( t(fff[x, dups]) )
rpkm = as.data.frame( t( ccc[x, colnames(ccc) %in% colnames(fff)[dups]] ) )
median_rpkm = mmm$median_rpkm[mmm$targ_coor %in% colnames(fff)[dups]]
ExonQC_PF = is.element(colnames(fff)[dups], rownames(failed_exons_by_ExonQC))
ExonQC_PF = sapply(ExonQC_PF, pass_fail)
ExonQC_value = as.numeric(fff["ExonQC", dups])
Confidence_Score = round(log2R/ExonQC_value, 2)
sample_dups = data.frame( as.character(Gene_Exon), cnv, log2R, rpkm, median_rpkm, ExonQC_PF, ExonQC_value, Confidence_Score, RefSeq, stringsAsFactors=FALSE )
colnames(sample_dups) = c('Gene_Exon', 'cnv', 'log2R', 'rpkm', 'median_rpkm', 'Exon_Status', 'E_StDev', 'c_Score', 'RefSeq')
rownames(sample_dups) = colnames(fff)[dups]
write.table(sample_dups, file=outfile, append=T, quote=F, row.names=T, col.names=F, sep="\t")
}
# plots
plot_cnvs(rbind(sample_dels, sample_dups), sample_id)
}
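The confidence score computed inside call_cnvs() above is the log2 ratio divided by the exon's stddev (E_StDev), rounded to two decimals; a toy check with made-up numbers:

```r
# c_Score = round(log2R / exon_stddev, 2): magnitude grows with effect size
# and shrinks on noisy exons; the sign carries direction (del < 0 < dup).
c_score <- function(log2R, exon_sd) round(log2R / exon_sd, 2)

stopifnot(c_score(-1, 0.1) == -10)   # strong deletion on a quiet exon
stopifnot(c_score(0.5, 0.25) == 2)   # weaker duplication signal
```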
plot_cnvs = function(x,y) { # x is df of sample_dels/dups, y is sample_id
pdffile = paste(midpool_dir, y, '.pdf', sep='')
pdf.options(bg='honeydew')
pdf(file=pdffile);
par(mfrow=c(2,1))
sample_id = gsub('[_A-Z]', '', y)
if (length(nrow(x))==0) {
plot.new()
text (0.5, 1, paste('No called CNVs for: ', sample_id, sep=''))
garbage = dev.off()
}
else {
# gene plots
cnv_genes = rle( gsub('-.*', '', x$Gene_Exon) ) # rle is like unix:uniq, rm the transcript & exon numbering of Gene_Exon: MTHFR-001_11, ie. '-001_11'.
#cnv_genes_2plot = cnv_genes$values[ which(cnv_genes$lengths >= 1) ] # Do gene plots w/ 1+ exons
cnv_genes_2plot = cnv_genes$values[ which(cnv_genes$lengths >= 2) ] # Do gene plots w/ 2+ exons
if (length(cnv_genes_2plot) > 0) { # are there any genes to plot?
for (g in 1:length(cnv_genes_2plot)) {
gene = cnv_genes_2plot[g]
gene_pattern = paste('^', cnv_genes_2plot[g], '-', sep='') # bug: grep 'NF2' gets 'RNF207'
cat ('Gene plot for sample: ', y, 'using gene pattern: ', gene_pattern, "\n")
gene_exon_all = ppp$Gene_Exon[ grep(gene_pattern, ppp$Gene_Exon) ] # ppp: df of the panel design file with 3 columns: ppp$Exon_Target, ppp$Gene_Exon, ppp$Call_CNV
gene_targ_coor_all = ppp$Exon_Target[ grep(gene_pattern, ppp$Gene_Exon) ] # ppp: df of the panel design file with 3 columns: ppp$Exon_Target, ppp$Gene_Exon, ppp$Call_CNV
gene_targ_coor_start = which(colnames(fff) == gene_targ_coor_all[1])
gene_targ_coor_stop = which(colnames(fff) == gene_targ_coor_all[length(gene_targ_coor_all)])
lll = fff[y, gene_targ_coor_start:gene_targ_coor_stop] # df: one row w/column slice of log2R for given gene targets.
lll = 2^(unlist(lll)) # convert log2R back to ratio for barplot
names(lll) = gene_exon_all
cnv_gene_exons = x$Gene_Exon[ grep(gene_pattern, x$Gene_Exon) ]
col_vector=rep('grey', length(gene_exon_all))
col_vector[ gene_exon_all %in% cnv_gene_exons ] = 'red'
mean_Confidence_Score = round( mean( x$c_Score[ grep(gene_pattern, x$Gene_Exon) ] ), 2)
barplot(lll, col=col_vector, ylim=c(0,2), ylab='sample / median' , las=2, cex.names=0.5, axis.lty=1, main='Gene plot', space=0)
mtext(gene, cex=0.7)
legend("top", c(paste('mean c_Score', mean_Confidence_Score, sep=' = ')), col=c('black'), pch=15, cex=0.5, bg="transparent")
legend("topright", c('c_Score: (-)loss, (+)gain', 'High: abs(c_Score) > 5', 'Med: abs(c_Score) = 3-5', 'Low: abs(c_Score) < 3'), col=c('black', 'grey', 'grey', 'grey'), pch=15, cex=0.5, bg="transparent")
#legend("topright", c('dip: S/M = 0.85-1.15', 'del(het): S/M = 0.4-0.6', 'del(hom): S/M = 0-0.1', 'dup: S/M = >1.2': ), col=c('black', 'black', 'black', 'black'), pch=15, cex=0.5, bg="transparent")
axis(1, at=match(cnv_gene_exons,names(lll))-0.5, labels=cnv_gene_exons, las=2, cex.axis=0.5, col.axis='red')
abline(h=0.5)
abline(h=1)
table_rows = grep(gene_pattern, x$Gene_Exon) # each plot.new() can only handle 12 rows per table, so make appropriate number of plots.
if (length(table_rows) <= 12) {
plot.new()
p_start = 1
p_stop = length(table_rows)
addtable2plot(x='top', table=x[ table_rows[p_start:p_stop], 1:8], cex=0.75, bty='o', hlines=T, vlines=T, bg='honeydew', xpad=0.25, ypad=1)
}
else if (length(table_rows) > 12) {
num_of_tables = floor(length(table_rows)/12)
p_stop = 0
for ( p in 1:num_of_tables ) {
plot.new()
p_start = p_stop + 1
p_stop = p*12
addtable2plot(x='top', table=x[ table_rows[p_start:p_stop], 1:8], cex=0.75, bty='o', hlines=T, vlines=T, bg='honeydew', xpad=0.25, ypad=1)
}
if ((length(table_rows) %% 12) > 0) { # a last table if needed.
plot.new() # last table
p_start = p_stop + 1
p_stop = length(table_rows)
addtable2plot(x='top', table=x[ table_rows[p_start:p_stop], 1:8], cex=0.75, bty='o', hlines=T, vlines=T, bg='honeydew', xpad=0.25, ypad=1)
}
}
}
}
# exon plots
for (i in 1:nrow(x)) {
cnv_targ_coor = rownames(x)[i]
cnv_gene_exon = x$Gene_Exon[i]
cnv_exon_qc_value = fff["ExonQC",cnv_targ_coor]
cnv_exon_qc_status = x$Exon_Status[i]
cnv_exon_confidence_score = x$c_Score[i]
lll = fff[1:(nrow(fff)-2), cnv_targ_coor , drop=FALSE] # df: column slice of log2R for given target. Note:drop=FALSE keeps dimnames when subsetting to one dim.
nnn = gsub('[_A-Z]', '', rownames(lll)) # remove the '_HMWCYBCXX' and 'IDMB', for x-axis labels in the plot.
j = match(cnv_targ_coor, mmm$targ_coor)
median_rpkm = mmm$median_rpkm[j]
median_sample_id = mmm$sample_id[j]
median_sample_id = gsub('[_A-Z]', '', median_sample_id)
median_sample_idx = mmm$sample_idx[j]
col_vector=rep('grey', dim(ccc)[1])
threshold_del_soft_this_exon = threshold_del_soft[[cnv_gene_exon]]
threshold_dup_soft_this_exon = threshold_dup_soft[[cnv_gene_exon]]
col_vector[ which(lll<=threshold_del) ] = 'darkgoldenrod3' # hom/het dels
col_vector[ which(lll>=threshold_dup) ] = 'darkgoldenrod3' # dups
col_vector[ which(lll<=threshold_del & lll<=threshold_del_soft_this_exon) ] = 'darkgoldenrod3' # hom/het dels
col_vector[ which(lll>=threshold_dup & lll>=threshold_dup_soft_this_exon) ] = 'darkgoldenrod3' # dups
col_vector[ which(rownames(ccc)==y)] = 'red' # this given sample
col_vector[ median_sample_idx ] = 'blue' # median sample used for reference
lll=as.vector(t(2^(lll))) # convert log2R back to ratio for barplot
names(lll)=nnn # as.vector conversion causes rownames to be lost
barplot(lll, col=col_vector, ylim=c(0,2), ylab='sample / median', las=2, cex.names=0.5, axis.lty=1, main='Exon plot', space=0)
mtext(paste(cnv_gene_exon, cnv_targ_coor, sep=', '), cex=0.7)
legend("top", c(paste('exon status', cnv_exon_qc_status, sep=': '), paste('E[StDev]', cnv_exon_qc_value, sep=' = '), paste('c_Score', cnv_exon_confidence_score, sep=' = ')), col=c('black', 'black', 'black'), pch=15, cex=0.5, bg="transparent")
legend("topright", c('sample', 'median'), col=c('red', 'blue'), pch=15, cex=0.5, bg="transparent")
axis(1, at=match(sample_id,names(lll))-0.5, labels=sample_id, las=2, cex.axis=0.5, col.axis='red')
axis(1, at=match(median_sample_id,names(lll))-0.5, labels=median_sample_id, las=2, cex.axis=0.5, col.axis='blue')
abline(h=0.5)
abline(h=1)
}
garbage = dev.off()
}
}
remove_5pc_outliers = function (aaa) {
for (i in seq_along(aaa)) {
a = which( aaa[[i]] < ( -1.96*sd(aaa[[i]]) + mean(aaa[[i]]) ) )
b = which( aaa[[i]] > ( 1.96*sd(aaa[[i]]) + mean(aaa[[i]]) ) )
aaa[a,i]=NA
aaa[b,i]=NA
}
return(aaa)
}
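A worked toy illustration of remove_5pc_outliers() above (toy values assumed): per column, values beyond 1.96 standard deviations of the column mean are set to NA, so the later median/sd calls with na.rm=TRUE ignore them. The definition is repeated so the sketch is self-contained:

```r
remove_5pc_outliers = function (aaa) {           # same definition as above
  for (i in seq_along(aaa)) {
    a = which( aaa[[i]] < ( -1.96*sd(aaa[[i]]) + mean(aaa[[i]]) ) )
    b = which( aaa[[i]] > (  1.96*sd(aaa[[i]]) + mean(aaa[[i]]) ) )
    aaa[a,i]=NA
    aaa[b,i]=NA
  }
  return(aaa)
}

toy = data.frame(t1 = c(rep(100, 9), 1000))      # one extreme RPKM value
trimmed = remove_5pc_outliers(toy)
stopifnot(is.na(trimmed$t1[10]))                 # the outlier is masked
stopifnot(all(!is.na(trimmed$t1[1:9])))          # the other values survive
```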
# mmm: df of medians for each target: rownames(targ_coor) ==> colnames(mmm$median_rpkm, mmm$sample_id, mmm$sample_idx)
# ie. the artificial sample of reference medians.
# mmm = apply(ccc, 2, median) # 2 for column, 1 for row.
ccc_wo_outliers = remove_5pc_outliers(ccc)
mmm = apply(ccc_wo_outliers, 2, median, na.rm=TRUE)
mmm = lapply(seq_along(mmm), find_median_sample)
mmm = data.frame(matrix(unlist(mmm), ncol=4, byrow=T), stringsAsFactors=FALSE) # suppresses data.frame()’s default behaviour which turns strings into factors.
colnames(mmm) = c('targ_coor', 'median_rpkm', 'sample_id', 'sample_idx')
mmm = data.frame(targ_coor=mmm$targ_coor, median_rpkm=as.integer(mmm$median_rpkm), sample_id=mmm$sample_id, sample_idx=as.integer(mmm$sample_idx), stringsAsFactors=FALSE)
# ddd: df of log2 ratios of: samples (rows) vs. targets
ddd = as.data.frame(t( log2( t(ccc)/mmm$median_rpkm) ), stringsAsFactors=FALSE) # a matrix has to be coerced into a dataframe!
DDD = as.matrix(ddd) # apply function requires matrix.
FFF = is.finite(DDD) # T/F matrix to rm the Inf, NaN.
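The double-transpose in the ddd construction above is what makes the per-target median vector recycle correctly; a toy sketch (2 samples x 3 targets, values made up):

```r
# t(ccc) puts targets in rows, so dividing by the per-target median vector
# recycles down each column (one sample per column) before transposing back.
ccc_toy <- data.frame(t1 = c(10, 20), t2 = c(30, 60), t3 = c(50, 100),
                      row.names = c("s1", "s2"))
med_toy <- c(10, 30, 50)                          # per-target medians
ddd_toy <- as.data.frame(t(log2(t(ccc_toy) / med_toy)))

stopifnot(all(as.matrix(ddd_toy["s1", ]) == 0))   # s1 equals the medians
stopifnot(all(as.matrix(ddd_toy["s2", ]) == 1))   # s2 is 2x -> log2R of 1
```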
# ---------------------------------------------------------------------------------------------
# Same as above 3, but on ccc_wo_outliers. THIS IS FOR COMPUTING THE ExonQC_wo_outliers vector.
# ---------------------------------------------------------------------------------------------
ddd_wo_outliers = as.data.frame(t( log2( t(ccc_wo_outliers)/mmm$median_rpkm) ), stringsAsFactors=FALSE) # a matrix has to be coerced into a dataframe!
DDD_wo_outliers = as.matrix(ddd_wo_outliers) # apply function requires matrix.
FFF_wo_outliers = is.finite(DDD_wo_outliers) # T/F matrix to rm the Inf, NaN.
# ---------------------------------------------------------------------------------------------
# Vector of exon stddev wo_outliers
ExonQC = sapply(seq_along(colnames(ddd)), my_col_sd)
ExonQC_wo_outliers = sapply(seq_along(colnames(ddd_wo_outliers)), my_col_sd_wo_outliers)
threshold_del_soft = sapply(seq_along(colnames(ddd_wo_outliers)), my_col_x_based_on_zscore_neg3_wo_outliers) # The 'x' log2Ratio threshold (soft and data-derived) to call deletions (wo/outliers)
threshold_dup_soft = sapply(seq_along(colnames(ddd_wo_outliers)), my_col_x_based_on_zscore_pos3_wo_outliers) # The 'x' log2Ratio threshold (soft and data-derived) to call duplications (wo/outliers)
names(ExonQC) = ppp$Gene_Exon[ which(ppp$Call_CNV=='Y') ]
names(ExonQC_wo_outliers) = ppp$Gene_Exon[ which(ppp$Call_CNV=='Y') ]
names(threshold_del_soft) = ppp$Gene_Exon[ which(ppp$Call_CNV=='Y') ]
names(threshold_dup_soft) = ppp$Gene_Exon[ which(ppp$Call_CNV=='Y') ]
exon_qc_outfile = gsub('RPKM_matrix', 'ExonQC', rpkm_matrix_file)
exon_qc_wo_outliers_outfile = gsub('RPKM_matrix', 'ExonQC_wo_outliers', rpkm_matrix_file)
threshold_del_soft_outfile = gsub('RPKM_matrix', 'Exon_threshold_del_soft_wo_outliers', rpkm_matrix_file)
threshold_dup_soft_outfile = gsub('RPKM_matrix', 'Exon_threshold_dup_soft_wo_outliers', rpkm_matrix_file)
suppressWarnings(write.table(ExonQC, file=exon_qc_outfile, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(ExonQC_wo_outliers, file=exon_qc_wo_outliers_outfile, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(threshold_del_soft, file=threshold_del_soft_outfile, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(threshold_dup_soft, file=threshold_dup_soft_outfile, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
# One-tailed normal quantiles: 95%, 97.5%, 99.5%, 99.95% => z-score: 1.645, 1.96, 2.576, 3.291 (2.576 and 3.291 are also the two-sided 99% and 99.9% cutoffs).
#threshold_exonQC = 2.576 * sd(ExonQC_wo_outliers) + mean(ExonQC_wo_outliers)
threshold_exonQC = 3.291 * sd(ExonQC_wo_outliers) + mean(ExonQC_wo_outliers)
#threshold_exonQC = 100 # To disable ExonQC metric.
names(threshold_exonQC) = 'threshold_exonQC'
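# Illustrative aside: 3.291 is (to 3 decimals) the one-tailed 99.95% normal quantile,
# so the ExonQC threshold above is mean + the 99.95% z of the sd distribution.
stopifnot(abs(qnorm(0.9995) - 3.291) < 1e-3)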
cat ("ExonQC threshold based on 99% of sd distribution is: ", threshold_exonQC, "\n")
cat ("CNV Exon threshold del/dup: ", threshold_del, ", ", threshold_dup, "\n")
#sample_size = round(0.1*length(ExonQC_wo_outliers_for_Call_CNV_Y)) # sample 10% of the data to compute a exonQC threshold.
#threshold_exonQC = NULL
#for (i in 1:1) {
# hhh = sample(ExonQC_wo_outliers_for_Call_CNV_Y, size=sample_size, replace=F)
# threshold_exonQC = c(threshold_exonQC, 1.96 * sd(hhh) + mean(hhh)) # the sd cutoff is 95% of this background sampling
#}
#threshold_exonQC = mean(threshold_exonQC)
#cat ("ExonQC [ave of 1 events, each sampling 10% of bkgrd exon SD thresholded at 95% is]:", threshold_exonQC, "\n")
# add SampleQC stddev of log2 ratios, and row counts of data w/o Inf, NaN.
SampleQC = sapply(seq_along(rownames(ddd)), my_row_sd)
Sample_count_wo_InfNaN = sapply(seq_along(rownames(ddd)), my_row_count)
# add ExonQC of target stddev of log2 ratios, and col counts of data w/o Inf, NaN.
Exon_count_wo_InfNaN = sapply(seq_along(colnames(ddd)), my_col_count)
# ==> as two columns at the right end of matrix.
fff = cbind( ddd, data.frame( SampleQC = SampleQC, Sample_count_wo_InfNaN = Sample_count_wo_InfNaN ) )
# ==> as two rows at bottom end of matrix.
eee = as.data.frame(rbind( c(ExonQC_wo_outliers, NA, NA), c(Exon_count_wo_InfNaN, NA, NA) ))
rownames(eee) = c('ExonQC', 'Exon_count_wo_InfNaN')
colnames(eee) = colnames(fff)
fff = rbind(fff, eee)
fff = round(fff, 2)
rm(DDD, FFF, eee)
# fff: df of ddd w/ removed panel targets not used for CNV calling ie. (rm N ==> ppp$Call_CNV)
# fff = ddd[, which(ppp$Call_CNV=='Y') ]
# fff = cbind( fff, SampleQC=ddd$SampleQC, Sample_count_wo_InfNaN=ddd$Sample_count_wo_InfNaN )
# rrr: df of sample rpkm means & stddev: rownames(sampleids) ==> colnames(rrr$rpkm_mean, rrr$rpkm_stddev)
rrr = data.frame(rpkm_mean = rowMeans(ccc), rpkm_stddev = apply(ccc, 1, sd))
rrr = round(rrr, 2)
sss = data.frame(rpkm_mean = rowMeans(ccc), rpkm_stddev = apply(ccc, 1, sd), SampleQC = SampleQC)
sss = round(sss, 2)
# write sample rpkm means & stddev results to outfile:
sample_mean_stddev_outfile = gsub('RPKM_matrix', 'Sample_RPKM-means-stddevs_log2-stddevs', rpkm_matrix_file)
suppressWarnings(write.table(sss, file=sample_mean_stddev_outfile, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
# c_score matrix and pvalue matrix
cscore_outfile = gsub('RPKM_matrix', 'Cscore_matrix', rpkm_matrix_file)
pval_outfile = gsub('RPKM_matrix', 'Pval_matrix', rpkm_matrix_file)
c_scores = as.matrix( t( t(ddd) / ExonQC_wo_outliers) ) # apply function requires matrix.
#pvals = as.data.frame( apply(c_scores, 2, function(x) { 2*(1 - pnorm(abs(x))) } ), stringsAsFactors=FALSE ) # a matrix has to be coerced into a dataframe! two tailed pvalues
pvals = as.data.frame( apply(c_scores, 2, function(x) { 1 - pnorm(abs(x)) } ), stringsAsFactors=FALSE ) # a matrix has to be coerced into a dataframe! one tail pvalues
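# Illustrative aside: the one-tailed transform above maps |z| = 1.96 to p ~ 0.025.
stopifnot(abs((1 - pnorm(abs(-1.96))) - 0.025) < 1e-3)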
c_scores = as.data.frame(c_scores)
c_scores = round(c_scores, 2)
c_scores = cbind(Samples = rownames(c_scores), c_scores)
pvals = round(pvals, 2)
pvals = cbind(Samples = rownames(pvals), pvals)
suppressWarnings(write.table(c_scores, file=cscore_outfile, append=T, quote=F, row.names=F, col.names=T, sep="\t"))
suppressWarnings(write.table(pvals, file=pval_outfile, append=T, quote=F, row.names=F, col.names=T, sep="\t"))
# Failed exons by Exon QC > computed exon sd without outliers (a row slice of a dataframe is itself a dataframe)
fff_idx = which(fff["ExonQC",]>threshold_exonQC)
if (length(fff_idx)==1) { # a single matching column drops to a vector, so rebuild the one-row dataframe by hand.
failed_exons_by_ExonQC = as.data.frame(fff["ExonQC", fff_idx])
rownames(failed_exons_by_ExonQC) = colnames(fff)[fff_idx]
colnames(failed_exons_by_ExonQC) = 'ExonQC'
} else { # 0 or 2+ matching columns keep the row-slice (dataframe) shape.
failed_exons_by_ExonQC = as.data.frame( t( fff["ExonQC", fff_idx] ) )
}
Exon_count_wo_InfNaN = as.data.frame( t( fff["Exon_count_wo_InfNaN", rownames(failed_exons_by_ExonQC)] ) )
failed_exons_by_ExonQC = data.frame(
Gene_Exon=as.character(ppp$Gene_Exon[ppp$Exon_Target %in% rownames(failed_exons_by_ExonQC)]),
failed_exons_by_ExonQC,
Exon_count_wo_InfNaN,
stringsAsFactors=FALSE
)
# Failed samples by SampleQC > 0.2
failed_samples = rownames(fff)[which(fff$SampleQC>threshold_sampleQC)]
failed_samples_by_SampleQC = data.frame(
failed_samples = failed_samples,
sd_SampleQC = fff$SampleQC[which(fff$SampleQC>threshold_sampleQC)],
Sample_count_wo_InfNaN = fff$Sample_count_wo_InfNaN[which(fff$SampleQC>threshold_sampleQC)],
rpkm_mean = rrr$rpkm_mean[rownames(rrr) %in% failed_samples],
stringsAsFactors=FALSE
)
# sample rpkm stats
sample_rpkm_stats = data.frame(sample_stats = c(min(rrr$rpkm_mean), max(rrr$rpkm_mean), median(rrr$rpkm_mean), min(rrr$rpkm_stddev), max(rrr$rpkm_stddev), median(rrr$rpkm_stddev)))
rownames(sample_rpkm_stats) = c('min_rpkm', 'max_rpkm', 'median_rpkm', 'min_stddev', 'max_stddev', 'median_stddev')
# Failed samples by anova, use rrr$rpkm_mean
aaa = melt(t(ccc)) # from reshape2 library to collapse df to data for lm.
colnames(aaa) = c('targ_coor', 'sample_id', 'rpkm')
fit = lm(rpkm ~ sample_id, data=aaa) # fit once and reuse for the ANOVA table and the coefficient summary.
fstatistic = anova(fit)$`F value`[1]
pvalue = anova(fit)$`Pr(>F)`[1]
s = summary(fit)
anova = data.frame(fstatistic,pvalue)
failed_samples_by_anova_pval_lt_5pct = data.frame(Prob_gt_t = s$coefficients[ which(s$coefficients[,4]<threshold_sample_anova), 4])
# write midpool summary results of above 5 dfs:
suppressWarnings(write.table(threshold_exonQC, file=midpool_summary_results, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(failed_exons_by_ExonQC, file=midpool_summary_results, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(failed_samples_by_SampleQC, file=midpool_summary_results, append=T, quote=F, row.names=F, col.names=T, sep="\t"))
suppressWarnings(write.table(sample_rpkm_stats, file=midpool_summary_results, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(anova, file=midpool_summary_results, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(failed_samples_by_anova_pval_lt_5pct, file=midpool_summary_results, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
# Calling CNVs on fff dataframe: call dels first since it creates the .cnv outputfile.
garbage = sapply(seq_along(rownames(ccc)), call_cnvs)
# -------under development, think about this-----------#
# dels_targs = cbind(colnames(fff)[dels], rep(1, length(dels)))
# dels_targs = split( dels_targs, seq(nrow(dels_targs)) )
# x = c("7:6013030-6013173", 1)
# targ_zscore = lapply( dels_targs, get_targ_zscore)
# get_targ_zscore = function (x) { # x is a list.
# i = x[1]
# j = as.integer(x[2])
# x1 = ccc[[i]][j]
# xbar = mean(ccc[[i]])
# mu = sd(ccc[[i]])
# zscore = (x1 - xbar)/mu
# return(zscore)
# }
# targ_pval = # write function to get the pval of the targ_zscore above.
| /atlas_cnv.R | permissive | limingxiang/Atlas-CNV | R | false | false | 27,559 | r | # --------------------------------------------------------------------------------
# atlas_cnv.R v0. called by main atlas_cnv.pl v0, Sep 11, 2018. Ted Chiang
# Copyright 2016-2018, Baylor College of Medicine Human Genome Sequencing Center.
# All rights reserved.
# --------------------------------------------------------------------------------
library(reshape2)
library("optparse")
library(plotrix)
option_list = list(
make_option(c("--rpkm_file"), type="character", default=NULL,
help="RPKM matrix data filename", metavar="character"),
make_option(c("--panel_file"), type="character", default="HG19_CAREseqv2_eMERGE3_LD_panel_design_3565_exons.tab",
help="panel design filename. 3 tab-delimited columns: 'Exon_Target', 'Gene_Exon', 'Call_CNV'
[default = %default]", metavar="character"),
make_option(c("--threshold_del"), type="numeric", default="-0.6",
help="log2 ratio cutoff to call a CNV deletion. [default= %default]", metavar="numeric"),
make_option(c("--threshold_dup"), type="numeric", default="0.4",
help="log2 ratio cutoff to call a CNV duplication. [default= %default]", metavar="numeric"),
# TODO (disabled option): decide whether to expose an ExonQC z-score cutoff:
#make_option(c("--threshold_exonQC_Zscore"), type="numeric", default="0.2",
# help="ExonQC cutoff to fail an exon target, ie. stddev > this value are flagged as failed. [default= %default]", metavar="numeric"),
make_option(c("--threshold_sampleQC"), type="numeric", default="0.2",
help="SampleQC cutoff to fail a sample, ie. stddev > this value are flagged as failed. [default= %default]", metavar="numeric"),
make_option(c("--threshold_sample_anova"), type="numeric", default="0.05",
help="In the coefficients of one way ANOVA of sample means, the Pr(>|t|) cutoff to flag a sample. [default= %default]", metavar="numeric")
);
opt_parser = OptionParser(option_list=option_list);
opt = parse_args(opt_parser);
if (is.null(opt$rpkm_file)) {
print_help(opt_parser)
stop("RPKM matrix datafile must be supplied (--rpkm_file).", call.=FALSE)
}
if (!file.exists(opt$panel_file)) {
print_help(opt_parser)
stop("Panel design file is not supplied, or default cannot be found. (--panel_file).", call.=FALSE)
}
# ccc: df of rpkms for all samples, all targets, ie. RPKM_matrix.IRC_MB-MID0075.CLEMRG_1pA (already 2066 columns, ie. only Call_CNV=Y)
rpkm_matrix_file = opt$rpkm_file
ccc = read.table(file=rpkm_matrix_file, header=T, check.names=F, row.names=1)
# Midpool summary result outputfile & midpool directory
midpool_summary_results = gsub('RPKM_matrix', 'atlas_cnv_summary', rpkm_matrix_file)
midpool_dir = gsub('/.*', '/', rpkm_matrix_file)
# ppp: df of the panel design file with 3 columns: ppp$Exon_Target, ppp$Gene_Exon, ppp$Call_CNV, ppp$RefSeq
panel_design_file = opt$panel_file
ppp = read.table(file=panel_design_file, header=T, check.names=F)
# =============================================================================
# CNV default cutoffs:
# 2^(0.5) = 1.414214.... so if RPKM ratio is >= 1.41 then, it's a DUP call.
# 2^(0.4692898) = 1.3844278
# 1-2^(-0.7) = 0.3844278
# 2^(-0.7) = 0.615572.... so if RPKM ratio is <= 0.61 then, it's a DEL call. <--OLD
# 2^(-0.415) = 0.75.... so if RPKM ratio is <= 0.75 then, it's a DEL call.
# =============================================================================
threshold_del = opt$threshold_del
threshold_dup = opt$threshold_dup
#threshold_exonQC = opt$threshold_exonQC
threshold_sampleQC = opt$threshold_sampleQC
threshold_sample_anova = opt$threshold_sample_anova
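# Illustrative aside (toy arithmetic, independent of pipeline data): the default
# cutoffs of -0.6/0.4 correspond to sample/median RPKM ratios of 2^(-0.6) ~ 0.66
# (del) and 2^(0.4) ~ 1.32 (dup).
stopifnot(abs(2^(-0.6) - 0.659754) < 1e-5, abs(2^(0.4) - 1.319508) < 1e-5)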
# some in-house functions:
find_median_sample = function (x) {
median_rpkm = mmm[x]
targ_coor = names(mmm[x]) # "1:1220087-1220186"
# ccc$"1:1220087-1220186" works but not as a variable ccc$"targ_coor".
# ccc[targ_coor] <-- column slice
# ccc[[targ_coor]] <-- column vector, same as ccc$"1:1220087-1220186"
median_rpkm = max(ccc[[targ_coor]][ which(ccc[[targ_coor]] <= median_rpkm) ]);
sample_idx = which(ccc[[targ_coor]] == median_rpkm)
sample_idx = sample_idx[1] # in case there are duplicate samples of 0 rpkm, then just pick one.
sample_id = rownames(ccc)[sample_idx]
med = c(targ_coor, median_rpkm, sample_id, sample_idx)
return (med)
}
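# Illustrative aside (toy data.frame, removed immediately): single brackets keep the
# data.frame class while double brackets extract the underlying vector, the
# distinction the comments in find_median_sample rely on.
demo_df = data.frame(a = 1:3)
stopifnot(is.data.frame(demo_df["a"]), is.numeric(demo_df[["a"]]))
rm(demo_df)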
my_row_sd = function(x) {
aaa = DDD[x,][FFF[x,]]
mysd = sd( aaa )
return (mysd)
}
my_row_count = function(x) {
count = sum(FFF[x,])
# cat ('n(for stddev cal)=', count, "\n")
return (count)
}
my_col_sd = function(x) {
aaa = DDD[,x][FFF[,x]]
mysd = sd( aaa )
return (mysd)
}
my_col_count = function(x) {
count = sum(FFF[,x])
# cat ('n(for stddev cal)=', count, "\n")
return (count)
}
my_col_sd_wo_outliers = function(x) {
aaa = DDD_wo_outliers[,x][FFF_wo_outliers[,x]]
mysd = sd( aaa )
return (mysd)
}
my_col_x_based_on_zscore_neg3_wo_outliers = function(x) { # NB: despite the '_neg3' name, z = -2.576 is currently used (see below).
aaa = DDD_wo_outliers[,x][FFF_wo_outliers[,x]]
mymean = mean( aaa )
mysd = sd( aaa )
#myx = (-4)*mysd + mymean
myx = (-2.576)*mysd + mymean
return (myx)
}
my_col_x_based_on_zscore_pos3_wo_outliers = function(x) { # NB: despite the '_pos3' name, z = 2.576 is currently used (see below).
aaa = DDD_wo_outliers[,x][FFF_wo_outliers[,x]]
mymean = mean( aaa )
mysd = sd( aaa )
#myx = (4)*mysd + mymean
myx = (2.576)*mysd + mymean
return (myx)
}
pass_fail = function(x) {
if (x==TRUE) { return ("Fail") }
else { return ("Pass") }
}
call_cnvs = function (x) { # calling cnv per sample via idx of ccc df
sample_id = rownames(fff)[x]
# --------------------------------------------------------------------------------------------------------------------------
# check to see if sample failed sample QC or ANOVA. If so, name the file: .cnv.FAILED_sampleQC or .cnv.FAILED_sampleANOVA
# --------------------------------------------------------------------------------------------------------------------------
anova_sample_id = paste('sample_id', sample_id, sep='')
if (is.element(sample_id, failed_samples) & !is.element(anova_sample_id, rownames(failed_samples_by_anova_pval_lt_5pct) )) {
outfile = paste(midpool_dir, sample_id, '.cnv.FAILED_sampleQC', sep='')
}
else if (!is.element(sample_id, failed_samples) & is.element(anova_sample_id, rownames(failed_samples_by_anova_pval_lt_5pct) )) {
outfile = paste(midpool_dir, sample_id, '.cnv.FAILED_sampleANOVA', sep='')
}
else if (is.element(sample_id, failed_samples) & is.element(anova_sample_id, rownames(failed_samples_by_anova_pval_lt_5pct) )) {
outfile = paste(midpool_dir, sample_id, '.cnv.FAILED_sampleQC_and_sampleANOVA', sep='')
}
else {
outfile = paste(midpool_dir, sample_id, '.cnv', sep='')
}
header = rbind(c('Exon_Target', 'Gene_Exon', 'cnv', 'log2R', 'rpkm', 'median_rpkm', 'Exon_Status', 'E_StDev', 'c_Score', 'RefSeq'))
write.table(header, file=outfile, append=T, quote=F, row.names=F, col.names=F, sep="\t")
# call dels
cat ("Calling dels on:", sample_id)
sample_dels=NULL
#dels = which(fff[x,1:(ncol(fff)-2)]<=threshold_del)
dels = which(fff[x,1:(ncol(fff)-2)]<=threshold_del & fff[x,1:(ncol(fff)-2)]<=threshold_del_soft)
if (length(dels)==0) { cat (" has no cnv dels.\n") }
else {
cat (" has cnv dels.\n")
Gene_Exon = ppp$Gene_Exon[ppp$Exon_Target %in% colnames(fff)[dels]]
RefSeq = ppp$RefSeq[ppp$Exon_Target %in% colnames(fff)[dels]]
cnv = rep('del', length(dels))
log2R = as.data.frame( t(fff[x, dels]) )
rpkm = as.data.frame( t( ccc[x, colnames(ccc) %in% colnames(fff)[dels]] ) )
median_rpkm = mmm$median_rpkm[mmm$targ_coor %in% colnames(fff)[dels]]
ExonQC_PF = is.element(colnames(fff)[dels], rownames(failed_exons_by_ExonQC))
ExonQC_PF = sapply(ExonQC_PF, pass_fail)
ExonQC_value = as.numeric(fff["ExonQC", dels])
Confidence_Score = round(log2R/ExonQC_value, 2)
sample_dels = data.frame( as.character(Gene_Exon), cnv, log2R, rpkm, median_rpkm, ExonQC_PF, ExonQC_value, Confidence_Score, RefSeq, stringsAsFactors=FALSE )
colnames(sample_dels) = c('Gene_Exon', 'cnv', 'log2R', 'rpkm', 'median_rpkm', 'Exon_Status', 'E_StDev', 'c_Score', 'RefSeq')
rownames(sample_dels) = colnames(fff)[dels]
write.table(sample_dels, file=outfile, append=T, quote=F, row.names=T, col.names=F, sep="\t")
}
# call dups
cat ("Calling dups on:", sample_id)
sample_dups=NULL
#dups = which(fff[x,1:(ncol(fff)-2)]>=threshold_dup)
dups = which(fff[x,1:(ncol(fff)-2)]>=threshold_dup & fff[x,1:(ncol(fff)-2)]>=threshold_dup_soft)
if (length(dups)==0) { cat (" has no cnv dups.\n") }
else {
cat (" has cnv dups.\n")
Gene_Exon = ppp$Gene_Exon[ppp$Exon_Target %in% colnames(fff)[dups]]
RefSeq = ppp$RefSeq[ppp$Exon_Target %in% colnames(fff)[dups]]
cnv = rep('dup', length(dups))
log2R = as.data.frame( t(fff[x, dups]) )
rpkm = as.data.frame( t( ccc[x, colnames(ccc) %in% colnames(fff)[dups]] ) )
median_rpkm = mmm$median_rpkm[mmm$targ_coor %in% colnames(fff)[dups]]
ExonQC_PF = is.element(colnames(fff)[dups], rownames(failed_exons_by_ExonQC))
ExonQC_PF = sapply(ExonQC_PF, pass_fail)
ExonQC_value = as.numeric(fff["ExonQC", dups])
Confidence_Score = round(log2R/ExonQC_value, 2)
sample_dups = data.frame( as.character(Gene_Exon), cnv, log2R, rpkm, median_rpkm, ExonQC_PF, ExonQC_value, Confidence_Score, RefSeq, stringsAsFactors=FALSE )
colnames(sample_dups) = c('Gene_Exon', 'cnv', 'log2R', 'rpkm', 'median_rpkm', 'Exon_Status', 'E_StDev', 'c_Score', 'RefSeq')
rownames(sample_dups) = colnames(fff)[dups]
write.table(sample_dups, file=outfile, append=T, quote=F, row.names=T, col.names=F, sep="\t")
}
# plots
plot_cnvs(rbind(sample_dels, sample_dups), sample_id)
}
plot_cnvs = function(x,y) { # x is df of sample_dels/dups, y is sample_id
pdffile = paste(midpool_dir, y, '.pdf', sep='')
pdf.options(bg='honeydew')
pdf(file=pdffile);
par(mfrow=c(2,1))
sample_id = gsub('[_A-Z]', '', y)
if (length(nrow(x))==0) {
plot.new()
text (0.5, 1, paste('No called CNVs for: ', sample_id, sep=''))
garbage = dev.off()
}
else {
# gene plots
cnv_genes = rle( gsub('-.*', '', x$Gene_Exon) ) # rle is like unix:uniq, rm the transcript & exon numbering of Gene_Exon: MTHFR-001_11, ie. '-001_11'.
#cnv_genes_2plot = cnv_genes$values[ which(cnv_genes$lengths >= 1) ] # Do gene plots w/ 1+ exons
cnv_genes_2plot = cnv_genes$values[ which(cnv_genes$lengths >= 2) ] # Do gene plots w/ 2+ exons
if (length(cnv_genes_2plot) > 0) { # are there any genes to plot?
for (g in 1:length(cnv_genes_2plot)) {
gene = cnv_genes_2plot[g]
gene_pattern = paste('^', cnv_genes_2plot[g], '-', sep='') # bug: grep 'NF2' gets 'RNF207'
cat ('Gene plot for sample: ', y, 'using gene pattern: ', gene_pattern, "\n")
gene_exon_all = ppp$Gene_Exon[ grep(gene_pattern, ppp$Gene_Exon) ] # ppp: df of the panel design file with 3 columns: ppp$Exon_Target, ppp$Gene_Exon, ppp$Call_CNV
gene_targ_coor_all = ppp$Exon_Target[ grep(gene_pattern, ppp$Gene_Exon) ] # ppp: df of the panel design file with 3 columns: ppp$Exon_Target, ppp$Gene_Exon, ppp$Call_CNV
gene_targ_coor_start = which(colnames(fff) == gene_targ_coor_all[1])
gene_targ_coor_stop = which(colnames(fff) == gene_targ_coor_all[length(gene_targ_coor_all)])
lll = fff[y, gene_targ_coor_start:gene_targ_coor_stop] # df: one row w/column slice of log2R for given gene targets.
lll = 2^(unlist(lll)) # convert log2R back to ratio for barplot
names(lll) = gene_exon_all
cnv_gene_exons = x$Gene_Exon[ grep(gene_pattern, x$Gene_Exon) ]
col_vector=rep('grey', length(gene_exon_all))
col_vector[ gene_exon_all %in% cnv_gene_exons ] = 'red'
mean_Confidence_Score = round( mean( x$c_Score[ grep(gene_pattern, x$Gene_Exon) ] ), 2)
barplot(lll, col=col_vector, ylim=c(0,2), ylab='sample / median' , las=2, cex.names=0.5, axis.lty=1, main='Gene plot', space=0)
mtext(gene, cex=0.7)
legend("top", c(paste('mean c_Score', mean_Confidence_Score, sep=' = ')), col=c('black'), pch=15, cex=0.5, bg="transparent")
legend("topright", c('c_Score: (-)loss, (+)gain', 'High: abs(c_Score) > 5', 'Med: abs(c_Score) = 3-5', 'Low: abs(c_Score) < 3'), col=c('black', 'grey', 'grey', 'grey'), pch=15, cex=0.5, bg="transparent")
#legend("topright", c('dip: S/M = 0.85-1.15', 'del(het): S/M = 0.4-0.6', 'del(hom): S/M = 0-0.1', 'dup: S/M = >1.2': ), col=c('black', 'black', 'black', 'black'), pch=15, cex=0.5, bg="transparent")
axis(1, at=match(cnv_gene_exons,names(lll))-0.5, labels=cnv_gene_exons, las=2, cex.axis=0.5, col.axis='red')
abline(h=0.5)
abline(h=1)
table_rows = grep(gene_pattern, x$Gene_Exon) # each plot.new() can only handle 12 rows per table, so make appropriate number of plots.
if (length(table_rows) <= 12) {
plot.new()
p_start = 1
p_stop = length(table_rows)
addtable2plot(x='top', table=x[ table_rows[p_start:p_stop], 1:8], cex=0.75, bty='o', hlines=T, vlines=T, bg='honeydew', xpad=0.25, ypad=1)
}
else if (length(table_rows) > 12) {
num_of_tables = floor(length(table_rows)/12)
p_stop = 0
for ( p in 1:num_of_tables ) {
plot.new()
p_start = p_stop + 1
p_stop = p*12
addtable2plot(x='top', table=x[ table_rows[p_start:p_stop], 1:8], cex=0.75, bty='o', hlines=T, vlines=T, bg='honeydew', xpad=0.25, ypad=1)
}
if ((length(table_rows) %% 12) > 0) { # a last table if needed.
plot.new() # last table
p_start = p_stop + 1
p_stop = length(table_rows)
addtable2plot(x='top', table=x[ table_rows[p_start:p_stop], 1:8], cex=0.75, bty='o', hlines=T, vlines=T, bg='honeydew', xpad=0.25, ypad=1)
}
}
}
}
# exon plots
for (i in 1:nrow(x)) {
cnv_targ_coor = rownames(x)[i]
cnv_gene_exon = x$Gene_Exon[i]
cnv_exon_qc_value = fff["ExonQC",cnv_targ_coor]
cnv_exon_qc_status = x$Exon_Status[i]
cnv_exon_confidence_score = x$c_Score[i]
lll = fff[1:(nrow(fff)-2), cnv_targ_coor , drop=FALSE] # df: column slice of log2R for given target. Note:drop=FALSE keeps dimnames when subsetting to one dim.
nnn = gsub('[_A-Z]', '', rownames(lll)) # remove the '_HMWCYBCXX' and 'IDMB', for x-axis labels in the plot.
j = match(cnv_targ_coor, mmm$targ_coor)
median_rpkm = mmm$median_rpkm[j]
median_sample_id = mmm$sample_id[j]
median_sample_id = gsub('[_A-Z]', '', median_sample_id)
median_sample_idx = mmm$sample_idx[j]
col_vector=rep('grey', dim(ccc)[1])
threshold_del_soft_this_exon = threshold_del_soft[[cnv_gene_exon]]
threshold_dup_soft_this_exon = threshold_dup_soft[[cnv_gene_exon]]
#col_vector[ which(lll<=threshold_del) ] = 'darkgoldenrod3' # superseded: hard threshold alone colored non-called targets
#col_vector[ which(lll>=threshold_dup) ] = 'darkgoldenrod3' # superseded: hard threshold alone colored non-called targets
col_vector[ which(lll<=threshold_del & lll<=threshold_del_soft_this_exon) ] = 'darkgoldenrod3' # hom/het dels (hard + soft threshold, matching call_cnvs)
col_vector[ which(lll>=threshold_dup & lll>=threshold_dup_soft_this_exon) ] = 'darkgoldenrod3' # dups (hard + soft threshold, matching call_cnvs)
col_vector[ which(rownames(ccc)==y)] = 'red' # this given sample
col_vector[ median_sample_idx ] = 'blue' # median sample used for reference
lll=as.vector(t(2^(lll))) # convert log2R back to ratio for barplot
names(lll)=nnn # as.vector conversion causes rownames to be lost
barplot(lll, col=col_vector, ylim=c(0,2), ylab='sample / median', las=2, cex.names=0.5, axis.lty=1, main='Exon plot', space=0)
mtext(paste(cnv_gene_exon, cnv_targ_coor, sep=', '), cex=0.7)
legend("top", c(paste('exon status', cnv_exon_qc_status, sep=': '), paste('E[StDev]', cnv_exon_qc_value, sep=' = '), paste('c_Score', cnv_exon_confidence_score, sep=' = ')), col=c('black', 'black', 'black'), pch=15, cex=0.5, bg="transparent")
legend("topright", c('sample', 'median'), col=c('red', 'blue'), pch=15, cex=0.5, bg="transparent")
axis(1, at=match(sample_id,names(lll))-0.5, labels=sample_id, las=2, cex.axis=0.5, col.axis='red')
axis(1, at=match(median_sample_id,names(lll))-0.5, labels=median_sample_id, las=2, cex.axis=0.5, col.axis='blue')
abline(h=0.5)
abline(h=1)
}
garbage = dev.off()
}
}
remove_5pc_outliers = function (aaa) {
for (i in seq_along(aaa)) {
a = which( aaa[[i]] < ( -1.96*sd(aaa[[i]]) + mean(aaa[[i]]) ) )
b = which( aaa[[i]] > ( 1.96*sd(aaa[[i]]) + mean(aaa[[i]]) ) )
aaa[a,i]=NA
aaa[b,i]=NA
}
return(aaa)
}
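# Illustrative aside (toy data, not pipeline input): a value far outside
# mean +/- 1.96*sd of its column is replaced by NA, while in-range values survive.
demo = remove_5pc_outliers(data.frame(v = c(rep(10, 20), 1000)))
stopifnot(is.na(demo$v[21]), !anyNA(demo$v[1:20]))
rm(demo)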
# mmm: df of medians for each target: rownames(targ_coor) ==> colnames(mmm$median_rpkm, mmm$sample_id, mmm$sample_idx)
# ie. the artificial sample of reference medians.
# mmm = apply(ccc, 2, median) # 2 for column, 1 for row.
ccc_wo_outliers = remove_5pc_outliers(ccc)
mmm = apply(ccc_wo_outliers, 2, median, na.rm=TRUE)
mmm = lapply(seq_along(mmm), find_median_sample)
mmm = data.frame(matrix(unlist(mmm), ncol=4, byrow=T), stringsAsFactors=FALSE) # suppresses data.frame()’s default behaviour which turns strings into factors.
colnames(mmm) = c('targ_coor', 'median_rpkm', 'sample_id', 'sample_idx')
mmm = data.frame(targ_coor=mmm$targ_coor, median_rpkm=as.integer(mmm$median_rpkm), sample_id=mmm$sample_id, sample_idx=as.integer(mmm$sample_idx), stringsAsFactors=FALSE)
# ddd: df of log2 ratios of: samples (rows) vs. targets
ddd = as.data.frame(t( log2( t(ccc)/mmm$median_rpkm) ), stringsAsFactors=FALSE) # a matrix has to be coerced into a dataframe!
DDD = as.matrix(ddd) # apply function requires matrix.
FFF = is.finite(DDD) # T/F matrix to rm the Inf, NaN.
# ---------------------------------------------------------------------------------------------
# Same as above 3, but on ccc_wo_outliers. THIS IS FOR COMPUTING THE ExonQC_wo_outliers vector.
# ---------------------------------------------------------------------------------------------
ddd_wo_outliers = as.data.frame(t( log2( t(ccc_wo_outliers)/mmm$median_rpkm) ), stringsAsFactors=FALSE) # a matrix has to be coerced into a dataframe!
DDD_wo_outliers = as.matrix(ddd_wo_outliers) # apply function requires matrix.
FFF_wo_outliers = is.finite(DDD_wo_outliers) # T/F matrix to rm the Inf, NaN.
# ---------------------------------------------------------------------------------------------
# Vector of exon stddev wo_outliers
ExonQC = sapply(seq_along(colnames(ddd)), my_col_sd)
ExonQC_wo_outliers = sapply(seq_along(colnames(ddd_wo_outliers)), my_col_sd_wo_outliers)
threshold_del_soft = sapply(seq_along(colnames(ddd_wo_outliers)), my_col_x_based_on_zscore_neg3_wo_outliers) # The 'x' log2Ratio threshold (soft and data-derived) to call deletions (wo/outliers)
threshold_dup_soft = sapply(seq_along(colnames(ddd_wo_outliers)), my_col_x_based_on_zscore_pos3_wo_outliers) # The 'x' log2Ratio threshold (soft and data-derived) to call deletions (wo/outliers)
names(ExonQC) = ppp$Gene_Exon[ which(ppp$Call_CNV=='Y') ]
names(ExonQC_wo_outliers) = ppp$Gene_Exon[ which(ppp$Call_CNV=='Y') ]
names(threshold_del_soft) = ppp$Gene_Exon[ which(ppp$Call_CNV=='Y') ]
names(threshold_dup_soft) = ppp$Gene_Exon[ which(ppp$Call_CNV=='Y') ]
exon_qc_outfile = gsub('RPKM_matrix', 'ExonQC', rpkm_matrix_file)
exon_qc_wo_outliers_outfile = gsub('RPKM_matrix', 'ExonQC_wo_outliers', rpkm_matrix_file)
threshold_del_soft_outfile = gsub('RPKM_matrix', 'Exon_threshold_del_soft_wo_outliers', rpkm_matrix_file)
threshold_dup_soft_outfile = gsub('RPKM_matrix', 'Exon_threshold_dup_soft_wo_outliers', rpkm_matrix_file)
suppressWarnings(write.table(ExonQC, file=exon_qc_outfile, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(ExonQC_wo_outliers, file=exon_qc_wo_outliers_outfile, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(threshold_del_soft, file=threshold_del_soft_outfile, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(threshold_dup_soft, file=threshold_dup_soft_outfile, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
# 95%, 97.5%, 99%, 99.95%, 99.99% => zscore: 1.645, 1.96, 2.576, 3.291, 4
#threshold_exonQC = 2.576 * sd(ExonQC_wo_outliers) + mean(ExonQC_wo_outliers)
threshold_exonQC = 3.291 * sd(ExonQC_wo_outliers) + mean(ExonQC_wo_outliers)
#threshold_exonQC = 100 # To disable ExonQC metric.
names(threshold_exonQC) = 'threshold_exonQC'
cat ("ExonQC threshold based on 99% of sd distribution is: ", threshold_exonQC, "\n")
cat ("CNV Exon threshold del/dup: ", threshold_del, ", ", threshold_dup, "\n")
#sample_size = round(0.1*length(ExonQC_wo_outliers_for_Call_CNV_Y)) # sample 10% of the data to compute a exonQC threshold.
#threshold_exonQC = NULL
#for (i in 1:1) {
# hhh = sample(ExonQC_wo_outliers_for_Call_CNV_Y, size=sample_size, replace=F)
# threshold_exonQC = c(threshold_exonQC, 1.96 * sd(hhh) + mean(hhh)) # the sd cutoff is 95% of this background sampling
#}
#threshold_exonQC = mean(threshold_exonQC)
#cat ("ExonQC [ave of 1 events, each sampling 10% of bkgrd exon SD thresholded at 95% is]:", threshold_exonQC, "\n")
# add SampleQC stddev of log2 ratios, and row counts of data w/o Inf, NaN.
SampleQC = sapply(seq_along(rownames(ddd)), my_row_sd)
Sample_count_wo_InfNaN = sapply(seq_along(rownames(ddd)), my_row_count)
# add ExonQC of target stddev of log2 ratios, and col counts of data w/o Inf, NaN.
Exon_count_wo_InfNaN = sapply(seq_along(colnames(ddd)), my_col_count)
# ==> as two columns at the right end of matrix.
fff = cbind( ddd, data.frame( SampleQC = SampleQC, Sample_count_wo_InfNaN = Sample_count_wo_InfNaN ) )
# ==> as two rows at bottom end of matrix.
eee = as.data.frame(rbind( c(ExonQC_wo_outliers, NA, NA), c(Exon_count_wo_InfNaN, NA, NA) ))
rownames(eee) = c('ExonQC', 'Exon_count_wo_InfNaN')
colnames(eee) = colnames(fff)
fff = rbind(fff, eee)
fff = round(fff, 2)
rm(DDD, FFF, eee)
# fff: df of ddd w/ removed panel targets not used for CNV calling ie. (rm N ==> ppp$Call_CNV)
# fff = ddd[, which(ppp$Call_CNV=='Y') ]
# fff = cbind( fff, SampleQC=ddd$SampleQC, Sample_count_wo_InfNaN=ddd$Sample_count_wo_InfNaN )
# rrr: df of sample rpkm means & stddev: rownames(sampleids) ==> colnames(rrr$rpkm_mean, rrr$rpkm_stddev)
rrr = data.frame(rpkm_mean = rowMeans(ccc), rpkm_stddev = apply(ccc, 1, sd))
rrr = round(rrr, 2)
sss = data.frame(rpkm_mean = rowMeans(ccc), rpkm_stddev = apply(ccc, 1, sd), SampleQC = SampleQC)
sss = round(sss, 2)
# write sample rpkm means & stddev results to outfile:
sample_mean_stddev_outfile = gsub('RPKM_matrix', 'Sample_RPKM-means-stddevs_log2-stddevs', rpkm_matrix_file)
suppressWarnings(write.table(sss, file=sample_mean_stddev_outfile, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
# c_score matrix and pvalue matrix
cscore_outfile = gsub('RPKM_matrix', 'Cscore_matrix', rpkm_matrix_file)
pval_outfile = gsub('RPKM_matrix', 'Pval_matrix', rpkm_matrix_file)
c_scores = as.matrix( t( t(ddd) / ExonQC_wo_outliers) ) # apply function requires matrix.
#pvals = as.data.frame( apply(c_scores, 2, function(x) { 2*(1 - pnorm(abs(x))) } ), stringsAsFactors=FALSE ) # a matrix has to be coerced into a dataframe! two tailed pvalues
pvals = as.data.frame( apply(c_scores, 2, function(x) { 1 - pnorm(abs(x)) } ), stringsAsFactors=FALSE ) # a matrix has to be coerced into a dataframe! one tail pvalues
c_scores = as.data.frame(c_scores)
c_scores = round(c_scores, 2)
c_scores = cbind(Samples = rownames(c_scores), c_scores)
pvals = round(pvals, 2)
pvals = cbind(Samples = rownames(pvals), pvals)
suppressWarnings(write.table(c_scores, file=cscore_outfile, append=T, quote=F, row.names=F, col.names=T, sep="\t"))
suppressWarnings(write.table(pvals, file=pval_outfile, append=T, quote=F, row.names=F, col.names=T, sep="\t"))
# Failed exons by Exon QC > computed exon sd without outliers (row slice is a dataframe already)
fff_idx = which(fff["ExonQC",]>threshold_exonQC)
if (length(fff_idx)==1) { # when row slice is only 1 datapoint which is not automatically a row slice.
fff_idx = which(fff["ExonQC",]>threshold_exonQC)
failed_exons_by_ExonQC = as.data.frame(fff["ExonQC", fff_idx])
rownames(failed_exons_by_ExonQC) = colnames(fff)[fff_idx]
colnames(failed_exons_by_ExonQC) = 'ExonQC'
} else { # when row slice has 0 datapoints or 2 or more which would be a row slice.
failed_exons_by_ExonQC = as.data.frame( t( fff["ExonQC", which(fff["ExonQC",]>threshold_exonQC)] ) )
}
Exon_count_wo_InfNaN = as.data.frame( t( fff["Exon_count_wo_InfNaN", rownames(failed_exons_by_ExonQC)] ) )
failed_exons_by_ExonQC = data.frame(
Gene_Exon=as.character(ppp$Gene_Exon[ppp$Exon_Target %in% rownames(failed_exons_by_ExonQC)]),
failed_exons_by_ExonQC,
Exon_count_wo_InfNaN,
stringsAsFactors=FALSE
)
# Failed samples by SampleQC > 0.2
failed_samples = rownames(fff)[which(fff$SampleQC>threshold_sampleQC)]
failed_samples_by_SampleQC = data.frame(
failed_samples = failed_samples,
sd_SampleQC = fff$SampleQC[which(fff$SampleQC>threshold_sampleQC)],
Sample_count_wo_InfNaN = fff$Sample_count_wo_InfNaN[which(fff$SampleQC>threshold_sampleQC)],
rpkm_mean = rrr$rpkm_mean[rownames(rrr) %in% failed_samples],
stringsAsFactors=FALSE
)
# sample rpkm stats
sample_rpkm_stats = data.frame(sample_stats = c(min(rrr$rpkm_mean), max(rrr$rpkm_mean), median(rrr$rpkm_mean), min(rrr$rpkm_stddev), max(rrr$rpkm_stddev), median(rrr$rpkm_stddev)))
rownames(sample_rpkm_stats) = c('min_rpkm', 'max_rpkm', 'median_rpkm', 'min_stddev', 'max_stddev', 'median_stddev')
# Failed samples by anova, use rrr$rpkm_mean
aaa = melt(t(ccc)) # from reshape2 library to collapse df to data for lm.
colnames(aaa) = c('targ_coor', 'sample_id', 'rpkm')
fit = lm(rpkm ~ sample_id, data=aaa) # fit once, reuse for the F test and coefficient t tests
fstatistic = anova(fit)$F[1]
pvalue = anova(fit)$P[1]
s = summary(fit)
anova = data.frame(fstatistic,pvalue)
failed_samples_by_anova_pval_lt_5pct = data.frame(Prob_gt_t = s$coefficients[ which(s$coefficients[,4]<threshold_sample_anova), 4])
# write midpool summary results of above 5 dfs:
suppressWarnings(write.table(threshold_exonQC, file=midpool_summary_results, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(failed_exons_by_ExonQC, file=midpool_summary_results, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(failed_samples_by_SampleQC, file=midpool_summary_results, append=T, quote=F, row.names=F, col.names=T, sep="\t"))
suppressWarnings(write.table(sample_rpkm_stats, file=midpool_summary_results, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(anova, file=midpool_summary_results, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
suppressWarnings(write.table(failed_samples_by_anova_pval_lt_5pct, file=midpool_summary_results, append=T, quote=F, row.names=T, col.names=T, sep="\t"))
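# The ANOVA-based sample screen above can be exercised on synthetic data. Everything
# below is a standalone sketch (names like `toy` are illustrative, not pipeline objects):

```r
set.seed(1)
toy <- data.frame(
  sample_id = rep(c("s1", "s2", "s3"), each = 20),
  rpkm      = c(rnorm(20, mean = 5), rnorm(20, mean = 5), rnorm(20, mean = 8)) # s3 shifted
)
fit <- lm(rpkm ~ sample_id, data = toy)
anova(fit)$F[1]                      # large F: at least one sample mean differs
coefs <- summary(fit)$coefficients
# column 4 is Pr(>|t|); as in the pipeline code, the intercept row is not excluded
coefs[coefs[, 4] < 0.05, 4, drop = FALSE]
```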
# Calling CNVs on fff dataframe: call dels first since it creates the .cnv outputfile.
garbage = sapply(seq_along(rownames(ccc)), call_cnvs)
# -------under development, think about this-----------#
# dels_targs = cbind(colnames(fff)[dels], rep(1, length(dels)))
# dels_targs = split( dels_targs, seq(nrow(dels_targs)) )
# x = c("7:6013030-6013173", 1)
# targ_zscore = lapply( dels_targs, get_targ_zscore)
# get_targ_zscore = function (x) { # x is a list.
# i = x[1]
# j = as.integer(x[2])
# x1 = ccc[[i]][j]
# xbar = mean(ccc[[i]])
# mu = sd(ccc[[i]])
# zscore = (x1 - xbar)/mu
# return(zscore)
# }
# targ_pval = # write function to get the pval of the targ_zscore above.
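# A working standalone version of the per-target z-score idea sketched in the commented
# block might look like this (assumes a ccc-like numeric matrix with rows = samples and
# columns = targets; all names here are illustrative, not the pipeline's objects):

```r
# Per-target z-score for one sample, following the draft above
# (the draft names the sd 'mu' and the mean 'xbar'; the arithmetic is the same).
target_zscore <- function(mat, target, sample) {
  vals <- mat[, target]
  (vals[sample] - mean(vals)) / sd(vals)
}
set.seed(2)
mat <- matrix(rnorm(50, mean = 10), nrow = 10,
              dimnames = list(paste0("samp", 1:10), paste0("targ", 1:5)))
z <- target_zscore(mat, "targ1", "samp3")
pval <- 1 - pnorm(abs(z)) # one-tailed, matching the pipeline's convention
```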
install.packages("elastic")
elastic::connect(es_base = "http://localhost", es_port = 19300);
# read the data
a0705 <- elastic::Search(index = "sevenga.ironcommander", type = "登陆游戏",
body = '{"query":{"range":{"doc_timestamp":{"from":"2015-07-05T00:00:00Z","to":"2015-07-05T23:59:59Z"}}}}', size = 99999999)$hits$hits;
# new method: use lapply with an anonymous function instead of assigning inside a loop;
newts <- lapply(a0705, function(x) x$fields$doc_timestamp[[1]]);
newid <- lapply(a0705, function(x) x$fields$'#user_id'[[1]]);
# check the data quality and drop bad records
factor <- nchar(newid)
table(factor)
newts <- newts[-which(nchar(newid)==4)] # filter newts before newid so the which() indices stay valid.
newid <- newid[-which(nchar(newid)==4)]
## convert the timestamps to date and time format, then build a data frame
newid <- do.call(c, newid)
newts <- do.call(c, newts) # note: newts <- c(newts) alone does not flatten the list
result <- data.frame(id = newid, time = newts);
result$time <- paste(substr(result$time, 1,10), substr(result$time, 12,19));
result$time <- as.POSIXlt(result$time, format = "%Y-%m-%d %H:%M:%S")
result$date <- as.Date(substr(newts,1,10))
head(result)
# sort by user_id, then by time within each id
result.ordered <- result[order(result$id,result$time),]
result.ordered[c(1:10,3000:3010,400000:400010),]
test <- result[which(result$date=="2015-07-12"), ]
head(test)
save(test, file = "userlife0712.RData")
############################################# below is earlier trial code; the working code is above ##################################################################################
#use "scroll" to read pages;
aa <- elastic::Search(index = "sevenga.ironcommander", type = "登陆游戏", fields = c("#user_id","doc_timestamp"),
size = 50, scroll = "1m")$hits$hits;
elastic::scroll(scroll_id = aa$'_scroll_id');
out <- list();
hits <- 1;
while(hits != 0){
  aa <- elastic::scroll(scroll_id = aa$'_scroll_id')
hits <- length(aa$hits$hits)
if(hits > 0)
out <- c(out, aa$hits$hits)
}
# loop over each list element and store the needed fields in a data frame.
# Error in id[i] = a[[i]]$fields$`#user_id`[[1]] : replacement has length zero
l <- length(a);
time=numeric(l);
id=numeric(l);
for(i in 1:l){
id[i]=a[[i]]$fields$`#user_id`[[1]]
time[i]=a[[i]]$fields$doc_timestamp[[1]]
}
result=data.frame(id=id,time=time);
result[1:10, ]
#elastic::count(index = "sevenga.ironcommander", type = "player_es")
elastic::Search(index = "sevenga.ironcommander", type = "player_es", q = "user_id:1054209");
l <- length(a);
newts <- sapply(1:l, function(i) a$fields$doc_timestamp[[i]],simplify = TRUE);
newid <- sapply(1:l, function(i) a$fields$'#user_id'[[i]])

| /R_elastic.R | no_license | GithubRan/gameUserAnalysis | R | false | false | 2,694 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/module5_mst_sim.R
\name{mst_sim}
\alias{mst_sim}
\alias{print.mst_sim}
\alias{plot.mst_sim}
\title{Simulation of Multistage Testing}
\usage{
mst_sim(x, true, rdp = NULL, ...)
\method{print}{mst_sim}(x, ...)
\method{plot}{mst_sim}(x, ...)
}
\arguments{
\item{x}{the assembled MST}
\item{true}{the true theta parameter (numeric)}
\item{rdp}{routing decision points (list)}
\item{...}{additional option/control parameters}
}
\description{
\code{mst_sim} simulates an MST administration
}
\examples{
\dontrun{
## assemble an MST
nitems <- 200
pool <- with(model_3pl_gendata(1, nitems), data.frame(a=a, b=b, c=c))
pool$content <- sample(1:3, nrow(pool), replace=TRUE)
x <- mst(pool, "1-2-2", 2, 'topdown', len=20, max_use=1)
x <- mst_obj(x, theta=-1, indices=1)
x <- mst_obj(x, theta=0, indices=2:3)
x <- mst_obj(x, theta=1, indices=4)
x <- mst_constraint(x, "content", 6, 6, level=1)
x <- mst_constraint(x, "content", 6, 6, level=2)
x <- mst_constraint(x, "content", 8, 8, level=3)
x <- mst_stage_length(x, 1:2, min=5)
x <- mst_assemble(x)
## ex. 1: administer the MST using fixed RDP for routing
x_sim <- mst_sim(x, .5, list(stage1=0, stage2=0))
plot(x_sim)
## ex. 2: administer the MST using the max. info. for routing
x_sim <- mst_sim(x, .5)
plot(x_sim, ylim=c(-5, 5))
}
}
| /man/mst_sim.Rd | no_license | zjiang4/xxIRT | R | false | true | 1,355 | rd |
/run_analysis.r | no_license | neenujoseaug/Peer-graded-Assignment-Getting-and-Cleaning-Data | R | false | false | 3,159 | r | ||
data("PreSex", package="vcd")
structable(Gender+PremaritalSex+ExtramaritalSex ~ MaritalStatus, PreSex)
#structable(MaritalStatus ~ Gender+PremaritalSex+ExtramaritalSex, PreSex)
## Mosaic display for Gender and Premarital Sexual Experience
## (Gender Pre)
PreSex <- aperm(PreSex, 4:1) # order variables G, P, E, M
mosaic(margin.table(PreSex, 1:2), shade=TRUE,
main = "Gender and Premarital Sex")
## (Gender Pre)(Extra)
mosaic(margin.table(PreSex, 1:3),
expected = ~Gender * PremaritalSex + ExtramaritalSex ,
main = "Gender*Pre + ExtramaritalSex")
## (Gender Pre Extra)(Marital)
mosaic(PreSex,
expected = ~Gender*PremaritalSex*ExtramaritalSex + MaritalStatus,
main = "Gender*Pre*Extra + MaritalStatus")
## (GPE)(PEM)
mosaic(PreSex,
expected = ~ Gender * PremaritalSex * ExtramaritalSex
+ MaritalStatus * PremaritalSex * ExtramaritalSex,
main = "G*P*E + P*E*M")
## Problem: can't use marginals=2:4 here or subset with [2:4]
## because the latter converts back to a list.
mods <- seq_loglm(PreSex, type="joint")
res <- summarise(mods)
rownames(res) <- c("[G]", "[G][P]", "[GP][E]", "[GPE][M]")
res
## try this directly
library(MASS)
mods.GP <- loglm(~Gender + PremaritalSex, data=margin.table(PreSex, 1:2))
mods.GPE <- loglm(~Gender * PremaritalSex + ExtramaritalSex, margin.table(PreSex, 1:3))
mods.GPEM <- loglm(~Gender * PremaritalSex * ExtramaritalSex + MaritalStatus, data=PreSex)
mods.mut <- loglm(~Gender + PremaritalSex + ExtramaritalSex + MaritalStatus, data=PreSex)
mods.list <- loglmlist("[G][P]"=mods.GP, "[GP][E]"=mods.GPE,
"[GPE][M]"=mods.GPEM, "[G][P][E][M]"=mods.mut)
summarise(mods.list)
GSQ <- sapply(mods.list[1:3], function(x)x$lrt)
dimnames(GSQ) <- dimnames(mods.list)
GSQ
sum(GSQ)
mosaic(mods, 2, main = TRUE)
mosaic(mods, 3, main = TRUE)
mosaic(mods, 4, main = TRUE)
## (GPE)(PEM)
mosaic(PreSex,
expected = ~ Gender * PremaritalSex * ExtramaritalSex
+ MaritalStatus * PremaritalSex * ExtramaritalSex,
main = "[GPE] + [PEM]")
oddsratio(margin.table(PreSex, 1:3), stratum=1, log=FALSE)

| /ch05/R/presex.R | no_license | friendly/VCDR | R | false | false | 2,156 | r |
# Make - multi class -----------------------------------------------------------
#' Create a `class_pred` vector from class probabilities
#'
#' These functions can be used to convert class probability estimates to
#' `class_pred` objects with an optional equivocal zone.
#'
#' @param ... Numeric vectors corresponding to class probabilities. There should
#' be one for each level in `levels`, and _it is assumed that the vectors
#' are in the same order as `levels`_.
#'
#' @param estimate A single numeric vector corresponding to the class
#' probabilities of the first level in `levels`.
#'
#' @param levels A character vector of class levels. The length should be the
#' same as the number of selections made through `...`, or length `2`
#' for `make_two_class_pred()`.
#'
#' @param ordered A single logical to determine if the levels should be regarded
#' as ordered (in the order given). This results in a `class_pred` object
#' that is flagged as ordered.
#'
#' @param min_prob A single numeric value. If any probabilities are less than
#' this value (by row), the row is marked as _equivocal_.
#'
#' @param threshold A single numeric value for the threshold to call a row to
#' be labeled as the first value of `levels`.
#'
#' @param buffer A numeric vector of length 1 or 2 for the buffer around
#' `threshold` that defines the equivocal zone (i.e., `threshold - buffer[1]` to
#' `threshold + buffer[2]`). A length 1 vector is recycled to length 2. The
#' default, `NULL`, is interpreted as no equivocal zone.
#'
#' @return A vector of class [`class_pred`].
#'
#' @examples
#'
#' library(dplyr)
#'
#' good <- segment_logistic$.pred_good
#' lvls <- levels(segment_logistic$Class)
#'
#' # Equivocal zone of .5 +/- .15
#' make_two_class_pred(good, lvls, buffer = 0.15)
#'
#' # Equivocal zone of c(.5 - .05, .5 + .15)
#' make_two_class_pred(good, lvls, buffer = c(0.05, 0.15))
#'
#' # These functions are useful alongside dplyr::mutate()
#' segment_logistic %>%
#' mutate(
#' .class_pred = make_two_class_pred(
#' estimate = .pred_good,
#' levels = levels(Class),
#' buffer = 0.15
#' )
#' )
#'
#' # Multi-class example
#' # Note that we provide class probability columns in the same
#' # order as the levels
#' species_probs %>%
#' mutate(
#' .class_pred = make_class_pred(
#' .pred_bobcat, .pred_coyote, .pred_gray_fox,
#' levels = levels(Species),
#' min_prob = .5
#' )
#' )
#'
#' @importFrom tidyselect vars_select
#' @export
make_class_pred <- function(...,
levels,
ordered = FALSE,
min_prob = 1/length(levels)) {
dots <- rlang::quos(...)
probs <- lapply(dots, rlang::eval_tidy)
# Length check
lens <- vapply(probs, length, numeric(1))
if (any(lens != lens[1])) {
stop(
"All vectors passed to `...` must be of the same length.",
call. = FALSE
)
}
# Type check
num_cols <- vapply(probs, is.numeric, logical(1))
if (any(!num_cols)) {
not_numeric <- which(!num_cols)
stop (
"At least one vector supplied to `...` is not numeric: ",
paste(not_numeric, collapse = ", "),
call. = FALSE
)
}
# Levels check (length and type)
  if (length(levels) != length(probs) || !is.character(levels)) {
stop (
"`levels` must be a character vector with the ",
"same length as the number of vectors passed to `...`.",
call. = FALSE
)
}
# min_prob checks
  if (length(min_prob) != 1 || !is.numeric(min_prob)) {
stop(
"`min_prob` must be a single numeric value.",
call. = FALSE
)
}
probs <- list2mat(probs)
x <- levels[apply(probs, 1, which.max)]
x <- factor(x, levels = levels, ordered = ordered)
if (!is.null(min_prob)) {
eq_ind <- which(apply(probs, 1, max) < min_prob)
} else {
eq_ind <- integer()
}
x <- class_pred(x, eq_ind)
x
}
# Make - two class -------------------------------------------------------------
#' @rdname make_class_pred
#' @export
make_two_class_pred <- function(estimate,
levels,
threshold = 0.5,
ordered = FALSE,
buffer = NULL) {
  if (length(levels) != 2 || !is.character(levels))
    stop ("`levels` must be a character vector of length 2.", call. = FALSE)
  if (!is.numeric(estimate))
    stop ("The selected probability vector should be numeric.", call. = FALSE)
  if (!is.null(buffer) && (length(buffer) > 2 || !is.numeric(buffer)))
    stop ("`buffer` must be a numeric vector of length 1 or 2.", call. = FALSE)
if (length(buffer) == 1) {
buffer <- c(buffer, buffer)
}
x <- ifelse(estimate >= threshold, levels[1], levels[2])
x <- factor(x, levels = levels, ordered = ordered)
if (is.null(buffer)) {
eq_ind <- integer()
}
else {
eq_ind <- which(
estimate >= threshold - buffer[1] &
estimate <= threshold + buffer[2]
)
}
x <- class_pred(x, eq_ind)
x
}
# Append -----------------------------------------------------------------------
#' Add a `class_pred` column
#'
#' This function is similar to [make_class_pred()], but is useful when you have
#' a large number of class probability columns and want to use `tidyselect`
#' helpers. It appends the new `class_pred` vector as a column on the original
#' data frame.
#'
#' @inheritParams make_class_pred
#'
#' @param .data A data frame or tibble.
#'
#' @param ... One or more unquoted expressions separated by commas
#' to capture the columns of `.data` containing the class
#' probabilities. You can treat variable names like they are
#' positions, so you can use expressions like `x:y` to select ranges
#' of variables or use selector functions to choose which columns.
#' For `make_class_pred`, the columns for all class probabilities
#' should be selected (in the same order as the `levels` object).
#' For `two_class_pred`, a vector of class probabilities should be
#' selected.
#'
#' @param name A single character value for the name of the appended
#' `class_pred` column.
#'
#' @return `.data` with an extra `class_pred` column appended onto it.
#'
#' @examples
#'
#' # The following two examples are equivalent and demonstrate
#' # the helper, append_class_pred()
#'
#' library(dplyr)
#'
#' species_probs %>%
#' mutate(
#' .class_pred = make_class_pred(
#' .pred_bobcat, .pred_coyote, .pred_gray_fox,
#' levels = levels(Species),
#' min_prob = .5
#' )
#' )
#'
#' lvls <- levels(species_probs$Species)
#'
#' append_class_pred(
#' .data = species_probs,
#' contains(".pred_"),
#' levels = lvls,
#' min_prob = .5
#' )
#'
#' @export
append_class_pred <- function(.data,
...,
levels,
ordered = FALSE,
min_prob = 1/length(levels),
name = ".class_pred") {
  if (!is.data.frame(.data) || ncol(.data) < 2) {
stop (
"`.data` should be a data frame or tibble with at least 2 columns.",
call. = FALSE
)
}
if (!rlang::is_scalar_character(name)) {
stop("`name` must be a single character value.", call. = FALSE)
}
prob_names <- tidyselect::vars_select(names(.data), !!!quos(...))
if (length(prob_names) < 2) {
stop ("`...` should select at least 2 columns.", call. = FALSE)
}
prob_syms <- rlang::syms(prob_names)
# Using a mutate() automatically supports groups
dplyr::mutate(
.data,
!! name := make_class_pred(
!!!prob_syms,
levels = levels,
ordered = ordered,
min_prob = min_prob
)
)
}
# Util -------------------------------------------------------------------------
list2mat <- function(lst) {
n_col <- length(lst)
vec <- unlist(lst, recursive = FALSE, use.names = FALSE)
matrix(vec, ncol = n_col)
}
| /R/make_class_pred.R | permissive | otavioacm/probably | R | false | false | 7,930 | r |
#' @name loadWorkbook
#' @title Load an existing .xlsx file
#' @author Alexander Walker
#' @param file A path to an existing .xlsx or .xlsm file
#' @param xlsxFile alias for file
#' @description loadWorkbook returns a workbook object conserving styles and
#' formatting of the original .xlsx file.
#' @return Workbook object.
#' @export
#' @seealso \code{\link{removeWorksheet}}
#' @examples
#' ## load existing workbook from package folder
#' wb <- loadWorkbook(file = system.file("loadExample.xlsx", package= "openxlsx"))
#' names(wb) #list worksheets
#' wb ## view object
#' ## Add a worksheet
#' addWorksheet(wb, "A new worksheet")
#'
#' ## Save workbook
#' saveWorkbook(wb, "loadExample.xlsx", overwrite = TRUE)
loadWorkbook <- function(file, xlsxFile = NULL){
if(!is.null(xlsxFile))
file <- xlsxFile
  file <- getFile(file)
if(!file.exists(file))
stop("File does not exist.")
wb <- createWorkbook()
## create temp dir
xmlDir <- file.path(tempdir(), paste0(tempfile(tmpdir = ""), "_openxlsx_loadWorkbook"))
## Unzip files to temp directory
xmlFiles <- unzip(file, exdir = xmlDir)
## Not used
# .relsXML <- xmlFiles[grepl("_rels/.rels$", xmlFiles, perl = TRUE)]
# appXML <- xmlFiles[grepl("app.xml$", xmlFiles, perl = TRUE)]
drawingsXML <- xmlFiles[grepl("drawings/drawing[0-9]+.xml$", xmlFiles, perl = TRUE)]
worksheetsXML <- xmlFiles[grepl("/worksheets/sheet[0-9]+", xmlFiles, perl = TRUE)]
coreXML <- xmlFiles[grepl("core.xml$", xmlFiles, perl = TRUE)]
workbookXML <- xmlFiles[grepl("workbook.xml$", xmlFiles, perl = TRUE)]
stylesXML <- xmlFiles[grepl("styles.xml$", xmlFiles, perl = TRUE)]
sharedStringsXML <- xmlFiles[grepl("sharedStrings.xml$", xmlFiles, perl = TRUE)]
themeXML <- xmlFiles[grepl("theme[0-9]+.xml$", xmlFiles, perl = TRUE)]
drawingRelsXML <- xmlFiles[grepl("drawing[0-9]+.xml.rels$", xmlFiles, perl = TRUE)]
sheetRelsXML <- xmlFiles[grepl("sheet[0-9]+.xml.rels$", xmlFiles, perl = TRUE)]
media <- xmlFiles[grepl("image[0-9]+.[a-z]+$", xmlFiles, perl = TRUE)]
vmlDrawingXML <- xmlFiles[grepl("drawings/vmlDrawing[0-9]+\\.vml$", xmlFiles, perl = TRUE)]
vmlDrawingRelsXML <- xmlFiles[grepl("vmlDrawing[0-9]+.vml.rels$", xmlFiles, perl = TRUE)]
commentsXML <- xmlFiles[grepl("xl/comments[0-9]+\\.xml", xmlFiles, perl = TRUE)]
embeddings <- xmlFiles[grepl("xl/embeddings", xmlFiles, perl = TRUE)]
charts <- xmlFiles[grepl("xl/charts/.*xml$", xmlFiles, perl = TRUE)]
chartsRels <- xmlFiles[grepl("xl/charts/_rels", xmlFiles, perl = TRUE)]
chartSheetsXML <- xmlFiles[grepl("xl/chartsheets/sheet[0-9]+\\.xml", xmlFiles, perl = TRUE)]
tablesXML <- xmlFiles[grepl("tables/table[0-9]+.xml$", xmlFiles, perl = TRUE)]
tableRelsXML <- xmlFiles[grepl("table[0-9]+.xml.rels$", xmlFiles, perl = TRUE)]
queryTablesXML <- xmlFiles[grepl("queryTable[0-9]+.xml$", xmlFiles, perl = TRUE)]
connectionsXML <- xmlFiles[grepl("connections.xml$", xmlFiles, perl = TRUE)]
extLinksXML <- xmlFiles[grepl("externalLink[0-9]+.xml$", xmlFiles, perl = TRUE)]
extLinksRelsXML <- xmlFiles[grepl("externalLink[0-9]+.xml.rels$", xmlFiles, perl = TRUE)]
# pivot tables
pivotTableXML <- xmlFiles[grepl("pivotTable[0-9]+.xml$", xmlFiles, perl = TRUE)]
pivotTableRelsXML <- xmlFiles[grepl("pivotTable[0-9]+.xml.rels$", xmlFiles, perl = TRUE)]
pivotDefXML <- xmlFiles[grepl("pivotCacheDefinition[0-9]+.xml$", xmlFiles, perl = TRUE)]
pivotDefRelsXML <- xmlFiles[grepl("pivotCacheDefinition[0-9]+.xml.rels$", xmlFiles, perl = TRUE)]
pivotCacheRecords <- xmlFiles[grepl("pivotCacheRecords[0-9]+.xml$", xmlFiles, perl = TRUE)]
## slicers
slicerXML <- xmlFiles[grepl("slicer[0-9]+.xml$", xmlFiles, perl = TRUE)]
slicerCachesXML <- xmlFiles[grepl("slicerCache[0-9]+.xml$", xmlFiles, perl = TRUE)]
## VBA Macro
vbaProject <- xmlFiles[grepl("vbaProject\\.bin$", xmlFiles, perl = TRUE)]
## remove all EXCEPT media and charts
on.exit(expr = unlink(xmlFiles[!grepl("charts|media|vmlDrawing|comment|embeddings|pivot|slicer|vbaProject", xmlFiles, ignore.case = TRUE)], recursive = TRUE, force = TRUE), add = TRUE)
nSheets <- length(worksheetsXML) + length(chartSheetsXML)
## get rid of chart sheets, these do not have a worksheets/sheet(i).xml
worksheet_rId_mapping <- NULL
workbookRelsXML <- xmlFiles[grepl("workbook.xml.rels$", xmlFiles, perl = TRUE)]
if(length(workbookRelsXML) > 0){
workbookRelsXML <- paste(readLines(con = workbookRelsXML, encoding="UTF-8", warn = FALSE), collapse = "")
workbookRelsXML <- getChildlessNode(xml = workbookRelsXML, tag = "<Relationship ")
worksheet_rId_mapping <- workbookRelsXML[grepl("worksheets/sheet", workbookRelsXML, fixed = TRUE)]
}
##
chartSheetRIds <- NULL
if(length(chartSheetsXML) > 0){
workbookRelsXML <- workbookRelsXML[grepl("chartsheets/sheet", workbookRelsXML, fixed = TRUE)]
chartSheetRIds <- unlist(getId(workbookRelsXML))
chartsheet_rId_mapping <- unlist(regmatches(workbookRelsXML, gregexpr('sheet[0-9]+\\.xml', workbookRelsXML, perl = TRUE, ignore.case = TRUE)))
sheetNo <- as.integer(regmatches(chartSheetsXML, regexpr("(?<=sheet)[0-9]+(?=\\.xml)", chartSheetsXML, perl = TRUE)))
chartSheetsXML <- chartSheetsXML[order(sheetNo)]
chartSheetsRelsXML <- xmlFiles[grepl("xl/chartsheets/_rels", xmlFiles, perl = TRUE)]
sheetNo2 <- as.integer(regmatches(chartSheetsRelsXML, regexpr("(?<=sheet)[0-9]+(?=\\.xml\\.rels)", chartSheetsRelsXML, perl = TRUE)))
chartSheetsRelsXML <- chartSheetsRelsXML[order(sheetNo2)]
chartSheetsRelsDir <- dirname(chartSheetsRelsXML[1])
}
## xl\
## xl\workbook
if(length(workbookXML) > 0){
workbook <- readLines(workbookXML, warn=FALSE, encoding="UTF-8")
workbook <- removeHeadTag(workbook)
sheets <- unlist(regmatches(workbook, gregexpr("<sheet .*/sheets>", workbook, perl = TRUE)))
## sheetId is meaningless
## sheet rId links to workbook.xml.rels, which links to the worksheets/sheet(i).xml file
## the order they appear here gives the order of the worksheets in the xlsx file
sheetrId <- unlist(getRId(sheets))
sheetId <- unlist(regmatches(sheets, gregexpr('(?<=sheetId=")[0-9]+', sheets, perl = TRUE)))
sheetNames <- unlist(regmatches(sheets, gregexpr('(?<=name=")[^"]+', sheets, perl = TRUE)))
is_chart_sheet <- sheetrId %in% chartSheetRIds
is_visible <- !grepl("hidden", unlist(strsplit(sheets, split = "<sheet "))[-1])
if(length(is_visible) != length(sheetrId))
is_visible <- rep(TRUE, length(sheetrId))
## add worksheets to wb
j <- 1
for(i in 1:length(sheetrId)){
if(is_chart_sheet[i]){
count <- 0
txt <- paste(readLines(chartSheetsXML[j], warn = FALSE, encoding = "UTF-8"), collapse = "")
zoom <- regmatches(txt, regexpr('(?<=zoomScale=")[0-9]+', txt, perl = TRUE))
if(length(zoom) == 0)
zoom <- 100
tabColour <- getChildlessNode(xml = txt, tag = "<tabColor ")
if(length(tabColour) == 0)
tabColour <- NULL
j <- j + 1L
wb$addChartSheet(sheetName = sheetNames[i], tabColour = tabColour, zoom = as.numeric(zoom))
}else{
wb$addWorksheet(sheetNames[i], visible = is_visible[i])
}
}
## replace sheetId
for(i in 1:nSheets)
wb$workbook$sheets[[i]] <- gsub(sprintf(' sheetId="%s"', i), sprintf(' sheetId="%s"', sheetId[i]), wb$workbook$sheets[[i]])
## additional workbook attributes
calcPr <- getChildlessNode(xml = workbook, tag = "<calcPr ")
if(length(calcPr) > 0)
wb$workbook$calcPr <- calcPr
workbookPr <- getChildlessNode(xml = workbook, tag = "<workbookPr ")
if(length(calcPr) > 0)
wb$workbook$workbookPr <- workbookPr
## defined Names
dNames <- getNodes(xml = workbook, tagIn = "<definedNames>")
if(length(dNames) > 0){
dNames <- gsub("^<definedNames>|</definedNames>$", "", dNames)
wb$workbook$definedNames <- paste0(getNodes(xml = dNames, tagIn = "<definedName"), ">")
}
}
## xl\sharedStrings
if(length(sharedStringsXML) > 0){
sharedStrings <- readLines(sharedStringsXML, warn = FALSE, encoding = "UTF-8")
sharedStrings <- paste(sharedStrings, collapse = "\n")
sharedStrings <- removeHeadTag(sharedStrings)
uniqueCount <- as.integer(regmatches(sharedStrings, regexpr('(?<=uniqueCount=")[0-9]+', sharedStrings, perl = TRUE)))
## read in and get <si> nodes
vals <- getNodes(xml = sharedStrings, tagIn = "<si>")
if("<si><t/></si>" %in% vals){
vals[vals == "<si><t/></si>"] <- "<si><t>NA</t></si>"
Encoding(vals) <- "UTF-8"
attr(vals, "uniqueCount") <- uniqueCount - 1L
}else{
Encoding(vals) <- "UTF-8"
attr(vals, "uniqueCount") <- uniqueCount
}
wb$sharedStrings <- vals
}
## xl\pivotTables & xl\pivotCache
if(length(pivotTableXML) > 0){
# pivotTable cacheId links to workbook.xml which links to workbook.xml.rels via rId
# we don't modify the cacheId, only the rId
nPivotTables <- length(pivotTableXML)
rIds <- 20000L + 1:nPivotTables
## pivot tables
pivotTableXML <- pivotTableXML[order(nchar(pivotTableXML), pivotTableXML)]
pivotTableRelsXML <- pivotTableRelsXML[order(nchar(pivotTableRelsXML), pivotTableRelsXML)]
## Cache
pivotDefXML <- pivotDefXML[order(nchar(pivotDefXML), pivotDefXML)]
pivotDefRelsXML <- pivotDefRelsXML[order(nchar(pivotDefRelsXML), pivotDefRelsXML)]
pivotCacheRecords <- pivotCacheRecords[order(nchar(pivotCacheRecords), pivotCacheRecords)]
wb$pivotDefinitionsRels <- character(nPivotTables)
pivot_content_type <- NULL
if(length(pivotTableRelsXML) > 0)
wb$pivotTables.xml.rels <- unlist(lapply(pivotTableRelsXML, function(x) removeHeadTag(cppReadFile(x))))
## Check which caches are used
cache_keep <- unlist(regmatches(wb$pivotTables.xml.rels, gregexpr("(?<=pivotCache/pivotCacheDefinition)[0-9]+(?=\\.xml)",
wb$pivotTables.xml.rels, perl = TRUE, ignore.case = TRUE)))
## pivot cache records
tmp <- unlist(regmatches(pivotCacheRecords, gregexpr("(?<=pivotCache/pivotCacheRecords)[0-9]+(?=\\.xml)", pivotCacheRecords, perl = TRUE, ignore.case = TRUE)))
pivotCacheRecords <- pivotCacheRecords[tmp %in% cache_keep]
## pivot cache definitions rels
tmp <- unlist(regmatches(pivotDefRelsXML, gregexpr("(?<=_rels/pivotCacheDefinition)[0-9]+(?=\\.xml)", pivotDefRelsXML, perl = TRUE, ignore.case = TRUE)))
pivotDefRelsXML <- pivotDefRelsXML[tmp %in% cache_keep]
## pivot cache definitions
tmp <- unlist(regmatches(pivotDefXML, gregexpr("(?<=pivotCache/pivotCacheDefinition)[0-9]+(?=\\.xml)", pivotDefXML, perl = TRUE, ignore.case = TRUE)))
pivotDefXML <- pivotDefXML[tmp %in% cache_keep]
if(length(pivotTableXML) > 0){
wb$pivotTables[1:length(pivotTableXML)] <- pivotTableXML
pivot_content_type <- c(pivot_content_type,
sprintf('<Override PartName="/xl/pivotTables/pivotTable%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.pivotTable+xml"/>', 1:length(pivotTableXML)))
}
if(length(pivotDefXML) > 0){
wb$pivotDefinitions[1:length(pivotDefXML)] <- pivotDefXML
pivot_content_type <- c(pivot_content_type,
sprintf('<Override PartName="/xl/pivotCache/pivotCacheDefinition%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.pivotCacheDefinition+xml"/>', 1:length(pivotDefXML)))
}
if(length(pivotCacheRecords) > 0){
wb$pivotRecords[1:length(pivotCacheRecords)] <- pivotCacheRecords
pivot_content_type <- c(pivot_content_type,
sprintf('<Override PartName="/xl/pivotCache/pivotCacheRecords%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.pivotCacheRecords+xml"/>', 1:length(pivotCacheRecords)))
}
if(length(pivotDefRelsXML) > 0)
wb$pivotDefinitionsRels[1:length(pivotDefRelsXML)] <- pivotDefRelsXML
## update content_types
wb$Content_Types <- c(wb$Content_Types, pivot_content_type)
## workbook rels
wb$workbook.xml.rels <- c(wb$workbook.xml.rels,
sprintf('<Relationship Id="rId%s" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/pivotCacheDefinition" Target="pivotCache/pivotCacheDefinition%s.xml"/>', rIds, 1:length(pivotDefXML))
)
caches <- getNodes(xml = workbook, tagIn = "<pivotCaches>")
caches <- getChildlessNode(xml = caches, tag = "<pivotCache ")
for(i in 1:length(caches))
caches[i] <- gsub('"rId[0-9]+"', sprintf('"rId%s"', rIds[i]), caches[i])
wb$workbook$pivotCaches <- paste0('<pivotCaches>', paste(caches, collapse = ""), '</pivotCaches>')
}
## xl\vbaProject
if(length(vbaProject) > 0){
wb$vbaProject <- vbaProject
wb$Content_Types[grepl('<Override PartName="/xl/workbook.xml" ', wb$Content_Types)] <- '<Override PartName="/xl/workbook.xml" ContentType="application/vnd.ms-excel.sheet.macroEnabled.main+xml"/>'
wb$Content_Types <- c(wb$Content_Types, '<Override PartName="/xl/vbaProject.bin" ContentType="application/vnd.ms-office.vbaProject"/>')
}
## xl\styles
if(length(stylesXML) > 0){
styleObjects <- wb$loadStyles(stylesXML)
}else{
styleObjects <- list()
}
## xl\media
if(length(media) > 0){
mediaNames <- regmatches(media, regexpr("image[0-9]+\\.[a-z]+$", media))
fileTypes <- unique(gsub("image[0-9]+\\.", "", mediaNames))
contentNodes <- sprintf('<Default Extension="%s" ContentType="image/%s"/>', fileTypes, fileTypes)
contentNodes[fileTypes == "emf"] <- '<Default Extension="emf" ContentType="image/x-emf"/>'
wb$Content_Types <- c(contentNodes, wb$Content_Types)
names(media) <- mediaNames
wb$media <- media
}
## xl\chart
if(length(charts) > 0){
chartNames <- basename(charts)
nCharts <- sum(grepl("chart[0-9]+.xml", chartNames))
nChartStyles <- sum(grepl("style[0-9]+.xml", chartNames))
nChartCol <- sum(grepl("colors[0-9]+.xml", chartNames))
if(nCharts > 0)
wb$Content_Types <- c(wb$Content_Types, sprintf('<Override PartName="/xl/charts/chart%s.xml" ContentType="application/vnd.openxmlformats-officedocument.drawingml.chart+xml"/>', 1:nCharts))
if(nChartStyles > 0)
wb$Content_Types <- c(wb$Content_Types, sprintf('<Override PartName="/xl/charts/style%s.xml" ContentType="application/vnd.ms-office.chartstyle+xml"/>', 1:nChartStyles))
if(nChartCol > 0)
wb$Content_Types <- c(wb$Content_Types, sprintf('<Override PartName="/xl/charts/colors%s.xml" ContentType="application/vnd.ms-office.chartcolorstyle+xml"/>', 1:nChartCol))
if(length(chartsRels)){
charts <- c(charts, chartsRels)
chartNames <- c(chartNames, file.path("_rels", basename(chartsRels)))
}
names(charts) <- chartNames
wb$charts <- charts
}
## xl\theme
if(length(themeXML) > 0)
wb$theme <- removeHeadTag(paste(unlist(lapply(sort(themeXML)[[1]], function(x) readLines(x, warn = FALSE, encoding = "UTF-8"))), collapse = ""))
## externalLinks
if(length(extLinksXML) > 0){
wb$externalLinks <- lapply(sort(extLinksXML), function(x) removeHeadTag(cppReadFile(x)))
wb$Content_Types <-c(wb$Content_Types,
sprintf('<Override PartName="/xl/externalLinks/externalLink%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.externalLink+xml"/>', 1:length(extLinksXML)))
wb$workbook.xml.rels <- c(wb$workbook.xml.rels, sprintf('<Relationship Id="rId%s" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/externalLink" Target="externalLinks/externalLink%s.xml"/>',
1:length(extLinksXML), 1:length(extLinksXML)))
}
## externalLinksRels
if(length(extLinksRelsXML) > 0)
wb$externalLinksRels <- lapply(sort(extLinksRelsXML), function(x) removeHeadTag(cppReadFile(x)))
##*----------------------------------------------------------------------------------------------*##
### BEGIN READING IN WORKSHEET DATA
##*----------------------------------------------------------------------------------------------*##
## xl\worksheets
file_names <- regmatches(worksheet_rId_mapping, regexpr("sheet[0-9]+\\.xml", worksheet_rId_mapping, perl = TRUE))
file_rIds <- unlist(getId(worksheet_rId_mapping))
file_names <- file_names[match(sheetrId, file_rIds)]
worksheetsXML <- file.path(dirname(worksheetsXML), file_names)
wb <- loadworksheets(wb = wb, styleObjects = styleObjects, xmlFiles = worksheetsXML, is_chart_sheet = is_chart_sheet)
## Fix styleobject encoding
if(length(wb$styleObjects) > 0){
style_names <- sapply(wb$styleObjects, "[[", "sheet")
Encoding(style_names) <- "UTF-8"
wb$styleObjects <- lapply(1:length(style_names), function(i) {wb$styleObjects[[i]]$sheet = style_names[[i]]; wb$styleObjects[[i]]})
}
## Fix headers/footers
for(i in 1:length(worksheetsXML)){
if(!is_chart_sheet[i]){
if(length(wb$worksheets[[i]]$headerFooter) > 0)
wb$worksheets[[i]]$headerFooter <- lapply(wb$worksheets[[i]]$headerFooter, splitHeaderFooter)
}
}
##*----------------------------------------------------------------------------------------------*##
### READING IN WORKSHEET DATA COMPLETE
##*----------------------------------------------------------------------------------------------*##
## Next, use the sheet rels to see which drawings_rels belongs to which sheet
if(length(sheetRelsXML) > 0){
## sheetrId is order sheet appears in xlsx file
## create a 1-1 vector of rels to worksheet
## haveRels is a boolean vector whose i-th element is TRUE/FALSE according to whether sheet i has a rels file
if(length(chartSheetsXML) == 0){
allRels <- file.path(dirname(sheetRelsXML[1]), paste0(file_names, ".rels"))
haveRels <- allRels %in% sheetRelsXML
}else{
haveRels <- rep(FALSE, length(wb$worksheets))
allRels <- rep("", length(wb$worksheets))
for(i in 1:nSheets){
if(is_chart_sheet[i]){
ind <- which(chartSheetRIds == sheetrId[i])
rels_file <- file.path(chartSheetsRelsDir, paste0(chartsheet_rId_mapping[ind], ".rels"))
}else{
ind <- sheetrId[i]
rels_file <- file.path(xmlDir, "xl", "worksheets", "_rels", paste0(file_names[i], ".rels"))
}
if(file.exists(rels_file)){
allRels[i] <- rels_file
haveRels[i] <- TRUE
}
}
}
## sheet.xml have been reordered to be in the order of sheetrId
## not every sheet has a worksheet rels
xml <- lapply(1:length(allRels), function(i) {
if(haveRels[i]){
xml <- readLines(allRels[[i]], warn = FALSE, encoding = "UTF-8")
xml <- removeHeadTag(xml)
xml <- gsub("<Relationships .*?>", "", xml)
xml <- gsub("</Relationships>", "", xml)
xml <- getChildlessNode(xml = xml, tag = "<Relationship ")
}else{
xml <- "<Relationship >"
}
return(xml)
})
############################################################################################
############################################################################################
## Slicers
if(length(slicerXML) > 0){
slicerXML <- slicerXML[order(nchar(slicerXML), slicerXML)]
slicersFiles <- lapply(xml, function(x) as.integer(regmatches(x, regexpr("(?<=slicer)[0-9]+(?=\\.xml)", x, perl = TRUE))))
inds <- sapply(slicersFiles, length) > 0
## worksheet_rels Id for slicer will be rId0
k <- 1L
wb$slicers <- rep("", nSheets)
for(i in 1:nSheets){
## read in slicer[j].XML sheets into sheet[i]
if(inds[i]){
wb$slicers[[i]] <- slicerXML[k]
k <- k + 1L
wb$worksheets_rels[[i]] <- unlist(c(wb$worksheets_rels[[i]],
sprintf('<Relationship Id="rId0" Type="http://schemas.microsoft.com/office/2007/relationships/slicer" Target="../slicers/slicer%s.xml"/>', i)))
wb$Content_Types <- c(wb$Content_Types,
sprintf('<Override PartName="/xl/slicers/slicer%s.xml" ContentType="application/vnd.ms-excel.slicer+xml"/>', i))
slicer_xml_exists <- FALSE
## Append slicer to worksheet extLst
if(length(wb$worksheets[[i]]$extLst) > 0){
if(grepl('x14:slicer r:id="rId[0-9]+"', wb$worksheets[[i]]$extLst)){
wb$worksheets[[i]]$extLst <- sub('x14:slicer r:id="rId[0-9]+"', 'x14:slicer r:id="rId0"', wb$worksheets[[i]]$extLst)
slicer_xml_exists <- TRUE
}
}
if(!slicer_xml_exists)
wb$worksheets[[i]]$extLst <- c(wb$worksheets[[i]]$extLst, genBaseSlicerXML())
}
}
}
if(length(slicerCachesXML) > 0){
## ---- slicerCaches
inds <- 1:length(slicerCachesXML)
wb$Content_Types <- c(wb$Content_Types, sprintf('<Override PartName="/xl/slicerCaches/slicerCache%s.xml" ContentType="application/vnd.ms-excel.slicerCache+xml"/>', inds))
wb$slicerCaches <- sapply(slicerCachesXML[order(nchar(slicerCachesXML), slicerCachesXML)], function(x) removeHeadTag(cppReadFile(x)))
wb$workbook.xml.rels <- c(wb$workbook.xml.rels, sprintf('<Relationship Id="rId%s" Type="http://schemas.microsoft.com/office/2007/relationships/slicerCache" Target="slicerCaches/slicerCache%s.xml"/>', 1E5 + inds, inds))
wb$workbook$extLst <- c(wb$workbook$extLst, genSlicerCachesExtLst(1E5 + inds))
}
############################################################################################
############################################################################################
## tables
if(length(tablesXML) > 0){
tables <- lapply(xml, function(x) as.integer(regmatches(x, regexpr("(?<=table)[0-9]+(?=\\.xml)", x, perl = TRUE))))
tableSheets <- unlist(lapply(1:length(sheetrId), function(i) rep(i, length(tables[[i]]))))
if(length(unlist(tables)) > 0){
## get the tables that belong to each worksheet and create a worksheets_rels for each
tCount <- 2L ## table r:Ids start at 3
for(i in 1:length(tables)){
if(length(tables[[i]]) > 0){
k <- 1:length(tables[[i]]) + tCount
wb$worksheets_rels[[i]] <- unlist(c(wb$worksheets_rels[[i]],
sprintf('<Relationship Id="rId%s" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/table" Target="../tables/table%s.xml"/>', k, k)))
wb$worksheets[[i]]$tableParts <- sprintf("<tablePart r:id=\"rId%s\"/>", k)
tCount <- tCount + length(k)
}
}
## sort the tables into the order they appear in the xml and tables variables
names(tablesXML) <- basename(tablesXML)
tablesXML <- tablesXML[sprintf("table%s.xml", unlist(tables))]
## tables are now in correct order so we can read them in as they are
wb$tables <- sapply(tablesXML, function(x) removeHeadTag(paste(readLines(x, warn = FALSE), collapse = "")))
## pull out refs and attach names
refs <- regmatches(wb$tables, regexpr('(?<=ref=")[0-9A-Z:]+', wb$tables, perl = TRUE))
names(wb$tables) <- refs
wb$Content_Types <- c(wb$Content_Types, sprintf('<Override PartName="/xl/tables/table%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.table+xml"/>', 1:length(wb$tables)+2))
## relabel ids
for(i in 1:length(wb$tables)){
newId <- sprintf(' id="%s" ', i+2)
wb$tables[[i]] <- sub(' id="[0-9]+" ' , newId, wb$tables[[i]])
}
displayNames <- unlist(regmatches(wb$tables, regexpr('(?<=displayName=").*?[^"]+', wb$tables, perl = TRUE)))
if(length(displayNames) != length(tablesXML))
displayNames <- paste0("Table", 1:length(tablesXML))
attr(wb$tables, "sheet") <- tableSheets
attr(wb$tables, "tableName") <- displayNames
for(i in 1:length(tableSheets)){
table_sheet_i <- tableSheets[i]
attr(wb$worksheets[[table_sheet_i]]$tableParts, "tableName") <- c(attr(wb$worksheets[[table_sheet_i]]$tableParts, "tableName"), displayNames[i])
}
}
} ## if(length(tablesXML) > 0)
## might we have some external hyperlinks
if(any(sapply(wb$worksheets[!is_chart_sheet], function(x) length(x$hyperlinks) > 0))){
## Do we have external hyperlinks
hlinks <- lapply(xml, function(x) x[grepl("hyperlink", x) & grepl("External", x)])
hlinksInds <- which(sapply(hlinks, length) > 0)
## If it's an external hyperlink it will have a target in the sheet_rels
if(length(hlinksInds) > 0){
for(i in hlinksInds){
ids <- unlist(lapply(hlinks[[i]], function(x) regmatches(x, gregexpr('(?<=Id=").*?"', x, perl = TRUE))[[1]]))
ids <- gsub('"$', "", ids)
targets <- unlist(lapply(hlinks[[i]], function(x) regmatches(x, gregexpr('(?<=Target=").*?"', x, perl = TRUE))[[1]]))
targets <- gsub('"$', "", targets)
ids2 <- lapply(wb$worksheets[[i]]$hyperlinks, function(x) regmatches(x, gregexpr('(?<=r:id=").*?"', x, perl = TRUE))[[1]])
ids2[sapply(ids2, length) == 0] <- NA
ids2 <- gsub('"$', "", unlist(ids2))
targets <- targets[match(ids2, ids)]
names(wb$worksheets[[i]]$hyperlinks) <- targets
}
}
}
############################################################################################
############################################################################################
## drawings
## xml is in the order of the sheets; drawings are matched to sheet positions via hasDrawing
## Not every sheet has a drawing.xml
drawXMLrelationship <- lapply(xml, function(x) x[grepl("drawings/drawing", x)])
hasDrawing <- sapply(drawXMLrelationship, length) > 0 ## which sheets have a drawing
if(length(drawingRelsXML) > 0){
dRels <- lapply(drawingRelsXML, readLines, warn = FALSE)
dRels <- unlist(lapply(dRels, removeHeadTag))
dRels <- gsub("<Relationships .*?>", "", dRels)
dRels <- gsub("</Relationships>", "", dRels)
}
if(length(drawingsXML) > 0){
dXML <- lapply(drawingsXML, readLines, warn = FALSE, encoding = "UTF-8")
dXML <- unlist(lapply(dXML, removeHeadTag))
dXML <- gsub("<xdr:wsDr .*?>", "", dXML)
dXML <- gsub("</xdr:wsDr>", "", dXML)
# ptn1 <- "<(mc:AlternateContent|xdr:oneCellAnchor|xdr:twoCellAnchor|xdr:absoluteAnchor)"
# ptn2 <- "</(mc:AlternateContent|xdr:oneCellAnchor|xdr:twoCellAnchor|xdr:absoluteAnchor)>"
## split at one/two cell Anchor
# dXML <- regmatches(dXML, gregexpr(paste0(ptn1, ".*?", ptn2), dXML))
}
## loop over all worksheets and assign drawing to sheet
if(any(hasDrawing)){
for(i in 1:length(xml)){
if(hasDrawing[i]){
target <- unlist(lapply(drawXMLrelationship[[i]], function(x) regmatches(x, gregexpr('(?<=Target=").*?"', x, perl = TRUE))[[1]]))
target <- basename(gsub('"$', "", target))
## sheet_i has which(hasDrawing)[[i]]
relsInd <- grepl(target, drawingRelsXML)
if(any(relsInd))
wb$drawings_rels[i] <- dRels[relsInd]
drawingInd <- grepl(target, drawingsXML)
if(any(drawingInd))
wb$drawings[i] <- dXML[drawingInd]
}
}
}
############################################################################################
############################################################################################
## VML drawings
if(length(vmlDrawingXML) > 0){
wb$Content_Types <- c(wb$Content_Types, '<Default Extension="vml" ContentType="application/vnd.openxmlformats-officedocument.vmlDrawing"/>')
drawXMLrelationship <- lapply(xml, function(x) x[grepl("drawings/vmlDrawing", x)])
hasDrawing <- sapply(drawXMLrelationship, length) > 0 ## which sheets have a drawing
## loop over all worksheets and assign drawing to sheet
if(any(hasDrawing)){
for(i in 1:length(xml)){
if(hasDrawing[i]){
target <- unlist(lapply(drawXMLrelationship[[i]], function(x) regmatches(x, gregexpr('(?<=Target=").*?"', x, perl = TRUE))[[1]]))
target <- basename(gsub('"$', "", target))
ind <- grepl(target, vmlDrawingXML)
if(any(ind)){
txt <- paste(readLines(vmlDrawingXML[ind], warn = FALSE), collapse = "\n")
txt <- removeHeadTag(txt)
i1 <- regexpr("<v:shapetype", txt, fixed = TRUE)
i2 <- regexpr("</xml>", txt, fixed = TRUE)
wb$vml[[i]] <- substring(text = txt, first = i1, last = (i2 - 1L))
relsInd <- grepl(target, vmlDrawingRelsXML)
if(any(relsInd))
wb$vml_rels[i] <- vmlDrawingRelsXML[relsInd]
}
}
}
}
}
## vmlDrawing and comments
if(length(commentsXML) > 0){
drawXMLrelationship <- lapply(xml, function(x) x[grepl("drawings/vmlDrawing[0-9]+\\.vml", x)])
hasDrawing <- sapply(drawXMLrelationship, length) > 0 ## which sheets have a drawing
commentXMLrelationship <- lapply(xml, function(x) x[grepl("comments[0-9]+\\.xml", x)])
hasComment <- sapply(commentXMLrelationship, length) > 0 ## which sheets have a comment
for(i in 1:length(xml)){
if(hasComment[i]){
target <- unlist(lapply(drawXMLrelationship[[i]], function(x) regmatches(x, gregexpr('(?<=Target=").*?"', x, perl = TRUE))[[1]]))
target <- basename(gsub('"$', "", target))
ind <- grepl(target, vmlDrawingXML)
if(any(ind)){
txt <- paste(readLines(vmlDrawingXML[ind], warn = FALSE), collapse = "\n")
txt <- removeHeadTag(txt)
cd <- unique(getNodes(xml = txt, tagIn = "<x:ClientData"))
cd <- cd[grepl('ObjectType="Note"', cd)]
cd <- paste0(cd, ">")
## now load the comments
target <- unlist(lapply(commentXMLrelationship[[i]], function(x) regmatches(x, gregexpr('(?<=Target=").*?"', x, perl = TRUE))[[1]]))
target <- basename(gsub('"$', "", target))
txt <- paste(readLines(commentsXML[grepl(target, commentsXML)], warn = FALSE), collapse = "\n")
txt <- removeHeadTag(txt)
authors <- getNodes(xml = txt, tagIn = "<author>")
authors <- gsub("<author>|</author>", "", authors)
comments <- getNodes(xml = txt, tagIn = "<commentList>")
comments <- gsub( "<commentList>", "", comments)
comments <- getNodes(xml = comments, tagIn = "<comment")
refs <- regmatches(comments, regexpr('(?<=ref=").*?[^"]+', comments, perl = TRUE))
authorsInds <- as.integer(regmatches(comments, regexpr('(?<=authorId=").*?[^"]+', comments, perl = TRUE))) + 1
authors <- authors[authorsInds]
style <- lapply(comments, getNodes, tagIn = "<rPr>")
comments <- regmatches(comments, gregexpr('(?<=<t( |>)).*?[^/]+', comments, perl = TRUE))
comments <- lapply(comments, function(x) gsub("<", "", x))
comments <- lapply(comments, function(x) gsub(".*?>", "", x, perl = TRUE))
wb$comments[[i]] <- lapply(1:length(comments), function(j){
comment_list <- list("ref" = refs[j],
"author" = authors[j],
"comment" = comments[[j]],
"style" = style[[j]],
"clientData" = cd[[j]])
})
}
}
}
}
## rels image
drawXMLrelationship <- lapply(xml, function(x) x[grepl("relationships/image", x)])
hasDrawing <- sapply(drawXMLrelationship, length) > 0 ## which sheets have a drawing
if(any(hasDrawing)){
for(i in 1:length(xml)){
if(hasDrawing[i]){
image_ids <- unlist(getId(drawXMLrelationship[[i]]))
new_image_ids <- paste0("rId", 1:length(image_ids) + 70000)
for(j in 1:length(image_ids)){
wb$worksheets[[i]]$oleObjects <- gsub(image_ids[j], new_image_ids[j], wb$worksheets[[i]]$oleObjects, fixed = TRUE)
wb$worksheets_rels[[i]] <- c(wb$worksheets_rels[[i]], gsub(image_ids[j], new_image_ids[j], drawXMLrelationship[[i]][j], fixed = TRUE)
)
}
}
}
}
## rels package (embedded documents)
drawXMLrelationship <- lapply(xml, function(x) x[grepl("relationships/package", x)])
hasDrawing <- sapply(drawXMLrelationship, length) > 0 ## which sheets have a drawing
if(any(hasDrawing)){
for(i in 1:length(xml)){
if(hasDrawing[i]){
image_ids <- unlist(getId(drawXMLrelationship[[i]]))
new_image_ids <- paste0("rId", 1:length(image_ids) + 90000)
for(j in 1:length(image_ids)){
wb$worksheets[[i]]$oleObjects <- gsub(image_ids[j], new_image_ids[j], wb$worksheets[[i]]$oleObjects, fixed = TRUE)
wb$worksheets_rels[[i]] <- c(wb$worksheets_rels[[i]],
sprintf("<Relationship Id=\"%s\" Type=\"http://schemas.openxmlformats.org/officeDocument/2006/relationships/package\" Target=\"../embeddings/Microsoft_Word_Document1.docx\"/>", new_image_ids[j])
)
}
}
}
}
## Embedded docx
if(length(embeddings) > 0){
wb$Content_Types <- c(wb$Content_Types, '<Default Extension="docx" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document"/>')
wb$embeddings <- embeddings
}
## pivot tables
if(length(pivotTableXML) > 0){
pivotTableJ <- lapply(xml, function(x) as.integer(regmatches(x, regexpr("(?<=pivotTable)[0-9]+(?=\\.xml)", x, perl = TRUE))))
sheetWithPivot <- which(sapply(pivotTableJ, length) > 0)
pivotRels <- lapply(xml, function(x) {y <- x[grepl("pivotTable", x)]; y[order(nchar(y), y)]})
hasPivot <- sapply(pivotRels, length) > 0
## Modify rIds
for(i in 1:length(pivotRels)){
if(hasPivot[i]){
for(j in 1:length(pivotRels[[i]]))
pivotRels[[i]][j] <- gsub('"rId[0-9]+"', sprintf('"rId%s"', 20000L + j), pivotRels[[i]][j])
wb$worksheets_rels[[i]] <- c(wb$worksheets_rels[[i]] , pivotRels[[i]])
}
}
## remove any workbook_res references to pivot tables that are not being used in worksheet_rels
inds <- 1:length(wb$pivotTables.xml.rels)
fileNo <- as.integer(unlist(regmatches(unlist(wb$worksheets_rels), gregexpr('(?<=pivotTable)[0-9]+(?=\\.xml)', unlist(wb$worksheets_rels), perl = TRUE))))
inds <- inds[!inds %in% fileNo]
if(length(inds) > 0){
toRemove <- paste(sprintf("(pivotCacheDefinition%s\\.xml)", inds), collapse = "|")
fileNo <- which(grepl(toRemove, wb$pivotTables.xml.rels))
toRemove <- paste(sprintf("(pivotCacheDefinition%s\\.xml)", fileNo), collapse = "|")
## remove reference to file from workbook.xml.res
wb$workbook.xml.rels <- wb$workbook.xml.rels[!grepl(toRemove, wb$workbook.xml.rels)]
}
}
} ## end of worksheetRels
## convert hyperlinks to hyperlink objects
for(i in 1:nSheets)
wb$worksheets[[i]]$hyperlinks <- xml_to_hyperlink(wb$worksheets[[i]]$hyperlinks)
## queryTables
if(length(queryTablesXML) > 0){
ids <- as.numeric(regmatches(queryTablesXML, regexpr("[0-9]+(?=\\.xml)", queryTablesXML, perl = TRUE)))
wb$queryTables <- unlist(lapply(queryTablesXML[order(ids)], function(x) removeHeadTag(cppReadFile(xmlFile = x))))
wb$Content_Types <- c(wb$Content_Types,
sprintf('<Override PartName="/xl/queryTables/queryTable%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.queryTable+xml"/>', 1:length(queryTablesXML)))
}
## connections
if(length(connectionsXML) > 0){
wb$connections <- removeHeadTag(cppReadFile(xmlFile = connectionsXML))
wb$workbook.xml.rels <- c(wb$workbook.xml.rels, '<Relationship Id="rId3" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/connections" Target="connections.xml"/>')
wb$Content_Types <- c(wb$Content_Types, '<Override PartName="/xl/connections.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.connections+xml"/>')
}
## table rels
if(length(tableRelsXML) > 0){
## table(i).xml might have a table(i).xml.rels, but the tables are re-ordered to be in worksheet order
## every table is given a table rels here, so any gaps from missing table rels files need to be filled in
tmp <- paste0(basename(tablesXML), ".rels")
hasRels <- tmp %in% basename(tableRelsXML)
## order tableRelsXML
tableRelsXML <- tableRelsXML[match(tmp[hasRels], basename(tableRelsXML))]
##
wb$tables.xml.rels <- character(length=length(tablesXML))
## which sheet does it belong to
xml <- sapply(tableRelsXML, cppReadFile, USE.NAMES = FALSE)
xml <- sapply(xml, removeHeadTag, USE.NAMES = FALSE)
wb$tables.xml.rels[hasRels] <- xml
}else if(length(tablesXML) > 0){
wb$tables.xml.rels <- rep("", length(tablesXML))
}
return(wb)
}
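## ---------------------------------------------------------------------------
## Illustrative note (not part of the package; data below is made up): most of
## the parsing above uses one idiom -- a perl-compatible lookbehind/lookahead
## regex combined with regmatches()/gregexpr() to pull attribute values out of
## raw XML strings. A minimal sketch of the pattern used for sheet names:
##
##   sheets <- '<sheet name="Data" sheetId="1"/><sheet name="Plot" sheetId="2"/>'
##   unlist(regmatches(sheets, gregexpr('(?<=name=")[^"]+', sheets, perl = TRUE)))
##   ## c("Data", "Plot")
## ---------------------------------------------------------------------------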
| /R/loadWorkbook.R | no_license | tkunstek/openxlsx | R | false | false | 39,410 | r |
#' @name loadWorkbook
#' @title Load an existing .xlsx file
#' @author Alexander Walker
#' @param file A path to an existing .xlsx or .xlsm file
#' @param xlsxFile alias for file
#' @description loadWorkbook returns a workbook object preserving the styles and
#' formatting of the original .xlsx file.
#' @return Workbook object.
#' @export
#' @seealso \code{\link{removeWorksheet}}
#' @examples
#' ## load existing workbook from package folder
#' wb <- loadWorkbook(file = system.file("loadExample.xlsx", package= "openxlsx"))
#' names(wb) #list worksheets
#' wb ## view object
#' ## Add a worksheet
#' addWorksheet(wb, "A new worksheet")
#'
#' ## Save workbook
#' saveWorkbook(wb, "loadExample.xlsx", overwrite = TRUE)
loadWorkbook <- function(file, xlsxFile = NULL){
if(!is.null(xlsxFile))
file <- xlsxFile
file <- getFile(file)
if(!file.exists(file))
stop("File does not exist.")
vbaProject <- xmlFiles[grepl("vbaProject\\.bin$", xmlFiles, perl = TRUE)]
## remove all EXCEPT media and charts
on.exit(expr = unlink(xmlFiles[!grepl("charts|media|vmlDrawing|comment|embeddings|pivot|slicer|vbaProject", xmlFiles, ignore.case = TRUE)], recursive = TRUE, force = TRUE), add = TRUE)
nSheets <- length(worksheetsXML) + length(chartSheetsXML)
## get rid of chartsheets; these do not have a worksheets/sheet(i).xml
worksheet_rId_mapping <- NULL
workbookRelsXML <- xmlFiles[grepl("workbook.xml.rels$", xmlFiles, perl = TRUE)]
if(length(workbookRelsXML) > 0){
workbookRelsXML <- paste(readLines(con = workbookRelsXML, encoding="UTF-8", warn = FALSE), collapse = "")
workbookRelsXML <- getChildlessNode(xml = workbookRelsXML, tag = "<Relationship ")
worksheet_rId_mapping <- workbookRelsXML[grepl("worksheets/sheet", workbookRelsXML, fixed = TRUE)]
}
##
chartSheetRIds <- NULL
if(length(chartSheetsXML) > 0){
workbookRelsXML <- workbookRelsXML[grepl("chartsheets/sheet", workbookRelsXML, fixed = TRUE)]
chartSheetRIds <- unlist(getId(workbookRelsXML))
chartsheet_rId_mapping <- unlist(regmatches(workbookRelsXML, gregexpr('sheet[0-9]+\\.xml', workbookRelsXML, perl = TRUE, ignore.case = TRUE)))
sheetNo <- as.integer(regmatches(chartSheetsXML, regexpr("(?<=sheet)[0-9]+(?=\\.xml)", chartSheetsXML, perl = TRUE)))
chartSheetsXML <- chartSheetsXML[order(sheetNo)]
chartSheetsRelsXML <- xmlFiles[grepl("xl/chartsheets/_rels", xmlFiles, perl = TRUE)]
sheetNo2 <- as.integer(regmatches(chartSheetsRelsXML, regexpr("(?<=sheet)[0-9]+(?=\\.xml\\.rels)", chartSheetsRelsXML, perl = TRUE)))
chartSheetsRelsXML <- chartSheetsRelsXML[order(sheetNo2)]
chartSheetsRelsDir <- dirname(chartSheetsRelsXML[1])
}
## xl\
## xl\workbook
if(length(workbookXML) > 0){
workbook <- readLines(workbookXML, warn=FALSE, encoding="UTF-8")
workbook <- removeHeadTag(workbook)
sheets <- unlist(regmatches(workbook, gregexpr("<sheet .*/sheets>", workbook, perl = TRUE)))
## sheetId is meaningless
## sheet rId links to the workbook.xml.rels which links to the worksheets/sheet(i).xml file
## order they appear here gives order of worksheets in xlsx file
sheetrId <- unlist(getRId(sheets))
sheetId <- unlist(regmatches(sheets, gregexpr('(?<=sheetId=")[0-9]+', sheets, perl = TRUE)))
sheetNames <- unlist(regmatches(sheets, gregexpr('(?<=name=")[^"]+', sheets, perl = TRUE)))
is_chart_sheet <- sheetrId %in% chartSheetRIds
is_visible <- !grepl("hidden", unlist(strsplit(sheets, split = "<sheet "))[-1])
if(length(is_visible) != length(sheetrId))
is_visible <- rep(TRUE, length(sheetrId))
## add worksheets to wb
j <- 1
for(i in 1:length(sheetrId)){
if(is_chart_sheet[i]){
count <- 0
txt <- paste(readLines(chartSheetsXML[j], warn = FALSE, encoding = "UTF-8"), collapse = "")
zoom <- regmatches(txt, regexpr('(?<=zoomScale=")[0-9]+', txt, perl = TRUE))
if(length(zoom) == 0)
zoom <- 100
tabColour <- getChildlessNode(xml = txt, tag = "<tabColor ")
if(length(tabColour) == 0)
tabColour <- NULL
j <- j + 1L
wb$addChartSheet(sheetName = sheetNames[i], tabColour = tabColour, zoom = as.numeric(zoom))
}else{
wb$addWorksheet(sheetNames[i], visible = is_visible[i])
}
}
## replace sheetId
for(i in 1:nSheets)
wb$workbook$sheets[[i]] <- gsub(sprintf(' sheetId="%s"', i), sprintf(' sheetId="%s"', sheetId[i]), wb$workbook$sheets[[i]])
## additional workbook attributes
calcPr <- getChildlessNode(xml = workbook, tag = "<calcPr ")
if(length(calcPr) > 0)
wb$workbook$calcPr <- calcPr
workbookPr <- getChildlessNode(xml = workbook, tag = "<workbookPr ")
if(length(workbookPr) > 0)
wb$workbook$workbookPr <- workbookPr
## defined Names
dNames <- getNodes(xml = workbook, tagIn = "<definedNames>")
if(length(dNames) > 0){
dNames <- gsub("^<definedNames>|</definedNames>$", "", dNames)
wb$workbook$definedNames <- paste0(getNodes(xml = dNames, tagIn = "<definedName"), ">")
}
}
## xl\sharedStrings
if(length(sharedStringsXML) > 0){
sharedStrings <- readLines(sharedStringsXML, warn = FALSE, encoding = "UTF-8")
sharedStrings <- paste(sharedStrings, collapse = "\n")
sharedStrings <- removeHeadTag(sharedStrings)
uniqueCount <- as.integer(regmatches(sharedStrings, regexpr('(?<=uniqueCount=")[0-9]+', sharedStrings, perl = TRUE)))
## read in and get <si> nodes
vals <- getNodes(xml = sharedStrings, tagIn = "<si>")
if("<si><t/></si>" %in% vals){
vals[vals == "<si><t/></si>"] <- "<si><t>NA</t></si>"
Encoding(vals) <- "UTF-8"
attr(vals, "uniqueCount") <- uniqueCount - 1L
}else{
Encoding(vals) <- "UTF-8"
attr(vals, "uniqueCount") <- uniqueCount
}
wb$sharedStrings <- vals
}
## xl\pivotTables & xl\pivotCache
if(length(pivotTableXML) > 0){
# pivotTable cacheId links to workbook.xml which links to workbook.xml.rels via rId
# we don't modify the cacheId, only the rId
nPivotTables <- length(pivotTableXML)
rIds <- 20000L + 1:nPivotTables
## pivot tables
pivotTableXML <- pivotTableXML[order(nchar(pivotTableXML), pivotTableXML)]
pivotTableRelsXML <- pivotTableRelsXML[order(nchar(pivotTableRelsXML), pivotTableRelsXML)]
## Cache
pivotDefXML <- pivotDefXML[order(nchar(pivotDefXML), pivotDefXML)]
pivotDefRelsXML <- pivotDefRelsXML[order(nchar(pivotDefRelsXML), pivotDefRelsXML)]
pivotCacheRecords <- pivotCacheRecords[order(nchar(pivotCacheRecords), pivotCacheRecords)]
wb$pivotDefinitionsRels <- character(nPivotTables)
pivot_content_type <- NULL
if(length(pivotTableRelsXML) > 0)
wb$pivotTables.xml.rels <- unlist(lapply(pivotTableRelsXML, function(x) removeHeadTag(cppReadFile(x))))
# ## Check what caches are used
cache_keep <- unlist(regmatches(wb$pivotTables.xml.rels, gregexpr("(?<=pivotCache/pivotCacheDefinition)[0-9]+(?=\\.xml)",
wb$pivotTables.xml.rels, perl = TRUE, ignore.case = TRUE)))
## pivot cache records
tmp <- unlist(regmatches(pivotCacheRecords, gregexpr("(?<=pivotCache/pivotCacheRecords)[0-9]+(?=\\.xml)", pivotCacheRecords, perl = TRUE, ignore.case = TRUE)))
pivotCacheRecords <- pivotCacheRecords[tmp %in% cache_keep]
## pivot cache definitions rels
tmp <- unlist(regmatches(pivotDefRelsXML, gregexpr("(?<=_rels/pivotCacheDefinition)[0-9]+(?=\\.xml)", pivotDefRelsXML, perl = TRUE, ignore.case = TRUE)))
pivotDefRelsXML <- pivotDefRelsXML[tmp %in% cache_keep]
## pivot cache definitions
tmp <- unlist(regmatches(pivotDefXML, gregexpr("(?<=pivotCache/pivotCacheDefinition)[0-9]+(?=\\.xml)", pivotDefXML, perl = TRUE, ignore.case = TRUE)))
pivotDefXML <- pivotDefXML[tmp %in% cache_keep]
if(length(pivotTableXML) > 0){
wb$pivotTables[1:length(pivotTableXML)] <- pivotTableXML
pivot_content_type <- c(pivot_content_type,
sprintf('<Override PartName="/xl/pivotTables/pivotTable%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.pivotTable+xml"/>', 1:length(pivotTableXML)))
}
if(length(pivotDefXML) > 0){
wb$pivotDefinitions[1:length(pivotDefXML)] <- pivotDefXML
pivot_content_type <- c(pivot_content_type,
sprintf('<Override PartName="/xl/pivotCache/pivotCacheDefinition%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.pivotCacheDefinition+xml"/>', 1:length(pivotDefXML)))
}
if(length(pivotCacheRecords) > 0){
wb$pivotRecords[1:length(pivotCacheRecords)] <- pivotCacheRecords
pivot_content_type <- c(pivot_content_type,
sprintf('<Override PartName="/xl/pivotCache/pivotCacheRecords%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.pivotCacheRecords+xml"/>', 1:length(pivotCacheRecords)))
}
if(length(pivotDefRelsXML) > 0)
wb$pivotDefinitionsRels[1:length(pivotDefRelsXML)] <- pivotDefRelsXML
## update content_types
wb$Content_Types <- c(wb$Content_Types, pivot_content_type)
## workbook rels
wb$workbook.xml.rels <- c(wb$workbook.xml.rels,
sprintf('<Relationship Id="rId%s" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/pivotCacheDefinition" Target="pivotCache/pivotCacheDefinition%s.xml"/>', rIds, 1:length(pivotDefXML))
)
caches <- getNodes(xml = workbook, tagIn = "<pivotCaches>")
caches <- getChildlessNode(xml = caches, tag = "<pivotCache ")
for(i in 1:length(caches))
caches[i] <- gsub('"rId[0-9]+"', sprintf('"rId%s"', rIds[i]), caches[i])
wb$workbook$pivotCaches <- paste0('<pivotCaches>', paste(caches, collapse = ""), '</pivotCaches>')
}
## xl\vbaProject
if(length(vbaProject) > 0){
wb$vbaProject <- vbaProject
wb$Content_Types[grepl('<Override PartName="/xl/workbook.xml" ', wb$Content_Types)] <- '<Override PartName="/xl/workbook.xml" ContentType="application/vnd.ms-excel.sheet.macroEnabled.main+xml"/>'
wb$Content_Types <- c(wb$Content_Types, '<Override PartName="/xl/vbaProject.bin" ContentType="application/vnd.ms-office.vbaProject"/>')
}
## xl\styles
if(length(stylesXML) > 0){
styleObjects <- wb$loadStyles(stylesXML)
}else{
styleObjects <- list()
}
## xl\media
if(length(media) > 0){
mediaNames <- regmatches(media, regexpr("image[0-9]+\\.[a-z]+$", media))
fileTypes <- unique(gsub("image[0-9]+\\.", "", mediaNames))
contentNodes <- sprintf('<Default Extension="%s" ContentType="image/%s"/>', fileTypes, fileTypes)
contentNodes[fileTypes == "emf"] <- '<Default Extension="emf" ContentType="image/x-emf"/>'
wb$Content_Types <- c(contentNodes, wb$Content_Types)
names(media) <- mediaNames
wb$media <- media
}
## xl\chart
if(length(charts) > 0){
chartNames <- basename(charts)
nCharts <- sum(grepl("chart[0-9]+.xml", chartNames))
nChartStyles <- sum(grepl("style[0-9]+.xml", chartNames))
nChartCol <- sum(grepl("colors[0-9]+.xml", chartNames))
if(nCharts > 0)
wb$Content_Types <- c(wb$Content_Types, sprintf('<Override PartName="/xl/charts/chart%s.xml" ContentType="application/vnd.openxmlformats-officedocument.drawingml.chart+xml"/>', 1:nCharts))
if(nChartStyles > 0)
wb$Content_Types <- c(wb$Content_Types, sprintf('<Override PartName="/xl/charts/style%s.xml" ContentType="application/vnd.ms-office.chartstyle+xml"/>', 1:nChartStyles))
if(nChartCol > 0)
wb$Content_Types <- c(wb$Content_Types, sprintf('<Override PartName="/xl/charts/colors%s.xml" ContentType="application/vnd.ms-office.chartcolorstyle+xml"/>', 1:nChartCol))
if(length(chartsRels)){
charts <- c(charts, chartsRels)
chartNames <- c(chartNames, file.path("_rels", basename(chartsRels)))
}
names(charts) <- chartNames
wb$charts <- charts
}
## xl\theme
if(length(themeXML) > 0)
wb$theme <- removeHeadTag(paste(unlist(lapply(sort(themeXML)[[1]], function(x) readLines(x, warn = FALSE, encoding = "UTF-8"))), collapse = ""))
## externalLinks
if(length(extLinksXML) > 0){
wb$externalLinks <- lapply(sort(extLinksXML), function(x) removeHeadTag(cppReadFile(x)))
wb$Content_Types <-c(wb$Content_Types,
sprintf('<Override PartName="/xl/externalLinks/externalLink%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.externalLink+xml"/>', 1:length(extLinksXML)))
wb$workbook.xml.rels <- c(wb$workbook.xml.rels, sprintf('<Relationship Id="rId%s" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/externalLink" Target="externalLinks/externalLink%s.xml"/>',
1:length(extLinksXML), 1:length(extLinksXML)))
}
## externalLinksRels
if(length(extLinksRelsXML) > 0)
wb$externalLinksRels <- lapply(sort(extLinksRelsXML), function(x) removeHeadTag(cppReadFile(x)))
##*----------------------------------------------------------------------------------------------*##
### BEGIN READING IN WORKSHEET DATA
##*----------------------------------------------------------------------------------------------*##
## xl\worksheets
file_names <- regmatches(worksheet_rId_mapping, regexpr("sheet[0-9]+\\.xml", worksheet_rId_mapping, perl = TRUE))
file_rIds <- unlist(getId(worksheet_rId_mapping))
file_names <- file_names[match(sheetrId, file_rIds)]
worksheetsXML <- file.path(dirname(worksheetsXML), file_names)
wb <- loadworksheets(wb = wb, styleObjects = styleObjects, xmlFiles = worksheetsXML, is_chart_sheet = is_chart_sheet)
## Fix styleobject encoding
if(length(wb$styleObjects) > 0){
style_names <- sapply(wb$styleObjects, "[[", "sheet")
Encoding(style_names) <- "UTF-8"
wb$styleObjects <- lapply(1:length(style_names), function(i) {wb$styleObjects[[i]]$sheet = style_names[[i]]; wb$styleObjects[[i]]})
}
## Fix headers/footers
for(i in 1:length(worksheetsXML)){
if(!is_chart_sheet[i]){
if(length(wb$worksheets[[i]]$headerFooter) > 0)
wb$worksheets[[i]]$headerFooter <- lapply(wb$worksheets[[i]]$headerFooter, splitHeaderFooter)
}
}
##*----------------------------------------------------------------------------------------------*##
### READING IN WORKSHEET DATA COMPLETE
##*----------------------------------------------------------------------------------------------*##
## Next sheetRels to see which drawings_rels belongs to which sheet
if(length(sheetRelsXML) > 0){
## sheetrId is order sheet appears in xlsx file
## create a 1-1 vector of rels to worksheet
## haveRels is a boolean vector whose i-th element is TRUE/FALSE depending on whether sheet i has a rels file
if(length(chartSheetsXML) == 0){
allRels <- file.path(dirname(sheetRelsXML[1]), paste0(file_names, ".rels"))
haveRels <- allRels %in% sheetRelsXML
}else{
haveRels <- rep(FALSE, length(wb$worksheets))
allRels <- rep("", length(wb$worksheets))
for(i in 1:nSheets){
if(is_chart_sheet[i]){
ind <- which(chartSheetRIds == sheetrId[i])
rels_file <- file.path(chartSheetsRelsDir, paste0(chartsheet_rId_mapping[ind], ".rels"))
}else{
ind <- sheetrId[i]
rels_file <- file.path(xmlDir, "xl", "worksheets", "_rels", paste0(file_names[i], ".rels"))
}
if(file.exists(rels_file)){
allRels[i] <- rels_file
haveRels[i] <- TRUE
}
}
}
## sheet.xml have been reordered to be in the order of sheetrId
## not every sheet has a worksheet rels
xml <- lapply(1:length(allRels), function(i) {
if(haveRels[i]){
xml <- readLines(allRels[[i]], warn = FALSE, encoding = "UTF-8")
xml <- removeHeadTag(xml)
xml <- gsub("<Relationships .*?>", "", xml)
xml <- gsub("</Relationships>", "", xml)
xml <- getChildlessNode(xml = xml, tag = "<Relationship ")
}else{
xml <- "<Relationship >"
}
return(xml)
})
############################################################################################
############################################################################################
## Slicers
if(length(slicerXML) > 0){
slicerXML <- slicerXML[order(nchar(slicerXML), slicerXML)]
slicersFiles <- lapply(xml, function(x) as.integer(regmatches(x, regexpr("(?<=slicer)[0-9]+(?=\\.xml)", x, perl = TRUE))))
inds <- sapply(slicersFiles, length) > 0
## worksheet_rels Id for slicer will be rId0
k <- 1L
wb$slicers <- rep("", nSheets)
for(i in 1:nSheets){
## read in slicer[j].XML sheets into sheet[i]
if(inds[i]){
wb$slicers[[i]] <- slicerXML[k]
k <- k + 1L
wb$worksheets_rels[[i]] <- unlist(c(wb$worksheets_rels[[i]],
sprintf('<Relationship Id="rId0" Type="http://schemas.microsoft.com/office/2007/relationships/slicer" Target="../slicers/slicer%s.xml"/>', i)))
wb$Content_Types <- c(wb$Content_Types,
sprintf('<Override PartName="/xl/slicers/slicer%s.xml" ContentType="application/vnd.ms-excel.slicer+xml"/>', i))
slicer_xml_exists <- FALSE
## Append slicer to worksheet extLst
if(length(wb$worksheets[[i]]$extLst) > 0){
if(grepl('x14:slicer r:id="rId[0-9]+"', wb$worksheets[[i]]$extLst)){
wb$worksheets[[i]]$extLst <- sub('x14:slicer r:id="rId[0-9]+"', 'x14:slicer r:id="rId0"', wb$worksheets[[i]]$extLst)
slicer_xml_exists <- TRUE
}
}
if(!slicer_xml_exists)
wb$worksheets[[i]]$extLst <- c(wb$worksheets[[i]]$extLst, genBaseSlicerXML())
}
}
}
if(length(slicerCachesXML) > 0){
## ---- slicerCaches
inds <- 1:length(slicerCachesXML)
wb$Content_Types <- c(wb$Content_Types, sprintf('<Override PartName="/xl/slicerCaches/slicerCache%s.xml" ContentType="application/vnd.ms-excel.slicerCache+xml"/>', inds))
wb$slicerCaches <- sapply(slicerCachesXML[order(nchar(slicerCachesXML), slicerCachesXML)], function(x) removeHeadTag(cppReadFile(x)))
wb$workbook.xml.rels <- c(wb$workbook.xml.rels, sprintf('<Relationship Id="rId%s" Type="http://schemas.microsoft.com/office/2007/relationships/slicerCache" Target="slicerCaches/slicerCache%s.xml"/>', 1E5 + inds, inds))
wb$workbook$extLst <- c(wb$workbook$extLst, genSlicerCachesExtLst(1E5 + inds))
}
############################################################################################
############################################################################################
## tables
if(length(tablesXML) > 0){
tables <- lapply(xml, function(x) as.integer(regmatches(x, regexpr("(?<=table)[0-9]+(?=\\.xml)", x, perl = TRUE))))
tableSheets <- unlist(lapply(1:length(sheetrId), function(i) rep(i, length(tables[[i]]))))
if(length(unlist(tables)) > 0){
## get the tables that belong to each worksheet and create a worksheets_rels for each
tCount <- 2L ## table r:Ids start at 3
for(i in 1:length(tables)){
if(length(tables[[i]]) > 0){
k <- 1:length(tables[[i]]) + tCount
wb$worksheets_rels[[i]] <- unlist(c(wb$worksheets_rels[[i]],
sprintf('<Relationship Id="rId%s" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/table" Target="../tables/table%s.xml"/>', k, k)))
wb$worksheets[[i]]$tableParts <- sprintf("<tablePart r:id=\"rId%s\"/>", k)
tCount <- tCount + length(k)
}
}
## sort the tables into the order they appear in the xml and tables variables
names(tablesXML) <- basename(tablesXML)
tablesXML <- tablesXML[sprintf("table%s.xml", unlist(tables))]
## tables are now in correct order so we can read them in as they are
wb$tables <- sapply(tablesXML, function(x) removeHeadTag(paste(readLines(x, warn = FALSE), collapse = "")))
## pull out refs and attach names
refs <- regmatches(wb$tables, regexpr('(?<=ref=")[0-9A-Z:]+', wb$tables, perl = TRUE))
names(wb$tables) <- refs
wb$Content_Types <- c(wb$Content_Types, sprintf('<Override PartName="/xl/tables/table%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.table+xml"/>', 1:length(wb$tables)+2))
## relabel ids
for(i in 1:length(wb$tables)){
newId <- sprintf(' id="%s" ', i+2)
wb$tables[[i]] <- sub(' id="[0-9]+" ' , newId, wb$tables[[i]])
}
displayNames <- unlist(regmatches(wb$tables, regexpr('(?<=displayName=").*?[^"]+', wb$tables, perl = TRUE)))
if(length(displayNames) != length(tablesXML))
displayNames <- paste0("Table", 1:length(tablesXML))
attr(wb$tables, "sheet") <- tableSheets
attr(wb$tables, "tableName") <- displayNames
for(i in 1:length(tableSheets)){
table_sheet_i <- tableSheets[i]
attr(wb$worksheets[[table_sheet_i]]$tableParts, "tableName") <- c(attr(wb$worksheets[[table_sheet_i]]$tableParts, "tableName"), displayNames[i])
}
}
} ## if(length(tablesXML) > 0)
## might we have some external hyperlinks
if(any(sapply(wb$worksheets[!is_chart_sheet], function(x) length(x$hyperlinks) > 0))){
## Do we have external hyperlinks
hlinks <- lapply(xml, function(x) x[grepl("hyperlink", x) & grepl("External", x)])
hlinksInds <- which(sapply(hlinks, length) > 0)
## If it's an external hyperlink it will have a target in the sheet_rels
if(length(hlinksInds) > 0){
for(i in hlinksInds){
ids <- unlist(lapply(hlinks[[i]], function(x) regmatches(x, gregexpr('(?<=Id=").*?"', x, perl = TRUE))[[1]]))
ids <- gsub('"$', "", ids)
targets <- unlist(lapply(hlinks[[i]], function(x) regmatches(x, gregexpr('(?<=Target=").*?"', x, perl = TRUE))[[1]]))
targets <- gsub('"$', "", targets)
ids2 <- lapply(wb$worksheets[[i]]$hyperlinks, function(x) regmatches(x, gregexpr('(?<=r:id=").*?"', x, perl = TRUE))[[1]])
ids2[sapply(ids2, length) == 0] <- NA
ids2 <- gsub('"$', "", unlist(ids2))
targets <- targets[match(ids2, ids)]
names(wb$worksheets[[i]]$hyperlinks) <- targets
}
}
}
############################################################################################
############################################################################################
## drawings
## xml is in the order of the sheets; drawings are matched to sheet position via hasDrawing
## Not every sheet has a drawing.xml
drawXMLrelationship <- lapply(xml, function(x) x[grepl("drawings/drawing", x)])
hasDrawing <- sapply(drawXMLrelationship, length) > 0 ## which sheets have a drawing
if(length(drawingRelsXML) > 0){
dRels <- lapply(drawingRelsXML, readLines, warn = FALSE)
dRels <- unlist(lapply(dRels, removeHeadTag))
dRels <- gsub("<Relationships .*?>", "", dRels)
dRels <- gsub("</Relationships>", "", dRels)
}
if(length(drawingsXML) > 0){
dXML <- lapply(drawingsXML, readLines, warn = FALSE, encoding = "UTF-8")
dXML <- unlist(lapply(dXML, removeHeadTag))
dXML <- gsub("<xdr:wsDr .*?>", "", dXML)
dXML <- gsub("</xdr:wsDr>", "", dXML)
# ptn1 <- "<(mc:AlternateContent|xdr:oneCellAnchor|xdr:twoCellAnchor|xdr:absoluteAnchor)"
# ptn2 <- "</(mc:AlternateContent|xdr:oneCellAnchor|xdr:twoCellAnchor|xdr:absoluteAnchor)>"
## split at one/two cell Anchor
# dXML <- regmatches(dXML, gregexpr(paste0(ptn1, ".*?", ptn2), dXML))
}
## loop over all worksheets and assign drawing to sheet
if(any(hasDrawing)){
for(i in 1:length(xml)){
if(hasDrawing[i]){
target <- unlist(lapply(drawXMLrelationship[[i]], function(x) regmatches(x, gregexpr('(?<=Target=").*?"', x, perl = TRUE))[[1]]))
target <- basename(gsub('"$', "", target))
## sheet_i has which(hasDrawing)[[i]]
relsInd <- grepl(target, drawingRelsXML)
if(any(relsInd))
wb$drawings_rels[i] <- dRels[relsInd]
drawingInd <- grepl(target, drawingsXML)
if(any(drawingInd))
wb$drawings[i] <- dXML[drawingInd]
}
}
}
############################################################################################
############################################################################################
## VML drawings
if(length(vmlDrawingXML) > 0){
wb$Content_Types <- c(wb$Content_Types, '<Default Extension="vml" ContentType="application/vnd.openxmlformats-officedocument.vmlDrawing"/>')
drawXMLrelationship <- lapply(xml, function(x) x[grepl("drawings/vmlDrawing", x)])
hasDrawing <- sapply(drawXMLrelationship, length) > 0 ## which sheets have a drawing
## loop over all worksheets and assign drawing to sheet
if(any(hasDrawing)){
for(i in 1:length(xml)){
if(hasDrawing[i]){
target <- unlist(lapply(drawXMLrelationship[[i]], function(x) regmatches(x, gregexpr('(?<=Target=").*?"', x, perl = TRUE))[[1]]))
target <- basename(gsub('"$', "", target))
ind <- grepl(target, vmlDrawingXML)
if(any(ind)){
txt <- paste(readLines(vmlDrawingXML[ind], warn = FALSE), collapse = "\n")
txt <- removeHeadTag(txt)
i1 <- regexpr("<v:shapetype", txt, fixed = TRUE)
i2 <- regexpr("</xml>", txt, fixed = TRUE)
wb$vml[[i]] <- substring(text = txt, first = i1, last = (i2 - 1L))
relsInd <- grepl(target, vmlDrawingRelsXML)
if(any(relsInd))
wb$vml_rels[i] <- vmlDrawingRelsXML[relsInd]
}
}
}
}
}
## vmlDrawing and comments
if(length(commentsXML) > 0){
drawXMLrelationship <- lapply(xml, function(x) x[grepl("drawings/vmlDrawing[0-9]+\\.vml", x)])
hasDrawing <- sapply(drawXMLrelationship, length) > 0 ## which sheets have a drawing
commentXMLrelationship <- lapply(xml, function(x) x[grepl("comments[0-9]+\\.xml", x)])
hasComment <- sapply(commentXMLrelationship, length) > 0 ## which sheets have comments
for(i in 1:length(xml)){
if(hasComment[i]){
target <- unlist(lapply(drawXMLrelationship[[i]], function(x) regmatches(x, gregexpr('(?<=Target=").*?"', x, perl = TRUE))[[1]]))
target <- basename(gsub('"$', "", target))
ind <- grepl(target, vmlDrawingXML)
if(any(ind)){
txt <- paste(readLines(vmlDrawingXML[ind], warn = FALSE), collapse = "\n")
txt <- removeHeadTag(txt)
cd <- unique(getNodes(xml = txt, tagIn = "<x:ClientData"))
cd <- cd[grepl('ObjectType="Note"', cd)]
cd <- paste0(cd, ">")
## now load the comments
target <- unlist(lapply(commentXMLrelationship[[i]], function(x) regmatches(x, gregexpr('(?<=Target=").*?"', x, perl = TRUE))[[1]]))
target <- basename(gsub('"$', "", target))
txt <- paste(readLines(commentsXML[grepl(target, commentsXML)], warn = FALSE), collapse = "\n")
txt <- removeHeadTag(txt)
authors <- getNodes(xml = txt, tagIn = "<author>")
authors <- gsub("<author>|</author>", "", authors)
comments <- getNodes(xml = txt, tagIn = "<commentList>")
comments <- gsub( "<commentList>", "", comments)
comments <- getNodes(xml = comments, tagIn = "<comment")
refs <- regmatches(comments, regexpr('(?<=ref=").*?[^"]+', comments, perl = TRUE))
authorsInds <- as.integer(regmatches(comments, regexpr('(?<=authorId=").*?[^"]+', comments, perl = TRUE))) + 1
authors <- authors[authorsInds]
style <- lapply(comments, getNodes, tagIn = "<rPr>")
comments <- regmatches(comments, gregexpr('(?<=<t( |>)).*?[^/]+', comments, perl = TRUE))
comments <- lapply(comments, function(x) gsub("<", "", x))
comments <- lapply(comments, function(x) gsub(".*?>", "", x, perl = TRUE))
wb$comments[[i]] <- lapply(1:length(comments), function(j){
comment_list <- list("ref" = refs[j],
"author" = authors[j],
"comment" = comments[[j]],
"style" = style[[j]],
"clientData" = cd[[j]])
})
}
}
}
}
## rels image
drawXMLrelationship <- lapply(xml, function(x) x[grepl("relationships/image", x)])
hasDrawing <- sapply(drawXMLrelationship, length) > 0 ## which sheets have a drawing
if(any(hasDrawing)){
for(i in 1:length(xml)){
if(hasDrawing[i]){
image_ids <- unlist(getId(drawXMLrelationship[[i]]))
new_image_ids <- paste0("rId", 1:length(image_ids) + 70000)
for(j in 1:length(image_ids)){
wb$worksheets[[i]]$oleObjects <- gsub(image_ids[j], new_image_ids[j], wb$worksheets[[i]]$oleObjects, fixed = TRUE)
wb$worksheets_rels[[i]] <- c(wb$worksheets_rels[[i]], gsub(image_ids[j], new_image_ids[j], drawXMLrelationship[[i]][j], fixed = TRUE)
)
}
}
}
}
## rels image
drawXMLrelationship <- lapply(xml, function(x) x[grepl("relationships/package", x)])
hasDrawing <- sapply(drawXMLrelationship, length) > 0 ## which sheets have a drawing
if(any(hasDrawing)){
for(i in 1:length(xml)){
if(hasDrawing[i]){
image_ids <- unlist(getId(drawXMLrelationship[[i]]))
new_image_ids <- paste0("rId", 1:length(image_ids) + 90000)
for(j in 1:length(image_ids)){
wb$worksheets[[i]]$oleObjects <- gsub(image_ids[j], new_image_ids[j], wb$worksheets[[i]]$oleObjects, fixed = TRUE)
wb$worksheets_rels[[i]] <- c(wb$worksheets_rels[[i]],
sprintf("<Relationship Id=\"%s\" Type=\"http://schemas.openxmlformats.org/officeDocument/2006/relationships/package\" Target=\"../embeddings/Microsoft_Word_Document1.docx\"/>", new_image_ids[j])
)
}
}
}
}
## Embedded docx
if(length(embeddings) > 0){
wb$Content_Types <- c(wb$Content_Types, '<Default Extension="docx" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document"/>')
wb$embeddings <- embeddings
}
## pivot tables
if(length(pivotTableXML) > 0){
pivotTableJ <- lapply(xml, function(x) as.integer(regmatches(x, regexpr("(?<=pivotTable)[0-9]+(?=\\.xml)", x, perl = TRUE))))
sheetWithPivot <- which(sapply(pivotTableJ, length) > 0)
pivotRels <- lapply(xml, function(x) {y <- x[grepl("pivotTable", x)]; y[order(nchar(y), y)]})
hasPivot <- sapply(pivotRels, length) > 0
## Modify rIds
for(i in 1:length(pivotRels)){
if(hasPivot[i]){
for(j in 1:length(pivotRels[[i]]))
pivotRels[[i]][j] <- gsub('"rId[0-9]+"', sprintf('"rId%s"', 20000L + j), pivotRels[[i]][j])
wb$worksheets_rels[[i]] <- c(wb$worksheets_rels[[i]] , pivotRels[[i]])
}
}
## remove any workbook_res references to pivot tables that are not being used in worksheet_rels
inds <- 1:length(wb$pivotTables.xml.rels)
fileNo <- as.integer(unlist(regmatches(unlist(wb$worksheets_rels), gregexpr('(?<=pivotTable)[0-9]+(?=\\.xml)', unlist(wb$worksheets_rels), perl = TRUE))))
inds <- inds[!inds %in% fileNo]
if(length(inds) > 0){
toRemove <- paste(sprintf("(pivotCacheDefinition%s\\.xml)", inds), collapse = "|")
fileNo <- which(grepl(toRemove, wb$pivotTables.xml.rels))
toRemove <- paste(sprintf("(pivotCacheDefinition%s\\.xml)", fileNo), collapse = "|")
## remove reference to file from workbook.xml.res
wb$workbook.xml.rels <- wb$workbook.xml.rels[!grepl(toRemove, wb$workbook.xml.rels)]
}
}
} ## end of worksheetRels
## convert hyperliks to hyperlink objects
for(i in 1:nSheets)
wb$worksheets[[i]]$hyperlinks <- xml_to_hyperlink(wb$worksheets[[i]]$hyperlinks)
## queryTables
if(length(queryTablesXML) > 0){
ids <- as.numeric(regmatches(queryTablesXML, regexpr("[0-9]+(?=\\.xml)", queryTablesXML, perl = TRUE)))
wb$queryTables <- unlist(lapply(queryTablesXML[order(ids)], function(x) removeHeadTag(cppReadFile(xmlFile = x))))
wb$Content_Types <- c(wb$Content_Types,
sprintf('<Override PartName="/xl/queryTables/queryTable%s.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.queryTable+xml"/>', 1:length(queryTablesXML)))
}
## connections
if(length(connectionsXML) > 0){
wb$connections <- removeHeadTag(cppReadFile(xmlFile = connectionsXML))
wb$workbook.xml.rels <- c(wb$workbook.xml.rels, '<Relationship Id="rId3" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/connections" Target="connections.xml"/>')
wb$Content_Types <- c(wb$Content_Types, '<Override PartName="/xl/connections.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.connections+xml"/>')
}
## table rels
if(length(tableRelsXML) > 0){
## table_i might have tableRels_i, but the tables are re-ordered to be in the order of the worksheets.
## Every table gets a table_rels, so fill in the gaps if any table_rels are missing.
tmp <- paste0(basename(tablesXML), ".rels")
hasRels <- tmp %in% basename(tableRelsXML)
## order tableRelsXML
tableRelsXML <- tableRelsXML[match(tmp[hasRels], basename(tableRelsXML))]
##
wb$tables.xml.rels <- character(length=length(tablesXML))
## which sheet does it belong to
xml <- sapply(tableRelsXML, cppReadFile, USE.NAMES = FALSE)
xml <- sapply(xml, removeHeadTag, USE.NAMES = FALSE)
wb$tables.xml.rels[hasRels] <- xml
}else if(length(tablesXML) > 0){
wb$tables.xml.rels <- rep("", length(tablesXML))
}
return(wb)
}
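The first step of `loadWorkbook` above is to unzip the xlsx and partition the extracted file paths purely with `grepl()` patterns. A minimal base-R sketch of that partitioning (the function name `classify_parts` and the sample paths are illustrative, not part of openxlsx):

```r
# Partition a vector of extracted xlsx paths by the same kind of
# grepl() patterns loadWorkbook uses to route files to workbook slots.
classify_parts <- function(paths) {
  list(
    worksheets = paths[grepl("/worksheets/sheet[0-9]+", paths, perl = TRUE)],
    tables     = paths[grepl("tables/table[0-9]+\\.xml$", paths, perl = TRUE)],
    media      = paths[grepl("image[0-9]+\\.[a-z]+$", paths, perl = TRUE)]
  )
}

parts <- classify_parts(c("xl/worksheets/sheet1.xml",
                          "xl/tables/table1.xml",
                          "xl/media/image1.png",
                          "xl/styles.xml"))
lengths(parts)  # worksheets 1, tables 1, media 1
```

Paths that match none of the patterns (here `xl/styles.xml`) simply fall through and are handled by their own dedicated branches in the real function.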
library(shiny)
library(ggplot2)
function(input, output) {
## reactively draw `input$sampleSize` random rows from the built-in airquality data
dataset <- reactive({
airquality[sample(nrow(airquality), input$sampleSize), ]
})
output$plot <- renderPlot({
p <- ggplot(dataset(), aes_string(x=input$x, y=input$y)) + geom_point()
if (input$color != 'None')
p <- p + aes_string(color=input$color)
if (input$smooth)
p <- p + geom_smooth()
print(p)
}, height=350, width = 800)
output$plot2 <- renderPlot({
p2 <- ggplot(dataset(), aes_string(x=input$x2, y=input$y2)) + geom_bar(stat="identity")
print(p2)
}, height=275)
} | /server.R | no_license | alizeefrd/R_TP1_FERRANDIS_GUILLOT_PRAT | R | false | false | 615 | r |
library(ggplot2)
function(input, output) {
dataset <- reactive({
airquality[sample(nrow(airquality), input$sampleSize),]
})
output$plot <- renderPlot({
p <- ggplot(dataset(), aes_string(x=input$x, y=input$y)) + geom_point()
if (input$color != 'None')
p <- p + aes_string(color=input$color)
if (input$smooth)
p <- p + geom_smooth()
print(p)
}, height=350, width = 800)
output$plot2 <- renderPlot({
p2 <- ggplot(dataset(), aes_string(x=input$x2, y=input$y2)) + geom_bar(stat="identity")
print(p2)
}, height=275)
} |
modelInfo <- list(label = "Robust Regularized Linear Discriminant Analysis",
library = "rrlda",
loop = NULL,
type = c('Classification'),
parameters = data.frame(parameter = c('lambda', 'hp', 'penalty'),
class = c('numeric', 'numeric', 'character'),
label = c('Penalty Parameter', 'Robustness Parameter', 'Penalty Type')),
grid = function(x, y, len = NULL)
expand.grid(lambda = (1:len)*.25,
hp = seq(.5, 1, length = len),
penalty = "L2"),
fit = function(x, y, wts, param, lev, last, classProbs, ...) {
rrlda:::rrlda(x, as.numeric(y), lambda = param$lambda,
hp = param$hp, penalty = as.character(param$penalty), ...)
},
predict = function(modelFit, newdata, submodels = NULL) {
out <- predict(modelFit, newdata)$class
modelFit$obsLevels[as.numeric(out)]
},
prob = function(modelFit, newdata, submodels = NULL) {
out <- predict(modelFit, newdata)$posterior
colnames(out) <- modelFit$obsLevels
out
},
levels = function(x) x$obsLevels,
tags = c("Discriminant Analysis", "Robust Model", "Regularization", "Linear Classifier"),
sort = function(x) x[order(-x$lambda),])
| /models/files/rrlda.R | no_license | bleutner/caret | R | false | false | 1,645 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ti_stemnet.R
\name{ti_stemnet}
\alias{ti_stemnet}
\title{Inferring a trajectory inference using STEMNET}
\usage{
ti_stemnet(alpha = 0.1, lambda_auto = TRUE, lambda = 0.1,
force = FALSE, run_environment = NULL)
}
\arguments{
\item{alpha}{numeric; The elastic net mixing parameter of the ‘glmnet’
classifier. (default: \code{0.1}; range: from \code{0.001} to \code{10L})}
\item{lambda_auto}{logical; Whether to select the lambda by cross-validation}
\item{lambda}{numeric; The lambda penalty of GLM. (default: \code{0.1}; range: from
\code{0.05} to \code{1L})}
\item{force}{logical; Do not use! This is a parameter to force FateID to run on
benchmark datasets where not enough end groups are present.}
\item{run_environment}{In which environment to run the method, can be \code{"docker"} or \code{"singularity"}.}
}
\value{
A TI method wrapper to be used together with
\code{\link[dynwrap:infer_trajectories]{infer_trajectory}}
}
\description{
Will generate a trajectory using \href{https://doi.org/10.1038/ncb3493}{STEMNET}.
This method was wrapped inside a
\href{https://github.com/dynverse/dynmethods/tree/master/containers/stemnet}{container}.
The original code of this method is available
\href{https://git.embl.de/velten/STEMNET}{here}.
}
\references{
Velten, L., Haas, S.F., Raffel, S., Blaszkiewicz, S., Islam, S.,
Hennig, B.P., Hirche, C., Lutz, C., Buss, E.C., Nowak, D., Boch, T., Hofmann,
W.-K., Ho, A.D., Huber, W., Trumpp, A., Essers, M.A.G., Steinmetz, L.M., 2017.
Human haematopoietic stem cell lineage commitment is a continuous process.
Nature Cell Biology 19, 271–281.
}
| /man/ti_stemnet.Rd | no_license | herrinca/dynmethods | R | false | true | 1,679 | rd |
library(leaflet)
s <- as.data.frame(point.sp@coords)[1:25,]
data(quakes)
# Show first 20 rows from the `quakes` dataset
leaflet(data = s) %>% addTiles() %>%
addMarkers(~x, ~y)
library(shiny)
ui <- fluidPage(
leafletOutput("mymap"),
p()
)
server <- function(input, output, session) {
output$mymap <- renderLeaflet({
leaflet(data = s) %>% addTiles() %>%
addMarkers(~x, ~y, popup = "a") %>%
addProviderTiles("MtbMap") %>%
addProviderTiles("Stamen.TonerLines",
options = providerTileOptions(opacity = 0.35)
) %>%
addProviderTiles("Stamen.TonerLabels")
})
}
shinyApp(ui, server)
| /leaflet.R | no_license | arielfuentes/wrangling-spatial-data | R | false | false | 650 | r |
## plot1.r
# read in the data
pds <- read.table("household_power_consumption.txt",header=TRUE,sep=";", na="?")
# subset the data for the dates we are interested in
subpds <- pds[which(pds$Date == "1/2/2007" | pds$Date == "2/2/2007"),]
# make numeric variables to be plotted
subpds$Global_active_power <- as.numeric(subpds$Global_active_power)
# Just plot the histogram
png("plot1.png", width=480, height=480)
hist(subpds$Global_active_power,main="Global Active Power",xlab="Global Active Power (kilowatts)",col="red")
dev.off()
| /plot1.R | no_license | mbayekebe/ExData_Plotting1 | R | false | false | 536 | r |
# Read entire dataset
data <- read.table("household_power_consumption.txt", header=TRUE, sep=";", na.strings="?")
# Subset days 2/1/07 and 2/2/07
data$Date <- as.character(data$Date)
data$Date <- as.Date(data$Date, format="%d/%m/%Y")
data$Time <- as.character(data$Time)
subset <- data[data$Date >= "2007-02-01" & data$Date <= "2007-02-02", ]
# Combine and convert date and time variables
DateTime <- paste(subset$Date,subset$Time, sep = ' ')
subset <- cbind(subset, DateTime)
subset$DateTime <- strptime(subset$DateTime, format = "%Y-%m-%d %H:%M:%S")
# Plot 4
quartz()
png(file = "plot4.png")
par(mfrow=c(2,2))
plot(subset$DateTime, subset$Global_active_power, type="l", xlab="", ylab="Global Active Power")
plot(subset$DateTime, subset$Voltage, type="l", xlab="datetime", ylab="Voltage")
plot(subset$DateTime, subset$Sub_metering_1, col="black", type="l", ylab = "Energy sub metering", xlab="")
lines(subset$DateTime, subset$Sub_metering_2, col="red", type="l")
lines(subset$DateTime, subset$Sub_metering_3, type="l", col="blue")
legend("topright", lty=c(1,1) , col=c("black", "red", "blue"), legend=c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"), bty="n")
plot(subset$DateTime, subset$Global_reactive_power, type="l", xlab="datetime", ylab="Global_reactive_power")
dev.off()
| /plot4.R | no_license | sthomason/ExData_Plotting1 | R | false | false | 1,290 | r |
\name{twelve}
\docType{data}
\alias{twelve}
\title{Ishihara plate number 1}
\description{
Ishihara plate number 1, with the numeral \code{'12'} designed to be visible by all
persons.
}
\usage{data(twelve)}
\format{
An array of 380-by-380-by-3 representing a RGB image.
}
\keyword{datasets}
| /man/twelve.Rd | no_license | cran/SpatialPack | R | false | false | 297 | rd |
##set date and time
pow<-read.table("household_power_consumption.txt",header=TRUE,nrows=1,sep=";")
pow1<- names(pow)
firstDateTime<- strptime("2006-12-16 17:24:00", "%Y-%m-%d %H:%M:%S")
beginDateTime<- strptime("2007-02-01 00:01:00", "%Y-%m-%d %H:%M:%S")
##calculate number of rows needed
boof<- beginDateTime - firstDateTime
startrow<- as.numeric(boof)*24*60 #starting row to read
numrows<- 2880 #total number of rows needed
mytab<- read.table("household_power_consumption.txt",sep=";",
skip = startrow,nrows=2880) #Reading part of file
names(mytab)<- pow1#Assigning column names to data
mytab$Date<- as.Date(mytab$Date, "%d/%m/%Y") #Changing column to Date&time resp:
mytab$DateTime<- as.POSIXct(paste(mytab$Date,mytab$Time),format="%Y-%m-%d %H:%M:%S")
dev.cur()
plot(mytab$DateTime,mytab$Sub_metering_1,type="n",xlab="",
ylab="Energy sub metering")
lines(mytab$DateTime,mytab$Sub_metering_1)
lines(mytab$DateTime,mytab$Sub_metering_2,col="red")
lines(mytab$DateTime,mytab$Sub_metering_3,col="blue")
legend("topright",pch=c("-","-","-"),col=c("black","red","blue"),
legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))
dev.copy(png,file="plot3.png")
dev.off()
| /plot3.R | no_license | fdoughan/ExploratoryData-Project1 | R | false | false | 1,198 | r |
context("boxCox() and boxCox2() functions unit testing")
# expect_success
test_that("Should work when numeric values satisfying: (1) x+y > 0 ; (2) lambda == 0.", {
num_x <- 77
num_y <- -3
num_sum <- num_x + num_y
expect_success(expect_identical(boxCox2(num_x,num_y),log(num_x+num_y) ))
expect_success(expect_identical(boxCox(num_sum),log(num_sum) ))
})
# expect_error
test_that("Suppose error when x+y < 0 .", {
num_x <- 2
num_y <- -3
num_sum <- num_x + num_y
expect_success(expect_identical(boxCox2(num_x,num_y), NULL))
expect_success(expect_identical(boxCox(num_sum), NULL))
})
# expect_error
test_that("Suppose error if input string", {
str <- "this is a test string"
expect_error(boxCox2(str))
expect_error(boxCox(str))
})
| /tests/testthat/test_boxCox_and_boxCox2.R | no_license | dorawyy/powers | R | false | false | 788 | r |
library(caret)
library(sp)
library(rgdal)
library(randomForest)
library(e1071)
library(raster)
library(parallel)
library(doParallel)
library(RStoolbox)
library(raster)
# Check extents of '2017-05-10' and '2017-05-14'!!
# dateVec <- c('2017-08-19', '2017-09-09')
# dateVec <- c('2017-05-02', '2017-05-05', '2017-05-06', '2017-05-07',
# '2017-05-09', '2017-05-10', '2017-05-13', '2017-05-14',
# '2017-05-15')
dateVec <- c('2017-05-02', '2017-05-09', '2017-05-13')
userPath <- "~/Box_Sync/tmp/2017-05/"
mosaicPath <- paste0(userPath, "Mosaics/")
mosaicSuffix <- "_cbw_masked-mosaic_3m.tif"
naFlagPath <- paste0(userPath, "NA Flagged/")
naFlagSuffix <- "_R_na-flagged_mosaic_3m.tif"
trainerPath <- "~/Box_Sync/GIS/Flooding_Remote_Sensing/PLabs/Combined_Trainer_Polys/2017-05_Combined_Trainers_Cai+Paul.shp"
classPath <- paste0(userPath, "Classified/R_randomforest_bin-class_")
classSuffix <- ".tif"
statsPath <- paste0(userPath, "Statistics/")
statsSuffix <- "_Training.txt"
polysIn <- readOGR(trainerPath)
for(i in seq_len(length(dateVec))) {
thisDate <- dateVec[i]
print(paste0("Starting on ", thisDate))
# # Open the mosaic that has zeroes where NAs should be
# filepath <- paste0(mosaicPath, thisDate, mosaicSuffix)
# r <- brick(filepath)
#
# startTime <- Sys.time()
filepath <- paste0(naFlagPath, thisDate, naFlagSuffix)
# Write the new raster with NAs flagged as 0 values
# writeRaster(r,
# filepath,
# format = "GTiff",
# overwrite = TRUE,
# datatype = "INT2U",
# NAflag = 0)
# endTime <- Sys.time()
# print(paste0(filepath, " finished: ", endTime - startTime, " m"))
# Load the NA-flagged files
plabs <- brick(filepath)
outPath <- paste0(classPath, thisDate, classSuffix)
# Restrict to the date we're looking at
train <- polysIn[polysIn$strDate == thisDate,]
# Redefine class factors; caret (via superClass) will use the first factor
# as the "positive" class
train$class <- factor(train$class, levels = c(1, 0))
startTime <- Sys.time()
# Begin CPU parallellization
beginCluster()
# Fit classifier (Random Forests; 50 trees; 10-fold cross-validation,
# splitting training into 90% training/CV data, 10% validation data; validate
# folds on Kappa value; cross-validate by polygon)
# Should run parallelized
SC <- superClass(plabs, trainData = train, responseCol = "class",
model = "rf", nSamples = 275000, kfold = 10, minDist = 1,
na.action = na.omit, trainPartition = 0.9, verbose = TRUE,
predict = FALSE, metric = "Kappa", ntree = 50,
polygonBasedCV = TRUE)
endTime <- Sys.time()
print(paste0("Finished training: ", endTime - startTime, " m"))
print(paste0("Starting prediction on ", thisDate))
startTime <- Sys.time()
# Classification of predicted classification values, parallelized
map <- clusterR(plabs,
predict,
args = list(model = SC[[1]], datatype = "INT1U"))
writeRaster(map,
outPath,
format = "GTiff",
overwrite = TRUE,
datatype = "INT1U")
# End parallelization
endCluster()
endTime <- Sys.time()
print(paste0(thisDate, " finished predicting: ", endTime - startTime, " m"))
# Save the model and validation statistics to file
sink(file = paste0(statsPath, thisDate, statsSuffix))
print(thisDate)
print(SC$model)
print(SC$modelFit)
print(SC$validation$performance)
# Reset to output to console only
sink()
}
| /Quantify_ponding_distributions/Classification.R | no_license | rfpaul/Flooding_Laszlo | R | false | false | 3,629 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/datasets.R
\docType{data}
\name{golfballs}
\alias{golfballs}
\title{Golf ball numbers}
\format{
The format is: num [1:4] 137 138 107 104
}
\source{
Data collected by Allan Rossman in Carlisle, PA.
}
\description{
Allan Rossman used to live on a golf course in a spot where dozens of balls
would come into his yard every week. He collected the balls and eventually
tallied up the numbers on the first 5000 golf balls he collected. Of these
486 bore the number 1, 2, 3, or 4. The remaining 14 golf balls were omitted
from the data.
}
\examples{
data(golfballs)
golfballs/sum(golfballs)
chisq.test(golfballs, p = rep(.25,4))
}
\keyword{datasets}
| /man/golfballs.Rd | no_license | rpruim/fastR2 | R | false | true | 725 | rd |
# Working, and saves gif file to location. Need to search up location in Finder with Cmd+Shift+G
setwd("/Users/manojarachige/Documents/Coding/BmedScDOC1/BMedScDOC2021/R_BMedScDOC")
library(rgl)
library(misc3d)
library(MNITemplate)
library(aal)
library(neurobase)
library(magick)
library(dplyr)
img = aal_image()
template = readMNI(res = "2mm")
cut <- 4500
dtemp <- dim(template)
# All of the sections you can label from aal atlas
labs = aal_get_labels()
#set up table: 2 rows [brain area] and [number of mentions]
df <- read.csv(file = "/Users/manojarachige/Documents/Coding/BmedScDOC1/BMedScDOC2021/R_BMedScDOC/Full_Run/test_gif.csv", header = FALSE)
df <- df[c(1:2), ]
df <- t(df)
df <- df[-1, ]
#cleanup
df <- as_tibble(df)
df <- rename(df, Number = 2)
df <- rename(df, Area = 1)
df$Number <- as.numeric(as.character(df$Number))
df <- df[order(df$Number),]
#Get colfunc
colfunc <- colorRampPalette(c("blue", "red"))
y <- nrow(df)
colfunc(y)
plot(rep(1,y),col=colfunc(y),pch=19,cex=3)
#add colfunc to df as a new column
df <- cbind(df,colfunc(y))
#contour for MNI template
contour3d(template, x=1:dtemp[1], y=1:dtemp[2], z=1:dtemp[3], level = cut, alpha = 0.1, draw = TRUE)
#iterate over dataframe
for (row in 1:nrow(df)){
area = labs$index[grep(df[row,1], labs$name)]
mask = remake_img(vec = img %in% area, img = img)
contour3d(mask, level = c(0.5), alpha = c(0.5), add = TRUE, color=c(df[row,3]) )
}
#MOVIE CREATE
### add text
text3d(x=dtemp[1]/2, y=dtemp[2]/2, z = dtemp[3]*0.98, text="Top")
text3d(x=-0.98, y=dtemp[2]/2, z = dtemp[3]/2, text="Right")
#create movie
movie3d(spin3d(),duration=5, fps = 5, startTime = 0, movie = "movie", convert = TRUE, type = "gif", top = TRUE)
| /Visualisation/brain_gif_v3.R | no_license | GNtem2/BMedScDOC_Scripts | R | false | false | 1,704 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/montyhall.R
\name{select_door}
\alias{select_door}
\title{Select Door Function.}
\usage{
select_door()
}
\arguments{
\item{No}{arguments are used by the function.}
}
\value{
The function returns a length one character vector which is a number
between 1 and 3 and represents the number of the initial door selected.
The returned value is assigned to the variable \code{a.pick}.
}
\description{
\code{select_door()} is the function which simulates the user selecting a
random door.
}
\details{
The initial door selection simulates the player choosing one of three
closed doors. There are goats behind two of the doors and a car
behind one of the doors. At this point in the game all three doors are
still closed so the player doesn't know if they picked the car door or
a goat door.
}
\examples{
select_door()
}
| /man/select_door.Rd | no_license | RTrehern-ASU/montyhall | R | false | true | 893 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/s3_operations.R
\name{s3_list_multipart_uploads}
\alias{s3_list_multipart_uploads}
\title{This action lists in-progress multipart uploads}
\usage{
s3_list_multipart_uploads(
Bucket,
Delimiter = NULL,
EncodingType = NULL,
KeyMarker = NULL,
MaxUploads = NULL,
Prefix = NULL,
UploadIdMarker = NULL,
ExpectedBucketOwner = NULL
)
}
\arguments{
\item{Bucket}{[required] The name of the bucket to which the multipart upload was initiated.
When using this action with an access point, you must direct requests to
the access point hostname. The access point hostname takes the form
\emph{AccessPointName}-\emph{AccountId}.s3-accesspoint.\emph{Region}.amazonaws.com.
| /man/s3_list_multipart_uploads.Rd | no_license | cran/paws.storage | R | false | true | 4,085 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/s3_operations.R
\name{s3_list_multipart_uploads}
\alias{s3_list_multipart_uploads}
\title{This action lists in-progress multipart uploads}
\usage{
s3_list_multipart_uploads(
Bucket,
Delimiter = NULL,
EncodingType = NULL,
KeyMarker = NULL,
MaxUploads = NULL,
Prefix = NULL,
UploadIdMarker = NULL,
ExpectedBucketOwner = NULL
)
}
\arguments{
\item{Bucket}{[required] The name of the bucket to which the multipart upload was initiated.
When using this action with an access point, you must direct requests to
the access point hostname. The access point hostname takes the form
\emph{AccessPointName}-\emph{AccountId}.s3-accesspoint.\emph{Region}.amazonaws.com.
When using this action with an access point through the Amazon Web
Services SDKs, you provide the access point ARN in place of the bucket
name. For more information about access point ARNs, see \href{https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html}{Using access points}
in the \emph{Amazon S3 User Guide}.
When you use this action with Amazon S3 on Outposts, you must direct
requests to the S3 on Outposts hostname. The S3 on Outposts hostname
takes the form
\code{ AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com}.
When you use this action with S3 on Outposts through the Amazon Web
Services SDKs, you provide the Outposts access point ARN in place of the
bucket name. For more information about S3 on Outposts ARNs, see \href{https://docs.aws.amazon.com/AmazonS3/latest/userguide/S3onOutposts.html}{What is S3 on Outposts}
in the \emph{Amazon S3 User Guide}.}
\item{Delimiter}{Character you use to group keys.
All keys that contain the same string between the prefix, if specified,
and the first occurrence of the delimiter after the prefix are grouped
under a single result element, \code{CommonPrefixes}. If you don't specify
the prefix parameter, then the substring starts at the beginning of the
key. The keys that are grouped under the \code{CommonPrefixes} result element
are not returned elsewhere in the response.}
\item{EncodingType}{}
\item{KeyMarker}{Together with upload-id-marker, this parameter specifies the multipart
upload after which listing should begin.
If \code{upload-id-marker} is not specified, only the keys lexicographically
greater than the specified \code{key-marker} will be included in the list.
If \code{upload-id-marker} is specified, any multipart uploads for a key
equal to the \code{key-marker} might also be included, provided those
multipart uploads have upload IDs lexicographically greater than the
specified \code{upload-id-marker}.}
\item{MaxUploads}{Sets the maximum number of multipart uploads, from 1 to 1,000, to return
in the response body. 1,000 is the maximum number of uploads that can be
returned in a response.}
\item{Prefix}{Lists in-progress uploads only for those keys that begin with the
specified prefix. You can use prefixes to separate a bucket into
different groupings of keys. (You can think of using a prefix to make
groups in the same way you'd use a folder in a file system.)}
\item{UploadIdMarker}{Together with key-marker, specifies the multipart upload after which
listing should begin. If key-marker is not specified, the
upload-id-marker parameter is ignored. Otherwise, any multipart uploads
for a key equal to the key-marker might be included in the list only if
they have an upload ID lexicographically greater than the specified
\code{upload-id-marker}.}
\item{ExpectedBucketOwner}{The account ID of the expected bucket owner. If the bucket is owned by a
different account, the request fails with the HTTP status code
\verb{403 Forbidden} (access denied).}
}
\description{
This action lists in-progress multipart uploads. An in-progress multipart upload is a multipart upload that has been initiated using the Initiate Multipart Upload request, but has not yet been completed or aborted.
See \url{https://www.paws-r-sdk.com/docs/s3_list_multipart_uploads/} for full documentation.
}
\keyword{internal}
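The \code{KeyMarker}/\code{UploadIdMarker} semantics documented above are what drive pagination. A minimal sketch of paging through all in-progress uploads with this paws function (the bucket name is a placeholder; assumes credentials and region are configured in the environment):

```r
library(paws.storage)

svc <- s3()  # client picks up credentials/region from the environment
key_marker <- NULL
upload_id_marker <- NULL
uploads <- list()
repeat {
  resp <- svc$list_multipart_uploads(
    Bucket = "my-example-bucket",  # placeholder bucket name
    KeyMarker = key_marker,
    UploadIdMarker = upload_id_marker,
    MaxUploads = 1000
  )
  uploads <- c(uploads, resp$Uploads)
  if (!isTRUE(resp$IsTruncated)) break
  # A truncated response carries the markers for the next page
  key_marker <- resp$NextKeyMarker
  upload_id_marker <- resp$NextUploadIdMarker
}
```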
install.packages("Boruta")
library(Boruta)
setwd('/Users/hinagandhi/Desktop')
#testdata <- subset(testdata, select=c(2:15,17))
Boston<-read.csv("dataset_new.csv")
Boston <- subset(Boston, select=c(1:16,18))
#View(Boston)
set.seed(123)
boruta.train <- Boruta(recession~., data = Boston, doTrace = 2)
print(boruta.train)
plot(boruta.train, xlab = "", xaxt = "n")
lz<-lapply(1:ncol(boruta.train$ImpHistory),function(i)
boruta.train$ImpHistory[is.finite(boruta.train$ImpHistory[,i]),i])
names(lz) <- colnames(boruta.train$ImpHistory)
Labels <- sort(sapply(lz,median))
axis(side = 1,las=2,labels = names(Labels),
at = 1:ncol(boruta.train$ImpHistory), cex.axis = 0.7)
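Beyond plotting the importance history, Boruta exposes accessors for the final decisions; a short follow-up sketch (uses only functions exported by the Boruta package):

```r
# Names of the attributes Boruta confirmed as important
selected <- getSelectedAttributes(boruta.train, withTentative = FALSE)
print(selected)
# Force a decision on any attributes still marked Tentative
final.boruta <- TentativeRoughFix(boruta.train)
print(attStats(final.boruta))
```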
| /Economic Analysis/R scripts/parameter_selection_recession.R | no_license | hinagandhi/Datascience-Projects | R | false | false | 675 | r |
testlist <- list(id = NULL, id = NULL, booklet_id = c(8168473L, 2127314835L, 171177770L, -1942759639L, -1814565844L, 601253144L, -804651186L, 2094281728L, 860713787L, -971707632L, -1475044502L, 870040598L, -1182814578L, -1415711445L, 1901326755L, -1882837573L, 1340545259L, 1156041943L, 823641812L, -1106109928L, -1048157941L), person_id = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L))
result <- do.call(dexterMST:::is_person_booklet_sorted,testlist)
str(result)
| /dexterMST/inst/testfiles/is_person_booklet_sorted/AFL_is_person_booklet_sorted/is_person_booklet_sorted_valgrind_files/1615939083-test.R | no_license | akhikolla/updatedatatype-list1 | R | false | false | 826 | r |
#'This function will look for any requests with "NA" as first problem type
#'and will create a spreadsheet with the ticket number and the assignee.
#'The function should exclude the merged ticket, which have a specific tag.
NANA <- function(day, nana_qu){
## Libraries
require(httr)
require(stringr)
require(magrittr)
require(openxlsx)
require(dplyr)
require(purrr)
apikey <- Sys.getenv("fd_api")
agents_name <- c('Amit',
'Andrea',
'Carina',
'Chris',
'Emillia',
'Michel',
'Nadja',
'Rachid',
'Vera')
agents_id <- c(19000532180,
19000445734,
19000562176,
19012915923,
19014387275,
19000532181,
19000532183,
19000532182,
19011097494)
agents_email <- c('amit_gujral@trimble.com',
'andrea_piccioni@trimble.com',
'carina_pamminger@trimble.com',
'chris_goodwin@trimble.com',
'emillia_nlandu@trimble.com',
'michel_diomande@trimble.com',
'nadja_mbomson@trimble.com',
'rachid_boukerche@trimble.com',
'vera_boteva@trimble.com')
agents <- as.data.frame(cbind(agents_id,agents_name,agents_email), stringsAsFactors = FALSE)
ini_curl <-
"https://alk.freshdesk.com/api/v2/search/tickets?"
query_curl <- "query=\"group_id:19000105695%20AND%20(status:4%20OR%20status:5)%20AND%20created_at:>%27"
fin_curl <- "%27\"&page="
page_num <- 1
full_curl <- paste0(ini_curl, query_curl, day, fin_curl, page_num)
req <- GET(full_curl, authenticate(apikey, "X", type = "basic"))
req_content <- content(req) ## storing the content of the call
tckt_list <- req_content[[1]]
rev_descr <- length(req_content[[1]])
while (rev_descr == 30) {
page_num <- page_num + 1
full_curl <- paste0(ini_curl, query_curl, day, fin_curl, page_num)
req <- GET(full_curl, authenticate(apikey, "X", type = "basic"))
req_content <- content(req)
tckt_list <- append(tckt_list, req_content[[1]])
rev_descr <- length(req_content[[1]])
}
cess <- tckt_list%>%map_lgl(is.list) ## to resolve the 301 line
## from here it changes to get the more details
descr_txt <- data.frame(matrix(ncol = 5, nrow = 0))
colnames(descr_txt) <- c(
"Ticket",
"Subject",
"P_type",
"Assignee",
"Tags"
)
  for (i in which(cess)) { # index only the entries that are actually ticket lists
if(class(tckt_list[i][[1]][['responder_id']]) == 'numeric'){
descr_txt[i, 1] <- tckt_list[i][[1]][['id']]
descr_txt[i, 2] <- tckt_list[i][[1]][['subject']]
if(length(tckt_list[i][[1]][['custom_fields']][['main_problem_type']]) > 0){
descr_txt[i, 4] <-
agents$agents_name[agents$agents_id==tckt_list[i][[1]][['responder_id']]]
##descr_txt[i, 4] <- tckt_list[i][[1]][['responder_id']]
descr_txt[i, 3] <- tckt_list[i][[1]][['custom_fields']][['main_problem_type']]
}else{descr_txt[i, 3] <- ''}
if(length(tckt_list[i][[1]][['tags']]) > 0){
descr_txt[i, 5] <- tckt_list[i][[1]][['tags']]
}
}
}
na_tbl <- descr_txt %>%
filter(P_type == "NA") %>%
filter(!grepl("Merged", Tags)) %>%
filter(!grepl("merged", Tags))
rich_num <- nrow(na_tbl)
nana_mess <- paste0("Found",' ', rich_num,' ', "requests missing the Problem Type. Do you want to notify the assignees (Y/N)?")
nana_qu <- readline(nana_mess)
if(toupper(nana_qu) == 'Y'){
# adding a note
    curl_start <- paste0('curl -v -u ', apikey, ':X -H "Content-Type: application/json" -X POST -d "{\\\"body\\\":\\\"') # use the API key from the environment rather than a hardcoded credential
curl_msg <- "Please, select the Problem type and the Issue type that apply. Thank you! "
curl_x <- '\\\"}" https://alk.freshdesk.com/api/v2/tickets/'
curl_end<- "/notes"
    for(j in seq_len(nrow(na_tbl))){ # iterate over rows, not columns
ticket_id <- na_tbl[j, 1]
user_id <- na_tbl[j, 2]
url <-
paste0(curl_start, curl_msg, curl_x, ticket_id, curl_end)
shell(url)
}
print(na_tbl)
}else if(toupper(nana_qu) == 'N'){
# provide table only
print(na_tbl)
}
}
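A hypothetical invocation, for illustration (the `day` value feeds the `created_at:>` filter in the search query; `nana_qu` can be omitted because it is overwritten by the interactive `readline` prompt before it is ever read):

```r
# Review tickets resolved/closed since the given date for a missing Problem Type
NANA(day = "2023-01-01")
```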
| /fd_nana.R | no_license | andpiccioni/freshdesk | R | false | false | 4,352 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/spanner_objects.R
\name{PlanNode.executionStats}
\alias{PlanNode.executionStats}
\title{PlanNode.executionStats Object}
\usage{
PlanNode.executionStats()
}
\value{
PlanNode.executionStats object
}
\description{
PlanNode.executionStats Object
}
\details{
Autogenerated via \code{\link[googleAuthR]{gar_create_api_objects}}
The execution statistics associated with the node, contained in a group of key-value pairs. Only present if the plan was returned as a result of a profile query. For example, number of executions, number of rows/time per execution etc.
}
\seealso{
Other PlanNode functions: \code{\link{PlanNode.metadata}},
\code{\link{PlanNode}}
}
| /googlespannerv1.auto/man/PlanNode.executionStats.Rd | permissive | GVersteeg/autoGoogleAPI | R | false | true | 731 | rd |
# Issue with where to write objects... can't use tempdir() across sessions
context("Test serialization with cereal package")
old_serialize_option <- get_nOption("serialize")
set_nOption("serialize", TRUE)
test_that("Basic serialization works",
{
nc1 <- nClass(
classname = "nc1",
Rpublic = list(
Rv = NULL,
Rfoo = function(x) x+1
),
Cpublic = list(
Cv = 'numericScalar',
Cx = 'integerScalar',
Cfoo = nFunction(
fun = function(x) {
return(x+1)
},
argTypes = list(x = 'numericScalar'),
returnType = 'numericScalar'),
Cbar = nFunction(
fun = function(x, y) {
return(x + y)
},
argTypes = list(x = 'numericMatrix',
y = 'numericMatrix'),
returnType = 'numericMatrix')
)
)
set_nOption("showCompilerOutput", TRUE)
set_nOption("pause_after_writing_files", TRUE)
ans <- try(nCompile_nClass(nc1, interface = "generic"))
expect_true(is.function(ans)) ## compilation succeeded
obj <- ans()
expect_true(nCompiler:::is.loadedObjectEnv(obj))
expect_equal(method(obj, "Cfoo")(1.2), 2.2)
value(obj, "Cv") <- 1.23
expect_equal(value(obj, "Cv"), 1.23)
value(obj, "Cx") <- 3
expect_equal(value(obj, "Cx"), 3L)
test <- parent.env(obj)$nComp_serialize_( nCompiler:::getExtptr(obj) )
untest <- parent.env(obj)$nComp_deserialize_(test)
  # debug(nCompiler:::serialize_nComp_object) # leftover interactive debugging call; keep disabled in tests
test2 <- nCompiler:::serialize_nComp_object(obj)
nCompiler:::get_value(untest, "Cv")
serialized <-
nCompiler:::serialize_nComp_object(obj, nComp_serialize_nc1)
expect_true(nCompiler:::is.loadedObjectEnv(serialized))
deserialized <-
nCompiler:::deserialize_nComp_object(serialized,
nComp_deserialize_nc1)
expect_true(nCompiler:::is.loadedObjectEnv(serialized))
expect_equal(value(deserialized, "Cv"), 1.23)
x <- matrix(as.numeric(1:6), nrow = 2)
y <- matrix(as.numeric(101:106), nrow = 2)
expect_equal(
method(deserialized, "Cbar")(
x, y
),
x+y)
})
# The below block uses to calls to `RScript` to test nClass saving/loading in
# distinct sessions. See scripts in the dir testthat/serialization_test_utils/
delete_dir <- FALSE
if (!dir.exists("testserial_nCompInternalOnly")) {
dir.create("testserial_nCompInternalOnly")
delete_dir <- TRUE
}
test_that("Saving and loading nClasses across sessions works (generic interface)", {
system(paste0("Rscript ",
system.file(file.path('tests', 'testthat',
'serialization_test_utils',
'testserial_save_Generic.R'),
package = 'nCompiler')))
system(paste0("Rscript ",
system.file(file.path('tests', 'testthat',
'serialization_test_utils',
'testserial_read_Generic.R'),
package = 'nCompiler')))
})
test_that("Saving and loading nClasses across sessions works (full interface)", {
system(paste0("Rscript ",
system.file(file.path('tests', 'testthat',
'serialization_test_utils',
'testserial_save_Full.R'),
package = 'nCompiler')))
system(paste0("Rscript ",
system.file(file.path('tests', 'testthat',
'serialization_test_utils',
'testserial_read_Full.R'),
package = 'nCompiler')))
})
test_that("Saving and loading mult nClasses across sessions works", {
system(paste0("Rscript ",
system.file(file.path('tests', 'testthat',
'serialization_test_utils',
'testserial_save_Multiple.R'),
package = 'nCompiler')))
system(paste0("Rscript ",
system.file(file.path('tests', 'testthat',
'serialization_test_utils',
'testserial_read_Multiple.R'),
package = 'nCompiler')))
})
test_that("Saving and loading an nClass from a specified package works", {
system(paste0("Rscript ",
system.file(file.path('tests', 'testthat',
'serialization_test_utils',
'testserial_create_Package.R'),
package = 'nCompiler')))
system(paste0("Rscript ",
system.file(file.path('tests', 'testthat',
'serialization_test_utils',
'testserial_save_Package.R'),
package = 'nCompiler')))
system(paste0("Rscript ",
system.file(file.path('tests', 'testthat',
'serialization_test_utils',
'testserial_read_Package.R'),
package = 'nCompiler')))
})
if (delete_dir) unlink("testserial_nCompInternalOnly", recursive = TRUE)
test_that("Serialization works for Rcpp types", {
# Can we serialize and retrieve all supported Rcpp types?
rcpp_supported_types <- c(
"RcppNumericVector",
"RcppNumericMatrix",
"RcppIntegerVector",
"RcppIntegerMatrix",
"RcppLogicalVector",
"RcppLogicalMatrix",
"RcppCharacterVector",
"RcppCharacterMatrix",
"RcppComplexVector",
"RcppComplexMatrix",
# "RcppDateVector", # Doesn't work
# "RcppDatetimeVector", # Doesn't work
"RcppRawVector",
# "RcppDataFrame", # Doesn't work
"RcppS4"#,
# "RcppFunction", # Doesn't work
# "RcppEigenMatrixXd", # Doesn't work
# "RcppEigenMatrixXi", # Doesn't work
# "RcppEigenMatrixXcd", # Doesn't work
# "RcppEigenVectorXd", # Doesn't work
# "RcppEigenVectorXi", # Doesn't work
# "RcppEigenVectorXcd" # Doesn't work
)
IntMat <- matrix(1:9, nrow = 3)
DblMat <- matrix(1:9 / 10, nrow = 3)
LogMat <- IntMat %% 2 == 0
ChrMat <- matrix(as.character(1:9), nrow = 3)
CpxMat <- matrix(1:9 + 2i, nrow = 3)
track <- setClass("track", slots = c(x="numeric", y="numeric"))
t1 <- track(x = 1:10, y = 1:10 + rnorm(10))
# The following is a list of objects with types corresponding to the list of
# supported Rcpp types above.
rcpp_type_values <- list(as.numeric(DblMat), DblMat,
as.numeric(IntMat), IntMat,
as.logical(LogMat), LogMat,
as.character(ChrMat), ChrMat,
as.complex(CpxMat), CpxMat,
#rep(as.Date(c("2010-01-01", "2011-02-03")), 2),
#rep(as.POSIXct(c(Sys.time(), Sys.time() + 100900)), 2),
serialize(ChrMat, connection = NULL),
#data.frame(x = 1:9, y = (1:9)^2),
t1# rnorm,
#DblMat, IntMat, CpxMat,
#as.numeric(DblMat), as.numeric(IntMat),
#as.complex(CpxMat)
)
compare_fn <- list(all.equal.numeric, all.equal.numeric,
all.equal.numeric, all.equal.numeric,
all.equal, all.equal,
all.equal.character, all.equal.character,
all.equal, all.equal,
#all.equal, all.equal,
all.equal.raw,
#identical,
identical#,
#identical,
#all.equal, all.equal, all.equal,
#all.equal, all.equal, all.equal
)
test_rcpp_serial_class <- function(type, value, compfn) {
name <- paste0("nc_", type)
nc1 <- nClass(classname = name,
Cpublic = list(x = type))
nc1_generator <- nCompile_nClass(nc1, interface = "generic")
my_nc1 <- nc1_generator[[1]]()
value(my_nc1, "x") <- value
serialized <- serialize_nComp_object(my_nc1,
get(paste0("nComp_serialize_", name)))
deserialized <- deserialize_nComp_object(serialized,
get(paste0("nComp_deserialize_", name)))
return(compfn(value(deserialized, "x"), value(my_nc1, "x")))
}
for (i in 1:length(rcpp_supported_types)) {
# cat(i, "\n")
expect_true(
test_rcpp_serial_class(value = rcpp_type_values[[i]],
type = rcpp_supported_types[i],
compfn = compare_fn[[i]])
)
}
})
test_that("Serialize with a bad classname", {
nc1 <- nClass(
classname = "nc one",
Rpublic = list(
Rv = NULL,
Rfoo = function(x) x+1
),
Cpublic = list(
Cv = 'numericScalar',
Cx = 'integerScalar',
Cfoo = nFunction(
fun = function(x) {
return(x+1)
},
argTypes = list(x = 'numericScalar'),
returnType = 'numericScalar'),
Cbar = nFunction(
fun = function(x, y) {
return(x + y)
},
argTypes = list(x = 'numericMatrix',
y = 'numericMatrix'),
returnType = 'numericMatrix')
)
)
ans <- try(nCompile_nClass(nc1, interface = "generic"))
expect_true(is.function(ans[[1]])) ## compilation succeeded
})
set_nOption("serialize", old_serialize_option)
| /nCompiler/tests/testthat/test-serialization.R | permissive | nimble-dev/nCompiler | R | false | false | 10,329 | r |
library(Rcmdr)
library(dplyr)
#Read data on flowering stages and temperatures
data17_stages<-read.table("data/clean/cerastium_2017_stages.csv",header=T,sep="\t",dec=",")
head(data17_stages)
str(data17_stages)
data17_stages$temp<-as.numeric(sub(",", ".", as.character(data17_stages$temp), fixed = TRUE))
with(data17_stages,aggregate(id~plot,FUN=length)) #See number of ids per plot
data17_stages$date<-as.Date(as.character(data17_stages$date),format="%d/%m/%Y")
# Flowering stages for Cerastium fontanum:
# Stage Description
# VS (Vegetative: small) Only vegetative growth, the plant is < 2 cm
# VL (Vegetative: large) Only vegetative growth, the plant is > 2 cm
# B1 (Bud stage 1) Buds are just starting to form, very small
# B2 (Bud stage 2) Buds are at medium size
# B3 (Bud stage 3) Buds are large but still completely closed
# B4 (Bud stage 4) Buds are large and almost starting to flower
# FL (Flowering) At least one flower has opened
# FL100 (100% flowering) All flowers have opened, none are yet wilted
# W (Wilted) At least one flower has wilted
# W100 (100% wilted) All flowers are wilted
#
# Stages to account for grazing damage and lost or dead plants:
# Stage Description
# X (Lost) Neither plant nor nail was found that day
# D (Dead) Plant still in place but dead
# G (Grazed/gone) Nail was found but no sign of plant
# SG (Stem grazed/gone) All stems grazed/gone but with visible stubs
# /G (Grazing damage) One or more stems have been grazed but there are still some left, phenology is measured for the one that is most mature
levels(data17_stages$stage) #See the levels
#Correct stage values
data17_stages$stage_corr<-as.character(data17_stages$stage)
data17_stages$stage_corr[data17_stages$stage_corr=="B1/G"] <- "B1"
data17_stages$stage_corr[data17_stages$stage_corr=="B1/SG"] <- "B1"
data17_stages$stage_corr[data17_stages$stage_corr=="B2/G"] <- "B2"
data17_stages$stage_corr[data17_stages$stage_corr=="B3/G"] <- "B3"
data17_stages$stage_corr[data17_stages$stage_corr=="B4/G"] <- "B4"
data17_stages$stage_corr[data17_stages$stage_corr=="FL/G"] <- "FL"
data17_stages$stage_corr[data17_stages$stage_corr=="VL "] <- "VL"
data17_stages$stage_corr[data17_stages$stage_corr=="VL/G"] <- "VL"
data17_stages$stage_corr[data17_stages$stage_corr=="W "] <- "W"
data17_stages$stage_corr[data17_stages$stage_corr=="W/G"] <- "W"
data17_stages$stage_corr[data17_stages$stage_corr=="W/SG"] <- "W"
data17_stages$stage_corr[data17_stages$stage_corr=="W10"] <- "W100"
data17_stages$stage_corr[data17_stages$stage_corr=="W100 "] <- "W100"
data17_stages$stage_corr[data17_stages$stage_corr=="W100/G"] <- "W100"
data17_stages$stage_corr[data17_stages$stage_corr=="comment"] <- "wrong"
data17_stages$stage_corr[data17_stages$stage_corr=="D"] <- "wrong"
data17_stages$stage_corr[data17_stages$stage_corr=="G"] <- "wrong"
data17_stages$stage_corr[data17_stages$stage_corr=="missing data"] <- "wrong"
data17_stages$stage_corr[data17_stages$stage_corr=="SG"] <- "wrong"
data17_stages$stage_corr[data17_stages$stage_corr=="WRONG"] <- "wrong"
data17_stages$stage_corr[data17_stages$stage_corr=="wrong sp/"] <- "wrong"
data17_stages$stage_corr[data17_stages$stage_corr=="wrong species"] <- "wrong"
data17_stages$stage_corr[data17_stages$stage_corr=="X"] <- "wrong"
data17_stages$stage_corr[data17_stages$stage_corr=="X "] <- "wrong"
data17_stages$stage_corr[data17_stages$stage_corr=="X/G"] <- "wrong"
data17_stages$stage_corr<-as.factor(data17_stages$stage_corr)
levels(data17_stages$stage_corr)
nrow(data17_stages)
#To remove
subset(data17_stages,stage_corr=="wrong")
nrow(subset(data17_stages,stage_corr=="wrong")) #287 rows
unique(subset(data17_stages,stage_corr=="wrong")$id) #from 104 unique ids
#Select data with meaningful stages
data17_stages_complete<-subset(data17_stages,!stage_corr=="wrong")
nrow(data17_stages_complete)
data17_stages_complete$stage_corr<-droplevels(data17_stages_complete$stage_corr)
levels(data17_stages_complete$stage_corr)
#Calculate stage in numeric
data17_stages_complete <- within(data17_stages_complete, {
stage_num <- Recode(stage_corr,
'"VS"=1; "VL"=2; "B1"=3; "B2"=4; "B3"=5; "B4"=6; "B5"=7; "FL"=8; "FL100"=9; "W"=10; "W50"=11; "W100"=12',
as.factor.result=F)})
str(data17_stages_complete)
hist(data17_stages_complete$stage_num)
subset(data17_stages_complete,stage_num==7) #Only 1, recode?
subset(data17_stages_complete,stage_num==9) #Only 6, recode?
subset(data17_stages_complete,stage_num==11) #Only 10, recode?
with(data17_stages_complete,aggregate(id~plot,FUN=length)) #See number of ids per plot
with(subset(data17_stages_complete,stage_num>=8),aggregate(id~plot,FUN=length)) #See number of ids per plot that actually flowered
#Calculate flowering/not flowering as 1/0
data17_stages_complete <- within(data17_stages_complete, {
stage_bin <- Recode(stage_corr,
'"VS"=0; "VL"=0; "B1"=0; "B2"=0; "B3"=0; "B4"=0; "B5"=0; "FL"=1; "FL100"=1; "W"=1; "W50"=1; "W100"=1',
as.factor.result=F)})
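# The Rcmdr::Recode calls above can also be written with a plain named
# lookup vector in base R (sketch; assumes the same corrected stage levels):
# bin_lookup <- c(VS=0, VL=0, B1=0, B2=0, B3=0, B4=0, B5=0,
#                 FL=1, FL100=1, W=1, W50=1, W100=1)
# data17_stages_complete$stage_bin <-
#   unname(bin_lookup[as.character(data17_stages_complete$stage_corr)])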
# last_nfl=max date when stage_bin=0 - last date as vegetative
# first_fl=min data when stage_bin=1 - first date as flowering
last_nfl<-data.frame(subset(data17_stages_complete,stage_bin==0) %>%
group_by(id) %>%
summarise(max_date= max(date)))
first_fl<-data.frame(subset(data17_stages_complete,stage_bin==1) %>%
group_by(id) %>%
summarise(min_date= min(date)))
nrow(last_nfl) #417
nrow(first_fl) #300
head(last_nfl)
head(first_fl)
FFD<-merge(last_nfl,first_fl,all.x=T,all.y=T)
FFD$FFD<-(FFD$min_date-FFD$max_date) #interval length in days (overwritten just below)
FFD$FFD<-as.Date(rowMeans(cbind(FFD$min_date,FFD$max_date)),origin="1970-01-01") #FFD estimated as the interval midpoint
head(FFD)
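# Optional sanity check (sketch): for plants with both a vegetative and a
# flowering record, the first flowering date should not precede the last
# vegetative date
# with(subset(FFD,!is.na(min_date) & !is.na(max_date)), all(min_date>=max_date))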
# Mean temperature for each plant
temp<-data.frame(data17_stages_complete %>%
group_by(id) %>%
summarise(mean_temp= mean(temp,na.rm=T),min_temp= min(temp,na.rm=T),max_temp= max(temp,na.rm=T),
sd_temp=sd(temp,na.rm=T)))
temp$diff_temp<-temp$max_temp-temp$min_temp
hist(temp$diff_temp)
nrow(temp) #417 pls w temp
nrow(subset(temp,diff_temp>5)) #54 pls w temp diff > 5 - 13%
nrow(subset(temp,diff_temp>10)) #5 pls w temp diff >10 - 1%
#Add temperature in first recording
temp<-merge(temp,subset(data17_stages_complete,record==1)[c(2,9)]) #cols 2 and 9: id and temp at the first record
temp$temp1<-temp$temp
temp$temp<-NULL
head(temp)
#Merge data FFD and temperature
FFD_temp<-merge(FFD[c(1,4)],temp[c(1:2,7)])
FFD_temp<-merge(FFD_temp,unique(data17_stages_complete[2:3])) #Add plot
head(FFD_temp)
str(FFD_temp)
write.table(FFD_temp,"data/clean/cerastium_2017_FFD_temp.txt",sep="\t",dec=".")
| /code/7_data_prep_2017.R | no_license | aliciavaldes1501/cerastium | R | false | false | 6,527 | r |
# Make_chrtM8() monthly hours worked by industry charts function
# June 6, 2021
pkgs <- c("tidyverse","scales","tibble","stringr","rlang","lubridate")
inst <- lapply(pkgs,library,character.only=TRUE)
source("Common_stuff.R")
Make_chrtM8 <- function(MYtitl,geo,type,month1,month2,altTitl,interv) {
prov <- case_when(
geo=="Canada"~"CA",
geo=="Newfoundland and Labrador"~"NL",
geo=="Prince Edward Island"~"PE",
geo=="Nova Scotia"~"NS",
geo=="New Brunswick"~"NB",
geo=="Quebec"~"QC",
geo=="Ontario"~"ON",
geo=="Manitoba"~"MB",
geo=="Saskatchewan"~"SK",
geo=="Alberta"~"AB",
geo=="British Columbia"~"BC"
)
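# The case_when() above is equivalent to indexing a named lookup vector
# (sketch):
# prov_abbr <- c("Canada"="CA","Newfoundland and Labrador"="NL",
#   "Prince Edward Island"="PE","Nova Scotia"="NS","New Brunswick"="NB",
#   "Quebec"="QC","Ontario"="ON","Manitoba"="MB","Saskatchewan"="SK",
#   "Alberta"="AB","British Columbia"="BC")
# prov <- unname(prov_abbr[geo])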
q0 <- readRDS(paste0("rds/q1",prov,".rds"))
if (altTitl=="") {ChrtTitl <- paste0(MYtitl)}
if (altTitl!="") {ChrtTitl <- altTitl}
Fmth <- format(month1,"%b %Y")
Lmth <- format(month2,"%b %Y")
if (type==1) {
MYsubtitl=paste0("Hours worked in ",geo,"\n",
"Monthly, ",Fmth," to ",Lmth,", ",TS[[8]]$Seas)
q1 <- mutate(q0,val=.data[[MYtitl]])
q1 <- filter(q1,REF_DATE>=month1 & REF_DATE<=month2)
c1 <- ggplot(q1,
aes(x=REF_DATE,y=val))+
geom_line(colour="black",size=1.5)+
scale_y_continuous(labels=scales::comma)
if(posNeg(q1$val)) {
c1 <- c1+ geom_hline(yintercept=0,size=0.4,colour="black",
linetype="dashed")
}
} else if (type==2) {
MYsubtitl=paste0("Hours worked in ",geo,"\n",
"\nIncluding trend line\nMonthly, ",Fmth," to ",
Lmth,", ",TS[[8]]$Seas)
q1 <- mutate(q0,val=.data[[MYtitl]])
q1 <- filter(q1,REF_DATE>=month1 & REF_DATE<=month2)
c1 <- ggplot(q1,
aes(x=REF_DATE,y=val))+
geom_line(colour="black",size=1.5)+
geom_smooth(method="lm",se=FALSE,linetype="dashed")+
scale_y_continuous(labels=scales::comma)
if(posNeg(q1$val)) {
c1 <- c1+ geom_hline(yintercept=0,size=0.4,colour="black",
linetype="dashed")
}
} else if (type==3) {
MYsubtitl=paste0("Hours worked in ",geo,"\n",
"\nIndex with starting month = 100\nMonthly, ",Fmth," to ",
Lmth,", ",TS[[8]]$Seas)
q0 <- filter(q0,REF_DATE>=month1 & REF_DATE<=month2)
q1 <- mutate(q0,val=IDX(.data[[MYtitl]]))
c1 <- ggplot(q1,
aes(x=REF_DATE,y=val))+
geom_line(colour="black",size=1.5)+
scale_y_continuous()
if(posNeg(q1$val)) {
c1 <- c1+ geom_hline(yintercept=0,size=0.4,colour="black",
linetype="dashed")
}
} else if (type==4) {
MYsubtitl=paste0("Hours worked in ",geo,"\n",
"\nOne-month percentage change\nMonthly, ",Fmth," to ",
Lmth,", ",TS[[8]]$Seas)
q1 <- mutate(q0,val=PC(.data[[MYtitl]])/100)
q1 <- filter(q1,REF_DATE>=month1 & REF_DATE<=month2)
c1 <- ggplot(q1,
aes(x=REF_DATE,y=val))+
geom_col(fill="gold",colour="black",size=0.2)+
scale_y_continuous(labels=scales::percent)
if(posNeg(q1$val)) {
c1 <- c1+ geom_hline(yintercept=0,size=0.4,colour="black",
linetype="dashed")
}
} else if (type==5) {
MYsubtitl=paste0("Hours worked in ",geo,"\n",
"\nTwelve-month percentage change\nMonthly, ",Fmth," to ",
Lmth,", ",TS[[8]]$Seas)
q1 <- mutate(q0,val=PC12(.data[[MYtitl]])/100)
q1 <- filter(q1,REF_DATE>=month1 & REF_DATE<=month2)
c1 <- ggplot(q1,
aes(x=REF_DATE,y=val))+
geom_col(fill="gold",colour="black",size=0.2)+
scale_y_continuous(labels=scales::percent)
if(posNeg(q1$val)) {
c1 <- c1+ geom_hline(yintercept=0,size=0.4,colour="black",
linetype="dashed")
}
} else if (type==6) {
MYsubtitl=paste0("Hours worked in ",geo,"\n",
"\nThirteen-month centred moving average (dashed blue line)\nMonthly, ",
Fmth," to ",Lmth,", ",TS[[8]]$Seas)
q1 <- mutate(q0,val=MA13(.data[[MYtitl]]))
q1 <- filter(q1,REF_DATE>=month1 & REF_DATE<=month2)
c1 <- ggplot(q1,
aes(x=REF_DATE,y=val))+
geom_line(colour="blue",size=1.5,linetype="dashed")+
geom_line(aes(x=REF_DATE,y=.data[[MYtitl]]),colour="black",size=1.5)+
scale_y_continuous(labels=scales::comma)
if(posNeg(q1$val)) {
c1 <- c1+ geom_hline(yintercept=0,size=0.4,colour="black",
linetype="dashed")
}
}
if (datDif(month1,month2)>20 & interv=="") {
interv <- "36 months"
} else if (datDif(month1,month2)>10 & interv=="") {
interv <- "12 months"
} else if (datDif(month1,month2)>5 & interv=="") {
interv <- "3 months"
} else if (datDif(month1,month2)>2 & interv=="") {
interv <- "2 months"
} else if (interv=="") {
interv <- "1 month"
}
c1 <- c1 + scale_x_date(breaks=seq.Date(month1,month2,by=interv))+
labs(title=ChrtTitl,subtitle=paste0(MYsubtitl),
caption=TS[[8]]$Ftnt,x="",y="")+
theme(axis.text.y = element_text(size=18))+
theme_DB()
c1
}
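# Example call (sketch; the series name and dates are assumptions, and the
# helpers from Common_stuff.R -- TS, IDX, PC, PC12, MA13, posNeg, datDif,
# theme_DB -- plus the rds/q1*.rds files must already exist):
# Make_chrtM8("Total hours worked",geo="Canada",type=4,
#             month1=as.Date("2019-01-01"),month2=as.Date("2021-05-01"),
#             altTitl="",interv="")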
| /Make_chrtM8.R | no_license | PhilSmith26/LFStables | R | false | false | 4,796 | r |
context("taskscheduler-examples")
test_that("taskscheduleR examples can be scheduled as expected", {
skip_on_cran()
myscript <- system.file("extdata", "helloworld.R", package = "taskscheduleR")
## run script once within 62 seconds
expect_warning(taskscheduler_create(taskname = "myfancyscript", rscript = myscript,
schedule = "ONCE", starttime = format(Sys.time() + 62, "%H:%M")), NA)
## get a data.frame of all tasks
expect_warning({
tasks <- taskscheduler_ls()
})
## delete the tasks
expect_warning(taskscheduler_delete(taskname = "myfancyscript"), NA)
})
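# A recurring schedule uses the same call with a different `schedule` value
# (sketch, not run as part of the tests):
# taskscheduler_create(taskname = "mydailyjob", rscript = myscript,
#                      schedule = "DAILY", starttime = "09:10")
# taskscheduler_delete(taskname = "mydailyjob")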
test_that("taskscheduler_ls returns a data.frame", {
skip_on_cran()
expect_is(suppressWarnings({taskscheduler_ls()}), "data.frame")
}) | /tests/testthat/test-taskscheduler_create.R | no_license | cran/taskscheduleR | R | false | false | 779 | r |
# Figure 5 top words
library(readr)
library(tidytext)
library(dplyr)
library(tidyr)
library(ggplot2)
library(stringr)
library(forcats)
library(hrbrthemes)
library(here)
library(patchwork)
library(extrafont)
# read data
absdata <- read_csv(here("data","abstracts.csv"))
# stopwords
stopWords <- data_frame(word=read_lines(here("data","minimal-stop.txt")))
# overall
titlesAllbg <- absdata %>%
mutate(title_all = talk_title) %>% unnest_tokens(BGword, talk_title, token = "ngrams", n=2)
# separate
bigrams_separatedAll <- titlesAllbg %>%
separate(BGword, c("word1", "word2"), sep = " ")
# filtering
bigrams_filteredAll <- bigrams_separatedAll %>%
filter(!word1 %in% stopWords$word) %>%
filter(!word2 %in% stopWords$word)
# new bigram counts:
bigram_countsAll <- bigrams_filteredAll %>%
count(word1, word2, sort = TRUE)
# unite
bigrams_unitedAll <- bigrams_filteredAll %>%
unite(bigram, word1, word2, sep = " ")
# count
bigrams_united_countAll <- bigrams_unitedAll %>% count(bigram,sort=TRUE)
# just words
titlesAllug <-
absdata %>%
mutate(title_all = talk_title) %>% unnest_tokens(word, talk_title)
# remove stop words
titles_filteredAllug <- titlesAllug %>% anti_join(stopWords, by = "word")
# quantify
by_wordAllug <- titles_filteredAllug %>% count(word, sort = TRUE)
# df with both for everything
topBGall <- bigrams_united_countAll %>% rename(ngram=bigram) %>% mutate(term="bigram")
topWall <- by_wordAllug %>% rename(ngram=word) %>% mutate(term="word")
# bind
topngramsAll <- bind_rows(topBGall,topWall)
# by format and status
#### published talks ####
titlesTalksbigPUB <- absdata %>% filter(published=="TRUE" & pres_format == "talk") %>%
mutate(title_all = talk_title) %>% unnest_tokens(BGword, talk_title, token = "ngrams", n=2)
# separate
bigrams_separatedTalksPUB <- titlesTalksbigPUB %>%
separate(BGword, c("word1", "word2"), sep = " ")
# filtering
bigrams_filteredTalksPUB <- bigrams_separatedTalksPUB %>%
filter(!word1 %in% stopWords$word) %>%
filter(!word2 %in% stopWords$word)
# new bigram counts:
bigram_countsTalksPUB <- bigrams_filteredTalksPUB %>%
count(word1, word2, sort = TRUE)
# unite
bigrams_unitedTalksPUB <- bigrams_filteredTalksPUB %>%
unite(bigram, word1, word2, sep = " ")
# count
bigrams_united_countTalksPUB <- bigrams_unitedTalksPUB %>% count(bigram,sort=TRUE)
# now just words
titlesTalksPub <-
absdata %>% filter(published=="TRUE" & pres_format == "talk") %>%
mutate(title_all = talk_title) %>% unnest_tokens(word, talk_title)
# remove stop words
titles_filterTalkPub <- titlesTalksPub %>% anti_join(stopWords, by = "word")
# quantify
by_wordPubTalk <- titles_filterTalkPub %>% count(word, sort = TRUE)
# df with both for published presentations
topBGpubTalks <- bigrams_united_countTalksPUB %>% rename(ngram=bigram) %>% mutate(term="bigram")
topWpubTalks <- by_wordPubTalk %>% rename(ngram=word) %>% mutate(term="word")
# bind
topngramspubTalks <- bind_rows(topBGpubTalks,topWpubTalks)
# label
topngramspubTalks <- topngramspubTalks %>% mutate(presentations="published talks")
##### unpublished talks ####
titlesTalksbigUnPUB <- absdata %>% filter(published=="FALSE" & pres_format=="talk") %>%
mutate(title_all = talk_title) %>% unnest_tokens(BGword, talk_title, token = "ngrams", n=2)
# separate and filter for stop words
bigrams_separatedUnPUBTalks <- titlesTalksbigUnPUB %>%
separate(BGword, c("word1", "word2"), sep = " ")
# filtering
bigrams_filteredUnPUBTalks <- bigrams_separatedUnPUBTalks %>%
filter(!word1 %in% stopWords$word) %>%
filter(!word2 %in% stopWords$word)
# new bigram counts:
bigram_countsUnPUBTalks <- bigrams_filteredUnPUBTalks %>%
count(word1, word2, sort = TRUE)
# unite
bigrams_unitedUnPUBTalks <- bigrams_filteredUnPUBTalks %>%
unite(bigram, word1, word2, sep = " ")
# count
bigrams_united_countUnPUBTalks <- bigrams_unitedUnPUBTalks %>% count(bigram,sort=TRUE)
### now just words
titlesUnPubTalks <-
absdata %>% filter(published=="FALSE" & pres_format =="talk") %>%
mutate(title_all = talk_title) %>% unnest_tokens(word, talk_title)
# remove stop words
titles_filterUnPubTalk <- titlesUnPubTalks %>% anti_join(stopWords, by = "word")
# quantify
by_wordUnPubTalk <- titles_filterUnPubTalk %>% count(word, sort = TRUE)
# df with both for published presentations
topBGUnPubTalks <- bigrams_united_countUnPUBTalks %>% rename(ngram=bigram) %>% mutate(term="bigram")
topWUnPubTalks <- by_wordUnPubTalk %>% rename(ngram=word) %>% mutate(term="word")
# bind
topngramsUnpubTalks <- bind_rows(topBGUnPubTalks,topWUnPubTalks)
# label
topngramsUnpubTalks <- topngramsUnpubTalks %>% mutate(presentations="unpublished talks")
#### published posters ####
titlesPosterbigPUB <- absdata %>% filter(published=="TRUE" & pres_format == "poster") %>%
mutate(title_all = talk_title) %>% unnest_tokens(BGword, talk_title, token = "ngrams", n=2)
# separate and filter for stop words
bigrams_separatedPosterPUB <- titlesPosterbigPUB %>%
separate(BGword, c("word1", "word2"), sep = " ")
# filtering
bigrams_filteredPosterPUB <- bigrams_separatedPosterPUB %>%
filter(!word1 %in% stopWords$word) %>%
filter(!word2 %in% stopWords$word)
# new bigram counts:
bigram_countsPosterPUB <- bigrams_filteredPosterPUB %>%
count(word1, word2, sort = TRUE)
# unite
bigrams_unitedPosterPUB <- bigrams_filteredPosterPUB %>%
unite(bigram, word1, word2, sep = " ")
# count
bigrams_united_countPosterPUB <- bigrams_unitedPosterPUB %>% count(bigram,sort=TRUE)
## now just words
titlesPosterPub <-
absdata %>% filter(published=="TRUE" & pres_format == "poster") %>%
mutate(title_all = talk_title) %>% unnest_tokens(word, talk_title)
# remove stop words
titles_filterPosterPub <- titlesPosterPub %>% anti_join(stopWords, by = "word")
# quantify
by_wordPubPoster <- titles_filterPosterPub %>% count(word, sort = TRUE)
# df with both for published presentations
topBGpubPoster <- bigrams_united_countPosterPUB %>% rename(ngram=bigram) %>% mutate(term="bigram")
topWpubPoster <- by_wordPubPoster %>% rename(ngram=word) %>% mutate(term="word")
# bind
topngramspubPoster <- bind_rows(topBGpubPoster,topWpubPoster)
# label
topngramspubPoster <- topngramspubPoster %>% mutate(presentations="published posters")
#### unpublished posters ####
titlesPostersbigUnPUB <- absdata %>% filter(published=="FALSE" & pres_format=="poster") %>%
mutate(title_all = talk_title) %>% unnest_tokens(BGword, talk_title, token = "ngrams", n=2)
# separate and filter for stop words
bigrams_separatedUnPUBPosters <- titlesPostersbigUnPUB %>%
separate(BGword, c("word1", "word2"), sep = " ")
# filtering
bigrams_filteredUnPUBPosters <- bigrams_separatedUnPUBPosters %>%
filter(!word1 %in% stopWords$word) %>%
filter(!word2 %in% stopWords$word)
# new bigram counts:
bigram_countsUnPUBPosters <- bigrams_filteredUnPUBPosters %>%
count(word1, word2, sort = TRUE)
# unite
bigrams_unitedUnPUBPosters <- bigrams_filteredUnPUBPosters %>%
unite(bigram, word1, word2, sep = " ")
# count
bigrams_united_countUnPUBPosters <- bigrams_unitedUnPUBPosters %>% count(bigram,sort=TRUE)
## now just words
titlesUnPubPosters <-
absdata %>% filter(published=="FALSE" & pres_format=="poster") %>%
mutate(title_all = talk_title) %>% unnest_tokens(word, talk_title)
# remove stop words
titles_filterUnPubPoster <- titlesUnPubPosters %>% anti_join(stopWords, by = "word")
# quantify
by_wordUnPubPoster <- titles_filterUnPubPoster %>% count(word, sort = TRUE)
# df with both for published presentations
topBGUnPubPosters <- bigrams_united_countUnPUBPosters %>% rename(ngram=bigram) %>% mutate(term="bigram")
topWUnPubPosters <- by_wordUnPubPoster %>% rename(ngram=word) %>% mutate(term="word")
# bind
topngramsUnpubPosters <- bind_rows(topBGUnPubPosters,topWUnPubPosters)
# label
topngramsUnpubPosters <- topngramsUnpubPosters %>% mutate(presentations="unpublished posters")
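# The four blocks above repeat one pipeline; a single helper could replace
# them (sketch; assumes the published/pres_format/talk_title columns used above):
count_ngrams <- function(df, pub, fmt, label) {
  sub <- filter(df, published == pub & pres_format == fmt)
  # unigram counts, stop words removed
  words <- sub %>% unnest_tokens(word, talk_title) %>%
    anti_join(stopWords, by = "word") %>%
    count(word, sort = TRUE) %>%
    rename(ngram = word) %>% mutate(term = "word")
  # bigram counts, stop words removed from either position
  bigrams <- sub %>%
    unnest_tokens(bigram, talk_title, token = "ngrams", n = 2) %>%
    separate(bigram, c("word1", "word2"), sep = " ") %>%
    filter(!word1 %in% stopWords$word, !word2 %in% stopWords$word) %>%
    unite(ngram, word1, word2, sep = " ") %>%
    count(ngram, sort = TRUE) %>% mutate(term = "bigram")
  bind_rows(bigrams, words) %>% mutate(presentations = label)
}
# e.g. count_ngrams(absdata, "TRUE", "talk", "published talks")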
##### binding all four #####
topWords <- rbind(topngramspubTalks,topngramsUnpubTalks,topngramspubPoster,topngramsUnpubPosters)
# add format label
topWords <- topWords %>% mutate(format=if_else(str_detect(presentations,"posters"),"poster","talk"),
pubStatus=if_else(str_detect(presentations,"unpublished"),"unpublished","published"))
#topWordsNpres <-
topWords %>% count(format)
#### plotting ####
#all
############
#words to exclude from the frequency plot
doubCwords <- data_frame(word=c("new","climate","change"))
topWords_dc <- anti_join(topWords,doubCwords,by=c("ngram"="word"))
#### word frequencies ####
# talks vs posters
totalsfreq <- topWords %>% filter(term=="word") %>% group_by(format) %>% summarise(wfq=sum(n)) %>% ungroup()
topWordsTfreq <- left_join(topWords_dc,totalsfreq,by="format")
(leftgrob <-
topWordsTfreq %>%
mutate(reln=scale(n/wfq)) %>%
mutate(presentationsW=str_replace(presentations," ","\n")) %>%
top_n(30,reln) %>%
ggplot(aes(x = fct_reorder(ngram, reln), y = reln)) +
ggalt::geom_lollipop() +
scale_y_continuous(breaks = c(0,15,30))+
theme_ipsum(base_size = 11, axis_title_size = 11,
grid = "Y",axis_title_just = "c",axis_text_size = 8,
strip_text_size = 9,base_family = "Roboto Condensed")+
coord_flip()+facet_wrap(~fct_relevel(presentationsW,c("unpublished\nposters","published\nposters","unpublished\ntalks","published\ntalks")),nrow =1)+
labs(x="term",y="relative frequency (scaled)"))
# spread by category
(topWordsSp <- topWords %>% filter(term=="word") %>% spread(presentations,n))
(topWordsSp <- topWordsSp %>% replace(is.na(.), 0))
# distance matrix
termsDistmat <- (dist(t(as.matrix(topWordsSp[,5:8])),method = "euclidean"))
termsclust <- hclust(termsDistmat,method = "ward.D")
plot(termsclust)
library(ggraph)
library(tidygraph)
library(dendextend)
dendcl <- as.dendrogram(termsclust)
# wrap labels
dendextend::labels(dendcl) <- str_replace(labels(dendcl)," ","\n")
right <-
ggraph(dendcl,layout = "dendrogram") +
geom_edge_fan(alpha = 0.2) +
scale_edge_size_continuous(range = c(1,3))+
geom_node_text(aes(label = label), size = 2.4,
repel = T,nudge_y = 4) +
geom_node_point(size=1,shape=21) +
theme_graph(base_family = "Roboto Condensed")+coord_flip()
right
figtextmin <-
leftgrob+right+plot_layout(nrow = 1,widths = c(2,1),tag_level = 'new')+plot_annotation(tag_levels = 'a',tag_suffix = ")")
figtextmin
#ggsave(figtextmin,filename = here("figures","fig5.pdf"),width = 8,height = 3.6, dpi = 300)
| /R/05_titles-text-mining.R | permissive | luisDVA/iccb_metrics | R | false | false | 10,417 | r | # Figure 5 top words
library(readr)
library(tidytext)
library(dplyr)
library(tidyr)
library(ggplot2)
library(stringr)
library(forcats)
library(hrbrthemes)
library(here)
library(patchwork)
library(extrafont)
# read data
absdata <- read_csv(here("data","abstracts.csv"))
# stopwords
stopWords <- tibble(word = read_lines(here("data", "minimal-stop.txt")))
# overall
titlesAllbg <- absdata %>%
mutate(title_all = talk_title) %>% unnest_tokens(BGword, talk_title, token = "ngrams", n=2)
# separate
bigrams_separatedAll <- titlesAllbg %>%
separate(BGword, c("word1", "word2"), sep = " ")
# filtering
bigrams_filteredAll <- bigrams_separatedAll %>%
filter(!word1 %in% stopWords$word) %>%
filter(!word2 %in% stopWords$word)
# new bigram counts:
bigram_countsAll <- bigrams_filteredAll %>%
count(word1, word2, sort = TRUE)
# unite
bigrams_unitedAll <- bigrams_filteredAll %>%
unite(bigram, word1, word2, sep = " ")
# count
bigrams_united_countAll <- bigrams_unitedAll %>% count(bigram,sort=TRUE)
# just words
titlesAllug <-
absdata %>%
mutate(title_all = talk_title) %>% unnest_tokens(word, talk_title)
# remove stop words
titles_filteredAllug <- titlesAllug %>% anti_join(stopWords, by = "word")
# quantify
by_wordAllug <- titles_filteredAllug %>% count(word, sort = TRUE)
# df with both for everything
topBGall <- bigrams_united_countAll %>% rename(ngram=bigram) %>% mutate(term="bigram")
topWall <- by_wordAllug %>% rename(ngram=word) %>% mutate(term="word")
# bind
topngramsAll <- bind_rows(topBGall,topWall)
# by format and status
#### published talks ####
titlesTalksbigPUB <- absdata %>% filter(published=="TRUE" & pres_format == "talk") %>%
mutate(title_all = talk_title) %>% unnest_tokens(BGword, talk_title, token = "ngrams", n=2)
# separate
bigrams_separatedTalksPUB <- titlesTalksbigPUB %>%
separate(BGword, c("word1", "word2"), sep = " ")
# filtering
bigrams_filteredTalksPUB <- bigrams_separatedTalksPUB %>%
filter(!word1 %in% stopWords$word) %>%
filter(!word2 %in% stopWords$word)
# new bigram counts:
bigram_countsTalksPUB <- bigrams_filteredTalksPUB %>%
count(word1, word2, sort = TRUE)
# unite
bigrams_unitedTalksPUB <- bigrams_filteredTalksPUB %>%
unite(bigram, word1, word2, sep = " ")
# count
bigrams_united_countTalksPUB <- bigrams_unitedTalksPUB %>% count(bigram,sort=TRUE)
# now just words
titlesTalksPub <-
absdata %>% filter(published=="TRUE" & pres_format == "talk") %>%
mutate(title_all = talk_title) %>% unnest_tokens(word, talk_title)
# remove stop words
titles_filterTalkPub <- titlesTalksPub %>% anti_join(stopWords, by = "word")
# quantify
by_wordPubTalk <- titles_filterTalkPub %>% count(word, sort = TRUE)
# df with both for published talks
topBGpubTalks <- bigrams_united_countTalksPUB %>% rename(ngram=bigram) %>% mutate(term="bigram")
topWpubTalks <- by_wordPubTalk %>% rename(ngram=word) %>% mutate(term="word")
# bind
topngramspubTalks <- bind_rows(topBGpubTalks,topWpubTalks)
# label
topngramspubTalks <- topngramspubTalks %>% mutate(presentations="published talks")
##### unpublished talks ####
titlesTalksbigUnPUB <- absdata %>% filter(published=="FALSE" & pres_format=="talk") %>%
mutate(title_all = talk_title) %>% unnest_tokens(BGword, talk_title, token = "ngrams", n=2)
# separate and filter for stop words
bigrams_separatedUnPUBTalks <- titlesTalksbigUnPUB %>%
separate(BGword, c("word1", "word2"), sep = " ")
# filtering
bigrams_filteredUnPUBTalks <- bigrams_separatedUnPUBTalks %>%
filter(!word1 %in% stopWords$word) %>%
filter(!word2 %in% stopWords$word)
# new bigram counts:
bigram_countsUnPUBTalks <- bigrams_filteredUnPUBTalks %>%
count(word1, word2, sort = TRUE)
# unite
bigrams_unitedUnPUBTalks <- bigrams_filteredUnPUBTalks %>%
unite(bigram, word1, word2, sep = " ")
# count
bigrams_united_countUnPUBTalks <- bigrams_unitedUnPUBTalks %>% count(bigram,sort=TRUE)
### now just words
titlesUnPubTalks <-
absdata %>% filter(published=="FALSE" & pres_format =="talk") %>%
mutate(title_all = talk_title) %>% unnest_tokens(word, talk_title)
# remove stop words
titles_filterUnPubTalk <- titlesUnPubTalks %>% anti_join(stopWords, by = "word")
# quantify
by_wordUnPubTalk <- titles_filterUnPubTalk %>% count(word, sort = TRUE)
# df with both for unpublished talks
topBGUnPubTalks <- bigrams_united_countUnPUBTalks %>% rename(ngram=bigram) %>% mutate(term="bigram")
topWUnPubTalks <- by_wordUnPubTalk %>% rename(ngram=word) %>% mutate(term="word")
# bind
topngramsUnpubTalks <- bind_rows(topBGUnPubTalks,topWUnPubTalks)
# label
topngramsUnpubTalks <- topngramsUnpubTalks %>% mutate(presentations="unpublished talks")
#### published posters ####
titlesPosterbigPUB <- absdata %>% filter(published=="TRUE" & pres_format == "poster") %>%
mutate(title_all = talk_title) %>% unnest_tokens(BGword, talk_title, token = "ngrams", n=2)
# separate and filter for stop words
bigrams_separatedPosterPUB <- titlesPosterbigPUB %>%
separate(BGword, c("word1", "word2"), sep = " ")
# filtering
bigrams_filteredPosterPUB <- bigrams_separatedPosterPUB %>%
filter(!word1 %in% stopWords$word) %>%
filter(!word2 %in% stopWords$word)
# new bigram counts:
bigram_countsPosterPUB <- bigrams_filteredPosterPUB %>%
count(word1, word2, sort = TRUE)
# unite
bigrams_unitedPosterPUB <- bigrams_filteredPosterPUB %>%
unite(bigram, word1, word2, sep = " ")
# count
bigrams_united_countPosterPUB <- bigrams_unitedPosterPUB %>% count(bigram,sort=TRUE)
## now just words
titlesPosterPub <-
absdata %>% filter(published=="TRUE" & pres_format == "poster") %>%
mutate(title_all = talk_title) %>% unnest_tokens(word, talk_title)
# remove stop words
titles_filterPosterPub <- titlesPosterPub %>% anti_join(stopWords, by = "word")
# quantify
by_wordPubPoster <- titles_filterPosterPub %>% count(word, sort = TRUE)
# df with both for published posters
topBGpubPoster <- bigrams_united_countPosterPUB %>% rename(ngram=bigram) %>% mutate(term="bigram")
topWpubPoster <- by_wordPubPoster %>% rename(ngram=word) %>% mutate(term="word")
# bind
topngramspubPoster <- bind_rows(topBGpubPoster,topWpubPoster)
# label
topngramspubPoster <- topngramspubPoster %>% mutate(presentations="published posters")
#### unpublished posters ####
titlesPostersbigUnPUB <- absdata %>% filter(published=="FALSE" & pres_format=="poster") %>%
mutate(title_all = talk_title) %>% unnest_tokens(BGword, talk_title, token = "ngrams", n=2)
# separate and filter for stop words
bigrams_separatedUnPUBPosters <- titlesPostersbigUnPUB %>%
separate(BGword, c("word1", "word2"), sep = " ")
# filtering
bigrams_filteredUnPUBPosters <- bigrams_separatedUnPUBPosters %>%
filter(!word1 %in% stopWords$word) %>%
filter(!word2 %in% stopWords$word)
# new bigram counts:
bigram_countsUnPUBPosters <- bigrams_filteredUnPUBPosters %>%
count(word1, word2, sort = TRUE)
# unite
bigrams_unitedUnPUBPosters <- bigrams_filteredUnPUBPosters %>%
unite(bigram, word1, word2, sep = " ")
# count
bigrams_united_countUnPUBPosters <- bigrams_unitedUnPUBPosters %>% count(bigram,sort=TRUE)
## now just words
titlesUnPubPosters <-
absdata %>% filter(published=="FALSE" & pres_format=="poster") %>%
mutate(title_all = talk_title) %>% unnest_tokens(word, talk_title)
# remove stop words
titles_filterUnPubPoster <- titlesUnPubPosters %>% anti_join(stopWords, by = "word")
# quantify
by_wordUnPubPoster <- titles_filterUnPubPoster %>% count(word, sort = TRUE)
# df with both for unpublished posters
topBGUnPubPosters <- bigrams_united_countUnPUBPosters %>% rename(ngram=bigram) %>% mutate(term="bigram")
topWUnPubPosters <- by_wordUnPubPoster %>% rename(ngram=word) %>% mutate(term="word")
# bind
topngramsUnpubPosters <- bind_rows(topBGUnPubPosters,topWUnPubPosters)
# label
topngramsUnpubPosters <- topngramsUnpubPosters %>% mutate(presentations="unpublished posters")
##### binding all four #####
topWords <- rbind(topngramspubTalks,topngramsUnpubTalks,topngramspubPoster,topngramsUnpubPosters)
# add format label
topWords <- topWords %>% mutate(format=if_else(str_detect(presentations,"posters"),"poster","talk"),
pubStatus=if_else(str_detect(presentations,"unpublished"),"unpublished","published"))
#topWordsNpres <-
topWords %>% count(format)
#### plotting ####
#all
############
###
doubCwords <- tibble(word = c("new", "climate", "change"))
topWords_dc <- anti_join(topWords,doubCwords,by=c("ngram"="word"))
#### word frequencies ####
# talks vs posters
totalsfreq <- topWords %>% filter(term=="word") %>% group_by(format) %>% summarise(wfq=sum(n)) %>% ungroup()
topWordsTfreq <- left_join(topWords_dc,totalsfreq)
(leftgrob <-
topWordsTfreq %>%
mutate(reln=scale(n/wfq)) %>%
mutate(presentationsW=str_replace(presentations," ","\n")) %>%
top_n(30,reln) %>%
ggplot(aes(x = fct_reorder(ngram, reln), y = reln)) +
ggalt::geom_lollipop() +
scale_y_continuous(breaks = c(0,15,30))+
theme_ipsum(base_size = 11, axis_title_size = 11,
grid = "Y",axis_title_just = "c",axis_text_size = 8,
strip_text_size = 9,base_family = "Roboto Condensed")+
coord_flip()+facet_wrap(~fct_relevel(presentationsW,c("unpublished\nposters","published\nposters","unpublished\ntalks","published\ntalks")),nrow =1)+
labs(x="term",y="relative frequency (scaled)"))
# spread by category
(topWordsSp <- topWords %>% filter(term=="word") %>% spread(presentations,n))
(topWordsSp <- topWordsSp %>% replace(is.na(.), 0))
# distance matrix
termsDistmat <- (dist(t(as.matrix(topWordsSp[,5:8])),method = "euclidean"))
termsclust <- hclust(termsDistmat,method = "ward.D")
plot(termsclust)
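## Note: t() flips the spread count matrix so each presentation category
## (columns 5:8 above) becomes a row, so dist() measures distances between
## the four categories rather than between individual terms.
dim(t(as.matrix(topWordsSp[, 5:8])))  # 4 rows (categories), one column per term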
library(ggraph)
library(tidygraph)
library(dendextend)
dendcl <- as.dendrogram(termsclust)
# wrap labels
dendextend::labels(dendcl) <- str_replace(labels(dendcl)," ","\n")
right <-
ggraph(dendcl,layout = "dendrogram") +
geom_edge_fan(alpha = 0.2) +
scale_edge_size_continuous(range = c(1,3))+
geom_node_text(aes(label = label), size = 2.4,
repel = T,nudge_y = 4) +
geom_node_point(size=1,shape=21) +
theme_graph(base_family = "Roboto Condensed")+coord_flip()
right
figtextmin <-
leftgrob+right+plot_layout(nrow = 1,widths = c(2,1),tag_level = 'new')+plot_annotation(tag_levels = 'a',tag_suffix = ")")
figtextmin
#ggsave(figtextmin,filename = here("figures","fig5.pdf"),width = 8,height = 3.6, dpi = 300)
|
## This script needs to be run with the "UCI HAR Dataset" directory
## contained in the working directory
#
## Input all the data
#
setwd("UCI HAR Dataset/test/")
#
xtest <- read.csv("X_test.txt", sep="", header=F)
ytest <- read.csv("y_test.txt", sep="", header=F)[,1]
subjecttest <- read.csv("subject_test.txt", sep="", header=F)[,1]
#
setwd("Inertial Signals/")
#
bodyaccxtest <- read.csv("body_acc_x_test.txt", sep="", header=F)
bodyaccytest <- read.csv("body_acc_y_test.txt", sep="", header=F)
bodyaccztest <- read.csv("body_acc_z_test.txt", sep="", header=F)
bodygyroxtest <- read.csv("body_gyro_x_test.txt", sep="", header=F)
bodygyroytest <- read.csv("body_gyro_y_test.txt", sep="", header=F)
bodygyroztest <- read.csv("body_gyro_z_test.txt", sep="", header=F)
totalaccxtest <- read.csv("total_acc_x_test.txt", sep="", header=F)
totalaccytest <- read.csv("total_acc_y_test.txt", sep="", header=F)
totalaccztest <- read.csv("total_acc_z_test.txt", sep="", header=F)
#
setwd("../../train/")
#
xtrain <- read.csv("X_train.txt", sep="", header=F)
ytrain <- read.csv("y_train.txt", sep="", header=F)[,1]
subjecttrain <- read.csv("subject_train.txt", sep="", header=F)[,1]
#
setwd("Inertial Signals/")
#
bodyaccxtrain <- read.csv("body_acc_x_train.txt", sep="", header=F)
bodyaccytrain <- read.csv("body_acc_y_train.txt", sep="", header=F)
bodyaccztrain <- read.csv("body_acc_z_train.txt", sep="", header=F)
bodygyroxtrain <- read.csv("body_gyro_x_train.txt", sep="", header=F)
bodygyroytrain <- read.csv("body_gyro_y_train.txt", sep="", header=F)
bodygyroztrain <- read.csv("body_gyro_z_train.txt", sep="", header=F)
totalaccxtrain <- read.csv("total_acc_x_train.txt", sep="", header=F)
totalaccytrain <- read.csv("total_acc_y_train.txt", sep="", header=F)
totalaccztrain <- read.csv("total_acc_z_train.txt", sep="", header=F)
#
setwd("../..")
#
features <- read.csv("features.txt", sep="",header=F, stringsAsFactors = FALSE)[,2]
activitylabels <- read.csv("activity_labels.txt", sep="",header=F, stringsAsFactors = FALSE)[,2]
#
#
#
## Join the data into a single dataset
#
#
subjectall <- c(subjecttrain,subjecttest)
xall <- rbind(xtrain,xtest)
yall <- c(ytrain,ytest)
bodyaccxall <- rbind(bodyaccxtrain,bodyaccxtest)
bodyaccyall <- rbind(bodyaccytrain,bodyaccytest)
bodyacczall <- rbind(bodyaccztrain,bodyaccztest)
bodygyroxall <- rbind(bodygyroxtrain,bodygyroxtest)
bodygyroyall <- rbind(bodygyroytrain,bodygyroytest)
bodygyrozall <- rbind(bodygyroztrain,bodygyroztest)
totalaccxall <- rbind(totalaccxtrain,totalaccxtest)
totalaccyall <- rbind(totalaccytrain,totalaccytest)
totalacczall <- rbind(totalaccztrain,totalaccztest)
## Rewrite the activity labels in lower case without underscores
activities <- c("walking","walkingupstairs","walkingdownstairs","sitting","standing","laying")
yall <- as.factor(yall)
levels(yall) <- activities
## Extract the mean and standard deviation for each element
## boolean vector of which features are means
meanfeatures <- grepl(".*-mean\\(\\)$",features)
## boolean vector of which features are standard deviations
sdfeatures <- grepl(".*-std\\(\\)$",features)
## boolean vector of which features are standard deviations OR means
sdmean <- sdfeatures | meanfeatures
## Combine participant number, activity labels and features into one DF
combinedDF <- cbind(subjectall,yall,xall[,sdmean])
## Rename columns without parenthesis, capital letters, dashes:
names(combinedDF) <- c("subjectid",
"activityname",
"tbodyaccmagmean",
"tbodyaccmagstd",
"tgravityaccmagmean",
"tgravityaccmagstd",
"tbodyaccjerkmagmean",
"tbodyaccjerkmagstd",
"tbodygyromagmean",
"tbodygyromagstd",
"tbodygyrojerkmagmean",
"tbodygyrojerkmagstd",
"fbodyaccmagmean",
"fbodyaccmagstd",
"fbodybodyaccjerkmagmean",
"fbodybodyaccjerkmagstd",
"fbodybodygyromagmean",
"fbodybodygyromagstd",
"fbodybodygyrojerkmagmean",
"fbodybodygyrojerkmagstd")
## Change working directory back to the original
setwd("..")
## Output the dataframe to a space-delimited text file
write.table(combinedDF, file="TidyTable.txt", row.names = F)
## Create a new dataframe with just the mean for each observation and
## each experimental subject
## Load the dplyr package
library(dplyr)
## Create a dataframe grouped by the id of the experimental
## subject and by the activity
grouped <- group_by(combinedDF,subjectid,activityname)
## Compute the means of the various means which were
## supplied in the original dataset for each subject
## and each activity
means <- summarize(grouped,
tbodyaccmagmean = mean(tbodyaccmagmean),
tgravityaccmagmean = mean(tgravityaccmagmean),
tbodyaccjerkmagmean = mean(tbodyaccjerkmagmean),
tbodygyromagmean = mean(tbodygyromagmean),
tbodygyrojerkmagmean = mean(tbodygyrojerkmagmean),
fbodyaccmagmean = mean(fbodyaccmagmean),
fbodybodyaccjerkmagmean = mean(fbodybodyaccjerkmagmean),
fbodybodygyromagmean = mean(fbodybodygyromagmean),
fbodybodygyrojerkmagmean = mean(fbodybodygyrojerkmagmean))
## Output to file
## write.table(means, file="MeansTable.txt", row.names = F)
## This is commented out because the submission page says
## "The output should be the tidy data set you submitted for part 1." | /run_analysis.R | no_license | SamPlayle/AccelerometerProject | R | false | false | 5,881 | r |
complete <- function(directory, id = 1:332) {
nobsNum <- numeric(0)
for (cid in id) {
fileStr <- paste(directory, "/", sprintf("%03d", as.numeric(cid)), ".csv", sep = "")
cDfr <- read.csv(fileStr)
nobsNum <- c(nobsNum, nrow(na.omit(cDfr)))
}
# --- Assert return value is a data frame with TWO (2) columns
data.frame(id = id, nobs = nobsNum)
} | /complete.R | no_license | meleswujira/Coursera | R | false | false | 398 | r |
rm(list=ls())
library(readr)
library(dplyr)
library(ggplot2)
library(lubridate)
fname = "yemen_dat.csv"
df = read_csv(fname)
df = df %>% select(-date_variable, adm2pcod, longitude, latitude)
df = df %>%
mutate(year = year(date_value))
min_year = min(df$year)
df = df %>%
mutate(diff_year = year - min_year,
add_weeks = diff_year * 52,
new_epiweek = add_weeks + epiweek) %>%
select(-year, -diff_year)
df %>%
filter(date_value == ymd("2017-01-01") |
date_value == ymd("2017-01-02") |
date_value == ymd("2017-01-08") |
date_value == ymd("2016-09-29") |
date_value == ymd("2016-12-31") ) %>%
select(date_value, epiweek, add_weeks, new_epiweek) %>%
unique
week_df = df %>%
group_by(adm1name, date_value) %>%
summarize(vweek = var(epiweek),
epiweek = unique(epiweek),
uweek = unique(lubridate::week(date_value)),
dow = unique(wday(date_value, label = TRUE))) %>%
ungroup
stopifnot(all(week_df$vweek == 0))
week_df = df %>%
group_by(adm1name, new_epiweek) %>%
summarize(y = sum(incidence),
date_value = first(date_value)) %>%
ungroup
date_week_df = week_df %>%
mutate(group = adm1name,
x = date_value) %>%
select(x, y, group)
week_df = week_df %>%
mutate(group = adm1name,
x = new_epiweek)
week_df = week_df %>% select(x, y, group)
df = df %>%
group_by(adm1name, date_value) %>%
summarize(y = sum(incidence)) %>%
ungroup
df = df %>%
mutate(group = adm1name,
x = date_value)
df = df %>% select(x, y, group)
regroup = function(x) {
factor(as.numeric(factor(x)))
}
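## regroup() relabels an arbitrary grouping vector as a factor with
## consecutive integer levels, e.g. (hypothetical input values):
regroup(c("Aden", "Taizz", "Aden"))  # 1 2 1, with levels "1" "2"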
# remove these when using real data
df = df %>%
mutate(group = regroup(group))
week_df = week_df %>%
mutate(group = regroup(group))
date_week_df = date_week_df %>%
mutate(group = regroup(group))
g = df %>% ggplot(aes(x = x,
y = y,
colour = group)) +
geom_line() +
ylab("Number of Cases") + xlab("Date")
pdf("incidence_plots_over_time.pdf", height = 5, width = 10)
print(g)
print(g + guides(color = FALSE))
print({g %+% week_df})
print({( g + guides(color = FALSE)) %+% week_df})
dev.off()
# saveRDS(df, file = "plot_data.rds")
saveRDS(week_df, file = "plot_data.rds")
saveRDS(date_week_df, file = "plot_data_date.rds")
| /summarize_data.R | no_license | scottyaz/epi_click | R | false | false | 2,338 | r |
#' Result of feature selection.
#'
#' Container for results of feature selection.
#' Contains the obtained features, their performance values
#' and the optimization path which lead there. \cr
#' You can visualize it using [analyzeFeatSelResult].
#'
#' Object members:
#' \describe{
#' \item{learner ([Learner])}{Learner that was optimized.}
#' \item{control ([FeatSelControl])}{ Control object from feature selection.}
#' \item{x ([character])}{Vector of feature names identified as optimal.}
#' \item{y ([numeric])}{Performance values for optimal `x`.}
#' \item{threshold ([numeric])}{Vector of finally found and used thresholds
#' if `tune.threshold` was enabled in [FeatSelControl], otherwise not present and
#' hence `NULL`.}
#' \item{opt.path ([ParamHelpers::OptPath])}{Optimization path which lead to `x`.}
#' }
#' @name FeatSelResult
#' @rdname FeatSelResult
NULL
makeFeatSelResult = function(learner, control, x, y, threshold, opt.path) {
makeOptResult(learner, control, x, y, threshold, opt.path, "FeatSelResult")
}
#' @export
print.FeatSelResult = function(x, ...) {
catf("FeatSel result:")
n.feats = length(x$x)
printed.features = 10L
  if (n.feats > printed.features)
    x$x = c(head(x$x, printed.features), "...")
  else
    x$x = head(x$x, printed.features)
catf("Features (%i): %s", n.feats, collapse(x$x, ", "))
if (!is.null(x$threshold))
catf("Threshold: %s", collapse(sprintf("%2.2f", x$threshold)))
catf("%s", perfsToString(x$y))
}
makeFeatSelResultFromOptPath = function(learner, measures, control, opt.path,
dob = opt.path$env$dob, ties = "random") {
i = getOptPathBestIndex(opt.path, measureAggrName(measures[[1]]), dob = dob, ties = ties)
e = getOptPathEl(opt.path, i)
# if we had threshold tuning, get th from op and set it in result object
threshold = if (control$tune.threshold) e$extra$threshold else NULL
makeFeatSelResult(learner, control, names(e$x)[e$x == 1], e$y, threshold, opt.path)
}
| /R/FeatSelResult.R | no_license | cauldnz/mlr | R | false | false | 1,958 | r |
#####plot scatter pie plot function####
#####Jan 28,2019####
##load packages##
library(dplyr)
library(ggplot2)
library(ggrepel)
library(scatterpie)
library(reshape2)
library(tidyr)
#test if pieplotfunctionOne.R works##
##produce y##
source("pieplotfunctionOne.R")
pie5<-pieplotfunction('ucsf_su2c_common.txt')
exon5<-pieplotfunction('with_exon_all.filtered.txt')
##read new broken data and transform to "gene, Broken.samples"##produce x##
broken_new<-broken('sort_by_count_new.txt')
##NO DEL NEW BROKEN DATA##
broken_new2<-broken('sort_by_count_new_no_del.txt')
##JOIN X AND Y##
pie<-join4plot(broken_new2,pie5)
exon<-join4plot(broken_new,exon5)
##define vertical and horizontal line intercept##
testplot(pie,10,20)
#################selected genes#########################
###1)only pie area I###
#slgene<-c("ACPP","MYC","PTEN","CDKN1B","RB1","FOXA1","TP53","AR","TMPRSS2","ERG","ELK4","ETV1","FAT1","FOXP1")
slgene<-c("ACPP","MYC","PTEN","CDKN1B","RB1","FOXA1","TP53","AR","TMPRSS2","ERG","ELK4","ETV1","FOXP1")
#####slgene in gene and in first area##use this for final plot##
pie.plot<-labelfunction(slgene,pie,10,20)
##plot regular scatter plot##
regscatter(pie.plot,10,20,'testreg')
#####plot pie plot area I and slgenes###############
piescatter(pie.plot,10,20,'test')
write.table(pie,file="pie-with_exon_all.filtered.txt",row.names = FALSE,col.names = TRUE,quote = FALSE,sep="\t")
############################################################# | /pieplotfunction_useOnetest_Jan282019.R | no_license | ynren1020/R | R | false | false | 1,459 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/regression.R
\name{.solveWtExp}
\alias{.solveWtExp}
\title{Correlation between weighted predictor composite and criterion.}
\usage{
.solveWtExp(wt, rxx, rxy_list)
}
\arguments{
\item{wt}{A vector of predictor weights, or a matrix of predictor weights with
one column per predictor and one row per case.}
\item{rxx}{A matrix of predictor intercorrelations.}
\item{rxy_list}{A list of rxy vectors.}
}
\value{
A matrix of correlation coefficients with one row per weight vector
and one column per rxy vector.
}
\description{
Correlation between weighted predictor composite and criterion.
}
\note{
This function should be merged with the fuse functions and replace the
other .solvewt functions.
}
\examples{
library(iopsych)
data(dls2007)
dat <- dls2007[1:6, 2:7]
rxx <- dat[1:4, 1:4]
rxy1 <- dat[1:4, 5]
rxy2 <- dat[1:4, 6]
rxy_list <- list(rxy1, rxy2)
wt1 <- c(1,1,1,1)
wt2 <- c(1,2,3,4)
wt_mat <- rbind(wt1, wt2)
#.solveWtExp(wt=wt_mat, rxx=rxx, rxy_list=rxy_list)
}
\author{
Allen Goebl, Jeff Jones
}
\keyword{internal}
| /man/internal.solveWtExp.Rd | no_license | cran/iopsych | R | false | true | 1,162 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fugu.R
\name{curryRight}
\alias{curryRight}
\title{Currying on the right.}
\usage{
curryRight(f)
}
\arguments{
\item{f}{a function}
\item{...}{arguments to apply on the right}
}
\value{
a function g such that \code{curryRight(f)(a,b,c)(q,r,s) == f(q,r,s,a,b,c)}
}
\description{
Returns a version of the function with its arguments curried on the right.
}
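% A hedged usage sketch (not from the package source); the behavior shown
% follows the contract stated in the value section.
\examples{
# Hypothetical illustration: curried arguments are appended on the right.
f <- function(q, r, s, a) paste(q, r, s, a)
g <- curryRight(f)("last")
g("one", "two", "three")  # equivalent to f("one", "two", "three", "last")
}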
\keyword{functional}
| /man/curryRight.Rd | no_license | VincentToups/fugu | R | false | true | 410 | rd |
#' Print for a 'wbs' object
#'
#' @param x an object of class 'wbs'
#' @param ... further arguments passed to \code{print} method
#' @return NULL
#' @method print wbs
#' @export
#' @seealso \code{\link{wbs}}
print.wbs <- function(x,...){
cat("Algorithm: ")
  if(x$integrated) cat("Wild Binary Segmentation integrated with standard BS\n")
else cat("Wild Binary Segmentation\n")
cat(paste("Number of intervals M =",x$M,"\n"))
cat("Type of intervals: ")
if(x$rand.intervals) cat("random\n")
else cat("fixed\n")
cat("Results: \n")
print(x$res)
}
#' Print for an 'sbs' object
#'
#' @param x an object of class 'sbs'
#' @param ... further arguments passed to \code{print} method
#' @return NULL
#' @method print sbs
#' @export
#' @seealso \code{\link{sbs}}
print.sbs <- function(x,...){
cat("Algorithm: standard Binary Segmentation\n")
cat("Results: \n")
print(x$res)
}
| /R/print.R | no_license | cran/wbs | R | false | false | 898 | r |
# this does not handle LCOV_EXCL_START etc.
parse_gcov <- function(file, source_file) {
if (!file.exists(file)) {
return(NULL)
}
lines <- readLines(file)
re <- rex::rex(any_spaces,
capture(name = "coverage", some_of(digit, "-", "#", "=")),
":", any_spaces,
capture(name = "line", digits),
":"
)
matches <- rex::re_matches(lines, re)
# gcov lines which have no coverage
matches$coverage[matches$coverage == "#####"] <- 0
# gcov lines which have parse error, so make untracked
matches$coverage[matches$coverage == "====="] <- "-"
coverage_lines <- matches$line != "0" & matches$coverage != "-"
matches <- matches[coverage_lines, ]
values <- as.numeric(matches$coverage)
# create srcfile reference from the source file
src_file <- srcfilecopy(source_file, readLines(source_file))
line_lengths <- vapply(src_file$lines[as.numeric(matches$line)], nchar, numeric(1))
if (any(is.na(values))) {
stop("values could not be coerced to numeric ", matches$coverage)
}
res <- Map(function(line, length, value) {
src_ref <- srcref(src_file, c(line, 1, line, length))
res <- list(srcref = src_ref, value = value, functions = NA_character_)
class(res) <- "line_coverage"
res
},
matches$line, line_lengths, values)
if (!length(res)) {
return(NULL)
}
names(res) <- lapply(res, function(x) key(x$srcref))
class(res) <- "line_coverages"
res
}
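As a hedged illustration (assuming the `rex` package is installed; the sample lines below are made up), the regex used by `parse_gcov()` picks the hit count and line number out of standard gcov output:

```r
library(rex)
# Typical gcov lines look like "<hits>:<line>:<source>", where hits may be
# "-" (untracked) or "#####" (never executed).
gcov_lines <- c("        -:    0:Source:example.c",
                "    #####:    7:x = 0;",
                "        3:    8:y = x + 1;")
re <- rex(any_spaces,
          capture(name = "coverage", some_of(digit, "-", "#", "=")),
          ":", any_spaces,
          capture(name = "line", digits),
          ":")
# Returns a data frame with "coverage" and "line" columns,
# here "-"/"#####"/"3" and "0"/"7"/"8" respectively.
rex::re_matches(gcov_lines, re)
```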
clear_gcov <- function(path) {
src_dir <- file.path(path, "src")
gcov_files <- dir(src_dir,
pattern = rex::rex(or(".gcda", ".gcno", ".gcov"), end),
full.names = TRUE,
recursive = TRUE)
unlink(gcov_files)
}
run_gcov <- function(path, sources, quiet = TRUE,
                     gcov_path = getOption("covr.gcov")) {
sources <- normalizePath(sources)
src_path <- normalizePath(file.path(path, "src"))
res <- unlist(recursive = FALSE,
Filter(Negate(is.null),
lapply(sources,
function(src) {
gcda <- paste0(remove_extension(src), ".gcda")
gcno <- paste0(remove_extension(src), ".gcno")
if (file.exists(gcno) && file.exists(gcda)) {
in_dir(src_path,
system_check(gcov_path,
args = c(src, "-o", dirname(src)),
quiet = quiet)
)
# the gcov files are in the src_path with the basename of the file
gcov_file <- file.path(src_path, paste0(basename(src), ".gcov"))
if (file.exists(gcov_file)) {
parse_gcov(gcov_file, src)
}
}
})))
if (is.null(res)) {
res <- list()
}
class(res) <- "coverage"
res
}
remove_extension <- function(x) {
rex::re_substitutes(x, rex::rex(".", except_any_of("."), end), "")
}
sources <- function(pkg = ".") {
pkg <- devtools::as.package(pkg)
srcdir <- file.path(pkg$path, "src")
dir(srcdir, rex::rex(".", one_of("cfh"), except_any_of("."), end),
recursive = TRUE,
full.names = TRUE)
}
| /R/compiled.R | no_license | brodieG/covr | R | false | false | 2,973 | r |
##### CorrMC1 #####
CorrMC1= function(
title = "CorrMC1", # Question-bank title that will be easily viewable in e-learning
n = 200, # Number of questions to generate
type = "MC", # The question type, one of many possible types on e-learning
answers = 4, # Number of answers per MC question
points.per.q = 4, # Number of points the question is worth (on the test)
difficulty = 1, # An easily viewable difficulty level on e-learning
quest.txt1 = "Four scatterplots of two variables Y vs. X are depicted above. Determine which image depicts the strongest correlation between Y and X.", # This is static text for the question
digits = 2, # This is the number of decimal places to round off the data
loc.path = "/Users/josephyang/Desktop/School Stuff/STAT 1600/Course Development/Question Generators/Stat1600-QGen/CorrMC1 images/", # This is the local path used to store any randomly generated image files
e.path = "Images/", # This is the path on e-learning used to store any above-implemented image files
hint = "No calculation is necessary. Just interpret the above graph.", # This is a student hint, visible to them during the exam on e-learning
feedback = "Strongest correlations have points closest to a line of best fit." # This is student feedback, visible after the exam
)
{
param <- c("NewQuestion","ID","Title","QuestionText","Points","Difficulty", "Image",
rep("Option", answers),"Hint","Feedback") # These are setting row names for the CSV file
questions <- data.frame() # This opens a data frame to store the randomly generated questions below
for(i in 1:n)
{
ID <- paste(title, i, sep = "-") # The ID of the specific question within the bank, title + question number in the loop
points <- sample(c(rep(0,answers-1),100),replace=F) # The proportion of points assigned to each possible answer, 1 if correct or 0 if incorrect
corr.ind <- 7 + which.max(points) # This is the row index of the correct answer
X <- sample(1:30, size = 20); X1 <- X; X2 <- X; X3 <- X; X4 <- X # randomly generating the data points of x-axis for all four scatterplots
decis <- sample(1:4, size = 1) # Randomly generating indication number for each of four scatterplots
if(decis == 1)
{
Y1 <- sample(10:20, size = 1) + sample(3:6, size = 1)*X + rnorm(length(X),0,30)
Y2 <- sample(10:20, size = 1) + sample(seq(-6, -3, 1), size = 1)*X + rnorm(length(X),0,2)
Y3 <- sample(seq(-20, -10, 1), size = 1) + sample(seq(-4, -3, 1), size = 1)*X + rnorm(length(X),0,30)
Y4 <- sample(seq(-20, -10, 1), size = 1) + sample(seq(2, 3, 1), size = 1)*X + rnorm(length(X), 0, 30)
}
  else if(decis == 2)
  {
    Y2 <- sample(10:20, size = 1) + sample(3:6, size = 1)*X + rnorm(length(X),0,30)
    Y3 <- sample(10:20, size = 1) + sample(3:6, size = 1)*X + rnorm(length(X),0,2)
    Y4 <- sample(seq(-20, -10, 1), size = 1) + sample(seq(-4, -3, 1), size = 1)*X + rnorm(length(X),0,30)
    Y1 <- sample(seq(-20, -10, 1), size = 1) + sample(seq(2, 3, 1), size = 1)*X + rnorm(length(X), 0, 30)
  }
  else if(decis == 3)
  {
    Y3 <- sample(10:20, size = 1) + sample(3:6, size = 1)*X + rnorm(length(X),0,30)
    Y4 <- sample(10:20, size = 1) + sample(seq(-6, -3, 1), size = 1)*X + rnorm(length(X),0,2)
    Y1 <- sample(seq(-20, -10, 1), size = 1) + sample(seq(-4, -3, 1), size = 1)*X + rnorm(length(X),0,30)
    Y2 <- sample(seq(-20, -10, 1), size = 1) + sample(seq(2, 3, 1), size = 1)*X + rnorm(length(X), 0, 30)
  }
  else if(decis == 4)
  {
    Y4 <- sample(10:20, size = 1) + sample(3:6, size = 1)*X + rnorm(length(X),0,30)
    Y1 <- sample(10:20, size = 1) + sample(3:6, size = 1)*X + rnorm(length(X),0,2)
    Y2 <- sample(seq(-20, -10, 1), size = 1) + sample(seq(-4, -3, 1), size = 1)*X + rnorm(length(X),0,30)
    Y3 <- sample(seq(-20, -10, 1), size = 1) + sample(seq(2, 3, 1), size = 1)*X + rnorm(length(X), 0, 30)
  } # randomly generating the y-axis values depending on the scatterplot indicator
r1 <- cor(X1, Y1); r2 <- cor(X2, Y2); r3 <- cor(X3, Y3); r4 <- cor(X4, Y4) # generating correlation value for each of the scatterplots
r <- abs(c(r1,r2,r3,r4)) # vector of absolute values of all the four scatterplots
corr.ans <- which.max(r) # This is the correct answer to the question
  ans.txt <- 1:4 # Candidate answer labels, one per scatterplot
  ans.txt <- c(ans.txt[ans.txt != corr.ans], "None of the Above") # The incorrect answer options, plus a "None of the Above" distractor
content <- c(type, ID, ID, quest.txt1,
points.per.q, difficulty, paste(e.path, paste(title, i, sep = "-"), ".jpeg", sep = ""),
points, hint, feedback) # This is collecting a lot of the above information into a single vector
options <- c(rep("",7), ans.txt, rep("",2)) # This is collecting the incorrect answers above, and indexing them correctly by row
options[corr.ind] <- corr.ans # This is imputing the correct answer at the appropriate row index
questions[(1+(9+answers)*i):((9+answers)*(i+1)),1] <- param # This is indexing and storing all the row names
questions[(1+(9+answers)*i):((9+answers)*(i+1)),2] <- content # Indexing and storing all the content
questions[(1+(9+answers)*i):((9+answers)*(i+1)),3] <- options # Indexing and storing the answers, both incorrect and correct
jpeg(filename=paste(loc.path, paste(title, i, sep = "-"), ".jpeg", sep = "")) # creating the image files in the designated place
par(mfrow=c(2,2)) # creating the graphical parameter that two scatterplots are drawn in two rows
plot(X1, Y1, xlim=c(0,30), ylim = c(-200, 200)); title(1)
plot(X2, Y2, xlim=c(0,30), ylim = c(-200, 200)); title(2)
plot(X3, Y3, xlim=c(0,30), ylim = c(-200, 200)); title(3)
plot(X4, Y4, xlim=c(0,30), ylim = c(-200, 200)); title(4); # setting up the boundary limits of x-axis and y-axis for each of the scatterplots
dev.off()
}
questions <- questions[((10+answers)):((9+answers)*(n+1)),] # Storing only what's needed for e-learning upload as a CSV file
write.table(questions, sep=",", file=paste(title, ".csv", sep = ""),
row.names=F, col.names=F) # Writing the CSV file
}
CorrMC1() # creating the csv file
| /CorrMC1.R | no_license | joseph950521/Stat1600-QGen | R | false | false | 6,181 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tls.R
\name{tls}
\alias{tls}
\title{Fitting error-in-variables models via total least squares.}
\usage{
tls(formula, data, method = c("normal", "bootstrap"), conf.level = 0.95,
...)
}
\arguments{
\item{formula}{an object of class "formula" (or one that can be
coerced to that class): a symbolic description of the model to
be fitted.}
\item{data}{an optional data frame, list or environment (or object
coercible by as.data.frame to a data frame) containing the
variables in the model.}
\item{method}{method for computing confidence interval}
\item{conf.level}{the confidence level for confidence interval.}
\item{...}{Optional arguments for future usage.}
}
\value{
\code{tls} returns parameters of the fitted model including
estimations of coefficient, corresponding estimated standard errors
and confidence intervals.
}
\description{
It can be used to carry out regression models that account for
measurement errors in the independent variables.
}
\details{
This function should be used with care. Confidence interval estimation
is given by normal approximation or bootstrap. The normal approximation and
bootstrap are proper when all the error terms are independent from normal
distribution with zero mean and equal variance (see the references for
more details).
}
\examples{
library(tls)
set.seed(100)
X.1 <- sqrt(1:100)
X.tilde.1 <- rnorm(100) + X.1
X.2 <- sample(X.1, size = length(X.1), replace = FALSE)
X.tilde.2 <- rnorm(100) + X.2
Y <- rnorm(100) + X.1 + X.2
data <- data.frame(Y = Y, X.1 = X.tilde.1, X.2 = X.tilde.2)
tls(Y ~ X.1 + X.2 - 1, data = data)
}
\references{
\itemize{
\item Gleser, Estimation in a Multivariate "Errors in Variables"
Regression Model: Large Sample Results, 1981, Ann. Stat.
\item Golub and Van Loan, An Analysis of the Total Least Squares Problem,
1980, SIAM J. Numer. Anal.
\item Pesta, Total least squares and bootstrapping with
applications in calibration, 2012, Statistics.}
}
\author{
Yan Li
}
| /man/tls.Rd | no_license | cran/tls | R | false | true | 2,027 | rd |
##################
## Factor Names ##
##################
#' @title factorNames: set and retrieve factor names
#' @description
#' @name factorNames
#' @rdname factorNames
#' @export
setGeneric("factorNames", function(object) {standardGeneric("factorNames")})
#' @name factorNames
#' @rdname factorNames
#' @aliases factorNames<-
#' @export
setGeneric("factorNames<-", function(object, value) {standardGeneric("factorNames<-")})
##################
## Sample Names ##
##################
#' @title sampleNames: set and retrieve sample names
#' @name sampleNames
#' @rdname sampleNames
#' @export
setGeneric("sampleNames", function(object) {standardGeneric("sampleNames")})
#' @name sampleNames
#' @rdname sampleNames
#' @aliases sampleNames<-
#' @export
setGeneric("sampleNames<-", function(object, value) {standardGeneric("sampleNames<-")})
###################
## Feature Names ##
###################
#' @title featureNames: set and retrieve feature names
#' @name featureNames
#' @rdname featureNames
#' @export
setGeneric("featureNames", function(object) {standardGeneric("featureNames")})
#' @name featureNames
#' @rdname featureNames
#' @aliases featureNames<-
#' @export
setGeneric("featureNames<-", function(object, value) {standardGeneric("featureNames<-")})
################
## View Names ##
################
#' @title viewNames: set and retrieve view names
#' @name viewNames
#' @rdname viewNames
#' @export
setGeneric("viewNames", function(object) {standardGeneric("viewNames")})
#' @name viewNames
#' @rdname viewNames
#' @aliases viewNames<-
#' @export
setGeneric("viewNames<-", function(object, value) {standardGeneric("viewNames<-")})
################
## Input Data ##
################
#' @title InputData: set and retrieve input data
#' @name InputData
#' @export
setGeneric("InputData", function(object) {standardGeneric("InputData")})
#' @name InputData
#' @aliases InputData<-
#' @export
setGeneric(".InputData<-", function(object, value) {standardGeneric(".InputData<-")})
##################
## Imputed Data ##
##################
#' @title ImputedData: set and retrieve imputed data
#' @name ImputedData
#' @export
setGeneric("ImputedData", function(object) {standardGeneric("ImputedData")})
#' @name ImputedData
#' @aliases ImputedData<-
#' @export
setGeneric(".ImputedData<-", function(object, value) {standardGeneric(".ImputedData<-")})
################
## Train Data ##
################
#' @title TrainData: set and retrieve training data
#' @name TrainData
#' @rdname TrainData
#' @export
setGeneric("TrainData", function(object) {standardGeneric("TrainData")})
#' @name TrainData
#' @aliases TrainData<-
#' @export
setGeneric(".TrainData<-", function(object, value) {standardGeneric(".TrainData<-")})
###################
## Train Options ##
###################
#' @title TrainOptions: set and retrieve training opts
#' @name TrainOptions
#' @rdname TrainOptions
#' @export
setGeneric("TrainOptions", function(object) {standardGeneric("TrainOptions")})
#' @name TrainOptions
#' @rdname TrainOptions
#' @aliases TrainOptions<-
#' @export
setGeneric(".TrainOptions<-", function(object, value) {standardGeneric(".TrainOptions<-")})
###################
## Model Options ##
###################
#' @title ModelOptions: set and retrieve Model options
#' @name ModelOptions
#' @export
setGeneric("ModelOptions", function(object) {standardGeneric("ModelOptions")})
#' @name ModelOptions
#' @aliases ModelOptions<-
#' @export
setGeneric(".ModelOptions<-", function(object, value) {standardGeneric(".ModelOptions<-")})
######################
## Train Statistics ##
######################
#' @title TrainStats: set and retrieve training statistics
#' @name TrainStats
#' @export
setGeneric("TrainStats", function(object) {standardGeneric("TrainStats")})
#' @name TrainStats
#' @aliases TrainStats<-
#' @export
setGeneric(".TrainStats<-", function(object, value) {standardGeneric(".TrainStats<-")})
##################
## Expectations ##
##################
#' @title Expectations: set and retrieve expectations
#' @name Expectations
#' @rdname Expectations
#' @export
setGeneric("Expectations", function(object) {standardGeneric("Expectations")})
#' @name Expectations
#' @rdname Expectations
#' @aliases Expectations<-
#' @export
setGeneric(".Expectations<-", function(object, value) {standardGeneric(".Expectations<-")})
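As a hedged sketch of how a pair of these generics would typically be implemented (the class name and slot below are hypothetical and not part of this package):

```r
# Hypothetical S4 class; the package's real model class is not shown here.
setClass("DemoModel", representation(factor_names = "character"))

# Getter method dispatching on the hypothetical class
setMethod("factorNames", "DemoModel", function(object) object@factor_names)

# Replacement method: must return the modified object
setMethod("factorNames<-", "DemoModel", function(object, value) {
  object@factor_names <- value
  object
})

m <- new("DemoModel", factor_names = c("LF1", "LF2"))
factorNames(m)                 # c("LF1", "LF2")
factorNames(m) <- c("A", "B")  # replacement form
```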
| /MOFAtools/R/AllGenerics.R | no_license | vibbits/MOFA | R | false | false | 4,359 | r |
##################
## Factor Names ##
##################
#' @title factorNames: set and retrieve factor names
#' @description Set and retrieve the factor names of the model.
#' @name factorNames
#' @rdname factorNames
#' @export
setGeneric("factorNames", function(object) {standardGeneric("factorNames")})
#' @name factorNames
#' @rdname factorNames
#' @aliases factorNames<-
#' @export
setGeneric("factorNames<-", function(object, value) {standardGeneric("factorNames<-")})
##################
## Sample Names ##
##################
#' @title sampleNames: set and retrieve sample names
#' @name sampleNames
#' @rdname sampleNames
#' @export
setGeneric("sampleNames", function(object) {standardGeneric("sampleNames")})
#' @name sampleNames
#' @rdname sampleNames
#' @aliases sampleNames<-
#' @export
setGeneric("sampleNames<-", function(object, value) {standardGeneric("sampleNames<-")})
###################
## Feature Names ##
###################
#' @title featureNames: set and retrieve feature names
#' @name featureNames
#' @rdname featureNames
#' @export
setGeneric("featureNames", function(object) {standardGeneric("featureNames")})
#' @name featureNames
#' @rdname featureNames
#' @aliases featureNames<-
#' @export
setGeneric("featureNames<-", function(object, value) {standardGeneric("featureNames<-")})
################
## View Names ##
################
#' @title viewNames: set and retrieve view names
#' @name viewNames
#' @rdname viewNames
#' @export
setGeneric("viewNames", function(object) {standardGeneric("viewNames")})
#' @name viewNames
#' @rdname viewNames
#' @aliases viewNames<-
#' @export
setGeneric("viewNames<-", function(object, value) {standardGeneric("viewNames<-")})
################
## Input Data ##
################
#' @title InputData: set and retrieve input data
#' @name InputData
#' @export
setGeneric("InputData", function(object) {standardGeneric("InputData")})
#' @name InputData
#' @aliases InputData<-
#' @export
setGeneric(".InputData<-", function(object, value) {standardGeneric(".InputData<-")})
##################
## Imputed Data ##
##################
#' @title ImputedData: set and retrieve imputed data
#' @name ImputedData
#' @export
setGeneric("ImputedData", function(object) {standardGeneric("ImputedData")})
#' @name ImputedData
#' @aliases ImputedData<-
#' @export
setGeneric(".ImputedData<-", function(object, value) {standardGeneric(".ImputedData<-")})
#####################################################################################################
#
# This code is used to combine all the data
#
#
#####################################################################################################
##setwd("/exports/csce/eddie/inf/groups/mamode_prendergast/Scripts")
library("crayon", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("rstudioapi", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("cli", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("withr", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("readr", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("tidyverse", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("BiocGenerics", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("S4Vectors", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("IRanges", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("GenomeInfoDb", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("GenomicRanges", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("R.methodsS3", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("R.oo", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library("R.utils", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
#library("tools", lib.loc = "/exports/csce/eddie/inf/groups/mamode_prendergast/R_packages/")
library(tools)
#we will get the nearest gene to each eQTL in CaVEMaN results
##eqtl<-read_tsv("../Data/GTEx_Analysis_2017-06-05_v8_WholeGenomeSeq_838Indiv_Analysis_Freeze.lookup_table.txt")
##eqtl <- eqtl[-c(1,6,7,8)]
#add the index as a column & select certain columns
##eqtl$col_ref <- rownames(eqtl)
##eqtl_chromatin<-(as.data.frame(select(eqtl, seqnames=chr, start=variant_pos, col_ref) %>% mutate(end=start)))
##eqtl_chromatin$col_ref <- as.numeric(eqtl$col_ref)
setwd("/exports/eddie/scratch/s1772751/Prepared_data_6")
#setwd("D:/Dissertation_Data/Eddie/Prepared_data_2")
#find the paths of all the files in the folder
files <- list.files(pattern = "\\.zip$", recursive = TRUE, full.names = TRUE)
#length(files)
#for each file
for(i in (1:40)) {
#read in to chromatin variable
chromatin<-read_csv(files[i], col_names = TRUE)
#name columns
folders <- strsplit(files[i],"/")
name <- substr(folders[[1]][2], 1, nchar(folders[[1]][2]) -8)
names(chromatin)[names(chromatin) == 'distance'] <- name
print(folders)
#delete unnecessary column
chromatin <- chromatin[ , !names(chromatin) %in% c("queryHits")]
print(files[i])
print(i)
file_name_gz <- paste(name,'.csv.gz', sep="")
file_name_zip <- paste(name,'.csv.zip', sep="")
#file_path <- file_path_as_absolute(file_name_zip)
#file_path <- strsplit(file_path,"/")
file_to_save <- paste(folders[[1]][1], file_name_gz, sep="/")
file_to_remove <- paste(folders[[1]][1], file_name_zip, sep="/")
print(file_to_save)
write.csv(chromatin, file = gzfile(file_to_save))
#file.remove(file_to_remove)
#rm(list=setdiff(ls(), "files"))
rm(list=ls()[! ls() %in% c("files","i")])
print(getwd())
print(gc())
}
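The `substr`/`strsplit` path bookkeeping inside the loop can be expressed more directly with base-R path helpers. A sketch under the assumption that inputs follow the same `folder/name.csv.zip` layout (the example path is made up):

```r
# Hypothetical example path, same layout as the files handled above.
f <- "GSE123/sampleA.csv.zip"

# Strip the ".csv.zip" suffix (the substr(..., nchar - 8) above does the same).
name <- sub("\\.csv\\.zip$", "", basename(f))

file_to_save   <- file.path(dirname(f), paste0(name, ".csv.gz"))
file_to_remove <- file.path(dirname(f), paste0(name, ".csv.zip"))
```

`basename()`/`dirname()` avoid indexing into `strsplit()` results and keep working if the directory depth changes.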
###########################################################################################################
#setwd("D:/OneDrive/Documents/Edinburgh Uni/Dissertation")
#
#
#chromatin <- read.csv("eqtl_chromatin_new_2.csv", nrows = 10, header = TRUE) #, colClasses = column_list)
#chromatin_nan <- chromatin[colSums(!is.na(chromatin)) > 0]
#chromatin_col <- colnames(chromatin)
#chromatin_nan_col <- colnames(chromatin_nan)
#drop <- c(setdiff(chromatin_col, chromatin_nan_col))
#
#chromatin <- read.csv("eqtl_chromatin_new_2.csv", header = TRUE)
#chromatin <- chromatin[ , !(names(chromatin) %in% drop)]
#write.csv(chromatin, file = "D:/OneDrive/Documents/Edinburgh Uni/Dissertation/eqtl_chromatin_new_2_v.csv")
#
#
#chromatin2 <- read.csv("eqtl_chromatin2.csv", header = TRUE)
#chromatin2_nan <- chromatin2[colSums(!is.na(chromatin2)) > 0]
#chromatin2_col <- colnames(chromatin2)
#chromatin2_nan_col <- colnames(chromatin2_nan)
#drop <- c(setdiff(chromatin2_col, chromatin2_nan_col))
#chromatin2_new <- chromatin2[ , !(names(chromatin2) %in% drop)]
#write.csv(chromatin2_new, file = "D:/OneDrive/Documents/Edinburgh Uni/Dissertation/eqtl_chromatin2_new.csv")
#
#
#
################################################################################################################
#
#
#column_class <- sapply(chromatin, class)
#length(column_class)
#column_list<-vector()
#
#for (i in 1:length(column_class)){
# #list.append(column_list,column_class[i][[1]]
# if (column_class[i][[1]]=='integer') {
# column_list <- c(column_list, 'int')
# } else if (column_class[i][[1]]=='logical') {
# print(i)
# column_list <- c(column_list, 'logi')
# } else {
# column_list <- c(column_list, column_class[i][[1]])
# }
#}
#
#column_list[1:1015] <- "integer"
#column_list[1:2] <- "factor"
#column_list[c(3,5)] <- "character"
#column_list[c(78,80,624,627,661)] <- "logical"
#column_list[600:1015] <- "NULL"
#column_list[c(7:600,1015)] <- "NULL"
#col_ <- column_list
#
#
#
#
#col_ = c(rep("NULL", 10), rep("integer", 5), rep("NULL", 1000))
#col_ = c("factor", rep("NULL", 1014))
#col_ = c(column_list[1:4], rep("NULL", 1011))
#chromatin <- read.csv("eqtl_chromatin.csv", nrows = 10, header = TRUE)
#chromatin_b <- read.csv("eqtl_chromatin2.csv", nrows = 10, header = TRUE)
#chromatin <- read.csv("eqtl_chromatin.csv", nrows = -1, header = TRUE, colClasses = col_)
#
#
#chromatin <- read.csv("eqtl_chromatin_new_3.csv", nrows = -1, header = TRUE)
#chromatin <- chromatin[ -c(1:2) ]
#write.csv(chromatin, file = "D:/OneDrive/Documents/Edinburgh Uni/Dissertation/eqtl_chromatin_new_3a.csv")
#
| /old/Prepared_data_6/Ensembl_Distance2Gene_full_combine_data_array_40.R | permissive | HIM003/edinburgh-university-dissertation | R | false | false | 6,120 | r |
library(factoextra)
library(plotly)
library(dplyr)
library(arules)
library(spatstat)
#######
dfdrugs <- read.csv("dfdrugs.csv", header = T)
dfdrugs2 <- read.csv("dfdrugs2.csv", header = T)
dfall <- read.csv("dfall.csv", header = T)
dfdays <- read.csv("days.csv", header = T)
dfdaysdec <- dfdays[order(dfdays$x, decreasing = T),]
c1 <- paste0("X", dfdaysdec$X[1:61])
c2 <- paste0("X", dfdaysdec$X[62:119])
c1
c1
all.data <- rbind(dfdrugs2, dfdis)
all.data <- all.data[,-1]
all.data[all.data>0] <- 1
all.data <- unique(all.data)
rownames(all.data) <- all.data$.rownames
all.data <- all.data[!(rownames(all.data) %in% c("and","with","for","when","the", "[1]")),]
seldata <- all.data[,which(colnames(all.data) %in% c("row.names",c1))]
seldata <- filter(seldata, rowSums(seldata)>0)
seldata[order(rowSums(seldata), decreasing = T),] %>% rownames()
diff <- rowSums(all.data[,which(colnames(all.data) %in% c1)])-rowSums(all.data[,which(colnames(all.data) %in% c2)])
ratio <- rowSums(all.data[,which(colnames(all.data) %in% c1)])/rowSums(all.data[,which(colnames(all.data) %in% c2)])
diffordered <- diff[order(diff, decreasing = T)]
dfdiff <- data.frame("groupA"=rowSums(all.data[,which(colnames(all.data) %in% c1)]),"groupB"=rowSums(all.data[,which(colnames(all.data) %in% c2)]), "diff"=diff, "ratio(A/B)"=ratio)
dfdiff <- dfdiff[order(dfdiff$diff, decreasing = T),]
drugdiff <- dfdiff[which(row.names(dfdiff) %in% dfdrugs2$.rownames),]
write.csv(drugdiff, "drugdiff.csv")
write.csv(diff, "diff.csv")
write.csv(dfdiff, "dfdiff.csv")
### c1, get rid of rows that have 0 as sum
apply_cosine_similarity <- function(df){
cos.sim <- function(df, ix)
{
A = df[ix[1],]
B = df[ix[2],]
return( sum(A*B)/sqrt(sum(A^2)*sum(B^2)) )
}
n <- nrow(df)
cmb <- expand.grid(i=1:n, j=1:n)
C <- matrix(apply(cmb,1,function(cmb){ cos.sim(df, cmb) }),n,n)
C
}
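The `expand.grid`/`apply` version above recomputes dot products pairwise; the same matrix can be obtained in one cross-product. A sketch (`cos_mat` is a name introduced here; it should agree with `apply_cosine_similarity` up to floating-point error):

```r
# Vectorized pairwise cosine similarity via a single cross-product.
cos_mat <- function(m) {
  m   <- as.matrix(m)
  cp  <- tcrossprod(m)      # m %*% t(m): all pairwise dot products
  nrm <- sqrt(diag(cp))     # row norms
  cp / outer(nrm, nrm)
}

# Rows: (1,0), (0,1), (1,1)
x <- matrix(c(1, 0, 0, 1, 1, 1), nrow = 3, byrow = TRUE)
s <- cos_mat(x)
```

Orthogonal rows give 0, identical directions give 1, and the diagonal is always 1.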
dfcos <- data.frame()
for(i in 1:3){
veccos <- c()
for(j in 1:nrow(all.data)){
if(i==j){
veccos[j] <- NA
}
else{
A <- all.data[i,-1]
B <- all.data[j,-1]
veccos[j] <- sum(A*B)/sqrt(sum(A^2)*sum(B^2))
}
}
dfcos[i,] <- veccos
}
A <- all.data[1,-1]
B <- all.data[2,-1]
sum(A*B)/sqrt(sum(A^2)*sum(B^2))
mtcos <- apply_cosine_similarity(all.data[,-1])
max(nndist(all.data[,2], k=(nrow(all.data))-1))
nndist(all.data[,3], k=3)
#apriori
factordf <- all.data[,which(colnames(all.data) %in% c1)]
for(i in 1:ncol(factordf)){
  factordf[,i] <- as.factor(factordf[,i])
}
factordf = as(factordf, "transactions")
rules <- apriori(factordf,
parameter = list(supp = 0.5, conf = 0.9, target = "rules"))
summary(rules)
vis <- function(x,y,center,min){
sel <- all.data[-1]
sel <- sel[ , which(colnames(sel) %in% x)]
sel[sel>0]<-1
sel <- sel[rowSums(sel)>=min,]
sel <- sel[,colSums(sel)>0]
set.seed(20)
km <- kmeans(sel, centers = center, nstart = 5)
return(ggplotly(fviz_cluster(km, data = sel, main = y)))
}
vis(c2, "disease & drug cluster, 119 patients, <22days",5,2)
gp3 <- vis(c21, "drug cluster(day=21~30)",3)
gp4 <- vis(c(c1,c11) , "disease & drug cluster(day<21), times>=1",3)
gp1
gp2
gp3
gp4
dd <-all.data[ , which(colnames(all.data) %in% c(c1, c11))]
dd[dd>0]<-1
dd
z <- vis(c(c1, c11), "drug & symptoms cluster(day=1~20)",2)
fviz_nbclust(run,
FUNcluster = kmeans, # K-Means
method = "wss", # total within sum of square
k.max = 12 # max number of clusters to consider
) +labs(title="Elbow Method for K-Means") + geom_vline(xintercept = 6,
linetype = 2)
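The elbow curve that `fviz_nbclust` draws is just the total within-cluster sum of squares across candidate k. A base-R sketch of the same computation using `stats::kmeans` on the built-in `iris` measurements (used here purely for illustration, not the study data):

```r
# Base-R version of the elbow computation (what fviz_nbclust does internally).
set.seed(20)
xs  <- scale(iris[, 1:4])
wss <- sapply(1:6, function(k) kmeans(xs, centers = k, nstart = 5)$tot.withinss)
# wss shrinks as k grows; the "elbow" is where the decrease levels off.
```

Plotting `wss` against `1:6` reproduces the elbow plot without the factoextra dependency.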
#####
rule <- apriori(t(as.matrix(all.data)),parameter=list(supp=0.2,conf=0.8,maxlen=5))
inspect(rule)
summary(rule)
inspect(head(sort(rule,by="support"),10))
rule
| /week3/2/cluster.R | no_license | MarvinChung/107-2_NTU_CS-X_Data_Science_Programming | R | false | false | 3,805 | r |
library(plotly)
library(dplyr)
library(arules)
library(spatstat)
#######
dfdrugs <- read.csv("dfdrugs.csv", header = T)
dfdrugs2 <- read.csv("dfdrugs2.csv", header = T)
dfall <- read.csv("dfall.csv", header = T)
dfdays <- read.csv("days.csv", header = T)
dfdaysdec <- dfdays[order(dfdays$x, decreasing = T),]
c1 <- paste0("X", dfdaysdec$X[1:61])
c2 <- paste0("X", dfdaysdec$X[62:119])
c1
c1
all.data <- rbind(dfdrugs2, dfdis)
all.data <- all.data[,-1]
all.data[all.data>0] <- 1
all.data <- unique(all.data)
rownames(all.data) <- all.data$.rownames
all.data <- all.data[!(rownames(all.data) %in% c("and","with","for","when","the", "[1]")),]
seldata <- all.data[,which(colnames(all.data) %in% c("row.names",c1))]
seldata <- filter(seldata, rowSums(seldata)>0)
seldata[order(rowSums(seldata), decreasing = T),] %>% rownames()
diff <- rowSums(all.data[,which(colnames(all.data) %in% c1)])-rowSums(all.data[,which(colnames(all.data) %in% c2)])
ratio <- rowSums(all.data[,which(colnames(all.data) %in% c1)])/rowSums(all.data[,which(colnames(all.data) %in% c2)])
diffordered <- diff[order(diff, decreasing = T)]
dfdiff <- data.frame("groupA"=rowSums(all.data[,which(colnames(all.data) %in% c1)]),"groupB"=rowSums(all.data[,which(colnames(all.data) %in% c2)]), "diff"=diff, "ratio(A/B)"=ratio)
dfdiff <- dfdiff[order(dfdiff$diff, decreasing = T),]
drugdiff <- dfdiff[which(row.names(dfdiff) %in% dfdrugs2$.rownames),]
write.csv(drugdiff, "drugdiff.csv")
write.csv(diff, "diff.csv")
write.csv(dfdiff, "dfdiff.csv")
###c1, rid of rows that has 0 as sum
apply_cosine_similarity <- function(df){
cos.sim <- function(df, ix)
{
A = df[ix[1],]
B = df[ix[2],]
return( sum(A*B)/sqrt(sum(A^2)*sum(B^2)) )
}
n <- nrow(df)
cmb <- expand.grid(i=1:n, j=1:n)
C <- matrix(apply(cmb,1,function(cmb){ cos.sim(df, cmb) }),n,n)
C
}
dfcos <- data.frame()
for(i in 1:3){
veccos <- c()
for(j in 1:nrow(all.data)){
if(i==j){
veccos[j] <- NA
}
else{
A <- all.data[i,-1]
B <- all.data[j,-1]
veccos[j] <- sum(A*B)/sqrt(sum(A^2)*sum(B^2))
}
}
dfcos[i,] <- veccos
}
A <- all.data[1,-1]
B <- all.data[2,-1]
sum(A*B)/sqrt(sum(A^2)*sum(B^2))
mtcos <- apply_cosine_similarity(all.data[,-1])
max(nndist(all.data[,2], k=(nrow(all.data))-1))
nndist(all.data[,3], k=3)
#apriori
factordf <- all.data[,which(colnames(all.data) %in% c1)]
for(i in 1:nrow(factordf)){
factordf[,i] <- as.factor(factordf [,i])
}
factordf = as(factordf, "transactions")
rules <- apriori(factordf,
parameter = list(supp = 0.5, conf = 0.9, target = "rules"))
summary(rules)
vis <- function(x,y,center,min){
sel <- all.data[-1]
sel <- sel[ , which(colnames(sel) %in% x)]
sel[sel>0]<-1
sel <- sel[rowSums(sel)>=min,]
sel <- sel[,colSums(sel)>0]
set.seed(20)
km <- kmeans(sel, centers = center, nstart = 5)
return(ggplotly(fviz_cluster(km, data = sel, main = y)))
}
vis(c2, "disease & drug cluster, 119 patients, <22days",5,2)
gp3 <- vis(c21, "drug cluster(day=21~30)",3)
gp4 <- vis(c(c1,c11) , "disease & drug cluster(day<21), times>=1",3)
gp1
gp2
gp3
gp4
dd <-all.data[ , which(colnames(all.data) %in% c(c1, c11))]
dd[dd>0]<-1
dd
z <- vis(c(c1, c11), "drug & symptoms cluster(day=1~20)",2)
fviz_nbclust(run,
FUNcluster = kmeans, # K-Means
method = "wss", # total within sum of square
k.max = 12 # max number of clusters to consider
) +labs(title="Elbow Method for K-Means") + geom_vline(xintercept = 6,
linetype = 2)
#####
rule <- apriori(t(as.matrix(all.data)),parameter=list(supp=0.2,conf=0.8,maxlen=5))
inspect(rule)
summary(rule)
inspect(head(sort(rule,by="support"),10))
rule
|
#' @name VSN
#' @title Variance Stabilizing Normalization
#' @description
#' Variance Stabilizing Normalization (VSN) is based on a generalized logarithm
#' transformation and aims at stabilizing variance inside microarray data.
#'
#' @keywords Normalization
#'
#' @section Introduction:
#' Usage of VSN was suggested by Karp (2010), but it is not commonly applied, as
#' the scale change can make the fold-change ratio difficult to interpret, and
#' applying it can lead to negative values. This is well summarized in
#' Mahoney (2011):
#' "Karp et al proposed a variance stabilizing transformation to stabilize the
#' variance across all proteins[15]. This has been applied to other data types
#' such as mRNA and miRNA[33-35]. We prefer use of WLS for the following reasons:
#' 1) constant variance is generally not expected across all proteins in a study;
#' 2) downstream statistical analyses assume equal variances between groups
#' within a protein but do not assume constant variance across all proteins;
#' and 3) the scale is constant across all proteins with WLS. The interpretation
#' is difficult after the proposed transformation, with fold changes on the
#' transformed scale interpreted on the raw scale at low intensities, log scale
#' at high intensities, and a sliding hybrid of these two scales over the
#' continuum of middle range abundance values."
#'
#' @section Usage note:
#' The default setting is lts.quantile=0.9, but lts.quantile=0.5 is more robust.
#' The reason why lts.quantile=0.5 is not the default is that the estimator
#' with lts.quantile=0.9 is more efficient (more precise with less data)
#' if the fraction of differentially expressed genes is not that large.
#' From Bioconductor case studies.
#'
#' @section References:
#'
#' Huber, W., Von Heydebreck, A., Sultmann, H., Poustka, A., & Vingron, M. (2002).
#' Variance stabilization applied to microarray data calibration and to the
#' quantification of differential expression.
#' Bioinformatics, 18(Suppl 1), S96-S104.
#' doi:10.1093/bioinformatics/18.suppl_1.S96
#'
#' Karp, N. A., Huber, W., Sadowski, P. G., Charles, P. D., Hester, S. V,
#' Lilley, K. S. (2010).
#' Addressing accuracy and precision issues in iTRAQ quantitation.
#' Molecular & cellular proteomics : MCP, 9(9), 1885-97.
#' doi:10.1074/mcp.M900628-MCP200
#'
#' Mahoney, D. W., Therneau, T. M., Heppelmann, C. J., Higgins, L.,
#' Benson, L. M., Zenka, R. M., Jagtap, P., et al. (2011).
#' Relative quantification: characterization of bias, variability and fold
#' changes in mass spectrometry data from iTRAQ-labeled peptides.
#' Journal of proteome research, 10(9), 4325-33.
#' doi:10.1021/pr2001308
#'
#' http://bioinfo.cnio.es/files/training/Microarray_Course/3_UBio_Normalization_Theory.pdf
#'
#' @import vsn
# library(vsn)
NULL
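The generalized logarithm (glog) that VSN is built on can be illustrated directly. A simplified sketch (vsn additionally estimates per-array affine calibration parameters via `vsn2`; this only shows the shape of the transform):

```r
# Simplified generalized logarithm of the kind VSN uses.
# glog2(x) ~ log2(x) for large x, but stays finite at and below zero,
# which is what stabilizes the variance of low-intensity features.
glog2 <- function(x, c = 1) log2((x + sqrt(x^2 + c)) / 2)

big   <- glog2(2^20)   # approximately 20, i.e. behaves like log2
small <- glog2(0)      # log2(1/2) = -1, no -Inf unlike log2(0)
```

This finiteness at zero is also why glog-transformed data can contain negative values, the interpretability caveat raised in the introduction above.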
#' @title Apply variance-stabilizing normalization.
#'
#' @description Apply variance-stabilizing normalization.
#'
#' @details
#' Apply variance-stabilizing normalization using generalized logarithm. Use
#' default parameter: lts.quantile = 0.9
#'
#' @param dataNotNorm Data to be normalized.
#' @return A dataframe containing normalized data.
#' @export
applyVSN09 <- function(dataNotNorm) {
dataNormalized <- data.frame(
justvsn(as.matrix(dataNotNorm), verbose=FALSE)
) #, backgroundsubtract=FALSE ?
# equivalent to :
#fit <- vsn2(as.matrix(dataNotNorm), subsample=100000)
#x <- predict(fit, newdata=as.matrix(dataNotNorm))
return(dataNormalized)
}
#' @title Apply robust variance-stabilizing normalization.
#'
#' @description Apply robust variance-stabilizing normalization.
#'
#' @details
#' Apply variance-stabilizing normalization using generalized logarithm. Use
#' parameter: lts.quantile = 0.5, more robust to high number of outliers.
#'
#' @param dataNotNorm Data to be normalized.
#' @return A dataframe containing normalized data.
#' @export
applyVSN05 <- function(dataNotNorm) {
dataNormalized <- data.frame(
justvsn(as.matrix(dataNotNorm), verbose=FALSE, lts.quantile=0.5)
)
return(dataNormalized)
}
#' @title Apply VSN and report result
#'
#' @description Apply VSN and report result
#'
#' @details
#' Apply VSN on a dataset.
#' TODO Use ExpressionSet object as input
#'
#' @param dataNotNorm Dataset to be normalized.
#' @param outputFolderTemp Temporary folder to store data generated at this
#' step of the analysis.
#' @param outputFile Report of current analysis step will be appended to
#' this Rmd file.
#' @param useRobustFit If the more robust version of the method should be used.
#' @return A data frame with normalized data.
applyAndReportVSN <- function(dataNotNorm,
outputFolderTemp, outputFile,
useRobustFit=FALSE) {
# Apply VSN
dataNormalized <- NULL
if(useRobustFit==FALSE) {
dataNormalized <- applyVSN09(dataNotNorm)
}
else {
dataNormalized <- applyVSN05(dataNotNorm)
}
# Generate report
execLabel <- paste(
c(format(Sys.time(), "%Y%m%d%H%M%S"), trunc(
runif(1) * 10000)),
collapse='')
cat('',
'Normalization (VSN)',
'---------------------------------------------------------------------',
'',
'Normalization was achieved using the Variance Stabilizing method (R package version `r packageDescription("vsn")$Version`).',
'',
ifelse(
useRobustFit==FALSE,
"Default settings (lts.quantile=0.9) were used.",
"Settings more robust to high number of outliers were used (lts.quantile=0.5)."
),
sep="\n", file=outputFile, append=TRUE)
allTestsNormalizationRmd(dataNormalized, outputFile=outputFile,
outFolder=outputFolderTemp)
tempOutput <- paste(
c(outputFolderTemp, '/vsn_tests_normalization_Rmd_data_',
execLabel, '.txt'),
collapse='')
write.table(dataNormalized, tempOutput, sep="\t")
cat('',
'Specific test for VSN.',
'',
paste(
c('```{r applyAndReportVSN2', execLabel,
', echo=FALSE, fig.width=10, fig.height=6}'),
collapse=''),
'',
sep="\n", file=outputFile, append=TRUE
)
cat(
paste(
c('matrixdata <- as.matrix(read.table("',
tempOutput,
'", stringsAsFactors=FALSE))'),
collapse=''
),
'meanSdPlot(matrixdata)',
'',
'```',
'',
sep="\n", file=outputFile, append=TRUE
)
cat(' ',
'> ',
'> Please cite these articles whenever using results from this software in a publication :',
'> ',
'> Method article :',
'> Variance stabilization applied to microarray data calibration and to the quantification of differential expression. ',
'> Huber, W., Von Heydebreck, A., Sultmann, H., Poustka, A., & Vingron, M. (2002).',
'> Bioinformatics, 18(Suppl 1), S96-S104. ',
'> doi:10.1093/bioinformatics/18.suppl_1.S96',
'> ',
'> Software article (R package) :',
'> `r citation("vsn")$textVersion`',
'> ',
'> Usage for quantitative proteomics data suggested by :',
'> Karp, N. A., Huber, W., Sadowski, P. G., Charles, P. D., Hester, S. V, Lilley, K. S. (2010). ',
'> Addressing accuracy and precision issues in iTRAQ quantitation. ',
'> Molecular & cellular proteomics : MCP, 9(9), 1885-97. ',
'> doi:10.1074/mcp.M900628-MCP200',
' ',
'---------------------------------------------------------------------',
' ',
sep="\n", file=outputFile, append=TRUE)
return(dataNormalized)
}
| /R/nrm_variance-stabilizing.R | no_license | rmylonas/Prots4Prots | R | false | false | 7,862 | r |
#' @title Variance Stabilizing Normalization
#' @description
#' Variance Stabilizing Normalization (VSN) is based on a generalized logarithm
#' transformation and aims at stabilizing variance inside microarray data.
#'
#' @keywords Normalization
#'
#' @section Introduction:
#' Usage of VSN was suggested by Karp (2010) but its usage is not common as the
#' scale change can make the fold change ratio difficult to interpret, and
#' applying it can lead to negative values. This is well summarized in
#' Mahoney (2011) :
#' "Karp et al proposed a variance stabilizing transformation to stabilize the
#' variance across all proteins[15]. This has been applied to other data types
#' such as mRNA and miRNA[33-35]. We prefer use of WLS for the following reasons:
#' 1) constant variance is generally not expected across all proteins in a study;
#' 2) downstream statistical analyses assume equal variances between groups
#' within a protein but do not assume constant variance across all proteins;
#' and 3) the scale is constant across all proteins with WLS. The interpretation
#' is difficult after the proposed transformation, with fold changes on the
#' transformed scale interpreted on the raw scale at low intensities, log scale
#' at high intensities, and a sliding hybrid of these two scales over the
#' continuum of middle range abundance values."
#'
#' @section Usage note:
#' The default setting is lts.quantile=0.9, but lts.quantile=0.5 is more robust.
#' The reason why lts.quantile=0.5 is not the default is that the estimator
#' with lts.quantile=0.9 is more efficient (more precise with less data)
#' if the fraction of differentially expressed genes is not that large.
#' From Bioconductor case studies.
#'
#' @section References:
#'
#' Huber, W., Von Heydebreck, A., Sultmann, H., Poustka, A., & Vingron, M. (2002).
#' Variance stabilization applied to microarray data calibration and to the
#' quantification of differential expression.
#' Bioinformatics, 18(Suppl 1), S96-S104.
#' doi:10.1093/bioinformatics/18.suppl_1.S96
#'
#' Karp, N. A., Huber, W., Sadowski, P. G., Charles, P. D., Hester, S. V,
#' Lilley, K. S. (2010).
#' Addressing accuracy and precision issues in iTRAQ quantitation.
#' Molecular & cellular proteomics : MCP, 9(9), 1885-97.
#' doi:10.1074/mcp.M900628-MCP200
#'
#' Mahoney, D. W., Therneau, T. M., Heppelmann, C. J., Higgins, L.,
#' Benson, L. M., Zenka, R. M., Jagtap, P., et al. (2011).
#' Relative quantification: characterization of bias, variability and fold
#' changes in mass spectrometry data from iTRAQ-labeled peptides.
#' Journal of proteome research, 10(9), 4325-33.
#' doi:10.1021/pr2001308
#'
#' http://bioinfo.cnio.es/files/training/Microarray_Course/3_UBio_Normalization_Theory.pdf
#'
#' @import vsn
# library(vsn)
NULL
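# Minimal illustration (toy sketch, not package code): the generalized
# logarithm ("glog") that VSN is built on, in base R. The fitted VSN
# transform additionally estimates per-sample affine calibration
# parameters, so this shows the core idea only, not a substitute for
# justvsn().
glog2 <- function(x, lambda = 1) {
  log2((x + sqrt(x^2 + lambda)) / 2)
}
glog2(1024) # ~10, i.e. close to log2() for large intensities
glog2(0)    # -1, still finite at zero, unlike a plain log transform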
#' @title Apply variance-stabilizing normalization.
#'
#' @description Apply variance-stabilizing normalization.
#'
#' @details
#' Apply variance-stabilizing normalization using generalized logarithm. Use
#' default parameter: lts.quantile = 0.9
#'
#' @param dataNotNorm Data to be normalized.
#' @return A data frame containing normalized data.
#' @export
applyVSN09 <- function(dataNotNorm) {
dataNormalized <- data.frame(
justvsn(as.matrix(dataNotNorm), verbose=FALSE)
) #, backgroundsubtract=FALSE ?
# equivalent to :
#fit <- vsn2(as.matrix(dataNotNorm), subsample=100000)
#x <- predict(fit, newdata=as.matrix(dataNotNorm))
return(dataNormalized)
}
#' @title Apply robust variance-stabilizing normalization.
#'
#' @description Apply robust variance-stabilizing normalization.
#'
#' @details
#' Apply variance-stabilizing normalization using generalized logarithm. Use
#' parameter: lts.quantile = 0.5, more robust to high number of outliers.
#'
#' @param dataNotNorm Data to be normalized.
#' @return A data frame containing normalized data.
#' @export
applyVSN05 <- function(dataNotNorm) {
dataNormalized <- data.frame(
justvsn(as.matrix(dataNotNorm), verbose=FALSE, lts.quantile=0.5)
)
return(dataNormalized)
}
#' @title Apply VSN and report result
#'
#' @description Apply VSN and report result
#'
#' @details
#' Apply VSN on a dataset.
#' TODO Use ExpressionSet object as input
#'
#' @param dataNotNorm Dataset to be normalized.
#' @param outputFolderTemp Temporary folder to store data generated at this
#' step of the analysis.
#' @param outputFile Report of current analysis step will be appended to
#' this Rmd file.
#' @param useRobustFit If the more robust version of the method should be used.
#' @return A data frame with normalized data.
applyAndReportVSN <- function(dataNotNorm,
outputFolderTemp, outputFile,
useRobustFit=FALSE) {
# Apply VSN
if (useRobustFit == FALSE) {
dataNormalized <- applyVSN09(dataNotNorm)
} else {
dataNormalized <- applyVSN05(dataNotNorm)
}
# Generate report
execLabel <- paste(
c(format(Sys.time(), "%Y%m%d%H%M%S"), trunc(
runif(1) * 10000)),
collapse='')
cat('',
'Normalization (VSN)',
'---------------------------------------------------------------------',
'',
'Normalization was achieved using the Variance Stabilizing method (R package version `r packageDescription("vsn")$Version`).',
'',
ifelse(
useRobustFit==FALSE,
"Default settings (lts.quantile=0.9) were used.",
"Settings more robust to high number of outliers were used (lts.quantile=0.5)."
),
sep="\n", file=outputFile, append=TRUE)
allTestsNormalizationRmd(dataNormalized, outputFile=outputFile,
outFolder=outputFolderTemp)
tempOutput <- paste(
c(outputFolderTemp, '/vsn_tests_normalization_Rmd_data_',
execLabel, '.txt'),
collapse='')
write.table(dataNormalized, tempOutput, sep="\t")
cat('',
'Specific test for VSN.',
'',
paste(
c('```{r applyAndReportVSN2', execLabel,
', echo=FALSE, fig.width=10, fig.height=6}'),
collapse=''),
'',
sep="\n", file=outputFile, append=TRUE
)
cat(
paste(
c('matrixdata <- as.matrix(read.table("',
tempOutput,
'", stringsAsFactors=FALSE))'),
collapse=''
),
'meanSdPlot(matrixdata)',
'',
'```',
'',
sep="\n", file=outputFile, append=TRUE
)
cat(' ',
'> ',
'> Please cite these articles whenever using results from this software in a publication :',
'> ',
'> Method article :',
'> Variance stabilization applied to microarray data calibration and to the quantification of differential expression. ',
'> Huber, W., Von Heydebreck, A., Sultmann, H., Poustka, A., & Vingron, M. (2002).',
'> Bioinformatics, 18(Suppl 1), S96-S104. ',
'> doi:10.1093/bioinformatics/18.suppl_1.S96',
'> ',
'> Software article (R package) :',
'> `r citation("vsn")$textVersion`',
'> ',
'> Usage for quantitative proteomics data suggested by :',
'> Karp, N. A., Huber, W., Sadowski, P. G., Charles, P. D., Hester, S. V, Lilley, K. S. (2010). ',
'> Addressing accuracy and precision issues in iTRAQ quantitation. ',
'> Molecular & cellular proteomics : MCP, 9(9), 1885-97. ',
'> doi:10.1074/mcp.M900628-MCP200',
' ',
'---------------------------------------------------------------------',
' ',
sep="\n", file=outputFile, append=TRUE)
return(dataNormalized)
}
|
#problem 13
#given a list of one hundred 50-digit numbers, find the first ten digits of their sum
mylist <- scan("p-013d.txt", what=" ")
#MAIN
print(mylist[1])
mymatr <- matrix(NA_real_, nrow = 100, ncol = 50)
for (row in 1:100){
trow <- strsplit(mylist[row], "")
for (col in 1:50){
mymatr[row,col] <- as.numeric(trow[[1]][col])
}
}
tsum <- 0
rem <- 0
bigsum <- c()
for (col in 50:2){
tsum <- rem
for (row in 1:100){
tsum <- tsum + mymatr[row,col]
}
dig <- tsum %% 10
bigsum <- c(bigsum,dig)
rem <- (tsum - dig)/10
}
tsum <- rem
for (row in 1:100){
tsum <- tsum + mymatr[row,1]
}
print(tsum)
print(rev(bigsum))
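# Toy check (hypothetical numbers) of the digit-column addition with
# carry used above: summing 58 + 67 one column at a time, units first.
units <- c(8, 7)
tens <- c(5, 6)
tsum_units <- sum(units)   # 15
dig <- tsum_units %% 10    # digit 5, carry 1
rem <- (tsum_units - dig) / 10
leading <- rem + sum(tens) # 12, so 58 + 67 = 125
print(c(leading, dig))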
| /r-lang/p-013.r | no_license | SplendidStrontium/project-euler | R | false | false | 604 | r | #problem 13
|
# Request: the most recent housing status and housing status update date
# for all clients who were active as of 5-15-2017. It is safe to assume
# that a client is still active if s/he does not have a subsequent inactive
# program status update.
# Installing some of the packages that we might or might not need.
# I wrote this 'installer' function some time ago to automate library
# installations. It checks if a library is installed and if yes, it will
# load it. If not, it will install it and then load it.
installer <- function(x){
for( i in x ){
if( !require( i , character.only = TRUE ) ){
# If not loading - install
install.packages( i , dependencies = TRUE )
# Load
require(i, character.only = TRUE )
}
}
}
installer( c("dplyr" , "data.table" , "RMySQL", 'lubridate', 'pbapply') )
# I loaded the data into a MySQL test database on AWS so I can show you
# that I can do both R and SQL, and also for convenience.
# Below, you will find 3 different solutions with the same results.
# 1. Straight SQL query
# 2. R with data pulled from SQL
# 3. R with data pulled from CSVs
# Note: If you are behind DOITT firewall, the AWS SQL connection might
# get blocked. You can either switch to a regular WIFI or just go straight
# to the 3rd solution (R from CSVs).
#SQL SOLUTION----------------------------------------------------
# Connecting to the DB
connection = dbConnect(drv = MySQL(), #specifying database type.
user = "test", # username
password = 'testtest', # password
host = 'nikitatest.ctxqlb5xlcju.us-east-2.rds.amazonaws.com', # address
port = 3306, # port
dbname = 'nikita99') # name of the database
# The query below selects the required fields from two derived
# tables that I join together using LEFT JOIN. The derived
# tables are aggregations that return the required statuses.
resultSQL <- dbGetQuery(connection,
"SELECT
program.client_id,
program.program_status,
housing.housing_status_date,
housing.housing_status
FROM
(SELECT
table1.client_id,
IFNULL(table1.program_status_date, 'missing_date') AS program_status_date,
table1.program_status
FROM nikita99.test_program_status as table1
INNER JOIN (SELECT
client_id,
MAX(IFNULL(program_status_date, 'missing_date')) AS lastDate
FROM nikita99.test_program_status
WHERE program_status_date <= '2017-05-15'
GROUP BY client_id) AS table2
ON table2.client_id = table1.client_id
AND table2.lastDate = IFNULL(table1.program_status_date, 'missing_date')
WHERE program_status IN ('missing_date','active')) AS program
LEFT JOIN
(SELECT
table3.client_id,
table3.housing_status_date,
IF(table3.housing_status = '', 'missing_status', table3.housing_status) AS housing_status
FROM nikita99.test_housing_status as table3
INNER JOIN (SELECT
client_id,
max(housing_status_date) AS lastDate
FROM nikita99.test_housing_status
GROUP BY client_id) AS table4
ON table4.client_id = table3.client_id
AND table4.lastDate = table3.housing_status_date)
AS housing
ON housing.client_id = program.client_id
")
# Printing the result
print(resultSQL)
##################################################################################
##################################################################################
#R SOLUTION----------------------------------------------------
# Pulling program status table
program <- dbGetQuery(connection,
"SELECT *
FROM test_program_status")
# Pulling housing status table
housing <- dbGetQuery(connection,
"SELECT *
FROM test_housing_status")
# Renaming faulty data
program <- setDT(program)[is.na(Program_status_date), Program_status_date:='0000-00-00']
# Using dplyr piping to find the last
# status by id before or including
# the requested date, and then keep it if it was 'active'.
program <- program %>%
dplyr::filter(Program_status_date <= '2017-05-15') %>%
dplyr::group_by(Client_ID) %>%
dplyr::arrange(Client_ID, desc(Program_status_date)) %>%
dplyr::top_n(1) %>%
dplyr::filter(Program_status == 'Active')
# Keeping the last housing status by id
housing <- housing %>%
dplyr::group_by(Client_ID) %>%
dplyr::arrange(Client_ID, desc(Housing_Status_Date)) %>%
dplyr::top_n(1)
# Renaming faulty data
housing <- setDT(housing)[Housing_Status == '', Housing_Status:='missing_status']
# Left joining housing statuses to program statuses to see
# the housing situation of the active ids. Keeping only relevant
# columns and arranging by date
resultR <- dplyr::left_join(program,housing)[,c(1,2,4,5)] %>%
dplyr::arrange(Housing_Status_Date)
# Printing
print(resultR)
# Disconnecting from the DB
dbDisconnect(connection)
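# Base-R sketch (toy data, hypothetical IDs) of the "most recent status
# per client as of 2017-05-15" step that all three solutions share.
toy <- data.frame(
  Client_ID = c(1, 1, 2, 2),
  Program_status_date = c("2017-05-01", "2017-05-10", "2017-04-01", "2017-06-01"),
  Program_status = c("Active", "Inactive", "Active", "Active"),
  stringsAsFactors = FALSE
)
# Keep rows on/before the cutoff, sort by id and date, take the last row
# per id, then keep only clients whose latest status is still Active.
cut <- toy[toy$Program_status_date <= "2017-05-15", ]
cut <- cut[order(cut$Client_ID, cut$Program_status_date), ]
last <- cut[!duplicated(cut$Client_ID, fromLast = TRUE), ]
active <- last[last$Program_status == "Active", ]
print(active$Client_ID) # 2 -- client 1's latest status by 5-15 is Inactive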
#####################################################################
#####################################################################
#R from CSVs-----------------------------------------------
# Pulling data from CSVs
housingCSV <- fread('test_housing_status.csv')
programCSV <- fread('test_program_status.csv')
# Dates are messed up. Transforming them to ymd format
programCSV$Program_status_date <- as.character(lubridate::mdy(programCSV$Program_status_date))
housingCSV$Housing_Status_Date <- as.character(lubridate::mdy(housingCSV$Housing_Status_Date))
# Renaming faulty data
programCSV <- setDT(programCSV)[is.na(Program_status_date), Program_status_date:='0000-00-00']
# Using dplyr piping to find the last
# status by id before or including
# the requested date, and then keep it if it was 'active'.
programCSV <- programCSV %>%
dplyr::filter(Program_status_date <= '2017-05-15') %>%
dplyr::group_by(Client_ID) %>%
dplyr::arrange(Client_ID, desc(Program_status_date)) %>%
dplyr::top_n(1) %>%
dplyr::filter(Program_status == 'Active')
# Keeping the last housing status by id (no date cutoff is applied here,
# matching the SQL solution).
housingCSV <- housingCSV %>%
dplyr::group_by(Client_ID) %>%
dplyr::arrange(Client_ID, desc(Housing_Status_Date)) %>%
dplyr::top_n(1)
# renaming empty strings into readable format
housingCSV <- setDT(housingCSV)[Housing_Status == '', Housing_Status:='missing_status']
# Left joining housing statuses to program statuses to see
# the housing situation of the active ids. Keeping only relevant
# columns and arranging by date
resultR_CSV <- dplyr::left_join(programCSV, housingCSV)[,c(1,2,4,5)] %>%
dplyr::arrange(Housing_Status_Date)
# Printing
print(resultR_CSV)
fwrite(resultR_CSV, 'final_result.csv')
| /raw_code.R | no_license | nvoevodin/da_test | R | false | false | 6,811 | r | # Request: the most recent housing status and housing status update date
|
#' Add Springer Books to Calibre
#'
#' @author Ivan Jacob Agaloos Pesigan
#' @inheritParams lib_remove_doi_http
#' @param dir Character string.
#' Directory where Springer Books are located.
#' @param calibre_dir Character string.
#' Directory where Calibre's Library Database is located.
#' @importFrom jeksterslabRutils util_clean_tempdir
#' @export
lib_springer_calibredb_add <- function(doi,
dir,
calibre_dir) {
springer_books_available <- system.file(
"extdata",
"springer_books_available.Rds",
package = "jeksterslabRlib",
mustWork = TRUE
)
springer_books_available <- readRDS(springer_books_available)
extract <- function(doi,
springer_books_available,
dir,
type = c("pdf", "epub")) {
doi <- lib_remove_doi_prefix(doi)
if (type == "pdf") {
available <- springer_books_available[["available_pdf"]]
}
if (type == "epub") {
available <- springer_books_available[["available_epub"]]
}
# remove dois not in available files
doi_trimmed <- rep(x = NA, times = length(doi))
for (i in seq_along(doi)) {
if (doi[i] %in% available[["doi_root"]]) {
doi_trimmed[i] <- doi[i]
} else {
doi_trimmed[i] <- NA
}
}
if (all(is.na(doi_trimmed))) {
stop("None of the DOIs provided are available.")
}
doi <- doi_trimmed[!is.na(doi_trimmed)]
doi <- as.data.frame(doi)
names(doi) <- "doi_root"
metadata <- merge(
x = doi,
y = available,
by = "doi_root"
)
list(
author = metadata[["Author"]],
title = metadata[["Book Title"]],
isbn = lib_remove_isbn_dashes(metadata[["Electronic ISBN"]]),
doi = metadata[["DOI URL"]],
doi_root = as.character(metadata[["doi_root"]]),
uri = metadata[["OpenURL"]],
series = metadata[["Series Title"]],
publisher = metadata[["Publisher"]]
)
}
pdf <- extract(
doi = doi,
springer_books_available = springer_books_available,
dir = dir,
type = "pdf"
)
epub <- extract(
doi = doi,
springer_books_available = springer_books_available,
dir = dir,
type = "epub"
)
exe <- function(author,
title,
isbn,
doi,
doi_root,
uri,
series,
publisher,
dir,
calibre_dir,
type) {
tmp_dir <- tempdir()
fn <- paste0(
doi_root,
".",
type
)
tmp_file <- paste0(
file.path(
tmp_dir,
fn
)
)
tmp_opf <- paste0(
tempfile(),
".opf"
)
file.copy(
file.path(
dir,
fn
),
tmp_dir
)
fetch <- "fetch-ebook-metadata"
meta <- "ebook-meta"
add <- "calibredb add"
to_opf <- paste(
"--opf >",
shQuote(tmp_opf)
)
from_opf <- paste(
"--from-opf",
shQuote(tmp_opf)
)
common_options <- paste(
"--authors",
shQuote(author),
"--title",
shQuote(title),
paste0(
"--identifier isbn:",
shQuote(isbn)
),
paste0(
"--identifier doi:",
shQuote(doi)
),
paste0(
"--identifier uri:",
shQuote(uri)
),
"--isbn",
shQuote(isbn)
)
if (is.na(series)) {
series <- " "
} else {
series <- paste(
"--series",
shQuote(series)
)
}
publisher <- paste(
"--publisher",
shQuote(publisher)
)
library_path <- paste(
"--library-path",
shQuote(calibre_dir)
)
system(
paste(
fetch,
common_options,
to_opf
),
ignore.stdout = FALSE,
ignore.stderr = FALSE
)
system(
paste(
meta,
tmp_file,
common_options,
from_opf
),
ignore.stdout = FALSE,
ignore.stderr = FALSE
)
system(
paste(
add,
library_path,
common_options,
series,
tmp_file
),
ignore.stdout = FALSE,
ignore.stderr = FALSE
)
}
# add epub first
invisible(
mapply(
FUN = exe,
author = epub[["author"]],
title = epub[["title"]],
isbn = epub[["isbn"]],
doi = epub[["doi"]],
doi_root = epub[["doi_root"]],
uri = epub[["uri"]],
series = epub[["series"]],
publisher = epub[["publisher"]],
dir = dir,
calibre_dir = calibre_dir,
type = "epub"
)
)
# if epub does not exist this adds pdf
invisible(
mapply(
FUN = exe,
author = pdf[["author"]],
title = pdf[["title"]],
isbn = pdf[["isbn"]],
doi = pdf[["doi"]],
doi_root = pdf[["doi_root"]],
uri = pdf[["uri"]],
series = pdf[["series"]],
publisher = pdf[["publisher"]],
dir = dir,
calibre_dir = calibre_dir,
type = "pdf"
)
)
util_clean_tempdir()
}
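# Sketch of the command-building pattern used above: base R's shQuote()
# protects metadata containing spaces, quotes, or shell metacharacters.
# (Toy values; ebook-meta is the Calibre CLI tool already invoked above.)
cmd <- paste(
  "ebook-meta",
  shQuote("/tmp/10.1007_sample.epub"),
  "--title", shQuote("A Book's Title"),
  "--authors", shQuote("Doe, Jane")
)
print(cmd)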
| /R/lib_springer_calibredb_add.R | permissive | jeksterslabds/jeksterslabRlib | R | false | false | 5,117 | r | #' Add Springer Books to Calibre
|
# if (!require(stringdist)) install.packages("stringdist")
# if (!require(PASWR)) install.packages("PASWR")
# if (!require(DescTools)) install.packages("DescTools")
# if (!require(RecordLinkage)) install.packages("RecordLinkage")
# library("RecordLinkage")
# library(DescTools)
# library (MASS)
# library(dplyr)
# library(stringdist)
# library(PASWR)
library(dplyr) # select(), %>% and sample_n() below come from dplyr

#Load data set
s<-getwd()
datapath1<-paste(s,"/Desktop/DSSG/gitscripper/DSSG-2018_Housing/results/Standardized_Deduped_Datasets/Louie_Clean_20180808.csv",sep = "")
datapath2<-paste(s,"/Desktop/DSSG/gitscripper/DSSG-2018_Housing/results/Standardized_Deduped_Datasets/June_Clean_20180808.csv",sep = "")
datapath3<-paste(s,"/Desktop/DSSG/gitscripper/DSSG-2018_Housing/results/Standardized_Deduped_Datasets/July_Clean_20180808.csv",sep = "")
dat1 <- read.csv(file=datapath1,header=T,stringsAsFactors = FALSE,na.strings = c("","NA"))
dat2 <- read.csv(file=datapath2,header=T,stringsAsFactors = FALSE,na.strings = c("","NA"))
dat3 <- read.csv(file=datapath3,header=T,stringsAsFactors = FALSE,na.strings = c("","NA"))
dat1 <- dat1 %>% select(-c(X,ID))
dat1$from <- 'Louie'
dat2 <- dat2 %>% select(-c(X,ID,inSurrey))
dat2$from <- 'June'
dat3 <- dat3 %>% select(-c(X,ID,inSurrey))
dat3$from <- 'July'
aggregated_dat <- rbind(dat1, dat2)
aggregated_dat <- rbind(aggregated_dat, dat3)
aggregated_dat$ID <- seq_len(nrow(aggregated_dat))
set.seed(1)
samples <- sample_n(aggregated_dat, 1000)
write.csv(samples, file = "/Users/hyeongcheolpark/Desktop/DSSG/gitscripper/DSSG-2018_Housing/results/Standardized_Deduped_Datasets/1000samples_20180809.csv")
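# Because set.seed(1) is called immediately before sample_n(), rerunning
# this script reproduces the same 1000 rows. The same guarantee in base R:
set.seed(1)
first_draw <- sample(1e4, 1000)
set.seed(1)
second_draw <- sample(1e4, 1000)
identical(first_draw, second_draw) # TRUE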
| /Rcode/sampling_1000_June_July.R | no_license | jocelynjyl/housing | R | false | false | 1,578 | r | # if (!require(stringdist)) install.packages("stringdist")
|
# read in data
data <- read.csv("C:/Users/Kevin/Desktop/Assignments/Graduate/EE239AS/P1/network_backup_dataset.csv")
data2 <- read.csv("C:/Users/Kevin/Desktop/Assignments/Graduate/EE239AS/P1/network_backup_dataset.csv")
data3 <- read.csv("C:/Users/Kevin/Desktop/Assignments/Graduate/EE239AS/P1/network_backup_dataset.csv")
# Part 1 ------------------------------------------------------------------
day = NULL
day.counter = 1
for (k in 1:18588) {
if (k > 1) {
if (data[k,]$Day.of.Week != data[k-1,]$Day.of.Week)
day.counter = day.counter+1
}
day[k] = day.counter
}
data$day <- day
d20 = data[which(data$day <= 20),]
d20_wf0 = d20[which(d20$Work.Flow.ID == 'work_flow_0'),]
d20_wf1 = d20[which(d20$Work.Flow.ID == 'work_flow_1'),]
d20_wf2 = d20[which(d20$Work.Flow.ID == 'work_flow_2'),]
d20_wf3 = d20[which(d20$Work.Flow.ID == 'work_flow_3'),]
d20_wf4 = d20[which(d20$Work.Flow.ID == 'work_flow_4'),]
# size0 = d20$Size.of.Backup..GB.[which(d20$Work.Flow.ID == 'work_flow_0')]
# size1 = d20$Size.of.Backup..GB.[which(d20$Work.Flow.ID == 'work_flow_1')]
# size2 = d20$Size.of.Backup..GB.[which(d20$Work.Flow.ID == 'work_flow_2')]
# size3 = d20$Size.of.Backup..GB.[which(d20$Work.Flow.ID == 'work_flow_3')]
# size4 = d20$Size.of.Backup..GB.[which(d20$Work.Flow.ID == 'work_flow_4')]
plot(d20_wf0$day,d20_wf0$Size.of.Backup..GB.,main='Work Flow 0',xlab='Day',ylab='Size of Backup (GB)')
plot(d20_wf1$day,d20_wf1$Size.of.Backup..GB.,main='Work Flow 1',xlab='Day',ylab='Size of Backup (GB)')
plot(d20_wf2$day,d20_wf2$Size.of.Backup..GB.,main='Work Flow 2',xlab='Day',ylab='Size of Backup (GB)')
plot(d20_wf3$day,d20_wf3$Size.of.Backup..GB.,main='Work Flow 3',xlab='Day',ylab='Size of Backup (GB)')
plot(d20_wf4$day,d20_wf4$Size.of.Backup..GB.,main='Work Flow 4',xlab='Day',ylab='Size of Backup (GB)')
# Part 2a ------------------------------------------------------------------
fit1 = lm(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+Work.Flow.ID+File.Name+Backup.Time..hour.,data=data)
summary(fit1)
tr_sz = 16729
set.seed(111) #change seed 10 times for different training sets
training = sample(seq_len(nrow(data)),size=tr_sz)
train = data[training,]
test = data[-training,]
# fit_tt = lm(train$Size.of.Backup..GB~train$Week.+train$Day.of.Week+train$Backup.Start.Time...Hour.of.Day+train$Work.Flow.ID+train$File.Name+train$Backup.Time..hour.)
fit_tt = lm(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+Work.Flow.ID+File.Name+Backup.Time..hour.,data=train)
summary(fit_tt)
test_res = predict.lm(fit_tt,test)
plot(fit_tt)
plot(test$day,test$Size.of.Backup..GB.,main='Fitted vs. Actual',xlab='Day',ylab='Size of Backup (GB)')
par(new=TRUE)
plot(test$day,test_res,col='blue', ann=FALSE, axes=FALSE)
RMSE = sqrt(sum((test$Size.of.Backup..GB.-test_res)^2)/nrow(test))
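Each part of this script recomputes RMSE by hand with the test-set size in the denominator; a small helper (a sketch, not in the original script) keeps the formula in one place and lets mean() supply the denominator automatically.

```r
# Root-mean-squared error; mean() divides by length(actual), so the test-set
# size never needs to be hard-coded.
rmse <- function(actual, predicted) sqrt(mean((actual - predicted)^2))
# usage: rmse(test$Size.of.Backup..GB., test_res)
```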
# Part 2b -----------------------------------------------------------------
library(randomForest)
rf1 = randomForest(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+Work.Flow.ID+File.Name+Backup.Time..hour.,data=data2,type='regression',ntree=20,mtry=6)
print(rf1)
set.seed(111) #change seed 10 times for different training sets
training_2b = sample(seq_len(nrow(data2)),size=tr_sz)
train_2b = data2[training_2b,]
test_2b = data2[-training_2b,]
rf1_tt = randomForest(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+Work.Flow.ID+File.Name+Backup.Time..hour.,data=train_2b,type='regression',ntree=20,mtry=6)
print(rf1_tt)
test_rf = predict(rf1_tt,test_2b,type='response')
RMSE2 = sqrt(sum((test_2b$Size.of.Backup..GB.-test_rf)^2)/nrow(test_2b))
rf2 = randomForest(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+Work.Flow.ID+File.Name+Backup.Time..hour.,data=data2,type='regression',ntree=20,mtry=3)
print(rf2)
set.seed(111) #change seed 10 times for different training sets
training = sample(seq_len(nrow(data)),size=tr_sz)
train = data[training,]
test = data[-training,]
rf2_tt = randomForest(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+Work.Flow.ID+File.Name+Backup.Time..hour.,data=train,type='regression',ntree=20,mtry=3)
test_rf2 = predict(rf2_tt,test,type='response')
plot(test$day,test_rf2,main='Fitted Output of Test',xlab='Day',ylab='Size of Backup (GB)')
# Part 2c -----------------------------------------------------------------
library(neuralnet)
#set up training set
tr_sz = 16729
set.seed(111) #change seed 10 times for different training sets
training = sample(seq_len(nrow(data)),size=tr_sz)
train = data[training,]
test = data[-training,]
dm = model.matrix(~Size.of.Backup..GB. + Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+Work.Flow.ID+File.Name+Backup.Time..hour.,data=train)
nn1 = neuralnet(Size.of.Backup..GB. ~ Week..+ Day.of.WeekMonday + Day.of.WeekTuesday + Day.of.WeekWednesday + Day.of.WeekThursday + Day.of.WeekSaturday + Day.of.WeekSunday + Backup.Start.Time...Hour.of.Day +Work.Flow.IDwork_flow_1+Work.Flow.IDwork_flow_2+Work.Flow.IDwork_flow_3+Work.Flow.IDwork_flow_4 +File.NameFile_1 +File.NameFile_2 +File.NameFile_3 +File.NameFile_4 +File.NameFile_5 +File.NameFile_6 +File.NameFile_7 +File.NameFile_8 +File.NameFile_9 +File.NameFile_10 +File.NameFile_11 +File.NameFile_12 +File.NameFile_13 +File.NameFile_14 +File.NameFile_15 +File.NameFile_16 +File.NameFile_17 +File.NameFile_18 +File.NameFile_19 +File.NameFile_20 +File.NameFile_21 +File.NameFile_22 +File.NameFile_23 +File.NameFile_24 +File.NameFile_25 +File.NameFile_26 +File.NameFile_27 +File.NameFile_28 +File.NameFile_29+Backup.Time..hour.,data=dm)
print(nn1)
plot(nn1)
#dt = model.matrix(~Size.of.Backup..GB. + Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+Work.Flow.ID+File.Name+Backup.Time..hour.,data=test)
#test_nn = compute(nn1,dt)
# Part 3-1 ------------------------------------------------------------------
d_wf0 = data[which(data$Work.Flow.ID == 'work_flow_0'),]
d_wf1 = data[which(data$Work.Flow.ID == 'work_flow_1'),]
d_wf2 = data[which(data$Work.Flow.ID == 'work_flow_2'),]
d_wf3 = data[which(data$Work.Flow.ID == 'work_flow_3'),]
d_wf4 = data[which(data$Work.Flow.ID == 'work_flow_4'),]
tr_sz_0 = floor(nrow(d_wf0)*.9)
tr_sz_1 = floor(nrow(d_wf1)*.9)
tr_sz_2 = floor(nrow(d_wf2)*.9)
tr_sz_3 = floor(nrow(d_wf3)*.9)
tr_sz_4 = floor(nrow(d_wf4)*.9)
set.seed(111) #change seed 10 times for different training sets
training0 = sample(seq_len(nrow(d_wf0)),size=tr_sz_0)
training1 = sample(seq_len(nrow(d_wf1)),size=tr_sz_1)
training2 = sample(seq_len(nrow(d_wf2)),size=tr_sz_2)
training3 = sample(seq_len(nrow(d_wf3)),size=tr_sz_3)
training4 = sample(seq_len(nrow(d_wf4)),size=tr_sz_4)
train0 = d_wf0[training0,]
train1 = d_wf1[training1,]
train2 = d_wf2[training2,]
train3 = d_wf3[training3,]
train4 = d_wf4[training4,]
test0 = d_wf0[-training0,]
test1 = d_wf1[-training1,]
test2 = d_wf2[-training2,]
test3 = d_wf3[-training3,]
test4 = d_wf4[-training4,]
fit_wf0 = lm(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+File.Name+Backup.Time..hour.,data=train0)
fit_wf1 = lm(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+File.Name+Backup.Time..hour.,data=train1)
fit_wf2 = lm(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+File.Name+Backup.Time..hour.,data=train2)
fit_wf3 = lm(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+File.Name+Backup.Time..hour.,data=train3)
fit_wf4 = lm(Size.of.Backup..GB. ~ Week..+Day.of.Week+Backup.Start.Time...Hour.of.Day+File.Name+Backup.Time..hour.,data=train4)
summary(fit_wf0)
summary(fit_wf1)
summary(fit_wf2)
summary(fit_wf3)
summary(fit_wf4)
test_res0 = predict.lm(fit_wf0,test0)
test_res1 = predict.lm(fit_wf1,test1)
test_res2 = predict.lm(fit_wf2,test2)
test_res3 = predict.lm(fit_wf3,test3)
test_res4 = predict.lm(fit_wf4,test4)
RMSE0 = sqrt(sum((test0$Size.of.Backup..GB.-test_res0)^2)/nrow(test0))
RMSE1 = sqrt(sum((test1$Size.of.Backup..GB.-test_res1)^2)/nrow(test1))
RMSE2 = sqrt(sum((test2$Size.of.Backup..GB.-test_res2)^2)/nrow(test2))
RMSE3 = sqrt(sum((test3$Size.of.Backup..GB.-test_res3)^2)/nrow(test3))
RMSE4 = sqrt(sum((test4$Size.of.Backup..GB.-test_res4)^2)/nrow(test4))
# Part 3-2 fixed----------------------------------------------------------------
tr_sz = 16729
set.seed(1111) #change seed 10 times for different training sets
training = sample(seq_len(nrow(data)),size=tr_sz)
train = data[training,]
test = data[-training,]
degree = 1 #adjust degree for polynomials
RMSE_FIX = NULL
max_deg = 80 #max degree without overflow error
for (degree in 1:max_deg) {
SoB = train$Size.of.Backup..GB.
Week = train$Week..
DoW = as.numeric(train$Day.of.Week)
BST = train$Backup.Start.Time...Hour.of.Day
WF = as.numeric(train$Work.Flow.ID)
FN = as.numeric(train$File.Name)
BTH = train$Backup.Time..hour.
fit_poly = lm(SoB ~ poly(Week,degree,raw=TRUE)+poly(DoW,degree,raw=TRUE)+poly(BST,degree,raw=TRUE)+poly(WF,degree,raw=TRUE)+poly(FN,degree,raw=TRUE)+poly(BTH,degree,raw=TRUE))
SoB = test$Size.of.Backup..GB.
Week = test$Week..
DoW = as.numeric(test$Day.of.Week)
BST = test$Backup.Start.Time...Hour.of.Day
WF = as.numeric(test$Work.Flow.ID)
FN = as.numeric(test$File.Name)
BTH = test$Backup.Time..hour.
test_res = predict.lm(fit_poly,test)
RMSE_FIX[degree] = sqrt(sum((test$Size.of.Backup..GB.-test_res)^2)/nrow(test))
}
plot(RMSE_FIX,xlab='degree',ylab='RMSE',main='RMSE for Fixed Sets')
# Part 3-2 CV -------------------------------------------------------------
tr_sz = 16729
set.seed(1111) #change seed 10 times for different training sets
degree = 1 #adjust degree for polynomials
RMSE_CV = NULL
RMSE_TEMP = NULL
max_deg = 20 #max degree without overflow error
l = 1
for (degree in 1:max_deg) {
for (l in 1:10) {
training = sample(seq_len(nrow(data)),size=tr_sz)
train = data[training,]
test = data[-training,]
SoB = train$Size.of.Backup..GB.
Week = train$Week..
DoW = as.numeric(train$Day.of.Week)
BST = train$Backup.Start.Time...Hour.of.Day
WF = as.numeric(train$Work.Flow.ID)
FN = as.numeric(train$File.Name)
BTH = train$Backup.Time..hour.
fit_poly = lm(SoB ~ poly(Week,degree,raw=TRUE)+poly(DoW,degree,raw=TRUE)+poly(BST,degree,raw=TRUE)+poly(WF,degree,raw=TRUE)+poly(FN,degree,raw=TRUE)+poly(BTH,degree,raw=TRUE))
SoB = test$Size.of.Backup..GB.
Week = test$Week..
DoW = as.numeric(test$Day.of.Week)
BST = test$Backup.Start.Time...Hour.of.Day
WF = as.numeric(test$Work.Flow.ID)
FN = as.numeric(test$File.Name)
BTH = test$Backup.Time..hour.
test_res = predict.lm(fit_poly,test)
RMSE_TEMP[l] = sqrt(sum((test$Size.of.Backup..GB.-test_res)^2)/nrow(test))
}
RMSE_CV[degree] = mean(RMSE_TEMP)
RMSE_TEMP=NULL
}
plot(RMSE_CV,xlab='degree',ylab='RMSE',main='RMSE for 10-Fold Cross Validation')
############################################
### Common database functions
############################################
sqlQuote = function(x) {
sprintf("'%s'", x)
}
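sqlQuote() only wraps the value in single quotes; it does not escape quotes embedded in the value, which is acceptable here because the quoted strings are internally generated batch ids. A defensive variant (a sketch, not part of this file) would double embedded quotes per the SQL standard:

```r
# SQL-standard escaping: double any embedded single quote before wrapping.
sqlQuoteSafe = function(x) {
  sprintf("'%s'", gsub("'", "''", x, fixed = TRUE))
}
```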
dbGetConnection = function(drv, reg, ...) {
# method dispatch to support different DBMS
UseMethod("dbGetConnection")
}
dbGetConnection.SQLiteDriver = function(drv, reg, flags = "ro", ...) {
flags = switch(flags, "ro" = SQLITE_RO, "rw" = SQLITE_RW, "rwc" = SQLITE_RWC)
opts = list(dbname = file.path(reg$file.dir, "BatchJobs.db"), flags = flags, drv = drv)
con = do.call(dbConnect, args = c(dropNamed(reg$db.options, "pragmas"), opts))
for (pragma in reg$db.options$pragmas)
dbClearResult(dbSendQuery(con, sprintf("PRAGMA %s", pragma)))
return(con)
}
dbConnectToJobsDB = function(reg, flags = "ro") {
drv = do.call(reg$db.driver, list())
dbGetConnection(drv, reg, flags)
}
dbDoQueries = function(reg, queries, flags = "ro", max.retries = 100L, sleep = function(r) 1.025^r) {
for (i in seq_len(max.retries)) {
con = try(dbConnectToJobsDB(reg, flags), silent = TRUE)
if (is.error(con)) {
if (!grepl("(lock|i/o|readonly)", tolower(con)))
stopf("Error while establishing the connection: %s", as.character(con))
} else {
ok = try({
dbBegin(con)
ress = lapply(queries, dbGetQuery, con = con)
}, silent = TRUE)
if (!is.error(ok)) {
# this can fail because DB is locked
ok2 = dbCommit(con)
if (ok2) {
dbDisconnect(con)
return(ress)
} else {
dbRollback(con)
dbDisconnect(con)
}
} else {
ok = as.character(ok)
dbRollback(con)
dbDisconnect(con)
# catch known temporary errors:
# - database is still locked
# - disk I/O error
# - database is only readable
if(!grepl("(lock|i/o|readonly)", tolower(ok)))
stopf("Error in dbDoQueries. Displaying only 1st query. %s (%s)", ok, queries[1L])
}
}
# if we reach this here, DB was locked or temporary I/O error
Sys.sleep(runif(1L, min = 1, max = sleep(i)))
}
stopf("dbDoQueries: max retries (%i) reached, database is still locked!", max.retries)
}
dbDoQuery = function(reg, query, flags = "ro", max.retries = 100L, sleep = function(r) 1.025^r) {
for (i in seq_len(max.retries)) {
con = try(dbConnectToJobsDB(reg, flags), silent = TRUE)
if (is.error(con)) {
if (!grepl("(lock|i/o|readonly)", tolower(con)))
stopf("Error while establishing the connection: %s", as.character(con))
} else {
res = try(dbGetQuery(con, query), silent = TRUE)
dbDisconnect(con)
if (!is.error(res))
return(res)
res = as.character(res)
if(!grepl("(lock|i/o|readonly)", tolower(res))) {
stopf("Error in dbDoQuery. %s (%s)", res, query)
}
}
# if we reach this here, DB was locked or temporary I/O error
Sys.sleep(runif(1L, min = 1, max = sleep(i)))
}
stopf("dbDoQuery: max retries (%i) reached, database is still locked!", max.retries)
}
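Both retry loops sleep a random amount in [1, sleep(i)] seconds between attempts, where the default sleep(r) = 1.025^r grows geometrically with the retry count — a mild exponential backoff. A quick sketch of the default curve:

```r
# Default backoff curve used by dbDoQuery/dbDoQueries: the upper bound of the
# random wait grows from about 1 s on early retries to roughly 12 s by
# retry 100; the actual wait is runif(1, min = 1, max = sleep(i)) seconds.
sleep = function(r) 1.025^r
upper = sleep(c(1, 25, 50, 100))
```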
dbAddData = function(reg, tab, data) {
query = sprintf("INSERT INTO %s_%s (%s) VALUES(%s)", reg$id, tab,
collapse(colnames(data)), collapse(rep.int("?", ncol(data))))
con = dbConnectToJobsDB(reg, flags = "rw")
on.exit(dbDisconnect(con))
dbBegin(con)
ok = try(dbGetPreparedQuery(con, query, bind.data = data))
if(is.error(ok)) {
dbRollback(con)
stopf("Error in dbAddData: %s", as.character(ok))
}
dbCommit(con)
as.integer(dbGetQuery(con, "SELECT total_changes()"))
}
dbSelectWithIds = function(reg, query, ids, where = TRUE, group.by, limit = NULL, reorder = TRUE) {
if(!missing(ids))
query = sprintf("%s %s job_id IN (%s)", query, ifelse(where, "WHERE", "AND"), collapse(ids))
if(!missing(group.by))
query = sprintf("%s GROUP BY %s", query, collapse(group.by))
if(!is.null(limit))
query = sprintf("%s LIMIT %i", query, limit)
res = dbDoQuery(reg, query)
if(missing(ids) || !reorder)
return(res)
return(res[na.omit(match(ids, res$job_id)),, drop = FALSE])
}
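dbSelectWithIds() assembles the final SQL by appending optional clauses to a base query. A sketch of that assembly with hypothetical values (collapse() stands in for the comma-join helper this file relies on):

```r
collapse = function(x, sep = ",") paste(x, collapse = sep)  # BBmisc-style helper
query = "SELECT job_id, error FROM myreg_job_status"          # base query
query = sprintf("%s WHERE job_id IN (%s)", query, collapse(c(1L, 2L, 3L)))
query = sprintf("%s LIMIT %i", query, 10L)
```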
############################################
### CREATE
############################################
#' ONLY FOR INTERNAL USAGE.
#' @param reg [\code{\link{Registry}}]\cr
#' Registry.
#' @return Nothing.
#' @keywords internal
#' @export
dbCreateJobDefTable = function(reg) {
UseMethod("dbCreateJobDefTable")
}
#' @export
dbCreateJobDefTable.Registry = function(reg) {
query = sprintf("CREATE TABLE %s_job_def (job_def_id INTEGER PRIMARY KEY, fun_id TEXT, pars TEXT, jobname TEXT)", reg$id)
dbDoQuery(reg, query, flags = "rwc")
dbCreateExpandedJobsView(reg)
}
dbCreateJobStatusTable = function(reg, extra.cols = "", constraints = "") {
query = sprintf(paste("CREATE TABLE %s_job_status (job_id INTEGER PRIMARY KEY, job_def_id INTEGER,",
"first_job_in_chunk_id INTEGER, seed INTEGER, resources_timestamp INTEGER, memory REAL, submitted INTEGER,",
"started INTEGER, batch_job_id TEXT, node TEXT, r_pid INTEGER,",
"done INTEGER, error TEXT %s %s)"), reg$id, extra.cols, constraints)
dbDoQuery(reg, query, flags = "rwc")
query = sprintf("CREATE INDEX job_def_id ON %s_job_status(job_def_id)", reg$id)
dbDoQuery(reg, query, flags = "rw")
return(invisible(TRUE))
}
dbCreateExpandedJobsView = function(reg) {
query = sprintf("CREATE VIEW %1$s_expanded_jobs AS SELECT * FROM %1$s_job_status LEFT JOIN %1$s_job_def USING(job_def_id)", reg$id)
dbDoQuery(reg, query, flags = "rw")
}
############################################
### SELECT
############################################
#' ONLY FOR INTERNAL USAGE.
#' @param reg [\code{\link{Registry}}]\cr
#' Registry.
#' @param ids [\code{integer}]\cr
#' Ids of selected jobs.
#' @return [list of \code{\link{Job}}]. Retrieved jobs from DB.
#' @keywords internal
#' @export
dbGetJobs = function(reg, ids) {
UseMethod("dbGetJobs")
}
# note that this does not load the job function from disk to increase speed
#' @method dbGetJobs Registry
#' @export
dbGetJobs.Registry = function(reg, ids) {
query = sprintf("SELECT job_id, fun_id, pars, jobname, seed FROM %s_expanded_jobs", reg$id)
tab = dbSelectWithIds(reg, query, ids)
lapply(seq_row(tab), function(i) {
makeJob(id = tab$job_id[i],
fun.id = tab$fun_id[i],
fun = NULL,
pars = unserialize(charToRaw(tab$pars[i])),
name = tab$jobname[i],
seed = tab$seed[i])
})
}
dbGetExpandedJobsTable = function(reg, ids, cols = "*") {
# Note: job_id must be in cols!
query = sprintf("SELECT %s FROM %s_expanded_jobs", collapse(cols), reg$id)
tab = dbSelectWithIds(reg, query, ids)
setRowNames(tab, tab$job_id)
}
dbGetJobStatusTable = function(reg, ids, cols = "*") {
# Note: job_id must be in cols!
query = sprintf("SELECT %s FROM %s_job_status", collapse(cols), reg$id)
tab = dbSelectWithIds(reg, query, ids)
setRowNames(tab, tab$job_id)
}
dbGetJobCount = function(reg) {
query = sprintf("SELECT COUNT(*) AS count FROM %s_job_status", reg$id)
dbDoQuery(reg, query)$count
}
dbGetJobId = function(reg) {
query = sprintf("SELECT job_id FROM %s_job_status LIMIT 1", reg$id)
as.integer(dbDoQuery(reg, query)$job_id)
}
dbGetJobIds = function(reg) {
query = sprintf("SELECT job_id FROM %s_job_status", reg$id)
dbDoQuery(reg, query)$job_id
}
dbCheckJobIds = function(reg, ids) {
not.found = setdiff(ids, dbGetJobIds(reg))
if (length(not.found) > 0L)
stopf("Ids not present in registry: %s", collapse(not.found))
}
dbGetJobIdsIfAllDone = function(reg) {
query = sprintf("SELECT job_id, done FROM %s_job_status", reg$id)
res = dbDoQuery(reg, query)
if (all(! is.na(res$done)))
return(res$job_id)
stop("Not all jobs finished (yet)!")
}
dbGetLastAddedIds = function(reg, tab, id.col, n) {
query = sprintf("SELECT %s AS id_col FROM %s_%s ORDER BY %s DESC LIMIT %i",
id.col, reg$id, tab, id.col, n)
rev(dbDoQuery(reg, query)$id_col)
}
dbFindDone = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (done IS NOT NULL)", reg$id, if(negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindErrors = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (error IS NOT NULL)", reg$id, if(negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindTerminated = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (done IS NOT NULL OR error IS NOT NULL)", reg$id, if(negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindSubmitted = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (submitted IS NOT NULL)", reg$id, if (negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindStarted = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (started IS NOT NULL)", reg$id, if (negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindOnSystem = function(reg, ids, negate = FALSE, limit = NULL, batch.ids) {
if (missing(batch.ids))
batch.ids = getBatchIds(reg, "Cannot find jobs on system")
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (batch_job_id IN (%s))",
reg$id, if (negate) "NOT" else "", collapse(sqlQuote(batch.ids)))
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindSubmittedNotTerminated = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (submitted IS NOT NULL AND done IS NULL AND error IS NULL)",
reg$id, if (negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindRunning = function(reg, ids, negate = FALSE, limit = NULL, batch.ids) {
if (missing(batch.ids))
batch.ids = getBatchIds(reg, "Cannot find jobs on system")
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (batch_job_id IN (%s) AND started IS NOT NULL AND done IS NULL AND error IS NULL)",
reg$id, if (negate) "NOT" else "", collapse(sqlQuote(batch.ids)))
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindExpiredJobs = function(reg, ids, negate = FALSE, limit = NULL, batch.ids) {
if (missing(batch.ids))
batch.ids = getBatchIds(reg, "Cannot find jobs on system")
# started, not terminated, not running
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (started IS NOT NULL AND done IS NULL AND error is NULL AND
batch_job_id NOT IN (%s))", reg$id, if (negate) "NOT" else "", collapse(sqlQuote(batch.ids)))
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
| /BatchJobs/R/database.R | no_license | ingted/R-Examples | R | false | false | 18,605 | r |
############################################
### Common database functions
############################################
sqlQuote = function(x) {
sprintf("'%s'", x)
}
dbGetConnection = function(drv, reg, ...) {
# method dispatch to support different DBMS
UseMethod("dbGetConnection")
}
dbGetConnection.SQLiteDriver = function(drv, reg, flags = "ro", ...) {
flags = switch(flags, "ro" = SQLITE_RO, "rw" = SQLITE_RW, "rwc" = SQLITE_RWC)
opts = list(dbname = file.path(reg$file.dir, "BatchJobs.db"), flags = flags, drv = drv)
con = do.call(dbConnect, args = c(dropNamed(reg$db.options, "pragmas"), opts))
for (pragma in reg$db.options$pragmas)
dbClearResult(dbSendQuery(con, sprintf("PRAGMA %s", pragma)))
return(con)
}
dbConnectToJobsDB = function(reg, flags = "ro") {
drv = do.call(reg$db.driver, list())
dbGetConnection(drv, reg, flags)
}
dbDoQueries = function(reg, queries, flags = "ro", max.retries = 100L, sleep = function(r) 1.025^r) {
for (i in seq_len(max.retries)) {
con = try(dbConnectToJobsDB(reg, flags), silent = TRUE)
if (is.error(con)) {
if (!grepl("(lock|i/o|readonly)", tolower(con)))
stopf("Error while establishing the connection: %s", as.character(con))
} else {
ok = try ({
dbBegin(con)
ress = lapply(queries, dbGetQuery, con = con)
}, silent = TRUE)
if (!is.error(ok)) {
# this can fail because DB is locked
ok2 = dbCommit(con)
if (ok2) {
dbDisconnect(con)
return(ress)
} else {
dbRollback(con)
dbDisconnect(con)
}
} else {
ok = as.character(ok)
dbRollback(con)
dbDisconnect(con)
# catch known temporary errors:
# - database is still locked
# - disk I/O error
# - database is only readable
if(!grepl("(lock|i/o|readonly)", tolower(ok)))
stopf("Error in dbDoQueries. Displaying only 1st query. %s (%s)", ok, queries[1L])
}
}
# if we reach this here, DB was locked or temporary I/O error
Sys.sleep(runif(1L, min = 1, max = sleep(i)))
}
stopf("dbDoQueries: max retries (%i) reached, database is still locked!", max.retries)
}
dbDoQuery = function(reg, query, flags = "ro", max.retries = 100L, sleep = function(r) 1.025^r) {
for (i in seq_len(max.retries)) {
con = try(dbConnectToJobsDB(reg, flags), silent = TRUE)
if (is.error(con)) {
if (!grepl("(lock|i/o|readonly)", tolower(con)))
stopf("Error while establishing the connection: %s", as.character(con))
} else {
res = try(dbGetQuery(con, query), silent = TRUE)
dbDisconnect(con)
if (!is.error(res))
return(res)
res = as.character(res)
if(!grepl("(lock|i/o|readonly)", tolower(res))) {
stopf("Error in dbDoQuery. %s (%s)", res, query)
}
}
# if we reach this here, DB was locked or temporary I/O error
Sys.sleep(runif(1L, min = 1, max = sleep(i)))
}
stopf("dbDoQuery: max retries (%i) reached, database is still locked!", max.retries)
}
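Both retry loops above draw their sleep from `runif(1, min = 1, max = sleep(i))`, where the default cap `sleep(r) = 1.025^r` grows geometrically with the retry count. A standalone look at that schedule:

```r
# Default backoff cap used by dbDoQuery()/dbDoQueries(): on retry r the
# process sleeps a uniform draw from [1, 1.025^r] seconds, so early retries
# stay near one second while retry 100 is capped at roughly 11.8 seconds.
sleep <- function(r) 1.025^r
caps <- sapply(c(1L, 10L, 50L, 100L), sleep)
# Actual per-retry sleep: Sys.sleep(runif(1L, min = 1, max = sleep(i)))
```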
dbAddData = function(reg, tab, data) {
query = sprintf("INSERT INTO %s_%s (%s) VALUES(%s)", reg$id, tab,
collapse(colnames(data)), collapse(rep.int("?", ncol(data))))
con = dbConnectToJobsDB(reg, flags = "rw")
on.exit(dbDisconnect(con))
dbBegin(con)
ok = try(dbGetPreparedQuery(con, query, bind.data = data))
if(is.error(ok)) {
dbRollback(con)
stopf("Error in dbAddData: %s", as.character(ok))
}
dbCommit(con)
as.integer(dbGetQuery(con, "SELECT total_changes()"))
}
dbSelectWithIds = function(reg, query, ids, where = TRUE, group.by, limit = NULL, reorder = TRUE) {
if(!missing(ids))
query = sprintf("%s %s job_id IN (%s)", query, ifelse(where, "WHERE", "AND"), collapse(ids))
if(!missing(group.by))
query = sprintf("%s GROUP BY %s", query, collapse(group.by))
if(!is.null(limit))
query = sprintf("%s LIMIT %i", query, limit)
res = dbDoQuery(reg, query)
if(missing(ids) || !reorder)
return(res)
return(res[na.omit(match(ids, res$job_id)),, drop = FALSE])
}
############################################
### CREATE
############################################
#' ONLY FOR INTERNAL USAGE.
#' @param reg [\code{\link{Registry}}]\cr
#' Registry.
#' @return Nothing.
#' @keywords internal
#' @export
dbCreateJobDefTable = function(reg) {
UseMethod("dbCreateJobDefTable")
}
#' @export
dbCreateJobDefTable.Registry = function(reg) {
query = sprintf("CREATE TABLE %s_job_def (job_def_id INTEGER PRIMARY KEY, fun_id TEXT, pars TEXT, jobname TEXT)", reg$id)
dbDoQuery(reg, query, flags = "rwc")
dbCreateExpandedJobsView(reg)
}
dbCreateJobStatusTable = function(reg, extra.cols = "", constraints = "") {
query = sprintf(paste("CREATE TABLE %s_job_status (job_id INTEGER PRIMARY KEY, job_def_id INTEGER,",
"first_job_in_chunk_id INTEGER, seed INTEGER, resources_timestamp INTEGER, memory REAL, submitted INTEGER,",
"started INTEGER, batch_job_id TEXT, node TEXT, r_pid INTEGER,",
"done INTEGER, error TEXT %s %s)"), reg$id, extra.cols, constraints)
dbDoQuery(reg, query, flags = "rwc")
query = sprintf("CREATE INDEX job_def_id ON %s_job_status(job_def_id)", reg$id)
dbDoQuery(reg, query, flags = "rw")
return(invisible(TRUE))
}
dbCreateExpandedJobsView = function(reg) {
query = sprintf("CREATE VIEW %1$s_expanded_jobs AS SELECT * FROM %1$s_job_status LEFT JOIN %1$s_job_def USING(job_def_id)", reg$id)
dbDoQuery(reg, query, flags = "rw")
}
############################################
### SELECT
############################################
#' ONLY FOR INTERNAL USAGE.
#' @param reg [\code{\link{Registry}}]\cr
#' Registry.
#' @param ids [\code{integer}]\cr
#' Ids of selected jobs.
#' @return [list of \code{\link{Job}}]. Retrieved jobs from DB.
#' @keywords internal
#' @export
dbGetJobs = function(reg, ids) {
UseMethod("dbGetJobs")
}
# note that this does not load the job function from disk to increase speed
#' @method dbGetJobs Registry
#' @export
dbGetJobs.Registry = function(reg, ids) {
query = sprintf("SELECT job_id, fun_id, pars, jobname, seed FROM %s_expanded_jobs", reg$id)
tab = dbSelectWithIds(reg, query, ids)
lapply(seq_row(tab), function(i) {
makeJob(id = tab$job_id[i],
fun.id = tab$fun_id[i],
fun = NULL,
pars = unserialize(charToRaw(tab$pars[i])),
name = tab$jobname[i],
seed = tab$seed[i])
})
}
dbGetExpandedJobsTable = function(reg, ids, cols = "*") {
# Note: job_id must be in cols!
query = sprintf("SELECT %s FROM %s_expanded_jobs", collapse(cols), reg$id)
tab = dbSelectWithIds(reg, query, ids)
setRowNames(tab, tab$job_id)
}
dbGetJobStatusTable = function(reg, ids, cols = "*") {
# Note: job_id must be in cols!
query = sprintf("SELECT %s FROM %s_job_status", collapse(cols), reg$id)
tab = dbSelectWithIds(reg, query, ids)
setRowNames(tab, tab$job_id)
}
dbGetJobCount = function(reg) {
query = sprintf("SELECT COUNT(*) AS count FROM %s_job_status", reg$id)
dbDoQuery(reg, query)$count
}
dbGetJobId = function(reg) {
query = sprintf("SELECT job_id FROM %s_job_status LIMIT 1", reg$id)
as.integer(dbDoQuery(reg, query)$job_id)
}
dbGetJobIds = function(reg) {
query = sprintf("SELECT job_id FROM %s_job_status", reg$id)
dbDoQuery(reg, query)$job_id
}
dbCheckJobIds = function(reg, ids) {
not.found = setdiff(ids, dbGetJobIds(reg))
if (length(not.found) > 0L)
stopf("Ids not present in registry: %s", collapse(not.found))
}
dbGetJobIdsIfAllDone = function(reg) {
query = sprintf("SELECT job_id, done FROM %s_job_status", reg$id)
res = dbDoQuery(reg, query)
if (all(! is.na(res$done)))
return(res$job_id)
stop("Not all jobs finished (yet)!")
}
dbGetLastAddedIds = function(reg, tab, id.col, n) {
query = sprintf("SELECT %s AS id_col FROM %s_%s ORDER BY %s DESC LIMIT %i",
id.col, reg$id, tab, id.col, n)
rev(dbDoQuery(reg, query)$id_col)
}
dbFindDone = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (done IS NOT NULL)", reg$id, if(negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindErrors = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (error IS NOT NULL)", reg$id, if(negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindTerminated = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (done IS NOT NULL OR error IS NOT NULL)", reg$id, if(negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindSubmitted = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (submitted IS NOT NULL)", reg$id, if (negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindStarted = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (started IS NOT NULL)", reg$id, if (negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindOnSystem = function(reg, ids, negate = FALSE, limit = NULL, batch.ids) {
if (missing(batch.ids))
batch.ids = getBatchIds(reg, "Cannot find jobs on system")
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (batch_job_id IN (%s))",
reg$id, if (negate) "NOT" else "", collapse(sqlQuote(batch.ids)))
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindSubmittedNotTerminated = function(reg, ids, negate = FALSE, limit = NULL) {
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (submitted IS NOT NULL AND done IS NULL AND error IS NULL)",
reg$id, if (negate) "NOT" else "")
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindRunning = function(reg, ids, negate = FALSE, limit = NULL, batch.ids) {
if (missing(batch.ids))
batch.ids = getBatchIds(reg, "Cannot find jobs on system")
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (batch_job_id IN (%s) AND started IS NOT NULL AND done IS NULL AND error IS NULL)",
reg$id, if (negate) "NOT" else "", collapse(sqlQuote(batch.ids)))
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindExpiredJobs = function(reg, ids, negate = FALSE, limit = NULL, batch.ids) {
if (missing(batch.ids))
batch.ids = getBatchIds(reg, "Cannot find jobs on system")
# started, not terminated, not running
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (started IS NOT NULL AND done IS NULL AND error is NULL AND
batch_job_id NOT IN (%s))", reg$id, if (negate) "NOT" else "", collapse(sqlQuote(batch.ids)))
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbFindDisappeared = function(reg, ids, negate = FALSE, limit = NULL, batch.ids) {
if (missing(batch.ids))
batch.ids = getBatchIds(reg, "Cannot find jobs on system")
query = sprintf("SELECT job_id FROM %s_job_status WHERE %s (submitted IS NOT NULL AND started IS NULL AND batch_job_id NOT IN (%s))",
reg$id, if (negate) "NOT" else "", collapse(sqlQuote(batch.ids)))
dbSelectWithIds(reg, query, ids, where = FALSE, limit = limit)$job_id
}
dbGetFirstJobInChunkIds = function(reg, ids){
query = sprintf("SELECT job_id, first_job_in_chunk_id FROM %s_job_status", reg$id)
dbSelectWithIds(reg, query, ids)$first_job_in_chunk_id
}
dbGetErrorMsgs = function(reg, ids, filter = FALSE, limit = NULL) {
query = sprintf("SELECT job_id, error from %s_job_status", reg$id)
if (filter)
query = sprintf("%s WHERE error IS NOT NULL", query)
dbSelectWithIds(reg, query, ids, where = !filter, limit = limit)
}
dbGetStats = function(reg, ids, running = FALSE, expired = FALSE, times = FALSE, batch.ids) {
cols = c(n = "COUNT(job_id)",
submitted = "COUNT(submitted)",
started = "COUNT(started)",
done = "COUNT(done)",
error = "COUNT(error)",
running = "NULL",
expired = "NULL",
t_min = "NULL",
t_avg = "NULL",
t_max = "NULL")
if (missing(batch.ids) && (expired || running))
batch.ids = getBatchIds(reg, "Cannot find jobs on system")
if(running)
cols["running"] = sprintf("SUM(started IS NOT NULL AND done IS NULL AND error IS NULL AND batch_job_id IN (%s))", collapse(sqlQuote(batch.ids)))
if(expired)
cols["expired"] = sprintf("SUM(started IS NOT NULL AND done IS NULL AND error IS NULL AND batch_job_id NOT IN (%s))", collapse(sqlQuote(batch.ids)))
if (times)
cols[c("t_min", "t_avg", "t_max")] = c("MIN(done - started)", "AVG(done - started)", "MAX(done - started)")
query = sprintf("SELECT %s FROM %s_job_status", collapse(paste(cols, "AS", names(cols)), sep = ", "), reg$id)
df = dbSelectWithIds(reg, query, ids, reorder = FALSE)
# Convert to correct type. Null has no type and casts tend to not work properly with RSQLite
x = c("n", "submitted", "started", "done", "error", "running", "expired")
df[x] = lapply(df[x], as.integer)
x = c("t_min", "t_avg", "t_max")
df[x] = lapply(df[x], as.double)
df
}
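The explicit lapply() coercions at the end exist because columns selected as NULL arrive untyped from RSQLite; the same pattern on a toy data frame (toy values, not the real result set):

```r
# Coerce column groups explicitly instead of relying on SQL casts.
df <- data.frame(n = "3", running = NA, t_min = NA, stringsAsFactors = FALSE)
x <- c("n", "running")
df[x] <- lapply(df[x], as.integer)   # "3" -> 3L, NA -> NA_integer_
x <- "t_min"
df[x] <- lapply(df[x], as.double)    # NA -> NA_real_
```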
dbGetJobNames = function(reg, ids) {
query = sprintf("SELECT job_id, jobname FROM %s_expanded_jobs", reg$id)
as.character(dbSelectWithIds(reg, query, ids)$jobname)
}
dbMatchJobNames = function(reg, ids, jobnames) {
query = sprintf("SELECT job_id FROM %s_expanded_jobs WHERE jobname IN (%s)", reg$id, collapse(sqlQuote(jobnames)))
dbSelectWithIds(reg, query, ids, where = FALSE)$job_id
}
############################################
### DELETE
############################################
dbRemoveJobs = function(reg, ids) {
query = sprintf("DELETE FROM %s_job_status WHERE job_id IN (%s)", reg$id, collapse(ids))
dbDoQuery(reg, query, flags = "rw")
query = sprintf("DELETE FROM %1$s_job_def WHERE job_def_id NOT IN (SELECT DISTINCT job_def_id FROM %1$s_job_status)", reg$id)
dbDoQuery(reg, query, flags = "rw")
return(invisible(TRUE))
}
############################################
### Messages
############################################
dbSendMessage = function(reg, msg, staged = useStagedQueries(), fs.timeout = NA_real_) {
if (staged) {
fn = getPendingFile(reg, msg$type, msg$ids[1L])
writeSQLFile(msg$msg, fn)
waitForFiles(fn, timeout = fs.timeout)
} else {
dbDoQuery(reg, msg$msg, flags = "rw")
}
}
dbSendMessages = function(reg, msgs, max.retries = 200L, sleep = function(r) 1.025^r,
staged = useStagedQueries(), fs.timeout = NA_real_) {
if (length(msgs) == 0L)
return(TRUE)
if (staged) {
chars = .OrderChars
# reorder messages in sublist
msgs = split(msgs, extractSubList(msgs, "type"))
msgs = msgs[order(match(names(msgs), names(chars)))]
fns = vcapply(msgs, function(cur) {
first = cur[[1L]]
fn = getPendingFile(reg, first$type, first$ids[1L], chars[first$type])
writeSQLFile(extractSubList(cur, "msg"), fn)
fn
})
waitForFiles(fns, timeout = fs.timeout)
} else {
ok = try(dbDoQueries(reg, extractSubList(msgs, "msg"), flags = "rw", max.retries, sleep))
if (is.error(ok)) {
ok = as.character(ok)
if (ok == "dbDoQueries: max retries reached, database is still locked!") {
return(FALSE)
} else {
# throw exception again
stopf("Error in dbSendMessages: %s", ok)
}
}
}
return(TRUE)
}
dbMakeMessageSubmitted = function(reg, job.ids, time = now(), batch.job.id, first.job.in.chunk.id = NULL,
resources.timestamp, type = "submitted") {
if(is.null(first.job.in.chunk.id))
first.job.in.chunk.id = "NULL"
updates = sprintf("first_job_in_chunk_id=%s, submitted=%i, batch_job_id='%s', resources_timestamp=%i",
first.job.in.chunk.id, time, batch.job.id, resources.timestamp)
list(msg = sprintf("UPDATE %s_job_status SET %s WHERE job_id in (%s)", reg$id, updates, collapse(job.ids)),
ids = job.ids,
type = type)
}
dbMakeMessageStarted = function(reg, job.ids, time = now(), type = "started") {
node = gsub("'", "\"", Sys.info()["nodename"], fixed = TRUE)
updates = sprintf("started=%i, node='%s', r_pid=%i, error=NULL, done=NULL", time, node, Sys.getpid())
list(msg = sprintf("UPDATE %s_job_status SET %s WHERE job_id in (%s)", reg$id, updates, collapse(job.ids)),
ids = job.ids,
type = type)
}
dbMakeMessageError = function(reg, job.ids, err.msg, memory = -1, type = "error") {
# FIXME how to escape ticks (')? Just replaced with double quotes for the moment
err.msg = gsub("'", "\"", err.msg, fixed = TRUE)
err.msg = gsub("[^[:print:]]", " ", err.msg)
updates = sprintf("error='%s', done=NULL, memory='%.4f'", err.msg, memory)
list(msg = sprintf("UPDATE %s_job_status SET %s WHERE job_id in (%s)", reg$id, updates, collapse(job.ids)),
ids = job.ids,
type = type)
}
dbMakeMessageDone = function(reg, job.ids, time = now(), memory = -1, type = "done") {
updates = sprintf("done=%i, error=NULL, memory='%.04f'", time, memory)
list(msg = sprintf("UPDATE %s_job_status SET %s WHERE job_id in (%s)", reg$id, updates, collapse(job.ids)),
ids = job.ids,
type = type)
}
dbMakeMessageKilled = function(reg, job.ids, type = "last") {
updates = "resources_timestamp=NULL, memory=NULL, submitted=NULL, started=NULL, batch_job_id=NULL, node=NULL, r_pid=NULL, done=NULL, error=NULL"
list(msg = sprintf("UPDATE %s_job_status SET %s WHERE job_id in (%s)", reg$id, updates, collapse(job.ids)),
ids = job.ids,
type = type)
}
dbConvertNumericToPOSIXct = function(x) {
now = Sys.time()
as.POSIXct(x, origin = now - as.integer(now))
}
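dbConvertNumericToPOSIXct() sets the origin to `now - as.integer(now)`, i.e. the 1970 epoch shifted by the current clock's sub-second fraction, so integer timestamps stored in the database round-trip to POSIXct without an explicit origin constant. A quick standalone round-trip check (function reproduced from above so the snippet runs on its own):

```r
dbConvertNumericToPOSIXct <- function(x) {
  now <- Sys.time()
  as.POSIXct(x, origin = now - as.integer(now))  # origin ~ 1970-01-01 + frac(now)
}
secs <- 1234567890                 # an epoch timestamp in whole seconds
p <- dbConvertNumericToPOSIXct(secs)
# p differs from `secs` only by the sub-second fraction of the current clock.
```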
dbSetJobFunction = function(reg, ids, fun.id) {
query = sprintf("UPDATE %1$s_job_def SET fun_id = '%2$s' WHERE job_def_id IN (SELECT job_def_id FROM %1$s_job_status WHERE job_id IN (%3$s))", reg$id, fun.id, collapse(ids))
dbDoQuery(reg, query, flags = "rw")
}
dbSetJobNames = function(reg, ids, jobnames) {
queries = sprintf("UPDATE %1$s_job_def SET jobname = '%2$s' WHERE job_def_id IN (SELECT job_def_id FROM %1$s_job_status WHERE job_id IN (%3$i))", reg$id, jobnames, ids)
dbDoQueries(reg, queries, flags = "rw")
}
rm(list=ls())
library(ggplot2)
library(ggthemes)
library(estCI)
#############################################
### The data used by Rosenbaum (2001)
#############################################
data(tunca.and.egeli.1996)
y = tunca.and.egeli.1996$y
tr= tunca.and.egeli.1996$tr
results = aveCI(y,tr, print=TRUE)
length.gain = 1 - (results$sattCI[[2]]-results$sattCI[[1]])/(results$neymanCI[[2]]-results$neymanCI[[1]])
cat("Percentage length gain for SATT: ",round(length.gain*100),"%","\n",sep="")
length.gain = 1 - (results$satcCI[[2]]-results$satcCI[[1]])/(results$neymanCI[[2]]-results$neymanCI[[1]])
cat("Percentage length gain for SATC: ",round(length.gain*100),"%","\n",sep="")
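The length gain reported above is one minus the ratio of interval widths. A numeric illustration with made-up endpoints (not the Tunca and Egeli estimates):

```r
neyman <- c(-0.2, 1.0)   # hypothetical SATE interval, width 1.2
satt <- c(0.0, 0.9)      # hypothetical SATT interval, width 0.9
length.gain <- 1 - diff(satt) / diff(neyman)
cat("Percentage length gain for SATT: ", round(length.gain * 100), "%", "\n", sep = "")
# prints: Percentage length gain for SATT: 25%
```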
### plot CI
dp = data.frame(ymax = c(results$sattCI[[2]],
results$satcCI[[2]],
results$neymanCI[[2]]),
ymin = c(results$sattCI[[1]],
results$satcCI[[1]],
results$neymanCI[[1]]),
parameter = c("SATT","SATC","SATE"),
ave.diff = rep(mean(y[tr==1])-mean(y[tr==0]),3)
)
p <- ggplot(dp, aes(x=parameter, y=ave.diff, colour=parameter))+
geom_point(size=3)+
geom_errorbar(aes(ymin=ymin, ymax=ymax, colour=parameter), size=1, width=.1)+
labs(
x = "\n Estimand of interest",
y = "Average treatment effects \n ( prediction\\confidence interval ) \n"
)+scale_colour_grey( start = 0, end = 0.5 )+
theme_bw()+
theme(panel.grid.major.x = element_blank(), panel.grid.minor.x = element_blank() )+
theme(panel.border = element_blank(),
axis.line.x = element_line(colour = "black"),
axis.line.y = element_line(colour = "black"))+
theme( legend.position = "bottom" )
p <- p + guides(col=guide_legend(title=""))
ggsave(file="~/Dropbox/att_CI/figures/rosenbaum2001_CI.pdf", plot = p)
| /Rosenbaum2001.R | no_license | yotamshemtov/Data_Replication | R | false | false | 1,870 | r |
normalise <- function(X, choice = 1) {
  # Match `choice` by name rather than by position; rescaling(), mean_norm()
  # and std_dev() are defined elsewhere in the repository.
  n <- switch(as.character(choice),
              "1" = rescaling(X),
              "2" = mean_norm(X),
              "3" = std_dev(X))
  return(n)
}
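The three helpers normalise() dispatches to are not defined in this file. The definitions below are assumptions based on the conventional meanings (min-max rescaling, mean normalisation, z-score standardisation), not the repository's own code:

```r
# Assumed implementations of the dispatch targets (hypothetical, for illustration).
rescaling <- function(X) (X - min(X)) / (max(X) - min(X))  # maps onto [0, 1]
mean_norm <- function(X) (X - mean(X)) / (max(X) - min(X))
std_dev <- function(X) (X - mean(X)) / sd(X)               # z-scores

x <- c(2, 4, 6, 8)
r <- rescaling(x)   # 0, 1/3, 2/3, 1
z <- std_dev(x)     # mean 0, sd 1
```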
| /normalisation.R | no_license | msk4862/machine-learning-algos | R | false | false | 168 | r |
NormalInverseGaussian <- R6Class("NormalInverseGaussian",
inherit = ContinuousDistribution,
public = list(
names = c("mu", "alpha", "beta", "delta"),
mu = NA,
alpha = NA,
beta = NA,
delta = NA,
gamma = NA,
initialize = function(u, a, b, d) {
self$mu <- u
self$alpha <- a
self$beta <- b
self$delta <- d
self$gamma <- sqrt(a^2 - b^2)
},
supp = function() { c(-Inf, Inf) },
properties = function() {
u <- self$mu
a <- self$alpha
b <- self$beta
d <- self$delta
g <- self$gamma
list(mean = u + d * b / g,
var = d * a^2 / g^3,
skewness = 3 * b / a / sqrt(d * g),
kurtosis = 3 * (1 + 4 * b^2 / a^2) / (d * g))
},
pdf = function(x, log=FALSE) {
fBasics::dnig(x, self$alpha, self$beta, self$delta, self$mu, log=log)
},
cdf = function(x) {
fBasics::pnig(x, self$alpha, self$beta, self$delta, self$mu)
},
quan = function(v) {
fBasics::qnig(v, self$alpha, self$beta, self$delta, self$mu)
}
)
)
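The closed-form moments in properties() can be sanity-checked without fBasics: base R's besselK() is enough to write the NIG density directly. The sketch below (arbitrary parameter values; dnig_manual is not part of this test file) integrates that density numerically and compares the first moment against mu + delta*beta/gamma:

```r
# NIG density in the (mu, alpha, beta, delta) parameterisation used above.
# expon.scaled = TRUE keeps besselK() finite in the far tails; the matching
# factor exp(-alpha * q) is folded into the exponent.
dnig_manual <- function(x, mu, alpha, beta, delta) {
  g <- sqrt(alpha^2 - beta^2)
  q <- sqrt(delta^2 + (x - mu)^2)
  alpha * delta * besselK(alpha * q, nu = 1, expon.scaled = TRUE) / (pi * q) *
    exp(delta * g + beta * (x - mu) - alpha * q)
}
mu <- 0.5; alpha <- 2; beta <- 0.8; delta <- 1.5
g <- sqrt(alpha^2 - beta^2)
mass <- integrate(function(x) dnig_manual(x, mu, alpha, beta, delta),
                  -Inf, Inf, rel.tol = 1e-8)$value
m1 <- integrate(function(x) x * dnig_manual(x, mu, alpha, beta, delta),
                -Inf, Inf, rel.tol = 1e-8)$value
```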
| /test/ref/continuous/normalinversegaussian.R | permissive | JuliaStats/Distributions.jl | R | false | false | 1,249 | r | NormalInverseGaussian <- R6Class("NormalInverseGaussian",
inherit = ContinuousDistribution,
public = list(
names = c("mu", "alpha", "beta", "delta"),
mu = NA,
alpha = NA,
beta = NA,
delta = NA,
gamma = NA,
initialize = function(u, a, b, d) {
self$mu <- u
self$alpha <- a
self$beta <- b
self$delta <- d
self$gamma <- sqrt(a^2 - b^2)
},
supp = function() { c(-Inf, Inf) },
properties = function() {
u <- self$mu
a <- self$alpha
b <- self$beta
d <- self$delta
g <- self$gamma
list(mean = u + d * b / g,
var = d * a^2 / g^3,
skewness = 3 * b / a / sqrt(d * g),
kurtosis = 3 * (1 + 4 * b^2 / a^2) / (d * g))
},
pdf = function(x, log=FALSE) {
fBasics::dnig(x, self$alpha, self$beta, self$delta, self$mu, log=log)
},
cdf = function(x) {
fBasics::pnig(x, self$alpha, self$beta, self$delta, self$mu)
},
quan = function(v) {
fBasics::qnig(v, self$alpha, self$beta, self$delta, self$mu)
}
)
)
|
library(devtools)
install_github('ollyburren/cupcake')
library(cupcake)
library(ggplot2)
## import GWAS data for basis
## support files
support.dir<-'/scratch/ob219/as_basis/support_tab'
# reference allele frequencies
ref_af_file<-file.path(support.dir,'as_basis_snps.tab')
ld_file<-file.path(support.dir,'all.1cM.tab')
#m_file<-file.path(support.dir,'as_basis_manifest.tab')
m_file<-file.path(support.dir,'as_basis_manifest_with_jia_cc.tab')
## dir where all preprocessed gwas files are.
## we expect variants to be reflected in ref_af_file, have there OR aligned and be defined in the manifest file
gwas_data_dir <- '/home/ob219/scratch/as_basis/gwas_stats/input_files'
basis.DT<-get_gwas_data(m_file,ref_af_file,ld_file,gwas_data_dir)
shrink.DT<-compute_shrinkage_metrics(basis.DT)
## need to add control where beta is zero
basis.mat.emp <- create_ds_matrix(basis.DT,shrink.DT,'emp')
## need to add control where beta is zero
basis.mat.emp<-rbind(basis.mat.emp,control=rep(0,ncol(basis.mat.emp)))
pc.emp <- prcomp(basis.mat.emp,center=TRUE,scale=FALSE)
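The projection step below relies on predict.prcomp(), which centres new rows with the basis' centring vector and rotates them into PC space. A toy check of that mechanic (random matrix, not the GWAS data):

```r
set.seed(1)
toy_basis <- matrix(rnorm(20), nrow = 5)        # 5 "traits" x 4 "SNP blocks"
toy_pc <- prcomp(toy_basis, center = TRUE, scale = FALSE)
new_row <- matrix(rnorm(4), nrow = 1)           # one new "trait" to project
proj <- predict(toy_pc, newdata = new_row)
manual <- (new_row - toy_pc$center) %*% toy_pc$rotation  # centre, then rotate
```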
## project on biobank to see if we can recreate Chris' figure.
bb_traits<-fread(m_file)[grep('jia_',trait),]$trait
#bb_traits<-bb_traits[bb_traits != 'jia_cc']
bb.DT<-get_gwas_data(m_file,ref_af_file,ld_file,gwas_data_dir,bb_traits)
bb.mat.emp<-create_ds_matrix(bb.DT,shrink.DT,'emp')
pred.emp <- predict(pc.emp,newdata=bb.mat.emp)
emp<-rbind(pc.emp$x,pred.emp)
ml<-list(
CD = 'bb_CD',
CEL = 'bb_CEL',
MS = 'bb_MS',
RA = 'bb_RA',
SLE = 'bb_SLE',
T1D = 'bb_T1D',
UC = 'bb_UC'
)
g <- function(M){
M <- cbind(as.data.table(M),trait=rownames(M))
M$compare<-"none"
for(i in seq_along(ml)) {
M[trait %in% c(names(ml)[i], ml[i]), compare:=names(ml)[i]]
}
M[trait=="control",compare:="control"]
M
}
emp<-g(emp)
ggplot(emp,aes(x=PC1,y=PC2,color=grepl('jia',trait),label=trait)) + geom_point() + geom_text() + theme_bw() + ggtitle('Empirical MAF SE shrinkage')
plot <- melt(emp,id.vars=c('trait','compare'))
plot[,label:=ifelse(!grepl('jia',trait),'non.jia',trait)]
## plot scree plots
library(ggplot2)
ggplot(plot,aes(x=variable,y=value,alpha=label!='non.jia',color=label,group=trait)) + geom_point() + geom_line() + theme_bw() + scale_alpha_discrete(range = c(0.1, 1))
#JIA is not adding much I don't think we can probably exclude from the basis.
## worried that MHC is causing the difference so remove chr6 from basis and projection
no6.basis.DT <- subset(basis.DT, chr !='6')
no6.shrink.DT<-compute_shrinkage_metrics(no6.basis.DT)
## need to add control where beta is zero
no6.basis.mat.emp <- create_ds_matrix(no6.basis.DT,no6.shrink.DT,'emp')
## need to add control where beta is zero
no6.basis.mat.emp<-rbind( no6.basis.mat.emp,control=rep(0,ncol( no6.basis.mat.emp)))
no6.pc.emp <- prcomp( no6.basis.mat.emp,center=TRUE,scale=FALSE)
no6.emp<-g(no6.pc.emp$x)
ggplot(no6.emp,aes(x=PC1,y=PC2,color=grepl('jia',trait),label=trait)) + geom_point() + geom_text() + theme_bw() + ggtitle('Empirical MAF SE shrinkage')
no6.bb.DT <- subset(bb.DT, chr !='6')
no6.bb.mat.emp<-create_ds_matrix(no6.bb.DT, no6.shrink.DT,'emp')
no6.pred.emp <- predict( no6.pc.emp,newdata=no6.bb.mat.emp)
no6.emp<-rbind(no6.pc.emp$x,no6.pred.emp)
no6.emp<-g(no6.emp)
ggplot(no6.emp,aes(x=PC1,y=PC2,color=grepl('jia',trait),label=trait)) + geom_point() + geom_text() + theme_bw() + ggtitle('Empirical MAF SE shrinkage')
| /R/analysis/jia_plot.R | no_license | ollyburren/as_basis | R | false | false | 3,403 | r | library(devtools)
install_github('ollyburren/cupcake')
library(cupcake)
library(ggplot2)
## import GWAS data for basis
## support files
support.dir<-'/scratch/ob219/as_basis/support_tab'
# reference allele frequencies
ref_af_file<-file.path(support.dir,'as_basis_snps.tab')
ld_file<-file.path(support.dir,'all.1cM.tab')
#m_file<-file.path(support.dir,'as_basis_manifest.tab')
m_file<-file.path(support.dir,'as_basis_manifest_with_jia_cc.tab')
## dir where all preprocessed gwas files are.
## we expect variants to be reflected in ref_af_file, have there OR aligned and be defined in the manifest file
gwas_data_dir <- '/home/ob219/scratch/as_basis/gwas_stats/input_files'
basis.DT<-get_gwas_data(m_file,ref_af_file,ld_file,gwas_data_dir)
shrink.DT<-compute_shrinkage_metrics(basis.DT)
## need to add control where beta is zero
basis.mat.emp <- create_ds_matrix(basis.DT,shrink.DT,'emp')
## need to add control where beta is zero
basis.mat.emp<-rbind(basis.mat.emp,control=rep(0,ncol(basis.mat.emp)))
pc.emp <- prcomp(basis.mat.emp,center=TRUE,scale=FALSE)
## project on biobank to see if we can recreate Chris' figure.
bb_traits<-fread(m_file)[grep('jia_',trait),]$trait
#bb_traits<-bb_traits[bb_traits != 'jia_cc']
bb.DT<-get_gwas_data(m_file,ref_af_file,ld_file,gwas_data_dir,bb_traits)
bb.mat.emp<-create_ds_matrix(bb.DT,shrink.DT,'emp')
pred.emp <- predict(pc.emp,newdata=bb.mat.emp)
emp<-rbind(pc.emp$x,pred.emp)
ml<-list(
CD = 'bb_CD',
CEL = 'bb_CEL',
MS = 'bb_MS',
RA = 'bb_RA',
SLE = 'bb_SLE',
T1D = 'bb_T1D',
UC = 'bb_UC'
)
g <- function(M){
M <- cbind(as.data.table(M),trait=rownames(M))
M$compare<-"none"
for(i in seq_along(ml)) {
M[trait %in% c(names(ml)[i], ml[i]), compare:=names(ml)[i]]
}
M[trait=="control",compare:="control"]
M
}
emp<-g(emp)
ggplot(emp,aes(x=PC1,y=PC2,color=grepl('jia',trait),label=trait)) + geom_point() + geom_text() + theme_bw() + ggtitle('Empirical MAF SE shrinkage')
plot <- melt(emp,id.vars=c('trait','compare'))
plot[,label:=ifelse(!grepl('jia',trait),'non.jia',trait)]
## plot scree plots
library(ggplot2)
ggplot(plot,aes(x=variable,y=value,alpha=label!='non.jia',color=label,group=trait)) + geom_point() + geom_line() + theme_bw() + scale_alpha_discrete(range = c(0.1, 1))
## JIA is not adding much; we can probably exclude it from the basis.
## worried that MHC is causing the difference so remove chr6 from basis and projection
no6.basis.DT <- subset(basis.DT, chr !='6')
no6.shrink.DT<-compute_shrinkage_metrics(no6.basis.DT)
## need to add control where beta is zero
no6.basis.mat.emp <- create_ds_matrix(no6.basis.DT,no6.shrink.DT,'emp')
## need to add control where beta is zero
no6.basis.mat.emp<-rbind( no6.basis.mat.emp,control=rep(0,ncol( no6.basis.mat.emp)))
no6.pc.emp <- prcomp( no6.basis.mat.emp,center=TRUE,scale=FALSE)
no6.emp<-g(no6.pc.emp$x)
ggplot(no6.emp,aes(x=PC1,y=PC2,color=grepl('jia',trait),label=trait)) + geom_point() + geom_text() + theme_bw() + ggtitle('Empirical MAF SE shrinkage')
no6.bb.DT <- subset(bb.DT, chr !='6')
no6.bb.mat.emp<-create_ds_matrix(no6.bb.DT, no6.shrink.DT,'emp')
no6.pred.emp <- predict( no6.pc.emp,newdata=no6.bb.mat.emp)
no6.emp<-rbind(no6.pc.emp$x,no6.pred.emp)
no6.emp<-g(no6.emp)
ggplot(no6.emp,aes(x=PC1,y=PC2,color=grepl('jia',trait),label=trait)) + geom_point() + geom_text() + theme_bw() + ggtitle('Empirical MAF SE shrinkage')
### QC_Seq_Depth Analysis
rm(list = ls())
source("src/load_packages.R")
source("src/load_phyloseq_obj.R")
source("src/miscellaneous_funcs.R")
source("src/metadata_prep_funcs.R")
source("src/alpha_diversity.R")
source("src/beta_diversity.R")
base::load("files/Phyloseq_objects_Woltka.RData")
base::load("files/Phyloseq_objects_Woltka_Rarefied.RData")
#------------------------------------------
# Alpha Diversity
#------------------------------------------
alpha_diversity_summary(x = phylo_dats_rare, z = names(phylo_dats_rare),
tag = "Rarefied")
alpha_diversity_summary(x = phylo_dats, z = names(phylo_dats),
tag = "Non-Rarefied")
#------------------------------------------
# Beta Diversity
#------------------------------------------
beta_diversity_summary(x = phylo_dats_rare, z = names(phylo_dats_rare),
tag = "Rarefied", dist = "Aitchisons")
beta_diversity_summary(x = phylo_dats, z = names(phylo_dats),
tag = "Non-Rarefied", dist = "Aitchisons")
beta_diversity_summary(x = phylo_dats_rare, z = names(phylo_dats_rare),
tag = "Rarefied", dist = "bray")
beta_diversity_summary(x = phylo_dats, z = names(phylo_dats),
tag = "Non-Rarefied", dist = "bray")
#______________________________________________________________________________
# Taxonomy Rel Abundance Bars -----
#______________________________________________________________________________
dat.obj <-
dat.genus
dat.top.30 <- dat.obj %>%
get_top_taxa(n=30, relative = TRUE, discard_other = F, other_label = "Other")
dat.top.20 <- dat.obj %>%
get_top_taxa(n=20, relative = TRUE, discard_other = F, other_label = "Other")
dat.top.15 <- dat.obj %>%
get_top_taxa(n=15, relative = TRUE, discard_other = F, other_label = "Other")
dat.obj %>%
abundance_heatmap(treatment = "treatment_group")
barcols <- c(
"#386cb0",
"#7fc97f",
"#beaed4",
"#fdc086",
"#ffff99",
"#f0027f",
"#bf5b17",
"#666666",
"#7fc97f",
"#beaed4"
)
# Plot all Samples
barplt1 <-
fantaxtic_bar(
dat.top.30,
color_by = "Order",
label_by = "Genus",
other_label = "Other",
facet_by = "genotype",
grid_by = "diet",
facet_cols = 2,
order_alg = "hclust",
base_color = "#5b9bd5",
palette = barcols,
# color_levels = barcol_ID
) +
labs(y = "Relative Abundance") +
theme(axis.text.x = element_blank())
ggsave(barplt1, filename = "data/Community_Composition/Stacked_Barplots/Top30_Genera_Cohort_Facet.png",
width = 8, height = 5.75)
# Merge samples by treatment group
dat.top2 <- merge_samples(dat.top.30, "treatment_group")
# Summary Barplot
barplt2 <-
fantaxtic_bar(
dat.top2,
color_by = "Order",
label_by = "Genus",
other_label = "Other",
order_alg = "hclust",
palette = barcols
) +
labs(y = "Relative Abundance", x = NULL) +
theme(axis.text.x = element_text(angle = 45, hjust = 1),
strip.text = element_blank(),
plot.margin = unit(c(1, 1, 1, 1), "cm")
)
ggsave(barplt2, filename = "data/Community_Composition/Stacked_Barplots/Top30_Genera_Group_Facet_Summary.png",
width = 4, height = 6)
# Abundance filter for top 15 Genera
dat.top3 <- dat.obj %>%
get_top_taxa(n=15, relative = TRUE, discard_other = F, other_label = "Other")
# Merge cohort specific donor groups
dat.top3 <- merge_samples(dat.top3, "cohort_donor_group")
# Summary Barplot (uses the top-15 object created above)
barplt2 <-
  fantaxtic_bar(
    dat.top3,
color_by = "Order",
label_by = "Genus",
other_label = "Other",
facet_by = "cohort",
facet_cols = 2,
order_alg = "hclust",
base_color = "#5b9bd5"
) +
labs(y = "Relative Abundance")
ggsave(barplt2, filename = "data/Community_Composition/Stacked_Barplots/Top30_Genera_Cohort_Facet_Summary.svg",
width = 4.5, height = 6)
# Source file: /src/community_composition_overview.R (repo: jboktor/SCFA_wmgx, permissive license)
ReadDisk = function(aeidir, disknames, n, maxTlength, c.ind) {
disk = array(data = NA, dim = c( n, maxTlength, length(c.ind)+1),
dimnames = list(NULL,NULL,c(c.ind,'r')) )
for (i in 1:length(disknames)) {
Tlength = length(readLines( paste(aeidir,disknames[i],'.aei',sep='') ) )-4
objdata = read.table(
paste(aeidir,disknames[i],'.aei',sep=''), header=F,skip=4,
col.names=cols)[,c.ind]
# It's stupid that I have to iterate over this, but otherwise it
# turns my matrix into a list of n*maxTlength*ndim single elements...???
for (k in 1:Tlength) {
for (j in 1:length(c.ind)) {
disk[i, k, j] = objdata[k,j]
} # j, c.ind
r = sqrt( (disk[i,k,'x']-star[cent,k,'x'])^2 +
(disk[i,k,'y']-star[cent,k,'y'])^2 +
(disk[i,k,'z']-star[cent,k,'z'])^2 )
disk[i, k, length(c.ind)+1] = r
} # k, Tlength
} # i, disknames
return(disk)
} # function
#------------------------------------------------------------------------------
FindClosest = function(x,n) {
Delta = abs(x-n)
MinDel = min(Delta)
MinInd = which(Delta == MinDel)
if (length(MinInd) >1) MinInd = MinInd[1]
return(MinInd)
} # function
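A quick usage sketch for `FindClosest` (the values are illustrative, not from the analysis): it returns the index of the element of `x` nearest to `n`, keeping the first index on ties.

```r
# Self-contained copy of FindClosest so the example runs on its own.
FindClosest <- function(x, n) {
  Delta <- abs(x - n)
  MinInd <- which(Delta == min(Delta))
  if (length(MinInd) > 1) MinInd <- MinInd[1]
  MinInd
}

stopifnot(FindClosest(c(1, 5, 9), 6) == 2)  # 5 is the value closest to 6
stopifnot(FindClosest(c(4, 8), 6) == 1)     # tie (distance 2 vs 2): first index wins
```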
#------------------------------------------------------------------------------
### Moving average that gives NAs for boundaries
#MovingAvg <- function(x,n=5){filter(x,rep(1/n,n), sides=2)}
#------------------------------------------------------------------------------
### Moving average that gives one-sided average for boundaries.
### n = number of points on either side to include (so n=1 avgs 3 points)
### (needs work for n>1 -- will fail)
MovingAvg = function(x,n=1) {
len=length(x)
avg = rep(NA,len)
for (i in 1:len) {
if (i == 1) avg[i] = mean(x[i:(i+n)]) else if (i == len) {
avg[i]=mean(x[(i-1):i]) } else {
avg[i] = mean(x[(i-n):(i+n)]) }
}
return(avg)
}
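A small spot check of the boundary behaviour described in the comments above for n = 1 (illustrative input): interior points average three values, while each endpoint averages only two.

```r
# Self-contained copy of MovingAvg (n = 1 case) with a sanity check.
MovingAvg <- function(x, n = 1) {
  len <- length(x)
  avg <- rep(NA, len)
  for (i in 1:len) {
    if (i == 1) {
      avg[i] <- mean(x[i:(i + n)])        # left boundary: one-sided average
    } else if (i == len) {
      avg[i] <- mean(x[(i - 1):i])        # right boundary: one-sided average
    } else {
      avg[i] <- mean(x[(i - n):(i + n)])  # interior: centred average
    }
  }
  avg
}

stopifnot(all.equal(MovingAvg(c(1, 2, 3, 4)), c(1.5, 2, 3, 3.5)))
```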
# Source file: /Analysis/ReadDiskFns.R (repo: RJWorth/AlphaCen, no license)
\name{quadline}
\alias{quadline}
\title{Quadratic Overlay}
\description{
Overlays a quadratic curve corresponding to a fitted quadratic model.
}
\usage{
quadline(lm.obj, ...)
}
\arguments{
\item{lm.obj}{A \code{lm} object (a quadratic fit) }
\item{...}{Other arguments to the \code{lines} function; e.g. \code{col}}
}
\value{
The function superimposes a quadratic curve onto an existing scatterplot.
}
\author{W.J. Braun}
\seealso{\code{lm}}
\examples{
data(p4.18)
attach(p4.18)
y.lm <- lm(y ~ x1 + I(x1^2))
plot(x1, y)
quadline(y.lm)
detach(p4.18)
}
\keyword{models}
% Source file: /man/quadline.Rd (repo: cran/MPV, no license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ch11-fn.R
\name{ttest.plot}
\alias{ttest.plot}
\title{Plot the PDF of Test Statistic with the T-distribution}
\usage{
ttest.plot(md, deg, prng = c(-4, 4), side = "two", dig = 4, mt,
pvout = TRUE)
}
\arguments{
\item{md}{T-test statistic for the difference of population means}
\item{deg}{Degree of freedom}
\item{prng}{Range of x-axis, Default: c(-4, 4)}
\item{side}{Type of the alternative hypothesis, Default: 'two'}
\item{dig}{Number of digits below the decimal point, Default: 4}
\item{mt}{Plot title}
\item{pvout}{Print p-value? Default: TRUE}
}
\value{
None.
}
\description{
Plot the PDF of Test Statistic with the T-distribution
}
\examples{
ttest.plot(1.96, deg=24)
}
% Source file: /man/ttest.plot.Rd (repo: zlfn/Rstat-1, no license)
# Code written by Giovanni Zanitti (giovanni.zanitti@gmail.com)
# Last updated 04/08/16
# This file imports the data for the dashboard and performs a few manipulations on it
# It is sourced by server.R and UI.R, so it is required for the dashboard to work
library(sqldf)
library(ggplot2)
library(leaflet)
library(gtools)
library(dplyr)
library(mapdata)
library(stringr)
library(proto)
library(gridSVG)
library(ggmap)
library(grid)
library(RColorBrewer)
# Functions
ReplaceNA <- function(df){
m_df <- as.matrix(df)
m_df[which(m_df == "")] <- "Vide"
m_df[which(m_df == "None")] <- "Vide"
return(data.frame(m_df))
}
Unaccent <- function(text) {
text <- gsub("['`^~\"]", " ", text)
text <- iconv(text, to="ASCII//TRANSLIT//IGNORE")
text <- gsub("['`^~\"]", "", text)
return(text)
}
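A minimal sanity check for `ReplaceNA` above, on made-up data (`Unaccent` is not spot-checked here because its `iconv` transliteration is platform-dependent):

```r
# Self-contained copy of the ReplaceNA helper defined above.
ReplaceNA <- function(df) {
  m_df <- as.matrix(df)
  m_df[which(m_df == "")]     <- "Vide"
  m_df[which(m_df == "None")] <- "Vide"
  data.frame(m_df)
}

out <- ReplaceNA(data.frame(a = c("", "None", "x"), stringsAsFactors = FALSE))
stopifnot(identical(as.character(out$a), c("Vide", "Vide", "x")))
```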
geom_tooltip <- function (mapping = NULL, data = NULL, stat = "identity",
                          position = "identity", real.geom = NULL, ...) {
  rg <- real.geom(mapping = mapping, data = data, stat = stat,  ## (1)
                  position = position, ...)
  rg$geom <- proto(rg$geom, {  ## (2)
    draw <- function(., data, ...) {
      grobs <- list()
      for (i in 1:nrow(data)) {
        grob <- .super$draw(., data[i,], ...)  ## (3)
        grobs[[i]] <- garnishGrob(grob,  ## (4)
                                  `data-tooltip`=data[i,]$tooltip)
      }
      ggplot2:::ggname("geom_tooltip", gTree(
        children = do.call("gList", grobs)
      ))
    }
    required_aes <- c("tooltip", .super$required_aes)
  })
  rg  ## (5)
}
######################### Data import
#### Country coordinates
coord_country <- read.csv(file = "data/coord_country.csv", header = T)
#### Courses
course <- read.csv("data/Courses.csv", header = T, encoding="UTF-8")
#### Enrollments
inscr <- read.csv(file = "data/list-enrollments-MinesTelecom.csv", header = T)
# Compute age
inscr <- cbind(inscr, Age = as.numeric(format(as.Date(inscr$enrollment_time), format = "%Y")) - as.numeric(paste(inscr$year_of_birth)))
inscr$Age <- as.numeric(inscr$Age)
names(inscr)[1] <- "course"
# Convert "enrollment_time" to Date so it can be used as such in plots
inscr = cbind(inscr,Date = as.Date(inscr$enrollment_time))
### dff (Final dataframe)
dff <- data.frame()
df_var = data.frame(course = NULL,session = NULL, Quest = NULL,variable = NULL)
# For each MOOC
for(i in list.files("data/MOOC/")){
  # For each session of each MOOC
for(j in list.files(paste("data/MOOC/",i, sep = ""))){
    # For each file of each session of each MOOC
for(k in list.files(paste("data/MOOC/",i,"/",j, sep =""))){
      # Import the file
data <- read.csv(paste("data/MOOC/",i,"/",j,"/",k,sep = ""), header = T)
      # Create two new variables giving the course code and the session
data <- cbind(data, short_code = substr(data$course,0,18), session = sub(".*/", "", data$course))
      # Retrieve the exact course name
data = sqldf("Select data.*, course.name_course from data, course where short_code = code_course")
      # Retrieve the variable names of this file
vec <- names(data)
df_var_temp <- data.frame(course = as.character(unique(data$name_course)), session = as.character(unique(data$session)), Quest = as.character(unique(data$Quest)), variable = vec)
      # Store the variables in the "df_var" data frame
df_var <- rbind(df_var,df_var_temp)
      # Store the imported and modified file in dff
dff <- smartbind(dff,data)
}
}
}
# Drop the first column
dff <- dff[-1]
# Compute age
dff <- cbind(dff, Age = as.numeric(format(as.Date(dff$enrollment_time), format = "%Y")) - as.numeric(paste(dff$year_of_birth)))
# Compute age classes
dff <- cbind(dff, C_Age = cut(dff$Age, breaks = c(0, 13, 17, 24, 34, 44, 54, 64, 100), include.lowest = TRUE))
inscr <- cbind(inscr, C_Age = cut(inscr$Age, breaks = c(0, 13, 17, 24, 34, 44, 54, 64, 100), include.lowest = TRUE))
# Split the course code into "short_code" and session
inscr = cbind(inscr, short_code = substr(inscr$course,0,18), session = sub(".*/", "", inscr$course))
inscr = sqldf("Select inscr.*, course.name_course from inscr, course where short_code = code_course")
# Replace empty values
dff<-ReplaceNA(dff)
# Convert the Age and year_of_birth variables to numeric
dff$Age <- as.numeric(dff$Age)
dff$year_of_birth <- as.numeric(dff$year_of_birth)
# Remove accents
dff$country<-Unaccent(dff$country)
# List of all variables in dff
List_Var_Tot = as.character(names(dff))
# Import the file where the question wordings are stored
Questi <- read.csv("data/questions.csv", header = T, encoding = "UTF-8")
########### Map data
df_map <- sqldf("select inscr.*,c.lat, C.lon from inscr, coord_country c where inscr.country = c.country")
df_map = df_map[!is.na(df_map$lat),]
#dataset
df_map <- sp::SpatialPointsDataFrame(
cbind(
df_map$lon,
df_map$lat
),
data.frame(Country = df_map$country, course = df_map$name_course, session = df_map$session)
)
# Variable used in server.R to make the variable selection reactive
final <- NULL
for (i in List_Var_Tot){
commands <- paste("'",i,"' = dff[dff$Quest == input$sel_Quest & dff$name_course == input$sel_MOOC & dff$session == input$sel_Sess,]$",i,",", sep = "")
final <- paste(final,commands,sep = "")
}
final <- substr(final,start = 0, stop = nchar(final)-1)
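To make the string-building above concrete, here is a miniature version of the same pattern, using a placeholder data frame name `x` and two made-up variable names (the real loop substitutes the full dff subsetting expression):

```r
# Mini version of the loop above: build "'var' = x$var" cases joined by
# commas, then strip the trailing comma with substr.
vars <- c("Age", "gender")
final <- NULL
for (i in vars) {
  final <- paste(final, paste("'", i, "' = x$", i, ",", sep = ""), sep = "")
}
final <- substr(final, 0, nchar(final) - 1)  # drop the trailing comma

stopifnot(final == "'Age' = x$Age,'gender' = x$gender")
```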
cols_gender <- setNames(c("lightblue","pink","lightgreen"),c("m","f","o"))
# Satis = data.frame(course = NA, session = NA)
# for (i in unique(as.character(dff$name_course))){
# for (j in unique(as.character(dff[dff$name_course == i,]$session))){
# MOOC_fin <- dff[dff$name_course == i & dff$Quest == 'fin' & dff$session == j,
# -which(is.na(dff[dff$name_course == i & dff$Quest == 'fin' & dff$session == j,][1,]))]
#
# MOOC_fin <- MOOC_fin[,grep("Satis",substr(colnames(MOOC_fin),0,5))]
# l <- names(MOOC_fin)
# Satis = smartbind(Satis,l)
# Satis[nrow(Satis),]$session <- j
# Satis[nrow(Satis),]$course <- i
# }
# }
#
# Satis<-Satis[-1,]
# Code used for the gauges
Satis = data.frame(course = NA)
for (i in unique(as.character(dff$name_course))){
MOOC_fin <- dff[dff$name_course == i & dff$Quest == 'fin',
-which(is.na(dff[dff$name_course == i & dff$Quest == 'fin',][1,]))]
MOOC_fin <- MOOC_fin[,grep("Satis",substr(colnames(MOOC_fin),0,5))]
l <- names(MOOC_fin)
Satis = smartbind(Satis,l)
Satis[nrow(Satis),]$course <- i
}
Satis<-Satis[-1,]
# Code to get the data for the TOP 10 represented countries
nb_mooc_p <- sqldf("Select DISTINCT user,count(course) as nb,gender,age,education,country,C_Age from inscr group by user order by count(course) DESC")
n_course = levels(course$name_course)
Course_country_nb <- sqldf("Select name_course,country,count(user) as nb from inscr group by name_course, country ")
Course_country_nb <- ReplaceNA(Course_country_nb)
tab_nb_cours_country <- NULL
for (i in levels(course$name_course)){
Nb_country_course <- sqldf(paste("Select country,sum(nb) as s_nb from Course_country_nb where name_course = '",i,"' group by country order by s_nb DESC", sep = ""))
Nb_country_course <- data.frame(t(Nb_country_course))
colnames(Nb_country_course) = as.character(unlist(Nb_country_course[1,]))
Nb_country_course <- Nb_country_course[-1,]
Nb_country_course <- cbind(course = i, Nb_country_course)
#print(Nb_country_course)
tab_nb_cours_country <- smartbind(tab_nb_cours_country,Nb_country_course)
}
tab_nb_cours_country <- tab_nb_cours_country[-1,]
inscr$session <- as.character(inscr$session)
dff$session <- as.character(dff$session)
# Source file: /Application/Analyse.R (repo: GiovanniZanitti/Stage_TB, no license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/p_value.R
\name{get_pvalue}
\alias{get_pvalue}
\alias{p_value}
\title{Compute p-value}
\usage{
p_value(x, obs_stat, direction)
get_pvalue(x, obs_stat, direction)
}
\arguments{
\item{x}{Data frame of calculated statistics or containing attributes of
theoretical distribution values.}
\item{obs_stat}{A numeric value or a 1x1 data frame (as extreme or more
extreme than this).}
\item{direction}{A character string. Options are \code{"less"}, \code{"greater"}, or
\code{"two_sided"}. Can also specify \code{"left"}, \code{"right"}, or \code{"both"}.}
}
\value{
A 1x1 data frame with value between 0 and 1.
}
\description{
Only simulation-based methods are currently supported. \code{get_pvalue()}
is an alias of \code{p_value}.
}
\examples{
mtcars_df <- mtcars \%>\%
dplyr::mutate(am = factor(am))
d_hat <- mtcars_df \%>\%
specify(mpg ~ am) \%>\%
calculate(stat = "diff in means", order = c("1", "0"))
null_distn <- mtcars_df \%>\%
specify(mpg ~ am) \%>\%
hypothesize(null = "independence") \%>\%
generate(reps = 100) \%>\%
calculate(stat = "diff in means", order = c("1", "0"))
null_distn \%>\%
p_value(obs_stat = d_hat, direction = "right")
}
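The `direction` options documented above reduce to tail proportions of the simulated null statistics. A minimal base-R sketch of that logic (a hypothetical helper for illustration, not the infer implementation):

```r
# Simulation-based p-value: the proportion of null statistics at least as
# extreme as the observed statistic, in the requested direction.
simulated_p_value <- function(null_stats, obs_stat,
                              direction = c("greater", "less", "two_sided")) {
  direction <- match.arg(direction)
  right <- mean(null_stats >= obs_stat)
  left  <- mean(null_stats <= obs_stat)
  switch(direction,
         greater   = right,
         less      = left,
         two_sided = min(1, 2 * min(right, left)))  # double the smaller tail
}

simulated_p_value(c(-2, -1, 0, 1, 2), obs_stat = 1, direction = "greater")  # 0.4
```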
| /man/get_pvalue.Rd | no_license | jimtyhurst/infer | R | false | true | 1,249 | rd |
#' @name filter
#' @rdname dplyr_single
#' @importFrom dplyr filter
#' @export
NULL
#' Return documents with matching conditions
#'
#' Use `filter()` to select documents where conditions evaluated on document
#' variables are true. Documents where the condition evaluates to `NA` are
#' dropped. A tidy replacement for [corpus_subset()][quanteda::corpus_subset()].
#'
#' @param .data a \pkg{quanteda} object whose documents will be filtered
#' @param ... Logical predicates defined in terms of the document variables in
#'   `.data`, or a condition supplied externally whose length matches
#'   `ndoc(.data)`. See [filter][dplyr::filter()].
#' @inheritParams dplyr::filter
#' @importFrom quanteda corpus convert %>% meta
#' @export
#' @examples
#' data_corpus_inaugural %>%
#' filter(Year < 1810) %>%
#' summary()
#'
filter.corpus <- function(.data, ..., .preserve = FALSE) {
corpus_stv_bydoc(.data, ..., .preserve = .preserve, fun = filter)
}
#' Return rows with matching conditions including feature matches
#'
#' Use filters to find rows of the return objects from many `textstat_*()`
#' functions where conditions are true, including matches to features using
#' `"glob"`, `"regex"` or `"fixed"` patterns.
#' @param .data a `textstat` object returned from one of the applicable
#' `textstat_*` functions in \pkg{quanteda}
#' @param ... filter conditions passed to \pkg{dplyr} [dplyr::filter()]
#' @inheritParams quanteda::pattern
#' @inheritParams quanteda::valuetype
#' @param case_insensitive ignore case when matching, if `TRUE`
#' @seealso [quanteda.textstats::textstat_collocations()],
#' [quanteda.textstats::textstat_keyness()],
#' [quanteda.textstats::textstat_frequency()]
#' @keywords internal
#' @importFrom utils getS3method getFromNamespace
#' @export
#' @examples
#' period <- ifelse(docvars(data_corpus_inaugural, "Year") < 1945,
#' "pre-war", "post-war")
#' mydfm <- tokens(data_corpus_inaugural) %>%
#' dfm() %>%
#' dfm_group(groups = period)
#' keyness <- quanteda.textstats::textstat_keyness(mydfm)
#' filter(keyness, pattern = "america*")
#' filter(keyness, p < .00001, pattern = "america*")
#' filter(keyness, p < .00001) %>% head()
filter.textstat <- function(.data, ...,
pattern = NULL,
valuetype = c("glob", "regex", "fixed"),
case_insensitive = TRUE) {
attrs <- attributes(.data)
# get regex2id from quanteda
regex2id <- getFromNamespace("pattern2id", "quanteda")
# call dplyr filter, if ... arguments are supplied
if (...length() > 0) {
.data <- getS3method("filter", "data.frame")(.data, ...)
}
# select on features if specified
if (!is.null(pattern)) {
valuetype <- match.arg(valuetype)
id <- unlist(regex2id(pattern, .data[[1]], valuetype, case_insensitive))
.data <- .data[id, , drop = FALSE]
}
# reclass the object as textstat etc.
class(.data) <- attrs$class
.data
}
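`filter.textstat` above delegates feature matching to quanteda's internal `pattern2id`. For the `"glob"` case that machinery is essentially `utils::glob2rx()` plus `grepl()`; a rough standalone sketch (simplified for illustration, not quanteda's actual implementation):

```r
# Return indices of features matching a pattern, mimicking the valuetype
# options: "glob" is converted to a regex via utils::glob2rx(), "regex" is
# used as-is, and "fixed" means an exact (literal) match.
match_features <- function(features, pattern,
                           valuetype = c("glob", "regex", "fixed"),
                           case_insensitive = TRUE) {
  valuetype <- match.arg(valuetype)
  if (case_insensitive) {
    features <- tolower(features)
    pattern  <- tolower(pattern)
  }
  if (valuetype == "fixed") return(which(features == pattern))
  rx <- if (valuetype == "glob") utils::glob2rx(pattern) else pattern
  which(grepl(rx, features))
}

feats <- c("america", "american", "americas", "europe")
match_features(feats, "america*")  # glob match: 1 2 3
```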
| /R/filter.R | no_license | quanteda/quanteda.tidy | R | false | false | 3,065 | r |
library(shiny)
# Define UI for app that draws a histogram ----
ui <- fluidPage(
# App title ----
titlePanel("Hello Shiny By Sasan!!"),
# Sidebar layout with input and output definitions ----
sidebarLayout(
# Sidebar panel for inputs ----
sidebarPanel(
# Input: Slider for the number of bins ----
sliderInput(inputId = "bins",
label = "Number of bins:",
min = 1,
max = 100,
value = 30
)
),
# Main panel for displaying outputs ----
mainPanel(
# Output: Histogram ----
plotOutput(outputId = "distPlot")
)
)
)
# Define server logic required to draw a histogram ----
server <- function(input, output) {
# Histogram of the Old Faithful Geyser Data ----
# with requested number of bins
# This expression that generates a histogram is wrapped in a call
# to renderPlot to indicate that:
#
# 1. It is "reactive" and therefore should be automatically
# re-executed when inputs (input$bins) change
# 2. Its output type is a plot
output$distPlot <- renderPlot({
x <- faithful$waiting
bins <- seq(min(x), max(x), length.out = input$bins + 1)
hist(x, breaks = bins, col = "red", border = "black",
xlab = "Waiting time to next eruption (in mins)",
main = "Histogram of waiting times")
})
}
shinyApp(ui = ui, server = server)
| /rShinyApp.R | no_license | sgnajar/R-Shiny | R | false | false | 1,482 | r |
#!/usr/bin/env Rscript
source("~/CMWT/common.R")
"Usage:\nevt.R [-f <file> -s <sheet>]\n\nOptions:\n  -f  Input Excel file\n  -s  Sheet to read [default: 1]\n" -> doc
opts <- docopt(doc)
input_dates <- Sys.Date() - 1
if (lubridate::wday(input_dates) == 1) {
input_dates <- as.Date((input_dates - 2):input_dates, "1970-01-01")
}
dat <- readxl::read_excel(opts$f, sheet = eval(parse(text = opts$s)))
dat <- dat %>% left_join(kods, by = c("Источник" = "news_source"))
if (!is.POSIXct(dat$`Start time`)) {
dat$`Start time` <- as.POSIXct(lubridate::hms(dat$`Start time`), origin = "1969-12-31 21:00:00", tz = "Europe/Kiev")
dat$`End time` <- as.POSIXct(lubridate::hms(dat$`End time`), origin = "1969-12-31 21:00:00", tz = "Europe/Kiev")
}
if ("Канал для свода" %in% names(dat)) {
programs <- readr::read_rds(paste0("~/CMWT/workfiles/programs/programs_", input_dates[1], ".rds")) %>% left_join(kods, by = c("Источник" = "news_source"))
dat %<>% mutate(`Час початку програми` = ifelse(grepl("*.mp4$", Текст), substr(Текст, nchar(Текст) - 20, nchar(Текст) - 8),
substr(Текст, nchar(Текст) - 17, nchar(Текст) - 5)
))
dat$`Час початку програми` <- ymd_hm(dat$`Час початку програми`, tz = "Europe/Kiev")
dat %<>% select(Дата, Источник, `Час початку програми`, `Start time`, `End time`, ID)
dat$Дата <- as.Date(dat$Дата)
programs %<>% semi_join(dat, by = c("Дата", "ID", "Час початку програми"))
dat %<>% bind_rows(programs)
}
evt(dat, basename(opts$f))
| /evt.R | no_license | RomanKyrychenko/CMWT | R | false | false | 1,689 | r |
#' @title Hannan-Quinn Information Criterion
#'
#' @description Calculates Hannan-Quinn Information Criterion (HQIC) for "lm" and "glm" objects.
#'
#' @param model a "lm" or "glm" object
#'
#' @details
#' HQIC (Hannan and Quinn, 1979) is calculated as
#'
#' \deqn{-2LL(\theta) + 2k\log(\log(n))}
#'
#' @return HQIC measurement of the model
#'
#' @importFrom stats logLik
#' @examples
#' x1 <- rnorm(100, 3, 2)
#' x2 <- rnorm(100, 5, 3)
#' x3 <- rnorm(100, 67, 5)
#' err <- rnorm(100, 0, 4)
#'
#' ## round so we can use it for Poisson regression
#' y <- round(3 + 2*x1 - 5*x2 + 8*x3 + err)
#'
#' m1 <- lm(y~x1 + x2 + x3)
#' m2 <- glm(y~x1 + x2 + x3, family = "gaussian")
#' m3 <- glm(y~x1 + x2 + x3, family = "poisson")
#'
#' HQIC(m1)
#' HQIC(m2)
#' HQIC(m3)
#'
#' @references
#' Hannan, E. J., & Quinn, B. G. (1979). The determination of the order of an autoregression. Journal of the Royal Statistical Society: Series B (Methodological), 41(2), 190-195.
#'
#' @export
HQIC <- function(model) {
LL <- logLik(object = model)
df <- attr(LL, "df")
n <- length(model$residuals)
c(-2*LL + 2*df*log(log(n)))
}
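The formula in the roxygen block above maps one-to-one onto the function body. A quick sanity check on a built-in dataset, using the identity AIC = -2LL + 2k (base R only; the variable names here are just for illustration):

```r
# Compute HQIC by hand for a small linear model and relate it to AIC:
# HQIC = -2*LL + 2*k*log(log(n)) = AIC + 2*k*(log(log(n)) - 1).
fit <- lm(dist ~ speed, data = cars)
LL  <- logLik(fit)
k   <- attr(LL, "df")           # 3: intercept, slope, error variance
n   <- length(fit$residuals)    # 50 observations
hqic <- -2 * as.numeric(LL) + 2 * k * log(log(n))
hqic
```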
| /R/HQIC.R | no_license | cran/ICglm | R | false | false | 1,153 | r |
#' BoskR - Assess Adequacy of Diversification Models Using Tree Shapes
#' @details \packageIndices{BoskR}
#' @references In prep.
#' @keywords package
#' @seealso \code{\link{CombineTrees}}, \code{\link{GetMetricTreeSets}}, \code{\link{GetTreeMetrics}}, \code{\link{GetTreeParams}}, \code{\link{PvalMetrics}}, \code{\link{ScatterMetrics}}, \code{\link{TreeCorr}}, \code{\link{plotPvalMetricsCDF}}, \code{\link{plotPvalMetricsPDF}}
#'
#' \code{\link[RPANDA:RPANDA-package]{RPANDA}}
"_PACKAGE"
| /R/BoskR.R | no_license | oschwery/BoskR | R | false | false | 492 | r |
load("RDATA/forRevision/Sim_SS3_HCA-reads.RData")
ss3.umi <- read.table("DATA/Smartseq3/HCA.UMIcounts.PBMC.txt", header=TRUE)
ss3.umi <- ss3.umi[,colnames(ss3.data)]
countsSim <- simdata@assays@data$umi_counts
compareData <- ss3.umi
library(yarrr)
pdf("simMatching_SeqDepth_SS3-HCA-umi.pdf", height=4, width=8, useDingbats=F)
X <- data.frame( Depth = colSums(countsSim) , Species = "Simulated")
Y <- data.frame( Depth = colSums(compareData) , Species = "Original")
longdata <- rbind(Y, X)
par(mfrow=c(1,2))
par(mar=c(4,5,2,1), mgp = c(3, .5, 0))
pirateplot(formula = Depth ~ Species,
data = longdata,
xlab = "",
ylab = "Sequencing Depth", pal=c("cornflowerblue", "brown1"),
main = "", point.cex=1.1, bar.lwd=1, cex.lab=1.5, cex.axis=1.3,cex.names=1.5)
par(mar=c(5.5,4,2,1), mgp = c(2.5, 1, 0))
plot(ecdf(X$Depth), col="brown1", main="", xlab="Sequencing Depth (millions)",
cex.axis=1.3, cex.lab=1.5, cex.main=1.5, xlim=c(0, max(X$Depth, Y$Depth)))
plot(ecdf(Y$Depth), add=T, col="cornflowerblue")
legend('bottomright', c("Simulated", "EC Data"), col=c("brown1","cornflowerblue"), lwd=3)
dev.off()
pdf("simMatching_CellDetectRate_SS3-HCA-umi.pdf", height=4, width=8, useDingbats=F)
X <- data.frame( Depth = colSums(countsSim!=0) / nrow(countsSim), Species = "Simulated")
Y <- data.frame( Depth = colSums(compareData!=0) / nrow(countsSim), Species = "Original")
longdata <- rbind(Y, X)
par(mfrow=c(1,2))
par(mar=c(4,5,2,1), mgp = c(3, .5, 0))
pirateplot(formula = Depth ~ Species,
data = longdata, ylim=c(0,1),
xlab = "",
ylab = "Detection Rate", pal=c("cornflowerblue", "brown1"),
main = "", point.cex=1.1, bar.lwd=1, cex.lab=1.5, cex.axis=1.3,cex.names=1.5)
par(mar=c(5.5,4,2,1), mgp = c(2.5, 1, 0))
plot(ecdf(X$Depth), col="brown1", main="", xlab="Detection Rate",
cex.axis=1.3, cex.lab=1.5, cex.main=1.5, xlim=c(0, max(X$Depth, Y$Depth)))
plot(ecdf(Y$Depth), add=T, col="cornflowerblue")
legend('bottomright', c("Simulated", "EC Data"), col=c("brown1","cornflowerblue"), lwd=3)
dev.off()
pdf("simMatching_GeneDetectRate_SS3-HCA-umi.pdf", height=4, width=8, useDingbats=F)
set.seed(777)
Genes = rownames(countsSim)
XX <- sample(Genes, 200)
X1 <- apply(countsSim[XX,], 1, function(x) sum(x!=0, na.rm=T)) / dim(countsSim)[2]
X2 <- apply(compareData[XX,], 1, function(x) sum(x!=0, na.rm=T)) / dim(compareData)[2]
X <- data.frame( Depth =X1, Species = "Simulated")
Y <- data.frame( Depth = X2, Species = "Original")
longdata <- rbind(Y, X)
par(mfrow=c(1,2))
par(mar=c(4,5,2,1), mgp = c(3, .5, 0))
pirateplot(formula = Depth ~ Species,
data = longdata,
xlab = "",
ylab = "Detection Rate", pal=c("cornflowerblue", "brown1"),
main = "", point.cex=1.1, bar.lwd=1, cex.lab=1.5, cex.axis=1.3,cex.names=1.5)
par(mar=c(5.5,4,2,1), mgp = c(2.5, 1, 0))
plot(ecdf(X$Depth), col="brown1", main="", xlab="Detection Rate",
cex.axis=1.3, cex.lab=1.5, cex.main=1.5, xlim=c(0, max(X$Depth, Y$Depth)))
plot(ecdf(Y$Depth), add=T, col="cornflowerblue")
legend('bottomright', c("Simulated", "EC Data"), col=c("brown1","cornflowerblue"), lwd=3)
dev.off()
pdf("simMatching_GeneMean_SS3-HCA-umi.pdf", height=4, width=8, useDingbats=F)
X1 <- log(apply(countsSim[XX,], 1, function(x) mean(x, na.rm=T))+1)
X2 <- log(apply(compareData[XX,], 1, function(x) mean(x, na.rm=T))+1)
useg <- names(which(X1<Inf & X2 < Inf))
X <- data.frame( Depth =X1[useg], Species = "Simulated")
Y <- data.frame( Depth = X2[useg], Species = "Original")
longdata <- rbind(Y, X)
par(mfrow=c(1,2))
par(mar=c(4,5,2,1), mgp = c(3, .5, 0))
pirateplot(formula = Depth ~ Species,
data = longdata,
xlab = "",
ylab = "log (mean+1)", pal=c("cornflowerblue", "brown1"),
main = "", point.cex=1.1, bar.lwd=1, cex.lab=1.5, cex.axis=1.3,cex.names=1.5)
par(mar=c(5.5,4,2,1), mgp = c(2.5, 1, 0))
plot(ecdf(X$Depth), col="brown1", main="", xlab="log (mean+1)",
cex.axis=2, cex.lab=2, cex.main=2, xlim=c(0, max(X$Depth, Y$Depth)))
plot(ecdf(Y$Depth), add=T, col="cornflowerblue")
legend('bottomright', c("Simulated", "H1 Data"), col=c("brown1","cornflowerblue"), lwd=3)
dev.off()
pdf("simMatching_GeneSD_SS3-HCA-umi.pdf", height=4, width=8, useDingbats=F)
X1 <- log(apply(countsSim[XX,], 1, function(x) sd(x, na.rm=T))+1)
X2 <- log(apply(compareData[XX,], 1, function(x) sd(x, na.rm=T))+1)
useg <- names(which(X1<Inf & X2 < Inf))
X <- data.frame( Depth =X1[useg], Species = "Simulated")
Y <- data.frame( Depth = X2[useg], Species = "Original")
longdata <- rbind(Y, X)
par(mfrow=c(1,2))
par(mar=c(4,5,2,1), mgp = c(3, .5, 0))
pirateplot(formula = Depth ~ Species,
data = longdata,
xlab = "",
ylab = "log (sd+1)", pal=c("cornflowerblue", "brown1"),
main = "", point.cex=1.1, bar.lwd=1, cex.lab=1.5, cex.axis=1.3,cex.names=1.5)
par(mar=c(5.5,4,2,1), mgp = c(2.5, 1, 0))
plot(ecdf(X$Depth), col="brown1", main="", xlab="log (sd+1)",
cex.axis=2, cex.lab=2, cex.main=2, xlim=c(0, max(X$Depth, Y$Depth)))
plot(ecdf(Y$Depth), add=T, col="cornflowerblue")
legend('bottomright', c("Simulated", "H1 Data"), col=c("brown1","cornflowerblue"), lwd=3)
dev.off()
| /AdditionalAnalysis/SIM_Matching_SS3_HCA-UMI.R | no_license | rhondabacher/scEqualization-Paper | R | false | false | 5,255 | r |
ss3.umi <- read.table("DATA/Smartseq3/HCA.UMIcounts.PBMC.txt", header=TRUE)
ss3.umi <- ss3.umi[,colnames(ss3.data)]
countsSim <- simdata@assays@data$umi_counts
compareData <- ss3.umi
library(yarrr)
pdf("simMatching_SeqDepth_SS3-HCA-umi.pdf", height=4, width=8, useDingbats=F)
X <- data.frame( Depth = colSums(countsSim) , Species = "Simulated")
Y <- data.frame( Depth = colSums(compareData) , Species = "Original")
longdata <- rbind(Y, X)
par(mfrow=c(1,2))
par(mar=c(4,5,2,1), mgp = c(3, .5, 0))
pirateplot(formula = Depth ~ Species,
data = longdata,
xlab = "",
ylab = "Sequencing Depth", pal=c("cornflowerblue", "brown1"),
main = "", point.cex=1.1, bar.lwd=1, cex.lab=1.5, cex.axis=1.3,cex.names=1.5)
par(mar=c(5.5,4,2,1), mgp = c(2.5, 1, 0))
plot(ecdf(X$Depth), col="brown1", main="", xlab="Sequencing Depth (millions)",
cex.axis=1.3, cex.lab=1.5, cex.main=1.5, xlim=c(0, max(X$Depth, Y$Depth)))
plot(ecdf(Y$Depth), add=T, col="cornflowerblue")
legend('bottomright', c("Simulated", "EC Data"), col=c("brown1","cornflowerblue"), lwd=3)
dev.off()
pdf("simMatching_CellDetectRate_SS3-HCA-umi.pdf", height=4, width=8, useDingbats=F)
X <- data.frame( Depth = colSums(countsSim!=0) / nrow(countsSim), Species = "Simulated")
Y <- data.frame( Depth = colSums(compareData!=0) / nrow(countsSim), Species = "Original")
longdata <- rbind(Y, X)
par(mfrow=c(1,2))
par(mar=c(4,5,2,1), mgp = c(3, .5, 0))
pirateplot(formula = Depth ~ Species,
data = longdata, ylim=c(0,1),
xlab = "",
ylab = "Detection Rate", pal=c("cornflowerblue", "brown1"),
main = "", point.cex=1.1, bar.lwd=1, cex.lab=1.5, cex.axis=1.3,cex.names=1.5)
par(mar=c(5.5,4,2,1), mgp = c(2.5, 1, 0))
plot(ecdf(X$Depth), col="brown1", main="", xlab="Detection Rate",
cex.axis=1.3, cex.lab=1.5, cex.main=1.5, xlim=c(0, max(X$Depth, Y$Depth)))
plot(ecdf(Y$Depth), add=T, col="cornflowerblue")
legend('bottomright', c("Simulated", "EC Data"), col=c("brown1","cornflowerblue"), lwd=3)
dev.off()
pdf("simMatching_GeneDetectRate_SS3-HCA-umi.pdf", height=4, width=8, useDingbats=F)
set.seed(777)
Genes = rownames(countsSim)
XX <- sample(Genes, 200)
X1 <- apply(countsSim[XX,], 1, function(x) sum(x!=0, na.rm=T)) / dim(countsSim)[2]
X2 <- apply(compareData[XX,], 1, function(x) sum(x!=0, na.rm=T)) / dim(compareData)[2]
X <- data.frame( Depth =X1, Species = "Simulated")
Y <- data.frame( Depth = X2, Species = "Original")
longdata <- rbind(Y, X)
par(mfrow=c(1,2))
par(mar=c(4,5,2,1), mgp = c(3, .5, 0))
pirateplot(formula = Depth ~ Species,
data = longdata,
xlab = "",
ylab = "Detection Rate", pal=c("cornflowerblue", "brown1"),
main = "", point.cex=1.1, bar.lwd=1, cex.lab=1.5, cex.axis=1.3,cex.names=1.5)
par(mar=c(5.5,4,2,1), mgp = c(2.5, 1, 0))
plot(ecdf(X$Depth), col="brown1", main="", xlab="Detection Rate",
cex.axis=1.3, cex.lab=1.5, cex.main=1.5, xlim=c(0, max(X$Depth, Y$Depth)))
plot(ecdf(Y$Depth), add=T, col="cornflowerblue")
legend('bottomright', c("Simulated", "EC Data"), col=c("brown1","cornflowerblue"), lwd=3)
dev.off()
pdf("simMatching_GeneMean_SS3-HCA-umi.pdf", height=4, width=8, useDingbats=F)
X1 <- log(apply(countsSim[XX,], 1, function(x) mean(x, na.rm=T))+1)
X2 <- log(apply(compareData[XX,], 1, function(x) mean(x, na.rm=T))+1)
useg <- names(which(X1<Inf & X2 < Inf))
X <- data.frame( Depth =X1[useg], Species = "Simulated")
Y <- data.frame( Depth = X2[useg], Species = "Original")
longdata <- rbind(Y, X)
par(mfrow=c(1,2))
par(mar=c(4,5,2,1), mgp = c(3, .5, 0))
pirateplot(formula = Depth ~ Species,
data = longdata,
xlab = "",
ylab = "log (mean+1)", pal=c("cornflowerblue", "brown1"),
main = "", point.cex=1.1, bar.lwd=1, cex.lab=1.5, cex.axis=1.3,cex.names=1.5)
par(mar=c(5.5,4,2,1), mgp = c(2.5, 1, 0))
plot(ecdf(X$Depth), col="brown1", main="", xlab="log (mean+1)",
cex.axis=2, cex.lab=2, cex.main=2, xlim=c(0, max(X$Depth, Y$Depth)))
plot(ecdf(Y$Depth), add=T, col="cornflowerblue")
legend('bottomright', c("Simulated", "H1 Data"), col=c("brown1","cornflowerblue"), lwd=3)
dev.off()
pdf("simMatching_GeneSD_SS3-HCA-umi.pdf", height=4, width=8, useDingbats=F)
X1 <- log(apply(countsSim[XX,], 1, function(x) sd(x, na.rm=T))+1)
X2 <- log(apply(compareData[XX,], 1, function(x) sd(x, na.rm=T))+1)
useg <- names(which(X1<Inf & X2 < Inf))
X <- data.frame( Depth =X1[useg], Species = "Simulated")
Y <- data.frame( Depth = X2[useg], Species = "Original")
longdata <- rbind(Y, X)
par(mfrow=c(1,2))
par(mar=c(4,5,2,1), mgp = c(3, .5, 0))
pirateplot(formula = Depth ~ Species,
data = longdata,
xlab = "",
ylab = "log (sd+1)", pal=c("cornflowerblue", "brown1"),
main = "", point.cex=1.1, bar.lwd=1, cex.lab=1.5, cex.axis=1.3,cex.names=1.5)
par(mar=c(5.5,4,2,1), mgp = c(2.5, 1, 0))
plot(ecdf(X$Depth), col="brown1", main="", xlab="log (sd+1)",
cex.axis=2, cex.lab=2, cex.main=2, xlim=c(0, max(X$Depth, Y$Depth)))
plot(ecdf(Y$Depth), add=T, col="cornflowerblue")
legend('bottomright', c("Simulated", "H1 Data"), col=c("brown1","cornflowerblue"), lwd=3)
dev.off()
|
\name{grad}
\Rdversion{1.1}
\alias{grad}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{gradient function (internal function)}
\description{
Numerical gradient for a function at a given value (internal).
}
\usage{
grad(func, x, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{func}{
Function taking a vector argument x (returns a vector of length>=1)
}
\item{x}{
vector of arguments for where the gradient is wanted.
}
\item{\dots}{
other arguments to the function
}
}
\details{
\code{(func(x+delta, ...) - func(x-delta, ...)) / (2*delta)}, where
\code{delta} is the cube root of the machine precision times
\code{pmax(1, abs(x))}.
}
\value{
A vector if func(x) has length 1, otherwise a matrix with rows for x
and columns for func(x).
}
%% \references{
%% %% ~put references to the literature/web site here ~
%% }
\author{
Mark Clements.
}
%% \note{
%% %% ~~further notes~~
%% }
%% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
numDelta()
}
%% \examples{
%% }
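The central-difference rule described in Details is easy to reproduce outside the package; a standalone sketch for scalar-valued functions (the matrix-valued case from Value is omitted, and this is not rstpm2's internal code):

```r
# Central finite differences: (f(x+h) - f(x-h)) / (2h), with the step size
# h = eps^(1/3) * pmax(1, abs(x)) as described in the Details section.
num_grad <- function(func, x, ...) {
  h <- .Machine$double.eps^(1/3) * pmax(1, abs(x))
  vapply(seq_along(x), function(i) {
    xp <- x; xm <- x
    xp[i] <- x[i] + h[i]
    xm[i] <- x[i] - h[i]
    (func(xp, ...) - func(xm, ...)) / (2 * h[i])
  }, numeric(1))
}

num_grad(function(x) sum(x^2), c(1, 2, 3))  # analytic gradient is c(2, 4, 6)
```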
| /man/grad.Rd | no_license | guhjy/rstpm2 | R | false | false | 1,246 | rd |
rm(list=ls(all=TRUE))
library('ggplot2')
library('scales')
library('grDevices')
plotlist = c(1,2,4,5,6)
for (i in c(1:5))
{
j = plotlist[i]
fname = paste("allresults_",j,".RData",sep="")
load(fname)
newfname = paste("newallresults_",j,".RData",sep="")
load(newfname)
fullhseq = hseq
fullhseq = fullhseq[-length(fullhseq)]
fullhseq = c(fullhseq, newhseq)
fulll1errs = l1errs
fulll1errs = fulll1errs[-length(fulll1errs)]
fulll1errs = c(fulll1errs, newpdferrl1)
linmod = lm(log(fulll1errs[-1]) ~ log(fullhseq[-1]))
mypreds = data.frame(x=fullhseq[-1],y=as.numeric(exp(linmod$fitted.values)))
fullinferrs = inferrs
fullinferrs = fullinferrs[-length(fullinferrs)]
fullinferrs = c(fullinferrs, newpdferrinf)
fullkserrs = kserrs
fullkserrs = fullkserrs[-length(fullkserrs)]
fullkserrs = c(fullkserrs, newksnorm)
myl1 = data.frame(x=fullhseq,y=fulll1errs)
myinf = data.frame(x=fullhseq,y=fullinferrs)
myks = data.frame(x=fullhseq,y=fullkserrs)
mydat = data.frame(x=fullhseq,y=fulll1errs,norm="L1")
mydat = rbind(mydat,data.frame(x=fullhseq,y=fullinferrs,norm="Linf"))
mydat = rbind(mydat,data.frame(x=fullhseq,y=fullkserrs,norm="K-S"))
myplot <- ggplot(data=mydat, aes(x=x,y=y,group=norm,color=norm))
myplot <- myplot + theme_bw() + theme(plot.background = element_rect(fill='white'))
myxticks = sort(10^(round(log(fullhseq)/log(10)*10)/10))
rawyticks = round(log(mydat$y)/log(10)*10)/10
rawyticks = round(seq(from=min(rawyticks),to=max(rawyticks),length.out=length(myxticks))*1)/1
myyticks = unique(10^rawyticks)
myplot <- myplot + scale_x_log10(breaks = fullhseq)
myplot <- myplot + theme(axis.text.x = element_text(angle=90,hjust=1))
myplot <- myplot + scale_y_log10(breaks = myyticks,
labels = trans_format("log10", math_format(10^.x)))
mytitle = paste("Example",j,sep=" ")
# myplot <- myplot + labs(title = mytitle, x="h (temporal step size)", y="error")
myplot <- myplot + labs(x="h (temporal step size)", y="error")
myplot <- myplot + annotate("text",label=mytitle,y=max(l1errs),x=0.002)
myplot <- myplot + geom_line() + geom_point()
myplot <- myplot + geom_line(aes(x=x,y=y,group="L1"),data=mypreds,col="black")
myslope = 0.001*round(1000*as.numeric(linmod$coefficients[2]))
myplot <- myplot + annotate("text",label=paste("slope=",myslope,sep=''),x=fullhseq[6],y=fulll1errs[6]+0.01,size=4)
fname = paste("convplot_",j,".eps",sep="")
ggsave(filename=fname, plot=myplot, width=5, height=4, device=cairo_ps)
fname = paste("convplot_",j,".pdf",sep="")
ggsave(filename=fname, plot=myplot, width=5, height=4)
}
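The `lm(log(fulll1errs) ~ log(fullhseq))` fit above estimates the empirical order of convergence as the slope on log-log axes: if err = C·h^p then log(err) = log(C) + p·log(h). A self-contained check on exact first-order data:

```r
# Synthetic errors decaying exactly like h^1: the log-log slope recovers p = 1.
hseq <- 2^-(1:8)
errs <- 0.5 * hseq        # err = C * h^p with C = 0.5, p = 1
ordfit <- lm(log(errs) ~ log(hseq))
slope  <- unname(coef(ordfit)[2])
round(slope, 3)  # 1
```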
| /DTQpaper/convergence/plotter.R | no_license | hbhat4000/sdeinference | R | false | false | 2,655 | r |
\name{ggflow_plot}
\alias{ggflow_plot}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
2D scatter plots of a flowFrame object with ggplot2
}
\description{
This function reads a flowFrame object from flowCore and plots a 2-D scatter plot of values for single-cell data.
}
\usage{
ggflow_plot(flowFrame,
x_value = "SSC-H",
y_value = "FSC-H",
logx = TRUE,
logy = TRUE,
color_v = "standard",
x_lim = NA,
y_lim = NA,
contour = TRUE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{flowFrame}{
a flowFrame object from flowCore.
}
\item{x_value}{
a string describing the parameter to be plotted on the x-axis e.g. x_value = "SSC-H"
}
\item{y_value}{
a string describing the parameter to be plotted on the y-axis e.g. y_value = "FSC-H"
}
\item{color_v}{
this option specifies the density gradient color scale and defaults to blue, yellow, and red ("standard"). You can either input a pre-defined color scheme or just a color vector (such as c("blue","red")). The currently pre-loaded schemes are "standard","bluered","bellpepper","londonfog","deepblue","parissummer"
}
\item{logx}{
if TRUE, a log10 scale is used on the x-axis
}
\item{logy}{
if TRUE, a log10 scale is used on the y-axis
}
\item{x_lim}{
a vector with min and max values displayed on the x-axis
}
\item{y_lim}{
a vector with min and max values displayed on the y-axis
}
\item{contour}{
if TRUE (default), it will print out a contour plot around the dots
}
}
\value{
This function returns a ggplot2 object. It can be used as input to the other functions in this package.
}
\author{
Francesco Vallania
}
\examples{
#load data from flowCore
data(GvHD)
#plot data
ggflow_plot(GvHD[[1]],x_value="FL4-H",y_value="FL1-H")
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{ ggflow }
| /man/ggflow_plot.Rd | no_license | nbafrank/ggflow | R | false | false | 2,007 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/irced.r
\name{bot_connect}
\alias{bot_connect}
\title{Make bot connection}
\usage{
bot_connect(irc_obj, handler)
}
\arguments{
\item{irc_obj}{irc object}
\item{handler}{bot handler R function}
}
\description{
Make bot connection
}
| /man/bot_connect.Rd | no_license | hrbrmstr/irced | R | false | true | 311 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/survFitTT.survDataCstExp.R
\name{survFitTT.survDataCstExp}
\alias{survFitTT.survDataCstExp}
\title{Fits a Bayesian concentration-response model for target-time survival analysis}
\usage{
\method{survFitTT}{survDataCstExp}(
data,
target.time = NULL,
lcx = c(5, 10, 20, 50),
n.chains = 3,
quiet = FALSE,
...
)
}
\arguments{
\item{data}{an object of class \code{survData}}
\item{target.time}{the chosen endpoint to evaluate the effect of the chemical compound
concentration, by default the last time point available for
all concentrations}
\item{lcx}{desired values of \eqn{x} (in percent) for which to compute
\eqn{LC_x}.}
\item{n.chains}{number of MCMC chains, the minimum required number of chains
is 2}
\item{quiet}{if \code{TRUE}, does not print messages and progress bars from
JAGS}
\item{\dots}{Further arguments to be passed to generic methods}
}
\value{
The function returns an object of class \code{survFitTT}, which is a
list with the following information:
\item{estim.LCx}{a table of the estimated \eqn{LC_x} along with their 95\%
credible intervals}
\item{estim.par}{a table of the estimated parameters (medians) and 95\%
credible intervals}
\item{det.part}{the name of the deterministic part of the used model}
\item{mcmc}{an object of class \code{mcmc.list} with the posterior
distribution}
\item{warnings}{a table with warning messages}
\item{model}{a JAGS model object}
\item{parameters}{a list of parameter names used in the model}
\item{n.chains}{an integer value corresponding to the number of chains used
for the MCMC computation}
\item{n.iter}{a list of two indices indicating the beginning and the end of
monitored iterations}
\item{n.thin}{a numerical value corresponding to the thinning interval}
\item{jags.data}{a list of the data passed to the JAGS model}
\item{transformed.data}{the \code{survData} object passed to the function}
\item{dataTT}{the dataset with which the parameters are estimated}
}
\description{
This function estimates the parameters of a concentration-response
model for target-time survival analysis using Bayesian inference. In this model,
the survival rate of individuals at a given time point (called target time) is modeled
as a function of the chemical compound concentration. The actual number of
surviving individuals is then modeled as a stochastic function of the survival
rate. Details of the model are presented in the
vignette accompanying the package.
}
\details{
The function returns
parameter estimates of the concentration-response model and estimates of the so-called
\eqn{LC_x}, that is the concentration of chemical compound required to get an \eqn{(1 - x/100)} survival rate.
}
\examples{
# (1) Load the data
data(cadmium1)
# (2) Create an object of class "survData"
dat <- survData(cadmium1)
\donttest{
# (3) Run the survFitTT function with the log-logistic
# binomial model
out <- survFitTT(dat, lcx = c(5, 10, 15, 20, 30, 50, 80),
quiet = TRUE)
}
}
\keyword{estimation}
| /man/survFitTT.survDataCstExp.Rd | no_license | cran/morse | R | false | true | 3,061 | rd |
L <- 5 # number of knots to use for the basis functions
cvs <- seq_len(5) # cross-validation sets
process <- "ebf" # ebf: empirical basis functions, gsk: gaussian kernels
margin <- "gsk" # ebf: empirical basis functions, gsk: gaussian kernels
times <- c("current", "future")
for (cv in cvs) {
for (time in times) {
# don't lose which setting we're running
rm(list=setdiff(ls(),
c("cv", "cvs", "L", "process", "margin", "time", "times")))
gc()
source(file = "./package_load.R", chdir = T)
source(file = "./fitmodel.R")
}
}
| /markdown/precip-rev/fit-ebf-5.R | permissive | sammorris81/extreme-decomp | R | false | false | 597 | r |
#' An R alternative to the lodash \code{get} in JavaScript
#'
#' This is a handy function for retrieving items deep in a nested structure
#' without causing error if not found
#'
#' @param sourceList The \code{list()}/\code{c()} that is to be searched for the element
#' @param path A string that can be separated by [,] or ., the string \code{"elementname1.1.elementname"}
#' is equivalent to \code{"elementname1[[1]]]elementname"}. Note that the function doesn't check
#' the validity of the path - it only separates and tries to address that element with `[[]]`.
#' @param default The value to return if the element isn't found
#'
#' @return Returns a sub-element from \code{sourceList} or the \code{default} value.
#' @importFrom stringr str_detect str_split str_replace_all
#' @examples
#' source <- list(a = list(b = 1, `odd.name` = 'I hate . in names', c(1,2,3)))
#' retrieve(source, "a.b")
#' retrieve(source, "a.b.1")
#' retrieve(source, "a.odd\\.name")
#' retrieve(source, "a.not_in_list")
#'
#' @family lodash similar functions
#' @export
retrieve <- function(sourceList, path, default = NA) {
path <- str_split(path, "(?<!\\\\)[\\[\\].]+")[[1]]
for (element in path) {
element <- str_replace_all(element, "\\\\", "")
if (str_detect(element, "^[0-9]+$")) {
element <- as.numeric(element)
if (length(sourceList) < element || element <= 0) {
return(default)
}
} else if (!(element %in% names(sourceList))) {
return(default)
}
sourceList <- sourceList[[element]]
}
return(sourceList)
}
#' An R alternative to the lodash \code{has} in JavaScript
#'
#' This is a handy function for checking if item exist in a nested structure
#'
#' @param sourceList The \code{list()}/\code{c()} that is to be searched for the element
#' @param path A string that can be separated by [,] or ., the string \code{"elementname1.1.elementname"}
#' is equivalent to \code{"elementname1[[1]]elementname"}. Note that the function doesn't check
#' the validity of the path - it only separates and tries to address that element with `[[]]`.
#'
#' @return Returns a boolean.
#' @importFrom stringr str_detect str_split
#' @examples
#' has(list(a = list(b = 1)), "a.b")
#'
#' @family lodash similar functions
#' @export
has <- function(sourceList, path) {
uniqueNotFoundId <- "__@GMISC_NOT_FOUND@__"
value <- retrieve(sourceList, path, default=uniqueNotFoundId)
if (length(value) > 1) {
return(TRUE)
}
return (value != uniqueNotFoundId)
}
| /R/retrieve_and_has.R | no_license | arcresu/Gmisc | R | false | false | 2,420 | r |
##############################################
# WASH Benefits Kenya
# Primary outcome analysis
# Table with diarrhea results
# by Jade Benjamin-Chung (jadebc@berkeley.edu)
##############################################
rm(list=ls())
load("~/Dropbox/WBK-primary-analysis/results/jade/diar_prev.RData")
load("~/Dropbox/WBK-primary-analysis/results/jade/diar_pr_unadj.RData")
load("~/Dropbox/WBK-primary-analysis/results/jade/diar_rd_unadj.RData")
load("~/Dropbox/WBK-primary-analysis/results/jade/diarr_pval_unadj.RData")
load("~/Dropbox/WBK-primary-analysis/Results/jade/diarr-PR-adj.RData")
load("~/Dropbox/WBK-primary-analysis/results/jade/diarr_pval_adj.RData")
load("C:/Users/andre/Dropbox/WBK-primary-analysis/results/jade/diar_prev.RData")
load("C:/Users/andre/Dropbox/WBK-primary-analysis/results/jade/diar_pr_unadj.RData")
load("C:/Users/andre/Dropbox/WBK-primary-analysis/results/jade/diar_rd_unadj.RData")
load("C:/Users/andre/Dropbox/WBK-primary-analysis/results/jade/diarr_pval_unadj.RData")
load("C:/Users/andre/Dropbox/WBK-primary-analysis/Results/jade/diarr-PR-adj.RData")
load("C:/Users/andre/Dropbox/WBK-primary-analysis/results/jade/diarr_pval_adj.RData")
try(source("~/Documents/CRG/wash-benefits/kenya/src/primary/tables/0-table-base-functions.R"))
try(source("C:/Users/andre/Dropbox/WBK-primary-analysis/Results/Table/0-table-base-functions.R"))
ls()
diar_t1_n_j
#----------------- prevalence -----------------
prevn12=paste(sprintf("%0.01f",diar_t12_prev_j[,1]*100),"$\\%$",sep="")
#----------------- risk difference - unadjusted -----------------
diar_h1_rd_unadj_j=diar_h1_rd_unadj_j[,-2]
diar_h2_rd_unadj_j=diar_h2_rd_unadj_j[,-2]
rd.h1.unadj.pt.est=c("",apply(as.matrix(diar_h1_rd_unadj_j),1,pt.est.ci.f,decimals=1,scale=100))
rd.h2.unadj.pt.est=c("",apply(as.matrix(diar_h2_rd_unadj_j),1,pt.est.ci.f,decimals=1,scale=100))
#----------------- risk difference - adjusted -----------------
rd.h1.adj.pt.est=c("",apply(as.matrix(diar_h1_rd_adj_j),1,pt.est.ci.f,decimals=1,scale=100))
rd.h2.adj.pt.est=c("",apply(as.matrix(diar_h2_rd_adj_j),1,pt.est.ci.f,decimals=1,scale=100))
#----------------- rr table -----------------
h1.tab=cbind(prevn12,rd.h1.unadj.pt.est,rd.h1.adj.pt.est)
h2.tab=cbind(prevn12[c(6,3:5)],
rd.h2.unadj.pt.est,rd.h2.adj.pt.est)
diarr.table=rbind(h1.tab,h2.tab)
lab=c(rownames(diar_t12_prev_j),"WSH","Water","Sanitation","Handwashing")
diarr.table=cbind(lab,diarr.table)
rownames(diarr.table)=NULL
save(diarr.table,file="~/Dropbox/WBK-primary-analysis/Results/jade/table-diarrhea.RData")
| /table-diarrhea_Lancet_update.R | no_license | amertens/Wash-Benefits-Kenya | R | false | false | 2,559 | r |
# 06/07/2021 merging dwi, flair, bravo from csr, k, sp, r01 =========
library(dplyr) # inner_join, %>%
library(purrr) # map, reduce
scsvroot = 'H:/adluru/StrokeAndDiffusionProject/uwstrokeproject/shapiro/csvs/'
c('csr', 'sp', 'k', 'r01') %>%
  map(function(s) map(c('_dwi_adc.csv', '_dwi_b0.csv', '_flair.csv', '_bravo.csv'),
                      ~paste0(s, .x)) %>%
        map(~read.csv(paste0(scsvroot, .x))) %>%
        reduce(inner_join, by = 'id') %>%
        write.csv(paste0(scsvroot, s, '_joined.csv'), row.names = F, quote = F))
| /shapiro/groundwork.R | no_license | nadluru/uwstrokeproject | R | false | false | 431 | r |
testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.35202681265618e+77, 9.53818252170339e+295, 1.22810536108214e+146, 4.12396251261199e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(5L, 7L)))
result <- do.call(CNull:::communities_individual_based_sampling_alpha,testlist)
str(result)
| /CNull/inst/testfiles/communities_individual_based_sampling_alpha/AFL_communities_individual_based_sampling_alpha/communities_individual_based_sampling_alpha_valgrind_files/1615777069-test.R | no_license | akhikolla/updatedatatype-list2 | R | false | false | 362 | r |
rankhospital <- function(state, outcome, num = "best") {
## Read outcome data
out <- read.csv("outcome-of-care-measures.csv", colClasses="character")
states <- out$State
hospNames <- out$Hospital.Name
## Check that state and outcome are valid
if (!(state %in% states)) {
stop("invalid state")
}
stateOutcomes <- out[state==states,]
if (outcome == "heart attack") {
col=11
} else if(outcome == "heart failure") {
col=17
} else if (outcome == "pneumonia") {
col = 23
} else {
stop("invalid outcome")
}
## Return hospital name in that state with the given rank
hospRate <- cbind(stateOutcomes["Hospital.Name"],
as.numeric(stateOutcomes[,col]))
validHosp <- hospRate[complete.cases(hospRate),]
nhosp <- length(validHosp[,1])
if (num=="best") {
num <- 1
} else if (num=="worst") {
num <- nhosp
}
orderedHosp <- validHosp[order(validHosp[,2], validHosp[,1]), ] # order by rate, then name
if (num <= nhosp) {
ranked <- orderedHosp[num,1]
} else {
ranked <- NA
}
## 30-day death rate
ranked
}
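## Example usage (hypothetical calls -- they assume
## "outcome-of-care-measures.csv" sits in the working directory, and the
## returned hospital names depend on that file):
# rankhospital("TX", "heart failure", 4)
# rankhospital("MD", "heart attack", "worst")
# rankhospital("MN", "heart attack", 5000) # NA when rank exceeds hospital count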
| /R/coursera/rankhospital.R | no_license | mishagam/progs | R | false | false | 1,196 | r |
#' Determine all n-tuples using the elements of a set.
#'
#' Determine all n-tuples using the elements of a set. This is really just a
#' simple wrapper for expand.grid, so it is not optimized.
#'
#' @param set a set
#' @param n length of each tuple
#' @param repeats if set contains duplicates, should the result contain them as well?
#' @param list tuples as list?
#' @return a matrix whose rows are the n-tuples
#' @export
#' @examples
#'
#' tuples(1:2, 3)
#' tuples(1:2, 3, list = TRUE)
#'
#' apply(tuples(c("x","y","z"), 3), 1, paste, collapse = "")
#'
#' # multinomial coefficients
#' r <- 2 # number of variables, e.g. x, y
#' n <- 2 # power, e.g. (x+y)^2
#' apply(burst(n,r), 1, function(v) factorial(n)/ prod(factorial(v))) # x, y, xy
#' mp("x + y")^n
#'
#' r <- 2 # number of variables, e.g. x, y
#' n <- 3 # power, e.g. (x+y)^3
#' apply(burst(n,r), 1, function(v) factorial(n)/ prod(factorial(v)))
#' mp("x + y")^n
#'
#' r <- 3 # number of variables, e.g. x, y, z
#' n <- 2 # power, e.g. (x+y+z)^2
#' apply(burst(n,r), 1, function(v) factorial(n)/ prod(factorial(v))) # x, y, z, xy, xz, yz
#' mp("x + y + z")^n
#'
#'
tuples <- function(set, n = length(set), repeats = FALSE, list = FALSE){
## determine how big the output will be
r <- length(set)
nCombos <- r^n
## expand.grid really does all the work in this function, so
## setup the interior part of expand.grid.
## sets4call looks like "set2, set2, set2" with n = 3
out <- do.call(expand.grid, replicate(n, set, simplify = FALSE))
out <- unname(as.matrix(out))
out <- out[,n:1]
  ## delete duplicates
  if(!repeats) out <- unique(out)
  ## do list (rows may have been removed above, so count rows instead of using nCombos)
  if(list) out <- unname(split(out, rep(seq_len(nrow(out)), n)))
## return
out
}
| /R/tuples.R | no_license | dkahle/mpoly | R | false | false | 1,703 | r |
# Exercise 2: More ggplot2 Grammar
# Install and load `ggplot2`
# install.packages("ggplot2") # if needed
library("ggplot2")
# For this exercise you will again be working with the `diamonds` data set.
# Use `?diamonds` to review details about this data set
?diamonds
## Statistical Transformations
# Draw a bar chart of the diamonds data, organized by cut
# The height of each bar is based on the "count" (number) of diamonds with that cut
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut))
# Use the `stat_count` to apply the statistical transformation "count" to the diamonds
# by cut. You do not need a separate geometry layer!
ggplot(data = diamonds) +
stat_count(mapping = aes(x = cut))
# Use the `stat_summary` function to draw a chart with a summary layer.
# Map the x-position to diamond `cut`, and the y-position to diamond `depth`
# Bonus: use `min` as the function ymin, `max` as the function ymax, and `median` as the function y
ggplot(data = diamonds) +
stat_summary(mapping = aes(x = cut, y = depth),
fun.ymin = min, fun.ymax = max, fun.y = median)
## Position Adjustments
# Draw a bar chart of diamond data organized by cut, with each bar filled by clarity.
# You should see a _stacked_ bar chart.
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, fill = clarity))
# Draw the same chart again, but with each element positioned to "fill" the y axis
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, fill = clarity), position = "fill")
# Draw the same chart again, but with each element positioned to "dodge" each other
ggplot(data = diamonds) +
geom_bar(mapping = aes(x = cut, fill = clarity), position = "dodge")
# Draw a plot with point geometry with the x-position mapped to `cut` and the y-position mapped to `clarity`
# This creates a "grid" grouping the points
ggplot(data = diamonds) +
geom_point(mapping = aes(x = cut, y = clarity))
# Use the "jitter" position adjustment to keep the points from all overlapping!
# (This works a little better with a sample of diamond data, such as from the previous exercise).
ggplot(data = diamonds) +
geom_point(mapping = aes(x = cut, y = clarity), position = "jitter")
## Scales
# Draw a "boxplot" (with `geom_boxplot()`) for the diamond's price (y) by color (x)
ggplot(data = diamonds) +
geom_boxplot(mapping = aes(x = color, y = price))
# This has a lot of outliers, making it harder to read. To fix this, draw the same plot but
# with a _logarithmic_ scale for the y axis.
ggplot(data = diamonds) +
geom_boxplot(mapping = aes(x = color, y = price)) +
scale_y_log10()
# For another version, draw the same plot but with `violin` geometry instead of `boxplot` geometry!
# How does the logarithmic scale change the data presentation?
ggplot(data = diamonds) +
geom_violin(mapping = aes(x = color, y = price)) +
scale_y_log10()
# Another interesting plot: draw a plot of the diamonds price (y) by carat (x), using a heatmap of 2d bins
# (geom_bin2d)
# What happens when you make the x and y channels scale logarithmically?
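# One possible answer (a sketch; aesthetic choices here are assumptions):
ggplot(data = diamonds) +
  geom_bin2d(mapping = aes(x = carat, y = price))
# With logarithmic x and y scales, the price-carat relationship looks far more linear:
ggplot(data = diamonds) +
  geom_bin2d(mapping = aes(x = carat, y = price)) +
  scale_x_log10() +
  scale_y_log10()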
# Draw a scatter plot for the diamonds price (y) by carat (x). Color each point by the clarity
# (Remember, this will take a while. Use a sample of the diamonds for faster results)
# Change the color of the previous plot using a ColorBrewer scale of your choice. What looks nice?
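# A possible answer (the sample size of 3000 is an arbitrary assumption):
diamonds_sample <- diamonds[sample(nrow(diamonds), 3000), ]
ggplot(data = diamonds_sample) +
  geom_point(mapping = aes(x = carat, y = price, color = clarity))
# The same plot with a ColorBrewer scale ("Spectral" is just one choice with
# enough classes for the 8 clarity levels):
ggplot(data = diamonds_sample) +
  geom_point(mapping = aes(x = carat, y = price, color = clarity)) +
  scale_color_brewer(palette = "Spectral")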
## Coordinate Systems
# Draw a bar chart with x-position and fill color BOTH mapped to cut
# For best results, SET the `width` of the geometry to be 1 (fill plot, no space between)
# You can save this to a variable for easier modifications
# Draw the same chart, but with the coordinate system flipped
# Draw the same chart, but in a polar coordinate system. Now you have a Coxcomb chart!
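# One possible answer (saving the base chart to a variable for reuse):
cut_chart <- ggplot(data = diamonds) +
  geom_bar(mapping = aes(x = cut, fill = cut), width = 1)
cut_chart                  # plain bar chart
cut_chart + coord_flip()   # flipped coordinate system
cut_chart + coord_polar()  # polar coordinates: a Coxcomb chart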
## Facets
# Take the scatter plot of price by carat data (colored by clarity) and add _facets_ based on
# the diamond's `color`
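# A possible answer (re-creating the colored scatter plot, then faceting by color):
ggplot(data = diamonds) +
  geom_point(mapping = aes(x = carat, y = price, color = clarity)) +
  facet_wrap(~color)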
## Saving Plots
# Use the `ggsave()` function to save one of your plots (the most recent one generated) to disk.
# Name the output file "my-plot.png".
# Make sure you've set the working directory!!
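# For example (`ggsave()` defaults to the most recently displayed plot; this
# assumes the working directory is writable):
ggsave("my-plot.png")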
| /exercise-2/exercise.R | permissive | AnnieeeC/m14-ggplot2 | R | false | false | 4,072 | r |
library(MeTo)
### Name: Rn
### Title: Net radiation (Rn)
### Aliases: Rn
### ** Examples
# --------------------------------------------
# Daily period
# --------------------------------------------
Rn(x = 105, n = 8.5, elev = 2, actVP = 2.85, Tmax = 34.8,
Tmin = 25.6, lat.deg = 13.73)
Rn(x = 135, elev = 1, Rs = 14.5, Tmax = 25.1, Tmin = 19.1,
lat.deg = -22.9, actVP = 2.1)
# --------------------------------------------
# Hourly period
# --------------------------------------------
Rn(x = as.POSIXct(c('2018-10-01 14:30', '2018-10-01 15:30')), Tmean = c(38, 37.8),
Rhmean = c(52, 52.2), Rs = c(2.450, 2.1), elev = 8, lat.deg = 16.2,
long.deg = 343.75, control = list(Lz = 15))
Rn(x = as.POSIXct('2018-10-01 14:30'), Tmean = 38, Rhmean = 52, tl = 1, Rs = 2.450,
elev = 8, lat.deg = 16.2, long.deg = 343.75, control = list(Lz = 15))
| /data/genthat_extracted_code/MeTo/examples/Rn.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 864 | r |
#' @import methods
NULL
rex::register_shortcuts("covr")
#' Calculate the coverage on an environment after evaluating some expressions.
#'
#' This function uses non-standard evaluation, so it is best used in interactive
#' sessions.
#' @param env the environment to take function definitions from
#' @param ... one or more expressions to be evaluated.
#' @param enc The enclosing environment from which the expressions should be
#' evaluated
#' @export
environment_coverage <- function(env, ..., enc = parent.frame()) {
exprs <- dots(...)
environment_coverage_(env, exprs, enc = enc)
}
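## Illustrative usage (hypothetical names; kept as comments so nothing is
## evaluated when the package is loaded):
## e <- new.env()
## sys.source("my_functions.R", envir = e)
## cov <- environment_coverage(e, my_fun(1), my_fun(-1))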
#' Calculate the coverage on an environment after evaluating some expressions.
#'
#' This function does not use non-standard evaluation, so it is more appropriate
#' for use inside other functions.
#' @inheritParams environment_coverage
#' @param exprs a list of parsed expressions to be evaluated.
#' @export
environment_coverage_ <- function(env, exprs, enc = parent.frame()) {
clear_counters()
replacements <-
c(replacements_S4(env),
Filter(Negate(is.null), lapply(ls(env, all.names = TRUE), replacement, env = env))
)
on.exit(lapply(replacements, reset), add = TRUE)
lapply(replacements, replace)
for (expr in exprs) {
eval(expr, enc)
}
res <- as.list(.counters)
clear_counters()
class(res) <- "coverage"
res
}
#' Calculate test coverage for a specific function.
#'
#' @param fun name of the function.
#' @param env environment the function is defined in.
#' @param ... expressions to run.
#' @param enc the enclosing environment in which to run the expressions.
#' @export
function_coverage <- function(fun, ..., env = NULL, enc = parent.frame()) {
if (is.function(fun)) {
env <- environment(fun)
# get name of function, stripping preceding blah:: if needed
fun <- rex::re_substitutes(deparse(substitute(fun)), rex::regex(".*:::?"), "")
}
exprs <- dots(...)
clear_counters()
replacement <- if (!is.null(env)) {
replacement(fun, env)
} else {
replacement(fun)
}
on.exit(reset(replacement), add = TRUE)
replace(replacement)
for (expr in exprs) {
eval(expr, enc)
}
res <- as.list(.counters)
clear_counters()
for (i in seq_along(res)) {
display_name(res[[i]]$srcref) <- generate_display_name(res[[i]], path = NULL)
class(res[[i]]) <- "expression_coverage"
}
class(res) <- "coverage"
res
}
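## Illustrative usage (the function name is hypothetical; commented so it is
## not run at package load):
## cov <- function_coverage("format_money", format_money(1.5), format_money(-2))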
#' Calculate test coverage for a package
#'
#' @param path file path to the package
#' @param ... extra expressions to run
#' @param type run the package \sQuote{test}, \sQuote{vignette},
#' \sQuote{example}, \sQuote{all}, or \sQuote{none}. The default is
#' \sQuote{test}.
#' @param relative_path whether to output the paths as relative or absolute
#' paths.
#' @param quiet whether to load and compile the package quietly
#' @param clean whether to clean temporary output files after running.
#' @param exclusions a named list of files with the lines to exclude from each file.
#' @param exclude_pattern a search pattern to look for in the source to exclude a particular line.
#' @param exclude_start a search pattern to look for in the source to start an exclude block.
#' @param exclude_end a search pattern to look for in the source to stop an exclude block.
#' @param use_subprocess whether to run the code in a separate subprocess.
#' Needed for compiled code and many packages using S4 classes.
#' @seealso exclusions
#' @export
package_coverage <- function(path = ".",
...,
type = c("test", "vignette", "example", "all", "none"),
relative_path = TRUE,
quiet = TRUE,
clean = TRUE,
exclusions = NULL,
exclude_pattern = getOption("covr.exclude_pattern"),
exclude_start = getOption("covr.exclude_start"),
exclude_end = getOption("covr.exclude_end"),
use_subprocess = TRUE
) {
pkg <- devtools::as.package(path)
type <- match.arg(type)
if (type == "all") {
# store the args that were called
called_args <- as.list(match.call())[-1]
# remove the type
called_args$type <- NULL
res <- list(
test = do.call(Recall, c(called_args, type = "test")),
vignette = do.call(Recall, c(called_args, type = "vignette")),
example = do.call(Recall, c(called_args, type = "example"))
)
attr(res, "package") <- pkg
class(res) <- "coverages"
return(res)
}
dots <- dots(...)
sources <- sources(pkg$path)
tmp_lib <- tempdir()
# if there are compiled components to a package we have to run in a subprocess
if (length(sources)) {
flags <- c(CFLAGS = "-g -O0 -fprofile-arcs -ftest-coverage",
CXXFLAGS = "-g -O0 -fprofile-arcs -ftest-coverage",
FFLAGS = "-g -O0 -fprofile-arcs -ftest-coverage",
FCFLAGS = "-g -O0 -fprofile-arcs -ftest-coverage",
LDFLAGS = "--coverage")
if (is_windows()) {
# workaround for https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=16384
# LDFLAGS is ignored on Windows and we don't want to override PKG_LIBS if
# it is set, so use SHLIB_LIBADD
flags[["SHLIB_LIBADD"]] <- "--coverage"
}
with_makevars(
flags, {
if (use_subprocess) {
subprocess(
clean = clean,
quiet = quiet,
coverage <- run_tests(pkg, tmp_lib, dots, type, quiet)
)
} else {
coverage <- run_tests(pkg, tmp_lib, dots, type, quiet)
}
})
coverage <- c(coverage, run_gcov(pkg$path, sources, quiet))
if (isTRUE(clean)) {
devtools::clean_dll(pkg$path)
clear_gcov(pkg$path)
}
} else {
if (use_subprocess) {
subprocess(
clean = clean,
quiet = quiet,
coverage <- run_tests(pkg, tmp_lib, dots, type, quiet)
)
} else {
coverage <- run_tests(pkg, tmp_lib, dots, type, quiet)
}
}
# set the display names for coverage
for (i in seq_along(coverage)) {
display_path <- if (isTRUE(relative_path)) pkg$path else NULL
display_name(coverage[[i]]$srcref) <-
generate_display_name(coverage[[i]], display_path)
class(coverage[[i]]) <- "expression_coverage"
}
attr(coverage, "type") <- type
attr(coverage, "package") <- pkg
class(coverage) <- "coverage"
# BasicClasses are functions from the methods package
coverage <- coverage[!rex::re_matches(display_name(coverage),
rex::rex("R", one_of("/", "\\"), "BasicClasses.R"))]
# perform exclusions
exclude(coverage,
exclusions = exclusions,
exclude_pattern = exclude_pattern,
exclude_start = exclude_start,
exclude_end = exclude_end,
path = if (isTRUE(relative_path)) pkg$path else NULL
)
}
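## Illustrative usage (the path is hypothetical; commented so it is not run
## at package load):
## cov <- package_coverage("path/to/mypackage", type = "test")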
generate_display_name <- function(x, path = NULL) {
file_path <- normalizePath(getSrcFilename(x$srcref, full.names = TRUE), mustWork = FALSE)
if (!is.null(path)) {
# we have to check the system explicitly because both file.path and
# normalizePath strip the trailing path separator.
if (is_windows()) {
sep <- "\\"
} else {
sep <- "/"
}
package_path <- paste0(path, sep)
file_path <- rex::re_substitutes(file_path, rex::rex(package_path), "")
}
file_path
}
run_tests <- function(pkg, tmp_lib, dots, type, quiet) {
testing_dir <- test_directory(pkg$path)
# install the package in a temporary directory
RCMD("INSTALL",
options = c(pkg$path,
"--no-docs",
"--no-multiarch",
"--preclean",
"--with-keep.source",
"--no-byte-compile",
"--no-test-load",
"--no-multiarch",
"-l",
tmp_lib),
quiet = quiet)
if (isNamespaceLoaded(pkg$package)) {
try_unload(pkg$package)
on.exit(loadNamespace(pkg$package), add = TRUE)
}
with_lib(tmp_lib, {
ns_env <- loadNamespace(pkg$package)
env <- new.env(parent = ns_env) # nolint
# directories for vignettes and examples
vignette_dir <- file.path(pkg$path, "vignettes")
example_dir <- file.path(pkg$path, "man")
# get expressions to run
exprs <-
c(dots,
      quote(library(methods)),
if (type == "test" && file.exists(testing_dir)) {
bquote(try(source_dir(path = .(testing_dir), env = .(env), quiet = .(quiet))))
} else if (type == "vignette" && file.exists(vignette_dir)) {
lapply(dir(vignette_dir, pattern = rex::rex(".", one_of("R", "r"), or("nw", "md")), full.names = TRUE),
function(file) {
out_file <- tempfile(fileext = ".R")
knitr::knit(input = file, output = out_file, tangle = TRUE)
bquote(source2(.(out_file), .(env), path = .(vignette_dir), quiet = .(quiet)))
})
} else if (type == "example" && file.exists(example_dir)) {
ex_file <- process_examples(pkg, tmp_lib, quiet) # nolint
bquote(source(.(ex_file)))
})
enc <- environment()
# actually calculate the coverage
cov <- environment_coverage_(ns_env, exprs, enc)
# unload the package being tested
try_unload(pkg$package)
cov
})
}
try_unload <- function(pkg) {
tryCatch(unloadNamespace(pkg), error = function(e) warning(e))
}
process_examples <- function(pkg, lib = getwd(), quiet = TRUE) {
ex_file <- ex_dot_r(pkg$package, file.path(lib, pkg$package), silent = quiet)
# we need to move the file from the working directory into the tmp
# dir, remove the last line (which quits), and remove the original
# and *-cnt file
tmp_ex_file <- file.path(lib, ex_file)
lines <- readLines(ex_file)
header_lines <- readLines(file.path(R.home("share"), "R", "examples-header.R"))
# pdf output at lib
header_lines <- rex::re_substitutes(header_lines,
rex::rex("grDevices::pdf(paste(pkgname, \"-Ex.pdf\", sep=\"\")"),
paste0("grDevices::pdf(\"", file.path(lib, pkg$package), "-Ex.pdf\""))
# remove header source line
lines <- lines[-2]
# append header_lines after the first line
lines <- append(lines, header_lines, after = 1)
# remove last line "quit("no")"
lines <- lines[-length(lines)]
writeLines(lines, con = tmp_ex_file)
if (file.exists(ex_file)) {
file.remove(ex_file)
}
cnt_file <- paste0(ex_file, "-cnt")
if (file.exists(cnt_file)) {
file.remove(paste0(ex_file, "-cnt"))
}
tmp_ex_file
}
| /R/covr.R | no_license | brodieG/covr | R | false | false | 10,842 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/gmail_objects.R
\name{ListFiltersResponse}
\alias{ListFiltersResponse}
\title{ListFiltersResponse Object}
\usage{
ListFiltersResponse(filter = NULL)
}
\arguments{
\item{filter}{List of a user's filters}
}
\value{
ListFiltersResponse object
}
\description{
ListFiltersResponse Object
}
\details{
Autogenerated via \code{\link[googleAuthR]{gar_create_api_objects}}
Response for the ListFilters method.
}
| /googlegmailv1.auto/man/ListFiltersResponse.Rd | permissive | GVersteeg/autoGoogleAPI | R | false | true | 480 | rd |
#StackOverflow Developer Talent Map for Canadian Cities and Provinces
##Script for static preprocessing
library(rnaturalearth)
library(leaflet)
library(tidyverse)
library(sf)
library(tools)
setwd("~/GitHub/developer-talent-map")
#Provinces===========================
#Download and subset map from www.naturalearthdata.com
provinces <- ne_states(country = 'canada', returnclass = c('sf'))
provinces <- provinces[provinces$OBJECTID_1 != 5682,] #Filter out Canada as a whole
#Process provincial data
provincedev <- read_csv("devroles-province.csv")
provincelookup <- read_csv("provincelookup.csv")
provincedev <- left_join(provincedev, provincelookup)
provinces <- sp::merge(provinces, provincedev)
provinces$dev_role <- toTitleCase(provinces$dev_role)
provinces$dev_role[provinces$dev_role=="Ios Developers"] <- "Mobile - iOS"
provinces$dev_role[provinces$dev_role=="Android Developers"] <- "Mobile - Android"
provinces$dev_role[provinces$dev_role=="Desktop Developers"] <- "Desktop"
provinces$dev_role[provinces$dev_role=="Embedded Developers"] <- "Embedded"
provinces$dev_role[provinces$dev_role=="all Developers"] <- "All Developers"
provinces$dev_role[provinces$dev_role=="Web Developers (Front-End, Back-End, or Full-Stack)"] <- "Web Developers"
provinces$dev_role[provinces$dev_role=="Front-End Web Developers"] <- "Web - Front-End"
provinces$dev_role[provinces$dev_role=="Back-End Web Developers"] <- "Web - Back-End"
provinces$dev_role[provinces$dev_role=="Full-Stack Web Developers"] <- "Web - Full Stack"
#Simplify province polygons
provinces <- rmapshaper::ms_simplify(provinces)
#Cities===========================
#Process and filter city data
cities <- read_csv("devroles-city.csv")
citylookup <- read_csv("citylookup.csv")
cities <- left_join(cities, citylookup)
cities <- rename(cities, share = city_visitors_share, loc_quo = location_quotient)
cities <- filter(cities, !(dev_role %in% c("Business Intelligence Developers", "Highly Technical Designers", "Highly Technical Product Managers", "Quality Assurance Engineers")),
city %in% c("calgary", "edmonton", "guelph", "halifax", "hamilton", "kitchener_waterloo",
"london", "montreal", "ottawa", "quebec city", "regina", "saskatoon",
"toronto", "vancouver", "victoria", "winnipeg", "nyc_metro_area", "sf_silicon_valley")) %>%
select(-region, -dev_role_parent_group)
#Proper names for developer roles
cities$dev_role <- toTitleCase(cities$dev_role)
cities$dev_role[cities$dev_role=="Ios Developers"] <- "Mobile - iOS" # toTitleCase() yields "Ios", matching the provinces block above
cities$dev_role[cities$dev_role=="Android Developers"] <- "Mobile - Android"
cities$dev_role[cities$dev_role=="Desktop Developers"] <- "Desktop"
cities$dev_role[cities$dev_role=="Embedded Developers"] <- "Embedded"
cities$dev_role[cities$dev_role=="all Developers"] <- "All Developers"
cities$dev_role[cities$dev_role=="Web Developers (Front-End, Back-End, or Full-Stack)"] <- "Web Developers"
cities$dev_role[cities$dev_role=="Front-End Web Developers"] <- "Web - Front-End"
cities$dev_role[cities$dev_role=="Back-End Web Developers"] <- "Web - Back-End"
cities$dev_role[cities$dev_role=="Full-Stack Web Developers"] <- "Web - Full Stack"
cities <- arrange(cities, dev_role, city)
#Reformat cities to shapefile
cities <- st_as_sf(cities, coords = c("long", "lat"), crs = 4326)
#Write files for app
write_sf(provinces, "provinces.shp")
write_sf(cities, "cities.shp")
rm(list=ls())
| /1 - Load and Tidy.R | permissive | thedaisTMU/developer-talent-map | R | false | false | 3,472 | r |
cities$dev_role[cities$dev_role=="iOS Developers"] <- "Mobile - iOS"
cities$dev_role[cities$dev_role=="Android Developers"] <- "Mobile - Android"
cities$dev_role[cities$dev_role=="Desktop Developers"] <- "Desktop"
cities$dev_role[cities$dev_role=="Embedded Developers"] <- "Embedded"
cities$dev_role[cities$dev_role=="all Developers"] <- "All Developers"
cities$dev_role[cities$dev_role=="Web Developers (Front-End, Back-End, or Full-Stack)"] <- "Web Developers"
cities$dev_role[cities$dev_role=="Front-End Web Developers"] <- "Web - Front-End"
cities$dev_role[cities$dev_role=="Back-End Web Developers"] <- "Web - Back-End"
cities$dev_role[cities$dev_role=="Full-Stack Web Developers"] <- "Web - Full Stack"
cities <- arrange(cities, dev_role, city)
#Reformat cities to shapefile
cities <- st_as_sf(cities, coords = c("long", "lat"), crs = 4326)
#Write files for app
write_sf(provinces, "provinces.shp")
write_sf(cities, "cities.shp")
rm(list=ls())
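The Shiny app presumably reads these shapefiles back in; a minimal hedged sketch of that read-back (only the file names come from the script above, everything else is assumed):

```r
# Hypothetical read-back in the app; only "provinces.shp" and
# "cities.shp" are taken from the preprocessing script above.
# Note: ESRI shapefiles truncate column names to 10 characters,
# which is why short names like share and loc_quo are used.
library(sf)

provinces <- read_sf("provinces.shp")
cities <- read_sf("cities.shp")
```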
| /stocks/server.R | no_license | nickjkomarov/shiny-server | R | false | false | 653 | r |
# Server logic
server <- function(input, output) {
dataInput <- reactive({
getSymbols(input$symb, src = "yahoo",
from = input$dates[1],
to = input$dates[2],
auto.assign = FALSE)
})
stockName <- reactive ({
getSymbols(input$symb, src = "yahoo",
auto.assign = FALSE)
})
finalInput <- reactive({
if(!input$adjust) return(dataInput())
adjust(dataInput())
})
output$plot <- renderPlot({
chartSeries(finalInput(), theme = chartTheme("white"),
type = "line", log.scale = input$log, TA = NULL, name = input$symb)
})
}
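The server above references four inputs (`symb`, `dates`, `adjust`, `log`) and one output (`plot`) that must be supplied by a matching UI; a minimal hypothetical `ui.R` sketch (widget choices and labels are assumptions, only the input/output IDs come from the server code):

```r
# Hypothetical ui.R matching the server function above; only the
# input IDs (symb, dates, adjust, log) and output ID (plot) are
# taken from the server code.
library(shiny)

ui <- fluidPage(
  titlePanel("Stock chart"),
  sidebarLayout(
    sidebarPanel(
      textInput("symb", "Ticker symbol", value = "SPY"),
      dateRangeInput("dates", "Date range",
                     start = "2015-01-01", end = Sys.Date()),
      checkboxInput("adjust", "Adjust prices for splits/dividends", FALSE),
      checkboxInput("log", "Plot y axis on log scale", FALSE)
    ),
    mainPanel(plotOutput("plot"))
  )
)
```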
| /man/alink.Rd | no_license | PredictiveEcology/archivist | R | false | true | 4,095 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/alink.R
\name{alink}
\alias{alink}
\title{Return a Link To Download an Artifact Stored on Remote Repository}
\usage{
alink(md5hash, repo = aoptions("repo"), user = aoptions("user"),
subdir = aoptions("subdir"), branch = "master",
repoType = aoptions("repoType"), format = "markdown",
rawLink = FALSE)
}
\arguments{
\item{md5hash}{A character assigned to the artifact through the use of a cryptographic hash function with the MD5 algorithm.
If it is specified in a format of \code{'repo/user/md5hash'} then \code{user} and \code{repo} parameters are omitted.}
\item{repo}{The Remote \code{Repository} on which the artifact that we want to download
is stored.}
\item{user}{The name of a user on whose \code{Repository} the artifact that we want to download
is stored.}
\item{subdir}{A character containing a name of a directory on the Remote repository
on which the Repository is stored. If the Repository is stored in the main folder on the Remote repository, this should be set
to \code{subdir = "/"} as default.}
\item{branch}{A character containing a name of the Remote Repository's branch
on which the Repository is archived. Default \code{branch} is \code{master}.}
\item{repoType}{A character containing a type of the remote repository. Currently it can be 'github' or 'bitbucket'.}
\item{format}{In which format the link should be returned. Possibilities are \code{markdown} (default) or \code{latex}.}
\item{rawLink}{A logical denoting whether to return a raw link or a link in the \code{format} convention.
Default value is \code{FALSE}.}
}
\value{
This function returns a link to download artifact that is archived on GitHub.
}
\description{
\code{alink} returns a link to download an artifact from the Remote \link{Repository}.
Artifact has to be already archived on GitHub, e.g. with the \code{archivist.github::archive} function (recommended) or
\link{saveToRepo} function and traditional Git manual synchronization.
To learn more about artifacts visit \link[archivist]{archivist-package}.
}
\details{
For more information about \code{md5hash} see \link{md5hash}.
}
\section{Contact}{
Bug reports and feature requests can be sent to
\href{https://github.com/pbiecek/archivist/issues}{https://github.com/pbiecek/archivist/issues}
}
\examples{
\dontrun{
# link in markdown format
alink('pbiecek/archivist/134ecbbe2a8814d98f0c2758000c408e')
# link in markdown format with additional subdir
alink(user='BetaAndBit',repo='PieczaraPietraszki',
md5hash = '1569cc44e8450439ac52c11ccac35138',
subdir = 'UniwersytetDzieci/arepo')
# link in latex format
alink(user = 'MarcinKosinski', repo = 'Museum',
md5hash = '1651caa499a2b07a3bdad3896a2fc717', format = 'latex')
# link in raw format
alink('pbiecek/graphGallery/f5185c458bff721f0faa8e1332f01e0f', rawLink = TRUE)
alink('pbiecek/graphgallerygit/02af4f99e440324b9e329faa293a9394', repoType='bitbucket')
}
}
\seealso{
Other archivist: \code{\link{Repository}},
\code{\link{Tags}}, \code{\link{\%a\%}},
\code{\link{addHooksToPrint}}, \code{\link{addTagsRepo}},
\code{\link{aformat}}, \code{\link{ahistory}},
\code{\link{aoptions}}, \code{\link{archivist-package}},
\code{\link{areadLocal}}, \code{\link{aread}},
\code{\link{asearchLocal}}, \code{\link{asearch}},
\code{\link{asession}}, \code{\link{atrace}},
\code{\link{cache}}, \code{\link{copyLocalRepo}},
\code{\link{createLocalRepo}},
\code{\link{createMDGallery}},
\code{\link{deleteLocalRepo}},
\code{\link{getRemoteHook}}, \code{\link{getTagsLocal}},
\code{\link{loadFromLocalRepo}}, \code{\link{md5hash}},
\code{\link{removeTagsRepo}}, \code{\link{restoreLibs}},
\code{\link{rmFromLocalRepo}},
\code{\link{saveToLocalRepo}},
\code{\link{searchInLocalRepo}},
\code{\link{setLocalRepo}},
\code{\link{shinySearchInLocalRepo}},
\code{\link{showLocalRepo}},
\code{\link{splitTagsLocal}},
\code{\link{summaryLocalRepo}},
\code{\link{zipLocalRepo}}
}
\author{
Marcin Kosinski, \email{m.p.kosinski@gmail.com}
}
\concept{archivist}
| /Code/model_selection/model_selection.R | no_license | carter-allen/Research | R | false | false | 2,056 | r | # Model selection script for SBMs.
library(igraph)
library(sbmlogit)
library(sbmlhelpers)
library(RMTstat)
# simulate data from a K = 4 SBM with following connectivity matrix
n <- 200
p <- 0.75
q <- 0.05
K = 4
P <- matrix(c(p,q,q,q,
q,p,q,q,
q,q,p,q,
q,q,q,p),
nrow = 4,
ncol = 4,
byrow = TRUE)
G1 <- sample_sbm(n = n,
block.sizes = rep(n/K,K),
pref.matrix = P,
directed = FALSE,
loops = FALSE)
G1_g <- as_adjacency_matrix(G1,
type = "both",
sparse = FALSE)
# implement SVT to recover K
S <- svd(G1_g)
s <- S$d
sum(s > sqrt(n))
# implement Lei's goodness of fit test to recover K
K0 <- 4
fitK <- sbmlogit.mcmc(G1,
alpha = K0,
nsamples = 1000)
z <- get_labels(fitK)
Ns <- table(z)
A <- G1_g
B <- matrix(0,nrow = K0,ncol = K0)
ids <- 1:n
for(k in 1:K0)
{
for(l in k:K0)
{
if(k != l)
{
nk = Ns[k]
nl = Ns[l]
is = ids[z == k]
js = ids[z == l]
b = 0
for(i in is)
{
for(j in js)
{
b = b + A[i,j]
}
}
B[k,l] = B[l,k] = b/(nk*nl)
}
else
{
nk = Ns[k]
is = ids[z == k]
b = 0
for(i in 1:length(is))
{
for(j in i:length(is))
{
b = b + A[is[i],is[j]]
}
}
B[k,l] = B[l,k] = b/((nk*(nk-1))/2)
}
}
}
Az = A
for(i in 1:ncol(A))
{
for(j in 1:nrow(A))
{
if(i == j)
{
      Az[i,j] = 0 # keep the diagonal of the normalized matrix at zero
}
else
{
Az[i,j] = (A[i,j] - B[z[i],z[j]])/(sqrt((n-1)*B[z[i],z[j]]*(1-B[z[i],z[j]])))
}
}
}
Sz = svd(Az)
sz = max(Sz$d)
Tz = (n^(2/3))*(sz-2)
a = 0.05
T_crit = qtw(1-(a/2))
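The script computes the normalized test statistic and the Tracy-Widom critical value but stops before the actual decision; a sketch of the remaining comparison (an assumption about the intended final step, using the variable names set above):

```r
# Hypothetical final step: reject H0 (K = K0) when the statistic
# exceeds the Tracy-Widom critical value computed above.
reject_K0 <- Tz > T_crit
if (reject_K0) {
  message("Reject K0 = ", K0, ": Tz = ", round(Tz, 3),
          " > T_crit = ", round(T_crit, 3))
} else {
  message("No evidence against K0 = ", K0)
}
```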
| /R/discrete.R | no_license | overmar/dbplot | R | false | false | 3,397 | r | #' Aggregates over a discrete field
#'
#' @description
#'
#' Uses very generic dplyr code to aggregate data. Because of this approach,
#' the calculations automatically run inside the database if `data` has
#' a database or sparklyr connection. The `class()` of such tables
#' in R are: tbl_sql, tbl_dbi, tbl_spark
#'
#' @param data A table (tbl)
#' @param x A discrete variable
#' @param y The aggregation formula. Defaults to count (n)
#'
#' @examples
#'
#' # Returns the row count per am
#' mtcars %>%
#' db_compute_count(am)
#'
#' # Returns the average mpg per am
#' mtcars %>%
#' db_compute_count(am, mean(mpg))
#'
#' @export
#' @import dplyr
#' @importFrom rlang enexpr
db_compute_count <- function(data, x, y = n()){
x <- enexpr(x)
y <- enexpr(y)
df <- data %>%
group_by(!! x) %>%
summarise(result = !! y) %>%
collect() %>%
ungroup() %>%
    mutate(result = as.numeric(result)) #Accounts for integer64
colnames(df) <- c(x, y)
df
}
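Because the body is plain dplyr, the same call works on a remote table; a sketch against an in-memory SQLite table (assuming DBI, RSQLite and dbplyr are installed - the aggregation is then translated to SQL and only the result is collected):

```r
# Hypothetical usage against a database-backed tbl; only
# db_compute_count itself comes from the code above.
library(DBI)
library(dplyr)

con <- dbConnect(RSQLite::SQLite(), ":memory:")
mtcars_db <- copy_to(con, mtcars, "mtcars")   # class: tbl_dbi / tbl_sql

# Row count per am, computed inside SQLite and collected afterwards
db_compute_count(mtcars_db, am)

# Average mpg per am
db_compute_count(mtcars_db, am, mean(mpg))

dbDisconnect(con)
```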
#' Bar plot
#'
#' @description
#'
#' Uses very generic dplyr code to aggregate data and then `ggplot2`
#' to create the plot. Because of this approach,
#' the calculations automatically run inside the database if `data` has
#' a database or sparklyr connection. The `class()` of such tables
#' in R are: tbl_sql, tbl_dbi, tbl_spark
#'
#' @param data A table (tbl)
#' @param x A discrete variable
#' @param y The aggregation formula. Defaults to count (n)
#'
#' @examples
#'
#' library(ggplot2)
#'
#' # Returns a plot of the row count per am
#' mtcars %>%
#' dbplot_bar(am)
#'
#' # Returns a plot of the average mpg per am
#' mtcars %>%
#' dbplot_bar(am, mean(mpg))
#'
#' @seealso
#' \code{\link{dbplot_line}} ,
#' \code{\link{dbplot_histogram}}, \code{\link{dbplot_raster}} ,
#'
#'
#' @export
#' @import dplyr
#' @importFrom rlang enexpr
dbplot_bar <- function(data, x, y = n()){
x <- enexpr(x)
y <- enexpr(y)
df <- db_compute_count(data = data,
x = !! x,
y = !! y)
colnames(df) <- c("x", "y")
ggplot2::ggplot(df) +
ggplot2::geom_col(aes(x, y)) +
ggplot2::labs(x = x,
y = y)
}
#' Line plot
#'
#' @description
#'
#' Uses very generic dplyr code to aggregate data and then `ggplot2`
#' to create a line plot. Because of this approach,
#' the calculations automatically run inside the database if `data` has
#' a database or sparklyr connection. The `class()` of such tables
#' in R are: tbl_sql, tbl_dbi, tbl_spark
#'
#' @param data A table (tbl)
#' @param x A discrete variable
#' @param y The aggregation formula. Defaults to count (n)
#'
#' @examples
#'
#' library(ggplot2)
#'
#' # Returns a plot of the row count per cyl
#' mtcars %>%
#' dbplot_line(cyl)
#'
#' # Returns a plot of the average mpg per cyl
#' mtcars %>%
#' dbplot_line(cyl, mean(mpg))
#'
#' @seealso
#' \code{\link{dbplot_bar}},
#' \code{\link{dbplot_histogram}}, \code{\link{dbplot_raster}}
#'
#'
#' @export
#' @import dplyr
#' @importFrom rlang enexpr
dbplot_line <- function(data, x, y = n()){
x <- enexpr(x)
y <- enexpr(y)
df <- db_compute_count(data = data,
x = !! x,
y = !! y)
colnames(df) <- c("x", "y")
ggplot2::ggplot(df) +
ggplot2::geom_line(aes(x, y), stat = "identity") +
ggplot2::labs(x = x,
y = y)
}
| /R/data.R | no_license | allanvc/onlineretail | R | false | false | 2,105 | r | #' Online Retail Data Set
#'
#' This Online Retail dataset contains all the transactions occurring for a
#' UK-based and registered, non-store online retail between 01/12/2010 and 09/12/2011.
#' The company mainly sells unique all-occasion gift-ware. Many customers of the
#' company are wholesalers.
#'
#' @docType data
#'
#' @usage data(onlineretail)
#'
#' @format A data frame with eight variables:
#' \describe{
#' \item{\code{InvoiceNo}}{A \code{character} indicating the invoice number,
#' which is a 6-digit integral number uniquely assigned to each transaction. If
#' this code starts with the letter 'c', it indicates a cancellation.}
#' \item{\code{StockCode}}{A \code{character} indicating the product (item) code,
#' which is a 5-digit integral number uniquely assigned to each distinct product.
#' It can be accompanied by a trailing uppercase letter.}
#' \item{\code{Description}}{A \code{character} indicating the Product (item) name.}
#' \item{\code{Quantity}}{A \code{numeric} indicating the quantities of each product
#' (item) per transaction.}
#' \item{\code{InvoiceDate}}{A \code{POSIXct} indicating the invoice day and time
#' when a transaction was generated.}
#' \item{\code{UnitPrice}}{A \code{numeric} indicating the product price per unit in
#' sterling (£)}
#' \item{\code{CustomerID}}{A \code{numeric} indicating the customer number, which
#' is a 5-digit integral number uniquely assigned to each customer.}
#' \item{\code{Country}}{A \code{character} indicating the name of the country where
#' a customer resides.}
#' }
#'
#' @keywords datasets
#'
#' @references Daqing Chen, Sai Liang Sain, and Kun Guo (2012), Data mining for the online retail
#' industry: A case study of RFM model-based customer segmentation using data mining,
#' Journal of Database Marketing and Customer Strategy Management, Vol. 19, No. 3,
#' pp. 197-208, 2012 (Published online before print: 27 August 2012. doi: 10.1057/dbm.2012.17).
#'
#' @source \href{https://archive.ics.uci.edu/ml/datasets/online+retail}{UCI Machine Learning Repository}
#'
#' @examples
#' data(onlineretail)
"onlineretail"
| /R/runExample.R | permissive | wleepang/shiny-directory-input | R | false | false | 514 | r | #' @name runDirinputExample
#'
#' @title Runs a demo app with the code
#'
#' @details
#' Runs a demo app with the code. See the code in the example_application folder for
#' how this is accomplished.
#'
#' @export
runDirinputExample <- function() {
appDir <- system.file("example_application", package = "shinyDirectoryInput")
if (appDir == "") {
stop("Could not find example directory. Try re-installing `shinyDirectoryInput`.", call. = FALSE)
}
shiny::runApp(appDir, display.mode = "normal")
}
| /P3/funcion_analisis_tendencia.R | no_license | LucianoAndrian/Estadistica2_2019 | R | false | false | 762 | r | analisis_tendencia<-function(datos){
readline("recordar que laa 1er columna debe contener los tiempos y la 2da los datos")
serie<-data.frame(datos[which(!is.na(datos[,2])),1],datos[which(!is.na(datos[,2])),2])
serie[,1]<-seq(1:length(serie[,1])) #IMPORTANTE XQ SINO AFECTA EL LM
ajuste1<-lm(serie[,2]~serie[,1],data=serie) #Y , X!!
tendencia<-ajuste1$coefficients[1]+ajuste1$coefficients[2]*serie[,1] #contruccion de la recta y=ax+b... coef[2]x+coef[1]
ff<-readline("forma 1 centrado en cero o 2 en valores de la variable--(1 o 2)?: ")
if(ff==1){
datosf<-serie[,2] - tendencia
plot(ts(datosf))
} else {
datosf<-serie[,2]- (ajuste1$coefficients[2]*serie[,1])
plot(ts(datosf))
}
return(datosf)
}
| /man/trading_md.Rd | permissive | hsfoggia/rRofex | R | false | true | 988 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/primary_api.R
\name{trading_md}
\alias{trading_md}
\title{Primary API Market Data Real Time}
\usage{
trading_md(market_id = "ROFX", symbol, entries = c("BI", "OF", "LA",
"OP", "CL", "SE", "OI"), depth = 1L)
}
\arguments{
\item{market_id}{String. Market to which you are going to connect.
\itemize{
\item ROFX. Matba Rofex
}}
\item{symbol}{String. Use \code{\link{trading_instruments}} to see which symbols are available.}
\item{entries}{Vector of Strings. It contains the information to be required:
\itemize{
\item BI. Bid.
\item OF. Offer.
\item LA. Last Available Price.
\item OP. Open Price.
\item CL. Close Price.
\item SE. Settlement Price.
\item OI. Open Interest.
}}
\item{depth}{Integer. Depth of the book to be retrieved.}
}
\value{
If correct, it will load a data frame.
}
\description{
\code{trading_md} retrieves Market Data in Real Time.
}
\examples{
\dontrun{trading_md(symbol='I.RFX20')}
}
| /modules/benchmark/R/metric_RMSE.R | permissive | Kah5/pecan | R | false | false | 298 | r | ##' @name metric_RMSE
##' @title Root Mean Square Error
##' @export
##' @param dat dataframe
##'
##' @author Betsy Cowdery
metric_RMSE <- function(dat, ...) {
PEcAn.utils::logger.info("Metric: Root Mean Square Error")
return(sqrt(mean((dat$model - dat$obvs) ^ 2,na.rm=TRUE)))
} # metric_RMSE
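A quick sanity check of the formula with a toy data frame (the column names `model` and `obvs` match the function body; calling `metric_RMSE` itself additionally needs `PEcAn.utils` for the log line, so the formula is repeated inline):

```r
# Toy check; residuals are (1, -1, 2), so the RMSE is
# sqrt((1 + 1 + 4) / 3) = sqrt(2), roughly 1.4142.
dat <- data.frame(model = c(2, 1, 5), obvs = c(1, 2, 3))
sqrt(mean((dat$model - dat$obvs)^2, na.rm = TRUE))  # ~ 1.4142
```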
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rewardestimation.R
\name{reward_estimator}
\alias{reward_estimator}
\title{Learner for conducting reward estimation with categorical treatments}
\usage{
reward_estimator(...)
}
\arguments{
\item{...}{Use keyword arguments to set parameters on the resulting learner.
Refer to the Julia documentation for available parameters.}
}
\description{
This function was deprecated and renamed to \code{\link[=categorical_reward_estimator]{categorical_reward_estimator()}}
in iai 1.4.0. This is for consistency with the IAI v2.1.0 Julia release.
}
\details{
This deprecation is no longer supported as of the IAI v3 release.
}
\section{IAI Compatibility}{
Requires IAI version 2.2 or lower.
}
\examples{
\dontrun{lnr <- iai::reward_estimator()}
}
# Documentation -----------------------------------------------------------
#' Pdqr methods for print function
#'
#' Pdqr-functions have their own methods for [print()], which display a
#' function's [metadata][meta_all()] in a readable and concise form.
#'
#' @param x Pdqr-function to print.
#' @param ... Further arguments passed to or from other methods.
#'
#' @details Print output of pdqr-function describes the following information:
#' - Full name of function [class][meta_class()]:
#' - P-function is "Cumulative distribution function".
#' - D-function is "Probability mass function" for "discrete" type and
#' "Probability density function" for "continuous".
#' - Q-function is "Quantile function".
#' - R-function is "Random generation function".
#' - [Type][meta_type()] of function in the form "of * type" where "\*" is
#' "discrete" or "continuous" depending on actual type.
#' - [Support][meta_support()] of function.
#' - Number of elements in distribution for "discrete" type or number of
#' intervals of piecewise-linear density for "continuous" type.
#' - If pdqr-function has "discrete" type and exactly two possible values 0 and
#' 1, it is treated as "boolean" pdqr-function and probability of 1 is shown.
#' This is done to simplify interactive work with output of comparing functions
#' like `>=`, etc. (see [description of methods for S3 group generic
#' functions][methods-group-generic]). To extract probabilities from "boolean"
#' pdqr-function, use [summ_prob_true()] and [summ_prob_false()].
#'
#' Symbol "~" in `print()` output indicates that printed value or support is an
#' approximation to a true one (for readability purpose).
#'
#' @family pdqr methods for generic functions
#'
#' @examples
#' print(new_d(1:10, "discrete"))
#'
#' r_unif <- as_r(runif, n_grid = 251)
#' print(r_unif)
#'
#' # Printing of boolean pdqr-function
#' print(r_unif >= 0.3)
#'
#' @name methods-print
NULL
# Functions ---------------------------------------------------------------
pdqr_print <- function(x, fun_name) {
  cat(line_title(x, fun_name))
  cat(line_support(x))

  invisible(x)
}

line_title <- function(x, fun_name) {
  type_print <- paste0(meta_type_print_name(x))

  paste0(
    bold(fun_name), " function of ", bold(type_print), " type\n"
  )
}

line_support <- function(x) {
  x_support <- meta_support(x)

  if (is.null(x_support) || !is_support(x_support)) {
    paste0("Support: ", bold("not proper"), "\n")
  } else {
    x_supp_round <- round(x_support, digits = 5)
    approx_sign <- get_approx_sign(x_support, x_supp_round)
    x_supp_string <- paste0(x_supp_round, collapse = ", ")
    support_print <- bold(paste0(approx_sign, "[", x_supp_string, "]"))

    paste0("Support: ", support_print, n_x_tbl_info(x), "\n")
  }
}
n_x_tbl_info <- function(x) {
  x_type <- meta_type(x)
  x_tbl <- meta_x_tbl(x)
  if (is.null(x_type) || !is.character(x_type) ||
      is.null(x_tbl) || !is.data.frame(x_tbl)) {
    return("")
  }

  n_x_tbl <- nrow(x_tbl)

  if (x_type == "discrete") {
    # Add "probability of 1: " printing in case of a "boolean" pdqr
    if (identical(x_tbl[["x"]], c(0, 1))) {
      prob_one <- x_tbl[["prob"]][x_tbl[["x"]] == 1]
      prob_one_round <- round(prob_one, digits = 5)
      approx_sign <- get_approx_sign(prob_one, prob_one_round)
      prob_one_string <- paste0(approx_sign, prob_one_round)
      if (prob_one_round %in% c(0, 1)) {
        prob_one_string <- paste0(approx_sign, prob_one_round, ".0")
      }

      prob_string <- paste0(
        ", ", bold(paste0("probability of 1: ", prob_one_string))
      )
    } else {
      prob_string <- ""
    }

    paste0(
      " (", n_x_tbl, " ", ngettext(n_x_tbl, "element", "elements"),
      prob_string, ")"
    )
  } else if (x_type == "continuous") {
    paste0(
      " (", n_x_tbl - 1, " ", ngettext(n_x_tbl - 1, "interval", "intervals"), ")"
    )
  } else {
    ""
  }
}
# `force_color` is added for 100% coverage purpose
bold <- function(x, force_color = FALSE) {
  if (force_color || use_color()) {
    paste0("\033[1m", x, "\033[22m")
  } else {
    x
  }
}

# Reasonable effort of dependency-free `crayon::has_color()` emulation
use_color <- function() {
  (sink.number() == 0) &&
    grepl(
      "^screen|^xterm|^vt100|color|ansi|cygwin|linux", Sys.getenv("TERM"),
      ignore.case = TRUE, perl = TRUE
    )
}

meta_type_print_name <- function(x) {
  x_type <- meta_type(x)

  if (is.null(x_type) || !(x_type %in% c("discrete", "continuous"))) {
    "unknown"
  } else {
    x_type
  }
}

get_approx_sign <- function(x, x_round) {
  x_is_rounded <- any(!is_near(x, x_round, tol = 1e-13))

  if (x_is_rounded) {
    "~"
  } else {
    ""
  }
}
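The bolding helper can be exercised directly; `force_color = TRUE` bypasses terminal detection, so the ANSI wrapping is visible regardless of environment:

```r
bold("discrete", force_color = TRUE)
# the input wrapped in ANSI escape codes: "\033[1m" (bold on), "\033[22m" (bold off)
```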
\name{RM2006-package}
\alias{RM2006-package}
\docType{package}
\title{
\packageTitle{RM2006}
}
\description{
\packageDescription{RM2006}
}
\author{
\packageAuthor{RM2006}
Maintainer: \packageMaintainer{RM2006}
}
\references{
Zumbach, G. (2007) The Riskmetrics 2006 methodology. Available at SSRN: https://ssrn.com/abstract=1420185 or http://dx.doi.org/10.2139/ssrn.1420185
}
\keyword{Risk Metrics}
\keyword{Conditional Covariance Matrix}