% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/isoforest.R
\name{export.isotree.model}
\alias{export.isotree.model}
\title{Export Isolation Forest model}
\usage{
export.isotree.model(model, file, ...)
}
\arguments{
\item{model}{An Isolation Forest model as returned by function \link{isolation.forest}.}
\item{file}{File path where the model will be saved. File connections are not accepted,
only file paths.}
\item{...}{Additional arguments to pass to \link{writeBin} - you might want to pass
extra parameters if passing files between different CPU architectures or similar.}
}
\value{
No return value.
}
\description{
Save Isolation Forest model to a serialized file along with its
metadata, in order to be used in the Python or the C++ versions of this package.
This function is not intended for passing models to and from R -
in that case, one can use `saveRDS` and `readRDS` instead - although the function
still works correctly for serializing objects between R sessions.
Note that, if the model was fitted to a `data.frame`, the column names must be
something exportable as JSON, and must be something that Python's Pandas could
use as column names (e.g. strings/character).
It is recommended to visually inspect the produced `.metadata` file in any case.
}
\details{
This function will create 2 files: the serialized model, in binary format,
with the name passed in `file`; and a metadata file in JSON format with the same
name but ending in `.metadata`. The second file should \bold{NOT} be edited manually,
except for the field `nthreads` if desired.
If the model was built with `build_imputer=TRUE`, there will also be a third binary file
ending in `.imputer`.
The metadata will contain, among other things, the encoding that was used for
categorical columns - this is under `data_info.cat_levels`, as an array of arrays by column,
with the first entry for each column corresponding to category 0, second to category 1,
and so on (the C++ version takes them as integers). This metadata is written to a JSON file
using the `jsonlite` package, which must be installed in order for this to work.
The serialized file can be used in the C++ version by reading it as a binary raw file
and de-serializing its contents with the `cereal` library or using the provided C++ functions
for de-serialization. If using `ndim=1`, it will be an object of class `IsoForest`, and if
using `ndim>1`, will be an object of class `ExtIsoForest`. The imputer file, if produced, will
be an object of class `Imputer`.
Be aware that this function will write raw bytes from memory as-is without compression,
so the file sizes can end up being much larger than when using `saveRDS`.
The metadata is not used in the C++ version, but is necessary for the Python version.
Note that the model treats boolean/logical variables as categorical. Thus, if the model was fit
to a `data.frame` with boolean columns, when importing this model into C++, they need to be
encoded in the same order - e.g. the model might encode `TRUE` as zero and `FALSE`
as one - you need to look at the metadata for this.
}
\references{
\url{https://uscilab.github.io/cereal/}
}
\seealso{
\link{load.isotree.model} \link{writeBin} \link{unpack.isolation.forest}
}
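A minimal usage sketch (assuming the `isotree` and `jsonlite` packages are installed; the toy data and temporary file path are illustrative):

```r
library(isotree)

## Fit a small Isolation Forest on toy data (illustrative)
set.seed(1)
df <- data.frame(x = rnorm(100), y = rnorm(100))
model <- isolation.forest(df, ntrees = 10, nthreads = 1)

## Export for use in the Python/C++ versions: this writes the binary model
## plus a '.metadata' JSON file next to it
file_path <- file.path(tempdir(), "isoforest.model")
export.isotree.model(model, file_path)

file.exists(file_path)                       # serialized model
file.exists(paste0(file_path, ".metadata"))  # JSON metadata
```

For moving models between R sessions only, `saveRDS`/`readRDS` remains the preferred route.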
#' dNdScv
#'
#' Analyses of selection using the dNdScv and dNdSloc models. Default parameters typically increase the performance of the method on cancer genomic studies. Reference files are currently only available for the GRCh37/hg19 version of the human genome.
#'
#' @author Inigo Martincorena (Wellcome Sanger Institute)
#' @details Martincorena I, et al. (2017) Universal patterns of selection in cancer and somatic tissues. Cell. 171(5):1029-1041.
#'
#' @param mutations Table of mutations (5 columns: sampleID, chr, pos, ref, alt). Only list independent events as mutations.
#' @param gene_list List of genes to restrict the analysis (use for targeted sequencing studies)
#' @param refdb Reference database (path to .rda file)
#' @param sm Substitution model (precomputed models are available in the data directory)
#' @param kc List of a-priori known cancer genes (to be excluded from the indel background model)
#' @param cv Covariates (a matrix of covariates -columns- for each gene -rows-) [default: reference covariates] [cv=NULL runs dndscv without covariates]
#' @param max_muts_per_gene_per_sample If n<Inf, only the first n mutations per gene and sample (by chromosomal position) will be kept
#' @param max_coding_muts_per_sample Maximum number of coding mutations per sample (hypermutator samples often reduce power to detect selection)
#' @param use_indel_sites Use unique indel sites instead of the total number of indels (it tends to be more robust)
#' @param min_indels Minimum number of indels required to run the indel recurrence module
#' @param maxcovs Maximum number of covariates that will be considered (additional columns in the matrix of covariates will be excluded)
#' @param constrain_wnon_wspl This constrains wnon==wspl (this typically leads to higher power to detect selection)
#' @param outp Output: 1 = Global dN/dS values; 2 = Global dN/dS and dNdSloc; 3 = Global dN/dS, dNdSloc and dNdScv
#' @param numcode NCBI genetic code number (default = 1; standard genetic code). To see the list of genetic codes supported use: ? seqinr::translate. Note that the same genetic code must be used in the dndscv and buildref functions.
#' @param outmats Output the internal N and L matrices (default = F)
#'
#' @return 'dndscv' returns a list of objects:
#' @return - globaldnds: Global dN/dS estimates across all genes.
#' @return - sel_cv: Gene-wise selection results using dNdScv.
#' @return - sel_loc: Gene-wise selection results using dNdSloc.
#' @return - annotmuts: Annotated coding mutations.
#' @return - genemuts: Observed and expected numbers of mutations per gene.
#' @return - mle_submodel: MLEs of the substitution model.
#' @return - exclsamples: Samples excluded from the analysis.
#' @return - exclmuts: Coding mutations excluded from the analysis.
#' @return - nbreg: Negative binomial regression model for substitutions.
#' @return - nbregind: Negative binomial regression model for indels.
#' @return - poissmodel: Poisson regression model used to fit the substitution model and the global dNdS values.
#' @return - wrongmuts: Table of input mutations with a wrong annotation of the reference base (if any).
#'
#' @export
dndscv = function(mutations,
gene_list = NULL,
refdb = "hg19",
sm = "192r_3w",
kc = "cgc81",
cv = "hg19",
max_muts_per_gene_per_sample = 3,
max_coding_muts_per_sample = 3000,
use_indel_sites = TRUE,
min_indels = 5,
maxcovs = 20,
constrain_wnon_wspl = TRUE,
outp = 3,
numcode = 1,
outmats = FALSE) {
## 1. Environment
message("[1] Loading the environment...")
mutations = mutations[,1:5] # Restricting input matrix to first 5 columns
## Example input (first five columns):
#sampleID chr pos ref mut
#F16061220030-CLN 17 7577058 C A
#F16061220030-CLN 3 47155465 C T
#ct-C1512289427-S-CLN 17 7578441 G T
#ct-C16042816301-S-CLN 17 7578441 G T
#ct-C16060119301-S-CLN 17 7578441 G T
#ct-C16060119301-S-YH095-CLN-RE 17 7578441 G T
#ct-C16092028993-S-CLN 2 47601106 TG CA
#ct-C16092028993-S-CLN 17 7578206 T C
#F16053119231-CLN 17 7577538 C T
mutations[,c(1,2,3,4,5)] = lapply(mutations[,c(1,2,3,4,5)], as.character) # Factors to character
mutations[[3]] = as.numeric(mutations[[3]]) # Chromosome position as numeric
mutations = mutations[mutations[,4] != mutations[,5],] # Removing mutations with identical reference and mutant base
colnames(mutations) = c("sampleID","chr","pos","ref","mut")
# Removing NA entries from the input mutation table
indna = which(is.na(mutations),arr.ind=T)
## Remove entire rows that contain any NA
if (nrow(indna)>0) {
mutations = mutations[-unique(indna[,1]),] # Removing entries with an NA in any column
warning(sprintf("%0.0f rows in the input table contained NA entries and have been removed. Please investigate.",length(unique(indna[,1]))))
}
# [Input] Reference database
if (refdb == "hg19") {
data("refcds_hg19", package="dndscv")
if (any(gene_list=="CDKN2A")) { # Replace CDKN2A in the input gene list with its two isoforms
gene_list = unique(c(setdiff(gene_list,"CDKN2A"),"CDKN2A.p14arf","CDKN2A.p16INK4a"))
}
} else {
load(refdb)
}
# [Input] Gene list (The user can input a gene list as a character vector)
##
if (is.null(gene_list)) {
gene_list = sapply(RefCDS, function(x) x$gene_name) # All genes [default]
} else { # Using only genes in the input gene list
allg = sapply(RefCDS,function(x) x$gene_name) ## All gene names in the reference
## Every input gene must exist in the reference set
nonex = gene_list[!(gene_list %in% allg)]
## Stop and report any genes not found in the database
if (length(nonex)>0) { stop(sprintf("The following input gene names are not in the RefCDS database: %s", paste(nonex,collapse=", "))) }
RefCDS = RefCDS[allg %in% gene_list] # Only input genes
gr_genes = gr_genes[gr_genes$names %in% gene_list] # Only input genes
}
# [Input] Covariates (The user can input a custom set of covariates as a matrix)
if (is.character(cv)) {
data(list=sprintf("covariates_%s",cv), package="dndscv")
} else {
covs = cv
}
# [Input] Known cancer genes (The user can input a gene list as a character vector)
if (kc[1] %in% c("cgc81")) {
data(list=sprintf("cancergenes_%s",kc), package="dndscv")
} else {
known_cancergenes = kc
}
# [Input] Substitution model (The user can also input a custom substitution model as a matrix)
if (length(sm)==1) {
data(list=sprintf("submod_%s",sm), package="dndscv")
} else {
substmodel = sm
}
# Expanding the reference sequences [for faster access]
for (j in 1:length(RefCDS)) {
RefCDS[[j]]$seq_cds = base::strsplit(as.character(RefCDS[[j]]$seq_cds), split="")[[1]]
RefCDS[[j]]$seq_cds1up = base::strsplit(as.character(RefCDS[[j]]$seq_cds1up), split="")[[1]]
RefCDS[[j]]$seq_cds1down = base::strsplit(as.character(RefCDS[[j]]$seq_cds1down), split="")[[1]]
if (!is.null(RefCDS[[j]]$seq_splice)) {
RefCDS[[j]]$seq_splice = base::strsplit(as.character(RefCDS[[j]]$seq_splice), split="")[[1]]
RefCDS[[j]]$seq_splice1up = base::strsplit(as.character(RefCDS[[j]]$seq_splice1up), split="")[[1]]
RefCDS[[j]]$seq_splice1down = base::strsplit(as.character(RefCDS[[j]]$seq_splice1down), split="")[[1]]
}
}
## 2. Mutation annotation
message("[2] Annotating the mutations...")
## Build the trinucleotide contexts and substitution classes
nt = c("A","C","G","T")
trinucs = paste(rep(nt,each=16,times=1),rep(nt,each=4,times=4),rep(nt,each=1,times=16), sep="")
trinucinds = setNames(1:64, trinucs)
trinucsubs = NULL
for (j in 1:length(trinucs)) {
trinucsubs = c(trinucsubs, paste(trinucs[j], paste(substr(trinucs[j],1,1), setdiff(nt,substr(trinucs[j],2,2)), substr(trinucs[j],3,3), sep=""), sep=">"))
}
trinucsubsind = setNames(1:192, trinucsubs)
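A standalone sanity check of this enumeration (duplicating the code above so it runs on its own): there are 4^3 = 64 trinucleotide contexts, and each can mutate to 3 alternative middle bases, giving 64*3 = 192 substitution classes.

```r
## Self-contained copy of the enumeration above (illustrative check only)
nt = c("A","C","G","T")
trinucs = paste(rep(nt,each=16,times=1),rep(nt,each=4,times=4),rep(nt,each=1,times=16), sep="")
trinucsubs = NULL
for (j in 1:length(trinucs)) {
    trinucsubs = c(trinucsubs, paste(trinucs[j], paste(substr(trinucs[j],1,1), setdiff(nt,substr(trinucs[j],2,2)), substr(trinucs[j],3,3), sep=""), sep=">"))
}
stopifnot(length(trinucs) == 64L, length(trinucsubs) == 192L)
```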
ind = setNames(1:length(RefCDS), sapply(RefCDS,function(x) x$gene_name)) ## RefCDS index of each gene, named by gene name
## RefCDS index corresponding to each entry of gr_genes
gr_genes_ind = ind[gr_genes$names]
# Warning about possible unannotated dinucleotide substitutions
if (any(diff(mutations$pos)==1)) {
warning("Mutations observed in contiguous sites within a sample. Please annotate or remove dinucleotide or complex substitutions for best results.")
}
# Warning about multiple instances of the same mutation in different sampleIDs
if (nrow(unique(mutations[,2:5])) < nrow(mutations)) {
warning("Same mutations observed in different sampleIDs. Please verify that these are independent events and remove duplicates otherwise.")
}
# Start and end position of each mutation
mutations$end = mutations$start = mutations$pos
l = nchar(mutations$ref)-1 # Deletions of multiple bases
mutations$end = mutations$end + l
ind = substr(mutations$ref,1,1)==substr(mutations$mut,1,1) & nchar(mutations$ref)>nchar(mutations$mut) # Position correction for deletions annotated in the previous base (e.g. CA>C)
mutations$start = mutations$start + ind
# Mapping mutations to genes
gr_muts = GenomicRanges::GRanges(mutations$chr, IRanges::IRanges(mutations$start,mutations$end))
ol = as.data.frame(GenomicRanges::findOverlaps(gr_muts, gr_genes, type="any", select="all"))
mutations = mutations[ol[,1],] # Duplicating subs if they hit more than one gene (one row per overlapped gene)
mutations$geneind = gr_genes_ind[ol[,2]] ## RefCDS index of the overlapping gene
mutations$gene = sapply(RefCDS,function(x) x$gene_name)[mutations$geneind] ## Gene name
mutations = unique(mutations)
# Optional: Excluding samples exceeding the limit of mutations/sample [see Default parameters]
nsampl = sort(table(mutations$sampleID))
exclsamples = NULL
if (any(nsampl>max_coding_muts_per_sample)) {
message(sprintf(' Note: %0.0f samples excluded for exceeding the limit of mutations per sample (see the max_coding_muts_per_sample argument in dndscv). %0.0f samples left after filtering.',sum(nsampl>max_coding_muts_per_sample),sum(nsampl<=max_coding_muts_per_sample)))
exclsamples = names(nsampl[nsampl>max_coding_muts_per_sample])
mutations = mutations[!(mutations$sampleID %in% names(nsampl[nsampl>max_coding_muts_per_sample])),]
}
# Optional: Limiting the number of mutations per gene per sample (to minimise the impact of unannotated kataegis and other mutation clusters) [see Default parameters]
mutrank = ave(mutations$pos, paste(mutations$sampleID,mutations$gene), FUN = function(x) rank(x))
exclmuts = NULL
if (any(mutrank>max_muts_per_gene_per_sample)) {
message(sprintf(' Note: %0.0f mutations removed for exceeding the limit of mutations per gene per sample (see the max_muts_per_gene_per_sample argument in dndscv)',sum(mutrank>max_muts_per_gene_per_sample)))
exclmuts = mutations[mutrank>max_muts_per_gene_per_sample,]
mutations = mutations[mutrank<=max_muts_per_gene_per_sample,]
}
# Additional annotation of substitutions
## Strand of the transcript
mutations$strand = sapply(RefCDS,function(x) x$strand)[mutations$geneind]
## snv
snv = (mutations$ref %in% nt & mutations$mut %in% nt)
if (!any(snv)) { stop("Zero coding substitutions found in this dataset. Unable to run dndscv. Common causes for this error are inputting only indels or using chromosome names different to those in the reference database (e.g. chr1 vs 1)") }
indels = mutations[!snv,]
## Keep SNVs; indels are annotated separately below
mutations = mutations[snv,]
mutations$ref_cod = mutations$ref
mutations$mut_cod = mutations$mut
compnt = setNames(rev(nt), nt)
isminus = (mutations$strand==-1)
mutations$ref_cod[isminus] = compnt[mutations$ref[isminus]]
mutations$mut_cod[isminus] = compnt[mutations$mut[isminus]]
##############################################################################################
for (j in 1:length(RefCDS)) {
RefCDS[[j]]$N = array(0, dim=c(192,4)) # Initialising the N matrices (192 substitution classes x 4 impact types)
}
# Subfunction: obtaining the codon positions of a coding mutation given the exon intervals
chr2cds = function(pos,cds_int,strand) {
if (strand==1) {
return(which(unlist(apply(cds_int, 1, function(x) x[1]:x[2])) %in% pos)) ## CDS coordinate of the position
} else if (strand==-1) {
return(which(rev(unlist(apply(cds_int, 1, function(x) x[1]:x[2]))) %in% pos))
}
}
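A self-contained copy of `chr2cds` with a worked example (illustrative coordinates, not from the original source): a plus-strand gene with coding exons 101-103 and 201-203, where genomic position 202 is the 5th coding base; on the minus strand the coordinates are reversed.

```r
## Self-contained copy of chr2cds for illustration
chr2cds = function(pos,cds_int,strand) {
    if (strand==1) {
        return(which(unlist(apply(cds_int, 1, function(x) x[1]:x[2])) %in% pos))
    } else if (strand==-1) {
        return(which(rev(unlist(apply(cds_int, 1, function(x) x[1]:x[2]))) %in% pos))
    }
}
stopifnot(chr2cds(202, rbind(c(101,103), c(201,203)), 1) == 5)
stopifnot(chr2cds(202, rbind(c(101,103), c(201,203)), -1) == 2)
```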
##
# Annotating the functional impact of each substitution and populating the N matrices
## One annotation slot per mutation
ref3_cod = mut3_cod = wrong_ref = aachange = ntchange = impact = codonsub = array(NA, nrow(mutations))
for (j in 1:nrow(mutations)) {
geneind = mutations$geneind[j] ## RefCDS index of the gene
pos = mutations$pos[j] ## Genomic position
if (any(pos == RefCDS[[geneind]]$intervals_splice)) { # Essential splice-site substitution
impact[j] = "Essential_Splice"; impind = 4
pos_ind = (pos==RefCDS[[geneind]]$intervals_splice) ## Which splice site matches this position
cdsnt = RefCDS[[geneind]]$seq_splice[pos_ind] ## Reference base at the splice site
## Reference trinucleotide: upstream base, splice-site base, downstream base
ref3_cod[j] = sprintf("%s%s%s", RefCDS[[geneind]]$seq_splice1up[pos_ind], RefCDS[[geneind]]$seq_splice[pos_ind], RefCDS[[geneind]]$seq_splice1down[pos_ind])
## Mutant trinucleotide: upstream base, mutant base, downstream base
mut3_cod[j] = sprintf("%s%s%s", RefCDS[[geneind]]$seq_splice1up[pos_ind], mutations$mut_cod[j], RefCDS[[geneind]]$seq_splice1down[pos_ind])
aachange[j] = ntchange[j] = codonsub[j] = "."
} else { # Coding substitution
pos_ind = chr2cds(pos, RefCDS[[geneind]]$intervals_cds, RefCDS[[geneind]]$strand)
cdsnt = RefCDS[[geneind]]$seq_cds[pos_ind] ## Reference base in the CDS
## Reference trinucleotide: upstream base, reference base, downstream base
ref3_cod[j] = sprintf("%s%s%s", RefCDS[[geneind]]$seq_cds1up[pos_ind], RefCDS[[geneind]]$seq_cds[pos_ind], RefCDS[[geneind]]$seq_cds1down[pos_ind])
## Mutant trinucleotide: upstream base, mutant base, downstream base
mut3_cod[j] = sprintf("%s%s%s", RefCDS[[geneind]]$seq_cds1up[pos_ind], mutations$mut_cod[j], RefCDS[[geneind]]$seq_cds1down[pos_ind])
## Positions of the codon containing this site
codon_pos = c(ceiling(pos_ind/3)*3-2, ceiling(pos_ind/3)*3-1, ceiling(pos_ind/3)*3)
## Reference codon
old_codon = as.character(as.vector(RefCDS[[geneind]]$seq_cds[codon_pos]))
## Position of the site within the codon (1-3)
pos_in_codon = pos_ind-(ceiling(pos_ind/3)-1)*3
## Mutant codon
new_codon = old_codon; new_codon[pos_in_codon] = mutations$mut_cod[j]
## Amino acid before the mutation
old_aa = seqinr::translate(old_codon, numcode = numcode)
## Amino acid after the mutation
new_aa = seqinr::translate(new_codon, numcode = numcode)
## Old amino acid, codon number, new amino acid
aachange[j] = sprintf('%s%0.0f%s',old_aa,ceiling(pos_ind/3),new_aa)
## Reference base, CDS position, mutant base
ntchange[j] = sprintf('%s%0.0f%s',mutations$ref_cod[j],pos_ind,mutations$mut_cod[j])
## Codon-level substitution (e.g. GAA>GAT)
codonsub[j] = sprintf('%s>%s',paste(old_codon,collapse=""),paste(new_codon,collapse=""))
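The codon arithmetic above can be traced with concrete numbers (a standalone sketch with an illustrative CDS position, not from the original source):

```r
## Worked example: CDS position 7 falls in the codon spanning CDS positions
## 7-9, at position 1 within that codon (i.e. the first base of codon 3)
pos_ind = 7
codon_pos = c(ceiling(pos_ind/3)*3-2, ceiling(pos_ind/3)*3-1, ceiling(pos_ind/3)*3)
pos_in_codon = pos_ind-(ceiling(pos_ind/3)-1)*3
stopifnot(all(codon_pos == c(7,8,9)), pos_in_codon == 1)
```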
# Annotating the impact of the mutation
if (new_aa == old_aa){
impact[j] = "Synonymous"; impind = 1
# New amino acid is a stop codon: nonsense
} else if (new_aa == "*"){
impact[j] = "Nonsense"; impind = 3
# Old amino acid is not a stop codon: missense
} else if (old_aa != "*"){
impact[j] = "Missense"; impind = 2
# Old amino acid was a stop codon: stop loss
} else if (old_aa=="*") {
impact[j] = "Stop_loss"; impind = NA
}
}
if (mutations$ref_cod[j] != as.character(cdsnt)) { # Incorrect base annotation in the input mutation file (the mutation will be excluded with a warning)
wrong_ref[j] = 1 ## Reference base disagrees with the CDS sequence; flagged in wrong_ref
} else if (!is.na(impind)) { # Correct base annotation in the input mutation file
## Trinucleotide substitution index (e.g. "CAA>CTA")
trisub = trinucsubsind[ paste(ref3_cod[j], mut3_cod[j], sep=">") ]
RefCDS[[geneind]]$N[trisub,impind] = RefCDS[[geneind]]$N[trisub,impind] + 1 # Adding the mutation to the N matrices
}
### Progress message
if (round(j/1e4)==(j/1e4)) { message(sprintf(' %0.3g%% ...', round(j/nrow(mutations),2)*100)) }
}
mutations$ref3_cod = ref3_cod
mutations$mut3_cod = mut3_cod
mutations$aachange = aachange
mutations$ntchange = ntchange
mutations$codonsub = codonsub ## Codon substitution
mutations$impact = impact
mutations$pid = sapply(RefCDS,function(x) x$protein_id)[mutations$geneind] ## Protein ID
## Mutations with a wrong reference base: warn if fewer than 10%, otherwise stop
if (any(!is.na(wrong_ref))) {
if (mean(!is.na(wrong_ref)) < 0.1) { # If fewer than 10% of mutations have a wrong reference base, we warn the user
warning(sprintf('%0.0f (%0.2g%%) mutations have a wrong reference base (see the affected mutations in dndsout$wrongmuts). Please identify the causes and rerun dNdScv.', sum(!is.na(wrong_ref)), 100*mean(!is.na(wrong_ref))))
} else { # If more than 10% of mutations have a wrong reference base, we stop the execution (likely wrong assembly or a serious problem with the data)
stop(sprintf('%0.0f (%0.2g%%) mutations have a wrong reference base. Please confirm that you are not running data from a different assembly or species.', sum(!is.na(wrong_ref)), 100*mean(!is.na(wrong_ref))))
}
## Keep only correctly annotated mutations
wrong_refbase = mutations[!is.na(wrong_ref), 1:5]
mutations = mutations[is.na(wrong_ref),]
}
if (any(nrow(indels))) { # If there are indels we concatenate the tables of subs and indels
indels = cbind(indels, data.frame(ref_cod=".", mut_cod=".", ref3_cod=".", mut3_cod=".", aachange=".", ntchange=".", codonsub=".", impact="no-SNV", pid=sapply(RefCDS,function(x) x$protein_id)[indels$geneind]))
# Annotation of indels
## Classify each event as insertion or deletion
ins = nchar(gsub("-","",indels$ref))<nchar(gsub("-","",indels$mut))
del = nchar(gsub("-","",indels$ref))>nchar(gsub("-","",indels$mut))
multisub = nchar(gsub("-","",indels$ref))==nchar(gsub("-","",indels$mut)) # Including dinucleotides
## Length difference between the ref and mut alleles
l = nchar(gsub("-","",indels$ref))-nchar(gsub("-","",indels$mut))
## Build an annotation string for each indel
indelstr = rep(NA,nrow(indels))
for (j in 1:nrow(indels)) {
## RefCDS index of the gene
geneind = indels$geneind[j]
## Genomic positions spanned by the indel
pos = indels$start[j]:indels$end[j]
## For insertions, also include the upstream base
if (ins[j]) { pos = c(pos-1,pos) } # Adding the upstream base for insertions
## Map the spanned positions to CDS coordinates
pos_ind = chr2cds(pos, RefCDS[[geneind]]$intervals_cds, RefCDS[[geneind]]$strand)
if (length(pos_ind)>0) {
## In-frame if the affected CDS length is a multiple of 3
inframe = (length(pos_ind) %% 3) == 0
## Label insertions/deletions as frameshift or in-frame, and same-length changes as MNVs
if (ins[j]) { # Insertion
indelstr[j] = sprintf("%0.0f-%0.0f-ins%s",min(pos_ind),max(pos_ind),c("frshift","inframe")[inframe+1])
} else if (del[j]) { # Deletion
indelstr[j] = sprintf("%0.0f-%0.0f-del%s",min(pos_ind),max(pos_ind),c("frshift","inframe")[inframe+1])
} else { # Dinucleotide and multinucleotide changes (MNVs)
indelstr[j] = sprintf("%0.0f-%0.0f-mnv",min(pos_ind),max(pos_ind))
}
}
}
## Store the indel annotation in the ntchange column
indels$ntchange = indelstr
## Combine SNVs and indels
annot = rbind(mutations, indels)
} else {
annot = mutations
}
## Sort by sampleID, chr and pos
annot = annot[order(annot$sampleID, annot$chr, annot$pos),]
## 3. Estimation of the global rate and selection parameters
message("[3] Estimating global rates...")
## L matrices: available sites per substitution class and impact type, per gene
Lall = array(sapply(RefCDS, function(x) x$L), dim=c(192,4,length(RefCDS)))
## N matrices: observed mutations per substitution class and impact type, per gene
Nall = array(sapply(RefCDS, function(x) x$N), dim=c(192,4,length(RefCDS)))
## Sum across genes: 192 x 4 matrices of available sites and observed counts
L = apply(Lall, c(1,2), sum)
N = apply(Nall, c(1,2), sum)
# Subfunction: fitting substitution model
fit_substmodel = function(N, L, substmodel) {
## Flatten the matrices to vectors
l = c(L);
n = c(N);
r = c(substmodel)
n = n[l!=0];
r = r[l!=0];
l = l[l!=0];
##
params = unique(base::strsplit(x=paste(r,collapse="*"), split="\\*")[[1]])
indmat = as.data.frame(array(0, dim=c(length(r),length(params))))
colnames(indmat) = params
for (j in 1:length(r)) {
indmat[j, base::strsplit(r[j], split="\\*")[[1]]] = 1
}
## Poisson generalized linear model with log(l) as offset
model = glm(formula = n ~ offset(log(l)) + . -1, data=indmat, family=poisson(link=log))
## Exponentiate the coefficients to obtain the rate parameter MLEs
mle = exp(coefficients(model)) # Maximum-likelihood estimates for the rate params
ci = exp(confint.default(model)) # Wald confidence intervals
par = data.frame(name=gsub("\`","",rownames(ci)), mle=mle[rownames(ci)], cilow=ci[,1], cihigh=ci[,2])
## Return the parameter table and the fitted model
return(list(par=par, model=model))
}
# Fitting all mutation rates and the 3 global selection parameters
poissout = fit_substmodel(N, L, substmodel) # Original substitution model
par = poissout$par ## Parameter estimates
poissmodel = poissout$model ## Fitted Poisson model
## Named vector of MLEs
parmle = setNames(par[,2], par[,1])
## MLE table of the substitution model
mle_submodel = par
rownames(mle_submodel) = NULL
# Fitting models with 1 and 2 global selection parameters
s1 = gsub("wmis","wall",gsub("wnon","wall",gsub("wspl","wall",substmodel)))
par1 = fit_substmodel(N, L, s1)$par # Substitution model with 1 selection parameter
s2 = gsub("wnon","wtru",gsub("wspl","wtru",substmodel))
par2 = fit_substmodel(N, L, s2)$par # Substitution model with 2 selection parameters
globaldnds = rbind(par, par1, par2)[c("wmis","wnon","wspl","wtru","wall"),]
sel_loc = sel_cv = NULL
## 4. dNdSloc: variable rate dN/dS model (gene mutation rate inferred from synonymous subs in the gene only)
genemuts = data.frame(gene_name = sapply(RefCDS,
function(x) x$gene_name),
n_syn=NA,
n_mis=NA,
n_non=NA,
n_spl=NA,
exp_syn=NA,
exp_mis=NA,
exp_non=NA,
exp_spl=NA,
stringsAsFactors=F)
genemuts[,2:5] = t(sapply(RefCDS, function(x) colSums(x$N)))
mutrates = sapply(substmodel[,1], function(x) prod(parmle[base::strsplit(x,split="\\*")[[1]]])) # Expected rate per available site
genemuts[,6:9] = t(sapply(RefCDS, function(x) colSums(x$L*mutrates)))
numrates = length(mutrates)
if (outp > 1) {
message("[4] Running dNdSloc...")
selfun_loc = function(j) {
y = as.numeric(genemuts[j,-1])
x = RefCDS[[j]]
# a. Neutral model: wmis==1, wnon==1, wspl==1
mrfold = sum(y[1:4])/sum(y[5:8]) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
ll0 = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,1,1,1),dim=c(4,numrates))), log=T)) # loglik null model
# b. Missense model: wmis==1, free wnon, free wspl
mrfold = max(1e-10, sum(y[c(1,2)])/sum(y[c(5,6)])) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
wfree = y[3:4]/y[7:8]/mrfold; wfree[y[3:4]==0] = 0
llmis = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,1,wfree),dim=c(4,numrates))), log=T)) # loglik free wmis
# c. free wmis, wnon and wspl
mrfold = max(1e-10, y[1]/y[5]) # Correction factor of "t"
w = y[2:4]/y[6:8]/mrfold; w[y[2:4]==0] = 0 # MLE of dN/dS based on the local rate (using syn muts as neutral)
llall = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,w),dim=c(4,numrates))), log=T)) # loglik free wmis, wnon, wspl
w[w>1e4] = 1e4
p = 1-pchisq(2*(llall-c(llmis,ll0)),df=c(1,3))
return(c(w,p))
}
sel_loc = as.data.frame(t(sapply(1:nrow(genemuts), selfun_loc)))
colnames(sel_loc) = c("wmis_loc","wnon_loc","wspl_loc","pmis_loc","pall_loc")
sel_loc$qmis_loc = p.adjust(sel_loc$pmis_loc, method="BH")
sel_loc$qall_loc = p.adjust(sel_loc$pall_loc, method="BH")
sel_loc = cbind(genemuts[,1:5],sel_loc)
sel_loc = sel_loc[order(sel_loc$pall_loc,sel_loc$pmis_loc,-sel_loc$wmis_loc),]
}
## 5. dNdScv: Negative binomial regression (with or without covariates) + local synonymous mutations
nbreg = nbregind = NULL
if (outp > 2) {
message("[5] Running dNdScv...")
# Covariates
if (is.null(cv)) {
nbrdf = genemuts[,c("n_syn","exp_syn")]
model = MASS::glm.nb(n_syn ~ offset(log(exp_syn)) - 1 , data = nbrdf)
message(sprintf(" Regression model for substitutions: no covariates were used (theta = %0.3g).", model$theta))
} else {
covs = covs[genemuts$gene_name,]
if (ncol(covs) > maxcovs) {
warning(sprintf("More than %s input covariates. Only the first %s will be considered.", maxcovs, maxcovs))
covs = covs[,1:maxcovs]
}
nbrdf = cbind(genemuts[,c("n_syn","exp_syn")], covs)
# Negative binomial regression for substitutions
if (nrow(genemuts)<500) { # If there are <500 genes, we run the regression without covariates
model = MASS::glm.nb(n_syn ~ offset(log(exp_syn)) - 1 , data = nbrdf)
} else {
model = tryCatch({
MASS::glm.nb(n_syn ~ offset(log(exp_syn)) + . , data = nbrdf) # We try running the model with covariates
}, warning = function(w){
MASS::glm.nb(n_syn ~ offset(log(exp_syn)) - 1 , data = nbrdf) # If there are warnings or errors we run the model without covariates
}, error = function(e){
MASS::glm.nb(n_syn ~ offset(log(exp_syn)) - 1 , data = nbrdf) # If there are warnings or errors we run the model without covariates
})
}
message(sprintf(" Regression model for substitutions (theta = %0.3g).", model$theta))
}
if (all(model$y==genemuts$n_syn)) {
genemuts$exp_syn_cv = model$fitted.values
}
theta = model$theta
nbreg = model
# Subfunction: Analytical opt_t using only neutral subs
mle_tcv = function(n_neutral, exp_rel_neutral, shape, scale) {
tml = (n_neutral+shape-1)/(exp_rel_neutral+(1/scale))
if (shape<=1) { # i.e. when theta<=1
tml = max(shape*scale,tml) # i.e. tml is bounded to the mean of the gamma (i.e. y[9]) when theta<=1, since otherwise it takes meaningless values
}
return(tml)
}
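The estimator above is the mode of the Gamma posterior under a Poisson likelihood, t = (n_neutral + shape - 1) / (exp_rel_neutral + 1/scale), bounded below by the prior mean when shape <= 1. A standalone copy with illustrative numbers:

```r
## Self-contained copy of mle_tcv for illustration (values are arbitrary)
mle_tcv = function(n_neutral, exp_rel_neutral, shape, scale) {
    tml = (n_neutral+shape-1)/(exp_rel_neutral+(1/scale))
    if (shape<=1) { # i.e. when theta<=1
        tml = max(shape*scale,tml) # bounded at the prior mean shape*scale
    }
    return(tml)
}
stopifnot(mle_tcv(10, 5, 2, 0.5) == (10+2-1)/(5+2))
stopifnot(mle_tcv(0, 2, 0.5, 1) == 0.5)  # bounded at the prior mean
```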
# Subfunction: dNdScv per gene
selfun_cv = function(j) {
y = as.numeric(genemuts[j,-1])
x = RefCDS[[j]]
exp_rel = y[5:8]/y[5]
# Gamma
shape = theta
scale = y[9]/theta
# a. Neutral model
indneut = 1:4 # vector of neutral mutation types under this model (1=synonymous, 2=missense, 3=nonsense, 4=essential_splice)
opt_t = mle_tcv(n_neutral=sum(y[indneut]), exp_rel_neutral=sum(exp_rel[indneut]), shape=shape, scale=scale)
mrfold = max(1e-10, opt_t/y[5]) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
ll0 = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,1,1,1),dim=c(4,numrates))), log=T)) + dgamma(opt_t, shape=shape, scale=scale, log=T) # loglik null model
# b. Missense model: wmis==1, free wnon, free wspl
indneut = 1:2
opt_t = mle_tcv(n_neutral=sum(y[indneut]), exp_rel_neutral=sum(exp_rel[indneut]), shape=shape, scale=scale)
mrfold = max(1e-10, opt_t/sum(y[5])) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
wfree = y[3:4]/y[7:8]/mrfold; wfree[y[3:4]==0] = 0
llmis = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,1,wfree),dim=c(4,numrates))), log=T)) + dgamma(opt_t, shape=shape, scale=scale, log=T) # loglik free wnon, wspl
# c. Truncating muts model: free wmis, wnon==wspl==1
indneut = c(1,3,4)
opt_t = mle_tcv(n_neutral=sum(y[indneut]), exp_rel_neutral=sum(exp_rel[indneut]), shape=shape, scale=scale)
mrfold = max(1e-10, opt_t/sum(y[5])) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
wfree = y[2]/y[6]/mrfold; wfree[y[2]==0] = 0
lltrunc = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,wfree,1,1),dim=c(4,numrates))), log=T)) + dgamma(opt_t, shape=shape, scale=scale, log=T) # loglik free wmis
# d. Free selection model: free wmis, free wnon, free wspl
indneut = 1
opt_t = mle_tcv(n_neutral=sum(y[indneut]), exp_rel_neutral=sum(exp_rel[indneut]), shape=shape, scale=scale)
mrfold = max(1e-10, opt_t/sum(y[5])) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
wfree = y[2:4]/y[6:8]/mrfold; wfree[y[2:4]==0] = 0
llall_unc = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,wfree),dim=c(4,numrates))), log=T)) + dgamma(opt_t, shape=shape, scale=scale, log=T) # loglik free wmis, wnon, wspl
if (constrain_wnon_wspl == 0) {
p = 1-pchisq(2*(llall_unc-c(llmis,lltrunc,ll0)),df=c(1,2,3))
return(c(wfree,p))
} else { # d2. Free selection model: free wmis, free wnon==wspl
wmisfree = y[2]/y[6]/mrfold; wmisfree[y[2]==0] = 0
wtruncfree = sum(y[3:4])/sum(y[7:8])/mrfold; wtruncfree[sum(y[3:4])==0] = 0
llall = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,wmisfree,wtruncfree,wtruncfree),dim=c(4,numrates))), log=T)) + dgamma(opt_t, shape=shape, scale=scale, log=T) # loglik free wmis, free wnon==wspl
p = 1-pchisq(2*c(llall_unc-llmis,llall-c(lltrunc,ll0)),df=c(1,1,2))
return(c(wmisfree,wtruncfree,wtruncfree,p))
}
}
sel_cv = as.data.frame(t(sapply(1:nrow(genemuts), selfun_cv)))
colnames(sel_cv) = c("wmis_cv","wnon_cv","wspl_cv","pmis_cv","ptrunc_cv","pallsubs_cv")
sel_cv$qmis_cv = p.adjust(sel_cv$pmis_cv, method="BH")
sel_cv$qtrunc_cv = p.adjust(sel_cv$ptrunc_cv, method="BH")
sel_cv$qallsubs_cv = p.adjust(sel_cv$pallsubs_cv, method="BH")
sel_cv = cbind(genemuts[,1:5],sel_cv)
sel_cv = sel_cv[order(sel_cv$pallsubs_cv, sel_cv$pmis_cv, sel_cv$ptrunc_cv, -sel_cv$wmis_cv),] # Sorting genes in the output file
## Indel recurrence: based on a negative binomial regression (ideally fitted excluding major known driver genes)
if (nrow(indels) >= min_indels) {
geneindels = as.data.frame(array(0,dim=c(length(RefCDS),8)))
colnames(geneindels) = c("gene_name","n_ind","n_induniq","n_indused","cds_length","excl","exp_unif","exp_indcv")
geneindels$gene_name = sapply(RefCDS, function(x) x$gene_name)
geneindels$n_ind = as.numeric(table(indels$gene)[geneindels[,1]]); geneindels[is.na(geneindels[,2]),2] = 0
geneindels$n_induniq = as.numeric(table(unique(indels[,-1])$gene)[geneindels[,1]]); geneindels[is.na(geneindels[,3]),3] = 0
geneindels$cds_length = sapply(RefCDS, function(x) x$CDS_length)
if (use_indel_sites) {
geneindels$n_indused = geneindels[,3]
} else {
geneindels$n_indused = geneindels[,2]
}
# Excluding known cancer genes (first using the input or the default list, but if this fails, we use a shorter data-driven list)
geneindels$excl = (geneindels[,1] %in% known_cancergenes)
min_bkg_genes = 50
if (sum(!geneindels$excl)<min_bkg_genes | sum(geneindels[!geneindels$excl,"n_indused"]) == 0) { # If the exclusion list is too restrictive (e.g. targeted sequencing of cancer genes), then identify a shorter list of selected genes using the substitutions.
newkc = as.vector(sel_cv$gene_name[sel_cv$qallsubs_cv<0.01])
geneindels$excl = (geneindels[,1] %in% newkc)
if (sum(!geneindels$excl)<min_bkg_genes | sum(geneindels[!geneindels$excl,"n_indused"]) == 0) { # If the new exclusion list is still too restrictive, then do not apply a restriction.
geneindels$excl = F
message(" No gene was excluded from the background indel model.")
} else {
warning(sprintf(" Genes were excluded from the indel background model based on the substitution data: %s.", paste(newkc, collapse=", ")))
}
}
geneindels$exp_unif = sum(geneindels[!geneindels$excl,"n_indused"]) / sum(geneindels[!geneindels$excl,"cds_length"]) * geneindels$cds_length
# Negative binomial regression for indels
if (is.null(cv)) {
nbrdf = geneindels[,c("n_indused","exp_unif")][!geneindels[,6],] # We exclude known drivers from the fit
model = MASS::glm.nb(n_indused ~ offset(log(exp_unif)) - 1 , data = nbrdf)
nbrdf_all = geneindels[,c("n_indused","exp_unif")]
} else {
nbrdf = cbind(geneindels[,c("n_indused","exp_unif")], covs)[!geneindels[,6],] # We exclude known drivers from the fit
if (sum(!geneindels$excl)<500) { # If there are <500 genes, we run the regression without covariates
model = MASS::glm.nb(n_indused ~ offset(log(exp_unif)) - 1 , data = nbrdf)
} else {
model = tryCatch({
MASS::glm.nb(n_indused ~ offset(log(exp_unif)) + . , data = nbrdf) # We try running the model with covariates
}, warning = function(w){
MASS::glm.nb(n_indused ~ offset(log(exp_unif)) - 1 , data = nbrdf) # If there are warnings or errors we run the model without covariates
}, error = function(e){
MASS::glm.nb(n_indused ~ offset(log(exp_unif)) - 1 , data = nbrdf) # If there are warnings or errors we run the model without covariates
})
}
nbrdf_all = cbind(geneindels[,c("n_indused","exp_unif")], covs)
}
message(sprintf(" Regression model for indels (theta = %0.3g)", model$theta))
theta_indels = model$theta
nbregind = model
geneindels$exp_indcv = exp(predict(model,nbrdf_all))
geneindels$wind = geneindels$n_indused / geneindels$exp_indcv
# Statistical testing for indel recurrence per gene
geneindels$pind = pnbinom(q=geneindels$n_indused-1, mu=geneindels$exp_indcv, size=theta_indels, lower.tail=F)
geneindels$qind = p.adjust(geneindels$pind, method="BH")
# Fisher combined p-values (substitutions and indels)
sel_cv = merge(sel_cv, geneindels, by="gene_name")[,c("gene_name","n_syn","n_mis","n_non","n_spl","n_indused","wmis_cv","wnon_cv","wspl_cv","wind","pmis_cv","ptrunc_cv","pallsubs_cv","pind","qmis_cv","qtrunc_cv","qallsubs_cv")]
colnames(sel_cv) = c("gene_name","n_syn","n_mis","n_non","n_spl","n_ind","wmis_cv","wnon_cv","wspl_cv","wind_cv","pmis_cv","ptrunc_cv","pallsubs_cv","pind_cv","qmis_cv","qtrunc_cv","qallsubs_cv")
sel_cv$pglobal_cv = 1 - pchisq(-2 * (log(sel_cv$pallsubs_cv) + log(sel_cv$pind_cv)), df = 4)
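# Illustration of Fisher's method: under the null, -2*(log(p1)+log(p2)) for two
# independent p-values follows a chi-squared distribution with 4 degrees of
# freedom. For example (made-up values), p_subs=0.01 and p_ind=0.2 combine as
# 1-pchisq(-2*(log(0.01)+log(0.2)), df=4), giving ~0.014.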
sel_cv$qglobal_cv = p.adjust(sel_cv$pglobal_cv, method="BH")
sel_cv = sel_cv[order(sel_cv$pglobal_cv, sel_cv$pallsubs_cv, sel_cv$pmis_cv, sel_cv$ptrunc_cv, -sel_cv$wmis_cv),] # Sorting genes in the output file
}
}
if (all(is.na(wrong_ref))) {
wrong_refbase = NULL # Output value if there were no wrong bases
}
annot = annot[,setdiff(colnames(annot),c("start","end","geneind"))]
if (outmats) {
dndscvout = list(globaldnds = globaldnds, sel_cv = sel_cv, sel_loc = sel_loc, annotmuts = annot, genemuts = genemuts, mle_submodel = mle_submodel, exclsamples = exclsamples, exclmuts = exclmuts, nbreg = nbreg, nbregind = nbregind, poissmodel = poissmodel, wrongmuts = wrong_refbase, N = Nall, L = Lall)
} else {
dndscvout = list(globaldnds = globaldnds, sel_cv = sel_cv, sel_loc = sel_loc, annotmuts = annot, genemuts = genemuts, mle_submodel = mle_submodel, exclsamples = exclsamples, exclmuts = exclmuts, nbreg = nbreg, nbregind = nbregind, poissmodel = poissmodel, wrongmuts = wrong_refbase)
}
} # EOF
#' dNdScv
#'
#' Analyses of selection using the dNdScv and dNdSloc models. Default parameters typically increase the performance of the method on cancer genomic studies. Reference files are currently only available for the GRCh37/hg19 version of the human genome.
#'
#' @author Inigo Martincorena (Wellcome Sanger Institute)
#' @details Martincorena I, et al. (2017) Universal patterns of selection in cancer and somatic tissues. Cell. 171(5):1029-1041.
#'
#' @param mutations Table of mutations (5 columns: sampleID, chr, pos, ref, alt). Only list independent events as mutations.
#' @param gene_list List of genes to restrict the analysis (use for targeted sequencing studies)
#' @param refdb Reference database (path to .rda file)
#' @param sm Substitution model (precomputed models are available in the data directory)
#' @param kc List of a-priori known cancer genes (to be excluded from the indel background model)
#' @param cv Covariates (a matrix of covariates -columns- for each gene -rows-) [default: reference covariates] [cv=NULL runs dndscv without covariates]
#' @param max_muts_per_gene_per_sample If n<Inf, arbitrarily the first n mutations by chr position will be kept
#' @param max_coding_muts_per_sample Hypermutator samples often reduce power to detect selection
#' @param use_indel_sites Use unique indel sites instead of the total number of indels (it tends to be more robust)
#' @param min_indels Minimum number of indels required to run the indel recurrence module
#' @param maxcovs Maximum number of covariates that will be considered (additional columns in the matrix of covariates will be excluded)
#' @param constrain_wnon_wspl This constrains wnon==wspl (this typically leads to higher power to detect selection)
#' @param outp Output: 1 = Global dN/dS values; 2 = Global dN/dS and dNdSloc; 3 = Global dN/dS, dNdSloc and dNdScv
#' @param numcode NCBI genetic code number (default = 1; standard genetic code). To see the list of genetic codes supported use: ? seqinr::translate. Note that the same genetic code must be used in the dndscv and buildref functions.
#' @param outmats Output the internal N and L matrices (default = F)
#'
#' @return 'dndscv' returns a list of objects:
#' @return - globaldnds: Global dN/dS estimates across all genes.
#' @return - sel_cv: Gene-wise selection results using dNdScv.
#' @return - sel_loc: Gene-wise selection results using dNdSloc.
#' @return - annotmuts: Annotated coding mutations.
#' @return - genemuts: Observed and expected numbers of mutations per gene.
#' @return - mle_submodel: MLEs of the substitution model.
#' @return - exclsamples: Samples excluded from the analysis.
#' @return - exclmuts: Coding mutations excluded from the analysis.
#' @return - nbreg: Negative binomial regression model for substitutions.
#' @return - nbregind: Negative binomial regression model for indels.
#' @return - poissmodel: Poisson regression model used to fit the substitution model and the global dNdS values.
#' @return - wrongmuts: Table of input mutations with a wrong annotation of the reference base (if any).
#'
#' @export
dndscv = function(mutations,
gene_list = NULL,
refdb = "hg19",
sm = "192r_3w",
kc = "cgc81",
cv = "hg19",
max_muts_per_gene_per_sample = 3,
max_coding_muts_per_sample = 3000,
use_indel_sites = T,
min_indels = 5,
maxcovs = 20,
constrain_wnon_wspl = T,
outp = 3,
numcode = 1,
outmats = F) {
## 1. Environment
message("[1] Loading the environment...")
mutations = mutations[,1:5] # Restricting input matrix to first 5 columns
# Example input format (sampleID, chr, pos, ref, mut):
#sampleID chr pos ref mut
#F16061220030-CLN 17 7577058 C A
#F16061220030-CLN 3 47155465 C T
#ct-C1512289427-S-CLN 17 7578441 G T
#ct-C16042816301-S-CLN 17 7578441 G T
#ct-C16060119301-S-CLN 17 7578441 G T
#ct-C16060119301-S-YH095-CLN-RE 17 7578441 G T
#ct-C16092028993-S-CLN 2 47601106 TG CA
#ct-C16092028993-S-CLN 17 7578206 T C
#F16053119231-CLN 17 7577538 C T
mutations[,c(1,2,3,4,5)] = lapply(mutations[,c(1,2,3,4,5)], as.character) # Factors to character
mutations[[3]] = as.numeric(mutations[[3]]) # Chromosome position as numeric
mutations = mutations[mutations[,4]!=mutations[,5],] # Removing mutations with identical reference and mutant base
# Adding column names
colnames(mutations) = c("sampleID","chr","pos","ref","mut")
# Removing NA entries from the input mutation table
indna = which(is.na(mutations),arr.ind=T)
if (nrow(indna)>0) {
mutations = mutations[-unique(indna[,1]),] # Removing entries with an NA in any row
warning(sprintf("%0.0f rows in the input table contained NA entries and have been removed. Please investigate.",length(unique(indna[,1]))))
}
# [Input] Reference database
if (refdb == "hg19") {
data("refcds_hg19", package="dndscv")
if (any(gene_list=="CDKN2A")) { # Replace CDKN2A in the input gene list with its two isoforms
gene_list = unique(c(setdiff(gene_list,"CDKN2A"),"CDKN2A.p14arf","CDKN2A.p16INK4a"))
}
} else {
load(refdb)
}
# [Input] Gene list (The user can input a gene list as a character vector)
if (is.null(gene_list)) {
gene_list = sapply(RefCDS, function(x) x$gene_name) # All genes [default]
} else { # Using only genes in the input gene list
# All gene names in the RefCDS database
allg = sapply(RefCDS,function(x) x$gene_name)
# Input genes must be present in the reference database
nonex = gene_list[!(gene_list %in% allg)]
# Reporting input genes missing from the database
if (length(nonex)>0) { stop(sprintf("The following input gene names are not in the RefCDS database: %s", paste(nonex,collapse=", "))) }
RefCDS = RefCDS[allg %in% gene_list] # Only input genes
gr_genes = gr_genes[gr_genes$names %in% gene_list] # Only input genes
}
# [Input] Covariates (The user can input a custom set of covariates as a matrix)
if (is.character(cv)) {
data(list=sprintf("covariates_%s",cv), package="dndscv")
} else {
covs = cv
}
# [Input] Known cancer genes (The user can input a gene list as a character vector)
if (kc[1] %in% c("cgc81")) {
data(list=sprintf("cancergenes_%s",kc), package="dndscv")
} else {
known_cancergenes = kc
}
# [Input] Substitution model (The user can also input a custom substitution model as a matrix)
if (length(sm)==1) {
data(list=sprintf("submod_%s",sm), package="dndscv")
} else {
substmodel = sm
}
# Expanding the reference sequences [for faster access]
for (j in 1:length(RefCDS)) {
RefCDS[[j]]$seq_cds = base::strsplit(as.character(RefCDS[[j]]$seq_cds), split="")[[1]]
RefCDS[[j]]$seq_cds1up = base::strsplit(as.character(RefCDS[[j]]$seq_cds1up), split="")[[1]]
RefCDS[[j]]$seq_cds1down = base::strsplit(as.character(RefCDS[[j]]$seq_cds1down), split="")[[1]]
if (!is.null(RefCDS[[j]]$seq_splice)) {
RefCDS[[j]]$seq_splice = base::strsplit(as.character(RefCDS[[j]]$seq_splice), split="")[[1]]
RefCDS[[j]]$seq_splice1up = base::strsplit(as.character(RefCDS[[j]]$seq_splice1up), split="")[[1]]
RefCDS[[j]]$seq_splice1down = base::strsplit(as.character(RefCDS[[j]]$seq_splice1down), split="")[[1]]
}
}
## 2. Mutation annotation
message("[2] Annotating the mutations...")
# Building the 192 trinucleotide substitution types
nt = c("A","C","G","T")
trinucs = paste(rep(nt,each=16,times=1),rep(nt,each=4,times=4),rep(nt,each=1,times=16), sep="")
trinucinds = setNames(1:64, trinucs)
trinucsubs = NULL
for (j in 1:length(trinucs)) {
trinucsubs = c(trinucsubs, paste(trinucs[j], paste(substr(trinucs[j],1,1), setdiff(nt,substr(trinucs[j],2,2)), substr(trinucs[j],3,3), sep=""), sep=">"))
}
trinucsubsind = setNames(1:192, trinucsubs)
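# For illustration: each of the 192 substitution types is indexed by its
# "ref-trinucleotide>mut-trinucleotide" string, e.g. trinucsubsind["ACG>AAG"]
# gives the integer index used as a row of the N and L matrices below.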
ind = setNames(1:length(RefCDS), sapply(RefCDS,function(x) x$gene_name)) # Index of each gene name in RefCDS
gr_genes_ind = ind[gr_genes$names] # RefCDS index for each gene range
# Warning about possible unannotated dinucleotide substitutions
if (any(diff(mutations$pos)==1)) {
warning("Mutations observed in contiguous sites within a sample. Please annotate or remove dinucleotide or complex substitutions for best results.")
}
# Warning about multiple instances of the same mutation in different sampleIDs
if (nrow(unique(mutations[,2:5])) < nrow(mutations)) {
warning("Same mutations observed in different sampleIDs. Please verify that these are independent events and remove duplicates otherwise.")
}
# Start and end position of each mutation
mutations$end = mutations$start = mutations$pos
l = nchar(mutations$ref)-1 # Deletions of multiple bases
mutations$end = mutations$end + l
ind = substr(mutations$ref,1,1)==substr(mutations$mut,1,1) & nchar(mutations$ref)>nchar(mutations$mut) # Position correction for deletions annotated in the previous base (e.g. CA>C)
mutations$start = mutations$start + ind
# Mapping mutations to genes
gr_muts = GenomicRanges::GRanges(mutations$chr, IRanges::IRanges(mutations$start,mutations$end))
ol = as.data.frame(GenomicRanges::findOverlaps(gr_muts, gr_genes, type="any", select="all"))
mutations = mutations[ol[,1],] # Duplicating subs if they hit more than one gene
mutations$geneind = gr_genes_ind[ol[,2]] # RefCDS index of the gene overlapping each mutation
mutations$gene = sapply(RefCDS,function(x) x$gene_name)[mutations$geneind] # Gene name
mutations = unique(mutations)
# Optional: Excluding samples exceeding the limit of mutations/sample [see Default parameters]
nsampl = sort(table(mutations$sampleID))
exclsamples = NULL
if (any(nsampl>max_coding_muts_per_sample)) {
message(sprintf(' Note: %0.0f samples excluded for exceeding the limit of mutations per sample (see the max_coding_muts_per_sample argument in dndscv). %0.0f samples left after filtering.',sum(nsampl>max_coding_muts_per_sample),sum(nsampl<=max_coding_muts_per_sample)))
exclsamples = names(nsampl[nsampl>max_coding_muts_per_sample])
mutations = mutations[!(mutations$sampleID %in% names(nsampl[nsampl>max_coding_muts_per_sample])),]
}
# Optional: Limiting the number of mutations per gene per sample (to minimise the impact of unannotated kataegis and other mutation clusters) [see Default parameters]
mutrank = ave(mutations$pos, paste(mutations$sampleID,mutations$gene), FUN = function(x) rank(x))
exclmuts = NULL
if (any(mutrank>max_muts_per_gene_per_sample)) {
message(sprintf(' Note: %0.0f mutations removed for exceeding the limit of mutations per gene per sample (see the max_muts_per_gene_per_sample argument in dndscv)',sum(mutrank>max_muts_per_gene_per_sample)))
exclmuts = mutations[mutrank>max_muts_per_gene_per_sample,]
mutations = mutations[mutrank<=max_muts_per_gene_per_sample,]
}
# Additional annotation of substitutions
# Transcript strand for each mutation
mutations$strand = sapply(RefCDS,function(x) x$strand)[mutations$geneind]
# Identifying single-nucleotide substitutions
snv = (mutations$ref %in% nt & mutations$mut %in% nt)
if (!any(snv)) { stop("Zero coding substitutions found in this dataset. Unable to run dndscv. Common causes for this error are inputting only indels or using chromosome names different to those in the reference database (e.g. chr1 vs 1)") }
indels = mutations[!snv,]
# Restricting the mutation table to substitutions
mutations = mutations[snv,]
mutations$ref_cod = mutations$ref
mutations$mut_cod = mutations$mut
compnt = setNames(rev(nt), nt)
isminus = (mutations$strand==-1)
mutations$ref_cod[isminus] = compnt[mutations$ref[isminus]]
mutations$mut_cod[isminus] = compnt[mutations$mut[isminus]]
##############################################################################################
for (j in 1:length(RefCDS)) {
RefCDS[[j]]$N = array(0, dim=c(192,4)) # Initialising the N matrices (192 trinucleotide substitutions x 4 impacts)
}
# Subfunction: obtaining the codon positions of a coding mutation given the exon intervals
chr2cds = function(pos,cds_int,strand) {
if (strand==1) {
return(which(unlist(apply(cds_int, 1, function(x) x[1]:x[2])) %in% pos)) # Coding position on the plus strand
} else if (strand==-1) {
return(which(rev(unlist(apply(cds_int, 1, function(x) x[1]:x[2]))) %in% pos))
}
}
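# Illustrative example (hypothetical coordinates): for a plus-strand gene with
# exons cds_int = rbind(c(100,150), c(200,250)), chr2cds(205, cds_int, 1)
# returns 57, since position 205 is the 57th coding base (51 bases in the
# first exon plus 6 in the second).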
# Annotating the functional impact of each substitution and populating the N matrices
ref3_cod = mut3_cod = wrong_ref = aachange = ntchange = impact = codonsub = array(NA, nrow(mutations))
for (j in 1:nrow(mutations)) {
geneind = mutations$geneind[j] # Gene index
pos = mutations$pos[j] # Genomic position
if (any(pos == RefCDS[[geneind]]$intervals_splice)) { # Essential splice-site substitution
impact[j] = "Essential_Splice"; impind = 4
pos_ind = (pos==RefCDS[[geneind]]$intervals_splice) # Index of the affected splice site
cdsnt = RefCDS[[geneind]]$seq_splice[pos_ind] # Reference base at the splice site
# Reference trinucleotide: upstream base, reference base, downstream base
ref3_cod[j] = sprintf("%s%s%s", RefCDS[[geneind]]$seq_splice1up[pos_ind], RefCDS[[geneind]]$seq_splice[pos_ind], RefCDS[[geneind]]$seq_splice1down[pos_ind])
# Mutant trinucleotide: upstream base, mutant base, downstream base
mut3_cod[j] = sprintf("%s%s%s", RefCDS[[geneind]]$seq_splice1up[pos_ind], mutations$mut_cod[j], RefCDS[[geneind]]$seq_splice1down[pos_ind])
aachange[j] = ntchange[j] = codonsub[j] = "."
} else { # Coding substitution
pos_ind = chr2cds(pos, RefCDS[[geneind]]$intervals_cds, RefCDS[[geneind]]$strand)
cdsnt = RefCDS[[geneind]]$seq_cds[pos_ind] # Reference base in the CDS
# Reference trinucleotide: upstream base, reference base, downstream base
ref3_cod[j] = sprintf("%s%s%s", RefCDS[[geneind]]$seq_cds1up[pos_ind], RefCDS[[geneind]]$seq_cds[pos_ind], RefCDS[[geneind]]$seq_cds1down[pos_ind])
# Mutant trinucleotide: upstream base, mutant base, downstream base
mut3_cod[j] = sprintf("%s%s%s", RefCDS[[geneind]]$seq_cds1up[pos_ind], mutations$mut_cod[j], RefCDS[[geneind]]$seq_cds1down[pos_ind])
# CDS positions of the codon containing the mutation
codon_pos = c(ceiling(pos_ind/3)*3-2, ceiling(pos_ind/3)*3-1, ceiling(pos_ind/3)*3)
# Reference codon
old_codon = as.character(as.vector(RefCDS[[geneind]]$seq_cds[codon_pos]))
# Position of the mutation within the codon
pos_in_codon = pos_ind-(ceiling(pos_ind/3)-1)*3
# Mutant codon
new_codon = old_codon; new_codon[pos_in_codon] = mutations$mut_cod[j]
# Reference and mutant amino acids
old_aa = seqinr::translate(old_codon, numcode = numcode)
new_aa = seqinr::translate(new_codon, numcode = numcode)
# Amino acid change (reference aa, codon position, mutant aa)
aachange[j] = sprintf('%s%0.0f%s',old_aa,ceiling(pos_ind/3),new_aa)
# Nucleotide change at the coding position
ntchange[j] = sprintf('%s%0.0f%s',mutations$ref_cod[j],pos_ind,mutations$mut_cod[j])
# Codon substitution (e.g. "CGC>CAC")
codonsub[j] = sprintf('%s>%s',paste(old_codon,collapse=""),paste(new_codon,collapse=""))
# Annotating the impact of the mutation
if (new_aa == old_aa){ # Same amino acid: synonymous
impact[j] = "Synonymous"; impind = 1
} else if (new_aa == "*"){ # New amino acid is a stop codon: nonsense
impact[j] = "Nonsense"; impind = 3
} else if (old_aa != "*"){ # Amino acid change: missense
impact[j] = "Missense"; impind = 2
} else if (old_aa=="*") { # Reference amino acid is a stop codon: stop loss
impact[j] = "Stop_loss"; impind = NA
}
}
if (mutations$ref_cod[j] != as.character(cdsnt)) { # Incorrect base annotation in the input mutation file (the mutation will be excluded with a warning)
wrong_ref[j] = 1 # Flagging mutations whose reference base does not match the reference CDS sequence
} else if (!is.na(impind)) { # Correct base annotation in the input mutation file
# Index of the trinucleotide substitution (e.g. "CAA>CAT")
trisub = trinucsubsind[ paste(ref3_cod[j], mut3_cod[j], sep=">") ]
RefCDS[[geneind]]$N[trisub,impind] = RefCDS[[geneind]]$N[trisub,impind] + 1 # Adding the mutation to the N matrices
}
# Progress message
if (round(j/1e4)==(j/1e4)) { message(sprintf(' %0.3g%% ...', round(j/nrow(mutations),2)*100)) }
}
mutations$ref3_cod = ref3_cod
mutations$mut3_cod = mut3_cod
mutations$aachange = aachange
mutations$ntchange = ntchange
mutations$codonsub = codonsub # Codon substitution
mutations$impact = impact
mutations$pid = sapply(RefCDS,function(x) x$protein_id)[mutations$geneind] # Adding the protein ID
# Handling mutations with an incorrect reference base annotation
if (any(!is.na(wrong_ref))) {
if (mean(!is.na(wrong_ref)) < 0.1) { # If fewer than 10% of mutations have a wrong reference base, we warn the user
warning(sprintf('%0.0f (%0.2g%%) mutations have a wrong reference base (see the affected mutations in dndsout$wrongmuts). Please identify the causes and rerun dNdScv.', sum(!is.na(wrong_ref)), 100*mean(!is.na(wrong_ref))))
} else { # If more than 10% of mutations have a wrong reference base, we stop the execution (likely wrong assembly or a serious problem with the data)
stop(sprintf('%0.0f (%0.2g%%) mutations have a wrong reference base. Please confirm that you are not running data from a different assembly or species.', sum(!is.na(wrong_ref)), 100*mean(!is.na(wrong_ref))))
}
# Keeping only mutations with a correct reference base
wrong_refbase = mutations[!is.na(wrong_ref), 1:5]
mutations = mutations[is.na(wrong_ref),]
}
if (any(nrow(indels))) { # If there are indels we concatenate the tables of subs and indels
indels = cbind(indels, data.frame(ref_cod=".", mut_cod=".", ref3_cod=".", mut3_cod=".", aachange=".", ntchange=".", codonsub=".", impact="no-SNV", pid=sapply(RefCDS,function(x) x$protein_id)[indels$geneind]))
# Annotation of indels
# Classifying indels as insertions or deletions
ins = nchar(gsub("-","",indels$ref))<nchar(gsub("-","",indels$mut))
del = nchar(gsub("-","",indels$ref))>nchar(gsub("-","",indels$mut))
multisub = nchar(gsub("-","",indels$ref))==nchar(gsub("-","",indels$mut)) # Including dinucleotides
# Length difference between the reference and mutant alleles
l = nchar(gsub("-","",indels$ref))-nchar(gsub("-","",indels$mut))
# Building an annotation string for each indel
indelstr = rep(NA,nrow(indels))
for (j in 1:nrow(indels)) {
geneind = indels$geneind[j] # Gene index
# Genomic positions spanned by the indel
pos = indels$start[j]:indels$end[j]
if (ins[j]) { pos = c(pos-1,pos) } # Adding the upstream base for insertions
# Mapping the genomic positions to CDS coordinates
pos_ind = chr2cds(pos, RefCDS[[geneind]]$intervals_cds, RefCDS[[geneind]]$strand)
if (length(pos_ind)>0) {
# In-frame if the number of affected coding bases is a multiple of 3
inframe = (length(pos_ind) %% 3) == 0
if (ins[j]) { # Insertion
indelstr[j] = sprintf("%0.0f-%0.0f-ins%s",min(pos_ind),max(pos_ind),c("frshift","inframe")[inframe+1])
} else if (del[j]) { # Deletion
indelstr[j] = sprintf("%0.0f-%0.0f-del%s",min(pos_ind),max(pos_ind),c("frshift","inframe")[inframe+1])
} else { # Dinucleotide and multinucleotide changes (MNVs)
indelstr[j] = sprintf("%0.0f-%0.0f-mnv",min(pos_ind),max(pos_ind))
}
}
}
# Storing the indel annotation in the ntchange column
indels$ntchange = indelstr
# Concatenating substitutions and indels
annot = rbind(mutations, indels)
} else {
annot = mutations
}
# Sorting the annotated table by sampleID, chromosome and position
annot = annot[order(annot$sampleID, annot$chr, annot$pos),]
## 3. Estimation of the global rate and selection parameters
message("[3] Estimating global rates...")
Lall = array(sapply(RefCDS, function(x) x$L), dim=c(192,4,length(RefCDS)))
Nall = array(sapply(RefCDS, function(x) x$N), dim=c(192,4,length(RefCDS)))
L = apply(Lall, c(1,2), sum) # Total number of available sites per substitution type and impact
N = apply(Nall, c(1,2), sum) # Total number of observed mutations per substitution type and impact
# Subfunction: fitting substitution model
fit_substmodel = function(N, L, substmodel) {
# Flattening the matrices into vectors and keeping only entries with available sites
l = c(L); n = c(N); r = c(substmodel)
n = n[l!=0]; r = r[l!=0]; l = l[l!=0]
params = unique(base::strsplit(x=paste(r,collapse="*"), split="\\*")[[1]])
indmat = as.data.frame(array(0, dim=c(length(r),length(params))))
colnames(indmat) = params
for (j in 1:length(r)) {
indmat[j, base::strsplit(r[j], split="\\*")[[1]]] = 1
}
# Poisson regression, using log(l) as an offset
model = glm(formula = n ~ offset(log(l)) + . -1, data=indmat, family=poisson(link=log))
mle = exp(coefficients(model)) # Maximum-likelihood estimates for the rate params
ci = exp(confint.default(model)) # Wald confidence intervals
par = data.frame(name=gsub("\`","",rownames(ci)), mle=mle[rownames(ci)], cilow=ci[,1], cihigh=ci[,2])
return(list(par=par, model=model)) # Returning the parameter estimates and the fitted model
}
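# Note on the design: each rate parameter string in "substmodel" (e.g. "t*wmis")
# is expanded into 0/1 indicator columns of "indmat", so each fitted Poisson
# coefficient is the log of one multiplicative rate or selection parameter.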
# Fitting all mutation rates and the 3 global selection parameters
poissout = fit_substmodel(N, L, substmodel) # Original substitution model
par = poissout$par # Parameter estimates
poissmodel = poissout$model # Fitted Poisson model
parmle = setNames(par[,2], par[,1]) # Named vector of MLEs
mle_submodel = par
rownames(mle_submodel) = NULL
# Fitting models with 1 and 2 global selection parameters
s1 = gsub("wmis","wall",gsub("wnon","wall",gsub("wspl","wall",substmodel)))
par1 = fit_substmodel(N, L, s1)$par # Substitution model with 1 selection parameter
s2 = gsub("wnon","wtru",gsub("wspl","wtru",substmodel))
par2 = fit_substmodel(N, L, s2)$par # Substitution model with 2 selection parameters
globaldnds = rbind(par, par1, par2)[c("wmis","wnon","wspl","wtru","wall"),]
sel_loc = sel_cv = NULL
## 4. dNdSloc: variable rate dN/dS model (gene mutation rate inferred from synonymous subs in the gene only)
genemuts = data.frame(gene_name = sapply(RefCDS,
function(x) x$gene_name),
n_syn=NA,
n_mis=NA,
n_non=NA,
n_spl=NA,
exp_syn=NA,
exp_mis=NA,
exp_non=NA,
exp_spl=NA,
stringsAsFactors=F)
genemuts[,2:5] = t(sapply(RefCDS, function(x) colSums(x$N)))
mutrates = sapply(substmodel[,1], function(x) prod(parmle[base::strsplit(x,split="\\*")[[1]]])) # Expected rate per available site
genemuts[,6:9] = t(sapply(RefCDS, function(x) colSums(x$L*mutrates)))
numrates = length(mutrates)
if (outp > 1) {
message("[4] Running dNdSloc...")
selfun_loc = function(j) {
y = as.numeric(genemuts[j,-1])
x = RefCDS[[j]]
# a. Neutral model: wmis==1, wnon==1, wspl==1
mrfold = sum(y[1:4])/sum(y[5:8]) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
ll0 = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,1,1,1),dim=c(4,numrates))), log=T)) # loglik null model
# b. Missense model: wmis==1, free wnon, free wspl
mrfold = max(1e-10, sum(y[c(1,2)])/sum(y[c(5,6)])) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
wfree = y[3:4]/y[7:8]/mrfold; wfree[y[3:4]==0] = 0
llmis = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,1,wfree),dim=c(4,numrates))), log=T)) # loglik free wmis
# c. free wmis, wnon and wspl
mrfold = max(1e-10, y[1]/y[5]) # Correction factor of "t"
w = y[2:4]/y[6:8]/mrfold; w[y[2:4]==0] = 0 # MLE of dN/dS based on the local rate (using syn muts as neutral)
llall = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,w),dim=c(4,numrates))), log=T)) # loglik free wmis, wnon, wspl
w[w>1e4] = 1e4
p = 1-pchisq(2*(llall-c(llmis,ll0)),df=c(1,3))
return(c(w,p))
}
sel_loc = as.data.frame(t(sapply(1:nrow(genemuts), selfun_loc)))
colnames(sel_loc) = c("wmis_loc","wnon_loc","wspl_loc","pmis_loc","pall_loc")
sel_loc$qmis_loc = p.adjust(sel_loc$pmis_loc, method="BH")
sel_loc$qall_loc = p.adjust(sel_loc$pall_loc, method="BH")
sel_loc = cbind(genemuts[,1:5],sel_loc)
sel_loc = sel_loc[order(sel_loc$pall_loc,sel_loc$pmis_loc,-sel_loc$wmis_loc),]
}
## 5. dNdScv: Negative binomial regression (with or without covariates) + local synonymous mutations
nbreg = nbregind = NULL
if (outp > 2) {
message("[5] Running dNdScv...")
# Covariates
if (is.null(cv)) {
nbrdf = genemuts[,c("n_syn","exp_syn")]
model = MASS::glm.nb(n_syn ~ offset(log(exp_syn)) - 1 , data = nbrdf)
message(sprintf(" Regression model for substitutions: no covariates were used (theta = %0.3g).", model$theta))
} else {
covs = covs[genemuts$gene_name,]
if (ncol(covs) > maxcovs) {
warning(sprintf("More than %s input covariates. Only the first %s will be considered.", maxcovs, maxcovs))
covs = covs[,1:maxcovs]
}
nbrdf = cbind(genemuts[,c("n_syn","exp_syn")], covs)
# Negative binomial regression for substitutions
if (nrow(genemuts)<500) { # If there are <500 genes, we run the regression without covariates
model = MASS::glm.nb(n_syn ~ offset(log(exp_syn)) - 1 , data = nbrdf)
} else {
model = tryCatch({
MASS::glm.nb(n_syn ~ offset(log(exp_syn)) + . , data = nbrdf) # We try running the model with covariates
}, warning = function(w){
MASS::glm.nb(n_syn ~ offset(log(exp_syn)) - 1 , data = nbrdf) # If there are warnings or errors we run the model without covariates
}, error = function(e){
MASS::glm.nb(n_syn ~ offset(log(exp_syn)) - 1 , data = nbrdf) # If there are warnings or errors we run the model without covariates
})
}
message(sprintf(" Regression model for substitutions (theta = %0.3g).", model$theta))
}
if (all(model$y==genemuts$n_syn)) {
genemuts$exp_syn_cv = model$fitted.values
}
theta = model$theta
nbreg = model
# Subfunction: Analytical opt_t using only neutral subs
mle_tcv = function(n_neutral, exp_rel_neutral, shape, scale) {
tml = (n_neutral+shape-1)/(exp_rel_neutral+(1/scale))
if (shape<=1) { # i.e. when theta<=1
tml = max(shape*scale,tml) # i.e. tml is bounded to the mean of the gamma (i.e. y[9]) when theta<=1, since otherwise it takes meaningless values
}
return(tml)
}
# Subfunction: dNdScv per gene
selfun_cv = function(j) {
y = as.numeric(genemuts[j,-1])
x = RefCDS[[j]]
exp_rel = y[5:8]/y[5]
# Gamma
shape = theta
scale = y[9]/theta
# a. Neutral model
indneut = 1:4 # vector of neutral mutation types under this model (1=synonymous, 2=missense, 3=nonsense, 4=essential_splice)
opt_t = mle_tcv(n_neutral=sum(y[indneut]), exp_rel_neutral=sum(exp_rel[indneut]), shape=shape, scale=scale)
mrfold = max(1e-10, opt_t/y[5]) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
ll0 = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,1,1,1),dim=c(4,numrates))), log=T)) + dgamma(opt_t, shape=shape, scale=scale, log=T) # loglik null model
# b. Missense model: wmis==1, free wnon, free wspl
indneut = 1:2
opt_t = mle_tcv(n_neutral=sum(y[indneut]), exp_rel_neutral=sum(exp_rel[indneut]), shape=shape, scale=scale)
mrfold = max(1e-10, opt_t/sum(y[5])) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
wfree = y[3:4]/y[7:8]/mrfold; wfree[y[3:4]==0] = 0
llmis = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,1,wfree),dim=c(4,numrates))), log=T)) + dgamma(opt_t, shape=shape, scale=scale, log=T) # loglik wmis==1 (free wnon, wspl)
# c. Truncating muts model: free wmis, wnon==wspl==1
indneut = c(1,3,4)
opt_t = mle_tcv(n_neutral=sum(y[indneut]), exp_rel_neutral=sum(exp_rel[indneut]), shape=shape, scale=scale)
mrfold = max(1e-10, opt_t/sum(y[5])) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
wfree = y[2]/y[6]/mrfold; wfree[y[2]==0] = 0
lltrunc = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,wfree,1,1),dim=c(4,numrates))), log=T)) + dgamma(opt_t, shape=shape, scale=scale, log=T) # loglik wnon==wspl==1 (free wmis)
# d. Free selection model: free wmis, free wnon, free wspl
indneut = 1
opt_t = mle_tcv(n_neutral=sum(y[indneut]), exp_rel_neutral=sum(exp_rel[indneut]), shape=shape, scale=scale)
mrfold = max(1e-10, opt_t/sum(y[5])) # Correction factor of "t" based on the obs/exp ratio of "neutral" mutations under the model
wfree = y[2:4]/y[6:8]/mrfold; wfree[y[2:4]==0] = 0
llall_unc = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,wfree),dim=c(4,numrates))), log=T)) + dgamma(opt_t, shape=shape, scale=scale, log=T) # loglik free wmis, wnon, wspl
if (constrain_wnon_wspl == 0) {
p = 1-pchisq(2*(llall_unc-c(llmis,lltrunc,ll0)),df=c(1,2,3))
return(c(wfree,p))
} else { # d2. Free selection model: free wmis, free wnon==wspl
wmisfree = y[2]/y[6]/mrfold; wmisfree[y[2]==0] = 0
wtruncfree = sum(y[3:4])/sum(y[7:8])/mrfold; wtruncfree[sum(y[3:4])==0] = 0
llall = sum(dpois(x=x$N, lambda=x$L*mutrates*mrfold*t(array(c(1,wmisfree,wtruncfree,wtruncfree),dim=c(4,numrates))), log=T)) + dgamma(opt_t, shape=shape, scale=scale, log=T) # loglik free wmis, free wnon==wspl
p = 1-pchisq(2*c(llall_unc-llmis,llall-c(lltrunc,ll0)),df=c(1,1,2))
return(c(wmisfree,wtruncfree,wtruncfree,p))
}
}
sel_cv = as.data.frame(t(sapply(1:nrow(genemuts), selfun_cv)))
colnames(sel_cv) = c("wmis_cv","wnon_cv","wspl_cv","pmis_cv","ptrunc_cv","pallsubs_cv")
sel_cv$qmis_cv = p.adjust(sel_cv$pmis_cv, method="BH")
sel_cv$qtrunc_cv = p.adjust(sel_cv$ptrunc_cv, method="BH")
sel_cv$qallsubs_cv = p.adjust(sel_cv$pallsubs_cv, method="BH")
sel_cv = cbind(genemuts[,1:5],sel_cv)
sel_cv = sel_cv[order(sel_cv$pallsubs_cv, sel_cv$pmis_cv, sel_cv$ptrunc_cv, -sel_cv$wmis_cv),] # Sorting genes in the output file
## Indel recurrence: based on a negative binomial regression (ideally fitted excluding major known driver genes)
if (nrow(indels) >= min_indels) {
geneindels = as.data.frame(array(0,dim=c(length(RefCDS),8)))
colnames(geneindels) = c("gene_name","n_ind","n_induniq","n_indused","cds_length","excl","exp_unif","exp_indcv")
geneindels$gene_name = sapply(RefCDS, function(x) x$gene_name)
geneindels$n_ind = as.numeric(table(indels$gene)[geneindels[,1]]); geneindels[is.na(geneindels[,2]),2] = 0
geneindels$n_induniq = as.numeric(table(unique(indels[,-1])$gene)[geneindels[,1]]); geneindels[is.na(geneindels[,3]),3] = 0
geneindels$cds_length = sapply(RefCDS, function(x) x$CDS_length)
if (use_indel_sites) {
geneindels$n_indused = geneindels[,3]
} else {
geneindels$n_indused = geneindels[,2]
}
# Excluding known cancer genes (first using the input or the default list, but if this fails, we use a shorter data-driven list)
geneindels$excl = (geneindels[,1] %in% known_cancergenes)
min_bkg_genes = 50
if (sum(!geneindels$excl)<min_bkg_genes | sum(geneindels[!geneindels$excl,"n_indused"]) == 0) { # If the exclusion list is too restrictive (e.g. targeted sequencing of cancer genes), then identify a shorter list of selected genes using the substitutions.
newkc = as.vector(sel_cv$gene_name[sel_cv$qallsubs_cv<0.01])
geneindels$excl = (geneindels[,1] %in% newkc)
if (sum(!geneindels$excl)<min_bkg_genes | sum(geneindels[!geneindels$excl,"n_indused"]) == 0) { # If the new exclusion list is still too restrictive, then do not apply a restriction.
geneindels$excl = F
message(" No gene was excluded from the background indel model.")
} else {
warning(sprintf(" Genes were excluded from the indel background model based on the substitution data: %s.", paste(newkc, collapse=", ")))
}
}
geneindels$exp_unif = sum(geneindels[!geneindels$excl,"n_indused"]) / sum(geneindels[!geneindels$excl,"cds_length"]) * geneindels$cds_length
# Negative binomial regression for indels
if (is.null(cv)) {
nbrdf = geneindels[,c("n_indused","exp_unif")][!geneindels[,6],] # We exclude known drivers from the fit
model = MASS::glm.nb(n_indused ~ offset(log(exp_unif)) - 1 , data = nbrdf)
nbrdf_all = geneindels[,c("n_indused","exp_unif")]
} else {
nbrdf = cbind(geneindels[,c("n_indused","exp_unif")], covs)[!geneindels[,6],] # We exclude known drivers from the fit
if (sum(!geneindels$excl)<500) { # If there are <500 genes, we run the regression without covariates
model = MASS::glm.nb(n_indused ~ offset(log(exp_unif)) - 1 , data = nbrdf)
} else {
model = tryCatch({
MASS::glm.nb(n_indused ~ offset(log(exp_unif)) + . , data = nbrdf) # We try running the model with covariates
}, warning = function(w){
MASS::glm.nb(n_indused ~ offset(log(exp_unif)) - 1 , data = nbrdf) # If there are warnings or errors we run the model without covariates
}, error = function(e){
MASS::glm.nb(n_indused ~ offset(log(exp_unif)) - 1 , data = nbrdf) # If there are warnings or errors we run the model without covariates
})
}
nbrdf_all = cbind(geneindels[,c("n_indused","exp_unif")], covs)
}
message(sprintf(" Regression model for indels (theta = %0.3g)", model$theta))
theta_indels = model$theta
nbregind = model
geneindels$exp_indcv = exp(predict(model,nbrdf_all))
geneindels$wind = geneindels$n_indused / geneindels$exp_indcv
# Statistical testing for indel recurrence per gene
geneindels$pind = pnbinom(q=geneindels$n_indused-1, mu=geneindels$exp_indcv, size=theta_indels, lower.tail=F)
geneindels$qind = p.adjust(geneindels$pind, method="BH")
# Fisher combined p-values (substitutions and indels)
sel_cv = merge(sel_cv, geneindels, by="gene_name")[,c("gene_name","n_syn","n_mis","n_non","n_spl","n_indused","wmis_cv","wnon_cv","wspl_cv","wind","pmis_cv","ptrunc_cv","pallsubs_cv","pind","qmis_cv","qtrunc_cv","qallsubs_cv")]
colnames(sel_cv) = c("gene_name","n_syn","n_mis","n_non","n_spl","n_ind","wmis_cv","wnon_cv","wspl_cv","wind_cv","pmis_cv","ptrunc_cv","pallsubs_cv","pind_cv","qmis_cv","qtrunc_cv","qallsubs_cv")
sel_cv$pglobal_cv = 1 - pchisq(-2 * (log(sel_cv$pallsubs_cv) + log(sel_cv$pind_cv)), df = 4)
sel_cv$qglobal_cv = p.adjust(sel_cv$pglobal_cv, method="BH")
sel_cv = sel_cv[order(sel_cv$pglobal_cv, sel_cv$pallsubs_cv, sel_cv$pmis_cv, sel_cv$ptrunc_cv, -sel_cv$wmis_cv),] # Sorting genes in the output file
}
}
if (!any(!is.na(wrong_ref))) {
wrong_refbase = NULL # Output value if there were no wrong bases
}
annot = annot[,setdiff(colnames(annot),c("start","end","geneind"))]
if (outmats) {
dndscvout = list(globaldnds = globaldnds, sel_cv = sel_cv, sel_loc = sel_loc, annotmuts = annot, genemuts = genemuts, mle_submodel = mle_submodel, exclsamples = exclsamples, exclmuts = exclmuts, nbreg = nbreg, nbregind = nbregind, poissmodel = poissmodel, wrongmuts = wrong_refbase, N = Nall, L = Lall)
} else {
dndscvout = list(globaldnds = globaldnds, sel_cv = sel_cv, sel_loc = sel_loc, annotmuts = annot, genemuts = genemuts, mle_submodel = mle_submodel, exclsamples = exclsamples, exclmuts = exclmuts, nbreg = nbreg, nbregind = nbregind, poissmodel = poissmodel, wrongmuts = wrong_refbase)
}
} # EOF
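The Fisher combination used for `pglobal_cv` above (two p-values, df = 4) can be sanity-checked in isolation; a toy sketch with made-up p-values, not study output:

```r
# Toy check of Fisher's method for combining two p-values (df = 2*2 = 4),
# mirroring the pglobal_cv line above; the p-values here are made up.
p_subs <- 0.01
p_ind  <- 0.20
p_global <- 1 - pchisq(-2 * (log(p_subs) + log(p_ind)), df = 4)
p_global  # combined evidence across substitutions and indels
```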
|
#!/usr/bin/env Rscript
suppressPackageStartupMessages(require(locfit))
suppressPackageStartupMessages(require(fields))
suppressPackageStartupMessages(require(leaps))
#January data
data.jan <- read.table('data/colo_monthly_precip_01.dat',header=TRUE)
data.jan <- data.jan[!(data.jan$precip == -999.999),]
y.jan <- data.jan$precip
xdata.jan <- as.data.frame(data.jan[c('lat','lon','elev')])
x.jan <- as.matrix(xdata.jan)
#July Data
data.jul <- read.table('data/colo_monthly_precip_07.dat',header=TRUE)
data.jul <- data.jul[!(data.jul$precip == -999.999),]
y.jul <- data.jul$precip
xdata.jul <- as.data.frame(data.jul[c('lat','lon','elev')])
x.jul <- as.matrix(xdata.jul)
#DEM data
dem <- as.matrix(read.table('data/colo_dem.dat',header=T))
dem.x <- dem[,1]
dem.y <- dem[,2]
dem.z <- dem[,3]
# prepare the data for plotting
nx <- length(unique(dem.x))
ny <- length(unique(dem.y))
dem.x <- sort(unique(dem.x))
dem.y <- sort(unique(dem.y))
#returns the best alpha and degree
best.par <-
function(x,y, a = seq(0.2,1.0,by=0.05), n = length(a), f=c(gcvplot,aicplot)){
# get the gcv values for all combinations of deg and alpha
d1 <- f(y~x, deg=1, alpha=a, kern='bisq', scale=T)
d2 <- f(y~x, deg=2, alpha=a, kern='bisq', scale=T)
gcvs <- c(d1$values,d2$values)
best <- order(gcvs)[1]
#get the best alpha and degree
bestd <- c(rep(1,n),rep(2,n))[best]
bestalpha <- c(a,a)[best]
return(list(p=bestd,a=bestalpha,gcv=gcvs[best]))
}
best.pred <- function(x,y,f){
combo <- leaps(x,y)$which
nc <- nrow(combo)
best <- data.frame(a=numeric(nc),p=numeric(nc),gcv=numeric(nc))
for(i in 1:nc){
this.best <- best.par(x[,combo[i,]],y,f=f)
best$a[i] <- this.best$a
best$p[i] <- this.best$p
best$gcv[i] <- this.best$gcv
}
return(list(par=best,which=combo))
}
#get the best degree and alpha for each month
best.jan <- best.par(x.jan,y.jan,f=gcvplot)
best.jul <- best.par(x.jul,y.jul,f=gcvplot)
best.jul.aic <- best.par(x.jul,y.jul,f=aicplot)
best.models.jul <- best.pred(x.jul,y.jul,f=gcvplot)
best.models.jul.aic <- best.pred(x.jul,y.jul,f=aicplot)
#Fit models for january
# At the grid points
datfit.jan <- locfit(y.jan~x.jan, deg=best.jan$p, alpha = best.jan$a,
kern='bisq', scale=T, ev=dat())
# At the data density based locations
fit.jan <- locfit(y.jan~x.jan, deg=best.jan$p, alpha = best.jan$a,
kern='bisq', scale=T)
# And the cross validated estimates
fitcv.jan <- locfit(y.jan~x.jan, deg=best.jan$p, alpha=best.jan$a,
kern="bisq", ev = dat(cv=TRUE), scale=TRUE)
#Fit models for July
datfit.jul <- locfit(y.jul~x.jul, deg=best.jul$p, alpha = best.jul$a,
kern='bisq', scale=T, ev=dat())
fit.jul <- locfit(y.jul~x.jul, deg=best.jul$p, alpha = best.jul$a,
kern='bisq', scale=T)
fitcv.jul <- locfit(y.jul~x.jul, deg=best.jul$p, alpha=best.jul$a,
kern="bisq", ev = dat(cv=TRUE), scale=TRUE)
#get estimates at dem points
pred.jan <- predict( fit.jan, newdata = dem, se.fit=T )
pred.jul <- predict( fit.jul, newdata = dem, se.fit=T )
#fit linear models
# y must be a vector, xdata a data.frame
# January
lmfit.jan <- lm(y.jan~., data = xdata.jan)
lmpred.jan <- predict.lm( lmfit.jan, as.data.frame(dem), se.fit=T)
# July
lmfit.jul <- lm(y.jul~., data = xdata.jul)
lmpred.jul <- predict.lm( lmfit.jul, as.data.frame(dem), se.fit=T)
#January grid
locfitgrid.jan <- matrix(pred.jan$fit, nrow = ny, byrow=T)
lmgrid.jan <- matrix(lmpred.jan$fit, nrow = ny, byrow=T)
#july grid
locfitgrid.jul <- matrix(pred.jul$fit, nrow = ny, byrow=T)
lmgrid.jul <- matrix(lmpred.jul$fit, nrow = ny, byrow=T)
#January error grid
locfitgrid.se.jan <- matrix(pred.jan$se.fit, nrow = ny, byrow=T)
lmgrid.se.jan <- matrix(lmpred.jan$se.fit, nrow = ny, byrow=T)
#july error grid
locfitgrid.se.jul <- matrix(pred.jul$se.fit, nrow = ny, byrow=T)
lmgrid.se.jul <- matrix(lmpred.jul$se.fit, nrow = ny, byrow=T)
# lm cv predictions
lmpred.cv.jan <- numeric(length(y.jan))
lmpred.cv.jul <- numeric(length(y.jul))
#January
for(i in 1:length(y.jan)){
tmp <- lm(y.jan[-i]~., data = xdata.jan[-i,])
lmpred.cv.jan[i] <- predict.lm(tmp, as.data.frame(xdata.jan[i,]))
}
# July
for(i in 1:length(y.jul)){
tmp <- lm(y.jul[-i]~., data = xdata.jul[-i,])
lmpred.cv.jul[i] <- predict.lm(tmp, as.data.frame(xdata.jul[i,]))
}
save(list=ls(),file='output/2.Rdata') | /data-analysis/1-localRegression/src/2.R | no_license | cameronbracken/classy | R | false | false | 4,296 | r |
library(SSDforR)
### Name: IQRbandgraph
### Title: Interquartile band graph for one phase
### Aliases: IQRbandgraph
### Keywords: ~kwd1 ~kwd2
### ** Examples
cry<-c(3, 4, 2, 5, 3, 4, NA, 2, 2, 3, 2, 1, 2, NA, 2, 2, 1, 2, 1, 0, 0, 0)
pcry<-c("A", "A", "A", "A", "A", "A", NA, "B", "B", "B", "B", "B", "B",
NA, "B1", "B1", "B1", "B1", "B1", "B1", "B1", "B1")
IQRbandgraph(cry,pcry,"A","week","amount","Crying")
| /data/genthat_extracted_code/SSDforR/examples/IQRbandgraph.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 416 | r |
# Example
data = read.csv("seeds.txt", sep = "", header = FALSE)
head(data)
source("mcmc_ess.r")
T = 1000
# Run the MCMC
out = post_ess_graphs(Y = data, m = 14, T = T, verbose = TRUE)
# Compute (estimated) posterior probabilities of edge inclusion
q = ncol(data)
burn = 200
probs = matrix((rowMeans(out[,(burn + 1):(T)])), q, q)
probs
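A common follow-up (our own sketch, not part of `mcmc_ess.r`) is to threshold the posterior edge-inclusion probabilities at 0.5, giving a median-probability graph estimate:

```r
# Threshold posterior edge-inclusion probabilities at 0.5 (our assumption);
# probs_demo stands in for the script's probs matrix with toy values.
probs_demo <- matrix(c(0.0, 0.9, 0.1,
                       0.9, 0.0, 0.7,
                       0.1, 0.7, 0.0), nrow = 3)
adj <- (probs_demo > 0.5) * 1  # adjacency matrix of the estimated graph
adj
```

With the script's own matrix, the same line reads `(probs > 0.5) * 1`.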
| /example.R | no_license | FedeCastelletti/obayes_learn_essential_graphs | R | false | false | 346 | r |
#' Compute Subsets
#'
#' Compute the subsets of a given set.
#'
#' Note that this algorithm is run in R: it is therefore not
#' intended to be the most efficient algorithm for computing
#' subsets.
#'
#' @param set the original set
#' @param sizes desired size(s) of subsets
#' @param include_null should the empty vector be included?
#' @return a list of subsets as vectors
#' @export subsets
#' @seealso [utils::combn()]
#' @examples
#'
#'
#' subsets(1:3)
#' subsets(1:3, size = 2)
#' subsets(1:3, include_null = TRUE)
#'
#' subsets(c('a','b','c','d'))
#' subsets(c('a','b','c','d'), include_null = TRUE)
#'
subsets <- function(set, sizes = 1:length(set), include_null = FALSE){
if((length(set) == 1) && (is.numeric(set) || is.integer(set) )){
set <- 1:set
}
subsetsBySize <- lapply(sizes, function(n){
combn(length(set), n, function(x){
set[x]
}, simplify = FALSE)
})
out <- unlist(subsetsBySize, recursive = FALSE)
if(include_null) out <- unlist(list(list(set[0]), out), recursive = FALSE)
out
} | /R/subsets.r | no_license | dkahle/algstat | R | false | false | 1,063 | r |
get.terminal.width <- function(){
stty.loc <- system("which stty", intern=TRUE, ignore.stderr=TRUE)
if(length(stty.loc) == 0) return(getOption("width"))
stty.command <- sprintf("%s size", stty.loc)
stty.result <- system(stty.command, intern=TRUE, ignore.stderr=TRUE)
if(length(stty.result) == 0) return(getOption("width"))
if(length(grep("^\\d+\\s+(\\d+)$", stty.result, perl=TRUE)) != 1) return(getOption("width"))
return(as.numeric(gsub("^\\d+\\s+(\\d+)$", "\\1", stty.result, perl=TRUE)))
}
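A hedged usage sketch (ours, not part of the package): wrap console output to the detected width, with a guard so the snippet also runs where the function is not loaded:

```r
# Usage sketch for get.terminal.width(); the exists() guard is only so this
# snippet runs standalone and mirrors the function's getOption("width") fallback.
width <- if (exists("get.terminal.width")) get.terminal.width() else getOption("width")
msg <- paste(rep("word", 40), collapse = " ")
cat(strwrap(msg, width = width), sep = "\n")
```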
| /R/get.terminal.width.R | no_license | alastair-droop/apdMisc | R | false | false | 508 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/monarchr.R
\name{bioentity_exp_anatomy_assoc_w_gene}
\alias{bioentity_exp_anatomy_assoc_w_gene}
\title{Gets expression anatomy associated with a gene.}
\usage{
bioentity_exp_anatomy_assoc_w_gene(gene, rows = 100)
}
\arguments{
\item{gene}{A valid monarch initiative gene id.}
\item{rows}{Number of rows of results to fetch.}
}
\value{
A list of (tibble of anatomy information, monarch_api S3 class).
}
\description{
Given a gene, what expression anatomy is associated with it.
}
\details{
https://api.monarchinitiative.org/api/bioentity/gene/NCBIGene%3A8314/expression/anatomy?rows=100&fetch_objects=true
}
\examples{
gene <- "NCBIGene:8314"
bioentity_exp_anatomy_assoc_w_gene(gene)
}
| /man/bioentity_exp_anatomy_assoc_w_gene.Rd | no_license | charlieccarey/monarchr | R | false | true | 764 | rd |
# Load libraries
library(tidyverse)
library(ggpubr)
library(rstatix)
library(ggpmisc)
options("scipen"=100, "digits"=5)
# Prepare Data Frame
summary <- data.frame(PredictionModel=integer(),TotalTime=integer(), Proband=integer(),stringsAsFactors=FALSE)
# Read original CSV Data
filenames <- c()
for(i in 1:24) {
filenames <- c(filenames, paste("Daten/FittsLaw_Unity/Proband_",i, "_Unity_FittsLawTask.csv", sep=""))
}
# Prepare data for evaluation: compute the key Fitts' law measures (ID, IDe, SDx, Ae, Throughput)
for(i in 1:length(filenames)) {
print(filenames[i])
df <- read.table(filenames[i],header = TRUE,sep = ";")
# Remove unimportant columns
df$CircleCounter <- NULL
df$StartPointX <- NULL
df$StartPointY <- NULL
df$StartPointZ <- NULL
df$EndPointX <- NULL
df$EndPointY <- NULL
df$EndPointZ <- NULL
df$SelectPointX <- NULL
df$SelectPointY <- NULL
df$SelectpointZ <- NULL
df$NormalizedSelectionPointX <- NULL
df$NormalizedSelectionPointY <- NULL
df$NormalizedSelectionPointZ <- NULL
df$DX <- NULL
df$AE <- NULL
library(dplyr)
# Filter on only the important entry after each trial was ended
df <- filter(df, Notice == "endedTrial")
# Read rows in correct type
df[, 1] <- as.numeric( df[, 1] )
df[, 9] <- gsub(",", ".",df[, 9])
df[, 10] <- gsub(",", ".",df[, 10])
df[, 9] <- as.numeric(( df[, 9] ))
df[, 10] <- as.numeric(( df[, 10] ))
df[, 8] <- as.numeric(as.character( df[, 8] ))
df[, 7] <- as.numeric(as.character( df[, 7] ))
df[, 2] <- as.numeric(as.character( df[, 2] ))
df[, 13] <- as.character( df[, 13] )
df[, 13] <- gsub(",", ".",df[, 13])
df[, 14] <- as.character( df[, 14] )
df[, 14] <- gsub(",", ".",df[, 14])
# Compute the key values for the evaluation
for(i in 1:nrow(df)) {
dx <- ((strsplit(df[i,13], "_")))
df[i,15] <- sd(as.numeric(unlist(dx)))
ae <- ((strsplit(df[i,14], "_")))
df[i,16] <- mean(as.numeric(unlist(ae)))
id_shannon <- log2(df[i, 9]/(df[i, 10])+1)
id_shannon_e <- log2((df[i, 16]/(df[i, 15] * 4.133))+1)
mean_movement_time <- (df[i, 11]/16.0)/1000
throughput <- id_shannon_e/mean_movement_time
df[i,17] <- id_shannon
df[i,18] <- id_shannon_e
df[i,19] <- mean_movement_time
df[i,20] <- throughput
}
# Trial Time Computation
#totalTrialTime["Proband"] <- c(df[1,2], df[1,2],df[1,2],df[1,2],df[1,2],df[1,2])
#colnames(totalTrialTime) <- c("PredictionModel", "TotalTime","Proband")
# Append values
colnames(df)[15] <- "SDx"
colnames(df)[16] <- "Mean AE"
colnames(df)[17] <- "ID_Shannon"
colnames(df)[18] <- "IDE"
colnames(df)[19] <- "MeanMovementTime"
colnames(df)[20] <- "Throughput"
df[, 15] <- as.numeric( df[ ,15] )
df[, 16] <- as.numeric( df[ ,16] )
df[, 17] <- as.numeric( df[ ,17] )
df[, 18] <- as.numeric( df[ ,18] )
df[, 19] <- as.numeric( df[, 19] )
df[, 20] <- as.numeric( df[, 20] )
summary <- rbind(summary, df)
}
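The per-trial quantities computed in the loop above follow the effective-measures (Shannon) formulation of Fitts' law, where the constant 4.133 converts the endpoint spread SDx into an effective width; a standalone sketch with toy numbers, not study data:

```r
# Standalone sketch of the effective Fitts' law measures used above
# (toy numbers, not study data).
ae  <- 2.4    # mean effective amplitude
sdx <- 0.31   # sd of selection endpoints
mt  <- 0.82   # mean movement time per target [s]
ide <- log2(ae / (sdx * 4.133) + 1)  # effective index of difficulty [bit]
tp  <- ide / mt                      # throughput [bit/s]
round(c(IDe = ide, Throughput = tp), 3)
```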
# Create Dataframe for Throughput Evaluation
total_throughputs <- data.frame(matrix(ncol = 3, nrow = 0))
# Fill Dataframe for Throughput Evaluation with the average Throughput for each Participant per Condition
number_of_participants <- 24
prediction_models <- c(-12, 0, 12, 24, 36, 48)
for (participant in 1:number_of_participants) {
for (predicton_model in prediction_models) {
condition_subset <- summary[summary$PredictionModel == predicton_model, ]
participant_and_condition_subset <- condition_subset[condition_subset$ProbandenID == participant, ]
mean_throughput_for_proband_and_condition <- mean(participant_and_condition_subset$Throughput)
total_throughputs <- rbind(total_throughputs, c(participant, predicton_model,mean_throughput_for_proband_and_condition ))
}
}
total_throughputs_names <- c("ProbandenID", "PredictionModel", "Throughput")
colnames(total_throughputs) <- total_throughputs_names
# Change grouping columns into factors for Anova with repeated measures
total_throughputs$ProbandenID <- factor(total_throughputs$ProbandenID)
total_throughputs$PredictionModel <- factor(total_throughputs$PredictionModel)
# Get Simple Summarizing Statistics
total_throughputs %>%
#group_by(PredictionModel) %>%
get_summary_stats(Throughput, type = "mean_sd")
# Get Simple Boxplot
#bxp <- ggboxplot(total_throughputs, x = "PredictionModel", y = "Throughput", add = "point")
#bxp
total_throughputs$PredictionModel <- factor(total_throughputs$PredictionModel, levels = c("-12", "0", "12", "24", "36", "48"),
labels = c("-48 ms", "Base", "+48 ms", "+96 ms", "+144 ms", "+192 ms"))
p_throughput <- ggplot(total_throughputs, aes(x=PredictionModel, y=Throughput, fill=PredictionModel)) +
  geom_boxplot(outlier.shape=16, outlier.size=2, position=position_dodge2(width=0.9, preserve="single"), width=0.9) +
  ylab(label = "Throughput [bit/s]") +
  xlab(label = "") +
  scale_x_discrete(position = "bottom", labels = NULL) +
  stat_summary(fun=mean, geom="point", shape=4, size=5, color="black") +
  #ggtitle("Throughput")
  theme_light() +
  #theme(legend.position = "none") +
  guides(fill=guide_legend(title="Prediction Time Offset")) +
  theme(legend.position="bottom", text = element_text(size=20))
p_throughput
# Save explicitly instead of chaining ggsave() with "+", which relies on last_plot()
ggsave("boxplotchart_Throughput.pdf", p_throughput, width=10, height=6, device=cairo_pdf)
# Check Assumptions for repeated measures Anova
total_throughputs %>%
group_by(PredictionModel) %>%
identify_outliers(Throughput)
# No extreme outliers => Outlier Assumption is met
# Check Normality Assumption
total_throughputs %>%
group_by(PredictionModel) %>%
shapiro_test(Throughput)
# Any condition with p < 0.05 violates the normality assumption
ggqqplot(total_throughputs, "Throughput", facet.by = "PredictionModel")
# This would be the repeated-measures ANOVA code, but it is not used here since the prerequisites are not met
# (the assumption of normality is not given for total throughput)
#res.aov <- anova_test(data = total_throughputs, dv = Throughput, wid = ProbandenID, within = PredictionModel)
#get_anova_table(res.aov)
# Would compute group comparisons using pairwise t tests with Bonferroni multiple testing correction method if Anova is significant
#pwc <- total_throughputs %>% pairwise_t_test(Throughput ~ PredictionModel, paired = TRUE, p.adjust.method = "bonferroni")
#pwc
# Since the prerequisites for a repeated-measures ANOVA are not met, we use a non-parametric alternative, the Friedman test
res.fried <- total_throughputs %>% friedman_test(Throughput ~ PredictionModel |ProbandenID)
res.fried
# p < 0.05 => There are significant differences between the groups
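For cross-checking, base R's `friedman.test` computes the same statistic without rstatix; a toy example with simulated data (ours, not the study data):

```r
# Base-R cross-check of the Friedman test on toy data (ours, not study data):
# one observation per condition x subject, as the design requires.
set.seed(1)
toy <- data.frame(
  y    = rnorm(30),
  cond = factor(rep(1:3, each  = 10)),
  subj = factor(rep(1:10, times = 3))
)
res <- friedman.test(y ~ cond | subj, data = toy)
res$parameter  # degrees of freedom = number of conditions - 1
```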
# Compute effect size
res.fried.effect <- total_throughputs %>% friedman_effsize(Throughput ~ PredictionModel |ProbandenID)
res.fried.effect
# Compute group comparisons using pairwise Wilcoxon signed-rank tests with Bonferroni multiple testing correction method
pwc <- total_throughputs %>% wilcox_test(Throughput ~ PredictionModel, paired = TRUE, p.adjust.method = "bonferroni")
pwc
# Visualization: box plots with p-values
pwc <- pwc %>% add_xy_position(x = "PredictionModel")
ggboxplot(total_throughputs, x = "PredictionModel", y = "Throughput", add = "point") +
stat_pvalue_manual(pwc, hide.ns = TRUE) +
labs(
subtitle = get_test_label(res.fried, detailed = TRUE),
caption = get_pwc_label(pwc)
)
# Visualize Fitts Slope for Throughput
total_throughputs_per_condition_and_id <- data.frame(matrix(ncol = 3, nrow = 0))
fitts_width <- c(1.5,2.5,3.0)
fitts_target_width <- c(0.15,0.30,0.70)
prediction_models <- c(-12, 0, 12, 24, 36, 48)
for (width in fitts_width) {
for (target_width in fitts_target_width) {
for (predicton_model in prediction_models) {
subset_id_and_cond <- filter(summary, BigCircleRadius == width, SmallCircleRadius == target_width, PredictionModel == predicton_model)
id_s <- subset_id_and_cond[1,17]
mean_throughput_for_id_and_cond <- mean(subset_id_and_cond$Throughput)
total_throughputs_per_condition_and_id <- rbind(total_throughputs_per_condition_and_id, c(id_s, predicton_model,mean_throughput_for_id_and_cond ))
}
}
}
total_throughputs_per_condition_and_id_names <- c("ID_Shannon", "PredictionModel", "Throughput")
colnames(total_throughputs_per_condition_and_id) <- total_throughputs_per_condition_and_id_names
total_throughputs_per_condition_and_id$PredictionModel <- factor(total_throughputs_per_condition_and_id$PredictionModel)
my.formula <- y ~ x
ggplot(total_throughputs_per_condition_and_id, aes(x=ID_Shannon, y = Throughput, color=PredictionModel)) +
coord_fixed(ratio = 1) +
geom_point() +
geom_smooth(method="lm",se=FALSE, formula = my.formula) +
ggtitle("Fitts' Law Model: Throughput over ID") +
stat_poly_eq(formula = my.formula,
eq.with.lhs = "italic(hat(y))~`=`~",
aes(label = paste(..eq.label.., ..rr.label.., sep = "~~~")),
vstep = 0.03,
show.legend = TRUE,
size = 2.4,
parse = TRUE) +
ylab(label = "Throughput [bit/s]") +
scale_x_continuous("ID [bit]", position = "bottom")+
theme_light()
# ggsave is called separately; chaining it onto the plot with + is not supported by current ggplot2
ggsave("lin_Throughput_ID.pdf", width=8, height=6, device=cairo_pdf)
## regression
descStats <- function(x) c(mean = mean(x),
sd = sd(x), se = sd(x)/sqrt(length(x)),
ci = qt(0.95,df=length(x)-1)*sd(x)/sqrt(length(x)))
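A quick check of descStats on a small vector; note that qt(0.95, df) is a one-sided 95% quantile, so the ci component is the half-width of a 90% two-sided confidence interval:

```r
descStats <- function(x) c(mean = mean(x),
  sd = sd(x), se = sd(x)/sqrt(length(x)),
  ci = qt(0.95, df = length(x) - 1) * sd(x)/sqrt(length(x)))

s <- descStats(c(2, 4, 6, 8))
print(s)  # mean = 5, sd = sqrt(20/3), se = sd/2
```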
total_throughputs_2 <- total_throughputs
total_throughputs_2$PredictionModel <- factor(total_throughputs_2$PredictionModel, levels = c("-48 ms", "Base", "+48 ms", "+96 ms", "+144 ms", "+192 ms"),
labels =c("-12", "0", "12", "24", "36", "48"))
total_throughputs_2$PredictionModel <- as.numeric(levels(total_throughputs_2$PredictionModel))[total_throughputs_2$PredictionModel]
total_throughputs_2_means <- aggregate(total_throughputs_2$Throughput, by=list(total_throughputs_2$PredictionModel, total_throughputs_2$ProbandenID), FUN=mean)
total_throughputs_2_means <- do.call(data.frame, total_throughputs_2_means)
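The as.numeric(levels(f))[f] idiom used above is the safe way to recover numeric values from a factor; plain as.numeric(f) would silently return the internal level indices instead:

```r
f <- factor(c("12", "0", "48", "12"))
idx  <- as.numeric(f)             # level indices (2 1 3 2), not the values
vals <- as.numeric(levels(f))[f]  # the actual values: 12 0 48 12
print(vals)
```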
ggplot() +
geom_boxplot(data=total_throughputs,aes(x=PredictionModel, y=Throughput, fill=PredictionModel), outlier.shape=16,outlier.size=2, position=position_dodge2(width=0.9, preserve="single"), width=0.9) +
geom_smooth(data = total_throughputs_2_means,aes(x=PredictionModel, y=Throughput), method="lm", se=TRUE, fill=NA,
formula=y ~ poly(x, 2, raw=TRUE),colour="red") +
ylab(label = "Throughput [bit/s]") +
xlab(label ="") +
stat_summary(fun=mean, geom="point", shape=4, size=5, color="black") +
#ggtitle("Throughput")
theme_light() +
#theme(legend.position = "none") +
guides(fill=guide_legend(title="Prediction Time Offset")) +
theme(legend.position="bottom", text = element_text(size=20))
# ggsave is called separately; chaining it onto the plot with + is not supported by current ggplot2
ggsave("rrrr", width=10, height=6, device=cairo_pdf)
| /Auswertung/EvaluationThroughput.R | no_license | andreasPfaffelhuber/Faster-than-in-Real-Time | R | false | false | 11,109 | r | # Load libraries
library(tidyverse)
library(ggpubr)
library(rstatix)
library(ggpmisc)
options("scipen"=100, "digits"=5)
# Prepare Data Frame
summary <- data.frame(PredictionModel=integer(),TotalTime=integer(), Proband=integer(),stringsAsFactors=FALSE)
# Read original CSV Data
filenames <- c()
for(i in 1:24) {
filenames <- c(filenames, paste("Daten/FittsLaw_Unity/Proband_",i, "_Unity_FittsLawTask.csv", sep=""))
}
# Prepare Data for Evaluation, compute the relevant derived values (ID, IDe, SDx, Ae, Throughput)
for(i in 1:length(filenames)) {
print(filenames[i])
df <- read.table(filenames[i],header = TRUE,sep = ";")
# Remove unimportant columns
df$CircleCounter <- NULL
df$StartPointX <- NULL
df$StartPointY <- NULL
df$StartPointZ <- NULL
df$EndPointX <- NULL
df$EndPointY <- NULL
df$EndPointZ <- NULL
df$SelectPointX <- NULL
df$SelectPointY <- NULL
df$SelectpointZ <- NULL
df$NormalizedSelectionPointX <- NULL
df$NormalizedSelectionPointY <- NULL
df$NormalizedSelectionPointZ <- NULL
df$DX <- NULL
df$AE <- NULL
library(dplyr)
# Filter on only the important entry after each trial was ended
df <- filter(df, Notice == "endedTrial")
# Coerce columns to the correct types
df[, 1] <- as.numeric( df[, 1] )
df[, 9] <- gsub(",", ".",df[, 9])
df[, 10] <- gsub(",", ".",df[, 10])
df[, 9] <- as.numeric(( df[, 9] ))
df[, 10] <- as.numeric(( df[, 10] ))
df[, 8] <- as.numeric(as.character( df[, 8] ))
df[, 7] <- as.numeric(as.character( df[, 7] ))
df[, 2] <- as.numeric(as.character( df[, 2] ))
df[, 13] <- as.character( df[, 13] )
df[, 13] <- gsub(",", ".",df[, 13])
df[, 14] <- as.character( df[, 14] )
df[, 14] <- gsub(",", ".",df[, 14])
# Compute the relevant values for evaluation
for(i in 1:nrow(df)) {
dx <- ((strsplit(df[i,13], "_")))
df[i,15] <- sd(as.numeric(unlist(dx)))
ae <- ((strsplit(df[i,14], "_")))
df[i,16] <- mean(as.numeric(unlist(ae)))
id_shannon <- log2(df[i, 9]/(df[i, 10])+1)
id_shannon_e <- log2((df[i, 16]/(df[i, 15] * 4.133))+1)
mean_movement_time <- (df[i, 11]/16.0)/1000
throughput <- id_shannon_e/mean_movement_time
df[i,17] <- id_shannon
df[i,18] <- id_shannon_e
df[i,19] <- mean_movement_time
df[i,20] <- throughput
}
# Trial Time Computation
#totalTrialTime["Proband"] <- c(df[1,2], df[1,2],df[1,2],df[1,2],df[1,2],df[1,2])
#colnames(totalTrialTime) <- c("PredictionModel", "TotalTime","Proband")
# Append values
colnames(df)[15] <- "SDx"
colnames(df)[16] <- "Mean AE"
colnames(df)[17] <- "ID_Shannon"
colnames(df)[18] <- "IDE"
colnames(df)[19] <- "MeanMovementTime"
colnames(df)[20] <- "Throughput"
df[, 15] <- as.numeric( df[ ,15] )
df[, 16] <- as.numeric( df[ ,16] )
df[, 17] <- as.numeric( df[ ,17] )
df[, 18] <- as.numeric( df[ ,18] )
df[, 19] <- as.numeric( df[, 19] )
df[, 20] <- as.numeric( df[, 20] )
summary <- rbind(summary, df)
}
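The per-row loop above implements the Shannon formulation of Fitts' law: ID = log2(D/W + 1), an effective ID computed from the effective width We = 4.133 * SDx, and throughput = IDe / MT. The arithmetic in isolation, on made-up numbers:

```r
# Shannon index of difficulty for a made-up distance/width pair
D <- 2.5; W <- 0.30
id <- log2(D/W + 1)
# Effective ID from effective amplitude Ae and endpoint spread SDx
Ae <- 2.4; SDx <- 0.10
ide <- log2(Ae/(SDx * 4.133) + 1)
# Throughput = effective ID divided by mean movement time in seconds
mt <- 0.75
tp <- ide/mt
print(c(ID = id, IDe = ide, Throughput = tp))
```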
# Create Dataframe for Throughput Evaluation
total_throughputs <- data.frame(matrix(ncol = 3, nrow = 0))
# Fill Dataframe for Throughput Evaluation with average Throughput for each Participant per Condition
number_of_participants <- 24
prediction_models <- c(-12, 0, 12, 24, 36, 48)
for (participant in 1:number_of_participants) {
for (predicton_model in prediction_models) {
condition_subset <- summary[summary$PredictionModel == predicton_model, ]
participant_and_condition_subset <- condition_subset[condition_subset$ProbandenID == participant, ]
mean_throughput_for_proband_and_condition <- mean(participant_and_condition_subset$Throughput)
total_throughputs <- rbind(total_throughputs, c(participant, predicton_model,mean_throughput_for_proband_and_condition ))
}
}
total_throughputs_names <- c("ProbandenID", "PredictionModel", "Throughput")
colnames(total_throughputs) <- total_throughputs_names
# Change grouping columns into factors for Anova with repeated measures
total_throughputs$ProbandenID <- factor(total_throughputs$ProbandenID)
total_throughputs$PredictionModel <- factor(total_throughputs$PredictionModel)
# Get Simple Summarizing Statistics
total_throughputs %>%
#group_by(PredictionModel) %>%
get_summary_stats(Throughput, type = "mean_sd")
# Get Simple Boxplot
#bxp <- ggboxplot(total_throughputs, x = "PredictionModel", y = "Throughput", add = "point")
#bxp
total_throughputs$PredictionModel <- factor(total_throughputs$PredictionModel, levels = c("-12", "0", "12", "24", "36", "48"),
labels = c("-48 ms", "Base", "+48 ms", "+96 ms", "+144 ms", "+192 ms"))
ggplot(total_throughputs,aes(x=PredictionModel, y=Throughput, fill=PredictionModel)) +
geom_boxplot(outlier.shape=16,outlier.size=2, position=position_dodge2(width=0.9, preserve="single"), width=0.9) +
ylab(label = "Throughput [bit/s]") +
xlab(label ="") +
scale_x_discrete(position = "bottom", labels = NULL)+
stat_summary(fun=mean, geom="point", shape=4, size=5, color="black") +
#ggtitle("Throughput")
theme_light() +
#theme(legend.position = "none") +
guides(fill=guide_legend(title="Prediction Time Offset")) +
theme(legend.position="bottom", text = element_text(size=20))
# ggsave is called separately; chaining it onto the plot with + is not supported by current ggplot2
ggsave("boxplotchart_Throughput.pdf", width=10, height=6, device=cairo_pdf)
# Check Assumptions for repeated measures Anova
total_throughputs %>%
group_by(PredictionModel) %>%
identify_outliers(Throughput)
# No extreme outliers => Outlier Assumption is met
# Check Normality Assumption
total_throughputs %>%
group_by(PredictionModel) %>%
shapiro_test(Throughput)
# No condition with p < 0.05 => Normality Assumption is met
ggqqplot(total_throughputs, "Throughput", facet.by = "PredictionModel")
# This would be the repeated measures ANOVA code, but it is not used here since the prerequisites are not met
# (Assumption of Normality is not given for total throughput)
#res.aov <- anova_test(data = total_throughputs, dv = Throughput, wid = ProbandenID, within = PredictionModel)
#get_anova_table(res.aov)
# Would compute group comparisons using pairwise t tests with Bonferroni multiple testing correction method if Anova is significant
#pwc <- total_throughputs %>% pairwise_t_test(Throughput ~ PredictionModel, paired = TRUE, p.adjust.method = "bonferroni")
#pwc
|
library(ggplot2)
library(viridis)
str(mpg)
mpg2008 <- subset(mpg,mpg$year == 2008)
mpg2008 <- mpg2008[complete.cases(mpg2008),]
head(mpg2008,3)
# Create a column with a sequence
mpg2008$ID <- seq(1,nrow(mpg2008))
# Concatenate the model name [1] with the sequence number [12]
mpg2008$ID <- do.call(paste0,mpg2008[c(1,12)])
# Assign the names
rownames(mpg2008) <- mpg2008$ID
head(mpg2008,3)
# Normalize the numeric columns
mpg2008N <- mpg2008
mpg2008N[,c(3,5,8,9)] <- scale(mpg2008[,c(3,5,8,9)])
head(mpg2008N,3)
d <- dist(mpg2008N, upper = TRUE)
str(d)
mtrx <- as.matrix(d)
heatmap(mtrx, keep.dendro = FALSE, symm = TRUE, revC = TRUE, col = heat.colors(100))
#Viridis
heatmap(mtrx, keep.dendro = FALSE ,symm = TRUE, revC = TRUE, col = viridis(100))
#Magma
heatmap(mtrx, keep.dendro = FALSE ,symm = TRUE, revC = TRUE, col = magma(100))
#Plasma
heatmap(mtrx, keep.dendro = FALSE ,symm = TRUE, revC = TRUE, col = plasma(100))
#Inferno
heatmap(mtrx, keep.dendro = FALSE ,symm = TRUE, revC = TRUE, col = inferno(100))
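The mTwoSeater, mCompact, etc. matrices plotted below are not defined in this excerpt; presumably each one is a distance matrix computed on the normalized rows of a single vehicle class. A hedged sketch of how such a matrix might be built (the data values here are illustrative, not from mpg):

```r
# Illustrative per-class distance matrix (assumed construction)
toy <- data.frame(displ = c(1.8, 2.0, 5.7, 6.2),
                  hwy   = c(29, 31, 25, 23),
                  class = c("compact", "compact", "2seater", "2seater"))
toyN <- toy
toyN[, 1:2] <- scale(toy[, 1:2])           # normalize numeric columns
sub <- toyN[toyN$class == "2seater", 1:2]  # restrict to one class
mTwoSeater <- as.matrix(dist(sub, upper = TRUE))
print(dim(mTwoSeater))
```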
# base heatmap() takes main= for titles (column_title is an argument of ComplexHeatmap, not stats::heatmap)
heatmap(mTwoSeater, revC = TRUE, na.rm = TRUE, symm = TRUE, col = heat.colors(50),
main = "Two Seater")
heatmap(mCompact, revC = TRUE, na.rm = TRUE, symm = TRUE, col = heat.colors(50),
main = "Compact")
heatmap(mMidsize, revC = TRUE, na.rm = TRUE, symm = TRUE, col = heat.colors(50),
main = "Mid Size")
heatmap(mMinivan, revC = TRUE, na.rm = TRUE, symm = TRUE, col = heat.colors(50),
main = "Mini Van")
heatmap(mPickup, revC = TRUE, na.rm = TRUE, symm = TRUE, col = heat.colors(50),
main = "PickUp")
heatmap(mSubcompact, revC = TRUE, na.rm = TRUE, symm = TRUE, col = heat.colors(50),
main = "Sub Compact")
heatmap(mSuv, revC = TRUE, na.rm = TRUE, symm = TRUE, col = heat.colors(50),
main = "Sub Urban Vehicle")
| /HeatMap.R | no_license | vodelerk/Rgraphandread | R | false | false | 1,947 | r |
#' Adds the year column
#' @param list_of_datasets A list containing named datasets
#' @return A list of datasets with the year column
#' @description This function works by extracting the year string contained in
#' the data set name and appending a new column to the data set with the numeric
#' value of the year. This means that the data sets have to have a name of the
#' form data_set_2001 or data_2001_europe, etc
#' @export
#' @examples
#' \dontrun{
#' #`list_of_data_sets` is a list containing named data sets
#' # For example, to access the first data set, called dataset_1 you would
#' # write
#' list_of_data_sets$dataset_1
#' add_year_column(list_of_data_sets)
#' }
add_year_column <- function(list_of_datasets){
for_one_dataset <- function(dataset, dataset_name){
if ("ANNEE" %in% colnames(dataset) | "Annee" %in% colnames(dataset)){
return(dataset)
} else {
# Split the name of the data set and extract the number index
index <- grep("\\d+", stringr::str_split(dataset_name, "[_.]", simplify = TRUE))
# Get the year
year <- as.numeric(stringr::str_split(dataset_name, "[_.]", simplify = TRUE)[index])
# Add it to the data set
dataset$ANNEE <- year
return(dataset)
}
}
output <- purrr::map2(list_of_datasets, names(list_of_datasets), for_one_dataset)
return(output)
}
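The year extraction in for_one_dataset splits the data set name on _ or . and picks the numeric token; the same logic in base R (without stringr) makes the naming convention described in the roxygen block easy to sanity-check:

```r
# Base-R equivalent of the year extraction used above
dataset_name <- "data_set_2001"
tokens <- strsplit(dataset_name, "[_.]")[[1]]
year <- as.numeric(tokens[grep("^[0-9]+$", tokens)])
print(year)  # 2001
```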
| /R/add_year_column.R | no_license | davan690/brotools | R | false | false | 1,353 | r |
testing merge to branch
khkhhkhkhl
khkjhkhkj | /RGitProject/RGitProject/script3.R | no_license | SkynetAppDev/R | R | false | false | 44 | r |
simplifyGOterms <- function(goterms, maxOverlap=0.8, ontology, go2allEGs) {
if(!is.character(goterms))
stop('goterms has to be of class character ...')
if(!is.numeric(maxOverlap))
stop('maxOverlap has to be of class numeric ...')
if(maxOverlap < 0 || maxOverlap > 1)
stop('maxOverlap is a percentage and has to range in [0,1] ...')
if(!all(ontology %in% c('BP','CC','MF')))
stop('ontology has to be one of: CC, BP, MF ...')
if(!is(go2allEGs,"AnnDbBimap"))
stop('go2allEGs has to be of class AnnDbBimap ..')
if(ontology == 'CC') go2parents <- as.list(GOCCPARENTS)
if(ontology == 'BP') go2parents <- as.list(GOBPPARENTS)
if(ontology == 'MF') go2parents <- as.list(GOMFPARENTS)
go2discard <- NULL
for(goterm in goterms) {
parents <- go2parents[[goterm]]
parents <- intersect(parents, goterms)
# no parents are found for a given GO term, check the others
if(length(parents) == 0) next
##gotermEGs <- go2allEGs[[goterm]] # EGs associated to a given GO term
# EGs associated to a given GO term
gotermEGs <- unlist(mget(goterm, go2allEGs))
for(parent in parents) {
## parentEGs <- go2allEGs[[parent]] # EGs associated to its parent (superseded by the mget call below)
# EGs associated to its parent
parentEGs <- unlist(mget(parent, go2allEGs))
commonEGs <- intersect(gotermEGs, parentEGs)
if(length(commonEGs) / length(parentEGs) > maxOverlap)
go2discard <- c(go2discard, parent)
}
}
# discard redundant parents
if(length(go2discard) > 0)
goterms <- goterms[-which(goterms %in% go2discard)]
return(goterms)
}
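The pruning criterion in the inner loop is simply the fraction of the parent's genes that are shared with the child term; it is easy to exercise in isolation with toy gene sets (no GO.db needed, the IDs are made up):

```r
# Toy gene sets for a child GO term and one of its parents
child_genes  <- c("g1", "g2", "g3", "g4")
parent_genes <- c("g1", "g2", "g3", "g4", "g5")
common  <- intersect(child_genes, parent_genes)
overlap <- length(common)/length(parent_genes)
print(overlap)  # 0.8
# The test is strict '>', so at exactly maxOverlap = 0.8 the parent is kept
discard_parent <- overlap > 0.8
print(discard_parent)  # FALSE
```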
| /R/simplifyGOterms.R | no_license | kamalfartiyal84/compEpiTools | R | false | false | 1,785 | r |
\alias{gtkRadioMenuItemSetGroup}
\name{gtkRadioMenuItemSetGroup}
\title{gtkRadioMenuItemSetGroup}
\description{Sets the group of a radio menu item, or changes it.}
\usage{gtkRadioMenuItemSetGroup(object, group)}
\arguments{
\item{\verb{object}}{a \code{\link{GtkRadioMenuItem}}.}
\item{\verb{group}}{the new group.}
}
\author{Derived by RGtkGen from GTK+ documentation}
\keyword{internal}
| /RGtk2/man/gtkRadioMenuItemSetGroup.Rd | no_license | lawremi/RGtk2 | R | false | false | 389 | rd |
\name{vacation}
\alias{vacation}
\docType{data}
\title{
Vacation Data
}
\description{
Obs: 200
}
\usage{data("vacation")}
\format{
A data frame with 200 observations on the following 4 variables.
\describe{
\item{\code{miles}}{miles traveled per year}
\item{\code{income}}{annual income in $1000's}
\item{\code{age}}{average age of adult members of household}
\item{\code{kids}}{number of children in household}
}
}
\details{
A sample of Chicago households.
}
\source{
http://principlesofeconometrics.com/poe4/poe4.htm
}
\references{
%% ~~ possibly secondary sources and usages ~~
}
\examples{
data(vacation)
## maybe str(vacation) ; plot(vacation) ...
}
\keyword{datasets}
| /man/vacation.Rd | no_license | Worathan/PoEdata | R | false | false | 699 | rd |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/commit.R
\name{git_commit}
\alias{git_commit}
\alias{git_commit_all}
\alias{git_commit_info}
\alias{git_commit_id}
\alias{git_commit_descendant_of}
\alias{git_add}
\alias{git_rm}
\alias{git_status}
\alias{git_conflicts}
\alias{git_ls}
\alias{git_log}
\title{Stage and commit changes}
\usage{
git_commit(message, author = NULL, committer = NULL, repo = ".")
git_commit_all(message, author = NULL, committer = NULL, repo = ".")
git_commit_info(ref = "HEAD", repo = ".")
git_commit_id(ref = "HEAD", repo = ".")
git_commit_descendant_of(ancestor, ref = "HEAD", repo = ".")
git_add(files, force = FALSE, repo = ".")
git_rm(files, repo = ".")
git_status(staged = NULL, repo = ".")
git_conflicts(repo = ".")
git_ls(repo = ".")
git_log(ref = "HEAD", max = 100, repo = ".")
}
\arguments{
\item{message}{a commit message}
\item{author}{A \link{git_signature} value, default is \code{\link[=git_signature_default]{git_signature_default()}}.}
\item{committer}{A \link{git_signature} value, default is same as \code{author}}
\item{repo}{The path to the git repository. If the directory is not a
repository, parent directories are considered (see \link{git_find}). To disable
this search, provide the filepath protected with \code{\link[=I]{I()}}. When using this
parameter, always explicitly call by name (i.e. \verb{repo = }) because future
versions of gert may have additional parameters.}
\item{ref}{revision string with a branch/tag/commit value}
\item{ancestor}{a reference to a potential ancestor commit}
\item{files}{vector of paths relative to the git root directory.
Use \code{"."} to stage all changed files.}
\item{force}{add files even if in gitignore}
\item{staged}{return only staged (TRUE) or unstaged files (FALSE).
Use \code{NULL} or \code{NA} to show both (default).}
\item{max}{lookup at most latest n parent commits}
}
\value{
\itemize{
\item \code{git_status()}, \code{git_ls()}: A data frame with one row per file
\item \code{git_log()}: A data frame with one row per commit
\item \code{git_commit()}, \code{git_commit_all()}: A SHA
}
}
\description{
To commit changes, start by \emph{staging} the files to be included in the commit
using \code{git_add()} or \code{git_rm()}. Use \code{git_status()} to see an overview of
staged and unstaged changes, and finally \code{git_commit()} creates a new commit
with currently staged files.
\code{git_commit_all()} is a convenience function that automatically stages and
commits all modified files. Note that \code{git_commit_all()} does \strong{not} add
new, untracked files to the repository. You need to make an explicit call to
\code{git_add()} to start tracking new files.
\code{git_log()} shows the most recent commits and \code{git_ls()} lists all the files
that are being tracked in the repository.
}
\examples{
oldwd <- getwd()
repo <- file.path(tempdir(), "myrepo")
git_init(repo)
setwd(repo)
# Set a user if no default
if(!user_is_configured()){
git_config_set("user.name", "Jerry")
git_config_set("user.email", "jerry@gmail.com")
}
writeLines(letters[1:6], "alphabet.txt")
git_status()
git_add("alphabet.txt")
git_status()
git_commit("Start alphabet file")
git_status()
git_ls()
git_log()
cat(letters[7:9], file = "alphabet.txt", sep = "\n", append = TRUE)
git_status()
git_commit_all("Add more letters")
# cleanup
setwd(oldwd)
unlink(repo, recursive = TRUE)
}
\seealso{
Other git:
\code{\link{git_archive}},
\code{\link{git_branch}()},
\code{\link{git_config}()},
\code{\link{git_diff}()},
\code{\link{git_fetch}()},
\code{\link{git_merge}()},
\code{\link{git_rebase}()},
\code{\link{git_remote}},
\code{\link{git_repo}},
\code{\link{git_signature}()},
\code{\link{git_stash}},
\code{\link{git_tag}}
}
\concept{git}
| /man/git_commit.Rd | no_license | ijlyttle/gert | R | false | true | 3,793 | rd |
library(rjson)     # fromJSON() below comes from rjson, not jsonlite
library(parallel)
#import data: read all Spark part-files in parallel
cl <- makeCluster(detectCores() - 1)
json_files<-list.files(path="C:\\Users\\danie\\Desktop\\spark\\StreamingData\\Unstructured/", recursive=T, pattern="^part", full.names=T)
json_list<-parLapply(cl,json_files,function(x) rjson::fromJSON(file=x,method = "R"))
stopCluster(cl)
#create dataframe
df <- data.frame(matrix(unlist(json_list), nrow=length(json_list), byrow=T))
#rename col
names(df) <- c("book_title", "review_title", "review_user", "book_id", "review_id", "timestamp", "review_text", "review_score")
# convert columns to the appropriate types; review_score goes through
# as.character first so factor level codes are not mistaken for scores
df$book_title <- as.character(df$book_title)
df$review_title <- as.character(df$review_title)
df$review_user <- as.character(df$review_user)
df$book_id <- as.character(df$book_id)
df$review_id <- as.character(df$review_id)
df$timestamp <- as.character(df$timestamp)
df$review_text <- as.character(df$review_text)
df$review_score <- as.integer(as.character(df$review_score))
#remove duplicates
df2 <- df[!duplicated(df$review_text),]
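# Note: duplicated() flags the second and later occurrences, so the filter
# above keeps the first review carrying each review_text. Minimal sketch:
stopifnot(identical(duplicated(c("a", "b", "a")), c(FALSE, FALSE, TRUE)))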
#write csv
write.csv(df2, file = "C:\\Users\\danie\\Documents\\Daniel Gil\\KULeuven\\Stage 2\\Term 2\\Advanced Analytics\\Assignments\\Streaming\\Data/bookdataDaniel.csv", row.names = FALSE)
#import data check
datacheck <- read.table("C:\\Users\\danie\\Documents\\Daniel Gil\\KULeuven\\Stage 2\\Term 2\\Advanced Analytics\\Assignments\\Streaming\\Data/bookdataDaniel.csv", header=TRUE, sep=",")
#works
View(head(datacheck))
| /R/createdataframeQ3.r | no_license | WhiteChair/Streaming | R | false | false | 1,407 | r |
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/content.R
\name{content_title}
\alias{content_title}
\title{Get Content Title}
\usage{
content_title(connect, guid, default = "Unknown Content")
}
\arguments{
\item{connect}{A Connect object}
\item{guid}{The GUID for the content item to be retrieved}
\item{default}{The default value returned for missing or not visible content}
}
\value{
character. The title of the requested content.
}
\description{
Return the content title for a piece of content. If the content
is missing (deleted) or not visible, then the \code{default} is returned.
}
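\examples{
\dontrun{
# Hypothetical sketch: assumes a Connect client configured via connect()
# (server URL / API key from environment variables); the GUID is a placeholder.
client <- connect()
content_title(client, "placeholder-guid", default = "(untitled)")
}
}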
\seealso{
Other content functions:
\code{\link{acl_add_user}()},
\code{\link{content_delete}()},
\code{\link{content_item}()},
\code{\link{content_update}()},
\code{\link{create_random_name}()},
\code{\link{dashboard_url_chr}()},
\code{\link{dashboard_url}()},
\code{\link{delete_vanity_url}()},
\code{\link{deploy_repo}()},
\code{\link{get_acl_user}()},
\code{\link{get_bundles}()},
\code{\link{get_environment}()},
\code{\link{get_image}()},
\code{\link{get_jobs}()},
\code{\link{get_vanity_url}()},
\code{\link{git}},
\code{\link{permissions}},
\code{\link{set_image_path}()},
\code{\link{set_run_as}()},
\code{\link{set_vanity_url}()},
\code{\link{swap_vanity_url}()},
\code{\link{verify_content_name}()}
}
\concept{content functions}
| /man/content_title.Rd | permissive | rstudio/connectapi | R | false | true | 1,352 | rd |
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rules-line-breaks.R
\name{set_line_break_if_call_is_multi_line}
\alias{set_line_break_if_call_is_multi_line}
\alias{set_line_break_before_closing_call}
\alias{remove_line_break_in_fun_call}
\title{Set line break for multi-line function calls}
\usage{
set_line_break_before_closing_call(pd, except_token_before)
remove_line_break_in_fun_call(pd, strict)
}
\arguments{
\item{pd}{A parse table.}
\item{except_token_before}{A character vector with text before "')'" that does
not cause a line break before "')'".}
\item{except_token_after}{A character vector with tokens after "'('" that do
not cause a line break after "'('".}
\item{except_text_before}{A character vector with text before "'('" that does
not cause a line break after "'('".}
\item{force_text_before}{A character vector with text before "'('" that
forces a line break after every argument in the call.}
}
\description{
Set line break for multi-line function calls
}
\section{Functions}{
\itemize{
\item \code{set_line_break_before_closing_call()}: Sets line break before
closing parenthesis.
}}
\keyword{internal}
| /man/set_line_break_if_call_is_multi_line.Rd | permissive | r-lib/styler | R | false | true | 1,157 | rd |
|
args <- commandArgs(T)
print( args )
maxInd <- function(inputVec) { # function to select vertices from map data
mIndex <- which( inputVec==max(inputVec) )
mIndexClean <- mIndex[1]
return( mIndexClean )
}
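# maxInd() returns the position of the (first) maximum; ties resolve to the
# earliest index. Quick sketch:
stopifnot(maxInd(c(0.2, 0.7, 0.1)) == 2)
stopifnot(maxInd(c(1, 3, 3)) == 2)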
## debug:
#setwd('/media/alessiofracasso/storage2/dataLaminar_leftVSrightNEW/V2676leftright_Ser/results_CBS')
#args<-c(9)
#argSplit <- strsplit(args[1],'[.]')
#fileBase <- argSplit[[1]][1]
fileBase <- 'whole'
## check dir: create it if it does not exist; if it already exists, halt execution
mainDir <- getwd()
subDir <- sprintf( '%s_folder', fileBase )
flagDir <- dir.create( file.path(mainDir, subDir) )
if (flagDir==FALSE) { # directory already exists!
  stop( sprintf( 'Remove the directory %s_folder to proceed', fileBase ) )
}
## back to main directory
setwd( file.path(mainDir) )
flagCSF <- 1 # 0 for debugging purposes
if (flagCSF==1) {
##############################################################################
## creates mapping and rois over the surfaces, sequentially towards the CSF ##
##############################################################################
# takes a 1D roi file generated by afni, removes the comments and the column of 1s,
# saves an roi file with only the roi vertex index for further processing in AFNI
startSurface <- as.numeric(args[1])
if (startSurface<10) { roiSurfacesName <- sprintf( 'roiSurface_%s_0%s_%s.1D.roi', fileBase, startSurface, startSurface+1 ) }
if (startSurface>=10) { roiSurfacesName <- sprintf( 'roiSurface_%s_%s_%s.1D.roi', fileBase, startSurface, startSurface+1 ) }
#roiData <- read.table( args[1], comment.char='#' )
#write.table( roiData[,1], file=roiSurfacesName, col.names=FALSE, row.names=FALSE )
## select coord and topo files
setwd( sprintf( '%s/surfaces_folder', mainDir) )
coordFiles <- dir(pattern='^boundary.*coord')
topoFiles <- dir(pattern='^boundary.*topo')
setwd( mainDir )
## loop through the surfaces, towards the CSF
for ( k in (startSurface+1) : (length( coordFiles )-1) ) { # +1 because file numbering starts from 0
outputFile <- sprintf( 'map_01_%s', roiSurfacesName )
# roiData <- read.table( roiSurfacesName ) # in case the node in the destination surface is not connected (-1), then put 1
# fileOutIdx <- which( roiData[,1]==-1 )
# if ( length(fileOutIdx)>0 ) {
# roiData[fileOutIdx,1] <- 1
# }
# write.table( roiData[,1], file=roiSurfacesName, col.names=FALSE, row.names=FALSE )
instr <- sprintf( 'SurfToSurf -i_1D surfaces_folder/%s surfaces_folder/%s -i_1D surfaces_folder/%s surfaces_folder/%s -prefix %s',
topoFiles[k], coordFiles[k], topoFiles[k+1], coordFiles[k+1], outputFile )
system( instr ) # create map data
mapData <- read.table( sprintf( '%s%s', outputFile, '.1D' ), comment.char='#' ) #read map data
maxIndArray <- apply( mapData[,5:7], 1, maxInd ) #select vertices indices on surf k+1 from mapData
appSurfMap <- mapData[,2:4]
vertexIndArray <- rep( 0, dim(mapData)[1] )
for ( n in 1:dim(mapData)[1] ) { #select vertices on surf k+1 from mapData
vertexIndArray[n] <- appSurfMap[ n, maxIndArray[n] ]
}
if (k<10) { roiSurfacesName <- sprintf( 'roiSurface_%s_0%d.1D.roi', fileBase, k ) }
if (k>=10) { roiSurfacesName <- sprintf( 'roiSurface_%s_%d.1D.roi', fileBase, k ) }
write.table( vertexIndArray, file=roiSurfacesName, col.names=FALSE, row.names=FALSE )
}
}
flagWM <- 1 # 0 for debugging purposes
if (flagWM==1) {
#############################################################################
## creates mapping and rois over the surfaces, sequentially towards the WM ##
#############################################################################
# takes a 1D roi file generated by afni, removes the comments and the column of 1s,
# saves an roi file with only the roi vertex index for further processing in AFNI
startSurface <- as.numeric(args[1])
if (startSurface<10) { roiSurfacesName <- sprintf( 'roiSurface_%s_0%s_%s.1D.roi', fileBase, startSurface, startSurface-1 ) }
if (startSurface>=10) { roiSurfacesName <- sprintf( 'roiSurface_%s_%s_%s.1D.roi', fileBase, startSurface, startSurface-1 ) }
#roiData <- read.table( args[1], comment.char='#' )
#write.table( roiData[,1], file=roiSurfacesName, col.names=FALSE, row.names=FALSE )
## select coord and topo files
setwd( sprintf( '%s/surfaces_folder', mainDir) )
coordFiles <- dir(pattern='^boundary.*coord')
topoFiles <- dir(pattern='^boundary.*topo')
setwd( mainDir )
## loop through the surfaces, towards the WM
for ( k in (startSurface+1) : 2 ) { # +1 because file numbering starts from 0
outputFile <- sprintf( 'map_01_%s', roiSurfacesName )
#roiData <- read.table( roiSurfacesName ) # in case the node in the destination surface is not connected (-1), then put 1
#fileOutIdx <- which( roiData[,1]==-1 )
#if ( length(fileOutIdx)>0 ) {
# roiData[fileOutIdx,1] <- 1
#}
#write.table( roiData[,1], file=roiSurfacesName, col.names=FALSE, row.names=FALSE )
instr <- sprintf( 'SurfToSurf -i_1D surfaces_folder/%s surfaces_folder/%s -i_1D surfaces_folder/%s surfaces_folder/%s -prefix %s',
topoFiles[k], coordFiles[k], topoFiles[k-1], coordFiles[k-1], outputFile )
system( instr ) # create map data
mapData <- read.table( sprintf( '%s%s', outputFile, '.1D' ), comment.char='#' ) #read map data
maxIndArray <- apply( mapData[,5:7], 1, maxInd ) #select vertices indices on surf k-1 from mapData
appSurfMap <- mapData[,2:4]
vertexIndArray <- rep( 0, dim(mapData)[1] )
for ( n in 1:dim(mapData)[1] ) { #select vertices on surf k-1 from mapData
vertexIndArray[n] <- appSurfMap[ n, maxIndArray[n] ]
}
# zero-pad only below 10 (the original k>=10 branch repeated the padded format)
if (k-2 < 10) { roiSurfacesName <- sprintf( 'roiSurface_%s_0%d.1D.roi', fileBase, k-2 ) }
if (k-2 >= 10) { roiSurfacesName <- sprintf( 'roiSurface_%s_%d.1D.roi', fileBase, k-2 ) }
write.table( vertexIndArray, file=roiSurfacesName, col.names=FALSE, row.names=FALSE )
}
}
## move all the files generated into the subDir
system( sprintf('mv roiSurface*%s*.roi %s', fileBase, subDir) )
system( sprintf('mv map*%s*.M2M %s', fileBase, subDir) )
system( sprintf('mv map_01*%s*.1D %s', fileBase, subDir) )
| /surfaces/mappingBetweenSurfaces_WHOLE.R | no_license | AlessioPsych/AnalysisAfni | R | false | false | 6,377 | r |
|
##### Find variants for model organisms for Genes of interest
##### Setup
library(dplyr)
library(xtable)
library(gridExtra)
library(ggplot2)
setwd("/home/bcallaghan/NateDBCopy")
####
#### Load data
GENE <- "SYNGAP1"
CHROM <- 6
TRANSCRIPT <- "NM_006772"
INPUTDIR <- paste0("./inputs/",GENE,"/")
getwd()
# FASTA and BED inputs are not used below; the original self-assignments
# (FASTA <- FASTA; BED <- BED) would error on undefined objects
anno.path <- paste0(INPUTDIR,GENE,"_anno.hg19_multianno.csv")
GENE.vars <- read.csv(anno.path)
pp.path <- paste0(INPUTDIR,GENE,"PP.tsv")
GENE.PP <- read.table(pp.path)
nate.path <- paste0(INPUTDIR,GENE,".hg19_multianno.csv")
natevars <- read.csv(nate.path)
head(natevars)
natevars %>% filter(Gene.refGene == GENE) -> natevars
natevars %>% select(c(1,2,3,4,5,6,7,9,10,11,36,41)) -> p
png(filename="outputs/natvars.png",width=1500, height = 26 * nrow(p))
grid.arrange(tableGrob(p))
dev.off()
####
#### Gene / Transcript Info (not used yet)
# # Should add .bed file, alternative transcripts?
# GENE == "GENE.vars" #
#Canonical Transcript
####
#### Fix annotations
x <- GENE.vars$Otherinfo
y <- strsplit(as.character(x), split = '\t')
y <- unlist(y)
y <- as.numeric(y[seq(2,length(y),2)])
GENE.vars$CADD.phred <- as.numeric(as.character(y))
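# Otherinfo is tab-separated; seq(2, length(y), 2) above keeps every second
# field, assumed here to be the CADD phred value. Mechanism sketch
# (the field names are made up):
oi <- unlist(strsplit("keyA\t1.5\tkeyB\t2.5", "\t"))
stopifnot(identical(as.numeric(oi[seq(2, length(oi), 2)]), c(1.5, 2.5)))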
####
##### Synonymous Mutations for Nate
natesyn <- natevars$Start
GENE.vars %>%
filter(ExonicFunc.refGene == "synonymous SNV") %>%
filter(grepl(paste0(GENE,":",TRANSCRIPT,".+"),AAChange.refGene)) -> GENE.vars.syn
GENE.vars.syn$aapos <- as.numeric(gsub(".*p.[A-Z]([0-9]+)[A-Z]","\\1",GENE.vars.syn$AAChange.refGene))
GENE.vars.syn %>% filter(Start %in% as.numeric(as.character(natesyn))) -> GENE.vars.syn # works whether Start was read as factor or numeric
GENE.vars.syn$aachange <- gsub(".*p.([A-Z][0-9]+[A-Z]).*","\\1",GENE.vars.syn$AAChange.refGene,perl=TRUE)
head(GENE.vars.syn)
if(nrow(GENE.vars.syn) == 0){
print("No appropriate synonymous mutations")
}
#####
#### Filtering
GENE.vars$Func.refGene <- gsub("exonic;splicing","exonic",GENE.vars$Func.refGene)
# Multiple isoforms in GENE.vars so filter for canonical NM_000314
GENE.vars %>% filter(grepl(paste0(GENE,":",TRANSCRIPT,".+"),AAChange.refGene)) -> GENE.vars.f
GENE.vars.f$AAChange.refGene
# GENE.vars.f %>% mutate(aapos = as.numeric(gsub(".*(GENE.vars:NM_000314:exon[0-9]+:c.[A-Z][0-9]+[A-Z]:p.[A-Z]([0-9]+)[A-Z],).+","\\2",GENE.vars.f$AAChange.refGene))) -> GENE.vars.f
#********************
# GENE.vars.f$aapos <- as.numeric(gsub(paste0(".*GENE.vars:NM_000314:exon[0-9]+:c.[A-Z][0-9]+[A-Z]:p.[A-Z]([0-9]+)[A-Z],).+"),"\\2",GENE.vars.f$AAChange.refGene))
# GENE.vars.f$aapos <- as.numeric(gsub(paste0( ".*" , GENE , ":" , TRANSCRIPT , ":exon[0-9]+:c.[A-Z][0-9]+[A-Z]:p.[A-Z]([0-9]+)[A-Z],).+"),"\\2",GENE.vars.f$AAChange.refGene))
# as.numeric(gsub(paste0( ".*" , GENE , ":" , TRANSCRIPT , ":exon[0-9]+:c.[A-Z][0-9]+[A-Z]:p.[A-Z]([0-9]+)[A-Z],).+"),"\\2",GENE.vars.f$AAChange.refGene))
# regex <- paste0(GENE,":",TRANSCRIPT,":exon[")
# GENE.vars.f$AAChange.refGene
# Add columns for aa position and aa change for each mutation
regex <- ".*p.[A-Z]([0-9]+)[A-Z]"
GENE.vars.f$aapos <- as.numeric(gsub(regex,"\\1",GENE.vars.f$AAChange.refGene))
head(GENE.vars.f)
regex <- paste0(GENE,":",TRANSCRIPT,".*p.([A-Z][0-9]+[A-Z]).*")
GENE.vars.f$aachange <- gsub(regex,"\\1",GENE.vars.f$AAChange.refGene)
# *********************
ggplot(GENE.vars.f, aes(x= aapos, y = CADD.phred)) + geom_point()
# GENE.vars$aachange <- gsub("GENE.vars:NM_000314:exon[0-9]+:c.[A-Z][0-9]+[A-Z]:p.([A-Z][0-9]+[A-Z]).+","\\1",GENE.vars$AAChange.refGene,perl=TRUE)
# What's PredictProtein score look like in the context of a gene?
GENE.PP$aapos <- as.numeric(gsub('[A-Z]([0-9]+)[A-Z]','\\1',GENE.PP$V1))
head(GENE.PP,40)
ggplot(GENE.PP, aes(x = aapos, y=V2)) + geom_point()
####
#### Merge protein Predict and fix up new df
GENE.PP$V1 <- as.character(GENE.PP$V1)
GENE.vars.f$aachange
GENE.vars.mr <- merge(GENE.vars.f,GENE.PP, by.x='aachange', by.y = 'V1', all=FALSE)
GENE.vars.mr$CADD.phred <- as.numeric(GENE.vars.mr$CADD.phred)
GENE.vars.mr$PredictProtein <- as.numeric(GENE.vars.mr$V2)
GENE.vars.mr$exac03 <- as.numeric(as.character(GENE.vars.mr$exac03))
GENE.vars.mr$exac03[is.na(GENE.vars.mr$exac03)] <- 0
GENE.vars.mr %>% arrange(Start) -> GENE.vars.mr
head(GENE.vars.mr)
colnames(GENE.vars.mr)[31] <- "aapos"
####
##### Filtering
boxplot(GENE.vars.mr$PredictProtein)
boxplot(GENE.vars.mr$CADD.phred)
thresh.pp <- quantile(GENE.vars.mr$PredictProtein,.90)
thresh.cadd <- quantile(GENE.vars.mr$CADD.phred,.90)
##### Filter Positive Controls
GENE.vars.mr %>% mutate(PC = ifelse(PredictProtein > thresh.pp & CADD.phred > thresh.cadd & (exac03 == 0 | is.na(exac03)) ,yes="PositiveControl", no = FALSE)) -> GENE.vars.mr
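# The thresholds above come from stats::quantile() (default type 7 linear
# interpolation); a quick sanity check of that behaviour on toy data:
stopifnot(quantile(1:10, 0.9) == 9.1)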
# Graph Predict Protein
ggplot(GENE.vars.mr, aes(x = aapos, y = PredictProtein)) + geom_point(aes()) + xlab("Amino Acid Coordinates") + ylab("PredictProtein") + ggtitle(GENE)
# Graph CADD
ggplot(GENE.vars.mr, aes(x = aapos, y = CADD.phred)) + geom_point(aes()) + xlab("Amino Acid Coordinates") + ylab("CADD") + ggtitle(GENE)
ggplot(GENE.vars.mr, aes(x = CADD.phred, y = PredictProtein)) + geom_point(aes()) + xlab("CADD") + ylab("PP") + ggtitle(GENE)
####
#### Graph the Positive Controls
p1 <- ggplot(GENE.vars.mr, aes(x = aapos, y = PredictProtein, colour = PC,shape = PC)) +
geom_point(size = 2.5) + xlab("Amino Acid Coordinates") + ylab("PredictProtein") + ggtitle(paste0(GENE," Positive Controls"))
p2 <- ggplot(GENE.vars.mr, aes(x = aapos, y = CADD.phred, colour = PC,shape = PC)) +
geom_point(size = 2.5) + xlab("Amino Acid Coordinates") + ylab("CADD Phred") + ggtitle(paste0(GENE," Positive Controls"))
# CADD vs PredictProtein
p3 <- ggplot(GENE.vars.mr, aes(x = PredictProtein, y = CADD.phred,colour = PC,shape = PC)) +
geom_point(size = 2.5) + xlab("PredictProtein") + ylab("CADD Phred") + ggtitle(paste0(GENE," Positive Controls"))
cor(GENE.vars.mr$CADD.phred, GENE.vars.mr$PredictProtein, method='spearman')
####
#### Negative Controls Filtering
thresh.pp <- quantile(GENE.vars.mr$PredictProtein,.10)
thresh.cadd <- quantile(GENE.vars.mr$CADD.phred,.10)
GENE.vars.mr %>% filter(ExonicFunc.refGene == "nonsynonymous SNV") -> GENE.vars.mr.nc
GENE.vars.mr.nc %>% mutate(NC = ifelse( test = ((PredictProtein < thresh.pp & CADD.phred < thresh.cadd & exac03 > 0)|(PredictProtein < -50 & CADD.phred < 5)),
yes = "NegativeControl", no = FALSE)) -> GENE.vars.mr.nc
GENE.vars.mr.nc$NC[is.na(GENE.vars.mr.nc$NC)] <- FALSE
####
#### Graphing the Negative Controls
p4 <- ggplot(GENE.vars.mr.nc, aes(x = aapos, y = PredictProtein,colour=NC,shape = NC)) +
  geom_point(size = 2.5) + xlab("Amino Acid Coordinates") + ylab("PredictProtein") + ggtitle(paste0(GENE," Negative Controls"));p4
p5 <- ggplot(GENE.vars.mr.nc, aes(x = aapos, y = CADD.phred, colour = NC, shape = NC)) +
  geom_point(size = 2.5) + xlab("Amino Acid Coordinates") + ylab("CADD Phred") + ggtitle(paste0(GENE," Negative Controls"))
p6 <- ggplot(GENE.vars.mr.nc, aes(x = PredictProtein, y = CADD.phred, colour = NC, shape = NC)) +
  geom_point(size = 2.5) + xlab("PredictProtein") + ylab("CADD Phred") + ggtitle(paste0(GENE," Negative Controls"))
p9 <- ggplot(GENE.vars.mr, aes( x= as.numeric(Start),y = as.numeric(CADD.phred))) +
geom_point(size = 2.5) + xlab('Genomic Coordinates') + ylab("CADD score") + ggtitle("CADD by Genomic Coordinates")
p10 <- ggplot(GENE.vars.mr, aes(CADD.phred, PredictProtein)) + geom_point()
graph_variants <- function(anno.df,x.name,y.name,group,title){
ggplot(anno.df, aes_string(x = x.name, y = y.name, colour = group, shape = group)) +
geom_point(size = 3) +
xlab(x.name) + ylab(y.name) + ggtitle(title)
}
# These exploratory calls cannot run top-to-bottom: GENE.vars has no 'aapos'/'NC'
# columns, and graph_variant_list()/GENE.vars.nate are only defined further down.
# graph_variants(GENE.vars,'aapos','CADD.phred','NC',"GENE.vars Negative Controls")
# p01 <- graph_variant_list(GENE.vars.mr,'aapos','CADD.phred',GENE.vars.nate$aachange,"in.group","GENE.vars Literature Mutations","Literature")
graph_variant_list <- function(anno.df,x.name,y.name,var.list,group,title, legend.title){
anno.df %>% filter(Func.refGene == "exonic") -> anno.df.f
anno.df.f %>% mutate(in.group = ifelse(aachange %in% var.list, yes = TRUE, no = FALSE)) -> anno.df.fm
  # amino-acid position, same regex as earlier (the original pattern here had an unmatched ')')
  anno.df.fm$aapos <- as.numeric(gsub(".*p.[A-Z]([0-9]+)[A-Z]","\\1",anno.df.fm$AAChange.refGene))
anno.df.fm %>% filter(in.group == TRUE) -> anno.df.trus
anno.df.trus %>% print()
# group <- anno.df.fm$in.group
ggplot(anno.df.fm, aes_string(x = x.name, y = y.name, colour = group)) +
geom_point(size = 3, alpha = 0.8) +
geom_point(aes_string(x = x.name, y = y.name, colour = group),data = anno.df.trus, size = 5) +
xlab(x.name) + ylab(y.name) + ggtitle(title) + guides(title = "in.list") +
# theme(legend.title = element_text("asd"))+
labs(colour = legend.title ) + guides(shape = FALSE) #+
#scale_y_continuous( limits = c(0,50), expand = c(0,0) )
}
p11 <- graph_variant_list(GENE.vars.mr,'aapos','CADD.phred',GENE.vars.syn$aachange,"in.group",paste0(GENE," Synonymous Mutations"),"Synonymous");p11
p12 <- graph_variant_list(GENE.vars.mr,'aapos','PredictProtein',GENE.vars.syn$aachange,"in.group",paste0(GENE," Synonymous Mutations"),"Synonymous");p12
# p11 <- graph_variant_list(GENE.vars.mr,'aapos','PredictProtein',pcs$aachange,"in.group","High Impact Mutations","High Impact")  # pcs is only defined further down
GENE.vars.mr.nc %>% filter(PC == "PositiveControl") %>% select(aachange) -> pos.list
GENE.vars.mr.nc %>% filter(NC == "NegativeControl") %>% select(aachange) -> neg.list
GENE.vars.syn %>% select(aachange) -> syn.list # GENE.vars has no aachange column; use the filtered synonymous set
var.list <- c("L265L")
neg.list$aachange
# p11 <- graph_variant_list(GENE.vars,'aapos','CADD.phred',GENE.vars.syn$aachange,"in.group","GENE.vars Synonymous Mutations")
####Lit variants
### Filtering Nate Variants
natevars %>% mutate(stalt = paste0(Start,Ref,Alt)) -> natevars
GENE.vars.mr.nc %>% mutate(stalt = paste0(Start,Ref,Alt)) -> GENE.vars.mr.nc
GENE.vars.mr.nc %>% filter(stalt %in% natevars$stalt) -> GENE.vars.nate
#### Correlation / List coordinates of exons
cor(GENE.vars.mr$CADD.phred, GENE.vars.mr$PredictProtein, method='spearman')
v <- GENE.vars.mr$Start
split(unique(v),cumsum(c(1,diff(unique(v))!=1)))
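# The cumsum(diff != 1) trick above groups consecutive coordinates into runs
# (one run per exon). Toy sketch:
runs <- split(c(1, 2, 3, 7, 8), cumsum(c(1, diff(c(1, 2, 3, 7, 8)) != 1)))
stopifnot(identical(unname(lengths(runs)), c(3L, 2L)))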
####
#### Table output
GENE.vars.mr.nc %>% filter(PC == "PositiveControl") -> pcs
GENE.vars.mr.nc %>% filter(NC == "NegativeControl") -> ncs
cons <- rbind(pcs,ncs)
write.table(cons, paste0(GENE, "controls.tsv"), quote = FALSE, row.names=FALSE, sep = "\t")
####
#### Controls Filtering
GENE.vars.mr.nc %>% filter(PC == "PositiveControl") %>%
select(aachange,Chr,Start,Ref,Alt,Func.refGene,ExonicFunc.refGene,snp135,exac03,CADD.phred,PredictProtein,PC,NC) -> pcs
GENE.vars.mr.nc %>% filter(NC == "NegativeControl") %>%
select(aachange,Chr,Start,Ref,Alt,Func.refGene,ExonicFunc.refGene,snp135,exac03,CADD.phred,PredictProtein,PC,NC) -> ncs
GENE.vars.syn$NC <- "SynonymousControl"
GENE.vars.syn$PC <- FALSE
# GENE.vars.syn$PredictProtein <- 0
GENE.vars.syn$PredictProtein <- GENE.PP$V2[match(GENE.vars.syn$aachange, GENE.PP$V1)] # match() keeps rows aligned; which(%in%) does not
GENE.vars.syn
GENE.vars.nate %>%
select(aachange,Chr,Start,Ref,Alt,Func.refGene,ExonicFunc.refGene,snp135,exac03,CADD.phred,PredictProtein,PC,NC) -> lit
GENE.vars.syn %>%
select(aachange,Chr,Start,Ref,Alt,Func.refGene,ExonicFunc.refGene,snp135,exac03,CADD.phred,PredictProtein,PC,NC) -> scs
cons <- rbind(pcs,ncs,scs)
#### Table output
write.table(cons, paste0(GENE, "controls_lite.tsv"), quote = FALSE, row.names=FALSE, sep = "\t")
write.table(lit, paste0(GENE, "_MARV_vars.tsv"), quote = FALSE, row.names=FALSE, sep = "\t")
# write.table(der, "deriziotis_vars_all_columns.tsv", quote = FALSE, row.names = FALSE, sep = "\t")
####
#############################
#### Call graphing functions
##############################
# lit
p01 <- graph_variant_list(GENE.vars.mr,'aapos.y','CADD.phred',GENE.vars.nate$aachange,"in.group",paste0(GENE," Literature Mutations"),"Literature");p01
p02 <- graph_variant_list(GENE.vars.mr,'aapos.y','PredictProtein',GENE.vars.nate$aachange,"in.group",paste0(GENE," Literature Mutations"),"Literature");p02
p03 <- graph_variant_list(GENE.vars.mr,'PredictProtein','CADD.phred',GENE.vars.nate$aachange,"in.group",paste0(GENE," Literature Mutations"),"Literature");p03
#syn
p1 <- graph_variant_list(GENE.vars.mr,'aapos.y','CADD.phred',GENE.vars.syn$aachange,"in.group",paste0(GENE," Synonymous Mutations"),"Synonymous");p1
p2 <- graph_variant_list(GENE.vars.mr,'aapos.y','PredictProtein',GENE.vars.syn$aachange,"in.group",paste0(GENE," Synonymous Mutations"),"Synonymous");p2
p3 <- graph_variant_list(GENE.vars.mr,'PredictProtein','CADD.phred',GENE.vars.syn$aachange,"in.group",paste0(GENE," Synonymous Mutations"),"Synonymous");p3
#pcs
p4 <- graph_variant_list(GENE.vars.mr,'aapos.y','CADD.phred',pcs$aachange,"in.group",paste0(GENE," High Impact Mutations"),"High Impact");p4
p5 <- graph_variant_list(GENE.vars.mr,'aapos.y','PredictProtein',pcs$aachange,"in.group",paste0(GENE," High Impact Mutations"),"High Impact");p5
p6 <- graph_variant_list(GENE.vars.mr,'PredictProtein','CADD.phred',pcs$aachange,"in.group",paste0(GENE," High Impact Mutations"),"High Impact");p6
#ncs
p7 <- graph_variant_list(GENE.vars.mr,'aapos.y','CADD.phred',ncs$aachange,"in.group",paste0(GENE," Low Impact Mutations"),"Low Impact");p7
p8 <- graph_variant_list(GENE.vars.mr,'aapos.y','PredictProtein',ncs$aachange,"in.group",paste0(GENE," Low Impact Mutations"),"Low Impact");p8
p9 <- graph_variant_list(GENE.vars.mr,'PredictProtein','CADD.phred',ncs$aachange,"in.group",paste0(GENE," Low Impact Mutations"),"Low Impact");p9
####
# Slicing variants ****MEETING
thresh.pp <- quantile(GENE.vars.mr$PredictProtein,.10)
thresh.cadd <- quantile(GENE.vars.mr$CADD.phred,.90)
GENE.vars.mr %>% filter(PredictProtein < thresh.pp & CADD.phred > thresh.cadd)
thresh.pp <- quantile(GENE.vars.mr$PredictProtein,.90)
thresh.cadd <- quantile(GENE.vars.mr$CADD.phred,.10)
GENE.vars.mr %>% filter(PredictProtein > thresh.pp & CADD.phred < thresh.cadd)
## PDF output 2
outpath <- paste0("outputs/",GENE,"/",GENE,"controlvariants.pdf")
pdf(file=outpath, height = 12, width = 17)
grid.arrange(tableGrob(lit), top = paste0(GENE," Literature Mutations" ));p01;p02;p03
grid.arrange(tableGrob(scs), top = paste0(GENE," Synonymous Mutations" ));p1;p2;p3
grid.arrange(tableGrob(pcs), top = paste0(GENE," High Impact Mutations"));p4;p5;p6
grid.arrange(tableGrob(ncs), top = paste0(GENE," Low Impact Mutations"));p7;p8;p9;p9
#p10
dev.off()
#### PDF output
# pdf(file="outputs/GENE.vars/GENE.varscontrols.pdf", height = 12, width = 17)
# grid.arrange(tableGrob(scs), top = "GENE.vars Synonymous Controls" );p11;p12
# grid.arrange(tableGrob(pcs), top = "GENE.vars Positive Controls" );p1;p2;p3
# grid.arrange(tableGrob(ncs), top = "GENE.vars Negative Controls");p4;p5;p6;p9
# #p10
# dev.off()
####
#### What happens when we compare same aa changes (different nuc change)?
GENE.vars.mr.nc %>% filter(duplicated(aachange,fromLast=FALSE) | duplicated(aachange,fromLast=TRUE)) %>%
arrange(aapos.x)-> sameaa
ggplot(sameaa, aes(x = aapos.x, y = CADD.phred)) + geom_point(aes(colour=PredictProtein)) +
xlab("AA Pos") + ylab("CADD Phred") + ggtitle("GENE.vars Same aa vars") +
scale_color_gradient2(low = "green", mid = "yellow", high = "red" )
####
# cor(GENE.vars.mr$LJB_MutationTaster, GENE.vars.mr$CADD.phred)
#### Notes
# IGV - get screenshots per variant - is the alignment suspicious?
# Controls are the same region but in other subjects (use either controls or case subjects)
# Is there weird stuff going on in that region in controls???
# By next week
# Add parent DNA availabilty to aspire
# For example if some ms variants do not have parental dna might just skim them off, lower the cost, find other variants to fill out the list
# Ying - a lot of the LOF variants are splice sites - is this concerning?
# Disregarding dn status - do we have a lot more LOF per individual than other studies)
# Get information on CNV's - are they inherited etc?
# Look at UCSC - these splice sites - which exon are they in, what's the expression profile etc also expression of particular mRNAs
# Waiting on GSC information
| /GENE_prioritised_variants.R | no_license | bencallaghan/sfari-variant | R | false | false | 16,040 | r | ##### Find variants for model organisms for Genes of interest
##### Setup
library(dplyr)
library(xtable)
library(gridExtra)
library(ggplot2)
setwd("/home/bcallaghan/NateDBCopy")
####
#### Load data
GENE <- "SYNGAP1"
CHROM <- 6
TRANSCRIPT <- "NM_006772"
INPUTDIR <- paste0("./inputs/",GENE,"/")
getwd()
# FASTA and BED are never defined in this script; the original self-assignments
# (FASTA <- FASTA; BED <- BED) would error at run time. Set them to the input
# file paths (e.g. under INPUTDIR) before use.
anno.path <- paste0(INPUTDIR,GENE,"_anno.hg19_multianno.csv")
GENE.vars <- read.csv(anno.path)
pp.path <- paste0(INPUTDIR,GENE,"PP.tsv")
GENE.PP <- read.table(pp.path)
nate.path <- paste0(INPUTDIR,GENE,".hg19_multianno.csv")
natevars <- read.csv(nate.path)
head(natevars)
natevars %>% filter(Gene.refGene == GENE) -> natevars
natevars %>% select(c(1,2,3,4,5,6,7,9,10,11,36,41)) -> p
png(filename="outputs/natvars.png",width=1500, height = 26 * nrow(p))
grid.arrange(tableGrob(p))
dev.off()
####
#### Gene / Transcript Info (not used yet)
# # Should add .bed file, alternative transcripts?
# GENE == "GENE.vars" #
#Canonical Transcript
####
#### Fix annotations
x <- GENE.vars$Otherinfo
y <- strsplit(as.character(x), split = '\t')
y <- unlist(y)
y <- as.numeric(y[seq(2,length(y),2)])
GENE.vars$CADD.phred <- as.numeric(as.character(y))
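The block above recovers the CADD phred column by tab-splitting the `Otherinfo` blob and keeping every second element, which assumes exactly two tab-separated fields per row. A toy illustration of the same extraction (made-up `Otherinfo` strings, not real annotations):

```r
x <- c("flag1\t12.3", "flag2\t7.9")
y <- unlist(strsplit(x, split = "\t"))
as.numeric(y[seq(2, length(y), 2)])  # every 2nd element is the score
# [1] 12.3  7.9
```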
####
##### Synonymous Mutations for Nate
natesyn <- natevars$Start
GENE.vars %>%
filter(ExonicFunc.refGene == "synonymous SNV") %>%
filter(grepl(paste0(GENE,":",TRANSCRIPT,".+"),AAChange.refGene)) -> GENE.vars.syn
GENE.vars.syn$aapos <- as.numeric(gsub(".*p.[A-Z]([0-9]+)[A-Z]","\\1",GENE.vars.syn$AAChange.refGene))
GENE.vars.syn %>% filter(Start %in% as.numeric(levels(natesyn))[natesyn]) -> GENE.vars.syn
GENE.vars.syn$aachange <- gsub(".*p.([A-Z][0-9]+[A-Z]).*","\\1",GENE.vars.syn$AAChange.refGene,perl=TRUE)
head(GENE.vars.syn)
if(nrow(GENE.vars.syn) == 0){
print("No appropriate synonymous mutations")
}
#####
#### Filtering
GENE.vars$Func.refGene <- gsub("exonic;splicing","exonic",GENE.vars$Func.refGene)
# Multiple isoforms in GENE.vars so filter for the canonical transcript (TRANSCRIPT, set above)
GENE.vars %>% filter(grepl(paste0(GENE,":",TRANSCRIPT,".+"),AAChange.refGene)) -> GENE.vars.f
GENE.vars.f$AAChange.refGene
# GENE.vars.f %>% mutate(aapos = as.numeric(gsub(".*(GENE.vars:NM_000314:exon[0-9]+:c.[A-Z][0-9]+[A-Z]:p.[A-Z]([0-9]+)[A-Z],).+","\\2",GENE.vars.f$AAChange.refGene))) -> GENE.vars.f
#********************
# GENE.vars.f$aapos <- as.numeric(gsub(paste0(".*GENE.vars:NM_000314:exon[0-9]+:c.[A-Z][0-9]+[A-Z]:p.[A-Z]([0-9]+)[A-Z],).+"),"\\2",GENE.vars.f$AAChange.refGene))
# GENE.vars.f$aapos <- as.numeric(gsub(paste0( ".*" , GENE , ":" , TRANSCRIPT , ":exon[0-9]+:c.[A-Z][0-9]+[A-Z]:p.[A-Z]([0-9]+)[A-Z],).+"),"\\2",GENE.vars.f$AAChange.refGene))
# as.numeric(gsub(paste0( ".*" , GENE , ":" , TRANSCRIPT , ":exon[0-9]+:c.[A-Z][0-9]+[A-Z]:p.[A-Z]([0-9]+)[A-Z],).+"),"\\2",GENE.vars.f$AAChange.refGene))
# regex <- paste0(GENE,":",TRANSCRIPT,":exon[")
# GENE.vars.f$AAChange.refGene
# Add columns for aa position and aa change for each mutation
regex <- ".*p.[A-Z]([0-9]+)[A-Z]"
GENE.vars.f$aapos <- as.numeric(gsub(regex,"\\1",GENE.vars.f$AAChange.refGene))
head(GENE.vars.f)
regex <- paste0(GENE,":",TRANSCRIPT,".*p.([A-Z][0-9]+[A-Z]).*")
GENE.vars.f$aachange <- gsub(regex,"\\1",GENE.vars.f$AAChange.refGene)
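Both extractions above rely on the same trick: the greedy `.*` walks to the last `p.<AA><pos><AA>` token in the (possibly multi-transcript) `AAChange.refGene` string, and the capture group keeps either the residue number or the full change. Worked on a hypothetical annotation string:

```r
aa <- "SYNGAP1:NM_006772:exon1:c.A10G:p.K4E"
gsub(".*p.[A-Z]([0-9]+)[A-Z]", "\\1", aa)                    # "4"   (position)
gsub("SYNGAP1:NM_006772.*p.([A-Z][0-9]+[A-Z]).*", "\\1", aa) # "K4E" (change)
```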
# *********************
ggplot(GENE.vars.f, aes(x= aapos, y = CADD.phred)) + geom_point()
# GENE.vars$aachange <- gsub("GENE.vars:NM_000314:exon[0-9]+:c.[A-Z][0-9]+[A-Z]:p.([A-Z][0-9]+[A-Z]).+","\\1",GENE.vars$AAChange.refGene,perl=TRUE)
# What's PredictProtein score look like in the context of a gene?
GENE.PP$aapos <- as.numeric(gsub('[A-Z]([0-9]+)[A-Z]','\\1',GENE.PP$V1))
head(GENE.PP,40)
ggplot(GENE.PP, aes(x = aapos, y=V2)) + geom_point()
####
#### Merge protein Predict and fix up new df
GENE.PP$V1 <- as.character(GENE.PP$V1)
GENE.vars.f$aachange
GENE.vars.mr <- merge(GENE.vars.f,GENE.PP, by.x='aachange', by.y = 'V1', all=FALSE)
GENE.vars.mr$CADD.phred <- as.numeric(GENE.vars.mr$CADD.phred)
GENE.vars.mr$PredictProtein <- as.numeric(GENE.vars.mr$V2)
GENE.vars.mr$exac03 <- as.numeric(as.character(GENE.vars.mr$exac03))
GENE.vars.mr$exac03[is.na(GENE.vars.mr$exac03)] <- 0
GENE.vars.mr %>% arrange(Start) -> GENE.vars.mr
head(GENE.vars.mr)
colnames(GENE.vars.mr)[31] <- "aapos"
####
##### Filtering
boxplot(GENE.vars.mr$PredictProtein)
boxplot(GENE.vars.mr$CADD.phred)
thresh.pp <- quantile(GENE.vars.mr$PredictProtein,.90)
thresh.cadd <- quantile(GENE.vars.mr$CADD.phred,.90)
##### Filter Positive Controls
GENE.vars.mr %>% mutate(PC = ifelse(PredictProtein > thresh.pp & CADD.phred > thresh.cadd & (exac03 == 0 | is.na(exac03)) ,yes="PositiveControl", no = FALSE)) -> GENE.vars.mr
# Graph Predict Protein
ggplot(GENE.vars.mr, aes(x = aapos, y = PredictProtein)) + geom_point(aes()) + xlab("Amino Acid Coordinates") + ylab("PredictProtein") + ggtitle("GENE.vars")
# Graph CADD
ggplot(GENE.vars.mr, aes(x = aapos, y = CADD.phred)) + geom_point(aes()) + xlab("Amino Acid Coordinates") + ylab("CADD") + ggtitle("GENE.vars")
ggplot(GENE.vars.mr, aes(x = CADD.phred, y = PredictProtein)) + geom_point(aes()) + xlab("CADD") + ylab("PP") + ggtitle("GENE.vars")
####
#### Graph the Positive Controls
p1 <- ggplot(GENE.vars.mr, aes(x = aapos, y = PredictProtein, colour = PC,shape = PC)) +
geom_point(size = 2.5) + xlab("Amino Acid Coordinates") + ylab("PredictProtein") + ggtitle(paste0(GENE," Positive Controls"))
p2 <- ggplot(GENE.vars.mr, aes(x = aapos, y = CADD.phred, colour = PC,shape = PC)) +
geom_point(size = 2.5) + xlab("Amino Acid Coordinates") + ylab("CADD Phred") + ggtitle(paste0(GENE," Positive Controls"))
# CADD vs PredictProtein
p3 <- ggplot(GENE.vars.mr, aes(x = PredictProtein, y = CADD.phred,colour = PC,shape = PC)) +
geom_point(size = 2.5) + xlab("PredictProtein") + ylab("CADD Phred") + ggtitle(paste0(GENE," Positive Controls"))
cor(GENE.vars.mr$CADD.phred, GENE.vars.mr$PredictProtein, method='spearman')
####
#### Negative Controls Filtering
thresh.pp <- quantile(GENE.vars.mr$PredictProtein,.10)
thresh.cadd <- quantile(GENE.vars.mr$CADD.phred,.10)
GENE.vars.mr %>% filter(ExonicFunc.refGene == "nonsynonymous SNV") -> GENE.vars.mr.nc
GENE.vars.mr.nc %>% mutate(NC = ifelse( test = ((PredictProtein < thresh.pp & CADD.phred < thresh.cadd & exac03 > 0)|(PredictProtein < -50 & CADD.phred < 5)),
yes = "NegativeControl", no = FALSE)) -> GENE.vars.mr.nc
GENE.vars.mr.nc$NC[is.na(GENE.vars.mr.nc$NC)] <- FALSE
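A note on the `NA` backfill above: `ifelse()` propagates `NA` whenever its test is `NA`, and mixing a character `yes` with a logical `no` coerces the whole result to character. A minimal illustration of both behaviours (toy values, not the real `exac03` column):

```r
exac <- c(0.01, NA, 0)
ifelse(exac > 0, yes = "NegativeControl", no = FALSE)
# [1] "NegativeControl" NA                "FALSE"
```

Hence the explicit `is.na()` replacement immediately above before filtering on `NC`.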
####
#### Graphing the Negative Controls
p4 <- ggplot(GENE.vars.mr.nc, aes(x = aapos, y = PredictProtein,colour=NC,shape = NC)) +
geom_point(size = 2.5) + xlab("Amino Acid Coordinates") + ylab("PredictProtein") + ggtitle("GENE.vars Negative Controls");p4
p5 <- ggplot(GENE.vars.mr.nc, aes(x = aapos, y = CADD.phred, colour = NC, shape = NC)) +
geom_point(size = 2.5) + xlab("Amino Acid Coordinates") + ylab("CADD Phred") + ggtitle("GENE.vars Negative Controls")
p6 <- ggplot(GENE.vars.mr.nc, aes(x = PredictProtein, y = CADD.phred, colour = NC, shape = NC)) +
geom_point(size = 2.5) + xlab("PredictProtein") + ylab("CADD Phred") + ggtitle("GENE.vars Negative Controls")
p9 <- ggplot(GENE.vars.mr, aes( x= as.numeric(Start),y = as.numeric(CADD.phred))) +
geom_point(size = 2.5) + xlab('Genomic Coordinates') + ylab("CADD score") + ggtitle("CADD by Genomic Coordinates")
p10 <- ggplot(GENE.vars.mr, aes(CADD.phred, PredictProtein)) + geom_point()
graph_variants <- function(anno.df,x.name,y.name,group,title){
ggplot(anno.df, aes_string(x = x.name, y = y.name, colour = group, shape = group)) +
geom_point(size = 3) +
xlab(x.name) + ylab(y.name) + ggtitle(title)
}
# GENE.vars has no 'aapos' or 'NC' columns; use the annotated negative-control frame
graph_variants(GENE.vars.mr.nc,'aapos','CADD.phred','NC',"GENE.vars Negative Controls")
# p01 is built in the plotting section below, once graph_variant_list (defined next) and GENE.vars.nate exist
graph_variant_list <- function(anno.df,x.name,y.name,var.list,group,title, legend.title){
anno.df %>% filter(Func.refGene == "exonic") -> anno.df.f
anno.df.f %>% mutate(in.group = ifelse(aachange %in% var.list, yes = TRUE, no = FALSE)) -> anno.df.fm
  anno.df.fm$aapos <- as.numeric(gsub(".*p.[A-Z]([0-9]+)[A-Z]","\\1",anno.df.fm$AAChange.refGene)) # the original pattern had an unmatched ")" and referenced a nonexistent group 2
anno.df.fm %>% filter(in.group == TRUE) -> anno.df.trus
anno.df.trus %>% print()
# group <- anno.df.fm$in.group
ggplot(anno.df.fm, aes_string(x = x.name, y = y.name, colour = group)) +
geom_point(size = 3, alpha = 0.8) +
geom_point(aes_string(x = x.name, y = y.name, colour = group),data = anno.df.trus, size = 5) +
xlab(x.name) + ylab(y.name) + ggtitle(title) + guides(title = "in.list") +
# theme(legend.title = element_text("asd"))+
labs(colour = legend.title ) + guides(shape = FALSE) #+
#scale_y_continuous( limits = c(0,50), expand = c(0,0) )
}
p11 <- graph_variant_list(GENE.vars.mr,'aapos','CADD.phred',GENE.vars.syn$aachange,"in.group","GENE.vars Synonymous Mutations","Synonymous");p11
p12 <- graph_variant_list(GENE.vars.mr,'aapos','PredictProtein',GENE.vars.syn$aachange,"in.group","GENE.vars Synonymous Mutations","Synonymous");p12
# p11 <- graph_variant_list(GENE.vars.mr,'aapos','PredictProtein',pcs$aachange,"in.group","GENE.vars High Impact Mutations","High Impact");p11 # pcs is not defined until the Table output section below; this also overwrote the synonymous p11
GENE.vars.mr.nc %>% filter(PC == "PositiveControl") %>% select(aachange) -> pos.list
GENE.vars.mr.nc %>% filter(NC == "NegativeControl") %>% select(aachange) -> neg.list
GENE.vars.f %>% filter(ExonicFunc.refGene == "synonymous SNV") %>% select(aachange) -> syn.list # aachange exists on GENE.vars.f, not GENE.vars
var.list <- c("L265L")
neg.list$aachange
# p11 <- graph_variant_list(GENE.vars,'aapos','CADD.phred',GENE.vars.syn$aachange,"in.group","GENE.vars Synonymous Mutations")
####Lit variants
### Filtering Nate Variants
natevars %>% mutate(stalt = paste0(Start,Ref,Alt)) -> natevars
GENE.vars.mr.nc %>% mutate(stalt = paste0(Start,Ref,Alt)) -> GENE.vars.mr.nc
GENE.vars.mr.nc %>% filter(stalt %in% natevars$stalt) -> GENE.vars.nate
#### Correlation / List coordinates of exons
cor(GENE.vars.mr$CADD.phred, GENE.vars.mr$PredictProtein, method='spearman')
v <- GENE.vars.mr$Start
split(unique(v),cumsum(c(1,diff(unique(v))!=1)))
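The `split(unique(v), cumsum(c(1, diff(unique(v)) != 1)))` idiom above groups the sorted start positions into runs of consecutive integers — one list element per contiguous genomic block, which is how the exon coordinate ranges get listed. A small illustration with toy positions (not real coordinates):

```r
# Toy positions: three contiguous blocks (10-12, 50-51, 90)
v <- c(10, 11, 12, 50, 51, 90)

# diff(...) != 1 flags where a run breaks; cumsum() turns the flags into group ids
split(unique(v), cumsum(c(1, diff(unique(v)) != 1)))
# $`1`: 10 11 12   $`2`: 50 51   $`3`: 90
```

Each list element then corresponds to one contiguous coordinate interval (here, a stand-in for an exon).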
####
#### Table output
GENE.vars.mr.nc %>% filter(PC == "PositiveControl") -> pcs
GENE.vars.mr.nc %>% filter(NC == "NegativeControl") -> ncs
cons <- rbind(pcs,ncs)
write.table(cons, "GENE.varscontrols.tsv", quote = FALSE, row.names=FALSE, sep = "\t")
####
#### Controls Filtering
GENE.vars.mr.nc %>% filter(PC == "PositiveControl") %>%
select(aachange,Chr,Start,Ref,Alt,Func.refGene,ExonicFunc.refGene,snp135,exac03,CADD.phred,PredictProtein,PC,NC) -> pcs
GENE.vars.mr.nc %>% filter(NC == "NegativeControl") %>%
select(aachange,Chr,Start,Ref,Alt,Func.refGene,ExonicFunc.refGene,snp135,exac03,CADD.phred,PredictProtein,PC,NC) -> ncs
GENE.vars.syn$NC <- "SynonymousControl"
GENE.vars.syn$PC <- FALSE
# GENE.vars.syn$PredictProtein <- 0
GENE.vars.syn$PredictProtein <- GENE.PP$V2[match(GENE.vars.syn$aachange, GENE.PP$V1)] # match() aligns scores to GENE.vars.syn row order; which(%in%) returns them in GENE.PP order
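One subtlety with pulling scores across by identifier: indexing a lookup with `which(x %in% y)` returns values in the lookup table's order (silently dropping misses), while `match(y, x)` returns one index per query, in query order. A toy example with made-up IDs:

```r
pp_id  <- c("A1B", "C2D", "E3F")   # lookup table IDs
pp_val <- c(0.9, 0.1, 0.5)         # lookup table scores
syn_id <- c("E3F", "A1B")          # query order

pp_val[which(pp_id %in% syn_id)]   # 0.9 0.5 -- lookup-table order, misaligned
pp_val[match(syn_id, pp_id)]       # 0.5 0.9 -- aligned to syn_id
```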
GENE.vars.syn
GENE.vars.nate %>%
select(aachange,Chr,Start,Ref,Alt,Func.refGene,ExonicFunc.refGene,snp135,exac03,CADD.phred,PredictProtein,PC,NC) -> lit
GENE.vars.syn %>%
select(aachange,Chr,Start,Ref,Alt,Func.refGene,ExonicFunc.refGene,snp135,exac03,CADD.phred,PredictProtein,PC,NC) -> scs
cons <- rbind(pcs,ncs,scs)
#### Table output
write.table(cons, "GENE.varscontrols_lite.tsv", quote = FALSE, row.names=FALSE, sep = "\t")
write.table(lit,"GENE.vars_MARV_vars.tsv", quote = FALSE, row.names=FALSE, sep = "\t")
# write.table(der, "deriziotis_vars_all_columns.tsv", quote = FALSE, row.names = FALSE, sep = "\t")
####
#############################
#### Call graphing functions
##############################
# lit
p01 <- graph_variant_list(GENE.vars.mr,'aapos.y','CADD.phred',GENE.vars.nate$aachange,"in.group","GENE.vars Literature Mutations","Literature");p01
p02 <- graph_variant_list(GENE.vars.mr,'aapos.y','PredictProtein',GENE.vars.nate$aachange,"in.group","GENE.vars Literature Mutations","Literature");p02
p03 <- graph_variant_list(GENE.vars.mr,'PredictProtein','CADD.phred',GENE.vars.nate$aachange,"in.group","GENE.vars Literature Mutations","Literature");p03
#syn
p1 <- graph_variant_list(GENE.vars.mr,'aapos.y','CADD.phred',GENE.vars.syn$aachange,"in.group","GENE.vars Synonymous Mutations","Synonymous");p1
p2 <- graph_variant_list(GENE.vars.mr,'aapos.y','PredictProtein',GENE.vars.syn$aachange,"in.group","GENE.vars Synonymous Mutations","Synonymous");p2
p3 <- graph_variant_list(GENE.vars.mr,'PredictProtein','CADD.phred',GENE.vars.syn$aachange,"in.group","GENE.vars Synonymous Mutations","Synonymous");p3
#pcs
p4 <- graph_variant_list(GENE.vars.mr,'aapos.y','CADD.phred',pcs$aachange,"in.group","GENE.vars High Impact Mutations","High Impact");p4
p5 <- graph_variant_list(GENE.vars.mr,'aapos.y','PredictProtein',pcs$aachange,"in.group","GENE.vars High Impact Mutations","High Impact");p5
p6 <- graph_variant_list(GENE.vars.mr,'PredictProtein','CADD.phred',pcs$aachange,"in.group","GENE.vars High Impact Mutations","High Impact");p6
#ncs
p7 <- graph_variant_list(GENE.vars.mr,'aapos.y','CADD.phred',ncs$aachange,"in.group","GENE.vars Low Impact Mutations","Low Impact");p7
p8 <- graph_variant_list(GENE.vars.mr,'aapos.y','PredictProtein',ncs$aachange,"in.group","GENE.vars Low Impact Mutations","Low Impact");p8
p9 <- graph_variant_list(GENE.vars.mr,'PredictProtein','CADD.phred',ncs$aachange,"in.group","GENE.vars Low Impact Mutations","Low Impact");p9
####
# Slicing variants ****MEETING
thresh.pp <- quantile(GENE.vars.mr$PredictProtein,.10)
thresh.cadd <- quantile(GENE.vars.mr$CADD.phred,.90)
GENE.vars.mr %>% filter(PredictProtein < thresh.pp & CADD.phred > thresh.cadd)
thresh.pp <- quantile(GENE.vars.mr$PredictProtein,.90)
thresh.cadd <- quantile(GENE.vars.mr$CADD.phred,.10)
GENE.vars.mr %>% filter(PredictProtein > thresh.pp & CADD.phred < thresh.cadd)
## PDF output 2
outpath <- paste0("outputs/",GENE,"/",GENE,"controlvariants.pdf")
pdf(file=outpath, height = 12, width = 17)
grid.arrange(tableGrob(lit), top = paste0(GENE," Literature Mutations" ));p01;p02;p03
grid.arrange(tableGrob(scs), top = paste0(GENE," Synonymous Mutations" ));p1;p2;p3
grid.arrange(tableGrob(pcs), top = paste0(GENE," High Impact Mutations"));p4;p5;p6
grid.arrange(tableGrob(ncs), top = paste0(GENE," Low Impact Mutations"));p7;p8;p9
#p10
dev.off()
#### PDF output
# pdf(file="outputs/GENE.vars/GENE.varscontrols.pdf", height = 12, width = 17)
# grid.arrange(tableGrob(scs), top = "GENE.vars Synonymous Controls" );p11;p12
# grid.arrange(tableGrob(pcs), top = "GENE.vars Positive Controls" );p1;p2;p3
# grid.arrange(tableGrob(ncs), top = "GENE.vars Negative Controls");p4;p5;p6;p9
# #p10
# dev.off()
####
#### What happens when we compare same aa changes (different nuc change)?
GENE.vars.mr.nc %>% filter(duplicated(aachange,fromLast=FALSE) | duplicated(aachange,fromLast=TRUE)) %>%
arrange(aapos.x)-> sameaa
ggplot(sameaa, aes(x = aapos.x, y = CADD.phred)) + geom_point(aes(colour=PredictProtein)) +
xlab("AA Pos") + ylab("CADD Phred") + ggtitle("GENE.vars Same aa vars") +
scale_color_gradient2(low = "green", mid = "yellow", high = "red" )
####
# cor(GENE.vars.mr$LJB_MutationTaster, GENE.vars.mr$CADD.phred)
#### Notes
# IGV - get screenshots per variant - is the alignment suspicious?
# Controls are the same region but in other subjects (use either controls or case subjects)
# Is there weird stuff going on in that region in controls???
# By next week
# Add parent DNA availability to aspire
# For example if some ms variants do not have parental dna might just skim them off, lower the cost, find other variants to fill out the list
# Ying - a lot of the LOF variants are splice sites - is this concerning?
# Disregarding dn status - do we have a lot more LOF per individual than other studies?
# Get information on CNV's - are they inherited etc?
# Look at UCSC - these splice sites - which exon are they in, what's the expression profile etc also expression of particular mRNAs
# Waiting on GSC information
|
library(DJL)
### Name: target.arrival.hdf
### Title: Arrival target setting using HDF
### Aliases: target.arrival.hdf
### ** Examples
# Estimate arrivals of MY2015 SC/TC 8 cylinder engines
# Load engine dataset
df <- dataset.engine.2015
# Subset for SC/TC 8 cylinder engines
stc.8 <- subset(df, grepl("^.C..", df[, 8]) & df[, 3] == 8)
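`grepl("^.C..", df[, 8])` keeps codes whose second character is `C`, so both SC and TC designations match. A quick check on made-up codes:

```r
grepl("^.C..", c("SC25", "TC30", "NA12"))
# [1]  TRUE  TRUE FALSE
```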
# Parameters
x <- subset(stc.8, select = 4)
y <- subset(stc.8, select = 5:7)
d <- subset(stc.8, select = 2)
# Generate an SOA map
target.arrival.hdf(x, y, d, 2014, "vrs")
| /data/genthat_extracted_code/DJL/examples/target.arrival.hdf.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 554 | r | library(DJL)
|
\name{qval.VM2.vect}
\alias{qval.VM2.vect}
\title{Vector of VM2 q-values from vm.result data object}
\description{This function extracts the vector of VM2 q-values from vm.result data object}
\usage{
qval.VM2.vect(data, lambda = seq(0, 0.95, 0.05), pi0.meth = "smoother", fdr.level = NULL, robust = FALSE)
}
\arguments{
\item{data}{ Gene expression data object }
\item{lambda}{ parameter for qvalue function }
\item{pi0.meth}{parameter for qvalue function }
\item{fdr.level}{parameter for qvalue function}
\item{robust}{parameter for qvalue function }
}
\details{}
\value{The vector of q-values computed from the p-values of the VM2 method}
\references{}
\author{Paul Delmar}
\note{}
\seealso{}
\examples{}
\keyword{htest}
| /man/qval.VM2.vect.Rd | no_license | cran/varmixt | R | false | false | 758 | rd | \name{qval.VM2.vect}
|
library(readr)
library(tidyr)
library(dplyr)
library(rgdal)
library(spdplyr)
library(janitor)
library(openxlsx)
library(ggplot2)
# Load wr_info.Rdata
load("wr_info.RData")
## Load and process POD shapefile.
# identify most recently downloaded gdb.
gdb_files <- file.info(list.files("./gis", full.names = T))
dsn <- rownames(gdb_files)[which.max(gdb_files$mtime)]
# Load POD shapefile
pods <- readOGR(dsn = dsn,
layer = ogrListLayers(dsn)[1],
verbose = FALSE)
# Clean and filter for active rights in wr_info.
names(pods) <- make_clean_names(names(pods))
pods <- pods %>%
rename(wr_id = appl_id) %>%
filter(wr_id %in% wr_info$wr_id)
# Plot the PODs, should roughly look like a map of CA.
plot(pods, pch = 20)
## How many water rights have a POD in more than one HUC-8 watershed?
pods_tbl <- as_tibble(pods)
multi_huc8_rights <- pods_tbl %>%
distinct(wr_id, hu_8_name) %>%
arrange(hu_8_name) %>%
group_by(wr_id) %>%
summarize(huc8_count = n(),
huc_8_list = paste(hu_8_name, collapse = ", "),
.groups = "drop") %>%
arrange(desc(huc8_count)) %>%
filter(huc8_count > 1) %>%
left_join(., wr_info, by = "wr_id") %>%
select(wr_id, owner, wr_type, wr_status, huc8_count, huc_8_list)
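The pipeline above is the standard distinct-then-count pattern: collapse to one row per (right, watershed) pair, then count watersheds per right and keep the rights appearing in more than one. The same logic on a toy tibble (hypothetical IDs):

```r
library(dplyr)

toy <- tibble(wr_id     = c("A", "A", "A", "B"),
              hu_8_name = c("X", "X", "Y", "Z"))

toy %>%
  distinct(wr_id, hu_8_name) %>%        # drop duplicate POD rows
  count(wr_id, name = "huc8_count") %>% # watersheds per right
  filter(huc8_count > 1)
# wr_id "A" with huc8_count 2
```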
# ggplot(multi_huc8_rights, aes(x = huc8_count)) + geom_bar()
# Subset the PODs to the rights with multiple HUC-8s (written out as a shapefile below).
multi_huc8_rights_geo <- pods %>%
dplyr::filter(wr_id %in% multi_huc8_rights$wr_id)
plot(multi_huc8_rights_geo, pch = 20)
# Save resulting water right list.
if(!dir.exists("./output")) dir.create("./output")
f_name <- paste0("Water Rights with PODs in Multiple HUC-8s ",
Sys.Date(), ".xlsx")
openxlsx::write.xlsx(x = multi_huc8_rights,
file = paste0("./output/", f_name),
asTable = TRUE)
# Save shapefile of the resulting PODs.
if(!dir.exists("./pods")) dir.create("./pods")
writeOGR(obj = multi_huc8_rights_geo,
dsn = "pods",
layer = "pods",
driver = "ESRI Shapefile",
overwrite_layer = TRUE)
| /explore/identify-multi-huc8-water-rights.R | permissive | dhesswrce/dwr-wasdet | R | false | false | 2,106 | r |
|
#
# This is a Shiny web application. You can run the application by clicking
# the 'Run App' button above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
#
library(shiny)
library(tidyverse)
library(ggmap)
library(maptools)
library(maps)
library(mapproj)
map_proj <- c("cylindrical", "mercator", "sinusoidal", "gnomonic")
# mapWorld is used below but was never defined; build the world polygons from the maps package
mapWorld <- map_data("world")
mp1 <- ggplot(mapWorld, aes(x=long, y=lat, group=group))+
  geom_polygon(fill="white", color="black") +
  coord_map(xlim=c(-180,180), ylim=c(-60, 90))
ui <- fluidPage(
    selectInput("projection", "Type of Map Projection", map_proj, multiple = F, selected = NULL),
    plotOutput("mapPlot")  # Shiny IDs must be unique; "projection" is already used by the input
)
# Define server logic required to draw the map in the selected projection
server <- function(input, output, session) {
  output$mapPlot <- renderPlot(
    mp1 + coord_map(input$projection, xlim=c(-180,180), ylim=c(-60, 90))
  )
}
# Run the application
shinyApp(ui = ui, server = server)
| /Map Dropdown (Assignment).R | no_license | jacobburke15/Mapping-615-Assignments | R | false | false | 973 | r | #
|
#' A complete table for multiple univariable survival analysis
#'
#'JS.uni_m_fdr outputs a table of univariable survival analysis results with Number of total patients,
#'Number of Events, HR (95\% Confidence Interval), and P value. This function only changes the format of the output table.
#'Note: c index and d index are from package survcomp.
#'@param Data A data.frame in which to interpret the variables
#'@param Event The status indicator, normally 0=alive, 1=dead
#'@param Stime This is the follow up time
#'@param Svars A vector of variables
#'@param groupns A text vector of the the group names for output
#'@param Cats a vector of logical elements indicating whether or not the corresponding Svars variable is categorical
#'@param EM a logical term indicating whether or not to include estimated median survival
#'@return A dataframe of coxph output including Number of total patients, Number of Events, HRs (95\% Confidence Intervals), P values, C index and D index.
#'@examples
#'Event <- c("pd_censor")
#'Stime <- c("pd_surv")
#'Svars <- c("tr_group", "age_m")
#'Cats <- c(T, F)
#'Groupns <- c("Treatment", "Age")
#'JS.uni_m_fdr(D, Event, Stime, Svars, Cats, Groupns)
#'
#'@export
#'@name JS.uni_m_fdr
#'
#'
JS.uni_m_fdr <- function(Data , Event, Stime , Svars, Cats, Groupns, EM = F) {
rs.all <- NULL
if (EM == F){
for (i in 1:length(Svars))
{
rs <- JS.uni_fdr(Data, Event, Stime, Svars[i] , Groupns[i], Cats[i])
rs.all <- rbind(rs.all, rs)
}
}
if (EM == T){
for (i in 1:length(Svars))
{
rs <- JS.uni_em(Data, Event, Stime, Svars[i] , Groupns[i], Cats[i])
rs.all <- rbind(rs.all, rs)
}
}
return(rs.all)
} | /R/JSuni_m_fdr.R | no_license | SophiaJia/Jsurvformat | R | false | false | 1,830 | r | #' A complete table for multiple univariable survival analysis
## use library dplyr for cleaning data purposes
library(dplyr)
library(lubridate)
# Parameters for reading and ploting the data
fileName <- "household_power_consumption.txt"
initDate <- "2007-02-01"
endDate <- "2007-02-02"
outputFileName <- "plot3.png"
## read the file into the table
print("Program started...")
print(paste("Reading file:", fileName, sep = " "))
if(!file.exists(fileName))
{
print(paste("Input file", fileName, "does not exist in working directory: ", getwd(), sep = " "))
print("Please check...")
stop()
}
householdPC_DF <- read.table( file = fileName, sep = ";",
header = TRUE, na.strings = c("?")
)
## filter rows according to init and end dates
print(paste("Filtering data with dates between", initDate, "and", endDate, sep = " "))
householdPC_DF <- filter( householdPC_DF, dmy(Date) == ymd(initDate) | dmy(Date) == ymd(endDate) )
# plot the data
print("Plotting data...")
# setup png device
png(filename = outputFileName, width = 480, height = 480)
# plot 1st graph
dateTime <- dmy_hms( paste( householdPC_DF$Date, householdPC_DF$Time, sep = " " ) )
plot( dateTime, householdPC_DF$Sub_metering_1,
      type = "n", xlab = "", ylab = "Energy sub metering"
    )
lines( dateTime, householdPC_DF$Sub_metering_1, col = "black")
lines( dateTime, householdPC_DF$Sub_metering_2, col = "red")
lines( dateTime, householdPC_DF$Sub_metering_3, col = "blue")
legend( "topright", lty = 1, col = c("black", "red", "blue"),
legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"))
# send the plot to the png device
print("Sending the plot to png device")
dev.off()
print(paste("Program finished, please check the file", outputFileName, sep = " ")) | /plot3.R | no_license | germanchaparro/ExData_Plotting1 | R | false | false | 1,950 | r |
## Test that perm gives the same answers in some special cases
library(perm)
library(coin)
#########################
## Wilcoxon Rank Sum Test
#########################
set.seed(1)
y<-rpois(10,5)
## Try an example with ties
table(y)
g<-as.factor(c(rep(1,4),rep(2,6)))
## Asymptotic method is the uncorrected one in wilcox.test
permTS(rank(y)~g,method="pclt")
wilcox.test(y~g,correct=FALSE)
## Compare to exact method in coin
permTS(rank(y)~g,method="exact.ce",alternative="two.sided")
permTS(rank(y)~g,method="exact.network",alternative="two.sided")
permTS(rank(y)~g,method="exact.ce",alternative="two.sided", control=permControl(tsmethod="abs"))
permTS(rank(y)~g,method="exact.network",alternative="two.sided",control=permControl(tsmethod="abs"))
## Note that coin uses the default two.sided p-value that matches our
## alternative="two.sided" with permControl(tsmethod="abs"), but the default is permControl(tsmethod="central")
## need to use coin because wilcox.test exact=TRUE does not handle ties
wilcox_test(y~g,distribution=exact())
## Try one-sided
permTS(rank(y)~g,method="exact.network",alternative="less")
wilcox_test(y~g,distribution=exact(),alternative="less")
#################################
# Kruskal-Wallis Test
################################
g2<-as.factor(c(rep(1,3),rep(2,4),rep(3,3)))
kruskal.test(y~g2)
permKS(rank(y)~g2,exact=FALSE)
## Since both coin and perm use Monte Carlo,
## we cannot expect an exact match
set.seed(11)
kruskal_test(y~g2,distribution="approximate")
permKS(rank(y),g2,method="exact.mc")
##############################
# Trend Tests
###############################
g3<-as.numeric(g2)
independence_test(y~g3,distribution="asymptotic",teststat="max")
independence_test(y~g3,distribution="asymptotic",teststat="quad")
independence_test(y~g3,distribution="asymptotic",teststat="scalar")
permTREND(y,g3,method="pclt")
independence_test(y~g3,alternative="less",distribution="asymptotic",teststat="max")
independence_test(y~g3,alternative="less",distribution="asymptotic",teststat="scalar")
permTREND(y,g3,alternative="less",method="pclt")
# I tested this data set using the Linear-by-Linear
# Association test in StatXact Procs 8, the Asymptotic
# p-value matches the two-sided one from permTREND
# asymptotic p=0.8604
# exact p = 0.9305
#permTREND(y,g3,method="exact.mc",alternative="less",nmc=10^5-1)
# gives p=.5945 with 99 pcnt ci (.5905,.5985)
#independence_test(y~g3,alternative="less",distribution=approximate(10^5-1),teststat="scalar")
# gives p=.593
 | /tests/wilcoxon.kruskal.trend.R | no_license | cran/perm | R | false | false | 2,584 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/sbj_country_and_age.R
\name{replaceAgeRange}
\alias{replaceAgeRange}
\title{Replace the age-range nodes with new nodes containing their values in the Trial Information category}
\usage{
replaceAgeRange(xml, x)
}
\arguments{
\item{xml}{an XML file}
\item{x}{a function which counts the number of subjects per age range}
}
\value{
an XML file
}
\description{
Replace the age-range nodes with new nodes containing their values in the Trial Information category
}
 | /man/replaceAgeRange.Rd | permissive | tarazaf/eudract | R | false | true | 597 | rd |
\name{model.frame}
\title{Model Frames for timeSeries Objects}
\alias{model.frame}
\alias{model.frame.timeSeries}
\description{
Allows to work with model frames for 'timeSeries' objects.
}
\usage{
\method{model.frame}{timeSeries}(formula, data, ...)
}
\arguments{
\item{formula}{
a model formula object.
}
\item{data}{
an object of class \code{timeSeries}.
}
\item{\dots}{
arguments passed to the function \code{stats::model.frame}.
}
}
\value{
an object of class \code{timeSeries}.
}
\details{
    The function \code{model.frame} is a generic function which, in the
    R stats framework, by default returns a data.frame with the variables
    needed to use the formula and any \dots arguments. In contrast, this
    method returns an object of class \code{timeSeries} when the
    argument \code{data} is not a \code{data.frame} but an object of class
    \code{timeSeries}.
}
\note{
This function is preliminary and untested.
}
\seealso{
\code{\link{model.frame}}.
}
\author{
Diethelm Wuertz for the Rmetrics \R-port.
}
\examples{
## data -
# Microsoft Data:
myFinCenter <<- "GMT"
MSFT = as.timeSeries(data(msft.dat))[1:12, ]
## model.frame -
# Extract High's and Low's:
model.frame( ~ High + Low, data = MSFT)
# Extract Open Prices and their log10's:
base = 10
Open = model.frame(Open ~ log(Open, base = `base`), data = MSFT)
colnames(Open) <- c("MSFT", "log10(MSFT)")
Open
}
\keyword{chron}
 | /man/model.frame.Rd | no_license | cran/fSeries | R | false | false | 1,606 | rd |
data1 <- readRDS("./Data_Day2and3/car.rds")
head(data1)
str(data1)
summary(data1)
set.seed(100)
train <- sample(nrow(data1), 0.7*nrow(data1), replace = FALSE)
TrainSet <- data1[train,]
ValidSet <- data1[-train,]
library(randomForest)
model1 <- randomForest(Condition ~ ., data = TrainSet, importance = TRUE)
model1
model2 <- randomForest(Condition ~ ., data = TrainSet, ntree = 500, mtry = 6, importance = TRUE)
model2
predTrain <- predict(model2, TrainSet, type = "class")
table(predTrain, TrainSet$Condition)
predValid <- predict(model2, ValidSet, type = "class")
mean(predValid == ValidSet$Condition)
table(predValid,ValidSet$Condition)
importance(model2)
varImpPlot(model2)
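The script attaches caret but never uses it; a hedged follow-up (assuming the same `ValidSet` and `predValid` objects and a factor `Condition` column) is to summarize validation performance with `caret::confusionMatrix()`:

```r
# Sketch only: more detailed validation summary via caret.
# confusionMatrix() expects both arguments as factors with the same levels.
cm <- confusionMatrix(predValid, ValidSet$Condition)
cm$overall["Accuracy"]  # overall accuracy on the validation set
cm$byClass              # per-class sensitivity, specificity, etc.
```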
 | /src/3.3_ensemble_RF.R | permissive | vetpsk/MLAM | R | false | false | 692 | r |
# nocov
.onLoad <- function(libname, pkgname) {
vctrs::s3_register("dplyr::dplyr_reconstruct", "posterior", method = posterior_reconstruct)
vctrs::s3_register("dplyr::dplyr_reconstruct", "posterior_diff", method = posterior_diff_reconstruct)
vctrs::s3_register("ggplot2::autoplot", "posterior")
vctrs::s3_register("ggplot2::autoplot", "posterior_diff")
}
# nocov end
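`vctrs::s3_register()` registers an S3 method when (and if) the package that owns the generic is loaded, avoiding a hard dependency on it. A minimal illustration of the same pattern — the generic and class names here are made up, not from this package:

```r
# Hypothetical package code: register a method for class "myclass" on the
# generic "somepkg::describe" without listing somepkg in Imports.
.onLoad <- function(libname, pkgname) {
  vctrs::s3_register("somepkg::describe", "myclass")
}
```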
 | /R/zzz.R | no_license | tonyk7440/tidyposterior | R | false | false | 378 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/explore_clustering.R
\name{explore_clustering}
\alias{explore_clustering}
\title{Fit and plot K-Means and DBSCAN clustering algorithms on the dataset}
\usage{
explore_clustering(df, hyperparameter_dict = NULL)
}
\arguments{
\item{df}{dataframe}
\item{hyperparameter_dict}{a named list of hyperparameters to use; default is NULL, in which case a default set of hyperparameters is used}
}
\value{
a named list in which each name is a clustering model name and each value is a list of plots generated by that model
}
\description{
Fit and plot K-Means and DBSCAN clustering algorithms on the dataset
}
\examples{
library(palmerpenguins)
explore_clustering(penguins)
}
 | /man/explore_clustering.Rd | permissive | lephanthuymai/datascience.eda.R | R | false | true | 712 | rd |
# Install and load packages
package_names <- c("survey","dplyr","foreign","devtools")
lapply(package_names, function(x) if(!x %in% installed.packages()) install.packages(x))
lapply(package_names, require, character.only=T)
install_github("e-mitchell/meps_r_pkg/MEPS")
library(MEPS)
options(survey.lonely.psu="adjust")
# Load FYC file
FYC <- read_sas('C:/MEPS/.FYC..sas7bdat');
year <- .year.
FYC <- FYC %>%
mutate_at(vars(starts_with("AGE")),funs(replace(., .< 0, NA))) %>%
mutate(AGELAST = coalesce(AGE.yy.X, AGE42X, AGE31X))
FYC$ind = 1
# Reason for difficulty receiving needed prescribed medicines
FYC <- FYC %>%
mutate(delay_PM = (PMUNAB42 == 1 | PMDLAY42 == 1)*1,
afford_PM = (PMDLRS42 == 1 | PMUNRS42 == 1)*1,
insure_PM = (PMDLRS42 %in% c(2,3) | PMUNRS42 %in% c(2,3))*1,
other_PM = (PMDLRS42 > 3 | PMUNRS42 > 3)*1)
# Perceived health status
if(year == 1996)
FYC <- FYC %>% mutate(RTHLTH53 = RTEHLTH2, RTHLTH42 = RTEHLTH2, RTHLTH31 = RTEHLTH1)
FYC <- FYC %>%
mutate_at(vars(starts_with("RTHLTH")), funs(replace(., .< 0, NA))) %>%
mutate(
health = coalesce(RTHLTH53, RTHLTH42, RTHLTH31),
health = recode_factor(health, .default = "Missing", .missing = "Missing",
"1" = "Excellent",
"2" = "Very good",
"3" = "Good",
"4" = "Fair",
"5" = "Poor"))
FYCdsgn <- svydesign(
id = ~VARPSU,
strata = ~VARSTR,
weights = ~PERWT.yy.F,
data = FYC,
nest = TRUE)
results <- svyby(~afford_PM + insure_PM + other_PM, FUN = svymean, by = ~health, design = subset(FYCdsgn, ACCELI42==1 & delay_PM==1))
print(results)
 | /mepstrends/hc_care/json/code/r/pctPOP__health__rsn_PM__.r | permissive | HHS-AHRQ/MEPS-summary-tables | R | false | false | 1,659 | r |
## UI
aboutUI <- function(id){
ns <- NS(id)
tagList(
h2("About this app"),
tags$p("Designed by xxx, xxx"),
h3("Support"),
fluidRow(
widgetUserBox(
title = "Thomas Girke",
subtitle = "PI",
type = NULL,
width = 6,
src = "https://avatars3.githubusercontent.com/u/1336916?s=400&v=4",
background = TRUE,
backgroundUrl = "https://bashooka.com/wp-content/uploads/2018/04/scg-canvas-background-animation-24.jpg",
closable = FALSE,
collapsible = FALSE,
HTML('Thomas Girke <a href="mailto:tgirke@ucr.edu"><tgirke@ucr.edu></a>')
),
widgetUserBox(
title = "Le Zhang",
subtitle = "Student",
type = NULL,
width = 6,
src = "https://avatars0.githubusercontent.com/u/35240440?s=460&v=4",
background = TRUE,
backgroundUrl = "https://bashooka.com/wp-content/uploads/2018/04/scg-canvas-background-animation-24.jpg",
closable = FALSE,
collapsible = FALSE,
HTML('Le Zhang <a href="mailto:le.zhang001@email.ucr.edu"><le.zhang001@email.ucr.edu></a>')
)
),
fluidRow(
widgetUserBox(
title = "Ponmathi Ramasamy",
subtitle = "Student",
type = NULL,
width = 6,
src = "https://avatars2.githubusercontent.com/u/45085174?s=400&v=4",
background = TRUE,
backgroundUrl = "https://bashooka.com/wp-content/uploads/2018/04/scg-canvas-background-animation-24.jpg",
closable = FALSE,
collapsible = FALSE,
HTML('Ponmathi Ramasamy <a href="mailto:prama008@ucr.edu"><prama008@ucr.edu></a>')
),
widgetUserBox(
title = "Daniela Cassol",
subtitle = "Postdoc",
type = NULL,
width = 6,
src = "https://avatars2.githubusercontent.com/u/12722576?s=400&v=4",
background = TRUE,
backgroundUrl = "https://bashooka.com/wp-content/uploads/2018/04/scg-canvas-background-animation-24.jpg",
closable = FALSE,
collapsible = FALSE,
HTML('Daniela Cassol <a href="mailto:danielac@ucr.edu"><danielac@ucr.edu></a>')
)
),
h3("About SystemPipeR"),
tags$iframe(src="http://girke.bioinformatics.ucr.edu/systemPipeR/mydoc_systemPipeR_2.html",
style="border: 1px solid #AAA; width: 100%; height: 700px"),
br(),
tags$a(href="https://bioconductor.org/packages/release/bioc/html/systemPipeR.html",
           "Visit Bioconductor page")
)
}
## server
aboutServer <- function(input, output, session){
} | /R/tab_about.R | permissive | mathrj/systemPipeS | R | false | false | 3,094 | r |
t_1_sample_ci_plot <- function(ci_df) {
#########################################################################################
# t_1_sample_ci(52.1,45.1,22,95)
# x_mu x_sigma x_n x_pct alpha p_star t_star SE ci_lb ci_ub ci
# 52.1 45.1 22 95 0.05 0.975 2.079614 9.615352 32.10378 72.09622 52.1+-9.62
#########################################################################################
x_df <- as.numeric(ci_df['x_n_alt']) - 1 # sample size degrees of freedom
alpha <- as.numeric(ci_df['alpha'])
t_star <- as.numeric(ci_df['t_star'])
p_star <- pt(-1*t_star,x_df)
t_value <- round(as.numeric(ci_df['t_value']),2)
p_value <- round(as.numeric(ci_df['p_value']),2)
pct <- (1 - 2*p_star) * 100
ci <- ci_df['ci']
x_mu_null <- 0
tails <- function(x) {
norm_tail_std <- dt(x,df=x_df)
norm_tail_std[x > -1 * t_star & x < t_star] <- NA
return(norm_tail_std)
}
center <- function(x) {
middle <- dt(x,df=x_df)
middle[x < -1 * t_star & x > t_star] <- NA
return(middle)
}
# Summary plot for the t distribution
xvalues <- data.frame(x = c(-3, 3))
p <- ggplot(xvalues, aes(x = x))
p + stat_function(fun = dt, args = list(df=x_df)) +
stat_function(fun = center,geom = "area", fill = "red", alpha = .5) +
stat_function(fun = tails, geom = "area", fill = "green", alpha = 1.0) +
geom_vline(xintercept = -1 * t_star) +
geom_vline(xintercept = t_star) +
geom_segment(aes(x=t_value,y=0,xend=t_value,yend=.2)) +
geom_text(x = t_star, y = 0.2, size = 4
,label=paste0('t =',t_value
,'\np-value=',p_value
)
) +
geom_text(x = -t_star, y = 0.38, size = 4
,label= paste0("alpha=(100 - pct)/100"
,"\np* = alpha/2"
,"\nt* = | qt(p*,df) |")
) +
geom_text(x = 0, y = .3, size = 4, label = paste0("pct = ",round(pct,4))) +
geom_text(x = t_star, y = 0.38, size = 4
,label=paste0('alpha=',alpha
,'\np*=',round(p_star,4)
,'\nt*=',round(t_star ,4)
)
) +
geom_text(x = -3.5, y = 0.01, size = 4, label = paste0('p*',round(p_star,4)) ) +
geom_text(x = 3.5, y = 0.01, size = 4, label = paste0('p*',round(p_star,4)) ) +
xlim(c(-4, 4)) +
xlab(expression(t==frac(x- bar(x),frac(s,sqrt(n))) )) +
labs(caption = paste0('confidence level=',ci))
# if(x_mu_null != 0) {
# p + geom_segment(aes(x=t_value,y=0,xend=t_value,yend=.2)) +
# geom_text(x = t_star, y = 0.2, size = 4
# ,label=paste0('t =',t_value
# ,'\np-value=',p_value
# )
# )
# }
# p
}
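A hedged usage sketch: the function indexes `ci_df` by the names `x_n_alt`, `alpha`, `t_star`, `t_value`, `p_value`, and `ci`, so a minimal hand-built input (values illustrative; in practice `ci_df` would come from `t_1_sample_ci()`) could be:

```r
# Illustrative named vector only; real input comes from t_1_sample_ci().
ci_df <- c(x_n_alt = 22, alpha = 0.05, t_star = 2.0796,
           t_value = 0.64, p_value = 0.53, ci = "52.1+-9.62")
t_1_sample_ci_plot(ci_df)  # shaded t distribution with t* cutoffs and t marked
```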
 | /R/t_1_sample_ci_plot.r | no_license | SophieMYang/statistics | R | false | false | 3,017 | r |
# Scrape from Sportsbook Review
library(rvest)     # read_html(), html_nodes(), html_text()
library(jsonlite)  # fromJSON()
lego_movie <- read_html("https://www.sportsbookreview.com/betting-odds/nba-basketball/pointspread/?date=20191108")
scrape2<-
lego_movie%>%
#html_nodes(".opener")
html_nodes("._1QEDd span")
#html_nodes(".undefined div , ._1t1eJ span , ._3IKv4 , ._2XT2S , #bettingOddsGridContainer a , ._1kega span , ._1QEDd")
  #html_nodes("#bettingOddsGridContainer img , .pZWkv , ._2XT2S span , ._1kega span , .undefined div , ._3ptK- , ._3qi53")
scrape2%>%html_text()
test<-scrape2%>%
.[[5]]%>%
.[[2]]
scrape2%>%
.[[400]]%>%
html_text()
#html_nodes("_3Nv_7 opener")
scrape<-
lego_movie%>%
html_nodes("script")%>%
.[[5]]%>%
html_text()%>%
gsub("window.__INITIAL_STATE__=| .window.__config = .*|;", "",.)%>%
trimws()
#html_node("#bettingOddsGridContainer")
test<-fromJSON(scrape)
scrape%>% html_text()
test<-scrape%>% html_nodes("div div")%>%
  .[[1]] | /SBR Workflow/Code/_old/sbr scrape attempt.R | no_license | ngbasch/NBA | R | false | false | 910 | r |
#import libraries to work with
library(plyr)
library(stringr)
library(e1071)
library(tm)
library(tm.plugin.mail)
library(party)
library(randomForest)
library(caret)
#load up word polarity list and format it
afinn_list <- read.delim(file='E:/Books/Msc CA/Sem 3/DM/sentiment_analysis-master/AFINN/AFINN-111.txt', header=FALSE, stringsAsFactors=FALSE)
names(afinn_list) <- c('word', 'score')
afinn_list$word <- tolower(afinn_list$word)
#categorize words as very negative to very positive
vNegTerms <- afinn_list$word[afinn_list$score==-5 | afinn_list$score==-4]
negTerms <- c(afinn_list$word[afinn_list$score==-3 | afinn_list$score==-2 | afinn_list$score==-1])
posTerms <- c(afinn_list$word[afinn_list$score==3 | afinn_list$score==2 | afinn_list$score==1])
vPosTerms <- c(afinn_list$word[afinn_list$score==5 | afinn_list$score==4])
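The script below calls `sentimentScore()`, which is defined elsewhere in the original repository. A minimal sketch consistent with its usage (four per-category match counts alongside each sentence) — the implementation details are assumptions, not the original code:

```r
# Assumed sketch of sentimentScore(); the real function may differ.
# Returns one row per sentence: the sentence plus counts of matches in
# the very-negative, negative, positive, and very-positive term lists.
sentimentScore <- function(sentences, vNegTerms, negTerms, posTerms, vPosTerms) {
  scores <- laply(sentences, function(sentence) {
    words <- unlist(str_split(tolower(sentence), "\\s+"))
    c(sum(words %in% vNegTerms), sum(words %in% negTerms),
      sum(words %in% posTerms), sum(words %in% vPosTerms))
  })
  data.frame(sentence = sentences, scores, stringsAsFactors = FALSE)
}
```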
#load up positive and negative sentences and format
posText <- read.csv("E:/Books/Msc CA/Sem 3/DM/posemails.csv")
posText <- as.character(posText$emails)
negText <- read.csv("E:/Books/Msc CA/Sem 3/DM/negemails.csv")
negText <- as.character(negText$emails)
#build tables of positive and negative sentences with scores
posResult <- as.data.frame(sentimentScore(posText, vNegTerms, negTerms, posTerms, vPosTerms))
negResult <- as.data.frame(sentimentScore(negText, vNegTerms, negTerms, posTerms, vPosTerms))
posResult <- cbind(posResult, 'positive')
colnames(posResult) <- c('sentence', 'vNeg', 'neg', 'pos', 'vPos', 'sentiment')
negResult <- cbind(negResult, 'negative')
colnames(negResult) <- c('sentence', 'vNeg', 'neg', 'pos', 'vPos', 'sentiment')
#combine the positive and negative tables
results <- rbind(posResult, negResult)
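The script calls `sentimentScore()` without defining or sourcing it. A hypothetical minimal sketch of such a helper — counting, per sentence, how many words fall into each of the four AFINN polarity buckets — could look like this (the real helper in the original repository may differ):

```r
# Hypothetical sketch of the undefined sentimentScore() helper: for each
# sentence, count the words falling into each of the four AFINN buckets.
sentimentScore <- function(sentences, vNegTerms, negTerms, posTerms, vPosTerms) {
  counts <- lapply(sentences, function(s) {
    s <- tolower(gsub("[[:punct:]]|[[:digit:]]", " ", s))  # normalise text
    words <- unlist(strsplit(s, "\\s+"))
    c(vNeg = sum(words %in% vNegTerms),
      neg  = sum(words %in% negTerms),
      pos  = sum(words %in% posTerms),
      vPos = sum(words %in% vPosTerms))
  })
  data.frame(sentence = sentences, do.call(rbind, counts),
             stringsAsFactors = FALSE)
}
```

The column order matches the `colnames()` assignments above (`sentence`, `vNeg`, `neg`, `pos`, `vPos`, with `sentiment` bound on afterwards).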
| /email_sentiment.R | no_license | RachelAde/Email-Sentiment-Analysis | R | false | false | 1,703 | r |
|
#'
#'@title Function to plot natural mortality rates by year using ggplot2
#'
#'@description This function plots natural mortality estimates by year,
#' sex and maturity state.
#'
#'@param objs - list of resLst objects
#'@param type - type of mortality values to plot
#'@param dodge - width to dodge overlapping series
#'@param pdf - creates pdf, if not NULL
#'@param showPlot - flag (T/F) to show plot
#'@param verbose - flag (T/F) to print diagnostic information
#'
#'@return ggplot2 object as list element
#'
#'@details None.
#'
#'@import ggplot2
#'@import wtsPlots
#'
#'@export
#'
compareResults.Pop.NaturalMortality<-function(objs,
type="M_cy",
dodge=0.2,
pdf=NULL,
showPlot=FALSE,
verbose=FALSE){
options(stringsAsFactors=FALSE);
std_theme = wtsPlots::getStdTheme();
cases<-names(objs);
#create pdf, if necessary
if(!is.null(pdf)){
pdf(file=pdf,width=11,height=8,onefile=TRUE);
on.exit(grDevices::dev.off());
showPlot<-TRUE;
}
mdfr<-extractMDFR.Pop.NaturalMortality(objs,type=type,verbose=verbose);
#----------------------------------
# plot natural mortality rates by year
#----------------------------------
pd<-position_dodge(width=dodge);
p <- ggplot(mdfr,aes_string(x='y',y='val',colour='case'));
p <- p + geom_line(position=pd);
p <- p + geom_point(position=pd);
if (any(!is.na(mdfr$lci))) p <- p + geom_errorbar(aes_string(ymin='lci',ymax='uci'),position=pd);
p <- p + geom_abline(intercept=0.23,slope=0,linetype=2,colour='black')
p <- p + labs(x='year',y="natural mortality");
p <- p + ggtitle("Natural Mortality");
p <- p + facet_grid(m+s~x);
p <- p + ylim(c(0,NA));
p = p + std_theme;
if (showPlot) print(p);
plots<-list();
cap1<-" \n \nFigure &&figno. Estimated natural mortality rates, by year. \n \n";
plots[[cap1]]<-p;
return(plots);
}
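The dodge-plus-errorbar pattern used in the function above is easy to get wrong; a self-contained toy (made-up numbers, not model output) showing how `position_dodge` keeps two cases from overplotting:

```r
library(ggplot2)
# Made-up natural-mortality-style series for two cases, dodged as above
mdfr <- data.frame(y    = rep(2010:2012, 2),
                   val  = c(0.20, 0.25, 0.22, 0.23, 0.21, 0.24),
                   case = rep(c("A", "B"), each = 3))
pd <- position_dodge(width = 0.2)
p  <- ggplot(mdfr, aes(x = y, y = val, colour = case)) +
  geom_line(position = pd) +
  geom_point(position = pd)
```

The same `pd` object must be passed to every layer (lines, points, errorbars) or the series will be shifted inconsistently.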
#### METADATA ####
# Objective: Estimate water depth and hydroperiod at a point of known elevation in the Ria Formosa lagoon
# Authors: Carmen B. de los Santos, with contributions by Márcio Martins
# Creation: Seville, 27 March 2020
#### SETTINGS ####
# required packages
packages <- c("readxl", # to read xlsx
"lubridate", # for handling dates
"plyr", # for ddply function
"reshape2", # to shape data
"ggplot2", # for graphing
"grid", # for graphing
"gridExtra") # for graphing
for (i in seq_along(packages)) {
if(!do.call(require, list(package = packages[i]))) {
do.call(install.packages, list(pkgs = packages[i]))
do.call(require, list(package = packages[i]))
}
}
# clean
rm(list = ls())
# working directory
setwd("~/OneDrive - Universidade do Algarve/Trabajo/STUDIES/wip/elevation/wordir/model/")
# theme
default <- theme(plot.background=element_blank()) +
theme(panel.background=element_rect(fill = "white", colour = "black")) +
theme(strip.background=element_blank()) +
theme(strip.text=element_text(size = 9,colour = "black", angle = 0)) +
theme(panel.grid.major=element_blank()) +
theme(panel.grid.minor=element_blank()) +
theme(axis.text.x=element_text(colour = "black", size = 10, angle = 0)) +
theme(axis.text.y=element_text(colour = "black", size = 10, angle = 0)) +
theme(axis.title.x=element_text(size = 10, vjust = 0.2)) +
theme(axis.title.y=element_text(size = 10)) +
theme(plot.margin= unit(c(0.5, 0.5, 0.5, 0.5), "lines")) +
theme(legend.background=element_blank()) +
theme(legend.key=element_blank()) +
theme(legend.key.height=unit(0.8, "line")) +
theme(legend.text.align=0) +
theme(plot.title=element_text(size = 16, face = "bold"))
#### ------------------------------ MODEL -----------------------------------####
#### INPUT - POINTS ####
# load data (points of known elevation)
data.poi <- read.csv("./inputs/data_points.csv")
# check structure
str(data.poi)
#### INPUT - HEIGHTS ####
# load data (tide heights from official charts)
data.hei <- read.csv("./inputs/data_heights.csv")
# check structure
str(data.hei)
# select data
data.hei <- data.hei[, c("datetime", "height", "tide")]
# date settings (must be UTC)
data.hei$datetime <- as.POSIXct(data.hei$datetime, tz = "UTC")
# calculate minutes from first date
data.hei$timemin <- as.numeric(data.hei$datetime)/60
data.hei$timemin <- data.hei$timemin-as.numeric(as.POSIXct("2017-03-01 00:00", tz = "UTC"))/60
# add column day, month, time in hours
data.hei$day <- day(data.hei$datetime)
data.hei$month <- month(data.hei$datetime)
data.hei$timeh <- data.hei$timemin/60
# check
table(data.hei$tide, useNA="ifany")
ggplot(data.hei,aes(x = timeh, y = height)) +
geom_point(size = 0.8, colour = "blue") +
geom_line()
#### MODEL-Part 1: CALCULATION INTERPOLATED TIDE HEIGHT ####
# the goal is to interpolate tide heights at 1-min intervals using low/high tide charts.
# month of reference: March 2017.
# create intervals
data.int <- data.frame(datetime=seq(ISOdatetime(2017, 2, 28, 0, 0, 0, tz = "UTC"),
ISOdatetime(2017, 4, 1, 0, 0, 0, tz = "UTC"),
by = 60*1)) # time interval in seconds
# calculate minutes from first date
data.int$timemin <- as.numeric(data.int$datetime)/60
data.int$timemin <- data.int$timemin-as.numeric(as.POSIXct("2017-03-01 00:00", tz = "UTC"))/60
data.int <- merge(data.int,data.hei,all.x=T)
# correction term for tide height from the official chart (source, translated from Portuguese:
# "Since the Hydrographic Zero (ZH) datum was fixed relative to mean levels adopted several
# decades ago, there is currently a systematic difference of about +10 cm between observed
# water heights and predicted tide heights. For more information see www.hidrografico.pt").
tide.corr <- 0.1
# for each time event (t), i.e. row in data.int, identify previous/posterior tide event (time, height)
DATA.INT <- data.frame(
"datetime" = POSIXct(length = nrow(data.int)),
"timemin" = numeric(length = nrow(data.int)),
"prev_event" = character(length = nrow(data.int)),
"prev_time" = numeric(length = nrow(data.int)),
"prev_height"= numeric(length = nrow(data.int)) ,
"post_event" = character(length = nrow(data.int)),
"post_time" = numeric(length = nrow(data.int)),
"post_height"= numeric(length = nrow(data.int))
)
for (i in 1:nrow(data.int)){
# select time event
data <- data.int[i,]
# get info previous event
prev_event <- data.hei[data.hei$timemin <= data$timemin,]
prev_event <- prev_event[which.max(prev_event$timemin),]
# get info posterior event
post_event <- data.hei[data.hei$timemin >= data$timemin,]
post_event <- post_event[which.min(post_event$timemin),]
# add info to data
DATA.INT[i, ] <- data.frame(data$datetime,
data$timemin,
prev_event$tide,
prev_event$timemin,
prev_event$height + tide.corr,
post_event$tide,
post_event$timemin,
post_event$height + tide.corr)
}
# replace
data.int <- DATA.INT
# clean
rm(i, DATA.INT, post_event, prev_event)
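The row-wise loop above works, but the same previous/posterior bracketing can be done in one vectorised call with `findInterval()` — a sketch on toy values (event times and heights invented for illustration):

```r
# Vectorised alternative to the event-bracketing loop (toy event table)
ev_time <- c(0, 372, 744)        # sorted tide-event times (min)
ev_h    <- c(0.8, 3.2, 0.7)      # corresponding event heights (m)
t_query <- c(100, 400)           # 1-min grid times to bracket
i       <- findInterval(t_query, ev_time)    # index of previous event
prev_h  <- ev_h[i]
post_h  <- ev_h[pmin(i + 1, length(ev_h))]   # posterior event height
```

For a month of 1-min timestamps this replaces ~44,000 loop iterations with a single lookup.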
# calculate parameters for each point
data.int$par_T <- data.int$post_time-data.int$prev_time # time (min) between closest event
data.int$par_t <- data.int$timemin-data.int$prev_time # time (min) from previous event
# calculate estimated height using the analytical cosine-interpolation formula
data.int$height <- with(data.int,ifelse(prev_event == post_event,prev_height,
(prev_height + post_height)/2 + (prev_height-post_height)/2*cos((pi*par_t)/par_T)))
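The `ifelse()` above applies the standard cosine interpolation between consecutive tide events, h(t) = (h1 + h2)/2 + (h1 - h2)/2 * cos(pi*t/T). A quick numerical check (heights and interval made up):

```r
# Cosine tide interpolation between a previous and a posterior event
tide_height <- function(h_prev, h_post, t, Tper) {
  (h_prev + h_post) / 2 + (h_prev - h_post) / 2 * cos(pi * t / Tper)
}
tide_height(0.8, 3.2, 0,   372)  # 0.8 -> still at the previous low water
tide_height(0.8, 3.2, 186, 372)  # 2.0 -> mid-tide level halfway through
tide_height(0.8, 3.2, 372, 372)  # 3.2 -> reaches the posterior high water
```

At t = 0 the cosine term is +1 (previous event height), at t = T it is -1 (posterior event height), so the interpolation is exact at both endpoints.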
# select columns
data.int <- data.int[,c("datetime", "timemin", "height")]
# check time (must be UTC)
data.int$datetime[[1]]
# add column day, month, time in hours
data.int$day <- day(data.int$datetime)
data.int$month <- month(data.int$datetime)
data.int$timeh <- data.int$timemin/60
# select data (only March)
data.int <- data.int[data.int$month==3,]
# select columns
data.int <- data.int[c("datetime", "day", "timemin", "timeh", "height")]
# check plot (only March)
pdf("./outputs/plots_heights_chart_interpolated.pdf", onefile = TRUE, paper = "a4")
ggplot(data.int,aes(x = timeh, y = height)) +
geom_point(size = 0.3) +
geom_point(data = data.hei[data.hei$month == 3,],
aes(x = timeh, y = height),colour = "red", shape = 21) +
facet_wrap(. ~ day,scales = "free_x")
dev.off()
# save
write.csv(data.int,"./outputs/data_heights_interpolated.csv")
#### MODEL-Part 2: CALCULATIONS DEPTHS and HYDROPERIODS ####
# at each point p and time t, we want to estimate the water depth d
# d(p,t) = h(t) - e(p)
# where
# d(p,t) is depth, in meters, at time t and point p
# h(t) is the tidal height, in meters, referred to MSL at time t
# e(p) is the elevation, in meters, referred to MSL at point p
# it requires datasets: data.int, data.hei and data.poi (already in the global environment)
# data.int <- read.csv("./outputs/data_heights_interpolated.csv")
# data.hei <- read.csv("./inputs/data_heights.csv")
# data.poi <- read.csv("./inputs/data_points.csv")
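A minimal numerical illustration of the rule above (toy heights, 1-min steps):

```r
# d(p,t) = h(t) - e(p), clamped at 0 when the point is emerged;
# hydroperiod = number of 1-min steps with d > 0, converted to hours
h <- c(-0.5, 0.2, 0.9, 1.4, 0.9, 0.2, -0.5)  # tide height h(t), m MSL
e <- 0.3                                     # point elevation e(p), m MSL
d <- ifelse(h > e, h - e, 0)                 # water depth d(p,t)
hydroperiod_h <- sum(d > 0) / 60             # 3 wet minutes -> 0.05 h
```

This is exactly the pattern the loop below applies to every point over the whole month of 1-min intervals.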
# preparations for the loop
pdf("./outputs/plots_hydroperiod_days.pdf", onefile=TRUE, paper = "a4")
par(mfrow = c(3, 3), pty = "s", las = 1, oma = c(1, 1, 1, 1), mar = c(5, 5, 3, 1))
points <- unique(data.poi$point)
DATA.DEP <- data.frame() # for depths over 1-min intervals over the month
TABLE.MON <- data.frame() # for monthly hydroperiod
TABLE.DAY <- data.frame() # for daily hydroperiod
TABLE.SAM <- data.frame() # for sampling days hydroperiod
for (i in 1:length(points)){
## SELECT POINT
table.mon <- data.poi[i,]
## CALCULATE MONTHLY HYDROPERIOD FOR EACH POINT
# calculate depth
data.dep <- data.int # h(t)
data.dep$elevation <- table.mon$elevation # e(p)
data.dep$depth <- with(data.dep, # d(p,t)
ifelse(height>elevation,
height-elevation,
0))
# add info point to data.dep
data.dep$point <- table.mon$point
# calculate maximum and minimum depths
table.mon$depth_max <- max(data.dep$depth)
table.mon$depth_min <- min(data.dep$depth)
  # calculate monthly hydroperiod (hours/month)
  table.mon$hydroperiod_hmon <- nrow(data.dep[data.dep$depth>0,])/60
  # calculate monthly hydroperiod (% month)
  table.mon$hydroperiod_mperc <- 100*table.mon$hydroperiod_hmon/(31*24)
# bind data
TABLE.MON <- rbind(TABLE.MON,table.mon)
DATA.DEP <- rbind(DATA.DEP,data.dep)
## CALCULATE DAILY HYDROPERIOD
for(j in 1:31){
# select j day
subset <- data.dep[data.dep$day == j,]
# create table
table.day <- data.poi[i,]
table.day$day <- j
# calculate maximum and minimum depths
table.day$depth_max <- max(subset$depth)
table.day$depth_min <- min(subset$depth)
    # calculate hydroperiod
table.day$hydroperiod_hday <- nrow(subset[subset$depth>0,])/60
# bind data
TABLE.DAY <- rbind(TABLE.DAY,table.day)
# plot water depth over a day (BLACK)
plot(x = subset$timemin/60,
y = subset$depth,
type = "l",
xlab = "Time (hours of month)",
ylab = "Water depth (m)",
ylim = c(-3, 3),
pch = 19, col = "black")
# add tide height from interpolations
lines(x = subset$timemin/60, y = subset$height, col = "grey")
# add low/high tide from charts (GREY CIRCLES)
# points(x=data.hei$timemin/60,y=data.hei$height,col="grey")
# add text point, day, hydroperiod
mtext(paste0(table.day$point, " day ", j, " - E = ", round(table.day$hydroperiod_hday, 1)),
side = 3, line = 0, adj = 0, cex = 0.8)
# add elevation line (GREEN LINE)
abline(a = table.day$elevation, b = 0, col = "green", lwd = 1)
}
par(mfrow = c(3, 3), pty = "s", las = 1, oma = c(1, 1, 1, 1), mar = c(5, 5, 3, 1))
## CALCULATE SAMPLING DAYS HYDROPERIOD
# select days = c(28,29,30)
subset <- data.dep[data.dep$day %in% c(28, 29, 30),]
# create table
table.sam <- data.poi[i,]
# calculate maximum and minimum depths
table.sam$depth_max <- max(subset$depth)
table.sam$depth_min <- min(subset$depth)
  # calculate hydroperiod
table.sam$hydroperiod_hday <- nrow(subset[subset$depth>0,])/60
# bind data
TABLE.SAM <- rbind(TABLE.SAM,table.sam)
}
# rename
table.mon <- TABLE.MON # table with monthly hydroperiod at each point (n = 40)
table.day <- TABLE.DAY # table with daily hydroperiod at each point and day (n = 40*31)
table.sam <- TABLE.SAM # table with daily hydroperiod at each point from 28 to 30 March (n = 40*3)
data.dep <- DATA.DEP # dataset with depth at each point and minute over a month (n = 40*31*1440)
# manage data
data.dep$datetime <- as.POSIXct(data.dep$datetime, format = "%Y-%m-%d %H:%M:%S", tz = "UTC")
data.dep$datetime[[1]]
# clean
rm(i, j, points, data.poi, data.hei, subset, TABLE.MON, TABLE.DAY, TABLE.SAM, DATA.DEP)
dev.off()
dev.off()
# save data in csv (long-term data storage)
write.csv(table.mon, "./outputs/table_hydroperiod_monthly.csv")
write.csv(table.day, "./outputs/table_hydroperiod_daily.csv")
write.csv(table.sam, "./outputs/table_hydroperiod_sampling_days.csv")
write.csv(data.dep, "./outputs/data_depths_minute.csv")
#### ------------------------------ VALIDATION ------------------------------####
#### COMPARISON OBSERVED vs MODELLED ####
## DATA OBSERVED
# load data
data.obs <- read.csv("./inputs/data_depths.csv")
# check structure
str(data.obs)
# date settings (must be UTC)
data.obs$datetime <- as.POSIXct(data.obs$datetime, tz = "UTC")
# know start/end time
data.obs$datetime[1]
data.obs$datetime[nrow(data.obs)]
# select 24 hours ("2017-03-30 09:30:00 UTC" to "2017-03-31 09:30:00 UTC")
data.obs <- data.obs[data.obs$datetime<as.POSIXct("2017-03-31 09:31:00", tz = "UTC"),]
# check start/end time
data.obs$datetime[1]
data.obs$datetime[nrow(data.obs)]
## DATA MODEL
# select points Zn3_0 and Zn3_30 in data.dep (points at which the pressure loggers were installed on the 2017-03-30)
data.mod <- data.dep[data.dep$point=="Zn3_0" | data.dep$point=="Zn3_30",]
# select 24 hours ("2017-03-30 09:30:00 UTC" to "2017-03-31 09:30:00 UTC")
data.mod <- data.mod[data.mod$datetime>=as.POSIXct("2017-03-30 09:30:00", tz = "UTC"),]
data.mod <- data.mod[data.mod$datetime<=as.POSIXct("2017-03-31 09:30:00", tz = "UTC"),]
# check start/end time
data.mod$datetime[1]
data.mod$datetime[nrow(data.mod)]
# check
ggplot(data.mod,aes(x = datetime,y = depth)) +
geom_line() +
facet_wrap(~ point)
ggplot(data.obs,aes(x = datetime, y = depth)) +
geom_line() +
facet_wrap(~ point)
## COMPARE DATA
# define subset mod to merge
subset.mod <- data.mod[,c("point", "datetime", "depth")]
subset.mod$set <- "model"
subset.obs <- data.obs[,c("point", "datetime", "depth")]
subset.obs$set <- "observed"
data.com <- rbind(subset.obs, subset.mod)
rm(subset.mod, subset.obs)
# plot comparison depths modelled vs observed
plot <- ggplot(data.com,aes(x = datetime, y = depth, colour = set)) +
geom_line() +
facet_wrap(~ point) +
default +
theme(axis.text.x = element_text(colour = "black", size = 10, angle = 90))
plot
pdf(file="~/OneDrive - Universidade do Algarve/Trabajo/STUDIES/wip/elevation/wordir/model/outputs/plot_validation.pdf",
width=7,height=4)
grid.arrange(plot,top="")
dev.off()
#### COMPARISON HYDROPERIOD ####
# set loop to calculate daily hydroperiod (hours day-1)
data.com$set_point <- with(data.com,paste0(set, "_", point))
set_points <- unique(data.com$set_point)
table.com <- data.frame()
for(i in 1:length(set_points)){
# select data for a set_point
data <- data.com[data.com$set_point==set_points[i],]
# create table
table <- data.frame(set_point = set_points[i],
set = unique(data$set),
point = unique(data$point),
hydroperiod_hday = nrow(data[data$depth>0,])/60)
table.com <- rbind(table.com, table)
}
table.com
# clean
rm(i, set_points, data, table, data.obs, data.mod)
# plot
ggplot(table.com, aes(x = point, y = hydroperiod_hday, fill = set)) +
geom_col(position = position_dodge())
# clean
rm(table.com)
dev.off()
#### END ####
dev.off()
rm(list=ls())
| /model_script.R | permissive | cbdelossantos/estimatehydroperiod | R | false | false | 14,665 | r | #### METADATA ####
# Objective: Estimate water depth and hydroperiod at a point of known elevation in the Ria Formosa lagoon
# Authors: Carmen B. de los Santos, with contributions by Márcio Martins
# Creation: Seville, 27 March 2020
#### SETTINGS ####
# required packages
packages <- c("readxl", # to read xlsx
"lubridate", # for handling dates
"plyr", # for ddply function
"reshape2", # to shape data
"ggplot2", # for graphing
"grid", # for graphing
"gridExtra") # for graphing
for (i in seq_along(packages)) {
if(!do.call(require, list(package = packages[i]))) {
do.call(install.packages, list(pkgs = packages[i]))
do.call(require, list(package = packages[i]))
}
}
# clean
rm(list = ls())
# working directory
setwd("~/OneDrive - Universidade do Algarve/Trabajo/STUDIES/wip/elevation/wordir/model/")
# theme
default <- theme(plot.background=element_blank()) +
theme(panel.background=element_rect(fill = "white", colour = "black")) +
theme(strip.background=element_blank()) +
theme(strip.text=element_text(size = 9,colour = "black", angle = 0)) +
theme(panel.grid.major=element_blank()) +
theme(panel.grid.minor=element_blank()) +
theme(axis.text.x=element_text(colour = "black", size = 10, angle = 0)) +
theme(axis.text.y=element_text(colour = "black", size = 10, angle = 0)) +
theme(axis.title.x=element_text(size = 10, vjust = 0.2)) +
theme(axis.title.y=element_text(size = 10)) +
theme(plot.margin= unit(c(0.5, 0.5, 0.5, 0.5), "lines")) +
theme(legend.background=element_blank()) +
theme(legend.key=element_blank()) +
theme(legend.key.height=unit(0.8, "line")) +
theme(legend.text.align=0) +
theme(plot.title=element_text(size = 16, face = "bold"))
#### ------------------------------ MODEL -----------------------------------####
#### INPUT - POINTS ####
# load data (points of known elevation)
data.poi <- read.csv("./inputs/data_points.csv")
# check structure
str(data.poi)
#### INPUT - HEIGHTS ####
# load data (tide heights from official charts)
data.hei <- read.csv("./inputs/data_heights.csv")
# check structure
str(data.hei)
# select data
data.hei <- data.hei[, c("datetime", "height", "tide")]
# date settings (must be UTC)
data.hei$datetime <- as.POSIXct(data.hei$datetime, tz = "UTC")
# calculate minutes from first date
data.hei$timemin <- as.numeric(data.hei$datetime)/60
data.hei$timemin <- data.hei$timemin-as.numeric(as.POSIXct("2017-03-01 00:00", tz = "UTC"))/60
# add column day, month, time in hours
data.hei$day <- day(data.hei$datetime)
data.hei$month <- month(data.hei$datetime)
data.hei$timeh <- data.hei$timemin/60
# check
table(data.hei$tide, useNA="ifany")
ggplot(data.hei,aes(x = timeh, y = height)) +
geom_point(size = 0.8, colour = "blue") +
geom_line()
#### MODEL-Part 1: CALCULATION INTERPOLATED TIDE HEIGHT ####
# the goal is to interpolate tide heights at 1-min intervals using low/high tide charts.
# month of reference: March 2017.
# create intervals
data.int <- data.frame(datetime=seq(ISOdatetime(2017, 2, 28, 0, 0, 0, tz = "UTC"),
ISOdatetime(2017, 4, 1, 0, 0, 0, tz = "UTC"),
by = 60*1)) # time interval in seconds
# calculate minutes from first date
data.int$timemin <- as.numeric(data.int$datetime)/60
data.int$timemin <- data.int$timemin-as.numeric(as.POSIXct("2017-03-01 00:00", tz = "UTC"))/60
data.int <- merge(data.int,data.hei,all.x=T)
# correction term for tide height from official chart (source: "Dado que o plano do Zero Hidrográfico (ZH) foi fixado em relação
# a níveis médios adotados há várias décadas, existe presentemente uma diferença sistemática de cerca de +10 cm entre
# as alturas de água observadas e as alturas de maré previstas. Para mais informações consultar www.hidrografico.pt).
tide.corr <- 0.1
# for each time event (t), i.e. row in data.int, identify previous/posterior tide event (time, height)
DATA.INT <- data.frame(
"datetime" = POSIXct(length = nrow(data.int)),
"timemin" = numeric(length = nrow(data.int)),
"prev_event" = character(length = nrow(data.int)),
"prev_time" = numeric(length = nrow(data.int)),
"prev_height"= numeric(length = nrow(data.int)) ,
"post_event" = character(length = nrow(data.int)),
"post_time" = numeric(length = nrow(data.int)),
"post_height"= numeric(length = nrow(data.int))
)
for (i in 1:nrow(data.int)){
# select time event
data <- data.int[i,]
# get info previous event
prev_event <- data.hei[data.hei$timemin <= data$timemin,]
prev_event <- prev_event[which.max(prev_event$timemin),]
# get info posterior event
post_event <- data.hei[data.hei$timemin >= data$timemin,]
post_event <- post_event[which.min(post_event$timemin),]
# add info to data
DATA.INT[i, ] <- data.frame(data$datetime,
data$timemin,
prev_event$tide,
prev_event$timemin,
prev_event$height + tide.corr,
post_event$tide,
post_event$timemin,
post_event$height + tide.corr)
}
# replace
data.int <- DATA.INT
# clean
rm(i, DATA.INT, post_event, prev_event)
# calculate parameters for each point
data.int$par_T <- data.int$post_time-data.int$prev_time # time (min) between closest event
data.int$par_t <- data.int$timemin-data.int$prev_time # time (min) from previous event
# calculate estimated height using analitical formula
data.int$height <- with(data.int,ifelse(prev_event == post_event,prev_height,
(prev_height + post_height)/2 + (prev_height-post_height)/2*cos((pi*par_t)/par_T)))
# select columns
data.int <- data.int[,c("datetime", "timemin", "height")]
# check time (must be UTC)
data.int$datetime[[1]]
# add column day, month, time in hours
data.int$day <- day(data.int$datetime)
data.int$month <- month(data.int$datetime)
data.int$timeh <- data.int$timemin/60
# select data (only March)
data.int <- data.int[data.int$month==3,]
# select columns
data.int <- data.int[c("datetime", "day", "timemin", "timeh", "height")]
# check plot (only March)
pdf("./outputs/plots_heights_chart_interpolated.pdf", onefile = TRUE, paper = "a4")
ggplot(data.int,aes(x = timeh, y = height)) +
geom_point(size = 0.3) +
geom_point(data = data.hei[data.hei$month == 3,],
aes(x = timeh, y = height),colour = "red", shape = 21) +
facet_wrap(. ~ day,scales = "free_x")
dev.off()
# save
write.csv(data.int,"./outputs/data_heights_interpolated.csv")
#### MODEL-Part 2: CALCULATIONS DEPTHS and HYDROPERIODS ####
# at each point p and time t, we want to estimate the water depth d
# d(p,t) = h(t) - e(p)
# where
# d(p,t) is depth, in meters, at time t and point p
# h(t) is the tidal height, in meters, referred to MSL at time t
# e(p) is the elevation, in meters, referred to MSL at point p
# it requires datasets: data.int, data.hei and data.poi (already in the global environment)
# data.int <- read.csv("./outputs/data_heights_interpolated.csv")
# data.hei <- read.csv("./inputs/data_heights.csv")
# data.poi <- read.csv("./inputs/data_points.csv")
# preparations for the loop
pdf("./outputs/plots_hydroperiod_days.pdf", onefile=TRUE, paper = "a4")
par(mfrow = c(3, 3), pty = "s", las = 1, oma = c(1, 1, 1, 1), mar = c(5, 5, 3, 1))
points <- unique(data.poi$point)
DATA.DEP <- data.frame() # for depths over 1-min intervals over the month
TABLE.MON <- data.frame() # for montly hydroperiod
TABLE.DAY <- data.frame() # for daily hydroperiod
TABLE.SAM <- data.frame() # for sampling days hydroperiod
for (i in 1:length(points)){
## SELECT POINT
table.mon <- data.poi[i,]
## CALCULATE MONTLY HYDROPERIOD FOR EACH POINT
# calculate depth
data.dep <- data.int # h(t)
data.dep$elevation <- table.mon$elevation # e(p)
data.dep$depth <- with(data.dep, # d(p,t)
ifelse(height>elevation,
height-elevation,
0))
# add info point to data.dep
data.dep$point <- table.mon$point
# calculate maximum and minimum depths
table.mon$depth_max <- max(data.dep$depth)
table.mon$depth_min <- min(data.dep$depth)
# calculate montly hydroperiod (hours/month)
table.mon$hydroperiod_hmon <- nrow(data.dep[data.dep$depth>0,])/60
# calculate montly hydroperiod (% month)
table.mon$hydroperiod_mperc <- 100*table.mon$hydroperiod/(31*24)
# bind data
TABLE.MON <- rbind(TABLE.MON,table.mon)
DATA.DEP <- rbind(DATA.DEP,data.dep)
## CALCULATE DAILY HYDROPERIOD
for(j in 1:31){
# select j day
subset <- data.dep[data.dep$day == j,]
# create table
table.day <- data.poi[i,]
table.day$day <- j
# calculate maximum and minimum depths
table.day$depth_max <- max(subset$depth)
table.day$depth_min <- min(subset$depth)
# calcualte hydroperiod
table.day$hydroperiod_hday <- nrow(subset[subset$depth>0,])/60
# bind data
TABLE.DAY <- rbind(TABLE.DAY,table.day)
# plot water depth over a day (BLACK)
plot(x = subset$timemin/60,
y = subset$depth,
type = "l",
xlab = "Time (hours of month)",
ylab = "Water depth (m)",
ylim = c(-3, 3),
pch = 19, col = "black")
# add tide height from interpolations
lines(x = subset$timemin/60, y = subset$height, col = "grey")
# add low/high tide from charts (GREY CIRCLES)
# points(x=data.hei$timemin/60,y=data.hei$height,col="grey")
# add text point, day, hydroperiod
mtext(paste0(table.day$point, " day ", j, " - E = ", round(table.day$hydroperiod_hday, 1)),
side = 3, line = 0, adj = 0, cex = 0.8)
# add elevation line (GREEN LINE)
abline(a = table.day$elevation, b = 0, col = "green", lwd = 1)
}
par(mfrow = c(3, 3), pty = "s", las = 1, oma = c(1, 1, 1, 1), mar = c(5, 5, 3, 1))
## CALCULATE SAMPLING DAYS HYDROPERIOD
# select days = c(28,29,30)
subset <- data.dep[data.dep$day %in% c(28, 29, 30),]
# create table
table.sam <- data.poi[i,]
# calculate maximum and minimum depths
table.sam$depth_max <- max(subset$depth)
table.sam$depth_min <- min(subset$depth)
# calcualte hydroperiod
table.sam$hydroperiod_hday <- nrow(subset[subset$depth>0,])/60
# bind data
TABLE.SAM <- rbind(TABLE.SAM,table.sam)
}
# rename
table.mon <- TABLE.MON # table with montly hydroperiod at each point (n = 40)
table.day <- TABLE.DAY # table with daily hydroperiod at each point and day (n = 40*31)
table.sam <- TABLE.SAM # table with daily hydroperiod at each point from 28 to 30 March (n = 40*3)
data.dep <- DATA.DEP # dataset with depth at each point and minute over a month (n = 40*31*1440)
# manage data
data.dep$datetime <- as.POSIXct(data.dep$datetime, format = "%Y-%m-%d %H:%M:%S", tz = "UTC")
data.dep$datetime[[1]]
# clean
rm(i, j, points, data.poi, data.hei, subset, TABLE.MON, TABLE.DAY, TABLE.SAM, DATA.DEP)
dev.off()
dev.off()
# save data in csv (long-term data storage)
write.csv(table.mon, "./outputs/table_hydroperiod_monthly.csv")
write.csv(table.day, "./outputs/table_hydroperiod_daily.csv")
write.csv(table.sam, "./outputs/table_hydroperiod_sampling_days.csv")
write.csv(data.dep, "./outputs/data_depths_minute.csv")
#### ------------------------------ VALIDATION ------------------------------####
#### COMPARISON OBSERVED vs MODELLED ####
## DATA OBSERVED
# load data
data.obs <- read.csv("./inputs/data_depths.csv")
# check structure
str(data.obs)
# date settings (must be UTC)
data.obs$datetime <- as.POSIXct(data.obs$datetime, tz = "UTC")
# know start/end time
data.obs$datetime[1]
data.obs$datetime[nrow(data.obs)]
# select 24 hours ("2017-03-30 09:30:00 UTC" to "2017-03-31 09:30:00 UTC")
data.obs <- data.obs[data.obs$datetime<as.POSIXct("2017-03-31 09:31:00", tz = "UTC"),]
# check start/end time
data.obs$datetime[1]
data.obs$datetime[nrow(data.obs)]
## DATA MODEL
# select points Zn3_0 and Zn3_30 in data.dep (points at which the pressure loggers were installed on the 2017-03-30)
data.mod <- data.dep[data.dep$point=="Zn3_0" | data.dep$point=="Zn3_30",]
# select 24 hours ("2017-03-30 09:30:00 UTC" to "2017-03-31 09:30:00 UTC")
data.mod <- data.mod[data.mod$datetime>=as.POSIXct("2017-03-30 09:30:00", tz = "UTC"),]
data.mod <- data.mod[data.mod$datetime<=as.POSIXct("2017-03-31 09:30:00", tz = "UTC"),]
# check start/end time
data.mod$datetime[1]
data.mod$datetime[nrow(data.mod)]
# check
ggplot(data.mod,aes(x = datetime,y = depth)) +
geom_line() +
facet_wrap(~ point)
ggplot(data.obs,aes(x = datetime, y = depth)) +
geom_line() +
facet_wrap(~ point)
## COMPARE DATA
# define subset mod to merge
subset.mod <- data.mod[,c("point", "datetime", "depth")]
subset.mod$set <- "model"
subset.obs <- data.obs[,c("point", "datetime", "depth")]
subset.obs$set <- "observed"
data.com <- rbind(subset.obs, subset.mod)
rm(subset.mod, subset.obs)
# plot comparison depths modelled vs observed
plot <- ggplot(data.com,aes(x = datetime, y = depth, colour = set)) +
geom_line() +
facet_wrap(~ point) +
default +
theme(axis.text.x = element_text(colour = "black", size = 10, angle = 90))
plot
pdf(file="~/OneDrive - Universidade do Algarve/Trabajo/STUDIES/wip/elevation/wordir/model/outputs/plot_validation.pdf",
width=7,height=4)
grid.arrange(plot,top="")
dev.off()
#### COMPARISON HYDROPERIOD ####
# set loop to calculate daily hydroperiod (hours day-1)
data.com$set_point <- with(data.com,paste0(set, "_", point))
set_points <- unique(data.com$set_point)
table.com <- data.frame()
for(i in 1:length(set_points)){
# select data for a set_point
data <- data.com[data.com$set_point==set_points[i],]
# create table
table <- data.frame(set_point = set_points[i],
set = unique(data$set),
point = unique(data$point),
hydroperiod_hday = nrow(data[data$depth>0,])/60)
table.com <- rbind(table.com, table)
}
table.com
# clean
rm(i, set_points, data, table, data.obs, data.mod)
# plot
ggplot(table.com, aes(x = point, y = hydroperiod_hday, fill = set)) +
geom_col(position = position_dodge())
# clean
rm(table.com)
dev.off()
#### END ####
dev.off()
rm(list=ls())
#----------------------------
# Script to get AUC for MAX score from ROCS output chem vs chem table
# ---------------------------
# Supporting information: Abdulhameed et al. Correction based on shuffling (CBOS)
# improves 3D ligand-based virtual screening
# For any questions, please email: mabdulhameed@bhsai.org
#-----------------------------
# setwd("./Data_Scripts_to_reproduce_AUC_14targets/AHR")
nTARGET <- 5
TARG <- 'AHR'
#**************************
# Load Train/test split for TARG
#**************************
TAR_ACTANNOT <- paste(TARG,"_agANT_combined_ACTIVITY_forROCplot.txt",sep="")
#######################
FILE_3DCBOS_RDATA <- paste(TARG,"_TRAIN_TEST_split.RData",sep="")
load(FILE_3DCBOS_RDATA)
#######################
#######################
DDqMAT <- TARG3d_DD[row.names(TARG3d_DD) %in% TARG_TR1$CID,colnames(TARG3d_DD) %in% TARG_Q$CID]
tDDqMAT <- t(DDqMAT)
#
source("./get_max_w3_fn.R")
MAXw3_out1 <- get_max_w3(tDDqMAT,TARG)
RUN_APPD246 <- MAXw3_out1[[1]] # MAX score
RUN_APPD246_w3 <- MAXw3_out1[[2]]
#############
# get ROC AUC
PREP_4ROCplot_1 <- function(TAR_ACTANNOT,TAR_APPD246,TAR_uniprot){
TARannot <- read.delim(TAR_ACTANNOT, header=TRUE,sep="\t",stringsAsFactors = FALSE)
rownames(TARannot) <- paste("X",TARannot$CID,sep="")
TARannot <- TARannot[,-1,drop=FALSE]
TAR_val1 <- merge(TARannot,TAR_APPD246,by="row.names")
# TAR_val1_try1 <- TAR_val1[,c(1:2,which(colnames(TAR_val1)==TAR_uniprot))]
TAR_val1_try1 <- TAR_val1
row.names(TAR_val1_try1) <- TAR_val1$Row.names
TAR_val1_try1 <- TAR_val1_try1[,2:3]
TAR_val1_try1$ACT_AN <- sub("Inactive",0,TAR_val1_try1$ACT_AN)
TAR_val1_try1$ACT_AN <- sub("Active",1,TAR_val1_try1$ACT_AN)
TAR_val1_try1 <- TAR_val1_try1[order(-TAR_val1_try1[,2]),]
TAR_val1_try1$ACT_AN <- as.numeric(TAR_val1_try1$ACT_AN)
return(TAR_val1_try1)
}
############
TAR_APPD246 <- RUN_APPD246
TAR_val1_try1 <- PREP_4ROCplot_1(TAR_ACTANNOT,RUN_APPD246) # TAR_uniprot can be omitted: it is only used by the commented-out line
#install.packages('ROCR')
library(ROCR)
pred <- prediction(TAR_val1_try1[,2],TAR_val1_try1$ACT_AN)
pred_test <- performance(pred, "tpr", "fpr")
auc_ROCR <- performance(pred, measure = "auc")
MAXROCAUC <- round(auc_ROCR@y.values[[1]],2)
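# Cross-check (illustrative): the same AUC can be recovered from the ROC
# coordinates returned by performance() via the trapezoidal rule.
fpr <- pred_test@x.values[[1]]
tpr <- pred_test@y.values[[1]]
auc_trap <- sum(diff(fpr) * (head(tpr, -1) + tail(tpr, -1))/2)
round(auc_trap, 2) # should agree with MAXROCAUC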
######################
# EF10%
library('enrichvs')
MAX_e10 <- round(enrichment_factor(TAR_val1_try1[,2],TAR_val1_try1$ACT_AN,top=0.1,decreasing=TRUE),2)
######################
nACTIV <- nrow(TAR_val1_try1[TAR_val1_try1$ACT_AN==1,]) # number of actives in query/test set
nINACTIV <- nrow(TAR_val1_try1[TAR_val1_try1$ACT_AN==0,]) # number of inactives in query/test set
data.frame(TARG,MAXROCAUC,MAX_e10,nACTIV,nINACTIV)
###############
rm(list=ls())
| /Data_Scripts_to_reproduce_AUC_14targets/AHR/AHR_Script_for_MAXscore_AUC.R | no_license | BHSAI/CBOS | R | false | false | 2,642 | r |
#' Mallick's Custom Plotly Template
#'
#' @param data data.table. Data set to be plotted
#' @param x Character. Formula of x value names. "~x"
#' @param y Character. Formula of y value names. "~y".
#' @param title Character. Title of chart
#' @param xlab Character. Title of x-axis
#' @param ylab Character. Title of y-axis
#' @param yrange 2-element vector. Range of y-axis
#' @param ydtick Integer. Tick interval for the y-axis.
#' @param source Character. Sources of data to be plotted
#' @param yname Character. Name of y series
#' @param height Integer. Height of plot
#' @param width Integer. Width of plot
#' @param linewidth Integer. Width of line.
#'
#' @return Plotly chart
#' @export
#'
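#' @examples
#' \dontrun{
#' # Illustrative call only; the data and argument values are hypothetical.
#' library(plotly)
#' hossainPlot(data = df, x = "~year", y = "~sales", yname = "Online sales",
#' title = "Online sales over time", xlab = "Year", ylab = "Sales ($)",
#' yrange = c(0, 100), ydtick = 20, source = "Author's calculations")
#' }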
hossainPlot <- function(data, x, y, yname, title, xlab, ylab, yrange, ydtick, source,
height = 800, width = 1200, linewidth = 10) {
chart <- plot_ly(data = data, x = formula(x), height = height, width = width) %>%
add_lines(y = formula(y), name = yname, line = list(width = linewidth)) %>%
layout(title = title, titlefont = list(size = 35),
xaxis = list(title = xlab, titlefont = list(size = 30),
tickfont = list(size = 25), nticks = 10),
yaxis = list(title = ylab, range = yrange, dtick = ydtick,
titlefont = list(size = 30), tickfont = list(size = 25)),
# Pick positioning of legend so it doesn't overlap the chart
legend = list(yanchor = "top", y = 1.02, x = 0, font = list(size = 20)),
# Adjust margins so things look nice
margin = list(l = 140, r = 140, t = 70, b = 150, pad = 10),
annotations = list(text = paste0("Source: ", source), font = list(size = 20),
showarrow = FALSE, align = "left", valign = "bottom",
xref = "paper", x = -0.03, yref = "paper", y = -0.25))
return(chart)
}
| /code/0_data/hossainPlot.R | no_license | emallickhossain/OnlineShoppingSalesTax | R | false | false | 1,879 | r |
rm(list=ls())
openg(5,2.75)
par(mfrow=c(1,2))
#increasing variance
a=1; b=1; r=20;c=5;
x=matrix(ncol=c,nrow=r)
y=matrix(ncol=c,nrow=r)
z=matrix(ncol=c,nrow=r)
for(j in 1:c) x[,j]=seq(j,j,length=r)
for(j in 1:c) z[,j]=seq(-2.5,2.5,length=r)
for(j in 1:c) y[,j]=a+rnorm(z[,j],b*x[1,j],x[1,j]*.75) # rnorm's first argument is a vector here, so its length (r) sets the sample size
plot(x[,1],y[,1],xlim=c(0,5),ylim=c(0,10),
xlab='x',ylab='Y')
for(j in 2:c) points(x[,j],y[,j])
abline(a,b)
#not independent
a=1; b=1
r=20;c=5;
x=matrix(ncol=c,nrow=r)
y=matrix(ncol=c,nrow=r)
z=matrix(ncol=c,nrow=r)
t=sin(seq(1,5,length=5)*2*pi/5)
for(j in 1:c) x[,j]=seq(j,j,length=r)
for(j in 1:c) z[,j]=seq(-2.5,2.5,length=r)
for(j in 1:c) y[,j]=a+t[j]+rnorm(z[,j],b*x[1,j],.75)
plot(x[,1],y[,1],xlim=c(0,5),ylim=c(0,10),
xlab='x',ylab='')
for(j in 2:c) points(x[,j],y[,j])
abline(a,b)
saveg('lm-assumptions-violated',5,2.75)
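# The two panels illustrate violations of the linear-model assumptions:
# left, error variance grows with x; right, errors share a sinusoidal
# dependence within columns. A quick numeric check (illustrative):
apply(y, 2, sd) # spread per x-column for the second panel's data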
| /scripts/ch14/lm-assumptions-violated.r | no_license | StefanoCiotti/MyProgectsFirst | R | false | false | 833 | r |
x<-c()
xsqr<-c()
for (i in 1:25) {
x[i]<-i
xsqr<-c(xsqr,i*i)
}
for (i in 1:25) {
cat(x[i],xsqr[i],"\n")
}
plot(xsqr~x)
png("plot.png")
plot(xsqr~x)
dev.off()
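# The loops above can be vectorised (sketch; x2/xsqr2 are hypothetical names):
x2 <- 1:25
xsqr2 <- x2^2
print(cbind(x2, xsqr2))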
| /vector.R | no_license | jtse9/chem160module13 | R | false | false | 178 | r |
#' sylcount
#'
#' @description
#' A vectorized syllable counter for English language text.
#'
#' Because of the R memory allocations required, the operation is not thread
#' safe. It is evaluated in serial.
#'
#' @details
#' The maximum supported word length is 64 characters. For any token having more
#' than 64 characters, the returned syllable count will be \code{NA}.
#'
#' The syllable counter uses a hash table of known, mostly "irregular" (with
#' respect to syllable counting) words. If the word is not known to us
#' (i.e., not in the hash table), then we try to "approximate" the number
#' of syllables by counting the number of non-consecutive vowels in a word.
#'
#' So for example, using this scheme, each of "to", "too", and "tool" would be
#' classified as having one syllable. However, "tune" would be classified as
#' having 2. Fortunately, "tune" is in our table, listed as having 1 syllable.
#'
#' The hash table uses a perfect hash generated by gperf.
#'
#' @param s
#' A character vector (vector of strings).
#' @param counts.only
#' Should only counts be returned, or words + counts?
#'
#' @return
#' A list of dataframes.
#'
#' @examples
#' library(sylcount)
#' a <- "I am the very model of a modern major general."
#' b <- "I have information vegetable, animal, and mineral."
#'
#' sylcount(c(a, b))
#' sylcount(c(a, b), counts.only=FALSE)
#'
#' @useDynLib sylcount R_sylcount
#' @seealso \code{\link{readability}}
#' @export
sylcount <- function(s, counts.only=TRUE)
{
.Call(R_sylcount, s, counts.only)
}
| /R/R/sylcount.r | permissive | vijaynarne/sylcount | R | false | false | 1,549 | r |
## LOAD PACKAGES ##
library(tidyverse)
library(ggrepel)
library(magrittr)
library(reshape2)
library(ggpubr)
library(ggforce)
library(ggsci)
library(data.table)
library(DT)
## DATASET FILES ##
source_dir <- '~/Documents/USC/USC_docs/ml/surgical-training-project/data/carotid_outcomes/'
plot_dir <- file.path(source_dir, 'plots')
final_figure_dir <- file.path(source_dir, 'figures')
file_to_use <- file.path(source_dir, 'data', 'UPDATED_Raw Data.xlsx - Sheet1.tsv')
emory_data <- file.path(source_dir, 'data', 'Emory ETN-fixed.txt')
## LOAD DATASETS ##
raw_data <- read_tsv(file_to_use) %>%
mutate(SurveyID=ifelse(is.na(SurveyID), paste0('USC', row.names(.)), SurveyID)) %>%
mutate(Group=case_when(
(!is.na(Attyears) & (Attyears >= 1)) ~ 'Attending',
(!is.na(Resyears) & (Resyears >= 1)) ~ 'Trainee',
grepl('USC', SurveyID) ~ 'Trainee',
TRUE ~ 'None'
)) %>%
mutate(Resyears=ifelse(grepl('USC', SurveyID), NA, Resyears)) %>%
# For now, filter out those that are not attendings or residents, but will want to come back to this later
filter(Group != 'None') %>%
dplyr::rename(`trial 2 ebl`=`trial 2 ebl`) %>%
mutate(Source='USC')
# We want to add the Emory data to the raw data file we are using
emory_processed <- read_tsv(emory_data) %>%
mutate(SurveyID=paste0('E', row.names(.))) %>%
mutate(Group=case_when(
(grepl('fellow', Year)) ~ 'Trainee',
(grepl('pgy[0-9]+', Year)) ~ 'Trainee',
(grepl('[0-9]+', Year)) ~ 'Attending',
TRUE ~ 'None'
)) %>% filter(Group != 'None') %>%
mutate(
`Trial 1 Success`=ifelse(`tiral 1 in second` >= 300, 0, 1),
`Trial 2 Success`=ifelse(`Trial 2 time` >= 300, 0, 1)
) %>%
dplyr::select(
SurveyID,
OtherSpec=Speciality,
Totyears,
`Trial 1 TTH`=`tiral 1 in second`,
`trial 1 ebl`=`Trial 1 EBL`,
`Trial 1 Success`,
`Trial 2 TTH`=`Trial 2 time`,
`trial 2 ebl`=`Trial 2 EBL`,
`Trial 2 Success`,
Group
) %>% mutate(Source='Emory') %>%
# Filter incomplete data
filter(`Trial 2 TTH`!=0, !is.na(`Trial 2 TTH`))
raw_data %<>% plyr::rbind.fill(., emory_processed)
# Create a specialty field
raw_data %<>% mutate(Specialty=case_when(
NeurosurgerySpec == 'Neurosurgery' ~ 'Neurosurgery',
ENTSpec == 'ENT / OTO-HNS' ~ 'Otolaryngology',
OtherSpec == 'ent' ~ 'Otolaryngology',
OtherSpec == 'nsg' ~ 'Neurosurgery',
OtherSpec == 'GS' ~ 'General Surgery',
grepl('USC', SurveyID) ~ 'Neurosurgery',
TRUE ~ 'Unspecified'
))
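# Quick sanity check (illustrative): cross-tabulate group by data source
print(table(raw_data$Group, raw_data$Source))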
# We include even those that did three trials for the summary of participants
complete_data <- raw_data
complete_data %>%
write.table(., file = file.path(source_dir, 'complete_data_set.tsv'), sep='\t', quote=F, row.names=F)
# We should remove the people who did three trials for now, as they were not trained between trial 1 and 2 but trained between trials 2 and 3, so we will need to think about how to compare them to everyone else
raw_data %<>% filter(is.na(`Trial 3 Start Time (hhmmss)`))
| /analysis/carotid_model/preprocess_data.R | no_license | guillaumekugener/surgical-training-project | R | false | false | 2,965 | r |
rp.sample <- function(mu = 0, sigma = 1, n = 25, panel.plot = TRUE,
rp.sample <- function(mu = 0, sigma = 1, n = 25, panel.plot = TRUE,
hscale = NA, vscale = hscale) {
if (is.na(hscale)) {
if (.Platform$OS.type == "unix") hscale <- 1
else hscale <- 1.4
}
if (is.na(vscale))
vscale <- hscale
sample.draw <- function(panel) {
mu <- panel$pars["mu"]
sigma <- panel$pars["sigma"]
n <- panel$pars["n"]
col.pars <- "green3"
with(panel, {
par(mfrow = c(2, 1), mar = c(2, 1.5, 1, 1) + 0.1, mgp = c(1, 0.2, 0), tcl = -0.2,
yaxs = "i")
hst <- hist(y, plot = FALSE,
breaks = seq(min(y, mu - 3 * sigma), max(y, mu + 3 * sigma), length = 20))
plot(mu + c(-3, 3) * sigma, c(0, 1 / sigma),
type = "n", axes = FALSE, xlab = "Data", ylab = "")
usr <- par("usr")
rect(usr[1], usr[3], usr[2], usr[4], col = grey(0.9), border = NA)
grid(col = "white", lty = 1)
axis(1, lwd = 0, lwd.ticks = 2, col = grey(0.6), col.ticks = grey(0.6),
col.axis = grey(0.6), cex.axis = 0.8)
# axis(2, lwd = 0, lwd.ticks = 2, col = grey(0.6), col.ticks = grey(0.9),
# col.axis = grey(0.6), cex.axis = 0.8)
if (display.sample["data"]) {
hist(y, probability = TRUE, add = TRUE,
col = grey(0.5), border = grey(0.9),
breaks = seq(min(y, mu - 3 * sigma), max(y, mu + 3 * sigma), length = 20),
axes = FALSE, ylab = "", main = "")
ind <- which(hst$density > usr[4])
if (length(ind) > 0)
segments(hst$breaks[ind], usr[4], hst$breaks[ind + 1], usr[4], col = "red", lwd = 3)
ind <- any(hst$density > 0 & hst$breaks[-1] < usr[1])
if (ind) segments(usr[1], 0, usr[1], usr[4], col = "red", lwd = 3)
ind <- any(hst$density > 0 & hst$breaks[-length(hst$breaks)] > usr[2])
if (ind) segments(usr[2], 0, usr[2], usr[4], col = "red", lwd = 3)
}
if (display.sample["population"]) {
xgrid <- seq(usr[1], usr[2], length = 100)
lines(xgrid, dnorm(xgrid, mu, sigma), col = "blue", lwd = 2)
}
if (display.sample["mean"])
lines(rep(mu, 2), c(0, 0.9 / sigma), col = col.pars, lwd = 2)
if (display.sample["+/- 2 st.dev."])
arrows(mu - 2 * sigma, 0.8 / sigma,
mu + 2 * sigma, 0.8 / sigma,
length = 0.05, code = 3, col = col.pars, lwd = 2)
if (any(display.mean)) {
hst <- hist(mns, plot = FALSE,
breaks = seq(mu - 3 * sigma, mu + 3 * sigma, length = 50))
plot(mu + c(-3, 3) * sigma, c(0, sqrt(n) / sigma),
type = "n", axes = FALSE, xlab = "Sample mean", ylab = "")
usr <- par("usr")
rect(usr[1], usr[3], usr[2], usr[4], col = grey(0.9), border = NA)
grid(col = "white", lty = 1)
axis(1, lwd = 0, lwd.ticks = 2, col = grey(0.6), col.ticks = grey(0.6),
col.axis = grey(0.6), cex.axis = 0.8)
if (any(display.mean["sample mean"])) {
hist(mns, probability = TRUE, add = TRUE,
breaks = seq(mu - 3 * sigma, mu + 3 * sigma, length = 50),
axes = FALSE, col = grey(0.5), border = grey(0.9), ylab = "", main = "")
}
if (display.sample["mean"])
lines(rep(mu, 2), c(0, 0.9 * usr[4]), col = col.pars, lwd = 2)
if (display.mean["+/- 2 se"])
arrows(mu - 2 * sigma / sqrt(n), 0.8 * usr[4],
mu + 2 * sigma / sqrt(n), 0.8 * usr[4],
length = 0.05, code = 3, col = col.pars, lwd = 2)
if (display.mean["distribution"]) {
xgrid <- seq(usr[1], usr[2], length = 200)
lines(xgrid, dnorm(xgrid, mu, sigma / sqrt(n)), col = "blue", lwd = 2)
}
}
par(mfrow = c(1, 1))
})
panel
}
sample.redraw <- function(panel) {
rp.tkrreplot(panel, plot)
panel
}
sample.changepars <- function(panel) {
panel$mns <- NULL
rp.control.put(panel$panelname, panel)
rp.do(panel, sample.new)
panel
}
sample.new <- function(panel) {
panel$y <- rnorm(panel$pars["n"], panel$pars["mu"], panel$pars["sigma"])
if (!panel$display.mean["accumulate"]) panel$mns <- NULL
panel$mns <- c(mean(panel$y), panel$mns)
rp.control.put(panel$panelname, panel)
if (panel$pplot) rp.tkrreplot(panel, plot) else sample.draw(panel)
panel
}
display.sample <- c(TRUE, rep(FALSE, 3))
display.mean <- rep(FALSE, 4)
names(display.sample) <- c("data", "population", "mean", "+/- 2 st.dev.")
names(display.mean) <- c("sample mean", "accumulate", "+/- 2 se", "distribution")
pars <- c(mu, sigma, n)
names(pars) <- c("mu", "sigma", "n")
y <- rnorm(n, mu, sigma)
panel <- rp.control(pars = pars, y = y, mns = mean(y),
display.sample = display.sample, display.mean = display.mean,
pplot = panel.plot)
if (panel.plot) {
rp.tkrplot(panel, plot, sample.draw, pos = "right", hscale = hscale, vscale = vscale)
action.fn <- sample.redraw
}
else {
action.fn <- sample.draw
rp.do(panel, action.fn)
}
rp.textentry(panel, pars, sample.changepars, width = 10,
c("mean", "st.dev.", "sample size"), c("mu", "sigma", "n"))
rp.button(panel, sample.new, "Sample")
rp.checkbox(panel, display.sample, action.fn, names(display.sample), title = "Sample")
rp.checkbox(panel, display.mean, action.fn, names(display.mean), title = "Sample mean")
}
| /R/rp-sample.r | no_license | cran/rpanel | R | false | false | 5,925 | r |
#Plot aa distribution using ggplot2
#Plot_Orthogroup.R pub_og_id|all [aa]
library(ggplot2)
library(dplyr)
library(gridExtra)
library(grid)
Taxa=c("Actinopterygii","Sauropsida","Mammalia")
aa = c("A","C","D","E","F","G","H","I","K","L","M","N","R","P","Q","S","T","Y","W","V")
args = commandArgs(trailingOnly=TRUE)
args = "100129at7742" # hard-coded example OG id; note this overrides the command-line arguments above
df<-read.csv("AA_Comp.csv", header=TRUE)
og<-args[1]
if (length(args)==2){
aa<-args[2]
}
if (args[1]!="all"){
df <- filter(df, pub_og_id==og)
}
df[,aa]<-df[,aa]/df$width*100 #percentage
x=df[['Classification']]
pdf(file = paste0(args,"Boxplot.pdf"), width = 15, height = 10) # defaults to 7 x 7 inches
#geom_violin(fill="gray")+stat_summary(fun.data="mean_cl_boot", colour="red", size=1)+theme_classic()
if (length(aa)==1){ #single plot
qplot(x,df[[aa]],geom="blank")+scale_x_discrete(name="",limits=Taxa)+ylab(paste("Percentage of",aa))+
geom_boxplot(outlier.shape = NA) + geom_jitter(width = 0.2)
} else { #multi plot
pltList<-lapply(aa,function(i){qplot(x,df[[i]],geom="blank")+scale_x_discrete(name="",limits=Taxa)+ylab(paste("Percentage of",i))+
#geom_boxplot(outlier.shape = NA) + geom_jitter(width = 0.2)
geom_boxplot(outlier.shape = NA)+ ylim(0,15)})
do.call(grid.arrange, c(pltList, ncol=5))
}
warnings()
dev.off()
| /Utilities/Boxplot.R | no_license | cianissimo/AA_Comp | R | false | false | 1,292 | r |
\name{getJagsInits,AnnualLossDevModelInput-method}
\alias{getJagsInits,AnnualLossDevModelInput-method}
\title{A method to collect all the needed initial values common to both the standard model and the break model.}
\description{A method to collect all the needed initial values common to both the standard model and the break model. Intended for internal use only.}
\details{There are currently two types of \code{AnnualAggLossDevModel}s (break and standard). These models have many features in common, and code to create initial values for these common features is placed in this method.
The derived classes \code{StandardAnnualAggLossDevModelInput} and \code{BreakAnnualAggLossDevModelInput} call this method via \code{NextMethod()} and then return a new function.}
\value{A named list of the specific model elements. See details for more information.}
\docType{methods}
\arguments{\item{object}{An object of type \code{AnnualAggLossDevModelInput} from which to collect the needed initial values for the model.}}
| /man/getJagsInits-comma-AnnualLossDevModelInput-dash-method.Rd | no_license | cran/BALD | R | false | false | 1,026 | rd |
# # pROC: Tools Receiver operating characteristic (ROC curves) with
# # (partial) area under the curve, confidence intervals and comparison.
# # Copyright (C) 2010-2014 Xavier Robin, Alexandre Hainard, Natacha Turck,
# # Natalia Tiberti, Frédérique Lisacek, Jean-Charles Sanchez
# # and Markus Müller
# #
# # This program is free software: you can redistribute it and/or modify
# # it under the terms of the GNU General Public License as published by
# # the Free Software Foundation, either version 3 of the License, or
# # (at your option) any later version.
# #
# # This program is distributed in the hope that it will be useful,
# # but WITHOUT ANY WARRANTY; without even the implied warranty of
# # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# # GNU General Public License for more details.
# #
# # You should have received a copy of the GNU General Public License
# # along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# ci.multiclass.auc <- function(multiclass.auc, ...) {
# stop("CI of a multiclass AUC not implemented")
# ci.multiclass.roc(attr(multiclass.auc, "roc"), ...)
# }
#
# ci.multiclass.roc <- function(multiclass.roc,
# conf.level = 0.95,
# boot.n = 2000,
# boot.stratified = TRUE,
# reuse.auc=TRUE,
# progress = getOption("pROCProgress")$name,
# parallel = FALSE,
# ...
# ) {
# stop("ci of a multiclass ROC curve not implemented")
# if (conf.level > 1 | conf.level < 0)
# stop("conf.level must be within the interval [0,1].")
#
# # We need an auc
# if (is.null(multiclass.roc$auc) | !reuse.auc)
# multiclass.roc$auc <- auc(multiclass.roc, ...)
#
# # do all the computations in fraction, re-transform in percent later if necessary
# percent <- multiclass.roc$percent
# oldauc <- multiclass.roc$auc
# if (percent) {
# multiclass.roc <- roc.utils.unpercent(multiclass.roc)
# }
#
# ci <- ci.multiclass.auc.bootstrap(multiclass.roc, conf.level, boot.n, boot.stratified, progress, parallel, ...)
#
# if (percent) {
# ci <- ci * 100
# }
# attr(ci, "conf.level") <- conf.level
# attr(ci, "boot.n") <- boot.n
# attr(ci, "boot.stratified") <- boot.stratified
# attr(ci, "multiclass.auc") <- oldauc
# class(ci) <- "ci.multiclass.auc"
# return(ci)
# }
| /R/ci.multiclass.auc.R | no_license | HannahJohns/pROC | R | false | false | 2,414 | r |
directory <- "\\Users\\kcaj2\\Desktop\\Coursera\\Exploratory Data Analysis\\Assignment 2"
setwd(directory)
##below reads the data
NEI <- readRDS("summarySCC_PM25.rds")
NEI1999 <- subset(NEI, NEI$year == 1999)
NEI2002 <- subset(NEI, NEI$year == 2002)
NEI2005 <- subset(NEI, NEI$year == 2005)
NEI2008 <- subset(NEI, NEI$year == 2008)
NEI1999$Emissions <- as.numeric(NEI1999$Emissions)
NEI2002$Emissions <- as.numeric(NEI2002$Emissions)
NEI2005$Emissions <- as.numeric(NEI2005$Emissions)
NEI2008$Emissions <- as.numeric(NEI2008$Emissions)
histdata <- c(
sum(NEI1999$Emissions, na.rm = TRUE)/1000,
sum(NEI2002$Emissions, na.rm = TRUE)/1000,
sum(NEI2005$Emissions, na.rm = TRUE)/1000,
sum(NEI2008$Emissions, na.rm = TRUE)/1000)
png(filename = "plot1.png")
barplot(histdata,
names.arg = c("1999","2002","2005","2008"),
xlab = "Year", ylab = "PM2.5 Emissions (Kilotons)",
main = "Total PM2.5 Emissions By Year")
dev.off()
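The same per-year totals can be computed in a single pass with `tapply` instead of four manual subsets; a minimal sketch (not part of the original script), assuming `NEI` is loaded as above:

```r
## Sketch: sum Emissions by year in one call, converting tons to
## kilotons, then plot as before.
totals <- tapply(as.numeric(NEI$Emissions), NEI$year, sum, na.rm = TRUE) / 1000
barplot(totals,
        xlab = "Year", ylab = "PM2.5 Emissions (Kilotons)",
        main = "Total PM2.5 Emissions By Year")
```

`tapply` keeps the year labels as names of `totals`, so `barplot` picks up the axis labels without an explicit `names.arg`.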
| /Plot1.R | no_license | jrkramer926/Exploratory-Data-Analysis_Assignment-2 | R | false | false | 995 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/analytics.R
\name{decafNAVComparison}
\alias{decafNAVComparison}
\title{A function to compare NAVs between two decaf systems.}
\usage{
decafNAVComparison(
accounts,
type,
ccy,
date,
sSession,
tSession,
tDeployment,
sDeployment,
charLimit = 100
)
}
\arguments{
\item{accounts}{The account mapping list.}
\item{type}{The type of container. Either 'accounts' or 'portfolios'.}
\item{ccy}{The currency to be valued.}
\item{date}{The date of the consolidations.}
\item{sSession}{The source session.}
\item{tSession}{The target session.}
\item{tDeployment}{The name of the target deployment.}
\item{sDeployment}{The name of the source deployment.}
\item{charLimit}{The character limit for names.}
}
\value{
A data frame with the NAV comparisons.
}
\description{
A function to compare NAVs between two decaf systems.
}
| /man/decafNAVComparison.Rd | no_license | beatnaut/remaputils | R | false | true | 914 | rd |
#####################################################################
# NAME: Chris Bilder #
# DATE: 3-18-11 #
# PURPOSE: Simulate data from a multinomial distribution #
# #
# NOTES: #
#####################################################################
#####################################################################
# Simulate one set of observations
set.seed(2195) # Set a seed to be able to reproduce the sample
pi.j<-c(0.25, 0.35, 0.2, 0.1, 0.1)
n.j<-rmultinom(n = 1, size = 1000, prob = pi.j)
data.frame(n.j, pihat.j = n.j/1000, pi.j)
# m sets of n = 1000 observations
set.seed(9182)
n.j<-rmultinom(n = 5, size = 1000, prob = pi.j)
n.j
n.j/1000
#####################################################################
# Simulate m sets of observations
set.seed(7812)
save2<-rmultinom(n = 1000, size = 1, prob = c(0.25, 0.35, 0.2, 0.1, 0.1))
save2[1:5,1:3] # Each column is one set of observations from a n = 1 multinomial
rowMeans(save2)
#####################################################################
# Probability mass function evaluated
# Simple example with obvious result of 0.25 - P(n_1 = 1, n_2 = 0, n_3 = 0, n_4 = 0, n_5 = 0)
dmultinom(x = c(1,0,0,0,0), size = NULL, prob = c(0.25, 0.35, 0.2, 0.1, 0.1))
# Example of finding P(n_1 = 1, n_2 = 0, n_3 = 0, n_4 = 0, n_5 = 1)
dmultinom(x = c(1,0,0,0,1), size = NULL, prob = c(0.25, 0.35, 0.2, 0.1, 0.1))
#####################################################################
# Simulate n_ij for a 2x3 contingency table
# 1 multinomial distribution
# Probabilities entered by column for array()
pi.ij<-c(0.2, 0.3, 0.2, 0.1, 0.1, 0.1) # pi_ij
pi.table<-array(data = pi.ij, dim = c(2,3), dimnames = list(X = 1:2, Y = 1:3))
pi.table
set.seed(9812)
save<-rmultinom(n = 1, size = 1000, prob = pi.ij)
save
c.table1<-array(data = save, dim = c(2,3), dimnames = list(X = 1:2, Y = 1:3))
c.table1
c.table1/sum(c.table1)
# I multinomial distributions
pi.cond<-pi.table/rowSums(pi.table)
pi.cond # pi_j|i
set.seed(8111)
save1<-rmultinom(n = 1, size = 400, prob = pi.cond[1,])
save2<-rmultinom(n = 1, size = 600, prob = pi.cond[2,])
c.table2<-array(data = c(save1[1], save2[1], save1[2], save2[2], save1[3], save2[3]),
dim = c(2,3), dimnames = list(X = 1:2, Y = 1:3))
c.table2
rowSums(c.table2)
c.table2/rowSums(c.table2)
round(c.table1/rowSums(c.table1),4)
#####################################################################
# Simulate n_ij under independence for a 2x3 contingency table
# 1 multinomial under independence
pi.i<-rowSums(pi.table)
pi.j<-colSums(pi.table)
pi.ij.ind<-pi.i%o%pi.j # Quick way to find pi_i+ * pi_+j
pi.ij.ind
set.seed(9218)
save.ind<-rmultinom(n = 1, size = 1000, prob = pi.ij.ind)
save.ind
c.table1.ind<-array(data = save.ind, dim = c(2,3), dimnames = list(X = 1:2, Y = 1:3))
c.table1.ind/sum(c.table1.ind) # pi^_ij is similar to pi^_i+ * pi^_+j
# Using methods found later in the chapter
chisq.test(x = c.table1.ind, correct = FALSE) # Do not reject independence
# I multinomials under independence
set.seed(7718)
save1.ind<-rmultinom(n = 1, size = 400, prob = pi.j)
save2.ind<-rmultinom(n = 1, size = 600, prob = pi.j)
c.table2.ind<-array(data = c(save1.ind[1], save2.ind[1], save1.ind[2], save2.ind[2], save1.ind[3], save2.ind[3]),
dim = c(2,3), dimnames = list(X = 1:2, Y = 1:3))
c.table2.ind
rowSums(c.table2.ind)
c.table2.ind/rowSums(c.table2.ind) # pi^_j|1 is similar to pi^_j|2
chisq.test(x = c.table2.ind, correct = FALSE) # Do not reject independence
#
| /01.categorical_analysis/00.data/Chapter3/Multinomial.R | no_license | yerimlim/2018Spring | R | false | false | 3,890 | r |
library(ggplot2)
library(ggpubr)
library(gridExtra)
grey = "#d1d2d4"
fill_colour = "#2678b2"
line_colour = "#222222"
red_colour = "#d4292f"
get_n <- function(x, vjust=0){
data = data.frame(y = max(x)+vjust,label = paste("n = ", length(x), sep=""))
return(data)
}
normalised_density_plot = function(data, stops) {
# plot of normalised densities #
plot = ggplot() +
geom_histogram(aes(data$nd), bins = 30, col = line_colour, fill = fill_colour) +
geom_vline(xintercept = stops$nd, lty = 1, cex = 2, col = red_colour) +
labs(x = "Codon set (FE)", y = "Count") +
theme_minimal()
return(plot)
}
purine_boxplot = function(data, stops) {
# data$colour <- ifelse(data$purine == stops$purine, "a", "b")
plot = ggplot(data, aes(x = data$purine, y = data$nd)) +
geom_hline(yintercept = 0, lty=1) +
stat_boxplot(geom ='errorbar') +
geom_boxplot(fill=grey) +
scale_fill_manual(values=c(fill_colour, grey)) +
geom_hline(yintercept = stops$nd, lty=2) +
labs(x = "Codon set purine content", y="FE") +
annotate("text", x=min(as.numeric(data$purine)), hjust= -0.2, y= stops$nd + 0.05, label="TAA,TAG,TGA", cex=3) +
annotate("text", x=min(as.numeric(data$purine)), hjust= -0.5, y= stops$nd - 0.05, label=round(stops$nd,3), cex=3) +
theme(legend.position="none") +
scale_x_discrete(labels = c("0-0.1", "0.1-0.2", "0.2-0.3", "0.3-0.4", "0.4-0.5", "0.5-0.6", "0.6-0.7", "0.7-0.8", "0.8-0.9", "0.9-1", "1")) +
stat_summary(fun.data = get_n, fun.args = list("vjust" = 0.1), geom = "text", aes(group = "purine"), size = 3) +
theme_minimal() +
    theme(
      legend.position = "none"
    )
return(plot)
}
binom_test <- function(data, ycol = "density", group = NULL) {
stops = data[data$codons == "TAA_TAG_TGA",]
not_stops = data[data$codons != "TAA_TAG_TGA",]
if (!is.null(group)) {
not_stops = not_stops[not_stops[[group]] == stops[[group]],]
}
b = binom.test(nrow(not_stops[not_stops[[ycol]] <= stops[[ycol]], ]), nrow(not_stops), alternative = "l")
return(b)
}
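The helper above is defined but never called later in the script; a hypothetical invocation (using the `nd` and `gc` columns created further down) would look like:

```r
## Hypothetical usage of binom_test(): one-sided test of whether the
## stop-codon set's "nd" value sits in the lower tail of the GC-matched sets.
## Commented out because `file` is only read in below.
# b <- binom_test(file, ycol = "nd", group = "gc")
# b$p.value
```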
####
filepath = "clean_run/motif_tests/int3_densities.csv"
# filepath = "clean_run/motif_tests/int3_densities_strictly_no_stops.csv"
# filepath = "clean_run/motif_tests/int3_densities_non_overlapping.csv"
file = read.csv(filepath, head = T)
file$gc = substr(file$gc_content, 0, 3)
file$purine = substr(file$purine_content, 0, 3)
stops = file[file$codons == "TAA_TAG_TGA",]
gc_matched = file[file$gc == stops$gc & file$codons != "TAA_TAG_TGA",]
purine_matched = file[file$purine_content == stops$purine_content,]
nrow(purine_matched)
# gc matched
nrow(gc_matched)
nrow(gc_matched[gc_matched$nd > stops$nd,])
binom.test(nrow(gc_matched[gc_matched$nd > stops$nd,]), nrow(gc_matched), alternative = "g")
norm_density_gc_plot = normalised_density_plot(gc_matched, stops)
norm_density_gc_plot
# ggsave(plot = norm_density_gc_plot, "clean_run/plots/codon_sets_nd_gc_matched.pdf", width = 12, height= 5, plot = plot)
# purine matched
nrow(purine_matched)
nrow(purine_matched[purine_matched$nd > stops$nd,])
binom.test(nrow(purine_matched[purine_matched$nd > stops$nd,]), nrow(purine_matched), alternative = "g")
norm_density_purine_plot = normalised_density_plot(purine_matched, stops)
ggsave(plot = norm_density_purine_plot, "clean_run/plots/codon_sets_nd_purine_matched.pdf", width = 6, height= 5)
# match by gc and purine
gc_purine_matched = gc_matched[gc_matched$purine_content == stops$purine_content,]
binom.test(nrow(gc_purine_matched[gc_purine_matched$nd > stops$nd,]), nrow(gc_purine_matched), alternative = "g")
# plot
purine_plot = purine_boxplot(gc_matched, stops)
purine_plot
plot = ggarrange(
norm_density_gc_plot,
purine_plot,
labels = c("A", "B"),
nrow = 2,
ncol = 1
)
plot
ggsave(plot = plot, file = "clean_run/plots/codon_histogram_boxplot_fe.pdf", width = 8, height = 10)
ggsave(plot = plot, file = "clean_run/plots/codon_histogram_boxplot_fe.eps", width = 8, height = 10)
| /R_motif_codon_nd.R | no_license | la466/lincrna_stops_repo | R | false | false | 3,960 | r |
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/myrowcol_func.R
\name{colMin}
\alias{colMin}
\title{Form column minima}
\usage{
colMin(x, na.rm = TRUE, dims = 1L)
}
\arguments{
\item{x}{an array of two or more dimensions, containing numeric, complex, integer or logical values, or a numeric data frame.}
\item{na.rm}{logical. If \code{TRUE}, remove \code{NA}s; otherwise do not remove them (which may produce errors or warnings).}
\item{dims}{integer.}
}
\value{
a vector of minimum values from each column
}
\description{
returns the minimum value by column for numeric arrays.
}
\details{
This function is equivalent to the use of apply with FUN = min, similar to \code{colSums()} and related functions in \code{base}.
}
\seealso{
\link{colSums}, \link{max}, \link{min}, \link{apply}
}
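The \details section states that the function is equivalent to \code{apply} with \code{FUN = min}; a minimal sketch of such an implementation (an assumption for illustration, not the rmaxmin package's actual source) is:

```r
colMin <- function(x, na.rm = TRUE, dims = 1L) {
  ## Column-wise minima via apply(), mirroring the equivalence
  ## described in \details; sketch only, not the package source.
  apply(x, MARGIN = 2, FUN = min, na.rm = na.rm)
}

colMin(matrix(c(3, 1, 4, 1, 5, 9), nrow = 2))  # returns c(1, 1, 5)
```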
| /man/colMin.Rd | no_license | tomhopper/rmaxmin | R | false | false | 816 | rd |
\name{problem2}
\alias{problem2}
\docType{data}
\title{
Function for homework 2 problem 2
}
\description{
The data containing medal statistics by country.
}
\usage{data(problem2)}
\format{
A list-mode object with 45 slots, all containing the overall
Gold, Silver, and Bronze medal counts:
\describe{
\item{\code{Gold}}{Excellent!}
\item{\code{Silver}}{Great!}
\item{\code{Bronze}}{Good!}
}
}
\details{}
\source{\url{http://www.guardian.co.uk/news/datablog/2010/feb/11/winter-olympics-medals-by-country}}
\examples{
data(problem2)
}
\keyword{datasets}
| /my1rpack/man/problem2.Rd | no_license | jhuang63/jhuang6310242012 | R | false | false | 585 | rd |
# SUMMARY: Plot age gaps of partnerships formed for each relationship type
# INPUT: output/RelationshipStart.csv
# OUTPUT:
# 1. figs/RelationshipsFormed_Transitory.png
# 2. figs/RelationshipsFormed_Informal.png
# 3. figs/RelationshipsFormed_Marital.png
# ** Note that figures are only created when at least one relationship of the corresponding type is observed
rm( list=ls( all=TRUE ) )
graphics.off()
library(reshape)
library(ggplot2)
rel_names <- c('Transitory', 'Informal', 'Marital')
fig_dir = 'figs'
if( !file.exists(fig_dir) ) {
dir.create(fig_dir)
}
dat <- read.csv("output/RelationshipStart.csv", header=TRUE)
dat$Count = 1
ageBin = function(age) {
if( age < 40 )
return("<40")
return("40+")
}
dat$A_agebin = sapply(dat$A_age, ageBin)
dat$B_agebin = sapply(dat$B_age, ageBin)
dat$Rel_type = dat$Rel_type..0...transitory.1...informal.2...marital.
dat.m = melt(dat, id=c('Rel_type','A_agebin', 'B_agebin', 'A_age', 'B_age'), measure='Count')
if( !file.exists(fig_dir) ) {
dir.create(fig_dir)
}
for(rel_type in 0:2)
{
rel_name = rel_names[rel_type+1]
if( length(which(dat.m$Rel_type == rel_type)) == 0 ) {
next
}
dat.c = cast(dat.m, A_agebin + B_agebin ~ variable, sum, subset=(Rel_type == rel_type))
tot = sum(dat.c$Count)
dat.c$Percent = dat.c$Count / tot
m = min( rbind(dat.m$A_age, dat.m$B_age) )
text_color = "black"
### Plot ###
p <- ggplot() +
#geom_tile(data=dat.c, aes(x=A_agebin, y=B_agebin, fill=Percent)) +
#stat_bin2d(data=dat.m, aes(x=A_age, y=B_age), breaks=list(x=c(m,40,55),y=c(m,40,55)) ) +
geom_point(data=dat.m, aes(x=A_age, y=B_age), color="orange", size=1) +
#scale_fill_continuous(name="Relationships\nFormed") +
coord_fixed() +
annotate("text", x=25, y=25, label=paste(round(100*dat.c[dat.c$A_agebin=="<40" & dat.c$B_agebin=="<40",]$Percent), '%',sep=''), color=text_color, size=10) +
annotate("text", x=50, y=25, label=paste(round(100*dat.c[dat.c$A_agebin=="40+" & dat.c$B_agebin=="<40",]$Percent), '%',sep=''), color=text_color, size=10) +
#annotate("text", x=25, y=50, label=paste(round(100*dat.c[dat.c$A_agebin=="<40" & dat.c$B_agebin=="40+",]$Percent), '%',sep=''), color=text_color, size=10) +
annotate("text", x=50, y=50, label=paste(round(100*dat.c[dat.c$A_agebin=="40+" & dat.c$B_agebin=="40+",]$Percent), '%',sep=''), color=text_color, size=10) +
ggtitle( rel_name ) +
xlab( 'Male Age' ) +
    ylab( 'Female Age' )
#dev.new()
png( file.path(fig_dir,paste('RelationshipsFormed_',rel_name, '.png',sep="")), width=600, height=400)
print( p )
dev.off()
dev.new()
print(p)
}
| /Scenarios/STIAndHIV/01_STI_Network/C_Choice_Of_Partner_Age/plotRelStart.R | no_license | MIzzo-IDM/EMOD-QuickStart | R | false | false | 2,726 | r | # SUMMARY: Plot age gaps of partnerships formed for each relationship type
# INPUT: output/RelationshipStart.csv
# OUTPUT:
# 1. figs/RelationshipsFormed_Transitory.png
# 2. figs/RelationshipsFormed_Informal.png
# 3. figs/RelationshipsFormed_Marital.png
# ** Note that figures are only created when at least one relationship of the corresponding type is observed
rm( list=ls( all=TRUE ) )
graphics.off()
library(reshape)
library(ggplot2)
rel_names <- c('Transitory', 'Informal', 'Marital')
fig_dir = 'figs'
if( !file.exists(fig_dir) ) {
dir.create(fig_dir)
}
dat <- read.csv("output/RelationshipStart.csv", header=TRUE)
dat$Count = 1
ageBin = function(age) {
if( age < 40 )
return("<40")
return("40+")
}
dat$A_agebin = sapply(dat$A_age, ageBin)
dat$B_agebin = sapply(dat$B_age, ageBin)
dat$Rel_type = dat$Rel_type..0...transitory.1...informal.2...marital.
dat.m = melt(dat, id=c('Rel_type','A_agebin', 'B_agebin', 'A_age', 'B_age'), measure='Count')
if( !file.exists(fig_dir) ) {
dir.create(fig_dir)
}
for(rel_type in 0:2)
{
rel_name = rel_names[rel_type+1]
if( length(which(dat.m$Rel_type == rel_type)) == 0 ) {
next
}
dat.c = cast(dat.m, A_agebin + B_agebin ~ variable, sum, subset=(Rel_type == rel_type))
tot = sum(dat.c$Count)
dat.c$Percent = dat.c$Count / tot
m = min( rbind(dat.m$A_age, dat.m$B_age) )
text_color = "black"
### Plot ###
p <- ggplot() +
#geom_tile(data=dat.c, aes(x=A_agebin, y=B_agebin, fill=Percent)) +
#stat_bin2d(data=dat.m, aes(x=A_age, y=B_age), breaks=list(x=c(m,40,55),y=c(m,40,55)) ) +
geom_point(data=dat.m, aes(x=A_age, y=B_age), color="orange", size=1) +
#scale_fill_continuous(name="Relationships\nFormed") +
coord_fixed() +
annotate("text", x=25, y=25, label=paste(round(100*dat.c[dat.c$A_agebin=="<40" & dat.c$B_agebin=="<40",]$Percent), '%',sep=''), color=text_color, size=10) +
annotate("text", x=50, y=25, label=paste(round(100*dat.c[dat.c$A_agebin=="40+" & dat.c$B_agebin=="<40",]$Percent), '%',sep=''), color=text_color, size=10) +
#annotate("text", x=25, y=50, label=paste(round(100*dat.c[dat.c$A_agebin=="<40" & dat.c$B_agebin=="40+",]$Percent), '%',sep=''), color=text_color, size=10) +
annotate("text", x=50, y=50, label=paste(round(100*dat.c[dat.c$A_agebin=="40+" & dat.c$B_agebin=="40+",]$Percent), '%',sep=''), color=text_color, size=10) +
ggtitle( rel_name ) +
xlab( 'Male Age' ) +
ylab( 'Female Age' ) +
#dev.new()
png( file.path(fig_dir,paste('RelationshipsFormed_',rel_name, '.png',sep="")), width=600, height=400)
print( p )
dev.off()
dev.new()
print(p)
}
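The `ageBin` helper in plotRelStart.R is applied element-wise via `sapply`; as an aside, the same `<40`/`40+` labels can be produced in one vectorised call with base R's `cut`. A minimal sketch (the `age_bin` name is illustrative, not from the script):

```r
# Vectorised age binning; right = FALSE makes the intervals [-Inf, 40) and
# [40, Inf), matching ageBin's `if (age < 40)` test exactly.
age_bin <- function(ages) {
  as.character(cut(ages,
                   breaks = c(-Inf, 40, Inf),
                   labels = c("<40", "40+"),
                   right = FALSE))
}
age_bin(c(18, 39, 40, 62))  # "<40" "<40" "40+" "40+"
```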
library(shinyjqui)
### Name: jqui_icon
### Title: Create a jQuery UI icon
### Aliases: jqui_icon
### ** Examples
jqui_icon('caret-1-n')
library(shiny)
# add an icon to an actionButton
actionButton('button', 'Button', icon = jqui_icon('refresh'))
# add an icon to a tabPanel
tabPanel('Help', icon = jqui_icon('help'))
| /data/genthat_extracted_code/shinyjqui/examples/jqui_icon.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 327 | r |
library(WGCNA);
library(org.Hs.eg.db)
library(AnnotationDbi)
library(GO.db)
# The following setting is important, do not omit.
options(stringsAsFactors = FALSE);
# Read in the ImmVar CD4 expression data sets (EA, AA, EU)
EA = read.table("/cluster/project8/vyp/Winship_GVHD/claire/data_files/immvar_expression/GSE56033_GSM.ImmVarCD4.EA.PC12.txt", header = T)
AA = read.table("/cluster/project8/vyp/Winship_GVHD/claire/data_files/immvar_expression/GSE56033_GSM.ImmVarCD4.AA.PC12.txt", header = T)
EU = read.table("/cluster/project8/vyp/Winship_GVHD/claire/data_files/immvar_expression/GSE56033_GSM.ImmVarCD4.EU.PC20.txt", header = T)
EU$ID_REF = NULL
AA$ID_REF = NULL
expression_temp = cbind(EA, AA)
expression_all = cbind(expression_temp, EU)
CD4data = as.data.frame(expression_all)
# Take a quick look at what is in the data set:
dim(CD4data);
datExpr0 = as.data.frame(t(CD4data[,-c(1)]));
names(datExpr0) = CD4data$ID_REF;
rownames(datExpr0) = names(CD4data)[-c(1)];
gsg = goodSamplesGenes(datExpr0, verbose = 3);
gsg$allOK
sampleTree = hclust(dist(datExpr0), method = "average");
# Plot the sample tree: Open a graphic output window of size 12 by 9 inches
# The user should change the dimensions if the window is too large or too small.
#sizeGrWindow(25,9)
#png(file = "/cluster/project8/vyp/Winship_GVHD/claire/results/immvar/plots/sampleClustering.png", width = 15, height = 9);
#par(cex = 0.6);
#par(mar = c(0,4,2,0))
#plot(sampleTree, main = "Sample clustering to detect outliers", sub="", xlab="", cex.lab = 1.5,
# cex.axis = 1.5, cex.main = 2)
#abline(h = 42, col = "red")
#dev.off()
clust = cutreeStatic(sampleTree, cutHeight = 42)
table(clust)
# clust 1 contains the samples we want to keep.
keepSamples = (clust==0)
datExpr = datExpr0[keepSamples, ]
nGenes = ncol(datExpr)
nSamples = nrow(datExpr)
# Choose a set of soft-thresholding powers
powers = c(c(1:10), seq(from = 12, to=20, by=2))
# Call the network topology analysis function
sft = pickSoftThreshold(datExpr, dataIsExpr = TRUE, powerVector = powers, verbose = 5)
pdf(file = "/cluster/project8/vyp/Winship_GVHD/claire/GVHD/immvar/results/plots/scale_free_topology_fit.pdf", width = 12, height = 9);
par(mfrow = c(1,2));
cex1 = 0.9;
# Scale-free topology fit index as a function of the soft-thresholding power
#plot(sft$fitIndices[,1], -sign(sft$fitIndices[,3])*sft$fitIndices[,2],
# xlab="Soft Threshold (power)",ylab="Scale Free Topology Model Fit,signed R^2",type="n",
# main = paste("Scale independence"));
#text(sft$fitIndices[,1], -sign(sft$fitIndices[,3])*sft$fitIndices[,2],
# labels=powers,cex=cex1,col="red");
# this line corresponds to using an R^2 cut-off of h
#abline(h=0.97,col="red")
# Mean connectivity as a function of the soft-thresholding power
#plot(sft$fitIndices[,1], sft$fitIndices[,5],
# xlab="Soft Threshold (power)",ylab="Mean Connectivity", type="n",
# main = paste("Mean connectivity"))
#text(sft$fitIndices[,1], sft$fitIndices[,5], labels=powers, cex=cex1,col="red")
#dev.off()
##################### check scale free topology ################################
k=softConnectivity(datE=datExpr,power=3)
pdf(file = "/cluster/project8/vyp/Winship_GVHD/claire/GVHD/immvar/results/plots/check_scale_free_topology.pdf", width = 12, height = 9);
sizeGrWindow(10,5)
par(mfrow=c(1,2))
hist(k)
scaleFreePlot(k, main="Check scale free topology\n")
dev.off()
net = blockwiseModules(datExpr, power = 3,
TOMType = "unsigned", minModuleSize = 20,
reassignThreshold = 0, mergeCutHeight = 0.25,
numericLabels = TRUE, pamRespectsDendro = FALSE,
saveTOMs = TRUE,
saveTOMFileBase = "TOM",
verbose = 3)
table(net$colors)
pdf(file = "/cluster/project8/vyp/Winship_GVHD/claire/GVHD/immvar/results/plots/cluster_dendogram.pdf", width = 12, height = 9);
# open a graphics window
sizeGrWindow(12, 9)
# Convert labels to colors for plotting
mergedColors = labels2colors(net$colors)
# Plot the dendrogram and the module colors underneath
plotDendroAndColors(net$dendrograms[[1]], mergedColors[net$blockGenes[[1]]],
"Module colors",
dendroLabels = FALSE, hang = 0.03,
addGuide = TRUE, guideHang = 0.05)
dev.off()
moduleLabels = net$colors
moduleColors = labels2colors(net$colors)
MEs = net$MEs;
geneTree = net$dendrograms[[1]];
save(MEs, moduleLabels, moduleColors, geneTree,
file = "immvar-networkConstruction-auto.RData")
MEs0 = moduleEigengenes(datExpr, moduleColors)$eigengenes
MEs = orderMEs(MEs0)
#################### annotation ########################
annot = read.csv(file = "/cluster/project8/vyp/Winship_GVHD/claire/scripts/immvar/HuGene-1_0-st.csv", header = T);
dim(annot)
probes = names(datExpr)
probes2annot = match(probes, annot$transcript_cluster_id)
# The following is the number of probes without annotation:
sum(is.na(probes2annot))
# Should return 0.
gene.list <- unique(strsplit(paste(annot$gene_assignment, collapse = " /// "), split = " /// ")[[1]])
gene.list <- unique(strsplit(paste(gene.list, collapse = " // "), split = " // ")[[1]])
gene.list <- gsub(gene.list, pattern = "ENST.+", gene.list, replacement = "")
x = select(org.Hs.eg.db, gene.list, c("ENTREZID"), "ALIAS")
allLLIDs = x$ENTREZID[probes2annot];
#print(allLLIDs)
GOenr = GOenrichmentAnalysis(moduleColors, allLLIDs, organism = "human", nBestP = 10);
tab = GOenr$bestPTerms[[4]]$enrichment
geneInfo0 = data.frame(transcript_cluster_id = probes,
geneSymbol = annot$gene_symbol[probes2annot],
LocusLinkID = annot$LocusLinkID[probes2annot],
moduleColor = moduleColors)
# Order modules by their significance for weight
modOrder = order(-abs(cor(MEs, weight, use = "p")));
write.csv(geneInfo0, file = "/cluster/project8/vyp/Winship_GVHD/claire/GVHD/immvar/results/geneInfo.csv")
| /immvar/WGCNA_immvar_CD4.R | no_license | plagnollab/GVHD | R | false | false | 5,736 | r |
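The `softConnectivity`/`scaleFreePlot` step in WGCNA_immvar_CD4.R checks how well the connectivity distribution follows a power law. A rough base-R sketch of the fit index that `scaleFreePlot` reports (binned log-log regression; `scale_free_fit` and the simulated `k` are illustrative, not WGCNA code):

```r
# R^2 of log10(frequency) vs log10(mean connectivity) across bins of k;
# values near 1 indicate approximately scale-free topology.
scale_free_fit <- function(k, n_bins = 10) {
  bins <- cut(k, n_bins)
  dk <- tapply(k, bins, mean)                # mean connectivity per bin
  pk <- tapply(k, bins, length) / length(k)  # empirical frequency per bin
  keep <- !is.na(dk) & !is.na(pk) & pk > 0
  fit <- lm(log10(pk[keep]) ~ log10(dk[keep]))
  summary(fit)$r.squared
}
set.seed(1)
k <- rexp(5000, rate = 0.2)  # heavy-tailed stand-in for softConnectivity() output
r2 <- scale_free_fit(k)
```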
readEPC <- function() { #this function reads the txt file and creates the dataframe
dfEPClocal <- NULL
filename <- file("./household_power_consumption.txt")
rows= read.table(filename,header= TRUE,sep=";", nrows = 5) #read the first 5 rows, to get the classes
classes = sapply(rows,class)
library(sqldf) #library sqldf is required; if not installed, install.packages("sqldf")
filename <- file("./household_power_consumption.txt")
dfEPClocal <- sqldf("select * from filename where Date in('1/2/2007' ,'2/2/2007') ", #read the dataset, only the dates 2007-02-01 and 2007-02-02, as suggested
dbname=tempfile(),
file.format=list(header=T,sep=";",
row.names=F, colClasses = classes))
dfEPClocal$DateTime <- as.POSIXct(strptime(paste(dfEPClocal$Date,dfEPClocal$Time), "%d/%m/%Y %H:%M:%S")) #create a new field, DateTime
dfEPC <<- dfEPClocal #copy the local dataframe to the global dataframe
sqldf()
}
if(!exists("dfEPC")) readEPC() #if the database exists and is not null I don't read it again, for time saving
if(is.null(dfEPC)) readEPC()
#plot1 ---------------------------------------------------------------------------
png('plot1.png', width=480, height=480, bg='transparent')
hist(dfEPC$Global_active_power, main="Global Active Power", xlab="Global Active Power (kilowatts)", col='red')
dev.off() | /plot1.R | no_license | MatwB/ExData_Plotting1 | R | false | false | 1,343 | r |
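plot1.R pulls just two dates out of a large file via `sqldf`. For illustration, the same filter in base R (here on an inline text sample, `sample_txt`, rather than the real `household_power_consumption.txt`):

```r
# Read semicolon-separated power data, keep only 1-2 Feb 2007, and build a
# POSIXct timestamp the way the script does. `sample_txt` is made-up demo data.
sample_txt <- "Date;Time;Global_active_power
1/2/2007;00:00:00;1.5
2/2/2007;00:01:00;2.0
3/2/2007;00:02:00;0.7"
dat <- read.table(text = sample_txt, header = TRUE, sep = ";",
                  stringsAsFactors = FALSE)
dat <- dat[dat$Date %in% c("1/2/2007", "2/2/2007"), ]
dat$DateTime <- as.POSIXct(strptime(paste(dat$Date, dat$Time),
                                    "%d/%m/%Y %H:%M:%S"))
nrow(dat)  # 2
```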
% Generated by roxygen2 (4.0.2): do not edit by hand
\name{info}
\alias{info}
\title{Display a list of special commands.}
\usage{
info()
}
\description{
Display a list of the special commands, \code{bye()}, \code{play()},
\code{nxt()}, \code{skip()}, and \code{info()}.
}
\examples{
\dontrun{
| Create a new variable called `z` that contains the number 11.
> info()
| When you are at the R prompt (>):
| -- Typing skip() allows you to skip the current question.
| -- Typing play() lets you experiment with R on your own; swirl will ignore what
| you do...
| -- UNTIL you type nxt() which will regain swirl's attention.
| -- Typing bye() causes swirl to exit. Your progress will be saved.
| -- Typing info() displays these options again.
> bye()
| Leaving swirl now. Type swirl() to resume.
}
}
| /man/info.Rd | no_license | dtrentedwards/swirl | R | false | false | 800 | rd |
# Yige Wu @WashU Jul 2020
## for merging 32 snRNA datasets
# set up libraries and output directory -----------------------------------
## getting the path to the current script
thisFile <- function() {
cmdArgs <- commandArgs(trailingOnly = FALSE)
needle <- "--file="
match <- grep(needle, cmdArgs)
if (length(match) > 0) {
# Rscript
return(normalizePath(sub(needle, "", cmdArgs[match])))
} else {
# 'source'd via R console
return(normalizePath(sys.frames()[[1]]$ofile))
}
}
path_this_script <- thisFile()
## set working directory
dir_base = "/diskmnt/Projects/ccRCC_scratch/ccRCC_snRNA/"
setwd(dir_base)
source("./ccRCC_snRNA_analysis/load_pkgs.R")
source("./ccRCC_snRNA_analysis/functions.R")
source("./ccRCC_snRNA_analysis/variables.R")
## set run id
version_tmp <- 1
run_id <- paste0(format(Sys.Date(), "%Y%m%d") , ".v", version_tmp)
## set output directory
dir_out <- paste0(makeOutDir_katmai(path_this_script), run_id, "/")
dir.create(dir_out)
options(future.globals.maxSize = 1000 * 1024^2)
# input dependencies ------------------------------------------------------
## input seurat paths
paths_srat <- fread(data.table = F, input = "./Resources/Analysis_Results/data_summary/write_individual_srat_object_paths/20210308.v1/Seurat_Object_Paths.20210308.v1.tsv")
# input the seurat object and store in a list--------------------------------------------------------
paths_srat2process <- paths_srat
renal.list <- list()
for (i in 1:nrow(paths_srat2process)) {
sample_id_tmp <- paths_srat2process$Aliquot[i]
seurat_obj_path <- paths_srat2process$Path_katmai_seurat_object[i]
seurat_obj <- readRDS(file = seurat_obj_path)
renal.list[[sample_id_tmp]] <- seurat_obj
}
length(renal.list)
rm(seurat_obj)
# Run the standard workflow for visualization and clustering ------------
renal.list <- lapply(X = renal.list, FUN = function(x) {
x <- NormalizeData(x)
x <- FindVariableFeatures(x, selection.method = "vst", nfeatures = 3000)
})
## integrate without anchor
renal.integrated <- merge(x = renal.list[[1]], y = renal.list[2:length(renal.list)], project = "integrated")
rm(renal.list)
## save as RDS file
saveRDS(object = renal.integrated, file = paste0(dir_out, "32_aliquot.merged.", run_id, ".RDS"), compress = T)
cat("Finished saving the output!\n")
| /merging/merge_32_aliquot.R | no_license | ding-lab/ccRCC_snRNA_analysis | R | false | false | 2,294 | r |
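The `thisFile()` helper at the top of merge_32_aliquot.R recovers the script path from the `--file=` argument that `Rscript` injects into `commandArgs()`. Its core, demonstrated on a simulated argument vector (`get_script_path` and the sample args are illustrative):

```r
# Find the --file=<path> entry and strip the prefix; Rscript adds this entry
# automatically, so the sys.frames() fallback is only needed when source()'d.
get_script_path <- function(cmd_args) {
  needle <- "--file="
  hit <- grep(needle, cmd_args)
  if (length(hit) > 0) sub(needle, "", cmd_args[hit]) else NA_character_
}
args <- c("/usr/lib/R/bin/exec/R", "--slave", "--file=/tmp/merge.R", "--args")
get_script_path(args)  # "/tmp/merge.R"
```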
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/paws.s3control_package.R
\docType{package}
\name{paws.s3control-package}
\alias{paws.s3control}
\alias{paws.s3control-package}
\title{paws.s3control: AWS S3 Control}
\description{
AWS S3 Control provides access to Amazon S3 control
plane operations.
}
\author{
\strong{Maintainer}: David Kretch \email{david.kretch@gmail.com}
Authors:
\itemize{
\item Adam Banker \email{adam.banker39@gmail.com}
}
Other contributors:
\itemize{
\item Amazon.com, Inc. [copyright holder]
}
}
\keyword{internal}
| /service/paws.s3control/man/paws.s3control-package.Rd | permissive | CR-Mercado/paws | R | false | true | 581 | rd |
# Objective: compute and analyse how many genomes present an expansion beyond the premutation and pathogenic thresholds
# After visual inspection
date ()
Sys.info ()[c("nodename", "user")]
commandArgs ()
rm (list = ls ())
R.version.string ## "R version 3.6.3
# libraries
library(dplyr); packageDescription ("dplyr", fields = "Version") #"0.8.3"
# set working directory
setwd("~/Documents/STRs/ANALYSIS/population_research/PAPER/expanded_genomes_main_pilot/feb2021/")
# Load pathogenic table
patho_table = read.csv("./beyond_full-mutation/13_loci_beyond__pathogenic_cutoff_38_EHv322_92K_population_15M.tsv",
stringsAsFactors = F,
header = T,
sep = "\t")
# Compute how many expansions each platekey has
patho_table = patho_table %>%
group_by(platekey, Final.decision) %>%
mutate(number_exp = n()) %>%
ungroup() %>%
as.data.frame()
# See which ones are `number_exp` == 2 and `Final.decision` = Yes
patho_table %>% filter(number_exp == 2, Final.decision %in% "Yes")
# Load premutation table (FMR1 not yet)
premut_table = read.csv("./beyond_premut/13loci_beyond_premut_cutoff_to_review_VGD_enriched_pathoFinalDecision_100621.tsv",
stringsAsFactors = F,
header = T,
sep = "\t")
# Compute how many expansions each platekey has
premut_table = premut_table %>%
group_by(platekey, Final.decision) %>%
mutate(number_exp = n()) %>%
ungroup() %>%
as.data.frame()
# See which ones are `number_exp` == 2 and `Final.decision` = Yes
premut_table %>% filter(number_exp == 2, Final.decision %in% "Yes")
| /population_research/analysing_present_more_than_one_expansion.R | no_license | kibanez/analysing_STRs | R | false | false | 1,631 | r |
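The `group_by(platekey, ...) %>% mutate(number_exp = n())` chain above attaches a per-sample expansion count to every row. The same idea in base R with `ave`, on made-up toy data:

```r
# Count rows per platekey and keep samples with two or more expansions,
# mirroring the dplyr filter(number_exp == 2, Final.decision %in% "Yes") step.
df <- data.frame(platekey = c("LP1", "LP1", "LP2", "LP3", "LP3", "LP3"),
                 Final.decision = "Yes",
                 stringsAsFactors = FALSE)
df$number_exp <- ave(seq_len(nrow(df)), df$platekey, FUN = length)
multi <- df[df$number_exp >= 2 & df$Final.decision == "Yes", ]
unique(multi$platekey)  # "LP1" "LP3"
```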
## The first function creates a special object that stores a matrix and can cache its inverse.
## The second function returns the cached inverse if the matrix has not changed.
## If the matrix has changed, the second function will calculate the inverse matrix again.
## This function receives a matrix and returns a list of functions for caching its inverse.
makeCacheMatrix <- function(x = matrix()) {
matrix_inv <- NULL
set <- function(y) {
x <<- y
matrix_inv <<- NULL
}
get <- function() x
setinverse <- function(inverse) matrix_inv <<- inverse
getinverse <- function() matrix_inv
list(set = set, get = get,
setinverse = setinverse,
getinverse = getinverse)
}
## This function returns the inverse of the special "matrix" created by the first function, using the cached value if available.
## If the inverse has not been cached yet, this function calculates it and stores it in the cache.
cacheSolve <- function(x, ...) {
## Return a matrix that is the inverse of 'x'
matrix_inv <- x$getinverse()
if(!is.null(matrix_inv)) {
message("getting cached data")
return(matrix_inv)
}
data <- x$get()
matrix_inv <- solve(data, ...)
x$setinverse(matrix_inv)
matrix_inv
}
| /cachematrix.R | no_license | marcelapatlo97/ProgrammingAssignment2 | R | false | false | 1,221 | r |
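cachematrix.R never shows the two functions in action. A compressed sketch of the same `<<-` closure-caching pattern, with shortened illustrative names, plus a usage run:

```r
# make_cached() stores a matrix and an initially-NULL inverse in a closure;
# cached_solve() computes the inverse once and replays it on later calls.
make_cached <- function(x) {
  inv <- NULL
  list(get     = function() x,
       get_inv = function() inv,
       set_inv = function(v) inv <<- v)
}
cached_solve <- function(cm) {
  inv <- cm$get_inv()
  if (!is.null(inv)) return(inv)  # cache hit: no solve() call
  inv <- solve(cm$get())
  cm$set_inv(inv)
  inv
}
cm <- make_cached(matrix(c(2, 0, 0, 2), 2, 2))
i1 <- cached_solve(cm)  # computed with solve()
i2 <- cached_solve(cm)  # returned from the cache
```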
library(stringr)
library(tidyverse)
library(here)
library(scales)
var_names <- c(
est_prev = "Prevalence estimate",
infections = "Estimated incidence",
dcases = "Daily estimated prevalence",
r = "Growth rate",
Rt = "Reproduction number",
cumulative_infections = "Cumulative incidence",
cumulative_exposure = "Proportion ever infected"
)
labels <- list(
est_prev = scales::percent_format(1L),
infections = scales::percent_format(0.1),
dcases = scales::percent_format(1L),
r = waiver(),
Rt = waiver(),
cumulative_infections = scales::percent_format(1L),
cumulative_exposure = scales::percent_format(1L)
)
group <- c(
age_school = "Age group",
national = "Nation",
regional = "Region",
variant_regional = "Variant",
variant_national = "Variant"
)
hline <- c(
r = 0,
Rt = 1
)
histories <- c(all = Inf)
breaks <- c(all = "4 months")
# use .csv files and not .rds files
file_pattern <-
paste0("estimates", "_[^_]+\\.csv")
files <-
list.files(here("epiforecast-data"),
pattern = file_pattern,
full.names = TRUE)
names(files) <- str_extract(files, "(?<=_).*(?=.csv)")
# 5 files: _age, _local, _national, _regional, _variants
level <- "national"
file = files[[level]]
data <- read_csv(file) %>%
pivot_longer(matches("^q[0-9]+$"), names_to = "quantile") %>%
| /analysis/plots_incidence.R | permissive | opensafely/CIS-pop-validation | R | false | false | 3,657 | r |
library(stringr)
library(tidyverse)
library(here)
library(scales)
var_names <- c(
est_prev = "Prevalence estimate",
infections = "Estimated incidence",
dcases = "Daily estimated prevalence",
r = "Growth rate",
Rt = "Reproduction number",
cumulative_infections = "Cumulative incidence",
cumulative_exposure = "Proportion ever infected"
)
labels <- list(
est_prev = scales::percent_format(1L),
infections = scales::percent_format(0.1),
dcases = scales::percent_format(1L),
r = waiver(),
Rt = waiver(),
cumulative_infections = scales::percent_format(1L),
cumulative_exposure = scales::percent_format(1L)
)
group <- c(
age_school = "Age group",
national = "Nation",
regional = "Region",
variant_regional = "Variant",
variant_national = "Variant"
)
hline <- c(
r = 0,
Rt = 1
)
histories <- c(all = Inf)
breaks <- c(all = "4 months")
# use .csv files and not .rds files
file_pattern <-
paste0("estimates", "_[^_]+\\.csv")
files <-
list.files(here("epiforecast-data"),
pattern = file_pattern,
full.names = TRUE)
names(files) <- str_extract(files, "(?<=_).*(?=\\.csv)") # escape the dot so ".csv" is matched literally
# 5 files: _age, _local, _national, _regional, _variants
level <- "national"
file <- files[[level]]
data <- read_csv(file) %>%
pivot_longer(matches("^q[0-9]+$"), names_to = "quantile") %>%
mutate(value = if_else(name == "est_prev", value / 100, value),
value = if_else(name == "infections", value / population, value)) %>%
pivot_wider(names_from = "quantile")
cum_file <-
here::here("epiforecast-data", paste0("cumulative_", sub("^[^_]+_", "", file)))
cum_data <- read_csv(cum_file) %>%
filter(name %in% names(var_names))
data <- data %>%
bind_rows(cum_data)
if (!"geography" %in% colnames(data)) { # in all but the _variant file
data <- data %>%
mutate(geography = NA_character_,
variable = as.character(variable))
}
level_data <- data %>%
filter(level == {{ level }}) %>%
mutate(variable = if_else(variable == "2-10", "02-10", variable),
name = if_else(name == "R", "Rt", name)) %>%
filter(name %in% names(var_names))
if (level == "local") {
level_data <- level_data %>%
left_join(local_region, by = c("level", "variable")) # adds column
# region with names for the J* codes in column 'variable'
}
colour_var <- ifelse(level == "local", "region", "variable")
history <- "all"
name <- "infections"
spaghetti <- FALSE
plot_df <- level_data %>%
filter(name == {{ name }},
date > max(date) - histories[[history]],
variable == "England")
aes_str <- list(x = "date", colour = colour_var)
aes_str <- c(aes_str, list(y = "q50", fill = colour_var))
p <- ggplot(plot_df, do.call(aes_string, aes_str)) +
ylab(name) + xlab("")
p <- p +
geom_line() +
geom_ribbon(mapping = aes(ymin = `q25`, ymax = `q75`), alpha = 0.35, colour = NA) +
geom_ribbon(mapping = aes(ymin = `q5`, ymax = `q95`), alpha = 0.175, colour = NA)
p <- p +
scale_x_date(breaks = breaks[["all"]],
labels = date_format("%b %Y")) +
scale_y_continuous(var_names["infections"], labels = labels[["infections"]]) +
scale_colour_brewer(group["national"], palette = "Set1") +
scale_fill_brewer(group["national"], palette = "Set1") +
theme_light()
if (grepl("^variant_", level)) {
p <- p +
facet_wrap(~geography)
}
if (grepl("cumulative_", name)) {
p <- p +
expand_limits(y = 0)
}
fs::dir_create(here::here("output", "figures", "epiforecast"))
ggsave(here::here("output", "figures", "epiforecast",
paste0(level, "_", name, "_",
history, ".", "png")), p,
width = 8, height = 4)
|
# differs in the nonlinearity of the colour scale
library(fields) # provides image.plot()
# NOTE: 'adatsec' (samples per second) must be defined in the calling environment
nemlinplot<-function(y.skala,mit, xfel, yfel,mitmain){
RanMax<-max(abs(range(mit)))
xPow<-3
skal<-seq(0,1,by=0.01)^xPow
brakes<-c(-rev(skal),skal[-1])*RanMax
##x.skala<-c(1:dim(mit)[2])
x.skala<-c(1:dim(mit)[2]/adatsec)
# nonlinear scale
image( x.skala,y.skala,t(mit),xlab=xfel, ylab=yfel, main=mitmain,col=rainbow(200, start=0,end=0.7),breaks=brakes,cex.lab=1.2,cex.main=1.2, yaxt='n')
##axis(1,c(1:dim(mit)[2])/adatsec, tick='FALSE')
#csatszamok<-c(cs:1)
##axis(2,round(y.skala[1:length(y.skala)]),abs(round(y.skala[length(y.skala):1])))
##axis(2,round(y.skala[1:length(y.skala)]),abs(round(y.skala[length(y.skala):1])))
## at=c(0,seq(range(y.skala)[1],range(y.skala)[2],length.out=5)))
axis(2,at=c(0,seq(range(y.skala)[1],range(y.skala)[2],length.out=5)))
image.plot(x.skala,y.skala,t(mit),col=rainbow(200, start=0,end=0.7),breaks=brakes, add=TRUE, legend.shrink=0.9,
legend.width=0.7,horizontal =FALSE,cex=1.2)
##axis(1,c(1:dim(mit)[2])/20, tick='FALSE')
##axis(2,round(y.skala[1:length(y.skala)]),abs(round(y.skala[length(y.skala):1])))
}
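The cubic break construction above can be sketched on its own; `RanMax` is assumed here for illustration (nemlinplot derives it from `mit`):

```r
# Standalone sketch of the cubic colour-break construction used in nemlinplot.
RanMax <- 2.5                       # illustrative; nemlinplot uses max(abs(range(mit)))
xPow <- 3
skal <- seq(0, 1, by = 0.01)^xPow   # cubic spacing: breaks are dense near zero
brakes <- c(-rev(skal), skal[-1]) * RanMax
# 201 strictly increasing breaks, symmetric about zero:
stopifnot(length(brakes) == 201)
stopifnot(all(diff(brakes) > 0))
stopifnot(max(abs(brakes + rev(brakes))) < 1e-12)
```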
| /nemlinplot3.R | no_license | csdori/Spherical_2014 | R | false | false | 1,102 | r |
|
| /cachematrix.R | no_license | hadautho/ProgrammingAssignment2 | R | false | false | 724 | r |
## makeCacheMatrix returns functions to get and set the matrix and to get and set its inverse
makeCacheMatrix <- function(x = matrix()) {
m <- NULL
set <- function(y) {
x <<- y
m <<- NULL
}
get <- function() x
setsolve <- function(solve) m <<- solve
getsolve <- function() m
l <- list(set = set, get = get, setsolve = setsolve, getsolve = getsolve)
l
}
## cacheSolve checks whether a cached inverse is available and returns it. If not,
## it computes the inverse with R's solve function and caches the result
cacheSolve <- function(x, ...) {
## Return a matrix that is the inverse of 'x'
m <- x$getsolve()
if(!is.null(m)) {
message("getting cached data")
return(m)
}
d <- x$get()
m <- solve(d)
x$setsolve(m)
m
} |
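A brief usage sketch (matrix values chosen for illustration):

```r
## Usage sketch for makeCacheMatrix/cacheSolve (example values):
cm <- makeCacheMatrix(matrix(c(2, 0, 0, 4), 2, 2))
inv1 <- cacheSolve(cm)   # first call computes the inverse and caches it
inv2 <- cacheSolve(cm)   # second call prints "getting cached data"
stopifnot(identical(inv1, inv2))
stopifnot(isTRUE(all.equal(inv1 %*% cm$get(), diag(2))))
```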
#
# R source code for preprocessing and scoring
#
No_Causata_Name <- "No Causata Name" #
#
# constructors for classes
#
CausataVariable <- function(variableName, values, causata.name=variableName) {
# initialize a Causata Variable object
# - variableName: a string containing the variable name
# - values: a vector of values
variableObject <- list()
class(variableObject) <- "CausataVariable"
variableObject$name <- variableName # variable name
variableObject$causata.name <- causata.name # Name that Causata uses for this variable
variableObject$class <- class(values) # class of data: numeric, integer, factor
variableObject$steps <- c() # list of preprocessing steps executed for this variable
variableObject$outlierUpperLimit <- NULL # upper limit for outlier removal
variableObject$outlierLowerLimit <- NULL # lower limit for outlier removal
variableObject$binRight <- NULL # logical, indicating if the intervals should be closed on the right (and open on the left) or vice versa
variableObject$binLimits <- NULL # limit values for continuous bins
variableObject$binValues <- NULL # map continuous values to discrete, e.g. weight of evidence binning
variableObject$missingReplacement <- NULL # missing values will be replaced with this
variableObject$unmergedLevels <- NULL # levels that remained after MergeLevels
variableObject$categorizing.attribute <- NULL # Some input variables are categorized.
variableObject$category.value <- NULL # the category to use for a categorized variable.
variableObject$time.domain <- NULL # The variable time domain
# transformation is a function that takes a vector (column) from a data.frame and applies
# all the transformations to it that have been recorded in this CausataVariable, returning the transformed vector
variableObject$transformation <- identity
return(variableObject)
}
is.CausataVariable <- function(this) inherits(this, "CausataVariable")
CausataData <- function(dataframe, dependent.variable=NULL, query=NULL) {
# initialize a Causata Data object
# - dataframe: a dataframe containing independent variables
# - dependent.variable: a vector of dv values, or a name of a variable in the data frame that contains dependent variable
# - dvName: the name of the dependent variable
dataObject <- list()
class(dataObject) <- "CausataData"
if (is.null(dependent.variable)){
# if dependent variable argument is missing then require that "dependent.variable" is defined in data frame
dataObject$dvName <- "dependent.variable"
if (!(dataObject$dvName %in% names(dataframe))){
stop("No argument provided for dependent.variable, so a column named 'dependent.variable' must be provided in the data frame.")
}
# copy the dependent variable
dv <- dataframe$dependent.variable
} else if (class(dependent.variable) == "character"){
# character was provided, test if it's a column in the data frame
if (!(dependent.variable %in% names(dataframe))){
stop("A variable named ", dependent.variable, " must be present in the data frame.")
}
# store the name
dataObject$dvName <- dependent.variable
# copy the data to a variable called dv
dv <- dataframe[[dataObject$dvName]]
} else {
# assume that the dependent variable is an array of values with length matching data frame
# copy the data to a variable called dv
if (!(nrow(dataframe) == length(dependent.variable))){
stop("dependent.variable length ", length(dependent.variable),
" must match rows in dataframe ", nrow(dataframe))
}
if (is.null(names(dependent.variable))){
# no name provided, so give it a default name
dataObject$dvName <- "dependent.variable"
} else {
# name provided, copy it
dataObject$dvName <- names(dependent.variable)
}
# copy the values to the data frame
dataframe[[dataObject$dvName]] <- dependent.variable
dv <- dependent.variable
}
# dv values can't be missing
if (any(is.na(dv))){
stop("Missing values not allowed in dependent variable ", dataObject$dvName)
}
# initialize a variable object for every variable in the data frame
dataObject$variableList <- list()
dataObject$skippedVariables <- c()
for (varname in names(dataframe)) {
# if the dependent variable is in the data frame then skip it, don't create a causata variable for it
if (varname == dataObject$dvName){
next # skip to next variable
}
# find causata name corresponding to this column of data frame
#causata.name <- r.to.causata.name.mapping[ match(varname, names(dataframe)) ]
causata.name <- RToCausataNames(varname)
if (causata.name == No_Causata_Name){
# a variable in the data frame does not have a causata name, e.g. profile_id
# do not create a variable object for this column
dataObject$skippedVariables <- c(dataObject$skippedVariables, varname)
next
} else {
dataObject$variableList[[varname]] <- CausataVariable(
varname, dataframe[[varname]], causata.name)
}
}
# if variables were skipped then generate a warning
if (length(dataObject$skippedVariables) > 0){
warning("CausataData did not create variables for ", length(dataObject$skippedVariables),
" columns that did not conform to variable naming conventions.")
}
dataObject$df <- dataframe
dataObject$query <- query
return(dataObject)
}
# Converts Causata variable names to R variable names.
# For example:
# CausataToRNames(c("total-spend$All Past", "page-view-count$Last 30 Days"))
# returns list("total-spend$All Past"="total.spend__All.Past", "page-view-count$Last 30 Days"="page.view.count__Last.30.Days")
#
CausataToRNames <- function(name.vector) {
safe.names <- make.names(str_replace(as.vector(name.vector), "\\$", "__"), unique=TRUE)
ToRName <- function(index) {
l <- list()
l[name.vector[[index]]] <- safe.names[[index]]
return(l)
}
return(unlist(lapply(1:length(name.vector), ToRName)))
}
# Converts R variable column names to Causata variable names.
# For example:
# RToCausataNames(c("total.spend__All.Past", "page.view.count__Last.30.Days"))
# returns list("total.spend_All.Past"="total-spend$All Past", "page.view.count_Last.30.Days"="page-view-count$Last 30 Days")
#
RToCausataNames <- function(name.vector) {
DotsToDashes <- function(string) return(str_replace_all(string, "\\.", "-"))
DotsToSpaces <- function(string) return(str_replace_all(string, "\\.", " "))
ToCausataName <- function(name) {
parts <- str_split(name, "__")[[1]] # split at double underscore for data exported from SQL
parts.csv <- str_split(name, "_")[[1]] # split at single underscore for data exported from csv
if (length(parts) == 2) {
#stopifnot(length(parts) == 2)
return(paste(DotsToDashes(parts[[1]]), DotsToSpaces(parts[[2]]), sep="$", collapse="$"))
} else if (length(parts.csv) == 3 && nchar(parts.csv[2]) > 0) {
# this variable fits a pattern where there are two underscores with text in between,
# assume this is a csv variable, return the name unaltered
return(name)
} else {
# the name doesn't fit pattern name__interval, so set a default non-match value
warning("The variable named ", name, " does not match Causata naming conventions.")
return(No_Causata_Name)
}
}
# return a vector, not a list, so use as.vector on results of sapply
v <- unlist(lapply(name.vector, ToCausataName))
names(v) <- name.vector
return(v)
}
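The round trip that these two helpers implement can be sketched in base R (a stringr-free re-implementation for illustration; the package versions above also handle the csv naming edge case):

```r
# Base-R sketch of the name round trip (illustration only).
ToR <- function(x) make.names(sub("\\$", "__", x), unique = TRUE)
ToCausata <- function(x) {
  parts <- strsplit(x, "__", fixed = TRUE)[[1]]
  paste(gsub(".", "-", parts[1], fixed = TRUE),
        gsub(".", " ", parts[2], fixed = TRUE), sep = "$")
}
r.names <- ToR(c("total-spend$All Past", "page-view-count$Last 30 Days"))
stopifnot(identical(r.names,
                    c("total.spend__All.Past", "page.view.count__Last.30.Days")))
stopifnot(identical(sapply(r.names, ToCausata, USE.NAMES = FALSE),
                    c("total-spend$All Past", "page-view-count$Last 30 Days")))
```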
#
# define generic functions
#
CleanNaFromContinuous <- function(x, ...) {
# generic function to replace missing values
UseMethod("CleanNaFromContinuous")
}
CleanNaFromFactor <- function(x, ...){
# generic function to replace missing values
UseMethod("CleanNaFromFactor")
}
Discretize <- function(this, ...){
# generic function to discretize a continuous variable
UseMethod("Discretize")
}
GetVariable <- function(this, ...) {
UseMethod("GetVariable")
}
GetTransforms <- function(this, ...) {
UseMethod("GetTransforms")
}
GetQuery <- function(this, ...) {
UseMethod("GetQuery")
}
ReplaceOutliers.CausataData <- function(this, variableName, lowerLimit=NULL, upperLimit=NULL, ...){
# Given a causata data object, replaces outliers with specified values.
stopifnot(variableName %in% names(this$variableList))
# extract the causata variable
causataVariable <- this$variableList[[variableName]]
# store this step in the list of steps
causataVariable$steps <- c(causataVariable$steps, "ReplaceOutliers")
# replace missing values
this$df[[variableName]] <- ReplaceOutliers( this$df[[variableName]], lowerLimit=lowerLimit, upperLimit=upperLimit )
# update the variable object
causataVariable$outlierUpperLimit <- upperLimit
causataVariable$outlierLowerLimit <- lowerLimit
this$variableList[[variableName]] <- causataVariable
# return the data object
return(this)
}
CleanNaFromFactor.CausataVariable <- function(x, dataObject, replacement, ...) {
# given a causata variable object, replaces missing values with BLANK
varname <- x$name
# store this step in the list of steps
x$steps <- c(x$steps, "CleanNaFromFactor")
x$missingReplacement <- replacement
# replace missing values
dataObject$df[[varname]] <- CleanNaFromFactor( dataObject$df[[varname]], replacement=replacement )
# update the variable object
dataObject$variableList[[varname]] <- x
# return the data object
return(dataObject)
}
CleanNaFromFactor.CausataData <- function(x, variableName=NULL, replacement="BLANK", ...) {
# allow users to pass in a bare, unquoted variable name, this will convert to a string
if (!is.null(variableName)){
# a single variable name was provided, confirm that it's in the causataData object
if (!(variableName %in% names(x$variableList))){
# the parsed string doesn't match, throw an error
stop("The variable ", variableName, " was not found in the causataData input.")
}
variableList <- variableName
} else {
# no variable name was provided, so use all of them
variableList <- names(x$variableList)
}
for (varname in variableList) {
# test if this is a factor
if (class(x$df[[varname]]) == "factor") {
x <- CleanNaFromFactor.CausataVariable( x$variableList[[varname]], x, replacement=replacement )
}
}
# return the data object
return(x)
}
CleanNaFromContinuous.CausataVariable <- function(x, dataObject, method="median", ...) {
# given a causata variable object, replaces missing values within it
# get the variable name
varname <- x$name
# store this step in the list of steps
x$steps <- c(x$steps, "CleanNaFromContinuous")
# replace missing values
outList <- CleanNaFromContinuous( dataObject$df[[varname]], method=method, return.replacement=TRUE )
# update the data object and store the value used to replace missing values
dataObject$df[[varname]] <- outList[[1]]
# check if the missing replacement value was already set, which may have happened in Discretize. If it was set then do not reset here.
if (is.null(x$missingReplacement)){
x$missingReplacement <- outList[[2]]
}
# update the variable object
dataObject$variableList[[varname]] <- x
# return the data object
return(dataObject)
}
CleanNaFromContinuous.CausataData <- function(x, variableName=NULL, method="median", ...) {
# allow users to pass in a bare, unquoted variable name, this will convert to a string
if (!is.null(variableName)){
# a single variable name was provided, confirm that it's in the causataData object
if (!(variableName %in% names(x$variableList))){
# the parsed string doesn't match, throw an error
stop("The variable ", variableName, " was not found in the causataData input.")
}
variableList <- variableName
} else {
# no variable name was provided, so use all of them
variableList <- names(x$variableList)
}
for (varname in variableList) {
# test if this is a numeric or integer variable
varClass <- class(x$df[[varname]])
if (any(varClass=="numeric") || any(varClass=="POSIXct") || any(varClass=="integer")) {
x <- CleanNaFromContinuous.CausataVariable(x$variableList[[varname]], x, method=method)
}
}
return(x)
}
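The median replacement this relies on can be sketched minimally (CleanNaFromContinuous itself is defined elsewhere in the package and is assumed to behave like this in the simplest case):

```r
# Minimal sketch of method = "median" NA replacement.
x <- c(1, 2, NA, 10)
replacement <- median(x, na.rm = TRUE)
x[is.na(x)] <- replacement
stopifnot(identical(x, c(1, 2, 2, 10)))
```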
Discretize.CausataData <- function(this, variableName, breaks, discrete.values, verbose=FALSE, ...) {
# Given a causata data object, discretizes a numeric variable into the supplied bins.
stopifnot(variableName %in% names(this$variableList))
# copy variable from list
varObject <- this$variableList[[variableName]]
# require that this variable is numeric
stopifnot(varObject$class %in% c("numeric","integer"))
# ensure that missing values were not replaced before this step, missing values should not be replaced before Discretize
if ("CleanNaFromContinuous" %in% varObject$steps) {
stop("The CleanNaFromContinuous step must not be executed before the Discretize step for variable ", variableName)
}
# ensure that outliers were replaced before this step
if (!("ReplaceOutliers" %in% varObject$steps)){
stop("The ReplaceOutliers step must be run before the Discretize step for variable ", variableName)
}
# require that the outlier replacement upper limit is <= the highest bin value
if (is.null(varObject$outlierUpperLimit) || varObject$outlierUpperLimit > breaks[length(breaks)]){
stop("The outlier limit is not set or greater than the last breaks value in ", variableName)
}
if (is.null(varObject$outlierLowerLimit)){
stop("The outlier lower limit must be set before the Discretize step for variable ",variableName)
}
#
# check the length of inputs and for missing values
#
if (any(is.na(this$df[[variableName]]))){
#
# missing values are present in numeric variable, must have N+1 breaks and N values
#
if (length(breaks) != length(discrete.values)){
stop("Discretize requires number of breaks to match number of discrete values when missing values are present for variable ", variableName,
", found ", length(breaks), " breaks and ", length(discrete.values), " discrete values.")
}
# create an artificial bin for missing values, make it above the max value in the data
newbreak <- 0.5 * (max(breaks) - min(breaks)) + max(breaks)
breaks <- c(breaks, newbreak)
# map missing values to the new last break, make it slightly smaller than last artificial bin boundary
varObject$missingReplacement <- newbreak * 0.99
this$df[[variableName]] <- CleanNaFromContinuous(this$df[[variableName]], replacement.value = varObject$missingReplacement)
# update the variable object
varObject$binLimits <- breaks
varObject$binValues <- discrete.values
# set a label to be appended to the last level name for verbose output
last.level.label.append <- " (Missing)" # append missing label since last level contains missing values
} else {
#
# no missing values present, must have N+1 breaks and N discrete values
#
if (length(breaks) != (length(discrete.values)+1)){
stop("Discretize requires number of breaks N+1 and N discrete values for variable ", variableName,
", found ", length(breaks), " breaks and ", length(discrete.values), " discrete values.")
}
# no missing values found, copy over breaks and woe levels as normal
# update the variable object
varObject$binLimits <- breaks
varObject$binValues <- discrete.values
# set a label to be appended to the last level name for verbose output
last.level.label.append <- "" # append blank since the last level does not contain missing values
}
# store this step in the list of steps
varObject$steps <- c(varObject$steps, "Discretize")
# set bins to right, closed on right and open on left, default for cut command
varObject$binRight <- TRUE
# use cut to discretize the variable
fiv <- cut(this$df[[variableName]], breaks=breaks, include.lowest=TRUE)
# replace continuous values with discrete values in the causataData object
discrete.variable <- rep(0, nrow(this$df))
level.vec <- levels(fiv)
if (verbose) {cat("Mapping values to", length(discrete.values), "discrete bins, n is number of values in bins:\n")}
# loop for each level in the discretized data
for (i in 1:length(discrete.values)){
idx <- level.vec[i] == fiv
if (sum(idx)>0){discrete.variable[idx] <- discrete.values[i]}
# print message if verbose flag set, add label to level with missing values if present
if (verbose) {cat(" ", level.vec[i], "->", discrete.values[i], " n =", sum(idx),
if(i==length(discrete.values)) last.level.label.append else "", "\n")}
}
this$df[[variableName]] <- discrete.variable
# update the causata variable and return the causata data object
this$variableList[[variableName]] <- varObject
return(this)
}
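The cut-then-map core of Discretize can be sketched with toy breaks and values:

```r
# Toy sketch of the cut-then-map idea behind Discretize: N+1 breaks, N values.
x <- c(0.5, 3, 7, 9.5)
breaks <- c(0, 2, 5, 10)
discrete.values <- c(-0.4, 0.1, 0.7)   # e.g. one weight-of-evidence value per bin
mapped <- discrete.values[cut(x, breaks = breaks, include.lowest = TRUE, labels = FALSE)]
stopifnot(identical(mapped, c(-0.4, 0.1, 0.7, 0.7)))
```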
MergeLevels.CausataVariable <- function(this, causataData, max.levels, ...) {
# Given a Causata Variable object for a factor, merges the smallest levels
# in the factor so that the total number of levels does not exceed maxlevels.
# get the variable name
varname <- this$name
# store this step in the list of steps
this$steps <- c(this$steps, "MergeLevels")
# Store the levels before merging
levels1 <- levels(causataData$df[[varname]])
# merge levels and update the data object
causataData$df[[varname]] <- MergeLevels(causataData$df[[varname]], max.levels)
# store a vector of the levels that were not merged into "Other" (all other values become "Other")
this$unmergedLevels <- levels(causataData$df[[varname]])
# update the variable object stored in the data object
causataData$variableList[[varname]] <- this
# return the data object
return(causataData)
}
MergeLevels.CausataData <- function(this, variableName=NULL, max.levels, other.name="Other", verbose=FALSE, ...) {
# given a Causata Data object, merges levels in all of the factors within where
# the number of levels exceeds maxlevels
if (!is.null(variableName)){
# a single variable name was provided, confirm that it's in the causataData object
stopifnot(variableName %in% names(this$variableList))
variableList <- variableName
} else {
# no variable name was provided, so use all of them
variableList <- names(this$variableList)
}
if (verbose) {cat("\nMerging levels in factors:\n")}
for (varname in variableList) {
# test if this is a factor
if (class(this$df[[varname]]) == "factor") {
# it is a factor, check the number of levels
numLevels <- length(levels(this$df[[varname]]))
# check if the number of levels exceeds a limit
if (numLevels > max.levels){
# this factor exceeds the limit, merge levels
if (verbose) {cat(" ", varname, "\n")}
}
# run mergelevels regardless of whether the threshold number of levels is exceeded. this way
# we record which levels were present
this <- MergeLevels.CausataVariable( this$variableList[[varname]], this, max.levels )
}
}
# return the data object
return(this)
}
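MergeLevels itself comes from elsewhere in the package; its effect can be sketched as follows (max.levels = 3 assumed; the package implementation is authoritative):

```r
# Toy sketch of merging rare factor levels into "Other".
f <- factor(c("a", "a", "a", "b", "b", "c", "d"))
max.levels <- 3
keep <- names(sort(table(f), decreasing = TRUE))[seq_len(max.levels - 1)]
merged <- factor(ifelse(as.character(f) %in% keep, as.character(f), "Other"))
stopifnot(setequal(levels(merged), c("a", "b", "Other")))
stopifnot(sum(merged == "Other") == 2)
```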
GetVariable.CausataData <- function(this, r.name=NULL, ...) {
for (variable in this$variableList) {
if (variable$name == r.name) {
# exact match found, return variable
return(variable)
} else {
# look for a partial match from a dummy variable produced by model.matrix
# e.g. var.name__APlevel, where we should match var.name__AP
idx <- grep( paste("^", variable$name, sep=""), r.name )
if (length(idx) == 1){
# match found
return(variable)
}
}
}
# if no match was found then throw an error
stop("No matching variable name found for ", r.name)
}
DiscretizeTransform <- function(this) {
column.function <- function(column) {
# if missing values are present then replace them first, before mapping to discrete values
if (!is.null(this$missingReplacement)){
# missing replacement value is set, so replace missing values
column[is.na(column)] <- this$missingReplacement
}
this$binValues[ cut(column, breaks=this$binLimits, include.lowest=TRUE, labels=FALSE, right=this$binRight) ]
}
ColumnarTransformation(this, column.function)
}
ReplaceOutliersTransform <- function(this) {
column.function <- function(column) {
ReplaceOutliers(column, lowerLimit=this$outlierLowerLimit, upperLimit=this$outlierUpperLimit)
}
ColumnarTransformation(this, column.function)
}
CleanNaFromContinuousTransform <- function(this) {
column.function <- function(column) {
column[is.na(column)] <- this$missingReplacement
return(column)
}
ColumnarTransformation(this, column.function)
}
CleanNaFromFactorTransform <- function(this) {
ColumnarTransformation(this, function(column) CleanNaFromFactor(column, this$missingReplacement))
}
MergeLevelsTransform <- function(this) {
stopifnot(is.CausataVariable(this))
unit.of.work <- function(value) {
if (is.na(value)) {
NA
} else if (value %in% this$unmergedLevels) {
value
} else {
"Other"
}
}
return(VectorizedFactorTransformation(this, unit.of.work))
}
# Takes a CausataVariable and a function that operates on a single column vector
# returns a function that operates on a data.frame, and applies the given function to the column vector
# that corresponds to the given CausataVariable
#
ColumnarTransformation <- function(this, column.function) {
function(df) {
#df[,this$name] <- column.function(df[,this$name])
# using set from data.table as it is much more efficient with memory
colnumber <- which(this$name == names(df))
if (length(colnumber)!=1){
# require that one column matches
stop("One column match required but found ", length(colnumber) , " for column ", this$name)
}
set(df, j=colnumber, value=column.function(df[, this$name]))
return(df)
}
}
# Takes a CausataVariable and a function that operates on a single value in a factor,
# returns a function that operates on a data.frame, and applies the given function to the factor
# that corresponds to the given CausataVariable
#
VectorizedFactorTransformation <- function(this, work.unit, desired.levels=NULL) {
function(df) {
col <- df[,this$name]
stopifnot(is.factor(col))
new.col <- as.character(col)
for (i in 1:length(new.col)) {
new.col[i] <- work.unit(new.col[i])
}
new.col <- if (length(desired.levels)) {
factor(new.col, levels=desired.levels)
} else {
factor(new.col)
}
result <- df
result[this$name] <- new.col
return(result)
}
}
# Returns a function that applies the given transforms (functions) in sequence
# Could possibly use Reduce (like a foldleft function)
#
Sequence <- function(functions) {
function(df) {
for (f in functions) {
df <- f(df)
}
return(df)
}
}
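The fold in Sequence() can indeed be expressed with Reduce, as the comment above suggests (toy functions used for illustration):

```r
# Sequence() as a Reduce fold (equivalent behaviour, toy functions).
Sequence2 <- function(functions) {
  function(x) Reduce(function(acc, f) f(acc), functions, x)
}
inc <- function(x) x + 1
dbl <- function(x) x * 2
stopifnot(Sequence2(list(inc, dbl))(3) == 8)   # (3 + 1) * 2
```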
GetTransforms.CausataData <- function(this, ...) {
# iterate over steps, getting a function for each, and the return a function that applies
# each of those transforms in sequence.
#
transforms <- NULL
for (variable in this$variableList) {
transforms <- c(transforms, GetTransformFunctionForVariable(variable))
}
return(Sequence(transforms))
}
GetQuery.CausataData <- function(this, ...) {
this$query
}
GetTransformFunctionForVariable <- function(this) {
stopifnot(is.CausataVariable(this))
transforms <- NULL
for (step in this$steps) {
transforms <- c(transforms, switch (step,
"Discretize"=DiscretizeTransform(this),
"ReplaceOutliers"=ReplaceOutliersTransform(this),
"CleanNaFromFactor"=CleanNaFromFactorTransform(this),
"CleanNaFromContinuous"=CleanNaFromContinuousTransform(this),
"MergeLevels"=MergeLevelsTransform(this),
stop("Unknown transformation")
))
}
return(Sequence(transforms))
}
| /Causata/R/Preprocess.R | no_license | ingted/R-Examples | R | false | false | 23,901 | r | #
# R source code for preprocessing and scoring
#
No_Causata_Name <- "No Causata Name" #
#
# constructors for classes
#
CausataVariable <- function(variableName, values, causata.name=variableName) {
# initialize a Causata Variable object
# - variableName: a string containing the variable name
# - values: a vector of values
variableObject <- list()
class(variableObject) <- "CausataVariable"
variableObject$name <- variableName # variable name
variableObject$causata.name <- causata.name # Name that Causata uses for this variable
variableObject$class <- class(values) # class of data: numeric, integer, factor
variableObject$steps <- c() # list of preprocessing steps executed for this variable
variableObject$outlierUpperLimit <- NULL # upper limit for outlier removal
variableObject$outlierLowerLimit <- NULL # lower limit for outlier removal
variableObject$binRight <- NULL # logical, indicating if the intervals should be closed on the right (and open on the left) or vice versa
variableObject$binLimits <- NULL # limit values for continous bins
variableObject$binValues <- NULL # map continuous values to discrete, e.g. weight of evidence binning
variableObject$missingReplacement <- NULL # missing values will be replaced with this
variableObject$unmergedLevels <- NULL # levels that remained after MergeLevels
variableObject$categorizing.attribute <- NULL # Some input variables are categorized.
variableObject$category.value <- NULL # the category to use for a categorized variable.
variableObject$time.domain <- NULL # The variable time domain
# transformation is a function that takes a vector (column) from a data.frame and applies
# all the transformations to it that have been recorded in this CausataVariable, returning the transformed vector
variableObject$transformation <- identity
return(variableObject)
}
is.CausataVariable <- function(this) inherits(this, "CausataVariable")
CausataData <- function(dataframe, dependent.variable=NULL, query=NULL) {
# initialzie a Causata Data object
# - dataframe: a dataframe containing independent variables
# - dependent.variable: a vector of dv values, or a name of a variable in the data frame that contains dependent variable
# - dvName: the name of the dependent variable
dataObject <- list()
class(dataObject) <- "CausataData"
if (is.null(dependent.variable)){
# if dependent variable argument is missing then require that "dependent.variable" is defined in data frame
dataObject$dvName <- "dependent.variable"
if (!(dataObject$dvName %in% names(dataframe))){
stop("No argument provided for dependent.variable, so a column named 'dependent.variable' must be provided in the data frame.")
}
# copy the dependent variable
dv <- dataframe$dependent.variable
} else if (is.character(dependent.variable)){
# character was provided, test if it's a column in the data frame
if (!(dependent.variable %in% names(dataframe))){
stop("A variable named ", dependent.variable, " must be present in the data frame.")
}
# store the name
dataObject$dvName <- dependent.variable
# copy the data to a variable called dv
dv <- dataframe[[dataObject$dvName]]
} else {
# assume that the dependent variable is an array of values with length matching data frame
# copy the data to a variable called dv
if (!(nrow(dataframe) == length(dependent.variable))){
stop("dependent.variable length ", length(dependent.variable),
" must match rows in dataframe ", nrow(dataframe))
}
if (is.null(names(dependent.variable))){
# no name provided, so give it a default name
dataObject$dvName <- "dependent.variable"
} else {
# name provided, copy it
dataObject$dvName <- names(dependent.variable)
}
# copy the values to the data frame
dataframe[[dataObject$dvName]] <- dependent.variable
dv <- dependent.variable
}
# dv values can't be missing
if (any(is.na(dv))){
stop("Missing values not allowed in dependent variable ", dataObject$dvName)
}
# initialize a variable object for every variable in the data frame
dataObject$variableList <- list()
dataObject$skippedVariables <- c()
for (varname in names(dataframe)) {
# if the dependent variable is in the data frame then skip it, don't create a causata variable for it
if (varname == dataObject$dvName){
next # skip to next variable
}
# find causata name corresponding to this column of data frame
#causata.name <- r.to.causata.name.mapping[ match(varname, names(dataframe)) ]
causata.name <- RToCausataNames(varname)
if (causata.name == No_Causata_Name){
# a variable in the data frame does not have a causata name, e.g. profile_id
# do not create a variable object for this column
dataObject$skippedVariables <- c(dataObject$skippedVariables, varname)
next
} else {
dataObject$variableList[[varname]] <- CausataVariable(
varname, dataframe[[varname]], causata.name)
}
}
# if variables were skipped then generate a warning
if (length(dataObject$skippedVariables) > 0){
warning("CausataData did not create variables for ", length(dataObject$skippedVariables),
" columns that did not conform to variable naming conventions.")
}
dataObject$df <- dataframe
dataObject$query <- query
return(dataObject)
}
# Converts Causata variable names to R variable names.
# For example:
# CausataToRNames(c("total-spend$All Past", "page-view-count$Last 30 Days"))
# returns list("total-spend$All Past"="total.spend__All.Past", "page-view-count$Last 30 Days"="page.view.count__Last.30.Days")
#
CausataToRNames <- function(name.vector) {
safe.names <- make.names(str_replace(as.vector(name.vector), "\\$", "__"), unique=TRUE)
ToRName <- function(index) {
l <- list()
l[name.vector[[index]]] <- safe.names[[index]]
return(l)
}
return(unlist(lapply(1:length(name.vector), ToRName)))
}
# Converts R variable column names to Causata variable names.
# For example:
# RToCausataNames(c("total.spend__All.Past", "page.view.count__Last.30.Days"))
# returns list("total.spend__All.Past"="total-spend$All Past", "page.view.count__Last.30.Days"="page-view-count$Last 30 Days")
#
RToCausataNames <- function(name.vector) {
DotsToDashes <- function(string) return(str_replace_all(string, "\\.", "-"))
DotsToSpaces <- function(string) return(str_replace_all(string, "\\.", " "))
ToCausataName <- function(name) {
parts <- str_split(name, "__")[[1]] # split at double underscore for data exported from SQL
parts.csv <- str_split(name, "_")[[1]] # split at single underscore for data exported from csv
if (length(parts) == 2) {
#stopifnot(length(parts) == 2)
return(paste(DotsToDashes(parts[[1]]), DotsToSpaces(parts[[2]]), sep="$", collapse="$"))
} else if (length(parts.csv) == 3 && nchar(parts.csv[2]) > 0) {
# this variable fits a pattern where there are two underscores with text in between,
# assume this is a csv variable, return the name unaltered
return(name)
} else {
# the name doesn't fit pattern name__interval, so set a default non-match value
warning("The variable named ", name, " does not match Causata naming conventions.")
return(No_Causata_Name)
}
}
# return a vector, not a list, so use as.vector on results of sapply
v <- unlist(lapply(name.vector, ToCausataName))
names(v) <- name.vector
return(v)
}
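# The two name mappings above are plain string substitutions. A minimal base-R
# sketch of the round trip (illustrative only; it uses gsub/make.names instead
# of the stringr calls the package functions rely on):

```r
# Causata -> R: "$" becomes "__", then make.names sanitizes dashes/spaces to dots
to_r <- function(nm) make.names(gsub("$", "__", nm, fixed = TRUE), unique = TRUE)
# R -> Causata: split at "__"; dots revert to dashes in the name, spaces in the interval
to_causata <- function(nm) {
  parts <- strsplit(nm, "__", fixed = TRUE)[[1]]
  paste(gsub(".", "-", parts[1], fixed = TRUE),
        gsub(".", " ", parts[2], fixed = TRUE), sep = "$")
}
r_name <- to_r("total-spend$All Past")  # "total.spend__All.Past"
to_causata(r_name)                      # "total-spend$All Past"
```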
#
# define generic functions
#
CleanNaFromContinuous <- function(x, ...) {
# generic function to replace missing values
UseMethod("CleanNaFromContinuous")
}
CleanNaFromFactor <- function(x, ...){
# generic function to replace missing values
UseMethod("CleanNaFromFactor")
}
Discretize <- function(this, ...){
# generic function to map continuous values to discrete bins
UseMethod("Discretize")
}
GetVariable <- function(this, ...) {
UseMethod("GetVariable")
}
GetTransforms <- function(this, ...) {
UseMethod("GetTransforms")
}
GetQuery <- function(this, ...) {
UseMethod("GetQuery")
}
ReplaceOutliers.CausataData <- function(this, variableName, lowerLimit=NULL, upperLimit=NULL, ...){
# Given a causata data object, replaces outliers with specified values.
stopifnot(variableName %in% names(this$variableList))
# extract the causata variable
causataVariable <- this$variableList[[variableName]]
# store this step in the list of steps
causataVariable$steps <- c(causataVariable$steps, "ReplaceOutliers")
# replace missing values
this$df[[variableName]] <- ReplaceOutliers( this$df[[variableName]], lowerLimit=lowerLimit, upperLimit=upperLimit )
# update the variable object
causataVariable$outlierUpperLimit <- upperLimit
causataVariable$outlierLowerLimit <- lowerLimit
this$variableList[[variableName]] <- causataVariable
# return the data object
return(this)
}
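# A sketch of the clamping that ReplaceOutliers is assumed to perform on the
# numeric column: values beyond the limits are pinned to the limits. The clamp
# helper below is illustrative, not the package function:

```r
clamp <- function(x, lowerLimit = NULL, upperLimit = NULL) {
  # replace values below/above the limits with the limit values themselves
  if (!is.null(lowerLimit)) x[x < lowerLimit] <- lowerLimit
  if (!is.null(upperLimit)) x[x > upperLimit] <- upperLimit
  x
}
clamp(c(-10, 3, 99), lowerLimit = 0, upperLimit = 50)  # 0 3 50
```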
CleanNaFromFactor.CausataVariable <- function(x, dataObject, replacement, ...) {
# given a causata variable object, replaces missing values with BLANK
varname <- x$name
# store this step in the list of steps
x$steps <- c(x$steps, "CleanNaFromFactor")
x$missingReplacement <- replacement
# replace missing values
dataObject$df[[varname]] <- CleanNaFromFactor( dataObject$df[[varname]], replacement=replacement )
# update the variable object
dataObject$variableList[[varname]] <- x
# return the data object
return(dataObject)
}
CleanNaFromFactor.CausataData <- function(x, variableName=NULL, replacement="BLANK", ...) {
# allow users to pass in a bare, unquoted variable name, this will convert to a string
if (!is.null(variableName)){
# a single variable name was provided, confirm that it's in the causataData object
if (!(variableName %in% names(x$variableList))){
# the parsed string doesn't match, throw an error
stop("The variable ", variableName, " was not found in the causataData input.")
}
variableList <- variableName
} else {
# no variable name was provided, so use all of them
variableList <- names(x$variableList)
}
for (varname in variableList) {
# test if this is a factor
if (is.factor(x$df[[varname]])) {
x <- CleanNaFromFactor.CausataVariable( x$variableList[[varname]], x, replacement=replacement )
}
}
# return the data object
return(x)
}
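# The factor cleaning above reduces to adding an explicit level for NA and
# recoding it to the replacement string. A base-R sketch, independent of the
# CleanNaFromFactor implementation:

```r
f <- factor(c("a", NA, "b"))
f <- addNA(f)                           # NA becomes an explicit level
levels(f)[is.na(levels(f))] <- "BLANK"  # rename the NA level to the replacement
as.character(f)                         # "a" "BLANK" "b"
```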
CleanNaFromContinuous.CausataVariable <- function(x, dataObject, method="median", ...) {
# given a causata variable object, replaces missing values in the corresponding column
# get the variable name
varname <- x$name
# store this step in the list of steps
x$steps <- c(x$steps, "CleanNaFromContinuous")
# replace missing values
outList <- CleanNaFromContinuous( dataObject$df[[varname]], method=method, return.replacement=TRUE )
# update the data object and store the value used to replace missing values
dataObject$df[[varname]] <- outList[[1]]
# check if the missing replacement value was already set, which may have happened in Discretize. If it was set then do not reset here.
if (is.null(x$missingReplacement)){
x$missingReplacement <- outList[[2]]
}
# update the variable object
dataObject$variableList[[varname]] <- x
# return the data object
return(dataObject)
}
CleanNaFromContinuous.CausataData <- function(x, variableName=NULL, method="median", ...) {
# allow users to pass in a bare, unquoted variable name, this will convert to a string
if (!is.null(variableName)){
# a single variable name was provided, confirm that it's in the causataData object
if (!(variableName %in% names(x$variableList))){
# the parsed string doesn't match, throw an error
stop("The variable ", variableName, " was not found in the causataData input.")
}
variableList <- variableName
} else {
# no variable name was provided, so use all of them
variableList <- names(x$variableList)
}
for (varname in variableList) {
# test if this is a numeric or integer variable
varClass <- class(x$df[[varname]])
if (any(varClass=="numeric") || any(varClass=="POSIXct") || any(varClass=="integer")) {
x <- CleanNaFromContinuous.CausataVariable(x$variableList[[varname]], x, method=method)
}
}
return(x)
}
Discretize.CausataData <- function(this, variableName, breaks, discrete.values, verbose=FALSE, ...) {
# Given a causata data object, discretizes a continuous variable into the given discrete values.
stopifnot(variableName %in% names(this$variableList))
# copy variable from list
varObject <- this$variableList[[variableName]]
# require that this variable is numeric
stopifnot(varObject$class %in% c("numeric","integer"))
# ensure that missing values were not replaced before this step, missing values should not be replaced before Discretize
if ("CleanNaFromContinuous" %in% varObject$steps) {
stop("The CleanNaFromContinuous step must not be executed before the Discretize step for variable ", variableName)
}
# ensure that outliers were replaced before this step
if (!("ReplaceOutliers" %in% varObject$steps)){
stop("The ReplaceOutliers step must be run before the Discretize step for variable ", variableName)
}
# require that the outlier replacement upper limit is <= the highest bin value
if (is.null(varObject$outlierUpperLimit) || varObject$outlierUpperLimit > breaks[length(breaks)]){
stop("The outlier upper limit is not set or is greater than the last breaks value in ", variableName)
}
if (is.null(varObject$outlierLowerLimit)){
stop("The outlier lower limit must be set before the Discretize step for variable ",variableName)
}
#
# check the length of inputs and for missing values
#
if (any(is.na(this$df[[variableName]]))){
#
# missing values are present in numeric variable, must have N breaks and N discrete values; an extra bin for missing values is added below
#
if (length(breaks) != length(discrete.values)){
stop("Discretize requires number of breaks to match number of discrete values when missing values are present for variable ", variableName,
", found ", length(breaks), " breaks and ", length(discrete.values), " discrete values.")
}
# create an artificial bin for missing values, make it above the max value in the data
newbreak <- 0.5 * (max(breaks) - min(breaks)) + max(breaks)
breaks <- c(breaks, newbreak)
# map missing values to the new last break, make it slightly smaller than last artificial bin boundary
varObject$missingReplacement <- newbreak * 0.99
this$df[[variableName]] <- CleanNaFromContinuous(this$df[[variableName]], replacement.value = varObject$missingReplacement)
# update the variable object
varObject$binLimits <- breaks
varObject$binValues <- discrete.values
# set a label to be appended to the last level name for verbose output
last.level.label.append <- " (Missing)" # append missing label since last level contains missing values
} else {
#
# no missing values present, must have N+1 breaks and N discrete values
#
if (length(breaks) != (length(discrete.values)+1)){
stop("Discretize requires number of breaks N+1 and N discrete values for variable ", variableName,
", found ", length(breaks), " breaks and ", length(discrete.values), " discrete values.")
}
# no missing values found, copy over breaks and woe levels as normal
# update the variable object
varObject$binLimits <- breaks
varObject$binValues <- discrete.values
# set a label to be appended to the last level name for verbose output
last.level.label.append <- "" # append blank since the last level does not contain missing values
}
# store this step in the list of steps
varObject$steps <- c(varObject$steps, "Discretize")
# set bins to right, closed on right and open on left, default for cut command
varObject$binRight <- TRUE
# use cut to discretize the variable
fiv <- cut(this$df[[variableName]], breaks=breaks, include.lowest=TRUE)
# replace continuous values with discrete values in the causataData object
discrete.variable <- rep(0, nrow(this$df))
level.vec <- levels(fiv)
if (verbose) {cat("Mapping values to", length(discrete.values), "discrete bins, n is number of values in bins:\n")}
# loop for each level in the discretized data
for (i in 1:length(discrete.values)){
idx <- level.vec[i] == fiv
if (sum(idx)>0){discrete.variable[idx] <- discrete.values[i]}
# print message if verbose flag set, add label to level with missing values if present
if (verbose) {cat(" ", level.vec[i], "->", discrete.values[i], " n =", sum(idx),
if(i==length(discrete.values)) last.level.label.append else "", "\n")}
}
this$df[[variableName]] <- discrete.variable
# update the causata variable and return the causata data object
this$variableList[[variableName]] <- varObject
return(this)
}
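# The core of Discretize is cut() followed by a lookup from bin index to
# discrete value (e.g. a weight-of-evidence value per bin). A self-contained
# sketch with made-up breaks and values:

```r
x <- c(1, 5, 12, 30)
breaks <- c(0, 10, 20, 40)            # N+1 breaks ...
discrete.values <- c(-0.5, 0.1, 0.7)  # ... for N bins
# labels=FALSE returns the bin index for each value; bins are closed on the right
idx <- cut(x, breaks = breaks, include.lowest = TRUE, labels = FALSE)
discrete.values[idx]                  # -0.5 -0.5 0.1 0.7
```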
MergeLevels.CausataVariable <- function(this, causataData, max.levels, ...) {
# Given a Causata Variable object for a factor, merges the smallest levels
# in the factor so that the total number of levels does not exceed max.levels.
# get the variable name
varname <- this$name
# store this step in the list of steps
this$steps <- c(this$steps, "MergeLevels")
# Store the levels before merging
levels1 <- levels(causataData$df[[varname]])
# merge levels and update the data object
causataData$df[[varname]] <- MergeLevels(causataData$df[[varname]], max.levels)
# store a vector of the levels that were not merged into "Other" (all other values become "Other")
this$unmergedLevels <- levels(causataData$df[[varname]])
# update the variable object stored in the data object
causataData$variableList[[varname]] <- this
# return the data object
return(causataData)
}
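# Level merging keeps the most frequent levels and folds the rest into "Other".
# A base-R sketch of the idea (the package's MergeLevels may break frequency
# ties differently):

```r
f <- factor(c("a", "a", "a", "b", "b", "c"))
max.levels <- 2
# keep the max.levels most frequent levels, recode everything else as "Other"
keep <- names(sort(table(f), decreasing = TRUE))[seq_len(max.levels)]
levels(f)[!levels(f) %in% keep] <- "Other"
levels(f)  # "a" "b" "Other"
```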
MergeLevels.CausataData <- function(this, variableName=NULL, max.levels, other.name="Other", verbose=FALSE, ...) {
# given a Causata Data object, merges levels in all of the factors where
# the number of levels exceeds max.levels
if (!is.null(variableName)){
# a single variable name was provided, confirm that it's in the causataData object
stopifnot(variableName %in% names(this$variableList))
variableList <- variableName
} else {
# no variable name was provided, so use all of them
variableList <- names(this$variableList)
}
if (verbose) {cat("\nMerging levels in factors:\n")}
for (varname in variableList) {
# test if this is a factor
if (is.factor(this$df[[varname]])) {
# it is a factor, check the number of levels
numLevels <- length(levels(this$df[[varname]]))
# check if the number of levels exceeds a limit
if (numLevels > max.levels){
# this factor exceeds the limit, merge levels
if (verbose) {cat(" ", varname, "\n")}
}
# run mergelevels regardless of whether the threshold number of levels is exceeded. this way
# we record which levels were present
this <- MergeLevels.CausataVariable( this$variableList[[varname]], this, max.levels )
}
}
# return the data object
return(this)
}
GetVariable.CausataData <- function(this, r.name=NULL, ...) {
for (variable in this$variableList) {
if (variable$name == r.name) {
# exact match found, return variable
return(variable)
} else {
# look for a partial match from a dummy variable produced by model.matrix
# e.g. var.name__APlevel, where we should match var.name__AP
idx <- grep( paste("^", variable$name, sep=""), r.name )
if (length(idx) == 1){
# match found
return(variable)
}
}
}
# if no match was found then throw an error
stop("No matching variable name found for ", r.name)
}
DiscretizeTransform <- function(this) {
column.function <- function(column) {
# if missing values are present then replace them first, before mapping to discrete values
if (!is.null(this$missingReplacement)){
# missing replacement value is set, so replace missing values
column[is.na(column)] <- this$missingReplacement
}
this$binValues[ cut(column, breaks=this$binLimits, include.lowest=TRUE, labels=FALSE, right=this$binRight) ]
}
ColumnarTransformation(this, column.function)
}
ReplaceOutliersTransform <- function(this) {
column.function <- function(column) {
ReplaceOutliers(column, lowerLimit=this$outlierLowerLimit, upperLimit=this$outlierUpperLimit)
}
ColumnarTransformation(this, column.function)
}
CleanNaFromContinuousTransform <- function(this) {
column.function <- function(column) {
column[is.na(column)] <- this$missingReplacement
return(column)
}
ColumnarTransformation(this, column.function)
}
CleanNaFromFactorTransform <- function(this) {
ColumnarTransformation(this, function(column) CleanNaFromFactor(column, this$missingReplacement))
}
MergeLevelsTransform <- function(this) {
stopifnot(is.CausataVariable(this))
unit.of.work <- function(value) {
if (is.na(value)) {
NA
} else if (value %in% this$unmergedLevels) {
value
} else {
"Other"
}
}
return(VectorizedFactorTransformation(this, unit.of.work))
}
# Takes a CausataVariable and a function that operates on a single column vector
# returns a function that operates on a data.frame, and applies the given function to the column vector
# that corresponds to the given CausataVariable
#
ColumnarTransformation <- function(this, column.function) {
function(df) {
#df[,this$name] <- column.function(df[,this$name])
# using set from data.table as it is much more efficient with memory
colnumber <- which(this$name == names(df))
if (length(colnumber)!=1){
# require that one column matches
stop("One column match required but found ", length(colnumber) , " for column ", this$name)
}
set(df, j=colnumber, value=column.function(df[, this$name]))
return(df)
}
}
# Takes a CausataVariable and a function that operates on a single value in a factor,
# returns a function that operates on a data.frame, and applies the given function to the factor
# that corresponds to the given CausataVariable
#
VectorizedFactorTransformation <- function(this, work.unit, desired.levels=NULL) {
function(df) {
col <- df[,this$name]
stopifnot(is.factor(col))
new.col <- as.character(col)
for (i in 1:length(new.col)) {
new.col[i] <- work.unit(new.col[i])
}
new.col <- if (length(desired.levels)) {
factor(new.col, levels=desired.levels)
} else {
factor(new.col)
}
result <- df
result[this$name] <- new.col
return(result)
}
}
# Returns a function that applies the given transforms (functions) in sequence
# Could possibly use Reduce (like a foldleft function)
#
Sequence <- function(functions) {
function(df) {
for (f in functions) {
df <- f(df)
}
return(df)
}
}
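# Sequence is function composition over a list: the returned closure threads
# the data frame through each transform in order. A small worked example
# (Sequence restated so the example is self-contained):

```r
Sequence <- function(functions) {
  function(df) { for (f in functions) df <- f(df); df }
}
add1 <- function(df) { df$x <- df$x + 1; df }
dbl  <- function(df) { df$x <- df$x * 2; df }
pipeline <- Sequence(c(add1, dbl))
pipeline(data.frame(x = 1))$x  # (1 + 1) * 2 = 4
```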
GetTransforms.CausataData <- function(this, ...) {
# iterate over steps, getting a function for each, and the return a function that applies
# each of those transforms in sequence.
#
transforms <- NULL
for (variable in this$variableList) {
transforms <- c(transforms, GetTransformFunctionForVariable(variable))
}
return(Sequence(transforms))
}
GetQuery.CausataData <- function(this, ...) {
this$query
}
GetTransformFunctionForVariable <- function(this) {
stopifnot(is.CausataVariable(this))
transforms <- NULL
for (step in this$steps) {
transforms <- c(transforms, switch (step,
"Discretize"=DiscretizeTransform(this),
"ReplaceOutliers"=ReplaceOutliersTransform(this),
"CleanNaFromFactor"=CleanNaFromFactorTransform(this),
"CleanNaFromContinuous"=CleanNaFromContinuousTransform(this),
"MergeLevels"=MergeLevelsTransform(this),
stop("Unknown transformation")
))
}
return(Sequence(transforms))
}
# additional polyRAD functions that perform calculations.
# various internal functions.
# internal function to take allele frequencies and get prior probs under HWE
# freqs is a vector of allele frequencies
# ploidy is a vector indicating the ploidy
.HWEpriors <- function(freqs, ploidy, selfing.rate){
if(length(unique(ploidy)) != 1){
stop("All subgenomes must be same ploidy")
}
if(selfing.rate < 0 || selfing.rate > 1){
stop("selfing.rate must be between zero and one.")
}
nsubgen <- length(ploidy)
if(nsubgen == 1){ # for diploid/autopolyploid, or single subgenome with recursion
priors <- matrix(NA, nrow = ploidy+1, ncol = length(freqs),
dimnames = list(as.character(0:ploidy), names(freqs)))
antifreqs <- 1 - freqs
# genotype probabilities under random mating
for(i in 0:ploidy){
priors[i+1,] <- choose(ploidy, i) * freqs ^ i * antifreqs ^ (ploidy - i)
}
# adjust for self fertilization if applicable
if(selfing.rate > 0 && selfing.rate < 1){
sm <- .selfmat(ploidy)
# Equation 6 from de Silva et al. 2005 (doi:10.1038/sj.hdy.6800728)
priors <- (1 - selfing.rate) *
solve(diag(ploidy + 1) - selfing.rate * sm, priors)
rownames(priors) <- as.character(0:ploidy)
}
if(selfing.rate == 1){
priors <- matrix(0, nrow = ploidy+1, ncol = length(freqs),
dimnames = list(as.character(0:ploidy), names(freqs)))
priors[1,] <- antifreqs
priors[ploidy + 1, ] <- freqs
}
} else {
remainingfreqs <- freqs
priors <- matrix(1, nrow = 1, ncol = length(freqs))
for(pld in ploidy){
thesefreqs <- remainingfreqs
# allele frequencies partitioned for this subgenome
thesefreqs[thesefreqs > 1/nsubgen] <- 1/nsubgen
# allele frequencies reserved for remaining subgenomes
remainingfreqs <- remainingfreqs - thesefreqs
# priors just for this subgenome
thesepriors <- .HWEpriors(nsubgen * thesefreqs, pld, selfing.rate)
# multiply by priors already calculated to get overall priors
oldpriors <- priors
newpld <- dim(thesepriors)[1] + dim(oldpriors)[1] - 2
priors <- matrix(0, nrow = newpld + 1, ncol = length(freqs),
dimnames = list(as.character(0:newpld),
names(freqs)))
for(i in 1:dim(oldpriors)[1]){
for(j in 1:dim(thesepriors)[1]){
thisalcopy <- i + j - 2
priors[thisalcopy + 1,] <- priors[thisalcopy + 1,] +
oldpriors[i,] * thesepriors[j,]
}
}
}
}
return(priors)
}
# internal function to multiply genotype priors by genotype likelihood
.priorTimesLikelihood <- function(object){
if(!"RADdata" %in% class(object)){
stop("RADdata object needed for .priorTimesLikelihood")
}
if(is.null(object$priorProb)){
stop("Genotype prior probabilities must be added first.")
}
if(is.null(object$genotypeLikelihood)){
stop("Genotype likelihoods must be added first.")
}
ploidytotpriors <- sapply(object$priorProb, function(x) dim(x)[1] - 1)
ploidytotlikeli <- sapply(object$genotypeLikelihood, function(x) dim(x)[1] - 1)
results <- list()
length(results) <- length(object$priorProb)
for(i in 1:length(object$priorProb)){
j <- which(ploidytotlikeli == ploidytotpriors[i])
stopifnot(length(j) == 1)
if(attr(object, "priorType") == "population"){
# expand priors out by individuals
thispriorarr <- array(object$priorProb[[i]],
dim = c(dim(object$priorProb[[i]])[1], 1,
dim(object$priorProb[[i]])[2]))[,rep(1, nTaxa(object)),]
dimnames(thispriorarr) <- dimnames(object$genotypeLikelihood[[j]])
} else {
thispriorarr <- object$priorProb[[i]]
}
stopifnot(identical(dim(thispriorarr), dim(object$genotypeLikelihood[[j]])))
results[[i]] <- thispriorarr * object$genotypeLikelihood[[j]]
# factor in LD if present
if(!is.null(object$priorProbLD)){
results[[i]] <- results[[i]] * object$priorProbLD[[i]]
}
# find any that total to zero (within taxon x allele) and replace with priors
totzero <- which(colSums(results[[i]]) == 0)
if(length(totzero) > 0){
for(a in 1:dim(thispriorarr)[1]){
results[[i]][a,,][totzero] <- thispriorarr[a,,][totzero]
}
}
# in a mapping population, don't use priors for parents
if(!is.null(attr(object, "donorParent")) &&
!is.null(attr(object, "recurrentParent"))){
parents <- c(GetDonorParent(object), GetRecurrentParent(object))
results[[i]][, parents, ] <- object$genotypeLikelihood[[j]][, parents, ]
}
}
return(results)
}
# internal function to get best estimate of allele frequencies depending
# on what parameters are available
.alleleFreq <- function(object, type = "choose", taxaToKeep = GetTaxa(object)){
if(!"RADdata" %in% class(object)){
stop("RADdata object needed for .alleleFreq")
}
if(!type %in% c("choose", "individual frequency", "posterior prob",
"depth ratio")){
stop("Type must be 'choose', 'individual frequency', 'posterior prob', or 'depth ratio'.")
}
if(type == "individual frequency" && is.null(object$alleleFreqByTaxa)){
stop("Need alleleFreqByTaxa if type = 'individual frequency'.")
}
if(type == "posterior prob" &&
(is.null(object$posteriorProb) || is.null(object$ploidyChiSq))){
stop("Need posteriorProb and ploidyChiSq if type = 'posterior prob'.")
}
if(type %in% c("choose", "individual frequency") &&
!is.null(object$alleleFreqByTaxa)){
outFreq <- colMeans(object$alleleFreqByTaxa[taxaToKeep,,drop = FALSE],
na.rm = TRUE)
attr(outFreq, "type") <- "individual frequency"
} else {
if(type %in% c("choose", "posterior prob") &&
CanDoGetWeightedMeanGeno(object)){
wmgeno <- GetWeightedMeanGenotypes(object, minval = 0, maxval = 1,
omit1allelePerLocus = FALSE)
outFreq <- colMeans(wmgeno[taxaToKeep,,drop = FALSE], na.rm = TRUE)
attr(outFreq, "type") <- "posterior prob"
} else {
if(type %in% c("choose", "depth ratio")){
outFreq <- colMeans(object$depthRatio[taxaToKeep,,drop = FALSE],
na.rm = TRUE)
attr(outFreq, "type") <- "depth ratio"
}
}
}
return(outFreq)
}
# Internal function to get a consensus between two DNA sequences, using
# IUPAC ambiguity codes. Should work for either character strings
# or DNAStrings. Puts in ambiguity codes any time there is not 100%
# consensus.
.mergeNucleotides <- function(seq1, seq2){
split1 <- strsplit(seq1, split = "")[[1]]
split2 <- strsplit(seq2, split = "")[[1]]
splitNew <- character(length(split1))
splitNew[split1 == split2] <- split1[split1 == split2]
IUPAC_key <- list(M = list(c('A', 'C'), c('A', 'M'), c('C', 'M')),
R = list(c('A', 'G'), c('A', 'R'), c('G', 'R')),
W = list(c('A', 'T'), c('A', 'W'), c('T', 'W')),
S = list(c('C', 'G'), c('C', 'S'), c('G', 'S')),
Y = list(c('C', 'T'), c('C', 'Y'), c('T', 'Y')),
K = list(c('G', 'T'), c('G', 'K'), c('T', 'K')),
V = list(c('A', 'S'), c('C', 'R'), c('G', 'M'),
c('A', 'V'), c('C', 'V'), c('G', 'V')),
H = list(c('A', 'Y'), c('C', 'W'), c('T', 'M'),
c('A', 'H'), c('C', 'H'), c('T', 'H')),
D = list(c('A', 'K'), c('G', 'W'), c('T', 'R'),
c('A', 'D'), c('G', 'D'), c('T', 'D')),
B = list(c('C', 'K'), c('G', 'Y'), c('T', 'S'),
c('C', 'B'), c('G', 'B'), c('T', 'B')),
# throw out deletions with respect to reference
A = list(c('A', '-')), C = list(c('C', '-')),
G = list(c('G', '-')), T = list(c('T', '-')),
# throw out insertions with respect to reference
. = list(c('A', '.'), c('C', '.'), c('G', '.'),
c('T', '.'), c('M', '.'), c('R', '.'),
c('W', '.'), c('S', '.'), c('Y', '.'),
c('K', '.'), c('V', '.'), c('H', '.'),
c('D', '.'), c('B', '.'), c('N', '.')))
for(i in which(split1 != split2)){
nucset <- c(split1[i], split2[i])
splitNew[i] <- "N"
for(nt in names(IUPAC_key)){
for(matchset in IUPAC_key[[nt]]){
if(setequal(nucset, matchset)){
splitNew[i] <- nt
break
}
}
}
}
out <- paste(splitNew, collapse = "")
return(out)
}
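# The lookup above is equivalent to mapping each unordered base pair to its
# IUPAC code. A tiny illustrative subset covering only the two-base codes
# (the full table above also handles codes merged with codes, gaps, and dots):

```r
iupac2 <- c(AC = "M", AG = "R", AT = "W", CG = "S", CT = "Y", GT = "K")
merge_base <- function(a, b) {
  if (a == b) return(a)
  # order-independent lookup; anything not in the subset falls back to "N"
  code <- iupac2[paste(sort(c(a, b)), collapse = "")]
  if (is.na(code)) "N" else unname(code)
}
merge_base("C", "G")  # "S"
merge_base("A", "A")  # "A"
```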
# substitution matrix for distances between alleles in polyRAD.
# Any partial match based on ambiguity counts as a complete match.
polyRADsubmat <- matrix(c(0,1,1,1, 0,0,0,1,1,1, 0,0,0,1, 0,1,1, # A
1,0,1,1, 0,1,1,0,0,1, 0,0,1,0, 0,1,1, # C
1,1,0,1, 1,0,1,0,1,0, 0,1,0,0, 0,1,1, # G
1,1,1,0, 1,1,0,1,0,0, 1,0,0,0, 0,1,1, # T
0,0,1,1, 0,0,0,0,0,1, 0,0,0,0, 0,1,1, # M
0,1,0,1, 0,0,0,0,1,0, 0,0,0,0, 0,1,1, # R
0,1,1,0, 0,0,0,1,0,0, 0,0,0,0, 0,1,1, # W
1,0,0,1, 0,0,1,0,0,0, 0,0,0,0, 0,1,1, # S
1,0,1,0, 0,1,0,0,0,0, 0,0,0,0, 0,1,1, # Y
1,1,0,0, 1,0,0,0,0,0, 0,0,0,0, 0,1,1, # K
0,0,0,1, 0,0,0,0,0,0, 0,0,0,0, 0,1,1, # V
0,0,1,0, 0,0,0,0,0,0, 0,0,0,0, 0,1,1, # H
0,1,0,0, 0,0,0,0,0,0, 0,0,0,0, 0,1,1, # D
1,0,0,0, 0,0,0,0,0,0, 0,0,0,0, 0,1,1, # B
0,0,0,0, 0,0,0,0,0,0, 0,0,0,0, 0,0,0, # N
1,1,1,1, 1,1,1,1,1,1, 1,1,1,1, 0,0,1, # -
1,1,1,1, 1,1,1,1,1,1, 1,1,1,1, 0,1,0 # .
),
nrow = 17, ncol = 17,
dimnames = list(c('A', 'C', 'G', 'T', 'M', 'R', 'W',
'S', 'Y', 'K', 'V', 'H', 'D', 'B',
'N', '-', '.'),
c('A', 'C', 'G', 'T', 'M', 'R', 'W',
'S', 'Y', 'K', 'V', 'H', 'D', 'B',
'N', '-', '.')))
#### Getting genotype priors in mapping pops and simulating selfing ####
# function to generate all gamete genotypes for a set of genotypes.
# alCopy is a vector of values ranging from zero to ploidy indicating
# allele copy number.
# ploidy is the ploidy
# rnd indicates which round of the recursive algorithm we are on
# Output is a matrix. Alleles are in columns, which should be treated
# independently. Rows indicate gametes, with values indicating how many
# copies of the allele that gamete has.
.makeGametes <- function(alCopy, ploidy, rnd = ploidy[1]/2){
if(rnd %% 1 != 0 || ploidy[1] < 2){
stop("Even numbered ploidy needed to simulate gametes.")
}
if(length(unique(ploidy)) != 1){
stop("Currently all subgenome ploidies must be equal.")
}
if(any(alCopy > sum(ploidy), na.rm = TRUE)){
stop("Cannot have alCopy greater than ploidy.")
}
if(length(ploidy) > 1){ # allopolyploids
thisAl <- matrix(0L, nrow = 1, ncol = length(alCopy))
for(pl in ploidy){
# get allele copies for this isolocus. Currently simplified;
# minimizes the number of isoloci to which an allele can belong.
### update in the future ###
thisCopy <- alCopy
thisCopy[thisCopy > pl] <- pl
alCopy <- alCopy - thisCopy
# make gametes for this isolocus (recursive, goes to autopoly version)
thisIsoGametes <- .makeGametes(thisCopy, pl, rnd)
# add to current gamete set
nGamCurr <- dim(thisAl)[1]
nGamNew <- dim(thisIsoGametes)[1]
thisAl <- thisAl[rep(1:nGamCurr, each = nGamNew),, drop = FALSE] +
thisIsoGametes[rep(1:nGamNew, times = nGamCurr),, drop = FALSE]
}
} else { # autopolyploids
thisAl <- sapply(alCopy, function(x){
if(is.na(x)){
rep(NA, ploidy)
} else {
c(rep(0L, ploidy - x), rep(1L, x))
}})
if(rnd > 1){
# recursively add alleles to gametes for polyploid
nReps <- factorial(ploidy-1)/factorial(ploidy - rnd)
thisAl <- thisAl[rep(1:ploidy, each = nReps),, drop = FALSE]
for(i in 1:ploidy){
thisAl[((i-1)*nReps+1):(i*nReps),] <-
thisAl[((i-1)*nReps+1):(i*nReps),, drop=FALSE] +
.makeGametes(alCopy - thisAl[i*nReps,, drop = FALSE], ploidy - 1, rnd - 1)
}
}
}
return(thisAl)
}
# function to get probability of a gamete with a given allele copy number,
# given output from makeGametes
.gameteProb <- function(makeGamOutput, ploidy){
outmat <- sapply(0:(sum(ploidy)/2),
function(x) colMeans(makeGamOutput == x))
if(ncol(makeGamOutput) == 1){
outmat <- matrix(outmat, nrow = sum(ploidy)/2 + 1, ncol = 1)
} else {
outmat <- t(outmat)
}
return(outmat)
}
# function to take two sets of gamete probabilities (from two parents)
# and output genotype probabilities
.progenyProb <- function(gamProb1, gamProb2){
outmat <- matrix(0L, nrow = dim(gamProb1)[1] + dim(gamProb2)[1] - 1,
ncol = dim(gamProb1)[2])
for(i in 1:dim(gamProb1)[1]){
copy1 <- i - 1
for(j in 1:dim(gamProb2)[1]){
copy2 <- j - 1
thisrow <- copy1 + copy2 + 1
outmat[thisrow,] <- outmat[thisrow,] + gamProb1[i,] * gamProb2[j,]
}
}
return(outmat)
}
# function to get gamete probabilities for a population, given genotype priors
.gameteProbPop <- function(priors, ploidy){
# get gamete probs for all possible genotypes
possGamProb <- .gameteProb(.makeGametes(1:dim(priors)[1] - 1, ploidy),ploidy)
# output matrix
outmat <- possGamProb %*% priors
return(outmat)
}
# function just to make selfing matrix.
# Every parent genotype is a column. The frequency of its offspring after
# selfing is in rows.
.selfmat <- function(ploidy){
# get gamete probs for all possible genotypes
possGamProb <- .gameteProb(.makeGametes(0:ploidy, ploidy), ploidy)
# genotype probabilities after selfing each possible genotype
outmat <- matrix(0, nrow = ploidy + 1, ncol = ploidy + 1)
for(i in 1:(ploidy + 1)){
outmat[,i] <- .progenyProb(possGamProb[,i,drop = FALSE],
possGamProb[,i,drop = FALSE])
}
return(outmat)
}
# function to adjust genotype probabilities from one generation of selfing
.selfPop <- function(priors, ploidy){
# progeny probs for all possible genotypes, selfed
possProgenyProb <- .selfmat(ploidy)
# multiple progeny probs by prior probabilities of those genotypes
outmat <- possProgenyProb %*% priors
return(outmat)
}
# Functions used by HindHeMapping ####
# Function for making multiallelic gametes. Autopolyploid segretation only.
# geno is a vector of length ploidy with values indicating allele identity.
# rnd is how many alleles will be in the gamete.
.makeGametes2 <- function(geno, rnd = length(geno)/2){
nal <- length(geno) # number of remaining alleles in genotype
if(rnd == 1){
return(matrix(geno, nrow = nal, ncol = 1))
} else {
outmat <- matrix(0L, nrow = choose(nal, rnd), ncol = rnd)
currrow <- 1
for(i in 1:(nal - rnd + 1)){
submat <- .makeGametes2(geno[-(1:i)], rnd - 1)
theserows <- currrow + 1:nrow(submat) - 1
currrow <- currrow + nrow(submat)
outmat[theserows,1] <- geno[i]
outmat[theserows,2:ncol(outmat)] <- submat
}
return(outmat)
}
}
# take output of .makeGametes2 and get progeny probabilities.
# Return a list, where the first item is a matrix with progeny genotypes in
# rows, and the second item is a vector of corresponding probabilities.
.progenyProb2 <- function(gam1, gam2){
allprog <- cbind(gam1[rep(1:nrow(gam1), each = nrow(gam2)),],
gam2[rep(1:nrow(gam2), times = nrow(gam1)),])
allprog <- t(apply(allprog, 1, sort)) # could be sped up with Rcpp
# set up output
outprog <- matrix(allprog[1,], nrow = 1, ncol = ncol(allprog))
outprob <- 1/nrow(allprog)
# tally up unique genotypes
for(i in 2:nrow(allprog)){
thisgen <- allprog[i,]
existsYet <- FALSE
for(j in 1:nrow(outprog)){
if(identical(thisgen, outprog[j,])){
outprob[j] <- outprob[j] + 1/nrow(allprog)
existsYet <- TRUE
break
}
}
if(!existsYet){
outprog <- rbind(outprog, thisgen, deparse.level = 0)
outprob <- c(outprob, 1/nrow(allprog))
}
}
return(list(outprog, outprob))
}
# Consolidate a list of outputs from .progenyProb2 into a single output.
# It is assumed the all probabilities in pplist have been multiplied by
# some factor so they sum to one.
.consolProgProb <- function(pplist){
outprog <- pplist[[1]][[1]]
outprob <- pplist[[1]][[2]]
if(length(pplist) > 1){
for(i in 2:length(pplist)){
for(pr1 in 1:nrow(pplist[[i]][[1]])){
thisgen <- pplist[[i]][[1]][pr1,]
existsYet <- FALSE
for(pr2 in 1:nrow(outprog)){
if(identical(thisgen, outprog[pr2,])){
outprob[pr2] <- outprob[pr2] + pplist[[i]][[2]][pr1]
existsYet <- TRUE
break
}
}
if(!existsYet){
outprog <- rbind(outprog, thisgen, deparse.level = 0)
outprob <- c(outprob, pplist[[i]][[2]][pr1])
}
}
}
}
return(list(outprog, outprob))
}
# Probability that, depending on the generation, two alleles sampled in a progeny
# are different locus copies from the same parent, or from different parents.
# If there is backcrossing, parent 1 is the recurrent parent.
# (No double reduction.)
# Can take a few seconds to run if there are many generations, but it is only
# intended to be run once for the whole dataset.
.progAlProbs <- function(ploidy, gen_backcrossing, gen_selfing){
# set up parents; number indexes locus copy
p1 <- 1:ploidy
p2 <- p1 + ploidy
# create F1 progeny probabilities
progprob <- .progenyProb2(.makeGametes2(p1), .makeGametes2(p2))
# backcross
if(gen_backcrossing > 0){
gam1 <- .makeGametes2(p1)
for(g in 1:gen_backcrossing){
allprogprobs <- lapply(1:nrow(progprob[[1]]),
function(x){
pp <- .progenyProb2(gam1, .makeGametes2(progprob[[1]][x,]))
pp[[2]] <- pp[[2]] * progprob[[2]][x]
return(pp)
})
progprob <- .consolProgProb(allprogprobs)
}
}
# self-fertilize
if(gen_selfing > 0){
for(g in 1:gen_selfing){
allprogprobs <- lapply(1:nrow(progprob[[1]]),
function(x){
gam <- .makeGametes2(progprob[[1]][x,])
pp <- .progenyProb2(gam, gam)
pp[[2]] <- pp[[2]] * progprob[[2]][x]
return(pp)
})
progprob <- .consolProgProb(allprogprobs)
}
}
# total probability that (without replacement, from individual progeny):
diffp1 <- 0 # two different locus copies, both from parent 1, are sampled
diffp2 <- 0 # two different locus copies, both from parent 2, are sampled
diff12 <- 0 # locus copies from two different parents are sampled
ncombo <- choose(ploidy, 2) # number of ways to choose 2 alleles from a genotype
# examine each progeny genotype
for(p in 1:nrow(progprob[[1]])){
thisgen <- progprob[[1]][p,]
thisprob <- progprob[[2]][p]
for(m in 1:(ploidy - 1)){
al1 <- thisgen[m]
for(n in (m + 1):ploidy){
al2 <- thisgen[n]
if(al1 == al2) next
if((al1 %in% p1) && (al2 %in% p1)){
diffp1 <- diffp1 + (thisprob / ncombo)
next
}
if((al1 %in% p2) && (al2 %in% p2)){
diffp2 <- diffp2 + (thisprob / ncombo)
next
}
if(((al1 %in% p1) && (al2 %in% p2)) ||
((al2 %in% p1) && (al1 %in% p2))){
diff12 <- diff12 + (thisprob / ncombo)
next
}
stop("Allele indexing messed up.")
}
}
}
return(c(diffp1, diffp2, diff12))
}
| /R/calculations.R | no_license | quanrd/polyRAD | R | false | false | 20,571 | r | # additional polyRAD functions that perform calculations.
# various internal functions.
# internal function to take allele frequencies and get prior probs under HWE
# freqs is a vector of allele frequencies
# ploidy is a vector indicating the ploidy
.HWEpriors <- function(freqs, ploidy, selfing.rate){
if(length(unique(ploidy)) != 1){
stop("All subgenomes must be same ploidy")
}
if(selfing.rate < 0 || selfing.rate > 1){
stop("selfing.rate must not be less than zero or more than one.")
}
nsubgen <- length(ploidy)
if(nsubgen == 1){ # for diploid/autopolyploid, or single subgenome with recursion
priors <- matrix(NA, nrow = ploidy+1, ncol = length(freqs),
dimnames = list(as.character(0:ploidy), names(freqs)))
antifreqs <- 1 - freqs
# genotype probabilities under random mating
for(i in 0:ploidy){
priors[i+1,] <- choose(ploidy, i) * freqs ^ i * antifreqs ^ (ploidy - i)
}
# adjust for self fertilization if applicable
if(selfing.rate > 0 && selfing.rate < 1){
sm <- .selfmat(ploidy)
# Equation 6 from de Silva et al. 2005 (doi:10.1038/sj.hdy.6800728)
priors <- (1 - selfing.rate) *
solve(diag(ploidy + 1) - selfing.rate * sm, priors)
rownames(priors) <- as.character(0:ploidy)
}
if(selfing.rate == 1){
priors <- matrix(0, nrow = ploidy+1, ncol = length(freqs),
dimnames = list(as.character(0:ploidy), names(freqs)))
priors[1,] <- antifreqs
priors[ploidy + 1, ] <- freqs
}
} else {
remainingfreqs <- freqs
priors <- matrix(1, nrow = 1, ncol = length(freqs))
for(pld in ploidy){
thesefreqs <- remainingfreqs
# allele frequencies partitioned for this subgenome
thesefreqs[thesefreqs > 1/nsubgen] <- 1/nsubgen
# allele frequencies reserved for remaining subgenomes
remainingfreqs <- remainingfreqs - thesefreqs
# priors just for this subgenome
thesepriors <- .HWEpriors(nsubgen * thesefreqs, pld, selfing.rate)
# multiply by priors already calculated to get overall priors
oldpriors <- priors
newpld <- dim(thesepriors)[1] + dim(oldpriors)[1] - 2
priors <- matrix(0, nrow = newpld + 1, ncol = length(freqs),
dimnames = list(as.character(0:newpld),
names(freqs)))
for(i in 1:dim(oldpriors)[1]){
for(j in 1:dim(thesepriors)[1]){
thisalcopy <- i + j - 2
priors[thisalcopy + 1,] <- priors[thisalcopy + 1,] +
oldpriors[i,] * thesepriors[j,]
}
}
}
}
return(priors)
}
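# A standalone sketch (illustrative, not the package's internal code): with a
# single subgenome and random mating, the priors built above are binomial in
# allele dosage, so dbinom() reproduces the loop over choose(ploidy, i).

```r
# HWE genotype priors for one autopolyploid locus, dosages 0..ploidy
hwe_priors <- function(freq, ploidy = 4L) {
  # P(dosage = i) = choose(ploidy, i) * freq^i * (1 - freq)^(ploidy - i)
  dbinom(0:ploidy, size = ploidy, prob = freq)
}

p <- hwe_priors(0.25, 4L)  # tetraploid priors at allele frequency 0.25
```

# As with each column of the internal result, p sums to one.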
# internal function to multiply genotype priors by genotype likelihood
.priorTimesLikelihood <- function(object){
if(!"RADdata" %in% class(object)){
stop("RADdata object needed for .priorTimesLikelihood")
}
if(is.null(object$priorProb)){
stop("Genotype prior probabilities must be added first.")
}
if(is.null(object$genotypeLikelihood)){
stop("Genotype likelihoods must be added first.")
}
ploidytotpriors <- sapply(object$priorProb, function(x) dim(x)[1] - 1)
ploidytotlikeli <- sapply(object$genotypeLikelihood, function(x) dim(x)[1] - 1)
results <- list()
length(results) <- length(object$priorProb)
for(i in 1:length(object$priorProb)){
j <- which(ploidytotlikeli == ploidytotpriors[i])
stopifnot(length(j) == 1)
if(attr(object, "priorType") == "population"){
# expand priors out by individuals
thispriorarr <- array(object$priorProb[[i]],
dim = c(dim(object$priorProb[[i]])[1], 1,
dim(object$priorProb[[i]])[2]))[,rep(1, nTaxa(object)),]
dimnames(thispriorarr) <- dimnames(object$genotypeLikelihood[[j]])
} else {
thispriorarr <- object$priorProb[[i]]
}
stopifnot(identical(dim(thispriorarr), dim(object$genotypeLikelihood[[j]])))
results[[i]] <- thispriorarr * object$genotypeLikelihood[[j]]
# factor in LD if present
if(!is.null(object$priorProbLD)){
results[[i]] <- results[[i]] * object$priorProbLD[[i]]
}
# find any that total to zero (within taxon x allele) and replace with priors
totzero <- which(colSums(results[[i]]) == 0)
if(length(totzero) > 0){
for(a in 1:dim(thispriorarr)[1]){
results[[i]][a,,][totzero] <- thispriorarr[a,,][totzero]
}
}
# in a mapping population, don't use priors for parents
if(!is.null(attr(object, "donorParent")) &&
!is.null(attr(object, "recurrentParent"))){
parents <- c(GetDonorParent(object), GetRecurrentParent(object))
results[[i]][, parents, ] <- object$genotypeLikelihood[[j]][, parents, ]
}
}
return(results)
}
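# The elementwise product above is the usual Bayes step. For a single taxon at
# one biallelic diploid locus it reduces to prior * likelihood, normalized; a
# toy sketch with illustrative read counts (not data from the package):

```r
# Genotype posterior from an HWE prior and a binomial read-depth likelihood
prior <- c(0.25, 0.50, 0.25)  # priors at allele freq 0.5, dosages 0:2
lik <- dbinom(3, size = 10, prob = c(0.001, 0.5, 0.999))  # 3 alt reads of 10
post <- prior * lik / sum(prior * lik)
round(post, 3)  # mass concentrates on the heterozygote
```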
# internal function to get best estimate of allele frequencies depending
# on what parameters are available
.alleleFreq <- function(object, type = "choose", taxaToKeep = GetTaxa(object)){
if(!"RADdata" %in% class(object)){
stop("RADdata object needed for .alleleFreq")
}
if(!type %in% c("choose", "individual frequency", "posterior prob",
"depth ratio")){
stop("Type must be 'choose', 'individual frequency', 'posterior prob', or 'depth ratio'.")
}
if(type == "individual frequency" && is.null(object$alleleFreqByTaxa)){
stop("Need alleleFreqByTaxa if type = 'individual frequency'.")
}
if(type == "posterior prob" &&
(is.null(object$posteriorProb) || is.null(object$ploidyChiSq))){
stop("Need posteriorProb and ploidyChiSq if type = 'posterior prob'.")
}
if(type %in% c("choose", "individual frequency") &&
!is.null(object$alleleFreqByTaxa)){
outFreq <- colMeans(object$alleleFreqByTaxa[taxaToKeep,,drop = FALSE],
na.rm = TRUE)
attr(outFreq, "type") <- "individual frequency"
} else {
if(type %in% c("choose", "posterior prob") &&
CanDoGetWeightedMeanGeno(object)){
wmgeno <- GetWeightedMeanGenotypes(object, minval = 0, maxval = 1,
omit1allelePerLocus = FALSE)
outFreq <- colMeans(wmgeno[taxaToKeep,,drop = FALSE], na.rm = TRUE)
attr(outFreq, "type") <- "posterior prob"
} else {
if(type %in% c("choose", "depth ratio")){
outFreq <- colMeans(object$depthRatio[taxaToKeep,,drop = FALSE],
na.rm = TRUE)
attr(outFreq, "type") <- "depth ratio"
}
}
}
return(outFreq)
}
# Internal function to get a consensus between two DNA sequences, using
# IUPAC ambiguity codes. Should work for either character strings
# or DNAStrings. Puts in ambiguity codes any time there is not 100%
# consensus.
.mergeNucleotides <- function(seq1, seq2){
split1 <- strsplit(seq1, split = "")[[1]]
split2 <- strsplit(seq2, split = "")[[1]]
splitNew <- character(length(split1))
splitNew[split1 == split2] <- split1[split1 == split2]
IUPAC_key <- list(M = list(c('A', 'C'), c('A', 'M'), c('C', 'M')),
R = list(c('A', 'G'), c('A', 'R'), c('G', 'R')),
W = list(c('A', 'T'), c('A', 'W'), c('T', 'W')),
S = list(c('C', 'G'), c('C', 'S'), c('G', 'S')),
Y = list(c('C', 'T'), c('C', 'Y'), c('T', 'Y')),
K = list(c('G', 'T'), c('G', 'K'), c('T', 'K')),
V = list(c('A', 'S'), c('C', 'R'), c('G', 'M'),
c('A', 'V'), c('C', 'V'), c('G', 'V')),
H = list(c('A', 'Y'), c('C', 'W'), c('T', 'M'),
c('A', 'H'), c('C', 'H'), c('T', 'H')),
D = list(c('A', 'K'), c('G', 'W'), c('T', 'R'),
c('A', 'D'), c('G', 'D'), c('T', 'D')),
B = list(c('C', 'K'), c('G', 'Y'), c('T', 'S'),
c('C', 'B'), c('G', 'B'), c('T', 'B')),
# throw out deletions with respect to reference
A = list(c('A', '-')), C = list(c('C', '-')),
G = list(c('G', '-')), T = list(c('T', '-')),
# throw out insertions with respect to reference
. = list(c('A', '.'), c('C', '.'), c('G', '.'),
c('T', '.'), c('M', '.'), c('R', '.'),
c('W', '.'), c('S', '.'), c('Y', '.'),
c('K', '.'), c('V', '.'), c('H', '.'),
c('D', '.'), c('B', '.'), c('N', '.')))
for(i in which(split1 != split2)){
nucset <- c(split1[i], split2[i])
splitNew[i] <- "N"
for(nt in names(IUPAC_key)){
for(matchset in IUPAC_key[[nt]]){
if(setequal(nucset, matchset)){
splitNew[i] <- nt
break
}
}
}
}
out <- paste(splitNew, collapse = "")
return(out)
}
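# A minimal illustration of the consensus idea above: positions that agree are
# kept, and a disagreeing pair collapses to its IUPAC code. Only a handful of
# two-base codes are shown here; the function above handles the full table.

```r
merge_sketch <- function(seq1, seq2) {
  key <- c("AC" = "M", "AG" = "R", "AT" = "W",
           "CG" = "S", "CT" = "Y", "GT" = "K")
  s1 <- strsplit(seq1, "")[[1]]
  s2 <- strsplit(seq2, "")[[1]]
  out <- ifelse(s1 == s2, s1, NA)
  for (i in which(is.na(out))) {
    pair <- paste(sort(c(s1[i], s2[i])), collapse = "")
    out[i] <- if (pair %in% names(key)) key[[pair]] else "N"
  }
  paste(out, collapse = "")
}

merge_sketch("ACGT", "ACTT")  # "ACKT": G/T at position 3 becomes K
```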
# substitution matrix for distances between alleles in polyRAD.
# Any partial match based on ambiguity counts as a complete match.
polyRADsubmat <- matrix(c(0,1,1,1, 0,0,0,1,1,1, 0,0,0,1, 0,1,1, # A
1,0,1,1, 0,1,1,0,0,1, 0,0,1,0, 0,1,1, # C
1,1,0,1, 1,0,1,0,1,0, 0,1,0,0, 0,1,1, # G
1,1,1,0, 1,1,0,1,0,0, 1,0,0,0, 0,1,1, # T
0,0,1,1, 0,0,0,0,0,1, 0,0,0,0, 0,1,1, # M
0,1,0,1, 0,0,0,0,1,0, 0,0,0,0, 0,1,1, # R
0,1,1,0, 0,0,0,1,0,0, 0,0,0,0, 0,1,1, # W
1,0,0,1, 0,0,1,0,0,0, 0,0,0,0, 0,1,1, # S
1,0,1,0, 0,1,0,0,0,0, 0,0,0,0, 0,1,1, # Y
1,1,0,0, 1,0,0,0,0,0, 0,0,0,0, 0,1,1, # K
0,0,0,1, 0,0,0,0,0,0, 0,0,0,0, 0,1,1, # V
0,0,1,0, 0,0,0,0,0,0, 0,0,0,0, 0,1,1, # H
0,1,0,0, 0,0,0,0,0,0, 0,0,0,0, 0,1,1, # D
1,0,0,0, 0,0,0,0,0,0, 0,0,0,0, 0,1,1, # B
0,0,0,0, 0,0,0,0,0,0, 0,0,0,0, 0,0,0, # N
1,1,1,1, 1,1,1,1,1,1, 1,1,1,1, 0,0,1, # -
1,1,1,1, 1,1,1,1,1,1, 1,1,1,1, 0,1,0 # .
),
nrow = 17, ncol = 17,
dimnames = list(c('A', 'C', 'G', 'T', 'M', 'R', 'W',
'S', 'Y', 'K', 'V', 'H', 'D', 'B',
'N', '-', '.'),
c('A', 'C', 'G', 'T', 'M', 'R', 'W',
'S', 'Y', 'K', 'V', 'H', 'D', 'B',
'N', '-', '.')))
#### Getting genotype priors in mapping pops and simulating selfing ####
# function to generate all gamete genotypes for a set of genotypes.
# alCopy is a vector of values ranging from zero to ploidy indicating
# allele copy number.
# ploidy is the ploidy
# rnd indicates which round of the recursive algorithm we are on
# Output is a matrix. Alleles are in columns, which should be treated
# independently. Rows indicate gametes, with values indicating how many
# copies of the allele that gamete has.
.makeGametes <- function(alCopy, ploidy, rnd = ploidy[1]/2){
if(rnd %% 1 != 0 || ploidy[1] < 2){
stop("Even numbered ploidy needed to simulate gametes.")
}
if(length(unique(ploidy)) != 1){
stop("Currently all subgenome ploidies must be equal.")
}
if(any(alCopy > sum(ploidy), na.rm = TRUE)){
stop("Cannot have alCopy greater than ploidy.")
}
if(length(ploidy) > 1){ # allopolyploids
thisAl <- matrix(0L, nrow = 1, ncol = length(alCopy))
for(pl in ploidy){
# get allele copies for this isolocus. Currently simplified;
# minimizes the number of isoloci to which an allele can belong.
### update in the future ###
thisCopy <- alCopy
thisCopy[thisCopy > pl] <- pl
alCopy <- alCopy - thisCopy
# make gametes for this isolocus (recursive, goes to autopoly version)
thisIsoGametes <- .makeGametes(thisCopy, pl, rnd)
# add to current gamete set
nGamCurr <- dim(thisAl)[1]
nGamNew <- dim(thisIsoGametes)[1]
thisAl <- thisAl[rep(1:nGamCurr, each = nGamNew),, drop = FALSE] +
thisIsoGametes[rep(1:nGamNew, times = nGamCurr),, drop = FALSE]
}
} else { # autopolyploids
thisAl <- sapply(alCopy, function(x){
if(is.na(x)){
rep(NA, ploidy)
} else {
c(rep(0L, ploidy - x), rep(1L, x))
}})
if(rnd > 1){
# recursively add alleles to gametes for polyploid
nReps <- factorial(ploidy-1)/factorial(ploidy - rnd)
thisAl <- thisAl[rep(1:ploidy, each = nReps),, drop = FALSE]
for(i in 1:ploidy){
thisAl[((i-1)*nReps+1):(i*nReps),] <-
thisAl[((i-1)*nReps+1):(i*nReps),, drop=FALSE] +
.makeGametes(alCopy - thisAl[i*nReps,, drop = FALSE], ploidy - 1, rnd - 1)
}
}
}
return(thisAl)
}
# function to get probability of a gamete with a given allele copy number,
# given output from makeGametes
.gameteProb <- function(makeGamOutput, ploidy){
outmat <- sapply(0:(sum(ploidy)/2),
function(x) colMeans(makeGamOutput == x))
if(ncol(makeGamOutput) == 1){
outmat <- matrix(outmat, nrow = ploidy/2 + 1, ncol = 1)
} else {
outmat <- t(outmat)
}
return(outmat)
}
# function to take two sets of gamete probabilities (from two parents)
# and output genotype probabilities
.progenyProb <- function(gamProb1, gamProb2){
outmat <- matrix(0L, nrow = dim(gamProb1)[1] + dim(gamProb2)[1] - 1,
ncol = dim(gamProb1)[2])
for(i in 1:dim(gamProb1)[1]){
copy1 <- i - 1
for(j in 1:dim(gamProb2)[1]){
copy2 <- j - 1
thisrow <- copy1 + copy2 + 1
outmat[thisrow,] <- outmat[thisrow,] + gamProb1[i,] * gamProb2[j,]
}
}
return(outmat)
}
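# The double loop above is a discrete convolution of the two parents'
# gamete-dosage distributions. A quick check for a single allele: two diploid
# heterozygous parents produce gametes with 0 or 1 copies at probability 1/2
# each, giving the familiar 1:2:1 progeny ratio.

```r
gam1 <- c(0.5, 0.5)  # P(gamete dosage = 0), P(= 1)
gam2 <- c(0.5, 0.5)
prog <- rep(0, length(gam1) + length(gam2) - 1)
for (i in seq_along(gam1)) {
  for (j in seq_along(gam2)) {
    prog[i + j - 1] <- prog[i + j - 1] + gam1[i] * gam2[j]
  }
}
prog  # 0.25 0.50 0.25
```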
# function to get gamete probabilities for a population, given genotype priors
.gameteProbPop <- function(priors, ploidy){
# get gamete probs for all possible genotypes
possGamProb <- .gameteProb(.makeGametes(1:dim(priors)[1] - 1, ploidy),ploidy)
# output matrix
outmat <- possGamProb %*% priors
return(outmat)
}
# function just to make selfing matrix.
# Every parent genotype is a column. The frequency of its offspring after
# selfing is in rows.
.selfmat <- function(ploidy){
# get gamete probs for all possible genotypes
possGamProb <- .gameteProb(.makeGametes(0:ploidy, ploidy), ploidy)
# genotype probabilities after selfing each possible genotype
outmat <- matrix(0, nrow = ploidy + 1, ncol = ploidy + 1)
for(i in 1:(ploidy + 1)){
outmat[,i] <- .progenyProb(possGamProb[,i,drop = FALSE],
possGamProb[,i,drop = FALSE])
}
return(outmat)
}
# function to adjust genotype probabilities from one generation of selfing
.selfPop <- function(priors, ploidy){
# progeny probs for all possible genotypes, selfed
possProgenyProb <- .selfmat(ploidy)
# multiply progeny probs by prior probabilities of those genotypes
outmat <- possProgenyProb %*% priors
return(outmat)
}
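# For a diploid, the selfing matrix built above has a form that is easy to
# verify by hand: homozygotes breed true and a heterozygote selfs to
# 1/4 : 1/2 : 1/4, so heterozygosity halves each generation. A hand-written
# sketch (values derived independently of the functions above):

```r
# Columns = parent dosage 0..2, rows = offspring dosage 0..2
selfmat2 <- matrix(c(1, 0, 0,
                     0.25, 0.5, 0.25,
                     0, 0, 1), nrow = 3)
priors <- c(0.25, 0.5, 0.25)  # HWE priors at allele freq 0.5
g1 <- as.vector(selfmat2 %*% priors)
g1  # 0.375 0.250 0.375: heterozygosity drops from 0.5 to 0.25
```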
# Functions used by HindHeMapping ####
# Function for making multiallelic gametes. Autopolyploid segregation only.
# geno is a vector of length ploidy with values indicating allele identity.
# rnd is how many alleles will be in the gamete.
.makeGametes2 <- function(geno, rnd = length(geno)/2){
nal <- length(geno) # number of remaining alleles in genotype
if(rnd == 1){
return(matrix(geno, nrow = nal, ncol = 1))
} else {
outmat <- matrix(0L, nrow = choose(nal, rnd), ncol = rnd)
currrow <- 1
for(i in 1:(nal - rnd + 1)){
submat <- .makeGametes2(geno[-(1:i)], rnd - 1)
theserows <- currrow + 1:nrow(submat) - 1
currrow <- currrow + nrow(submat)
outmat[theserows,1] <- geno[i]
outmat[theserows,2:ncol(outmat)] <- submat
}
return(outmat)
}
}
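# For a genotype with distinguishable allele labels, the recursion above
# enumerates the same gametes as combn(): every unordered draw of ploidy/2
# locus copies. A quick standalone sketch:

```r
geno <- c(1L, 2L, 3L, 4L)  # four distinguishable locus copies (tetraploid)
gam <- t(combn(geno, length(geno) / 2))
nrow(gam)  # choose(4, 2) = 6 possible gametes
```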
# take output of .makeGametes2 and get progeny probabilities.
# Return a list, where the first item is a matrix with progeny genotypes in
# rows, and the second item is a vector of corresponding probabilities.
.progenyProb2 <- function(gam1, gam2){
allprog <- cbind(gam1[rep(1:nrow(gam1), each = nrow(gam2)),],
gam2[rep(1:nrow(gam2), times = nrow(gam1)),])
allprog <- t(apply(allprog, 1, sort)) # could be sped up with Rcpp
# set up output
outprog <- matrix(allprog[1,], nrow = 1, ncol = ncol(allprog))
outprob <- 1/nrow(allprog)
# tally up unique genotypes
for(i in 2:nrow(allprog)){
thisgen <- allprog[i,]
existsYet <- FALSE
for(j in 1:nrow(outprog)){
if(identical(thisgen, outprog[j,])){
outprob[j] <- outprob[j] + 1/nrow(allprog)
existsYet <- TRUE
break
}
}
if(!existsYet){
outprog <- rbind(outprog, thisgen, deparse.level = 0)
outprob <- c(outprob, 1/nrow(allprog))
}
}
return(list(outprog, outprob))
}
# Consolidate a list of outputs from .progenyProb2 into a single output.
# It is assumed that all probabilities in pplist have been multiplied by
# some factor so they sum to one.
.consolProgProb <- function(pplist){
outprog <- pplist[[1]][[1]]
outprob <- pplist[[1]][[2]]
if(length(pplist) > 1){
for(i in 2:length(pplist)){
for(pr1 in 1:nrow(pplist[[i]][[1]])){
thisgen <- pplist[[i]][[1]][pr1,]
existsYet <- FALSE
for(pr2 in 1:nrow(outprog)){
if(identical(thisgen, outprog[pr2,])){
outprob[pr2] <- outprob[pr2] + pplist[[i]][[2]][pr1]
existsYet <- TRUE
break
}
}
if(!existsYet){
outprog <- rbind(outprog, thisgen, deparse.level = 0)
outprob <- c(outprob, pplist[[i]][[2]][pr1])
}
}
}
}
return(list(outprog, outprob))
}
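# The nested identical()-comparison loops above can also be expressed by
# keying each genotype row as a string and summing probabilities per key.
# A vectorized sketch of the same consolidation (not how the package does it):

```r
prog <- rbind(c(1, 3), c(1, 3), c(2, 4))  # two duplicate genotype rows
prob <- c(0.25, 0.25, 0.5)
key <- apply(prog, 1, paste, collapse = "-")
consolidated <- tapply(prob, key, sum)
consolidated  # "1-3" -> 0.5, "2-4" -> 0.5
```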
# Probability that, depending on the generation, two alleles sampled in a progeny
# are different locus copies from the same parent, or from different parents.
# If there is backcrossing, parent 1 is the recurrent parent.
# (No double reduction.)
# Can take a few seconds to run if there are many generations, but it is only
# intended to be run once for the whole dataset.
.progAlProbs <- function(ploidy, gen_backcrossing, gen_selfing){
# set up parents; number indexes locus copy
p1 <- 1:ploidy
p2 <- p1 + ploidy
# create F1 progeny probabilities
progprob <- .progenyProb2(.makeGametes2(p1), .makeGametes2(p2))
# backcross
if(gen_backcrossing > 0){
gam1 <- .makeGametes2(p1)
for(g in 1:gen_backcrossing){
allprogprobs <- lapply(1:nrow(progprob[[1]]),
function(x){
pp <- .progenyProb2(gam1, .makeGametes2(progprob[[1]][x,]))
pp[[2]] <- pp[[2]] * progprob[[2]][x]
return(pp)
})
progprob <- .consolProgProb(allprogprobs)
}
}
# self-fertilize
if(gen_selfing > 0){
for(g in 1:gen_selfing){
allprogprobs <- lapply(1:nrow(progprob[[1]]),
function(x){
gam <- .makeGametes2(progprob[[1]][x,])
pp <- .progenyProb2(gam, gam)
pp[[2]] <- pp[[2]] * progprob[[2]][x]
return(pp)
})
progprob <- .consolProgProb(allprogprobs)
}
}
# total probability that (without replacement, from individual progeny):
diffp1 <- 0 # two different locus copies, both from parent 1, are sampled
diffp2 <- 0 # two different locus copies, both from parent 2, are sampled
diff12 <- 0 # locus copies from two different parents are sampled
ncombo <- choose(ploidy, 2) # number of ways to choose 2 alleles from a genotype
# examine each progeny genotype
for(p in 1:nrow(progprob[[1]])){
thisgen <- progprob[[1]][p,]
thisprob <- progprob[[2]][p]
for(m in 1:(ploidy - 1)){
al1 <- thisgen[m]
for(n in (m + 1):ploidy){
al2 <- thisgen[n]
if(al1 == al2) next
if((al1 %in% p1) && (al2 %in% p1)){
diffp1 <- diffp1 + (thisprob / ncombo)
next
}
if((al1 %in% p2) && (al2 %in% p2)){
diffp2 <- diffp2 + (thisprob / ncombo)
next
}
if(((al1 %in% p1) && (al2 %in% p2)) ||
((al2 %in% p1) && (al1 %in% p2))){
diff12 <- diff12 + (thisprob / ncombo)
next
}
stop("Allele indexing messed up.")
}
}
}
return(c(diffp1, diffp2, diff12))
}
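# Sanity check on the three returned probabilities: in a diploid F1, each
# progeny carries exactly one locus copy from each parent, so two sampled
# alleles always come from different parents, i.e. c(0, 0, 1). A hand
# computation with the same labeling convention:

```r
p1 <- 1:2  # locus copies of parent 1
p2 <- 3:4  # locus copies of parent 2
progeny <- expand.grid(a = p1, b = p2)  # all four equally likely F1 genotypes
diff12 <- mean(progeny$a %in% p1 & progeny$b %in% p2)
c(diffp1 = 0, diffp2 = 0, diff12 = diff12)
```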
|
## Here we're going to extrapolate Household Final Consumption for 2016.
## The approach used is the same one Marie used for extrapolating GDP for 2014-2016.
## Load and prepare data
## Our variable is:
# CONS_S14_USD2005: Household consumption expenditure (including Non-profit institutions serving households)
library('forecast')
library("xlsx")
library("dplyr")
library('reshape2')
householdConsumpExpend = read.csv(
"sandbox/planB/household_final_consumption/Routput_CONS_dataCapture.csv"
# , dec = ".", sep = ","
)
head(householdConsumpExpend)
hhConsExp <- select(householdConsumpExpend, FAOCode, FAOName, year, CONS_S14_USD2005)
hhConsExp <- dcast(hhConsExp, FAOCode + FAOName ~ year, value.var = "CONS_S14_USD2005")
head(hhConsExp)
# Drop former economies
hhConsExp <- hhConsExp[!(is.na(hhConsExp$'2015')), ]
a <- which(colnames(hhConsExp) == 1970)
b <- dim(hhConsExp)[2]
dataHhConsExp <- data.matrix(hhConsExp[,a:b], rownames.force = NA)
nrows <- dim(dataHhConsExp)[1]
col2015 <- dim(dataHhConsExp)[2]
# SECTION II. --------------------------------------------------------------
# Create empty matrix to store forecasted values
store_forecast <- matrix( NA, nrow = nrows, ncol = col2015+1)
# Loop over all countries, select best ARIMA model and produce forecast
for (i in 1:nrows){
#1. Delta-log transformation
ldata <- log(dataHhConsExp[i,])
ldata.ts <- as.ts(ldata)
#2. Outlier detection and replacement
x <- tsoutliers(ldata.ts)
ldata.ts[x$index] <- x$replacements
dldata.ts <- diff(ldata.ts, lag=1)
dldata <- 100*dldata.ts
# 3. Select best ARIMA model and use it for forecasting
fit1 <- forecast::auto.arima(dldata)
fit1
temp <- forecast( fit1, h=1 )
plot(forecast(fit1, h = 1))
# Revert the delta log forecasted values to levels and store them
level_forecast <- dataHhConsExp[i,col2015]*exp(temp$mean[1]/100)
store_forecast[i,1:col2015] <- exp(ldata.ts)
store_forecast[i,col2015+1] <- level_forecast
rm(level_forecast, fit1, temp, x)
}
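# The delta-log forecast-and-revert step in the loop above can be sketched
# with base R's stats::arima() alone (the script itself uses
# forecast::auto.arima and tsoutliers); the series and model order here are
# illustrative assumptions, not data from this project.

```r
set.seed(1)
series <- 100 * exp(cumsum(rnorm(46, mean = 0.02, sd = 0.01)))  # toy levels
dl <- 100 * diff(log(series))            # delta-log growth rates, in percent
fit <- arima(dl, order = c(1, 0, 0))     # fixed order instead of auto.arima
step <- predict(fit, n.ahead = 1)$pred[1]
next_level <- tail(series, 1) * exp(step / 100)  # revert to levels
next_level
```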
# SECTION III. --------------------------------------------------------------
# Save results
# Original household consumption series with the 2016 forecast appended
results <- cbind(dataHhConsExp, store_forecast[, col2015 + 1])
colnames(results)[col2015 + 1] <- "2016"
results <- cbind(hhConsExp[,1:2], results)
library(data.table)
results = data.table(results)
hhConsExpForecasted = melt.data.table(results, id.vars = c("FAOCode", "FAOName"))
setnames(hhConsExpForecasted, "variable", "timePointYears")
setnames(hhConsExpForecasted, "value", "hhConsExp")
hhConsExpForecasted[, timePointYears := as.numeric(as.character(timePointYears))]
# hhConsExpForecasted[, previousValue := shift(hhConsExp),
# by = list(FAOCode, FAOName)]
#
# hhConsExpForecasted[, percent := hhConsExp/previousValue,
# by = list(FAOCode, FAOName)]
#
# ggplot(hhConsExpForecasted[FAOName == "India"],
# aes(x=as.numeric(timePointYears), y=hhConsExp, group=1, col = type)) +
# geom_line(colour = "#5882FA", size = .8) +
# scale_x_continuous(limits = c(1991, 2016), breaks = seq(1991, 2016, 1))
#
# hist(hhConsExpForecasted[timePointYears == 2016]$percent)
#
# hhConsExpForecasted[percent > 1.05 & timePointYears == 2016]
# hhConsExpForecasted[FAOName == "United States of America"]
#
# ggplot(hhConsExpForecasted[timePointYears == 2016],
# aes(y=percent)) +
# geom_line() +
# scale_x_continuous(limits = c(1991, 2016), breaks = seq(1991, 2016, 1))
write.table(
hhConsExpForecasted,
file = "sandbox/planB/household_final_consumption/household_final_consumptionhhConsExpForecasted.csv",
row.names = F)
| /sandbox/planB/household_final_consumption/extrapolating.R | no_license | SWS-Methodology/faoswsFood | R | false | false | 3,696 | r |
|
getRowOrdered <- function(train_dataset, finalRow = 1633){
dToReturn <- train_dataset
for(i in 1:500){
pOfM <- dToReturn[(6*i-5):(6*i),]
pOfM <- pOfM[order(pOfM[, finalRow]), ]
dToReturn[(6*i-5):(6*i),] <- pOfM
}
return(dToReturn)
}
operationBetweenMatrices <- function(A, B, additionalParameter = 10){
result <- A + (1/additionalParameter)*B
return(result)
}
getIterationMean <- function(orderedTrain, gap = 12, nOfChars = 25, iterations = 10, tCol = 1634){
orderedTrain <- as.matrix(orderedTrain)
numberOfRows <- gap*nOfChars
mToRet <- matrix(0L, nrow = numberOfRows, ncol = tCol)
for (i in 1:nOfChars){
for (j in 1:iterations){
A <- mToRet[(i*gap -(gap-1)):(i*gap), ]
B <- orderedTrain[(iterations*gap*(i-1)+gap*(j-1)+1):(iterations*gap*(i-1)+gap*(j-1)+gap), ]
test <- operationBetweenMatrices(A, B, iterations)
mToRet[(i*gap -(gap-1)):(i*gap), ] <- test
}
}
return (mToRet)
}
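# getIterationMean accumulates each character's gap-row block with weight
# 1/iterations via operationBetweenMatrices, which is equivalent to a plain
# elementwise mean over the iteration blocks. A small standalone sketch:

```r
# Two iteration blocks (gap = 2 rows each) for one character
b1 <- matrix(1:4, nrow = 2)
b2 <- matrix(5:8, nrow = 2)
avg <- (b1 + b2) / 2  # same result as accumulating b1/2 then b2/2
avg
```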
getTrainWithMeans <- function(){
load_file_with_names()
dataset_sensorsP <- getFewSensors()
train <- dataset_sensorsP[1:3000, ]
test <- dataset_sensorsP[3001:3600, - which(colnames(dataset_sensorsP) == "cData")]
trainOrdered <- getRowOrdered(train, 1022)
column_names <- colnames(train)
trainFinal <- getIterationMean(trainOrdered, tCol = 1022)
trainFinal <- as.data.frame(trainFinal)
colnames(trainFinal) <- column_names
trainFinal$target<-as.factor(trainFinal$target)
#TEST 1
train <- rbind(train, trainFinal)
costs <- table(train$target) # the weight vector must be named with the classes names
costs[1] <- 1 # a class -1 mismatch has a terrible cost
costs[2] <- 1 # a class +1 mismatch not so much...
class <- train_svm(train[, -1021], "polynomial", 3, costs)
for(i in 1:5){
letter <- get_character(10, class, i, test)
print(letter)
}
}
| /OLD/orderInBlocks.R | no_license | loplace/MOBD_PROJECT_BCI | R | false | false | 1,885 | r |
|
library(wavethresh)
### Name: wpst
### Title: Non-decimated wavelet packet transform.
### Aliases: wpst
### Keywords: math smooth nonlinear
### ** Examples
v <- rnorm(128)
vwpst <- wpst(v)
| /data/genthat_extracted_code/wavethresh/examples/wpst.rd.R | no_license | surayaaramli/typeRrh | R | false | false | 196 | r |
% Generated by roxygen2 (4.0.2): do not edit by hand
\name{p_path}
\alias{p_path}
\title{Path to Library of Add-On Packages}
\usage{
p_path()
}
\description{
Path to library of add-on packages.
}
\examples{
p_path()
}
\seealso{
\code{\link[base]{.libPaths}}
}
\keyword{library}
\keyword{location}
\keyword{package}
\keyword{path}
| /man/p_path.Rd | no_license | dpastoor/pacman | R | false | false | 331 | rd |
##Plot3
##household_power_consumption.txt needs to be copied into
##the working directory. File was provided by Coursera
##sourced from UC Irvine Machine Learning Repository
datafile <- ("household_power_consumption.txt")
pwr <- read.table(datafile, header=T, sep=";")
pwr$Date <- as.Date(pwr$Date, format="%d/%m/%Y")
dframe <- pwr[(pwr$Date=="2007-02-01") | (pwr$Date=="2007-02-02"),]
dframe$Global_active_power <- as.numeric(as.character(dframe$Global_active_power))
dframe$Global_reactive_power <- as.numeric(as.character(dframe$Global_reactive_power))
dframe$Voltage <- as.numeric(as.character(dframe$Voltage))
dframe <- transform(dframe, timestamp=as.POSIXct(paste(Date, Time), format="%Y-%m-%d %H:%M:%S"))
dframe$Sub_metering_1 <- as.numeric(as.character(dframe$Sub_metering_1))
dframe$Sub_metering_2 <- as.numeric(as.character(dframe$Sub_metering_2))
dframe$Sub_metering_3 <- as.numeric(as.character(dframe$Sub_metering_3))
plot3 <- function()
{
plot(dframe$timestamp,dframe$Sub_metering_1, type="l", xlab="", ylab="Energy sub metering")
lines(dframe$timestamp,dframe$Sub_metering_2,col="red")
lines(dframe$timestamp,dframe$Sub_metering_3,col="blue")
legend("topright", col=c("black","red","blue"), c("Sub_metering_1 ","Sub_metering_2 ", "Sub_metering_3 "),lty=c(1,1), lwd=c(1,1))
dev.copy(png, file="plot3.png", width=480, height=480)
dev.off()
}
plot3()
| /plot3.R | no_license | mmeitzne/ExData_Plotting1 | R | false | false | 1,457 | r |
## R script does the following:
## 1.Merges the training and the test sets to create one data set.
## 2.Extracts only the measurements on the mean and standard deviation for each measurement.
## 3.Uses descriptive activity names to name the activities in the data set
## 4.Appropriately labels the data set with descriptive variable names.
## 5.From the data set in step 4, creates a second, independent tidy data set with the average of each variable for each activity and each subject.
## Format training and test data sets
## Both training and test data sets are split up into subject, activity and features. They are present in three different files.
library(data.table)
## Read metadata: feature names and activity labels
featureNames <- read.table("UCI HAR Dataset/features.txt", header = FALSE)
activityLabels <- read.table("UCI HAR Dataset/activity_labels.txt", header = FALSE)
## Read training data
subjectTrain <- read.table("UCI HAR Dataset/train/subject_train.txt", header = FALSE)
activityTrain <- read.table("UCI HAR Dataset/train/y_train.txt", header = FALSE)
featuresTrain <- read.table("UCI HAR Dataset/train/X_train.txt", header = FALSE)
## Read test data
subjectTest <- read.table("UCI HAR Dataset/test/subject_test.txt", header = FALSE)
activityTest <- read.table("UCI HAR Dataset/test/y_test.txt", header = FALSE)
featuresTest <- read.table("UCI HAR Dataset/test/X_test.txt", header = FALSE)
## Part 1 - Merge the training and the test sets to create one data set
subject <- rbind(subjectTrain, subjectTest)
activity <- rbind(activityTrain, activityTest)
features <- rbind(featuresTrain, featuresTest)
## The columns in the features data set can be named from the metadata in featureNames
colnames(features) <- t(featureNames[2])
## Merge the data
colnames(activity) <- "Activity"
colnames(subject) <- "Subject"
completeData <- cbind(features,activity,subject)
## Part 2 - Extracts only the measurements on the mean and standard deviation for each measurement
columnsWithMeanSTD <- grep(".*Mean.*|.*Std.*", names(completeData), ignore.case=TRUE)
##Add activity and subject columns to the list and look at the dimension of completeData
requiredColumns <- c(columnsWithMeanSTD, 562, 563)
dim(completeData)
## We create extractedData with the selected columns in requiredColumns. And again, we look at the dimension of requiredColumns.
extractedData <- completeData[,requiredColumns]
dim(extractedData)
## Part 3 - Uses descriptive activity names to name the activities in the data set
##The activity field in extractedData is originally of numeric type. We need to change its type to character so that it can accept activity names. The activity names are taken from metadata activityLabels.
extractedData$Activity <- as.character(extractedData$Activity)
for (i in 1:6){
extractedData$Activity[extractedData$Activity == i] <- as.character(activityLabels[i,2])
}
## We need to factor the activity variable, once the activity names are updated.
extractedData$Activity <- as.factor(extractedData$Activity)
## Part 4 - Appropriately labels the data set with descriptive variable names
## By examining extractedData, we can say that the following acronyms can be replaced:
## Acc can be replaced with Accelerometer
## Gyro can be replaced with Gyroscope
## BodyBody can be replaced with Body
## Mag can be replaced with Magnitude
## Character f can be replaced with Frequency
## Character t can be replaced with Time
names(extractedData)<-gsub("Acc", "Accelerometer", names(extractedData))
names(extractedData)<-gsub("Gyro", "Gyroscope", names(extractedData))
names(extractedData)<-gsub("BodyBody", "Body", names(extractedData))
names(extractedData)<-gsub("Mag", "Magnitude", names(extractedData))
names(extractedData)<-gsub("^t", "Time", names(extractedData))
names(extractedData)<-gsub("^f", "Frequency", names(extractedData))
names(extractedData)<-gsub("tBody", "TimeBody", names(extractedData))
names(extractedData)<-gsub("-mean\\(\\)", "Mean", names(extractedData), ignore.case = TRUE)
names(extractedData)<-gsub("-std\\(\\)", "STD", names(extractedData), ignore.case = TRUE)
names(extractedData)<-gsub("-freq\\(\\)", "Frequency", names(extractedData), ignore.case = TRUE)
names(extractedData)<-gsub("angle", "Angle", names(extractedData))
names(extractedData)<-gsub("gravity", "Gravity", names(extractedData))
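## Quick sanity check (illustrative, not part of the cleaning pipeline): how the
## renaming chain maps a single raw feature name. Note the parentheses in the
## mean/std patterns must be escaped so the literal "()" is removed.
nm <- "tBodyAcc-mean()-X"
nm <- gsub("Acc", "Accelerometer", nm)
nm <- gsub("^t", "Time", nm)
nm <- gsub("-mean\\(\\)", "Mean", nm)
nm  # "TimeBodyAccelerometerMean-X"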
##Part 5 - From the data set in step 4, creates a second, independent tidy data set with the average of each variable for each activity and each subject
## Firstly, let us set Subject as a factor variable.
extractedData$Subject <- as.factor(extractedData$Subject)
extractedData <- data.table(extractedData)
## We create tidyData as a data set with the average for each activity and subject. Then, we order the entries in tidyData and write it into data file Tidy.txt that contains the processed data.
tidyData <- aggregate(. ~Subject + Activity, extractedData, mean)
tidyData <- tidyData[order(tidyData$Subject,tidyData$Activity),]
write.table(tidyData, file = "Tidy.txt", row.names = FALSE)
| /run_analysis.R | no_license | AbhishekFarande/GettingAndCleaningDataWeek3 | R | false | false | 4,809 | r |
egg.hatched <- function(EGG.LANDSCAPE, FECUNDITY, P.EMERGE, PENALTY, CARRY.CAP = 10){
egg.landscape <- EGG.LANDSCAPE
fec <- FECUNDITY
p.emerge <- P.EMERGE
emerge.penalty <- PENALTY
K <- CARRY.CAP
##
total.eggs <- sum(egg.landscape)
hatched <- array(dim = dim(egg.landscape))
for (i in 1:length(c(egg.landscape))){
eggs <- egg.landscape[i]
    # NOTE: `a` is a global object (with a module.landscape field) read from the calling environment
    if (a$module.landscape[i] < 2){
tmpEmerge <- p.emerge * (1 - (eggs / K ))
} else {
tmpEmerge <- (p.emerge - emerge.penalty) * (1 - (eggs / K ))
}
hatched[i] <- sum(runif(eggs) < tmpEmerge) * fec
}
# print(tmpEmerge)
return(hatched)
}
# x <- egg.hatched(sim$egg.landscape, P.EMERGE = 1, PENALTY = 0.9)
# sum(x)
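## Minimal usage sketch (everything below is illustrative; the global `a` is an
## assumption inferred from the function body, which reads a$module.landscape):
# set.seed(42)
# a <- list(module.landscape = matrix(sample(1:3, 16, replace = TRUE), 4, 4))
# eggs <- matrix(5L, nrow = 4, ncol = 4)
# hatched <- egg.hatched(eggs, FECUNDITY = 2, P.EMERGE = 0.5, PENALTY = 0.1)
# dim(hatched)  # same dimensions as the egg landscape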
| /single_gen/eggHatching.R | no_license | peterzee/habitatselection | R | false | false | 858 | r |
x <- c(23,15,46,NA)
z <- c(5,6,NA,8)
mean(x, na.rm= TRUE)
sum(x,na.rm = TRUE)
d <- data.frame(rost=x,ves=z)
d
d$rost
my_list <- list(a=7,b=10:20,table=d)
my_list[[2]]
| /Программирование и Анализ данных на R/Команды для работы с векторами 2 .R | no_license | Sergeevva/Sergeevva | R | false | false | 167 | r |
# QUESTION 1.1
edges = read.csv("edges.csv")
users = read.csv("users.csv")
str(users)
str(edges)
# QUESTION 1.2
table(users$locale, users$school)
# QUESTION 1.3
table(users$gender, users$school)
# QUESTION 2.1
install.packages("igraph")
library(igraph)
# QUESTION 2.2
g = graph.data.frame(edges, FALSE, users)
plot(g, vertex.size=5, vertex.label=NA)
# QUESTION 2.3
table(degree(g) >= 10)
# QUESTION 2.4
V(g)$size = degree(g)/2+2
plot(g, vertex.label=NA)
summary(degree(g))
# QUESTION 3.1
V(g)$color = "black"
V(g)$color[V(g)$gender == "A"] = "red"
V(g)$color[V(g)$gender == "B"] = "gray"
plot(g, vertex.label=NA)
# QUESTION 3.2
V(g)$color = "black"
V(g)$color[V(g)$school == "A"] = "red"
V(g)$color[V(g)$school == "AB"] = "gray"
plot(g, vertex.label=NA)
# QUESTION 3.3
V(g)$color = "black"
V(g)$color[V(g)$locale == "A"] = "red"
V(g)$color[V(g)$locale == "B"] = "gray"
plot(g, vertex.label=NA)
# QUESTION 4
rglplot(g, vertex.label=NA)
plot(g, edge.width=2, vertex.label=NA)
| /Module-10/Assignment-2.R | no_license | khyathimsit/DataAnalytics | R | false | false | 1,085 | r |
\name{imagehash1}
\docType{data}
\alias{imagehash1}
\title{Image Hash JSON Example}
\description{
Example of JSON file with image hashes retrieved by ExtractImgHash
}
\format{JSON}
\keyword{datasets}
| /man/imagehash1.Rd | no_license | cran/VideoComparison | R | false | false | 200 | rd |
#install and load required packages -----------------
ipak <- function(pkg){
new.pkg <- pkg[!(pkg %in% installed.packages()[, "Package"])]
if (length(new.pkg))
install.packages(new.pkg, dependencies = TRUE)
sapply(pkg, require, character.only = TRUE)
}
# usage
packages <- c("shiny", "DT")
ipak(packages)
shinyUI(pageWithSidebar(
headerPanel("Alberta Aerial Ungulate Survey Population Estimator"),
sidebarPanel(
fileInput('MegaDB', 'Step 1. Choose your Access database to commence distance sampling analysis',
accept=c('.accdb', '.mdb')),
    fileInput('WMU_Shp', 'Step 2. Choose your WMU Polygon Shapefile (Alberta 10TM) - Note: include all shapefile components (i.e. *.shp, *.dbf, *.sbn, etc.)',
accept=c('.shp','.dbf','.sbn','.sbx', '.shx','.prj','.cpg'), multiple=TRUE),
sliderInput("truncation", "Step 3: Choose right truncation distance", min=0, max=1000, value=800, step = 25)
),
mainPanel(
tabsetPanel(
tabPanel("Moose",textOutput("MOOS_TXT"), DT::dataTableOutput('MOOS_TAB'), plotOutput("myplot2"),
plotOutput("MOOS_QQ"), plotOutput("myplot")),
tabPanel("Mule Deer", plotOutput("MUDE_MAP"),
plotOutput("myplot3"),
plotOutput("MUDE_QQ")),
tabPanel("White-tailed Deer", plotOutput("WTDE_MAP"),
plotOutput("WTDE_DF"),
plotOutput("WTDE_QQ")),
tabPanel("Elk", plotOutput("WAPT_MAP"),
plotOutput("WAPT_DF"),
plotOutput("WAPT_QQ")),
tabPanel("Power analysis")
))
))
| /ui.R | no_license | FauveBlanchard/fox | R | false | false | 1,693 | r |
x=1:10
y<-c(8, 2.5, -2, 0 ,5 ,2 ,4 ,7 ,4.5, 2)
lm9 = lm(y~x+I(x^2)+I(x^3)+I(x^4)+I(x^5)+I(x^6)+I(x^7)+I(x^8)+I(x^9))
xplot=seq(from = 0,to = 10,by = .05)
plot(x,y)
lines(xplot,predict(lm9,newdata=data.frame(x=xplot)), col="blue")
| /Numerical_Methods_In_Finance_And_Economics:_A_Matlab-Based_Introduction_by_Paolo_Brandimarte/CH3/EX3.16/Ex3_16.R | permissive | FOSSEE/R_TBC_Uploads | R | false | false | 234 | r |
#' Handsontable widget
#'
#' Create a \href{http://handsontable.com}{Handsontable.js} widget.
#'
#' For full documentation on the package, visit \url{http://jrowen.github.io/rhandsontable/}
#' @param data a \code{data.table}, \code{data.frame} or \code{matrix}
#' @param colHeaders a vector of column names. If missing \code{colnames}
#' will be used. Setting to \code{NULL} will omit.
#' @param rowHeaders a vector of row names. If missing \code{rownames}
#' will be used. Setting to \code{NULL} will omit.
#' @param comments matrix or data.frame of comments; NA values are ignored
#' @param useTypes logical specifying whether column classes should be mapped to
#' equivalent Javascript types. Note that
#' Handsontable does not support column add/remove when column types
#' are defined (i.e. useTypes == TRUE in rhandsontable).
#' @param readOnly logical specifying whether the table is editable
#' @param selectCallback logical enabling the afterSelect event to return data.
#' This can be used with shiny to tie updates to a selected table cell.
#' @param width numeric table width
#' @param height numeric table height
#' @param ... passed to hot_table
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF, rowHeaders = NULL)
#' @seealso \code{\link{hot_table}}, \code{\link{hot_cols}}, \code{\link{hot_rows}}, \code{\link{hot_cell}}
#' @export
rhandsontable <- function(data, colHeaders, rowHeaders, comments = NULL,
useTypes = TRUE, readOnly = NULL,
selectCallback = FALSE,
width = NULL, height = NULL, ...) {
if (missing(colHeaders))
colHeaders = colnames(data)
if (missing(rowHeaders))
rowHeaders = rownames(data)
rClass = class(data)
if ("matrix" %in% rClass) {
rColClasses = class(data[1, 1])
} else {
rColClasses = lapply(data, class)
rColClasses[grepl("factor", rColClasses)] = "factor"
}
if (!useTypes) {
data = do.call(cbind, lapply(data, function(x) {
if (class(x) == "Date")
as.character(x, format = "%m/%d/%Y")
else
as.character(x)
}))
data = as.matrix(data, rownames.force = TRUE)
cols = NULL
} else {
# get column data types
col_typs = get_col_types(data)
# format date for display
dt_inds = which(col_typs == "date")
if (length(dt_inds) > 0L) {
for (i in dt_inds)
data[, i] = as.character(data[, i], format = "%m/%d/%Y")
}
cols = lapply(seq_along(col_typs), function(i) {
type = col_typs[i]
if (type == "factor") {
# data_fact = data.frame(level = levels(data[, i]),
# label = labels(data[, i]))
res = list(type = "dropdown",
source = levels(data[, i]),
allowInvalid = FALSE
# handsontable = list(
# colHeaders = FALSE, #c("Label", "Level"),
# data = levels(data[, i]) #jsonlite::toJSON(data_fact, na = "string",
# #rownames = FALSE)
# )
)
} else if (type == "numeric") {
res = list(type = "numeric",
format = "0.00")
} else if (type == "integer") {
res = list(type = "numeric",
format = "0")
} else if (type == "date") {
res = list(type = "date",
correctFormat = TRUE,
dateFormat = "MM/DD/YYYY")
} else {
res = list(type = type)
}
res$readOnly = readOnly
res$renderer = JS("customRenderer")
res
})
}
x = list(
data = jsonlite::toJSON(data, na = "string", rownames = FALSE),
rClass = rClass,
rColClasses = rColClasses,
selectCallback = selectCallback,
colHeaders = colHeaders,
rowHeaders = rowHeaders,
columns = cols,
width = width,
height = height
)
# create widget
hot = htmlwidgets::createWidget(
name = 'rhandsontable',
x,
width = width,
height = height,
package = 'rhandsontable',
sizingPolicy = htmlwidgets::sizingPolicy(
padding = 5,
defaultHeight = "100%",
defaultWidth = "100%"
)
)
if (!is.null(readOnly)) {
for (x in hot$x$colHeaders)
hot = hot %>% hot_col(x, readOnly = readOnly)
}
hot = hot %>% hot_table(enableComments = !is.null(comments), ...)
if (!is.null(comments)) {
inds = which(!is.na(comments), arr.ind = TRUE)
for (i in 1:nrow(inds))
hot = hot %>%
hot_cell(inds[i, "row"], inds[i, "col"],
comment = comments[inds[i, "row"], inds[i, "col"]])
}
hot
}
#' Handsontable widget
#'
#' Configure table. See
#' \href{http://handsontable.com}{Handsontable.js} for details.
#'
#' @param hot rhandsontable object
#' @param contextMenu logical enabling the right-click menu
#' @param stretchH character describing column stretching. Options are 'all', 'right',
#' and 'none'. See \href{http://docs.handsontable.com/0.15.1/demo-stretching.html}{Column stretching} for details.
#' @param customBorders json object. See
#' \href{http://handsontable.com/demo/custom_borders.html}{Custom borders} for details.
#' @param groups json object. See
#' \href{http://docs.handsontable.com/0.16.1/demo-grouping-and-ungrouping.html}{Grouping & ungrouping of rows and columns} for details.
#' @param highlightRow logical enabling row highlighting for the selected
#' cell
#' @param highlightCol logical enabling column highlighting for the
#' selected cell
#' @param enableComments logical enabling comments in the table
#' @param ... passed to \href{http://handsontable.com}{Handsontable.js} constructor
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF) %>%
#' hot_table(highlightCol = TRUE, highlightRow = TRUE)
#' @seealso \code{\link{rhandsontable}}
#' @export
hot_table = function(hot, contextMenu = TRUE, stretchH = "none",
customBorders = NULL, groups = NULL, highlightRow = NULL,
highlightCol = NULL, enableComments = FALSE,
...) {
if (!is.null(stretchH)) hot$x$stretchH = stretchH
if (!is.null(customBorders)) hot$x$customBorders = customBorders
if (!is.null(groups)) hot$x$groups = groups
if (!is.null(enableComments)) hot$x$comments = enableComments
if ((!is.null(highlightRow) && highlightRow) ||
(!is.null(highlightCol) && highlightCol))
hot$x$ishighlight = TRUE
if (!is.null(highlightRow) && highlightRow)
hot$x$currentRowClassName = "currentRow"
if (!is.null(highlightCol) && highlightCol)
hot$x$currentColClassName = "currentCol"
if (!is.null(contextMenu) && contextMenu)
hot = hot %>%
hot_context_menu(allowComments = enableComments,
allowCustomBorders = !is.null(customBorders),
allowColEdit = is.null(hot$x$columns), ...)
else
hot$x$contextMenu = FALSE
if (!is.null(list(...)))
hot$x = c(hot$x, list(...))
hot
}
#' Handsontable widget
#'
#' Configure the options for the right-click context menu. See
#' \href{http://docs.handsontable.com/0.16.1/demo-context-menu.html}{Context Menu} and
#' \href{http://swisnl.github.io/jQuery-contextMenu/docs.html}{jquery contextMenu}
#' for details.
#'
#' @param hot rhandsontable object
#' @param allowRowEdit logical enabling row editing
#' @param allowColEdit logical enabling column editing. Note that
#' Handsontable does not support column add/remove when column types
#' are defined (i.e. useTypes == TRUE in rhandsontable).
#' @param allowReadOnly logical enabling read-only toggle
#' @param allowComments logical enabling comments
#' @param allowCustomBorders logical enabling custom borders
#' @param customOpts list
#' @param ... ignored
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF) %>%
#' hot_context_menu(allowRowEdit = FALSE, allowColEdit = FALSE)
#' @export
hot_context_menu = function(hot, allowRowEdit = TRUE, allowColEdit = TRUE,
allowReadOnly = FALSE, allowComments = FALSE,
allowCustomBorders = FALSE,
customOpts = NULL, ...) {
if (!is.null(hot$x$contextMenu) && is.logical(hot$x$contextMenu) &&
!hot$x$contextMenu)
warning("The context menu was disabled but will be re-enabled (hot_context_menu)")
  if (!is.null(hot$x$columns) && allowColEdit)
warning("Handsontable.js does not support column add/delete when column types ",
"are defined. Set useTypes = FALSE in rhandsontable to enable column ",
"edits.")
if (is.null(hot$x$contextMenu$items))
opts = list()
else
opts = hot$x$contextMenu$items
sep_ct = 1
add_opts = function(new, old, sep = TRUE, val = list()) {
new_ = lapply(new, function(x) val)
names(new_) = new
if (length(old) > 0 && sep) {
old[[paste0("hsep", sep_ct)]] = list(name = "---------")
sep_ct <<- sep_ct + 1
modifyList(old, new_)
} else if (!sep) {
modifyList(old, new_)
} else {
new_
}
}
remove_opts = function(new) {
add_opts(new, opts, sep = FALSE, val = NULL)
}
if (!is.null(allowRowEdit) && allowRowEdit)
opts = add_opts(c("row_above", "row_below", "remove_row"), opts)
else
opts = remove_opts(c("row_above", "row_below", "remove_row"))
if (!is.null(allowColEdit) && allowColEdit)
opts = add_opts(c("col_left", "col_right", "remove_col"), opts)
else
opts = remove_opts(c("col_left", "col_right", "remove_col"))
opts = add_opts(c("undo", "redo"), opts)
opts = add_opts(c("alignment"), opts)
if (!is.null(allowReadOnly) && allowReadOnly)
opts = add_opts(c("make_read_only"), opts)
else
opts = remove_opts(c("make_read_only"))
if (!is.null(allowComments) && allowComments)
opts = add_opts(c("commentsAddEdit", "commentsRemove"), opts)
else
opts = remove_opts(c("commentsAddEdit", "commentsRemove"))
if (!is.null(allowCustomBorders) && allowCustomBorders)
opts = add_opts(c("borders"), opts)
else
opts = remove_opts(c("borders"))
if (!is.null(customOpts)) {
opts[[paste0("hsep", sep_ct)]] = list(name = "---------")
sep_ct = sep_ct + 1
opts = modifyList(opts, customOpts)
}
hot$x$contextMenu = list(items = opts)
hot
}
#' Handsontable widget
#'
#' Configure multiple columns.
#'
#' @param hot rhandsontable object
#' @param colWidths a scalar or numeric vector of column widths
#' @param columnSorting logical enabling row sorting. Sorting only alters the
#' table presentation and the original dataset row order is maintained.
#' @param manualColumnMove logical enabling column drag-and-drop reordering
#' @param manualColumnResize logical enabline column width resizing
#' @param fixedColumnsLeft a numeric vector indicating which columns should be
#' frozen on the left
#' @param ... passed to hot_col
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF) %>%
#' hot_cols(columnSorting = TRUE)
#' @seealso \code{\link{hot_col}}, \code{\link{hot_rows}}, \code{\link{hot_cell}}
#' @export
hot_cols = function(hot, colWidths = NULL, columnSorting = NULL,
manualColumnMove = NULL, manualColumnResize = NULL,
fixedColumnsLeft = NULL, ...) {
if (!is.null(colWidths)) hot$x$colWidths = colWidths
if (!is.null(columnSorting)) hot$x$columnSorting = columnSorting
if (!is.null(manualColumnMove)) hot$x$manualColumnMove = manualColumnMove
if (!is.null(manualColumnResize)) hot$x$manualColumnResize = manualColumnResize
if (!is.null(fixedColumnsLeft)) hot$x$fixedColumnsLeft = fixedColumnsLeft
for (x in hot$x$colHeaders)
hot = hot %>% hot_col(x, ...)
hot
}
#' Handsontable widget
#'
#' Configure single column.
#'
#' @param hot rhandsontable object
#' @param col vector of column names or indices
#' @param type character specify the data type. Options include:
#' numeric, date, checkbox, select, dropdown, autocomplete, password,
#' and handsontable (not implemented yet)
#' @param format characer specifying column format. See Cell Types at
#' \href{http://handsontable.com}{Handsontable.js} for the formatting
#' options for each data type. Numeric columns are formatted using
#' \href{http://numeraljs.com}{Numeral.js}.
#' @param source a vector of choices for select, dropdown and autocomplete
#' column types
#' @param strict logical specifying whether values not in the \code{source}
#' vector will be accepted
#' @param readOnly logical making the table read-only
#' @param validator character defining a Javascript function to be used
#' to validate user input. See \code{hot_validate_numeric} and
#' \code{hot_validate_character} for pre-build validators.
#' @param allowInvalid logical specifying whether invalid data will be
#' accepted. Invalid data cells will be color red.
#' @param halign character defining the horizontal alignment. Possible
#' values are htLeft, htCenter, htRight and htJustify
#' @param valign character defining the vertical alignment. Possible
#' values are htTop, htMiddle, htBottom
#' @param renderer character defining a Javascript function to be used
#' to format column cells. Can be used to implement conditional formatting.
#' @param copyable logical defining whether data in a cell can be copied using
#' Ctrl + C
#' @param dateFormat character defining the date format. See
#' {https://github.com/moment/moment}{Moment.js} for details.
#' @param ... passed to handsontable
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF, rowHeaders = NULL) %>%
#' hot_col(col = "big", type = "dropdown", source = LETTERS) %>%
#' hot_col(col = "small", type = "autocomplete", source = letters,
#' strict = FALSE)
#' @seealso \code{\link{hot_cols}}, \code{\link{hot_rows}}, \code{\link{hot_cell}}
#' @export
hot_col = function(hot, col, type = NULL, format = NULL, source = NULL,
strict = NULL, readOnly = NULL, validator = NULL,
allowInvalid = NULL, halign = NULL, valign = NULL,
renderer = NULL, copyable = NULL, dateFormat = NULL, ...) {
cols = hot$x$columns
if (is.null(cols)) {
# create a columns list
warning("rhandsontable column types were previously not defined but are ",
"now being set to 'text' to support column properties")
cols = lapply(hot$x$colHeaders, function(x) {
list(type = "text")
})
}
for (i in col) {
if (is.character(i)) i = which(hot$x$colHeaders == i)
if (!is.null(type)) cols[[i]]$type = type
if (!is.null(format)) cols[[i]]$format = format
if (!is.null(dateFormat)) cols[[i]]$dateFormat = dateFormat
if (!is.null(source)) cols[[i]]$source = source
if (!is.null(strict)) cols[[i]]$strict = strict
if (!is.null(readOnly)) cols[[i]]$readOnly = readOnly
if (!is.null(copyable)) cols[[i]]$copyable = copyable
if (!is.null(validator)) cols[[i]]$validator = JS(validator)
if (!is.null(allowInvalid)) cols[[i]]$allowInvalid = allowInvalid
if (!is.null(renderer)) cols[[i]]$renderer = JS(renderer)
if (!is.null(list(...)))
cols[[i]] = c(cols[[i]], list(...))
className = c(halign, valign)
if (!is.null(className)) {
cols[[i]]$className = className
}
}
hot$x$columns = cols
hot
}
#' Handsontable widget
#'
#' Configure rows. See
#' \href{http://handsontable.com}{Handsontable.js} for details.
#'
#' @param hot rhandsontable object
#' @param rowHeights a scalar or numeric vector of row heights
#' @param fixedRowsTop a numeric vector indicating which rows should be
#' frozen on the top
#' @examples
#' library(rhandsontable)
#' MAT = matrix(rnorm(50), nrow = 10, dimnames = list(LETTERS[1:10],
#' letters[1:5]))
#'
#' rhandsontable(MAT, width = 300, height = 150) %>%
#' hot_cols(colWidths = 100, fixedColumnsLeft = 1) %>%
#' hot_rows(rowHeights = 50, fixedRowsTop = 1)
#' @seealso \code{\link{hot_cols}}, \code{\link{hot_cell}}
#' @export
hot_rows = function(hot, rowHeights = NULL, fixedRowsTop = NULL) {
if (!is.null(rowHeights)) hot$x$rowHeights = rowHeights
if (!is.null(fixedRowsTop)) hot$x$fixedRowsTop = fixedRowsTop
hot
}
#' Handsontable widget
#'
#' Configure single cell. See
#' \href{http://handsontable.com}{Handsontable.js} for details.
#'
#' @param hot rhandsontable object
#' @param row numeric row index
#' @param col numeric column index
#' @param comment character comment to add to cell
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF, readOnly = TRUE) %>%
#' hot_cell(1, 1, "Test comment")
#' @seealso \code{\link{hot_cols}}, \code{\link{hot_rows}}
#' @export
hot_cell = function(hot, row, col, comment = NULL) {
cell = list(row = row - 1, col = col - 1, comment = comment)
hot$x$cell = c(hot$x$cell, list(cell))
if (is.null(hot$x$comments))
hot = hot %>% hot_table(comments = TRUE)
hot
}
#' Handsontable widget
#'
#' Add numeric validation to a column
#'
#' @param hot rhandsontable object
#' @param cols vector of column names or indices
#' @param min minimum value to accept
#' @param max maximum value to accept
#' @param choices a vector of acceptable numeric choices. It will be evaluated
#' after min and max if specified.
#' @param exclude a vector or unacceptable numeric values
#' @param allowInvalid logical specifying whether invalid data will be
#' accepted. Invalid data cells will be color red.
#' @examples
#' library(rhandsontable)
#' MAT = matrix(rnorm(50), nrow = 10, dimnames = list(LETTERS[1:10],
#' letters[1:5]))
#'
#' rhandsontable(MAT * 10) %>%
#' hot_validate_numeric(col = 1, min = -50, max = 50, exclude = 40)
#'
#' rhandsontable(MAT * 10) %>%
#' hot_validate_numeric(col = 1, choices = c(10, 20, 40))
#' @seealso \code{\link{hot_validate_character}}
#' @export
hot_validate_numeric = function(hot, cols, min = NULL, max = NULL,
choices = NULL, exclude = NULL,
allowInvalid = FALSE) {
f = "function (value, callback) {
setTimeout(function(){
if (isNaN(parseFloat(value))) {
callback(false);
}
%exclude
%min
%max
%choices
callback(true);
}, 500)
}"
if (!is.null(exclude))
ex_str = paste0("if ([",
paste0(paste0("'", exclude, "'"), collapse = ","),
"].indexOf(value) > -1) { callback(false); }")
else
ex_str = ""
f = gsub("%exclude", ex_str, f)
if (!is.null(min))
min_str = paste0("if (value < ", min, ") { callback(false); }")
else
min_str = ""
f = gsub("%min", min_str, f)
if (!is.null(max))
max_str = paste0("if (value > ", max, ") { callback(false); }")
else
max_str = ""
f = gsub("%max", max_str, f)
if (!is.null(choices))
chcs_str = paste0("if ([",
paste0(paste0("'", choices, "'"), collapse = ","),
"].indexOf(value) == -1) { callback(false); }")
else
chcs_str = ""
f = gsub("%choices", chcs_str, f)
for (x in cols)
hot = hot %>% hot_col(x, validator = f,
allowInvalid = allowInvalid)
hot
}
#' Handsontable widget
#'
#' Add numeric validation to a column
#'
#' @param hot rhandsontable object
#' @param cols vector of column names or indices
#' @param choices a vector of acceptable numeric choices. It will be evaluated
#' after min and max if specified.
#' @param allowInvalid logical specifying whether invalid data will be
#' accepted. Invalid data cells will be color red.
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF) %>%
#' hot_validate_character(col = "big", choices = LETTERS[1:10])
#' @seealso \code{\link{hot_validate_numeric}}
#' @export
hot_validate_character = function(hot, cols, choices,
allowInvalid = FALSE) {
f = "function (value, callback) {
setTimeout(function() {
if (typeof(value) != 'string') {
callback(false);
}
%choices
callback(false);
}, 500)
}"
ch_str = paste0("if ([",
paste0(paste0("'", choices, "'"), collapse = ","),
"].indexOf(value) > -1) { callback(true); }")
f = gsub("%choices", ch_str, f)
for (x in cols)
hot = hot %>% hot_col(x, validator = f,
allowInvalid = allowInvalid)
hot
}
#' Handsontable widget
#'
#' Add heatmap to table. See
#' \href{http://handsontable.com/demo/heatmaps.html}{Heatmaps for values in a column}
#' for details.
#'
#' @param hot rhandsontable object
#' @param cols numeric vector of columns to include in the heatmap. If missing
#' all columns are used.
#' @param color_scale character vector that includes the lower and upper
#' colors
#' @param renderer character defining a Javascript function to be used
#' to determine the cell colors. If missing,
#' \code{rhandsontable:::renderer_heatmap} is used.
#' @examples
#' MAT = matrix(rnorm(50), nrow = 10, dimnames = list(LETTERS[1:10],
#' letters[1:5]))
#'
#'rhandsontable(MAT) %>%
#' hot_heatmap()
#' @export
hot_heatmap = function(hot, cols, color_scale = c("#ED6D47", "#17F556"),
renderer = NULL) {
if (is.null(renderer)) {
renderer = renderer_heatmap(color_scale)
}
if (missing(cols))
cols = seq_along(hot$x$colHeaders)
for (x in hot$x$colHeaders[cols])
hot = hot %>% hot_col(x, renderer = renderer)
hot
}
# Used by hot_heatmap
renderer_heatmap = function(color_scale) {
renderer = gsub("\n", "", "
function (instance, td, row, col, prop, value, cellProperties) {
Handsontable.renderers.TextRenderer.apply(this, arguments);
heatmapScale = chroma.scale(['%s1', '%s2']);
if (instance.heatmap[col]) {
mn = instance.heatmap[col].min;
mx = instance.heatmap[col].max;
pt = (parseInt(value, 10) - mn) / (mx - mn);
td.style.backgroundColor = heatmapScale(pt).hex();
}
}
")
renderer = gsub("%s1", color_scale[1], renderer)
renderer = gsub("%s2", color_scale[2], renderer)
renderer
}
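# A quick sketch of how the template substitution above is used (the colors
# shown are hot_heatmap's defaults):
#
#   js = renderer_heatmap(c("#ED6D47", "#17F556"))
#   # js now contains chroma.scale(['#ED6D47', '#17F556']) in place of
#   # %s1/%s2 and can be supplied directly, e.g. hot_col(hot, 1, renderer = js)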
#' Handsontable widget
#'
#' Shiny bindings for rhandsontable
#'
#' @param outputId output variable to read from
#' @param width,height Must be a valid CSS unit (e.g. \code{"400px"} or
#' \code{"auto"}) or a number, which will be coerced to a string and have
#' \code{"px"} appended.
#' @seealso \code{\link{renderRHandsontable}}
#' @export
rHandsontableOutput <- function(outputId, width = "100%", height = "100%"){
htmlwidgets::shinyWidgetOutput(outputId, 'rhandsontable', width, height,
package = 'rhandsontable')
}
#' Handsontable widget
#'
#' Shiny bindings for rhandsontable
#'
#' @param expr An expression that generates an rhandsontable widget.
#' @param env The environment in which to evaluate \code{expr}.
#' @param quoted Is \code{expr} a quoted expression (with \code{quote()})? This
#' is useful if you want to save an expression in a variable.
#' @seealso \code{\link{rHandsontableOutput}}, \code{\link{hot_to_r}}
#' @export
renderRHandsontable <- function(expr, env = parent.frame(), quoted = FALSE) {
if (!quoted) { expr <- substitute(expr) } # force quoted
htmlwidgets::shinyRenderWidget(expr, rHandsontableOutput, env, quoted = TRUE)
}
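# A minimal shiny sketch wiring the two bindings together (illustrative only;
# the output id "hot" is hypothetical and shiny is assumed to be installed):
#
#   library(shiny)
#   library(rhandsontable)
#   ui = fluidPage(rHandsontableOutput("hot"))
#   server = function(input, output) {
#     output$hot = renderRHandsontable(rhandsontable(mtcars))
#   }
#   shinyApp(ui, server)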
#' Handsontable widget
#'
#' Convert handsontable data to R object. Can be used in a \code{shiny} app
#' to convert the input json to an R dataset.
#'
#' @param ... passed to \code{rhandsontable:::toR}
#' @seealso \code{\link{rHandsontableOutput}}
#' @export
hot_to_r = function(...) {
do.call(toR, ...)
}
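# Illustrative use inside a shiny server function (the input id "hot" is
# hypothetical and matches the rHandsontableOutput id):
#
#   observe({
#     if (!is.null(input$hot)) {
#       DF = hot_to_r(input$hot)  # convert the widget's json back to an R object
#       # ... use DF ...
#     }
#   })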
#' Handsontable widget
#'
#' Create a \href{http://handsontable.com}{Handsontable.js} widget.
#'
#' For full documentation on the package, visit \url{http://jrowen.github.io/rhandsontable/}
#' @param data a \code{data.table}, \code{data.frame} or \code{matrix}
#' @param colHeaders a vector of column names. If missing \code{colnames}
#' will be used. Setting to \code{NULL} will omit.
#' @param rowHeaders a vector of row names. If missing \code{rownames}
#' will be used. Setting to \code{NULL} will omit.
#' @param comments matrix or data.frame of comments; NA values are ignored
#' @param useTypes logical specifying whether column classes should be mapped to
#' equivalent Javascript types. Note that
#' Handsontable does not support column add/remove when column types
#' are defined (i.e. useTypes == TRUE in rhandsontable).
#' @param readOnly logical specifying whether the table is editable
#' @param selectCallback logical enabling the afterSelect event to return data.
#' This can be used with shiny to tie updates to a selected table cell.
#' @param width numeric table width
#' @param height numeric table height
#' @param ... passed to hot_table
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF, rowHeaders = NULL)
#' @seealso \code{\link{hot_table}}, \code{\link{hot_cols}}, \code{\link{hot_rows}}, \code{\link{hot_cell}}
#' @export
rhandsontable <- function(data, colHeaders, rowHeaders, comments = NULL,
useTypes = TRUE, readOnly = NULL,
selectCallback = FALSE,
width = NULL, height = NULL, ...) {
if (missing(colHeaders))
colHeaders = colnames(data)
if (missing(rowHeaders))
rowHeaders = rownames(data)
rClass = class(data)
if ("matrix" %in% rClass) {
rColClasses = class(data[1, 1])
} else {
rColClasses = lapply(data, class)
rColClasses[grepl("factor", rColClasses)] = "factor"
}
if (!useTypes) {
data = do.call(cbind, lapply(data, function(x) {
if (class(x) == "Date")
as.character(x, format = "%m/%d/%Y")
else
as.character(x)
}))
data = as.matrix(data, rownames.force = TRUE)
cols = NULL
} else {
# get column data types
col_typs = get_col_types(data)
# format date for display
dt_inds = which(col_typs == "date")
if (length(dt_inds) > 0L) {
for (i in dt_inds)
data[, i] = as.character(data[, i], format = "%m/%d/%Y")
}
cols = lapply(seq_along(col_typs), function(i) {
type = col_typs[i]
if (type == "factor") {
# data_fact = data.frame(level = levels(data[, i]),
# label = labels(data[, i]))
res = list(type = "dropdown",
source = levels(data[, i]),
allowInvalid = FALSE
# handsontable = list(
# colHeaders = FALSE, #c("Label", "Level"),
# data = levels(data[, i]) #jsonlite::toJSON(data_fact, na = "string",
# #rownames = FALSE)
# )
)
} else if (type == "numeric") {
res = list(type = "numeric",
format = "0.00")
} else if (type == "integer") {
res = list(type = "numeric",
format = "0")
} else if (type == "date") {
res = list(type = "date",
correctFormat = TRUE,
dateFormat = "MM/DD/YYYY")
} else {
res = list(type = type)
}
res$readOnly = readOnly
res$renderer = JS("customRenderer")
res
})
}
x = list(
data = jsonlite::toJSON(data, na = "string", rownames = FALSE),
rClass = rClass,
rColClasses = rColClasses,
selectCallback = selectCallback,
colHeaders = colHeaders,
rowHeaders = rowHeaders,
columns = cols,
width = width,
height = height
)
# create widget
hot = htmlwidgets::createWidget(
name = 'rhandsontable',
x,
width = width,
height = height,
package = 'rhandsontable',
sizingPolicy = htmlwidgets::sizingPolicy(
padding = 5,
defaultHeight = "100%",
defaultWidth = "100%"
)
)
if (!is.null(readOnly)) {
for (x in hot$x$colHeaders)
hot = hot %>% hot_col(x, readOnly = readOnly)
}
hot = hot %>% hot_table(enableComments = !is.null(comments), ...)
if (!is.null(comments)) {
inds = which(!is.na(comments), arr.ind = TRUE)
    for (i in seq_len(nrow(inds)))
hot = hot %>%
hot_cell(inds[i, "row"], inds[i, "col"],
comment = comments[inds[i, "row"], inds[i, "col"]])
}
hot
}
#' Handsontable widget
#'
#' Configure table. See
#' \href{http://handsontable.com}{Handsontable.js} for details.
#'
#' @param hot rhandsontable object
#' @param contextMenu logical enabling the right-click menu
#' @param stretchH character describing column stretching. Options are 'all', 'right',
#' and 'none'. See \href{http://docs.handsontable.com/0.15.1/demo-stretching.html}{Column stretching} for details.
#' @param customBorders json object. See
#' \href{http://handsontable.com/demo/custom_borders.html}{Custom borders} for details.
#' @param groups json object. See
#' \href{http://docs.handsontable.com/0.16.1/demo-grouping-and-ungrouping.html}{Grouping & ungrouping of rows and columns} for details.
#' @param highlightRow logical enabling row highlighting for the selected
#' cell
#' @param highlightCol logical enabling column highlighting for the
#' selected cell
#' @param enableComments logical enabling comments in the table
#' @param ... passed to \href{http://handsontable.com}{Handsontable.js} constructor
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF) %>%
#' hot_table(highlightCol = TRUE, highlightRow = TRUE)
#' @seealso \code{\link{rhandsontable}}
#' @export
hot_table = function(hot, contextMenu = TRUE, stretchH = "none",
customBorders = NULL, groups = NULL, highlightRow = NULL,
highlightCol = NULL, enableComments = FALSE,
...) {
if (!is.null(stretchH)) hot$x$stretchH = stretchH
if (!is.null(customBorders)) hot$x$customBorders = customBorders
if (!is.null(groups)) hot$x$groups = groups
if (!is.null(enableComments)) hot$x$comments = enableComments
if ((!is.null(highlightRow) && highlightRow) ||
(!is.null(highlightCol) && highlightCol))
hot$x$ishighlight = TRUE
if (!is.null(highlightRow) && highlightRow)
hot$x$currentRowClassName = "currentRow"
if (!is.null(highlightCol) && highlightCol)
hot$x$currentColClassName = "currentCol"
if (!is.null(contextMenu) && contextMenu)
hot = hot %>%
hot_context_menu(allowComments = enableComments,
allowCustomBorders = !is.null(customBorders),
allowColEdit = is.null(hot$x$columns), ...)
else
hot$x$contextMenu = FALSE
  if (length(list(...)) > 0)
hot$x = c(hot$x, list(...))
hot
}
#' Handsontable widget
#'
#' Configure the options for the right-click context menu. See
#' \href{http://docs.handsontable.com/0.16.1/demo-context-menu.html}{Context Menu} and
#' \href{http://swisnl.github.io/jQuery-contextMenu/docs.html}{jquery contextMenu}
#' for details.
#'
#' @param hot rhandsontable object
#' @param allowRowEdit logical enabling row editing
#' @param allowColEdit logical enabling column editing. Note that
#' Handsontable does not support column add/remove when column types
#' are defined (i.e. useTypes == TRUE in rhandsontable).
#' @param allowReadOnly logical enabling read-only toggle
#' @param allowComments logical enabling comments
#' @param allowCustomBorders logical enabling custom borders
#' @param customOpts a named list of custom context menu items, merged into
#'   the menu via \code{modifyList}
#' @param ... ignored
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF) %>%
#' hot_context_menu(allowRowEdit = FALSE, allowColEdit = FALSE)
#' @export
hot_context_menu = function(hot, allowRowEdit = TRUE, allowColEdit = TRUE,
allowReadOnly = FALSE, allowComments = FALSE,
allowCustomBorders = FALSE,
customOpts = NULL, ...) {
if (!is.null(hot$x$contextMenu) && is.logical(hot$x$contextMenu) &&
!hot$x$contextMenu)
warning("The context menu was disabled but will be re-enabled (hot_context_menu)")
  if (!is.null(hot$x$columns) && allowColEdit)
warning("Handsontable.js does not support column add/delete when column types ",
"are defined. Set useTypes = FALSE in rhandsontable to enable column ",
"edits.")
if (is.null(hot$x$contextMenu$items))
opts = list()
else
opts = hot$x$contextMenu$items
sep_ct = 1
add_opts = function(new, old, sep = TRUE, val = list()) {
new_ = lapply(new, function(x) val)
names(new_) = new
if (length(old) > 0 && sep) {
old[[paste0("hsep", sep_ct)]] = list(name = "---------")
sep_ct <<- sep_ct + 1
modifyList(old, new_)
} else if (!sep) {
modifyList(old, new_)
} else {
new_
}
}
remove_opts = function(new) {
add_opts(new, opts, sep = FALSE, val = NULL)
}
if (!is.null(allowRowEdit) && allowRowEdit)
opts = add_opts(c("row_above", "row_below", "remove_row"), opts)
else
opts = remove_opts(c("row_above", "row_below", "remove_row"))
if (!is.null(allowColEdit) && allowColEdit)
opts = add_opts(c("col_left", "col_right", "remove_col"), opts)
else
opts = remove_opts(c("col_left", "col_right", "remove_col"))
opts = add_opts(c("undo", "redo"), opts)
opts = add_opts(c("alignment"), opts)
if (!is.null(allowReadOnly) && allowReadOnly)
opts = add_opts(c("make_read_only"), opts)
else
opts = remove_opts(c("make_read_only"))
if (!is.null(allowComments) && allowComments)
opts = add_opts(c("commentsAddEdit", "commentsRemove"), opts)
else
opts = remove_opts(c("commentsAddEdit", "commentsRemove"))
if (!is.null(allowCustomBorders) && allowCustomBorders)
opts = add_opts(c("borders"), opts)
else
opts = remove_opts(c("borders"))
if (!is.null(customOpts)) {
opts[[paste0("hsep", sep_ct)]] = list(name = "---------")
sep_ct = sep_ct + 1
opts = modifyList(opts, customOpts)
}
hot$x$contextMenu = list(items = opts)
hot
}
#' Handsontable widget
#'
#' Configure multiple columns.
#'
#' @param hot rhandsontable object
#' @param colWidths a scalar or numeric vector of column widths
#' @param columnSorting logical enabling row sorting. Sorting only alters the
#' table presentation and the original dataset row order is maintained.
#' @param manualColumnMove logical enabling column drag-and-drop reordering
#' @param manualColumnResize logical enabling column width resizing
#' @param fixedColumnsLeft a numeric vector indicating which columns should be
#' frozen on the left
#' @param ... passed to hot_col
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF) %>%
#' hot_cols(columnSorting = TRUE)
#' @seealso \code{\link{hot_col}}, \code{\link{hot_rows}}, \code{\link{hot_cell}}
#' @export
hot_cols = function(hot, colWidths = NULL, columnSorting = NULL,
manualColumnMove = NULL, manualColumnResize = NULL,
fixedColumnsLeft = NULL, ...) {
if (!is.null(colWidths)) hot$x$colWidths = colWidths
if (!is.null(columnSorting)) hot$x$columnSorting = columnSorting
if (!is.null(manualColumnMove)) hot$x$manualColumnMove = manualColumnMove
if (!is.null(manualColumnResize)) hot$x$manualColumnResize = manualColumnResize
if (!is.null(fixedColumnsLeft)) hot$x$fixedColumnsLeft = fixedColumnsLeft
for (x in hot$x$colHeaders)
hot = hot %>% hot_col(x, ...)
hot
}
#' Handsontable widget
#'
#' Configure single column.
#'
#' @param hot rhandsontable object
#' @param col vector of column names or indices
#' @param type character specifying the data type. Options include:
#' numeric, date, checkbox, select, dropdown, autocomplete, password,
#' and handsontable (not implemented yet)
#' @param format character specifying the column format. See Cell Types at
#' \href{http://handsontable.com}{Handsontable.js} for the formatting
#' options for each data type. Numeric columns are formatted using
#' \href{http://numeraljs.com}{Numeral.js}.
#' @param source a vector of choices for select, dropdown and autocomplete
#' column types
#' @param strict logical specifying whether values not in the \code{source}
#' vector will be accepted
#' @param readOnly logical making the column read-only
#' @param validator character defining a Javascript function to be used
#' to validate user input. See \code{hot_validate_numeric} and
#' \code{hot_validate_character} for pre-built validators.
#' @param allowInvalid logical specifying whether invalid data will be
#' accepted. Invalid data cells will be colored red.
#' @param halign character defining the horizontal alignment. Possible
#' values are htLeft, htCenter, htRight and htJustify
#' @param valign character defining the vertical alignment. Possible
#' values are htTop, htMiddle, htBottom
#' @param renderer character defining a Javascript function to be used
#' to format column cells. Can be used to implement conditional formatting.
#' @param copyable logical defining whether data in a cell can be copied using
#' Ctrl + C
#' @param dateFormat character defining the date format. See
#' \href{https://github.com/moment/moment}{Moment.js} for details.
#' @param ... passed to handsontable
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF, rowHeaders = NULL) %>%
#' hot_col(col = "big", type = "dropdown", source = LETTERS) %>%
#' hot_col(col = "small", type = "autocomplete", source = letters,
#' strict = FALSE)
#' @seealso \code{\link{hot_cols}}, \code{\link{hot_rows}}, \code{\link{hot_cell}}
#' @export
hot_col = function(hot, col, type = NULL, format = NULL, source = NULL,
strict = NULL, readOnly = NULL, validator = NULL,
allowInvalid = NULL, halign = NULL, valign = NULL,
renderer = NULL, copyable = NULL, dateFormat = NULL, ...) {
cols = hot$x$columns
if (is.null(cols)) {
# create a columns list
warning("rhandsontable column types were previously not defined but are ",
"now being set to 'text' to support column properties")
cols = lapply(hot$x$colHeaders, function(x) {
list(type = "text")
})
}
for (i in col) {
if (is.character(i)) i = which(hot$x$colHeaders == i)
if (!is.null(type)) cols[[i]]$type = type
if (!is.null(format)) cols[[i]]$format = format
if (!is.null(dateFormat)) cols[[i]]$dateFormat = dateFormat
if (!is.null(source)) cols[[i]]$source = source
if (!is.null(strict)) cols[[i]]$strict = strict
if (!is.null(readOnly)) cols[[i]]$readOnly = readOnly
if (!is.null(copyable)) cols[[i]]$copyable = copyable
if (!is.null(validator)) cols[[i]]$validator = JS(validator)
if (!is.null(allowInvalid)) cols[[i]]$allowInvalid = allowInvalid
if (!is.null(renderer)) cols[[i]]$renderer = JS(renderer)
    if (length(list(...)) > 0)
cols[[i]] = c(cols[[i]], list(...))
className = c(halign, valign)
if (!is.null(className)) {
cols[[i]]$className = className
}
}
hot$x$columns = cols
hot
}
#' Handsontable widget
#'
#' Configure rows. See
#' \href{http://handsontable.com}{Handsontable.js} for details.
#'
#' @param hot rhandsontable object
#' @param rowHeights a scalar or numeric vector of row heights
#' @param fixedRowsTop a numeric vector indicating which rows should be
#' frozen on the top
#' @examples
#' library(rhandsontable)
#' MAT = matrix(rnorm(50), nrow = 10, dimnames = list(LETTERS[1:10],
#' letters[1:5]))
#'
#' rhandsontable(MAT, width = 300, height = 150) %>%
#' hot_cols(colWidths = 100, fixedColumnsLeft = 1) %>%
#' hot_rows(rowHeights = 50, fixedRowsTop = 1)
#' @seealso \code{\link{hot_cols}}, \code{\link{hot_cell}}
#' @export
hot_rows = function(hot, rowHeights = NULL, fixedRowsTop = NULL) {
if (!is.null(rowHeights)) hot$x$rowHeights = rowHeights
if (!is.null(fixedRowsTop)) hot$x$fixedRowsTop = fixedRowsTop
hot
}
#' Handsontable widget
#'
#' Configure single cell. See
#' \href{http://handsontable.com}{Handsontable.js} for details.
#'
#' @param hot rhandsontable object
#' @param row numeric row index
#' @param col numeric column index
#' @param comment character comment to add to cell
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF, readOnly = TRUE) %>%
#' hot_cell(1, 1, "Test comment")
#' @seealso \code{\link{hot_cols}}, \code{\link{hot_rows}}
#' @export
hot_cell = function(hot, row, col, comment = NULL) {
cell = list(row = row - 1, col = col - 1, comment = comment)
hot$x$cell = c(hot$x$cell, list(cell))
if (is.null(hot$x$comments))
    hot = hot %>% hot_table(enableComments = TRUE)
hot
}
#' Handsontable widget
#'
#' Add numeric validation to a column
#'
#' @param hot rhandsontable object
#' @param cols vector of column names or indices
#' @param min minimum value to accept
#' @param max maximum value to accept
#' @param choices a vector of acceptable numeric choices. It will be evaluated
#' after min and max if specified.
#' @param exclude a vector of unacceptable numeric values
#' @param allowInvalid logical specifying whether invalid data will be
#' accepted. Invalid data cells will be colored red.
#' @examples
#' library(rhandsontable)
#' MAT = matrix(rnorm(50), nrow = 10, dimnames = list(LETTERS[1:10],
#' letters[1:5]))
#'
#' rhandsontable(MAT * 10) %>%
#' hot_validate_numeric(col = 1, min = -50, max = 50, exclude = 40)
#'
#' rhandsontable(MAT * 10) %>%
#' hot_validate_numeric(col = 1, choices = c(10, 20, 40))
#' @seealso \code{\link{hot_validate_character}}
#' @export
hot_validate_numeric = function(hot, cols, min = NULL, max = NULL,
choices = NULL, exclude = NULL,
allowInvalid = FALSE) {
f = "function (value, callback) {
setTimeout(function(){
if (isNaN(parseFloat(value))) {
callback(false);
}
%exclude
%min
%max
%choices
callback(true);
}, 500)
}"
if (!is.null(exclude))
ex_str = paste0("if ([",
paste0(paste0("'", exclude, "'"), collapse = ","),
"].indexOf(value) > -1) { callback(false); }")
else
ex_str = ""
f = gsub("%exclude", ex_str, f)
if (!is.null(min))
min_str = paste0("if (value < ", min, ") { callback(false); }")
else
min_str = ""
f = gsub("%min", min_str, f)
if (!is.null(max))
max_str = paste0("if (value > ", max, ") { callback(false); }")
else
max_str = ""
f = gsub("%max", max_str, f)
if (!is.null(choices))
chcs_str = paste0("if ([",
paste0(paste0("'", choices, "'"), collapse = ","),
"].indexOf(value) == -1) { callback(false); }")
else
chcs_str = ""
f = gsub("%choices", chcs_str, f)
for (x in cols)
hot = hot %>% hot_col(x, validator = f,
allowInvalid = allowInvalid)
hot
}
#' Handsontable widget
#'
#' Add character validation to a column
#'
#' @param hot rhandsontable object
#' @param cols vector of column names or indices
#' @param choices a vector of acceptable character choices
#' @param allowInvalid logical specifying whether invalid data will be
#' accepted. Invalid data cells will be colored red.
#' @examples
#' library(rhandsontable)
#' DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
#' small = letters[1:10],
#' dt = seq(from = Sys.Date(), by = "days", length.out = 10),
#' stringsAsFactors = FALSE)
#'
#' rhandsontable(DF) %>%
#' hot_validate_character(col = "big", choices = LETTERS[1:10])
#' @seealso \code{\link{hot_validate_numeric}}
#' @export
hot_validate_character = function(hot, cols, choices,
allowInvalid = FALSE) {
f = "function (value, callback) {
setTimeout(function() {
if (typeof(value) != 'string') {
callback(false);
}
%choices
callback(false);
}, 500)
}"
ch_str = paste0("if ([",
paste0(paste0("'", choices, "'"), collapse = ","),
"].indexOf(value) > -1) { callback(true); }")
f = gsub("%choices", ch_str, f)
for (x in cols)
hot = hot %>% hot_col(x, validator = f,
allowInvalid = allowInvalid)
hot
}
#' Handsontable widget
#'
#' Add heatmap to table. See
#' \href{http://handsontable.com/demo/heatmaps.html}{Heatmaps for values in a column}
#' for details.
#'
#' @param hot rhandsontable object
#' @param cols numeric vector of columns to include in the heatmap. If missing
#' all columns are used.
#' @param color_scale character vector that includes the lower and upper
#' colors
#' @param renderer character defining a Javascript function to be used
#' to determine the cell colors. If missing,
#' \code{rhandsontable:::renderer_heatmap} is used.
#' @examples
#' MAT = matrix(rnorm(50), nrow = 10, dimnames = list(LETTERS[1:10],
#' letters[1:5]))
#'
#' rhandsontable(MAT) %>%
#' hot_heatmap()
#' @export
hot_heatmap = function(hot, cols, color_scale = c("#ED6D47", "#17F556"),
renderer = NULL) {
if (is.null(renderer)) {
renderer = renderer_heatmap(color_scale)
}
if (missing(cols))
cols = seq_along(hot$x$colHeaders)
for (x in hot$x$colHeaders[cols])
hot = hot %>% hot_col(x, renderer = renderer)
hot
}
# Used by hot_heatmap
renderer_heatmap = function(color_scale) {
renderer = gsub("\n", "", "
function (instance, td, row, col, prop, value, cellProperties) {
Handsontable.renderers.TextRenderer.apply(this, arguments);
heatmapScale = chroma.scale(['%s1', '%s2']);
if (instance.heatmap[col]) {
mn = instance.heatmap[col].min;
mx = instance.heatmap[col].max;
pt = (parseInt(value, 10) - mn) / (mx - mn);
td.style.backgroundColor = heatmapScale(pt).hex();
}
}
")
renderer = gsub("%s1", color_scale[1], renderer)
renderer = gsub("%s2", color_scale[2], renderer)
renderer
}
#' Handsontable widget
#'
#' Shiny bindings for rhandsontable
#'
#' @param outputId output variable to read from
#' @param width,height Must be a valid CSS unit (like \code{"100\%"}) or a
#' number, which will be coerced to a string and have \code{"px"} appended.
#' @seealso \code{\link{renderRHandsontable}}
#' @export
rHandsontableOutput <- function(outputId, width = "100%", height = "100%"){
htmlwidgets::shinyWidgetOutput(outputId, 'rhandsontable', width, height,
package = 'rhandsontable')
}
#' Handsontable widget
#'
#' Shiny bindings for rhandsontable
#'
#' @param expr An expression that generates an rhandsontable.
#' @param env The environment in which to evaluate \code{expr}.
#' @param quoted Is \code{expr} a quoted expression (with \code{quote()})? This
#' is useful if you want to save an expression in a variable.
#' @seealso \code{\link{rHandsontableOutput}}, \code{\link{hot_to_r}}
#' @export
renderRHandsontable <- function(expr, env = parent.frame(), quoted = FALSE) {
if (!quoted) { expr <- substitute(expr) } # force quoted
htmlwidgets::shinyRenderWidget(expr, rHandsontableOutput, env, quoted = TRUE)
}
#' Handsontable widget
#'
#' Convert handsontable data to R object. Can be used in a \code{shiny} app
#' to convert the input JSON to an R dataset.
#'
#' @param ... passed to \code{rhandsontable:::toR}
#' @seealso \code{\link{rHandsontableOutput}}
#' @export
hot_to_r = function(...) {
do.call(toR, ...)
}
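In a shiny app, `hot_to_r()` is typically called on the widget's input value to recover the edited table as a data frame. A minimal server sketch — the widget id `"hot"` and the use of `mtcars` are illustrative assumptions:

```r
library(shiny)
library(rhandsontable)

server <- function(input, output) {
  # Render an editable table; edits arrive back as JSON in input$hot.
  output$hot <- renderRHandsontable(rhandsontable(mtcars))
  observe({
    if (!is.null(input$hot)) {
      df <- hot_to_r(input$hot)  # edited table converted to a data.frame
      str(df)
    }
  })
}
```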
|
# maRketSim - Load various publicly available bond data sets and compare a bond fund to a portfolio of bonds
# - Federal Reserve Yield Curve for Treasuries on the secondary market - #
URLdir <- "http://federalreserve.gov/releases/h15/data/Monthly/"
datadir <- "/tmp/"
files <- c('M1','M3','M6','Y1','Y2','Y3','Y5','Y7','Y10','Y20','Y30')
for(f in files) {
download.file(paste(URLdir,"H15_TCMNOM_",f,".txt",sep=""),paste(datadir,"H15_TCMNOM_",f,".txt",sep=""))
df <- read.csv(paste(datadir,"H15_TCMNOM_",f,".txt",sep=""),skip=7)
colnames(df) <- c("date","i")
df$mat=switch(substr(f,1,1),M=as.integer(substr(f,2,3))/12,Y=as.integer(substr(f,2,3)))
# Subset out only the half-year periods
mth <- substr(df$date,1,2)
df <- df[mth=="12"|mth=="06",]
if(f==files[1]) {
TreasuryCurve.df <- df
} else {
TreasuryCurve.df <- rbind(TreasuryCurve.df,df)
}
}
splt <- strsplit(as.character(TreasuryCurve.df$date),"/")
TreasuryCurve.df$t <- sapply(splt,function(x) as.integer(x[2])+(as.integer(x[1]))/12)
TreasuryCurve.df <- subset(TreasuryCurve.df,select=c(-date))
TreasuryCurve.df$i <- as.numeric(TreasuryCurve.df$i)/100 # "ND" for no data will be coerced to NAs
# Convert to history.market object
TreasuryCurve.hm <- read.yield.curve(TreasuryCurve.df,drop.if.no.MMrate=TRUE)
plot(TreasuryCurve.hm,main="Treasury Yield Data")
# Compare performance of a rolling portfolio to a bond fund of similar duration
fund.dur <- 5.2 # VFITX ave duration in 2010
fund.weights <- c(1003791750,633466400,476050460,416342700,390484680,355996830,313689000,293081600,264752800,250087200,215642640,210906020,203584120,147038700,136450020,134760780,131523750,122942160,105412320,100536600,91462500,74025360,62334600,54714020,53347530,48630240,42979800,36484500,21205400,14360594,10504410,9177500,7037170,1142886,361589) # market values of the bonds in the fund
fund.weights <- fund.weights/min(fund.weights) # standardize
fund.mats <- c(65,104,41,66,73,101,45,76,43,51,56,53,75,42,74,89,37,98,131,107,40,67,77,49,7,30,125,128,15,38,119,27,16,66,55)/12 # monthly maturities of bonds in fund, converted to annual
fund.dur.sd <- sd( rep(fund.mats,fund.weights) ) #weighted SD of the maturities of the holdings of VFITX (source https://personal.vanguard.com/us/FundsAllHoldings?FundId=0035&FundIntExt=INT&tableName=Bond&tableIndex=0&sort=marketValue&sortOrder=desc)
prt <- genPortfolio.bond(
n=50,
mkt=TreasuryCurve.hm[[1]],
dur=fund.dur,
dur.sd=fund.dur.sd,
name="Rolling Ladder",
type="random.constrained"
)
cat("Our generated portfolio has a duration of",round(summary(prt)$portfolio.sum$dur,3),"compared to the fund's duration of",fund.dur,"\n")
acct <- account(prts=list(prt),hist.mkt=TreasuryCurve.hm)
sum.acct <- summary(acct,t=2010.5,rebal.function=rebal.function.default,rebal.function.args=list(min.bond.size=1000,new.bond.dur=fund.dur) )
plot(sum.acct$history.account)
# - Vanguard Intermediate Treasury Fund - #
# -- Price data -- #
pr.df <- VFITX_prices
pr.df$Date <- as.Date(pr.df$Date)
pr.df$month <- format(pr.df$Date,format="%m")
pr.df <- subset(pr.df,month=="06"|month=="12",select=c("Date","Close","month"))
pr.df$month <- as.numeric(pr.df$month)
pr.df$year <- as.numeric(format(pr.df$Date,format="%Y"))
pr.df$day <- as.numeric(format(pr.df$Date,format="%d"))
# Select last available day of each month
by.res <- by(pr.df,list(pr.df$month,pr.df$year),function(x) x[x$day==max(x$day),] )
pr.df <- by.res[[1]]
for(i in seq(2,length(by.res))) {
if(!is.null(by.res[[i]])) {
pr.df <- rbind(pr.df,by.res[[i]])
}
}
pr.df <- subset(pr.df,select=c("Close","month","year"))
names(pr.df)[names(pr.df)=="Close"] <- "p"
# -- Dividend data -- #
div.df <- VFITX_div
div.df$Date <- as.Date(div.df$Date)
div.df$month <- as.numeric(format(div.df$Date,format="%m"))
div.df$year <- as.numeric(format(div.df$Date,format="%Y"))
# Aggregate into 6-month dividends
div.df$month[div.df$month>=1&div.df$month<=6] <- 6
div.df$month[div.df$month>=7&div.df$month<=12] <- 12
by.res <- by(div.df,list(div.df$month,div.df$year),
function(x) {
return(data.frame(div=sum(x$Dividends),N=length(x$Dividends)))
})
div.df <- by.res[[1]]
for(i in seq(2,length(by.res))) {
if(!is.null(by.res[[i]])) {
div.df <- rbind(div.df,by.res[[i]])
}
}
div.df$month <- as.numeric(rep(dimnames(by.res)[[1]],length(dimnames(by.res)[[2]])))
div.df$year <- as.numeric(rep(dimnames(by.res)[[2]],each=length(dimnames(by.res)[[1]])))
div.df <- subset(div.df,N>=6,select=c(-N)) # Exclude partial half-years
# -- Combine dividend and price data -- #
vfitx.df <- merge(div.df,pr.df)
# -- Calculate comparable performance -- #
vfitx.df <- subset(vfitx.df,year>=2002)
vfitx.df <- sort.data.frame(vfitx.df,~year+month)
N <- nrow(vfitx.df)
shares <- rep(NA,N)
shares[1] <- 50000/vfitx.df$p[1]
for(n in seq(2,N)) {
# Add dividend payment reinvested into fund
shares[n] <- (vfitx.df$div[n]*shares[n-1] / vfitx.df$p[n]) + shares[n-1]
}
cat("Total investment value in 2010 is ",shares[length(shares)]*vfitx.df$p[nrow(vfitx.df)],"\n")
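The reinvestment loop above multiplies the share count by `(1 + div/p)` each period. A toy check of the recurrence with made-up prices and dividends:

```r
p <- c(10, 10, 10)   # NAV per share at each period
div <- c(0, 1, 1)    # dividend per share, paid at periods 2 and 3
s <- numeric(3)
s[1] <- 100          # start with 100 shares
for (n in 2:3) s[n] <- (div[n] * s[n - 1] / p[n]) + s[n - 1]
s  # 100 110 121: each dividend buys 10% more shares
```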
| /maRketSim/demo/public_bond_data.R | no_license | ingted/R-Examples | R | false | false | 4,990 | r |
#' Starting values for polytomous logit-normit model
#' @aliases startbetas
#' @description Computes starting values for estimation of polytomous logit-normit model.
#' @usage
#' startalphas(x, ncat, nitem = NULL)
#' startbetas(x, ncat, nitem = NULL)
#' @param x A data matrix. Data can be in one of two formats: 1) raw data
#' where the number of rows corresponds to the number of raw cases and
#' each column represents an item, and 2) a matrix of dimensions
#' \code{nrec}\eqn{\times}{X}(\code{nitem}+1) where each row corresponds to a
#' response pattern and the last column is the frequency of that response
#' pattern. A data matrix of the second type requires input for \code{nitem}
#' and \code{nrec}.
#' @param ncat Number of ordinal categories for each item, coded as
#' 0,\dots,(\code{ncat}-1). Only items with the same number of categories are currently supported.
#' @param nitem Number of items. If omitted, it is assumed that \code{x} contains
#' a data matrix of the first type (raw data) and the number of
#' columns in \code{x} will be selected as the number of items.
#' @details \code{startalphas} computes starting values for the (decreasing) cutpoints
#' for the items based on logit transformed probabilities, assuming independent items.
#'
#' \code{startbetas} computes starting values for slopes under the polytomous
#' logit-normit model, using a method based on values that are proportional to the
#' average correlations of each item with all other items. Starting values are
#' currently bounded between -.2 and 1.
#' @return
#' A vector of starting values, depending on which function was called.
#' @author Carl F. Falk \email{cffalk@gmail.com}, Harry Joe
#' @seealso
#' \code{\link{nrmlepln}}
#' \code{\link{nrmlerasch}}
#' \code{\link{nrbcpln}}
#' @examples
#' ### Raw data
#' data(item9cat5)
#'
#' myAlphas<-startalphas(item9cat5, ncat=5)
#' print(myAlphas)
#'
#' myBetas<-startbetas(item9cat5, ncat=5)
#' print(myBetas)
#'
#' nrbcplnout<-nrbcpln(item9cat5, ncat=5, alphas=myAlphas, betas=myBetas, se=FALSE)
#' print(nrbcplnout)
#'
#' ## Matrix of response patterns and frequencies
#' data(item5fr)
#'
#' myAlphas<-startalphas(item5fr, ncat=3, nitem=5)
#' print(myAlphas)
#'
#' myBetas<-startbetas(item5fr, ncat=3, nitem=5)
#' print(myBetas)
#'
#' nrbcplnout<-nrbcpln(item5fr, ncat=3, nitem=5, alphas=myAlphas, betas=myBetas, se=FALSE)
#' print(nrbcplnout)
#'
#' @export
startalphas<- function (x, ncat, nitem=NULL) {
## input checking & data prep
myInput<-check.input(x, ncat, nitem)
out <- startpln.func(myInput$nitem, myInput$ncat, myInput$nrec, myInput$myX)
return(out$alphas)
}
startpln.func<-function(nitem, ncat, nrec, myX){
## prep return variables
alphas=rep(0,nitem*(ncat-1))
## TO DO: add PACKAGE argument here
out <- .C("Rstartpln",
as.integer(nitem), as.integer(ncat), as.integer(nrec), as.double(myX),
alphas=as.double(alphas))
return(out)
}
#' @export
startbetas<-function(x, ncat, nitem=NULL){
myInput<-check.input(x, ncat, nitem)
x<-myInput$myX
betas<-startbetas.func(x)
return(betas)
}
startbetas.func<-function(x){
nn<-ncol(x)
nitem<-nn-1
y<-x[,-nn]
fr<-x[,nn]
tot<-sum(fr)
mnvec<-apply(y*fr,2,sum)/tot ## means
vvec<-apply(y*y*fr,2,sum)/tot ## variances
vvec<-vvec-mnvec^2
cc<-matrix(0,nitem,nitem)
for(j in 1:(nitem-1))
{ for(k in (j+1):nitem)
{ ss<-sum(y[,j]*y[,k]*fr)/tot
den<-vvec[j]*vvec[k]
cc[j,k]<-(ss-mnvec[j]*mnvec[k])/sqrt(den)
cc[k,j]<-cc[j,k]
}
}
avcc<-apply(cc,2,sum)/(nitem-1)
#print(avcc)
bvec<-avcc*10; bvec[bvec>=1]=1; bvec[bvec<= -.2]= -.2
#print(bvec)
bvec
} | /R/startvals.R | no_license | falkcarl/pln | R | false | false | 3,697 | r | #' Starting values for polytomous logit-normit model
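A tiny illustration of `startbetas.func()` (assuming the function above is loaded): the first columns hold 0/1 item responses per response pattern and the last column holds pattern frequencies; all values are made up.

```r
x <- cbind(item1 = c(0, 1, 1, 0),
           item2 = c(0, 1, 0, 1),
           item3 = c(0, 1, 1, 1),
           fr    = c(10, 10, 5, 5))  # last column: pattern frequencies
startbetas.func(x)  # one starting slope per item, bounded to [-0.2, 1]
```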
#' @aliases startbetas
#' @description Computes starting values for estimation of polytomous logit-normit model.
#' @usage
#' startalphas(x, ncat, nitem = NULL)
#' startbetas(x, ncat, nitem = NULL)
#' @param x A data matrix. Data can be in one of two formats: 1) raw data
#' where the number of rows corresponds to the number of raw cases and
#' each column represents an item, and 2) a matrix of dimensions
#' \code{nrec}\eqn{\times}{X}(\code{nitem}+1) where each row corresponds to a
#' response pattern and the last column is the frequency of that response
#' pattern. A data matrix of the second type requires input for \code{nitem}
#' and \code{nrec}.
#' @param ncat Number of ordinal categories for each item, coded as
#' 0,\dots,(\code{ncat}-1). Currently supported are items that have the same number of categories.
#' @param nitem Number of items. If omitted, it is assumed that \code{x} contains
#' a data matrix of the first type (raw data) and the number of
#' columns in \code{x} will be selected as the number of items.
#' @details \code{startalphas} computes starting values for the (decreasing) cutpoints
#' for the items based on logit transformed probabilities, assuming independent items.
#'
#' \code{startbetas} computes starting values for slopes under the polytomous
#' logit-normit model, using a method based on values that are proportional to the
#' average correlations of each item with all other items. Starting values are
#' currently bounded between -.2 and 1.
#' @return
#' A vector of starting values, depending on which function was called.
#' @author Carl F. Falk \email{cffalk@gmail.com}, Harry Joe
#' @seealso
#' \code{\link{nrmlepln}}
#' \code{\link{nrmlerasch}}
#' \code{\link{nrbcpln}}
#' @examples
#' ### Raw data
#' data(item9cat5)
#'
#' myAlphas<-startalphas(item9cat5, ncat=5)
#' print(myAlphas)
#'
#' myBetas<-startbetas(item9cat5, ncat=5)
#' print(myBetas)
#'
#' nrbcplnout<-nrbcpln(item9cat5, ncat=5, alphas=myAlphas, betas=myBetas, se=FALSE)
#' print(nrbcplnout)
#'
#' ## Matrix of response patterns and frequencies
#' data(item5fr)
#'
#' myAlphas<-startalphas(item5fr, ncat=3, nitem=5)
#' print(myAlphas)
#'
#' myBetas<-startbetas(item5fr, ncat=3, nitem=5)
#' print(myBetas)
#'
#' nrbcplnout<-nrbcpln(item5fr, ncat=3, nitem=5, alphas=myAlphas, betas=myBetas, se=FALSE)
#' print(nrbcplnout)
#'
#' @export
startalphas<- function (x, ncat, nitem=NULL) {
## input checking & data prep
myInput<-check.input(x, ncat, nitem)
out <- startpln.func(myInput$nitem, myInput$ncat, myInput$nrec, myInput$myX)
return(out$alphas)
}
startpln.func<-function(nitem, ncat, nrec, myX){
## prep return variables
alphas=rep(0,nitem*(ncat-1))
## TO DO: add PACKAGE argument here
out <- .C("Rstartpln",
as.integer(nitem), as.integer(ncat), as.integer(nrec), as.double(myX),
alphas=as.double(alphas))
return(out)
}
#' @export
startbetas<-function(x, ncat, nitem=NULL){
myInput<-check.input(x, ncat, nitem)
x<-myInput$myX
betas<-startbetas.func(x)
return(betas)
}
startbetas.func<-function(x){
nn<-ncol(x)
nitem<-nn-1
y<-x[,-nn]
fr<-x[,nn]
tot<-sum(fr)
mnvec<-apply(y*fr,2,sum)/tot ## means
vvec<-apply(y*y*fr,2,sum)/tot ## variances
vvec<-vvec-mnvec^2
cc<-matrix(0,nitem,nitem)
for(j in 1:(nitem-1))
{ for(k in (j+1):nitem)
{ ss<-sum(y[,j]*y[,k]*fr)/tot
den<-vvec[j]*vvec[k]
cc[j,k]<-(ss-mnvec[j]*mnvec[k])/sqrt(den)
cc[k,j]<-cc[j,k]
}
}
avcc<-apply(cc,2,sum)/(nitem-1)
#print(avcc)
bvec<-avcc*10; bvec[bvec>=1]=1; bvec[bvec<= -.2]= -.2
#print(bvec)
bvec
} |
library(monocle3)
data <- as(as.matrix(brain_3prime_bcell[['RNA']]@data), 'sparseMatrix')
pd <- brain_3prime_bcell@meta.data
fd <- data.frame(gene_short_name = row.names(data), row.names = row.names(data))
cds <- new_cell_data_set(data, cell_metadata = pd, gene_metadata = fd)
cds@reducedDims$PCA <- brain_3prime_bcell@reductions$pca@cell.embeddings
cds@reducedDims$UMAP <- brain_3prime_bcell@reductions$umap@cell.embeddings
recreate.partition <- c(rep(1, length(cds@colData@rownames)))
names(recreate.partition) <- cds@colData@rownames
recreate.partition <- as.factor(recreate.partition)
cds@clusters@listData[["UMAP"]][["partitions"]] <- recreate.partition
cds@clusters@listData[["UMAP"]][["louvain_res"]] <- "NA"
cds@clusters$UMAP$clusters <- brain_3prime_bcell$seurat_clusters
cds@preprocess_aux$gene_loadings <- brain_3prime_bcell@reductions$pca@feature.loadings
cds <- learn_graph(cds)
rownames(cds@principal_graph_aux[['UMAP']]$dp_mst) <- NULL
colnames(cds@reducedDims$UMAP) <- NULL
cds <- order_cells(cds)
plot_cells(cds,
color_cells_by = "seurat_clusters",
label_groups_by_cluster=TRUE,
label_leaves=TRUE,
label_branch_points=TRUE) +
coord_equal()+
lims(x=c(-11, 10), y=c(-11, 10)) +
scale_color_manual(values = brewer.pal(7, "Paired"))
plot_cells(cds,
color_cells_by = "pseudotime",
label_cell_groups=FALSE,
label_leaves=FALSE,
label_branch_points=FALSE,
graph_label_size=1.5) +
coord_equal()+
lims(x=c(-11, 10), y=c(-11, 10))
## DEG by pseudotime
deg_pseudotime <- graph_test(cds, neighbor_graph="principal_graph")
pr_deg_ids <- row.names(subset(deg_pseudotime, q_value < 0.05))
gene_module_df <- find_gene_modules(cds[pr_deg_ids,], resolution=10^seq(-6,-1))
cell_group_df <- tibble::tibble(cell=row.names(colData(cds)),
cell_group=colData(cds)$seurat_clusters)
agg_mat <- aggregate_gene_expression(cds, gene_module_df, cell_group_df)
row.names(agg_mat) <- stringr::str_c("Module ", row.names(agg_mat))
pheatmap::pheatmap(agg_mat,
scale="column", clustering_method="ward.D2")
pheatmap::pheatmap(cds[pr_deg_ids[1:100], order(pData(cds)$seurat_clusters)]@assays[["counts"]],
cluster_cols = F,
show_colnames = F,
show_rownames = F,
scale = "row")
monocle::plot_pseudotime_heatmap(cds[pr_deg_ids,],
cluster_rows = F,
show_rownames = T)
marker_pseudotime <- c("Lef1", "Lmo4", "Rag1",
"Rag2", "Dntt", "Vpreb1",
"Vpreb3", "Ighm", "Ighd",
"Cd24a", "Spn", "Kit")
marker_pseudotime_subset <- cds[rowData(cds)$gene_short_name %in% marker_pseudotime, ]
plot_genes_in_pseudotime(marker_pseudotime_subset ,
color_cells_by="seurat_clusters",
min_expr=0.5,
ncol = 3)
| /script/Monocle3_Mouse.R | no_license | chendianyu/2021_Immunity_BrainB | R | false | false | 3,033 | r |
# Exercise 1: Shiny basics
# Install and load the `shiny` package
library(shiny)
# Define a new `ui` variable. This variable should be assigned a `fluidPage()` layout
# The `fluidPage()` layout should be passed the following:
ui <- fluidPage(
titlePanel("Cost Calculator"),
numericInput("price", label = "Price (in dollars)", value = 0, min = 0),
numericInput("quantity", label = "Quantity", value = 1, min = 1),
strong("Cost"),
textOutput("cost")
)
# A `titlePanel()` layout with the text "Cost Calculator"
# A `numericInput()` widget with the label "Price (in dollars)"
# It should have a default value of 0 and a minimum value of 0
# Hint: look up the function's arguments in the documentation!
# A second `numericInput()` widget with the label "Quantity"
# It should have a default value of 1 and a minimum value of 1
# The word "Cost", strongly bolded
# A `textOutput()` output of a calculated value labeled `cost`
# Define a `server` function (with appropriate arguments)
# This function should perform the following:
server <- function(input, output) {
output$cost <- renderText({
return(paste0("$", input$price * input$quantity))
})
}
# Assign a reactive `renderText()` function to the output's `cost` value
# The reactive expression should return the input `price` times the `quantity`
# So it looks nice, paste a "$" in front of it!
# Create a new `shinyApp()` using the above ui and server
shinyApp(ui = ui, server = server)
| /exercise-2/app.R | permissive | alliL/ch16-shiny | R | false | false | 1,508 | r |
% Generated by roxygen2 (4.0.2): do not edit by hand
\name{updateDerivativesClosure}
\alias{updateDerivativesClosure}
\title{updateDerivativesClosure}
\usage{
updateDerivativesClosure(nDomains, nTime, Z1, W)
}
\description{
Marhuenda et al. (2013): page 312 Remark 1
}
| /man/updateDerivativesClosure.Rd | no_license | wahani/SAE | R | false | false | 270 | rd |
#' Compute binary search
#'
#' This function performs a recursive binary search on a sorted vector.
#'
#' @param A a sorted vector to search
#' @param value the value to search for
#' @param low index of the lower bound of the search range
#' @param high index of the upper bound of the search range
#' @return the index of \code{value} in \code{A}, or \code{NULL} if it is not found
#' @examples
#' a<-1:100
#' BinSearch(a,50,1,length(a))
BinSearch <- function(A, value, low, high) {
if ( high < low ) {
return(NULL)
} else {
mid <- floor((low + high) / 2)
if ( A[mid] > value )
BinSearch(A, value, low, mid-1)
else if ( A[mid] < value )
BinSearch(A, value, mid+1, high)
else
mid
}
}
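A quick check of the recursive search above (assuming `BinSearch()` is loaded): on a sorted vector it returns the index of the match, and `NULL` when the value is absent.

```r
a <- 1:100
BinSearch(a, 50, 1, length(a))    # 50
BinSearch(a, 101, 1, length(a))   # NULL (101 is not in a)
```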
| /R/BinSearch.R | no_license | 2zii118/IM | R | false | false | 416 | r |
#Script to calculate life-stage specific regressions
require(ggplot2)
#Get data
Data <- read.csv("../Data/EcolArchives-E089-51-D1.csv")
p <- ggplot(Data, aes(x=Prey.mass, y=Predator.mass, colour=Predator.lifestage)) +
geom_point(shape = 3) +
geom_smooth(method = "lm", fullrange = TRUE) +
facet_grid(Type.of.feeding.interaction ~ .) +
theme_bw() + theme(legend.position="bottom") +
theme(aspect.ratio = 1/2) +
xlab("Prey Mass in grams") +
ylab("Predator mass in grams")
pdf("../Results/PP_Regress.pdf", 8, 11)
print (p + scale_x_log10() + scale_y_log10())
dev.off()
#Regressions
library(plyr)
regressions.function <- function(x)
{
models <- lm(log(Predator.mass) ~ log(Prey.mass), data = x)
int <- models$coefficients[1]
Slope <- models$coefficients[2]
R <- summary(models)$r.squared
ANOVA <- anova(models)
data.frame(int, Slope, R, fstat = ANOVA[[4]][1], pvalue = ANOVA[[5]][1])
}
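`regressions.function()` can be sanity-checked on a toy data frame before handing it to `ddply()` (made-up masses; only the two columns the model uses are required):

```r
toy <- data.frame(Predator.mass = c(12, 30, 55, 140),
                  Prey.mass     = c(1, 3, 6, 15))
regressions.function(toy)  # one row: intercept, slope, R squared, F, p-value
```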
regressions.out <- ddply(Data, .(Type.of.feeding.interaction, Predator.lifestage), regressions.function)
#Rename column headings
regressions.out <- rename(regressions.out, c("Type.of.feeding.interaction" = "Feeding type",
"Predator.lifestage"="Predator lifestage",
"int"="Intercept",
"R"="R Squared",
"fstat"="F Statistic",
"pvalue"="p-value"))
#Put it in a CSV
write.table(regressions.out, "../Results/PP_Regress_Results.csv", sep = ",", row.names = FALSE, col.names = TRUE)
##-------Extra credit - locations added in
regressions2.function <- function(x)
{
models <- lm(log(Predator.mass) ~ log(Prey.mass), data = x)
int <- models$coefficients[1]
Slope <- models$coefficients[2]
R <- summary(models)$r.squared
ANOVA <- anova(models)
data.frame(int, Slope, R, fstat = ANOVA[[4]][1], pvalue = ANOVA[[5]][1])
}
regressions2.out <- ddply(Data, .(Type.of.feeding.interaction, Predator.lifestage, Location), regressions2.function)
#Rename column headings
regressions2.out <- rename(regressions2.out, c("Type.of.feeding.interaction" = "Feeding type",
"Predator.lifestage"="Predator lifestage",
"int"="Intercept",
"R"="R Squared",
"fstat"="F Statistic",
"pvalue"="p-value"))
#Put it in a CSV
write.table(regressions2.out, "../Results/PP_Regress_Results_ExtraCredit.csv", sep = ",", row.names = FALSE, col.names = TRUE)
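The same grouped regressions can be sketched with dplyr's `group_modify()` instead of plyr (a sketch assuming dplyr >= 0.8.1; column names as in the data above):

```r
library(dplyr)
regressions.dplyr <- Data %>%
  group_by(Type.of.feeding.interaction, Predator.lifestage) %>%
  group_modify(~ {
    m <- lm(log(Predator.mass) ~ log(Prey.mass), data = .x)
    a <- anova(m)
    data.frame(Intercept = coef(m)[1],
               Slope     = coef(m)[2],
               R.squared = summary(m)$r.squared,
               F.stat    = a[["F value"]][1],
               p.value   = a[["Pr(>F)"]][1])
  }) %>%
  ungroup()
```

This also avoids mixing plyr and dplyr in one session, where each can mask the other's `rename()`.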
| /CodeSolution/PP_Regress_loc.R | no_license | zongyi2020/CMEECourseWork | R | false | false | 2,608 | r |
#read in test and training duplicate data
#start in root folder
labels <- read.table("./UCI HAR Dataset/activity_labels.txt", col.names =
c("activityInd", "activity"))
names <- read.table("./UCI HAR Dataset/features.txt")
#trainfolder
trainsub <- read.table("./UCI HAR Dataset/train/subject_train.txt", col.names = "subject")
trainact <- read.table("./UCI HAR Dataset/train/y_train.txt", col.names = "activity")
trainx <- read.table("./UCI HAR Dataset/train/X_train.txt") #get column names from the features.txt
trainy <- read.table("./UCI HAR Dataset/train/Y_train.txt", col.names="activityy")
#testfolder
testsubj <- read.table("./UCI HAR Dataset/test/subject_test.txt", col.names = "subject")
testact <- read.table("./UCI HAR Dataset/test/y_test.txt", col.names = "activity")
testx <- read.table("./UCI HAR Dataset/test/X_test.txt") #get column names from the features.txt
testy <- read.table("./UCI HAR Dataset/test/Y_test.txt", col.names="activityy")
#begin merging
subject<- rbind(testsubj, trainsub)
act <- rbind(testact, trainact)
xdat <- rbind(testx, trainx)
names(xdat) <- names[,2]
meansx <- xdat[,grep("mean",names(xdat))]
tidyset <- cbind(subject, act, meansx)
library(dplyr)
#I know I've done this without dplyr before...but can't find how, oh well, thanks google
tidyset4 <- tidyset %>% group_by(activity, subject) %>% summarise(across(everything(), mean))
write.table(tidyset4, "tidyset.txt", row.names = FALSE)
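For the record, the base-R equivalent the comment above is reaching for (no dplyr; same `tidyset` as built above):

```r
tidyset4_base <- aggregate(. ~ activity + subject, data = tidyset, FUN = mean)
```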
#ydat <- rbind(testy, trainy)
#names(ydat) <- names[,2]
#6meansy <- ydat[grepl("*mean*",names(ydat)) | grepl("*std*",names(ydat)),] | /supermerge.R | no_license | bug5654/gcdproj | R | false | false | 1,576 | r |
library(glmnet)
mydata = read.table("./TrainingSet/ReliefF/cervix.csv",head=T,sep=",")
x = as.matrix(mydata[,4:ncol(mydata)])
y = as.matrix(mydata[,1])
set.seed(123)
glm = cv.glmnet(x,y,nfolds=10,type.measure="mae",alpha=0.3,family="gaussian",standardize=FALSE)
sink('./Model/EN/ReliefF/cervix/cervix_044.txt',append=TRUE)
print(glm$glmnet.fit)
sink()
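After fitting, the cross-validated optimum can be inspected with the standard glmnet accessors (`lambda.min` minimises the MAE criterion chosen above):

```r
print(glm$lambda.min)                         # lambda with the lowest CV error
best <- coef(glm, s = "lambda.min")           # sparse coefficient vector at that lambda
selected <- rownames(best)[which(as.matrix(best) != 0)]
print(selected)                               # features kept by the elastic net
```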
| /Model/EN/ReliefF/cervix/cervix_044.R | no_license | leon1003/QSMART | R | false | false | 352 | r |
# Interaction plot for the French oesophageal cancer data
data(esoph)
par(mar = c(4, 4, 0.2, 0.2))
with(esoph, {
  interaction.plot(agegp, alcgp, ncases / (ncases + ncontrols),
    trace.label = "Alcohol consumption", fixed = TRUE,
    xlab = "Age group", ylab = "Probability of cancer")
})
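A ggplot2 version of the same plot can be sketched as follows; note it aggregates each age x alcohol cell with a pooled proportion, whereas `interaction.plot()` averages the per-row proportions, so the two can differ slightly:

```r
library(ggplot2)
library(dplyr)
esoph %>%
  group_by(agegp, alcgp) %>%
  summarise(p = sum(ncases) / sum(ncases + ncontrols), .groups = "drop") %>%
  ggplot(aes(agegp, p, colour = alcgp, group = alcgp)) +
  geom_line() +
  geom_point() +
  labs(x = "Age group", y = "Probability of cancer", colour = "Alcohol consumption")
```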
| /inst/examples/interaction-plot.R | no_license | yihui/MSG | R | false | false | 283 | r |
#plot3.png
# Greg R Gay
library(datasets)
# read in the full data set if it has not already
if(!exists("consumption")){
consumption <- read.table("household_power_consumption.txt",
sep=";",
na.strings = "?",
header=TRUE)
}
# use strptime() and as.Date()
# set the date range we need
date1 <- as.Date("01/02/2007", format="%d/%m/%Y")
date2 <- as.Date("02/02/2007", format="%d/%m/%Y")
# create the subset we need Feb 1 & 2, 2007
if(!exists("c2007")){
c2007 <- subset(
consumption,
as.Date(consumption$Date,
format="%d/%m/%Y") >= date1 & as.Date(consumption$Date, format="%d/%m/%Y") <= date2
)
}
# write data to file to see what it looks like
#write.table(c2007, "c2007.csv", sep=",")
plot(as.numeric(c2007$Sub_metering_3),
type = "l",
ylab="Energy Sub Metering",
xlab="",
xaxt = 'n',
ylim = c(0,40),
col="blue"
)
lines(c2007$Sub_metering_2, lty=1, type="l", col="red" )
lines(c2007$Sub_metering_1, lty=1, type="l", col="black" )
legend("topright",
legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"),
bty="n",
lty=c(1,1),
col=c("black","red","blue"),
cex=.6,
pt.cex = 1,
inset=0.05)
axis(1,at=c(0,1500,2900),labels=c("Thu","Fri","Sat"))
dev.copy(png, file = "plot3.png") ## Copy the plot to a PNG file
dev.off() | /plot3.R | no_license | gregrgay/ExData_Plotting1 | R | false | false | 1,479 | r |
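Instead of hard-coding tick positions, the x-axis can be driven by a real datetime vector; the raw file also has a `Time` column that combines with `Date` (a sketch):

```r
datetime <- as.POSIXct(paste(c2007$Date, c2007$Time), format = "%d/%m/%Y %H:%M:%S")
plot(datetime, c2007$Sub_metering_1, type = "l",
     ylab = "Energy sub metering", xlab = "")
lines(datetime, c2007$Sub_metering_2, col = "red")
lines(datetime, c2007$Sub_metering_3, col = "blue")
```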
##########################################################Aggregation of Values################
setwd("C:/Users/Rohini/Desktop/Share-Price-Prediction-master/Cut_Data/Precip_Temp/")
sub12 = read.csv(file="Subbasin_28.csv", header=TRUE, sep=",")
sub12$sin_day=0
sub12$cos_day=0
sub12$Three_Day_Flow=0
sub12$Two_Week_Flow=0
sub12$Month_Flow=0
sub12$Season_Flow=0
sub12$Precip_Times_Temperature=0
sub12$Temperature_Times_Day=0
sub12$Precip_Times_Temperature_Times_Day=0
library(zoo)
sub12$sin_day=sin(sub12$Day)
sub12$cos_day=cos(sub12$Day)
sub12$Three_Day_Flow=rollsumr(sub12$Precipitation,k=72,fill=0) # 72 h = 3 days
sub12$Two_Week_Flow=rollsumr(sub12$Precipitation,k=336,fill=0) # 336 h = 14 days (was k=168, i.e. one week)
sub12$Month_Flow=rollsumr(sub12$Precipitation,k=720,fill=0) # 720 h = 30 days
library(dplyr)
sub12$Hours=as.numeric(sub12$Hours)
#Total precipitation since the beginning of the water year (1 October)
for (i in 1:length(sub12$Year)){
if (sub12$Month[i]==10|sub12$Month[i]==11|sub12$Month[i]==12){
water_year_index=which(sub12$Year==sub12$Year[i] & sub12$Month==10 & sub12$Day==1 & sub12$Hours==1)
}
else{
water_year_index=which(sub12$Year==sub12$Year[i]-1 & sub12$Month==10 & sub12$Day==1 & sub12$Hours==1)
}
# sum the precipitation rows from the water-year start through the current row;
# the original indexed a sequence between two precipitation *values*, which is wrong
sub12$Season_Flow[i]=sum(sub12$Precipitation[water_year_index:i])
}
sub12$Precip_Times_Temperature=sub12$Precipitation*sub12$Temperature
sub12$Temperature_Times_Day=sub12$Temperature*sub12$sin_day
sub12$Precip_Times_Temperature_Times_Day=sub12$Precipitation*sub12$Temperature*sub12$sin_day
setwd("C:/Users/Rohini/Desktop/Share-Price-Prediction-master/Aggregated_Data/Hourly/")
write.csv(sub12, file = "Sub_28.csv")
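The O(n^2) water-year loop above can be replaced by a grouped cumulative sum in one pass over the data; equivalent when the record starts at a water-year boundary (a sketch using the dplyr already loaded above):

```r
sub12 <- sub12 %>%
  mutate(water_year = ifelse(Month >= 10, Year + 1, Year)) %>% # water year labelled by its ending year
  group_by(water_year) %>%
  mutate(Season_Flow = cumsum(Precipitation)) %>%
  ungroup()
```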
| /Aggregation.R | no_license | rg727/Army_Corp_Competition | R | false | false | 1,752 | r |
test_that("empty_assignment_linter skips valid usage", {
expect_lint("x <- { 3 + 4 }", NULL, empty_assignment_linter())
expect_lint("x <- if (x > 1) { 3 + 4 }", NULL, empty_assignment_linter())
# also triggers assignment_linter
expect_lint("x = { 3 + 4 }", NULL, empty_assignment_linter())
})
test_that("empty_assignment_linter blocks disallowed usages", {
linter <- empty_assignment_linter()
lint_msg <- rex::rex("Assign NULL explicitly or, whenever possible, allocate the empty object")
expect_lint("xrep <- {}", lint_msg, linter)
# assignment with equal works as well, and white space doesn't matter
expect_lint("x = { }", lint_msg, linter)
# ditto right assignments
expect_lint("{} -> x", lint_msg, linter)
expect_lint("{} ->> x", lint_msg, linter)
# ditto data.table-style walrus assignments
expect_lint("x[, col := {}]", lint_msg, linter)
# newlines also don't matter
expect_lint("x <- {\n}", lint_msg, linter)
# LHS of assignment doesn't matter
expect_lint("env$obj <- {}", lint_msg, linter)
})
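For interactive checking, the linter can be run directly on a string (assuming lintr >= 3.0, where `lint()` accepts a `text` argument):

```r
library(lintr)
lint(text = "x <- {}", linters = empty_assignment_linter())       # one lint expected
lint(text = "x <- { 3 + 4 }", linters = empty_assignment_linter()) # clean
```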
| /tests/testthat/test-empty_assignment_linter.R | permissive | cordis-dev/lintr | R | false | false | 1,048 | r |
library(TSP)
library(maptools)
library(ggplot2)
library(ggthemes)
data(eurodist)
distances <- as.matrix(eurodist)
cityList <- read.csv2(file = "countries.coords.csv", stringsAsFactors=FALSE)
cityList$long <- as.numeric(cityList$long)
cityList$lat <- as.numeric(cityList$lat)
rownames(cityList) <- cityList$CityName
t <- readShapeSpatial("world/MyEurope.shp")
mapToPlot <- fortify(t)
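maptools (and with it `readShapeSpatial()`) has been retired from CRAN; an equivalent sketch with sf, assuming the same shapefile path:

```r
library(sf)
library(ggplot2)
mapToPlot <- st_read("world/MyEurope.shp")
ggplot(mapToPlot) + geom_sf()   # geom_sf() replaces fortify() + geom_polygon()
```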
| /global.R | permissive | zelite/dataproductscoursera | R | false | false | 388 | r |
# example
# choice <- c(0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1)
# outcome <- c(1, 0, 1, 1, 2, 0, 1, 2, 1, 2, 0, 2, 0, 1, 1, 1, 1, 0, 2, 1, 1, 2, 1, 2, 0, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 1, 0, 1, 1, 1, 2, 0)
# mag1.prop <- c(0.183333333, 0.252525253, 0.000000000, 0.000000000, 0.850467290, 0.529411765, 0.571428571, 0.850000000, 0.302521008, 0.747899160, 0.181034483, 0.557522124, 0.358333333, 0.000000000, 0.008333333, 0.037037037, 0.295918367, 0.193181818, 0.884297521, 0.099099099, 0.442622951, 0.834710744, 0.000000000, 0.764705882, 0.282828283, 0.842975207, 0.000000000, 0.571428571, 0.000000000, 0.000000000, 0.342857143, 0.208333333, 0.000000000, 0.000000000, 0.217391304, 0.773109244, 0.246753247, 0.983471074, 0.000000000, 0.473214286, 0.113207547, 0.000000000, 0.000000000, 0.000000000, 0.245762712, 0.064220183, 0.525423729, 0.100840336)
# mag2.prop <- c(0.00000000, 0.60606061, 0.00000000, 0.00000000, 0.14953271, 0.47058824, 0.00000000, 0.15000000, 0.00000000, 0.15966387, 0.81896552, 0.14159292, 0.64166667, 0.00000000, 0.00000000, 0.00000000, 0.00000000, 0.80681818, 0.11570248, 0.00000000, 0.00000000, 0.16528926, 0.00000000, 0.23529412, 0.65656566, 0.15702479, 0.00000000, 0.15966387, 0.04347826, 0.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000, 0.12173913, 0.11764706, 0.00000000, 0.01652893, 0.32258065, 0.20535714, 0.12264151, 0.00000000, 1.00000000, 0.00000000, 0.00000000, 0.25688073, 0.47457627, 0.89915966)
# sure2.prop <- c(0.81666667, 0.14141414, 1.00000000, 1.00000000, 0.00000000, 0.00000000, 0.42857143, 0.00000000, 0.69747899, 0.09243697, 0.00000000, 0.30088496, 0.00000000, 1.00000000, 0.99166667, 0.96296296, 0.70408163, 0.00000000, 0.00000000, 0.90090090, 0.55737705, 0.00000000, 1.00000000, 0.00000000, 0.06060606, 0.00000000, 1.00000000, 0.26890756, 0.95652174, 1.00000000, 0.65714286, 0.79166667, 1.00000000, 1.00000000, 0.66086957, 0.10924370, 0.75324675, 0.00000000, 0.67741935, 0.32142857, 0.76415094, 1.00000000, 0.00000000, 1.00000000, 0.75423729, 0.67889908, 0.00000000, 0.00000000)
# optim(c(.7,.8), emot_decay, choice = choice, outcome=outcome, mag1.prop=mag1.prop, mag2.prop=mag2.prop, sure2.prop=sure2.prop, output_type="LL")
# SURE TRIALS ARE SKIPPED, BUT DECAY RATES STILL APPLY FOR THAT TRIAL BY USING ^trial_interval
emot_decay_Gain_dissertation <- function(decay_temp, trial, trialType, gamble, outcome, coinsWon, sure1, sure2, mag1, mag2, mag1.prop, mag2.prop, sure1.prop, sure2.prop, output_type) {
# mag1.prop <- current_data$mag1_prop.outcome
# mag2.prop <- current_data$mag2_prop.outcome
# sure1.prop <- current_data$sure1_prop.outcome
# sure2.prop <- current_data$sure2_prop.outcome
# choice <- current_data$gamble
# outcome <- current_data$outcome
# neg_decay <- 0.6455282
# pos_decay <- 0.8963159
neg_decay <- decay_temp[1]
pos_decay <- decay_temp[2]
n_obs <- length(trial)
elation <- rep(0, n_obs)
relief <- rep(0, n_obs)
gain <- rep(0, n_obs)
relief_inaction <- rep(0, n_obs)
disappointment <- rep(0, n_obs)
regret <- rep(0, n_obs)
loss <- rep(0, n_obs)
regret_inaction <- rep(0, n_obs)
prev_outcome <- rep(NA, n_obs)
for (i in 1:n_obs) {
#print(i)
current_trial_num <- trial[i]
if ( i > 1) {
trial_interval <- trial[i] - trial[i-1]
}
if (trialType[i] == 6) { # mag1 > mag2; sure1 > sure2
if (coinsWon[i] %in% c(mag1[i],mag2[i]) ) {
if (coinsWon[i] == mag1[i] ) { # they won
# ADJUST POSITIVE EMOTION
elation[i] <- mag2.prop[i] # WINS: loss time [counterfactual outcome]
relief[i] <- sure1.prop[i] + sure2.prop[i] # WINS: sure time [counterfactual choice]
gain[i] <- mag1.prop[i] # WINS: win time [outcome]
if ( i > 1) {
# UPDATE NEGATIVE EMOTION
disappointment[i] <- neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
regret[i] <- neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
loss[i] <- neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
# UPDATE POSITIVE EMOTION
elation[i] <- elation[i] + pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- relief[i] + pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- gain[i] + pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- neg_decay^trial_interval*regret_inaction[i-1]
}
}
if (coinsWon[i] == mag2[i] ) { # they lost
# ADJUST NEGATIVE EMOTION
disappointment[i] <- mag1.prop[i] # LOSSES: win time [counterfactual outcome]
regret[i] <- sure1.prop[i] + sure2.prop[i] # LOSSES: sure time [counterfactual choice]
loss[i] <- mag2.prop[i] # LOSSES: loss time [outcome]
if ( i > 1) {
# UPDATE POSITIVE EMOTION
elation[i] <- pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE NEGATIVE EMOTION
disappointment[i] <- disappointment[i] + neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
regret[i] <- regret[i] + neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
loss[i] <- loss[i] + neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- neg_decay^trial_interval*regret_inaction[i-1]
}
}
} else { # coinsWon %in% c(sure1, sure2)
if (coinsWon[i] == sure1[i] ) { # they won
# ADJUST POSITIVE EMOTION
elation[i] <- sure2.prop[i] # WINS: loss time [counterfactual outcome]
relief[i] <- mag2.prop[i] # WINS: loss time [counterfactual choice]
gain[i] <- sure1.prop[i] # WINS: win time [outcome]
# ADJUST NEGATIVE EMOTION FOR COUNTERFACTUAL GAIN
regret_inaction[i] <- mag1.prop[i] # WINS: win time [counterfactual choice]
if ( i > 1) {
# UPDATE NEGATIVE EMOTION
disappointment[i] <- neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
loss[i] <- neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
regret[i] <- regret[i] + neg_decay^trial_interval*regret[i-1] # WIN: win time [counterfactual choice]
# UPDATE POSITIVE EMOTION
elation[i] <- elation[i] + pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- relief[i] + pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- gain[i] + pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- regret_inaction[i] + neg_decay^trial_interval*regret_inaction[i-1]
}
}
if (coinsWon[i] == sure2[i] ) { # they lost
# ADJUST NEGATIVE EMOTION
disappointment[i] <- sure1.prop[i] # LOSSES: win time [counterfactual outcome]
loss[i] <- sure2.prop[i] # LOSSES: loss time [outcome]
# ADJUST POSITIVE EMOTION FOR COUNTERFACTUAL
relief_inaction[i] <- mag2.prop[i] # LOSSES: loss time [counterfactual choice]
regret_inaction[i] <- mag1.prop[i] # LOSSES: win time [counterfactual choice]
if ( i > 1) {
# UPDATE NEGATIVE EMOTION
disappointment[i] <- disappointment[i] + neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
loss[i] <- loss[i] + neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
regret[i] <- neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
# UPDATE POSITIVE EMOTION
elation[i] <- pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- relief_inaction[i] + pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- regret_inaction[i] + neg_decay^trial_interval*regret_inaction[i-1]
}
}
}
} else { # trialType != 6
if (coinsWon[i] == mag1[i]) { #WINS
# ADJUST POSITIVE EMOTION
elation[i] <- mag2.prop[i] # WINS: loss time [counterfactual outcome]
relief[i] <- sure2.prop[i] # WINS: sure time [counterfactual choice]
gain[i] <- mag1.prop[i] # WINS: win time [outcome]
if ( i > 1) {
# UPDATE NEGATIVE EMOTION
disappointment[i] <- neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
regret[i] <- neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
loss[i] <- neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
# UPDATE POSITIVE EMOTION
elation[i] <- elation[i] + pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- relief[i] + pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- gain[i] + pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- neg_decay^trial_interval*regret_inaction[i-1]
}
} else if (coinsWon[i] == mag2[i]) { # LOSSES
# ADJUST NEGATIVE EMOTION
disappointment[i] <- mag1.prop[i] # LOSSES: win time [counterfactual outcome]
regret[i] <- sure2.prop[i] # LOSSES: sure time [counterfactual choice]
loss[i] <- mag2.prop[i] # LOSSES: loss time [outcome]
if ( i > 1) {
# UPDATE POSITIVE EMOTION
elation[i] <- pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE NEGATIVE EMOTION
disappointment[i] <- disappointment[i] + neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
regret[i] <- regret[i] + neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
loss[i] <- loss[i] + neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- neg_decay^trial_interval*regret_inaction[i-1]
}
} else if (coinsWon[i] == sure2[i]) {
relief_inaction[i] <- mag2.prop[i] # SURE: loss time [counterfactual choice]
regret_inaction[i] <- mag1.prop[i] # SURE: win time [counterfactual choice]
if ( i > 1) {
# UPDATE POSITIVE EMOTION
elation[i] <- pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE NEGATIVE EMOTION
disappointment[i] <- neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
regret[i] <- neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
loss[i] <- neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- relief_inaction[i] + pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- regret_inaction[i] + neg_decay^trial_interval*regret_inaction[i-1]
}
} # if outcome == c(win, loss, sure bet)
}
if (i > 1) {
prev_outcome[i] <- as.numeric(as.character(outcome[i-1]))
}
} # for i in 1:n_obs
disappointment0 <- 0
regret0 <- 0
loss0 <- 0
regret_inaction0 <- 0
elation0 <- 0
relief0 <- 0
gain0 <- 0
relief_inaction0 <- 0
# shift all emotion values down by 1 (emotions from trial t-1 predict the choice on trial t);
# note 1:n_obs-1 parses as (1:n_obs)-1, so write the index range explicitly
disappointment0[2:n_obs] <- disappointment[1:(n_obs-1)]
regret0[2:n_obs] <- regret[1:(n_obs-1)]
loss0[2:n_obs] <- loss[1:(n_obs-1)]
regret_inaction0[2:n_obs] <- regret_inaction[1:(n_obs-1)]
elation0[2:n_obs] <- elation[1:(n_obs-1)]
relief0[2:n_obs] <- relief[1:(n_obs-1)]
gain0[2:n_obs] <- gain[1:(n_obs-1)]
relief_inaction0[2:n_obs] <- relief_inaction[1:(n_obs-1)]
disappointment <- disappointment0
regret <- regret0
loss <- loss0
elation <- elation0
relief <- relief0
gain <- gain0
relief_inaction <- relief_inaction0
regret_inaction <- regret_inaction0
glm1 <- glm(gamble ~ disappointment + regret + elation + relief, family=binomial(link = "logit"))
# + relief_inaction + regret_inaction
if (output_type == "LL") {
-1*logLik(glm1)[1]
} else if (output_type == "data") {
cbind(disappointment, regret, loss, elation, relief, regret_inaction, relief_inaction, gain, prev_outcome)
}
}
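A sketch of fitting the two decay rates with `optim()` against this function's actual signature; the `current_data` column names are assumptions taken from the commented-out lines at the top of the function body:

```r
fit <- optim(
  par = c(0.7, 0.8),                 # starting values: c(neg_decay, pos_decay)
  fn  = emot_decay_Gain_dissertation,
  trial = current_data$trial, trialType = current_data$trialType,
  gamble = current_data$gamble, outcome = current_data$outcome,
  coinsWon = current_data$coinsWon,
  sure1 = current_data$sure1, sure2 = current_data$sure2,
  mag1 = current_data$mag1, mag2 = current_data$mag2,
  mag1.prop = current_data$mag1_prop.outcome,
  mag2.prop = current_data$mag2_prop.outcome,
  sure1.prop = current_data$sure1_prop.outcome,
  sure2.prop = current_data$sure2_prop.outcome,
  output_type = "LL",
  method = "L-BFGS-B", lower = 0, upper = 1   # keep both decay rates in [0, 1]
)
```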
| /scripts/functions/emot/emot_decay_Gain_dissertation.R | no_license | paulsendavidjay/EyeTracking2 | R | false | false | 13,550 | r |
regret[i] <- regret[i] + neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
loss[i] <- loss[i] + neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- neg_decay^trial_interval*regret_inaction[i-1]
}
}
} else { # coinsWon %in% c(sure1, sure2)
if (coinsWon[i] == sure1[i] ) { # they won
# ADJUST POSITIVE EMOTION
elation[i] <- sure2.prop[i] # WINS: loss time [counterfactual outcome]
relief[i] <- mag2.prop[i] # WINS: loss time [counterfactual choice]
gain[i] <- sure1.prop[i] # WINS: win time [outcome]
# ADJUST NEGATIVE EMOTION FOR COUNTERFACTUAL GAIN
regret_inaction[i] <- mag1.prop[i] # WINS: win time [counterfactual choice]
if ( i > 1) {
# UPDATE NEGATIVE EMOTION
disappointment[i] <- neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
loss[i] <- neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
regret[i] <- regret[i] + neg_decay^trial_interval*regret[i-1] # WIN: win time [counterfactual choice]
# UPDATE POSITIVE EMOTION
elation[i] <- elation[i] + pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- relief[i] + pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- gain[i] + pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- regret_inaction[i] + neg_decay^trial_interval*regret_inaction[i-1]
}
}
if (coinsWon[i] == sure2[i] ) { # they lost
# ADJUST NEGATIVE EMOTION
disappointment[i] <- sure1.prop[i] # LOSSES: win time [counterfactual outcome]
loss[i] <- sure2.prop[i] # LOSSES: loss time [outcome]
# ADJUST POSITIVE EMOTION FOR COUNTERFACTUAL
relief_inaction[i] <- mag2.prop[i] # LOSSES: loss time [counterfactual choice]
regret_inaction[i] <- mag1.prop[i] # LOSSES: win time [counterfactual choice]
if ( i > 1) {
# UPDATE NEGATIVE EMOTION
disappointment[i] <- disappointment[i] + neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
loss[i] <- loss[i] + neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
regret[i] <- neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
# UPDATE POSITIVE EMOTION
elation[i] <- pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- relief_inaction[i] + pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- regret_inaction[i] + neg_decay^trial_interval*regret_inaction[i-1]
}
}
}
} else { # trialType != 6
if (coinsWon[i] == mag1[i]) { #WINS
# ADJUST POSITIVE EMOTION
elation[i] <- mag2.prop[i] # WINS: loss time [counterfactual outcome]
relief[i] <- sure2.prop[i] # WINS: sure time [counterfactual choice]
gain[i] <- mag1.prop[i] # WINS: win time [outcome]
if ( i > 1) {
# UPDATE NEGATIVE EMOTION
disappointment[i] <- neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
regret[i] <- neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
loss[i] <- neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
# UPDATE POSITIVE EMOTION
elation[i] <- elation[i] + pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- relief[i] + pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- gain[i] + pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- neg_decay^trial_interval*regret_inaction[i-1]
}
} else if (coinsWon[i] == mag2[i]) { # LOSSES
# ADJUST NEGATIVE EMOTION
disappointment[i] <- mag1.prop[i] # LOSSES: win time [counterfactual outcome]
regret[i] <- sure2.prop[i] # LOSSES: sure time [counterfactual choice]
loss[i] <- mag2.prop[i] # LOSSES: loss time [outcome]
if ( i > 1) {
# UPDATE POSITIVE EMOTION
elation[i] <- pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE NEGATIVE EMOTION
disappointment[i] <- disappointment[i] + neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
regret[i] <- regret[i] + neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
loss[i] <- loss[i] + neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- neg_decay^trial_interval*regret_inaction[i-1]
}
} else if (coinsWon[i] == sure2[i]) {
relief_inaction[i] <- mag2.prop[i] # SURE: loss time [counterfactual choice]
regret_inaction[i] <- mag1.prop[i] # SURE: win time [counterfactual choice]
if ( i > 1) {
# UPDATE POSITIVE EMOTION
elation[i] <- pos_decay^trial_interval*elation[i-1] # WINS: sure time [counterfactual choice]
relief[i] <- pos_decay^trial_interval*relief[i-1] # WINS: loss time [counterfactual outcome]
gain[i] <- pos_decay^trial_interval*gain[i-1] # WINS: win time [outcome]
# UPDATE NEGATIVE EMOTION
disappointment[i] <- neg_decay^trial_interval*disappointment[i-1] # LOSSES: win time [counterfactual outcome]
regret[i] <- neg_decay^trial_interval*regret[i-1] # LOSSES: sure time [counterfactual choice]
loss[i] <- neg_decay^trial_interval*loss[i-1] # LOSSES: loss time [outcome]
# UPDATE INACTION EMOTIONS
relief_inaction[i] <- relief_inaction[i] + pos_decay^trial_interval*relief_inaction[i-1]
regret_inaction[i] <- regret_inaction[i] + neg_decay^trial_interval*regret_inaction[i-1]
}
} # if outcome == c(win, loss, sure bet)
}
if (i > 1) {
prev_outcome[i] <- as.numeric(as.character(outcome[i-1]))
}
} # for i in 1:n_obs
disappointment0 <- 0
regret0 <- 0
loss0 <- 0
regret_inaction0 <- 0
elation0 <- 0
relief0 <- 0
gain0 <- 0
relief_inaction0 <- 0
# shift all values down by 1
disappointment0[2:n_obs] <- disappointment[1:(n_obs - 1)]
regret0[2:n_obs] <- regret[1:(n_obs - 1)]
loss0[2:n_obs] <- loss[1:(n_obs - 1)]
regret_inaction0[2:n_obs] <- regret_inaction[1:(n_obs - 1)]
elation0[2:n_obs] <- elation[1:(n_obs - 1)]
relief0[2:n_obs] <- relief[1:(n_obs - 1)]
gain0[2:n_obs] <- gain[1:(n_obs - 1)]
relief_inaction0[2:n_obs] <- relief_inaction[1:(n_obs - 1)]
disappointment <- disappointment0
regret <- regret0
loss <- loss0
elation <- elation0
relief <- relief0
gain <- gain0
relief_inaction <- relief_inaction0
regret_inaction <- regret_inaction0
glm1 <- glm(gamble ~ disappointment + regret + elation + relief, family=binomial(link = "logit"))
# + relief_inaction + regret_inaction
if (output_type == "LL") {
-1*logLik(glm1)[1]
} else if (output_type == "data") {
cbind(disappointment, regret, loss, elation, relief, regret_inaction, relief_inaction, gain, prev_outcome)
}
}
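The shift block near the end of the function lags every emotion vector by one trial, so the logistic regression at trial i uses the emotions accumulated through trial i-1. The pattern can be checked on a toy vector (illustrative values only):

```r
# x0[2:n] <- x[1:(n - 1)] shifts values down one position, with 0 in slot 1,
# so trial i's predictor is the emotion carried over from trial i-1.
x <- c(0.2, 0.5, 0.8)
n <- length(x)
x0 <- 0
x0[2:n] <- x[1:(n - 1)]
stopifnot(identical(x0, c(0, 0.2, 0.5)))
```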
|
# > file written: Sat, 08 Dec 2018 09:26:53 +0100
# in this file, settings that are specific for a run on a dataset
# gives path to output folder
pipOutFold <- "OUTPUT_FOLDER/TCGAskcm_lowInf_highInf"
# full path (starting with /mnt/...)
# following format expected for the input
# colnames = samplesID
# rownames = geneID
# !!! geneID are expected not to be duplicated
# *************************************************************************************************************************
# ************************************ SETTINGS FOR 0_prepGeneData
# *************************************************************************************************************************
# UPDATE 07.12.2018: for RSEM data, the "analog" FPKM file is provided separately (built in prepData)
rna_fpkmDT_file <- "/mnt/ed4/marie/other_datasets/TCGAskcm_lowInf_highInf/fpkmDT.Rdata"
rnaseqDT_file <- "/mnt/ed4/marie/other_datasets/TCGAskcm_lowInf_highInf/rnaseqDT_v2.Rdata"
my_sep <- "\t"
# input is Rdata or txt file ?
# TRUE if the input is Rdata
inRdata <- TRUE
# can be ensemblID, entrezID, geneSymbol
geneID_format <- "entrezID"
stopifnot(geneID_format %in% c("ensemblID", "entrezID", "geneSymbol"))
# are geneID rownames ? -> "rn" or numeric giving the column
geneID_loc <- "rn"
stopifnot(geneID_loc == "rn" | is.numeric(geneID_loc))
removeDupGeneID <- TRUE
# *************************************************************************************************************************
# ************************************ SETTINGS FOR 1_runGeneDE
# *************************************************************************************************************************
# labels for conditions
cond1 <- "lowInf"
cond2 <- "highInf"
# path to sampleID for each condition - should be Rdata ( ! sample1 for cond1, sample2 for cond2 ! )
sample1_file <- "/mnt/ed4/marie/other_datasets/TCGAskcm_lowInf_highInf/lowInf_ID.Rdata"
sample2_file <- "/mnt/ed4/marie/other_datasets/TCGAskcm_lowInf_highInf/highInf_ID.Rdata"
minCpmRatio <- 20/888
inputDataType <- "RSEM"
nCpu <- 20
# number of permutations
nRandomPermut <- 10000
step8_for_permutGenes <- TRUE
step8_for_randomTADsFix <- FALSE
step8_for_randomTADsGaussian <- FALSE
step8_for_randomTADsShuffle <- FALSE
step14_for_randomTADsShuffle <- FALSE
# > file edited: Fri, 20 Mar 2020 17:46:30 +0100
# path to output folder:
pipOutFold <- "/mnt/etemp/marie/v2_Yuanlong_Cancer_HiC_data_TAD_DA/PIPELINE/OUTPUT_FOLDER/ENCSR862OGI_RPMI-7951_RANDOMMIDPOSSTRICT_40kb/TCGAskcm_lowInf_highInf"
# OVERWRITE THE DEFAULT SETTINGS FOR INPUT FILES - use TADs from the current Hi-C dataset
TADpos_file <- paste0(setDir, "/mnt/etemp/marie/v2_Yuanlong_Cancer_HiC_data_TAD_DA/ENCSR862OGI_RPMI-7951_RANDOMMIDPOSSTRICT_40kb/genes2tad/all_assigned_regions.txt")
#chr1 chr1_TAD1 750001 1300000
#chr1 chr1_TAD2 2750001 3650000
#chr1 chr1_TAD3 3650001 4150000
gene2tadDT_file <- paste0(setDir, "/mnt/etemp/marie/v2_Yuanlong_Cancer_HiC_data_TAD_DA/ENCSR862OGI_RPMI-7951_RANDOMMIDPOSSTRICT_40kb/genes2tad/all_genes_positions.txt")
#LINC00115 chr1 761586 762902 chr1_TAD1
#FAM41C chr1 803451 812283 chr1_TAD1
#SAMD11 chr1 860260 879955 chr1_TAD1
#NOC2L chr1 879584 894689 chr1_TAD1
# overwrite main_settings.R: nCpu <- 25
nCpu <- 40
# *************************************************************************************************************************
# ************************************ SETTINGS FOR PERMUTATIONS (5#_, 8c_)
# *************************************************************************************************************************
# number of permutations
nRandomPermut <- 100000
gene2tadAssignMethod <- "maxOverlap"
nRandomPermutShuffle <- 100000
step8_for_permutGenes <- TRUE
step8_for_randomTADsFix <- FALSE
step8_for_randomTADsGaussian <- FALSE
step8_for_randomTADsShuffle <- FALSE
step14_for_randomTADsShuffle <- FALSE
# added here 13.08.2019 to change the filtering of min. read counts
rm(minCpmRatio)
min_counts <- 5
min_sampleRatio <- 0.8
# to have compatible versions of Rdata
options(save.defaults = list(version = 2))
| /INPUT_FILES/ENCSR862OGI_RPMI-7951_RANDOMMIDPOSSTRICT_40kb/run_settings_TCGAskcm_lowInf_highInf.R | no_license | marzuf/v2_Yuanlong_Cancer_HiC_data_TAD_DA_PIPELINE_INPUT_FILES | R | false | false | 4,380 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/assoc.functions.R
\name{makeAssociation}
\alias{makeAssociation}
\alias{makeNonSymAssociation}
\title{Generate Association Matrix from Observed Association Data}
\usage{
makeAssociation(x)
makeNonSymAssociation(x)
}
\arguments{
\item{x}{an m x n data frame or matrix that contains the entities observed
in the columns, the observations in rows, and each cell equal to 0 or 1 indicating
whether the entity was observed in that observation}
}
\value{
An n x n matrix containing the association rates between the n entities in x
}
\description{
\code{makeAssociation} takes as its input a m x n two-mode matrix
of observed association data and generates a n x n matrix of
association rates between n entities.
}
\section{Functions}{
\itemize{
\item \code{makeNonSymAssociation}: Association rates that are not symmetric
}}
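The Rd page above documents the interface but not the computation. As an illustration only, a simple-ratio association index (a common choice for observed-association data; this is an assumption about the method, not the package's actual implementation, and `make_assoc_sketch` is a hypothetical name) could be computed from such a two-mode 0/1 matrix like this:

```r
# Hypothetical simple-ratio sketch (NOT the whassocr implementation):
# rate(a, b) = (# observations containing both a and b) /
#              (# observations containing a or b)
make_assoc_sketch <- function(x) {
  x <- as.matrix(x)
  both <- t(x) %*% x                       # co-occurrence counts, n x n
  seen <- colSums(x)                       # times each entity was observed
  either <- outer(seen, seen, "+") - both  # observations with a or b
  rate <- ifelse(either > 0, both / either, 0)
  dimnames(rate) <- list(colnames(x), colnames(x))
  rate                                     # symmetric n x n matrix
}

obs <- matrix(c(1, 1, 0,
                1, 0, 1,
                0, 1, 1), nrow = 3, byrow = TRUE,
              dimnames = list(NULL, c("A", "B", "C")))
make_assoc_sketch(obs)  # diagonal is 1; A-B rate is 1/3 here
```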
| /man/makeAssociation.Rd | no_license | telliott27/whassocr | R | false | true | 887 | rd |