blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 2 327 | content_id stringlengths 40 40 | detected_licenses listlengths 0 91 | license_type stringclasses 2 values | repo_name stringlengths 5 134 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 46 values | visit_date timestamp[us]date 2016-08-02 22:44:29 2023-09-06 08:39:28 | revision_date timestamp[us]date 1977-08-08 00:00:00 2023-09-05 12:13:49 | committer_date timestamp[us]date 1977-08-08 00:00:00 2023-09-05 12:13:49 | github_id int64 19.4k 671M ⌀ | star_events_count int64 0 40k | fork_events_count int64 0 32.4k | gha_license_id stringclasses 14 values | gha_event_created_at timestamp[us]date 2012-06-21 16:39:19 2023-09-14 21:52:42 ⌀ | gha_created_at timestamp[us]date 2008-05-25 01:21:32 2023-06-28 13:19:12 ⌀ | gha_language stringclasses 60 values | src_encoding stringclasses 24 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 7 9.18M | extension stringclasses 20 values | filename stringlengths 1 141 | content stringlengths 7 9.18M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1d0628d76c892325b1379d7261a5fba4f4bb2092 | 93c344d2d9f5e835eb62f4b4a89af372004015bc | /man/factor.switching-package.Rd | 15e85f877a57b188415104763f63a799f304a8e1 | [] | no_license | cran/factor.switching | d0311c0fb94843af0e4975be86c1e68b4679f992 | 1fcb32e132f6587518ac65dd98e94a5fcf3513c0 | refs/heads/master | 2022-04-30T12:47:27.722212 | 2022-03-16T10:20:02 | 2022-03-16T10:20:02 | 246,801,367 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,773 | rd | factor.switching-package.Rd | \name{factor.switching-package}
\alias{factor.switching-package}
\alias{factor.switching}
\docType{package}
\title{
\packageTitle{factor.switching}
}
\description{
\packageDescription{factor.switching}
There are three alternative schemes for minimizing the objective function.
\enumerate{
\item{Exact \code{\link{rsp_exact}}}
\item{Partial Simulated Annealing \code{\link{rsp_partial_sa}}}
\item{Full simulated annealing \code{\link{rsp_full_sa}}}
}
The exact algorithm solves \eqn{2^q} assignment problems per MCMC iteration, where \eqn{q} denotes the number of factors of the fitted model. For typical numbers of factors (e.g. \eqn{q<11}) the exact scheme should be preferred. Otherwise, the two approximate algorithms based on simulated annealing may be considered. The partial simulated annealing scheme is more efficient than the full one.
In the case of parallel MCMC chains, applying the RSP algorithm to each chain separately will identify the factor loadings within each chain. However, the results will not be comparable between chains. Multiple MCMC chains can be compared via the \code{\link{compareMultipleChains}} function.
}
\details{
The DESCRIPTION file:
\packageIndices{factor.switching}
}
\author{
Panagiotis Papastamoulis
Maintainer: Panagiotis Papastamoulis
}
\references{
Papastamoulis, P. and Ntzoufras, I. (2022).
On the identifiability of Bayesian Factor Analytic models.
\emph{Statistics and Computing}, 32, 23. https://doi.org/10.1007/s11222-022-10084-4.
}
\keyword{ package }
\seealso{
\code{\link{rsp_exact}}, \code{\link{plot.rsp}}, \code{\link{compareMultipleChains}}
}
\examples{
# load 2 chains each one consisting of a
# small mcmc sample of 100 iterations
# with p=6 variables and q=2 factors.
data(small_posterior_2chains)
Nchains <- length(small_posterior_2chains)
reorderedPosterior <- vector('list',length=Nchains)
# post-process the 2 chains
for(i in 1:Nchains){
reorderedPosterior[[i]] <- rsp_exact( lambda_mcmc = small_posterior_2chains[[i]],
maxIter = 100,
threshold = 1e-6,
verbose=TRUE )
}
# plot posterior summary for chain 1:
plot(reorderedPosterior[[1]])
# plot posterior summary for chain 2:
plot(reorderedPosterior[[2]])
# make them comparable
makeThemSimilar <- compareMultipleChains(rspObjectList=reorderedPosterior)
# plot the traces of both chains
oldpar <- par(no.readonly =TRUE)
par(mfcol=c(2,6),mar=c(4,4,2,1))
plot(makeThemSimilar,auto.layout=FALSE,density=FALSE,
ylim=c(-1.1,1.1),smooth=FALSE,col=c('red','blue'))
legend('topright',c('post-processed chain 1',
'post-processed chain 2'),lty=1:2,col=c('red','blue'))
par(oldpar)
# you can also use the summary of mcmc.list
summary(makeThemSimilar)
}
|
bfa489a1210557872fb75b5cf91d6ee79c27edb9 | e8c5c67d2012d344dec21b20bb6ed29e1db85c79 | /man/theme_statthinking.Rd | dc2d8f271e84b0002a637deeebbb97c6907a1e4c | [] | no_license | lebebr01/statthink | 59244bd7421e2f5f6b28c9d71292a059985b0aa7 | a2a54c063aebccfd7c30c4a76b925ba69f33be1a | refs/heads/main | 2021-09-11T14:53:23.119904 | 2021-08-30T17:27:19 | 2021-08-30T17:27:19 | 203,598,240 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 691 | rd | theme_statthinking.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/book_theme.r
\name{theme_statthinking}
\alias{theme_statthinking}
\title{Theme for the Statistical Thinking Text}
\usage{
theme_statthinking(base_size = 12, base_family = "sans")
}
\arguments{
\item{base_size}{The base size of the text passed to ggplot2 theme.}
\item{base_family}{The base font family to use, passed as a character string.}
}
\description{
Theme created specifically for the statistical thinking text.
}
\examples{
library(ggplot2)
ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(gear))) +
geom_point() +
facet_wrap(~am) +
geom_smooth(method = "lm", se = FALSE) +
theme_statthinking()
}
|
d18a4f4ec9ce6db916346c8ec4f7a166d798f1a6 | f21895199dc2ad4982299677ace1185cec7c730e | /R_Data&Scripts/16S_Sequence_Processing_DADA2.R | cfaa2e95767715006b93a0a11e400371c35038fc | [] | no_license | jordan129/Rat_Antibiotics_FMT_mSystems_2020 | e907aea5a4aa91b509b020a2c3b5eb09463ec220 | 3761b1d81148a9f57d77c66d9828e4e75da5fc01 | refs/heads/main | 2023-01-13T12:01:58.808697 | 2020-11-09T18:14:16 | 2020-11-09T18:14:16 | 310,916,779 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,768 | r | 16S_Sequence_Processing_DADA2.R | # References:
# https://benjjneb.github.io/dada2/tutorial.html
# https://www.bioconductor.org/packages/devel/bioc/vignettes/dada2/inst/doc/dada2-intro.html
### Load all packages
install.packages("pacman")
library("pacman")
pacman::p_load(ggplot2, dada2, phyloseq, gridExtra, dplyr, DECIPHER, phangorn, knitr)
# dada2 for sequence processing; DECIPHER to performs sequence alignment
# phangorn for phylogenetic tree generation; phyloseq for data visualisation and analysis
setwd("D:/dada2/") # working folder to store outputs
getwd()
path <- "D:/16S_data" # CHANGE to the directory containing the fastq files and silva data files (provided in the R_Data&Scripts folder in Github)
list.files(path)
### STEP 1 ###
## Read sample names
# Forward and reverse fastq filenames have format: SAMPLENAME_R1_001.fastq and SAMPLENAME_R2_001.fastq
fnFs <- sort(list.files(path, pattern="_R1_001.fastq", full.names = TRUE))
fnRs <- sort(list.files(path, pattern="_R2_001.fastq", full.names = TRUE))
# Extract sample names, assuming filenames have format: SAMPLENAME_XXX.fastq
sample.names <- sapply(strsplit(basename(fnFs), "_"), `[`, 1)
### STEP 2 ###
## Inspect read quality profiles
#Visualize the quality profiles of the forward reads:
plotQualityProfile(fnFs[1:6])
#Visualize the quality profiles of the reverse reads:
plotQualityProfile(fnRs[1:6])
### STEP 3 ###
## Filter and trim (takes about 5 seconds per sample)
# Make filenames for the filtered fastq files
filtFs <- file.path(path, "filtered", paste0(sample.names, "_F_filt.fastq.gz"))
filtRs <- file.path(path, "filtered", paste0(sample.names, "_R_filt.fastq.gz"))
out <- filterAndTrim(fnFs, filtFs, fnRs, filtRs, trimLeft=c(19, 19), truncLen=c(250,185),
maxN=0, maxEE=c(2,2), truncQ=2, rm.phix=TRUE,
compress=TRUE, multithread=TRUE)
head(out)
save.image("DADA2_output_1.Rdata")
#If you want to speed up downstream computation, consider tightening maxEE.
#If too few reads are passing the filter, consider relaxing maxEE,
#perhaps especially on the reverse reads (e.g. maxEE=c(2,5)),
#and reducing truncLen to remove low-quality tails.
#Remember you must maintain overlap after truncation (truncLen) in order to merge the reads later.
#trimLeft=c(19, 19) removes the primers; if you have already removed them, delete this argument
#truncQ=2, #truncate reads after a quality score of 2 or less
#truncLen=130, #truncate after 130 bases
#trimLeft=10, #remove 10 bases off the 5’ end of the sequence
#maxN=0, #Don’t allow any Ns in sequence
#maxEE=2, #A maximum number of expected errors
#rm.phix=TRUE, #Remove lingering PhiX (control DNA used in sequencing) as it is likely there is some.
#multithread does not work on Windows, so it does not matter whether you select TRUE or FALSE
### STEP 4 ###
## Learn the Error Rates
errF <- learnErrors(filtFs, multithread=TRUE)
errR <- learnErrors(filtRs, multithread=TRUE)
#Visualize the estimated error rates:
plotErrors(errF, nominalQ=TRUE)
plotErrors(errR, nominalQ=TRUE)
save.image("DADA2_output_2.Rdata")
### STEP 5 ###
## Dereplication and inference
# Dereplication combines all identical sequencing reads into “unique sequences” with a corresponding “abundance”: the number of reads with that unique sequence
# Dereplication substantially reduces computation time by eliminating redundant comparisons.
##Dereplication
derepFs <- derepFastq(filtFs, verbose=TRUE)
derepRs <- derepFastq(filtRs, verbose=TRUE)
# Name the derep-class objects by the sample names
names(derepFs) <- sample.names
names(derepRs) <- sample.names
##Sample Inference
dadaFs <- dada(derepFs, err=errF, multithread=TRUE)
dadaRs <- dada(derepRs, err=errR, multithread=TRUE)
dadaFs[[1]]
save.image("DADA2_output_3.Rdata")
### STEP 6 ###
## Merge paired reads
mergers <- mergePairs(dadaFs, derepFs, dadaRs, derepRs, verbose=TRUE)
# Inspect the merger data.frame from the first sample
head(mergers[[1]])
save.image("DADA2_output_4.Rdata")
### STEP 7 ###
## Construct sequence table
seqtab <- makeSequenceTable(mergers)
dim(seqtab)
# Inspect distribution of sequence lengths
table(nchar(getSequences(seqtab)))
hist(nchar(getSequences(seqtab)))
#You can remove non-target-length sequences with base R manipulations of the sequence table
seqtab2 <- seqtab[,nchar(colnames(seqtab)) %in% seq(272,366)] # change according to histogram
seqtab <- seqtab2
save.image("DADA2_output_5.Rdata")
### STEP 8 ###
## Remove chimeras
seqtab.nochim <- removeBimeraDenovo(seqtab, method="consensus", multithread=TRUE, verbose=TRUE)
dim(seqtab.nochim)
sum(seqtab.nochim)/sum(seqtab)
##Sanity check
#Track reads through the pipeline, This is good to report in your methods/results
getN <- function(x) sum(getUniques(x))
track <- cbind(out, sapply(dadaFs, getN), sapply(dadaRs, getN), sapply(mergers, getN), rowSums(seqtab.nochim))
# If processing a single sample, remove the sapply calls: e.g. replace sapply(dadaFs, getN) with getN(dadaFs)
colnames(track) <- c("input", "filtered", "denoisedF", "denoisedR", "merged", "nonchim")
rownames(track) <- sample.names
head(track)
write.csv(track, "track_reads.csv")
save.image("DADA2_output_6.Rdata")
### STEP 9 ###
## Assign taxonomy using Silva database
taxa_silva <- assignTaxonomy(seqtab.nochim, "D:/16S_data/silva_nr_v138_train_set.fa.gz", multithread=TRUE, tryRC=TRUE)
taxa_silva_species <- addSpecies(taxa_silva, "D:/16S_data/silva_species_assignment_v138.fa.gz")
taxa_silva_one_direction <- assignTaxonomy(seqtab.nochim, "D:/16S_data/silva_nr_v138_train_set.fa.gz", multithread=TRUE)
taxa_silva_species_one_direction <- addSpecies(taxa_silva_one_direction, "D:/16S_data/silva_species_assignment_v138.fa.gz")
save.image("DADA2_output_8.Rdata")
#Inspect the taxonomic assignments:
taxa.print <- taxa_silva # Removing sequence rownames for display only
rownames(taxa.print) <- NULL
head(taxa.print)
# If your reads do not seem to be appropriately assigned, for example lots of your bacterial 16S sequences
# are being assigned as Eukaryota NA NA NA NA NA, your reads may be in the opposite orientation as the reference database.
# Tell dada2 to try the reverse-complement orientation and see if this fixes the assignments:
# taxa <- assignTaxonomy(seqtab.nochim, "/silva_nr_v128_train_set.fa.gz", multithread=TRUE, tryRC=TRUE)
write.csv(seqtab.nochim, "USV_counts_silva.csv")
write.csv(taxa_silva, "taxa_silva.csv")
write.csv(taxa_silva_species, "taxa_silva_species.csv")
write.csv(taxa_silva_species_one_direction, "taxa_silva_species_one_direction.csv")
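As an optional next step (not part of the original pipeline), the DADA2 outputs can be handed to phyloseq, which is already loaded at the top of this script. The sample metadata object `samdf` below is a hypothetical placeholder:

```r
# Sketch only: combine the DADA2 outputs above into a phyloseq object.
# `samdf` is a hypothetical stand-in for real sample metadata
# (rownames must match the sample names in seqtab.nochim).
samdf <- data.frame(SampleID = rownames(seqtab.nochim),
                    row.names = rownames(seqtab.nochim))
ps <- phyloseq(otu_table(seqtab.nochim, taxa_are_rows = FALSE),
               sample_data(samdf),
               tax_table(taxa_silva_species))
ps # prints a summary of the combined object
```

From here, standard phyloseq workflows (filtering, ordination, abundance plots) apply.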
# Here we assign taxonomy both with and without trying the reverse complement. Once both
# CSVs are written, compare and combine them: some bacteria are assigned in only one of the
# two files, so combining the results recovers more assignments.
save.image("DADA2_output_9.Rdata") |
5f6fe9c3ca467b19368f41353243687d339dad45 | 316336f52dfa708beaf1577c062b88185a68d157 | /Kernel_and_Neural_R/man/kernpredictor.Rd | c5ef412ea054fe9c22b01be1e7fcef9d631aa257 | [] | no_license | fetacore/NonparametricNeuralNets | b401f7bc5d0fe1897d296c659277d4f2e1692aa3 | bba5e27329ce49a7bbbae16dfef72e4522b8359a | refs/heads/master | 2021-05-09T04:14:27.522022 | 2018-01-28T15:03:12 | 2018-01-28T15:03:12 | 119,266,862 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 770 | rd | kernpredictor.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/kernpredictor.R
\name{kernpredictor}
\alias{kernpredictor}
\title{Nonparametric predictor}
\usage{
kernpredictor(my_x, y, x, h = 0.5, p = 0, distance = "Eucledean",
kernel = "Gaussian")
}
\arguments{
\item{my_x}{The row vector of values (or a scalar value) with respect to which the prediction is made}
\item{y}{The given values before the prediction}
\item{x}{The previous observations of x}
\item{h}{The bandwidth}
\item{p}{The order of the local polynomial (by default p=0)}
\item{distance}{The distance metric}
\item{kernel}{The name of the kernel function of choice (passed to choose_kernel)}
}
\value{
The predicted y corresponding to my_x
}
\description{
Nonparametric predictor
}
|
3757016555b060c1451b0f724e170c7f2a78f11a | b85cb92935407d40d03405ea09a7f96d005c1954 | /Functions/data_format.R | e7bd6ce0dcdac4e863feaf5828bf732e925fe53a | [] | no_license | enerhiya/Spatio-Temporal-Cross-Covariance-Functions-under-the-Lagrangian-Framework | 0cccffd7a98d13e4f4c7353d9c42e923ae34dbdd | 5084f24d9b89c9bff2794b0575a44d7ea0ccaf54 | refs/heads/master | 2021-06-18T19:50:38.829233 | 2021-02-17T17:09:46 | 2021-02-17T17:09:46 | 177,747,457 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,096 | r | data_format.R | data_format_into_matrix <- function(data1, data2 = NULL, temporal_replicates, aggs_max = 1, simulated = T){
# data1, data2: data matrix with temporal replicates as columns and each row represents the locations
# aggs_max: the number of consecutive timesteps you want to average. Here we take the data as is and do not take averages since they are already Gaussian and stationary
# output: data matrix with the first t columns for variable 1 and the t+1 column to 2t column for variable 2
  if (!simulated) {
ave_var1 <- ave_var2 <- matrix(,ncol=floor(temporal_replicates/aggs_max),nrow=nrow(data1))
for(locat in 1:nrow(data1)){
new_uu <- new_uu2 <- matrix(,ncol=floor(temporal_replicates/aggs_max),nrow=aggs_max)
for(gg in 1:floor(temporal_replicates/aggs_max)){
new_uu[,gg] <- data1[locat,((gg-1)*aggs_max+1):(gg*aggs_max)]
new_uu2[,gg] <- data2[locat,((gg-1)*aggs_max+1):(gg*aggs_max)]
}
ave_var1[locat,]<- colMeans(new_uu)
ave_var2[locat,]<- colMeans(new_uu2)
}
return(cbind(ave_var1, ave_var2))
}
} |
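A minimal usage sketch for the function above, using small simulated matrices (the inputs here are illustrative, not real station data):

```r
# Hypothetical example: 4 locations, 6 temporal replicates per variable.
set.seed(1)
data1 <- matrix(rnorm(24), nrow = 4) # variable 1: rows = locations, cols = times
data2 <- matrix(rnorm(24), nrow = 4) # variable 2
# With simulated = FALSE, every aggs_max = 2 consecutive timesteps are averaged,
# giving floor(6/2) = 3 columns per variable, bound together column-wise.
out <- data_format_into_matrix(data1, data2, temporal_replicates = 6,
                               aggs_max = 2, simulated = FALSE)
dim(out) # 4 x 6: 3 averaged timesteps for variable 1, then 3 for variable 2
```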
eaff8c1c3d026fb8bdb9a2ed67f3a0c9d542fe46 | b62236109f1e8d01739e6cc8ec3d62fd046a1784 | /STAT_541_Statistical_Methods_2/Week_2/Exercise 11-6 R Code.R | 93487a9983af120a526fb8efd816077e02d151b9 | [] | no_license | shahidnawazkhan/Machine-Learning-Book | 049670a3b9d74b11e619428ee11468f08ffee595 | f152f36a8b7dbabe30d2c1b6de10788cbd1fa5e8 | refs/heads/master | 2023-08-22T02:48:33.039710 | 2021-10-19T14:59:49 | 2021-10-19T14:59:49 | 395,435,421 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 979 | r | Exercise 11-6 R Code.R | #
# Chapter 11 Exercise 11.6
#
# In RStudio, use File, Import Dataset, From Excel...
# to get Excel data file
# Note the name for the imported Excel file
str(ex11_6)
# To have most of our R code reuseable for future
# analyses, we will use a data object called dataobj
dataobj <- as.data.frame(ex11_6)
str(dataobj)
par(mfrow=c(1,1))
plot(dataobj$x,dataobj$y,xlab="x", ylab="y",
main="Chapter 11 Exercise 11.6", pch=19,cex=1.5)
# Split the plotting panel into a 2 x 2 grid
# this puts four graphs in one window
par(mfrow = c(2, 2))
# model the dependent variable as a function of
# the independent variable
model <- lm(y ~ x , data=dataobj)
summary(model)
confint(model,level=0.95)
plot(model)
par(mfrow=c(1,1))
plot(dataobj$x,dataobj$y,xlab="x", ylab="y",
main="Chapter 11 Exercise 11.6", pch=19,cex=1.5)
lines(sort(dataobj$x),fitted(model)[order(dataobj$x)], col="blue", type="l")
# Predict y for a given value of x
predict(model, data.frame(x = 77))
|
5513a18e2f864db203a61208c7844a195098bc15 | 30b4c042ad99283d184823e3ba98110efed674db | /man/next_test.Rd | 92723f2e7e71b733bfe9fa35e1a6e1c08b894098 | [
"MIT"
] | permissive | almartin82/parccvizieR | d095e6a9c79387ace6372db06f0fb508351b7d25 | da09e8ea5c2253ec9729cc10974764a6a96e43e4 | refs/heads/master | 2023-06-14T04:10:44.961051 | 2023-05-25T13:30:58 | 2023-05-25T13:30:58 | 46,429,452 | 3 | 2 | MIT | 2018-06-11T21:01:18 | 2015-11-18T15:54:04 | R | UTF-8 | R | false | true | 366 | rd | next_test.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/growth.R
\name{next_test}
\alias{next_test}
\title{Returns sequential test code (next or prior)}
\usage{
next_test(x)
}
\arguments{
\item{x}{a test code}
}
\value{
the next or prior test code using standard PARCC progression
}
\description{
Returns sequential test code (next or prior)
}
|
7723d2ba0e2243082afaa2bad9ed62e0c2c11710 | ca85aca58c75b824f5f7668673a4e9958e3be049 | /man/dx.Rd | 6258ba0bd6ab4091c436758590dbefb865d96109 | [] | no_license | jmjablons/icager | 1a2fd0586f73682b7f4dd3578e5c2a50af46638e | 7dddccef96b892ccf3750a8af348f2f547d46937 | refs/heads/master | 2022-12-03T14:54:28.876604 | 2020-08-17T13:13:52 | 2020-08-17T13:13:52 | 228,870,393 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 1,526 | rd | dx.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/managedata.R
\docType{data}
\name{dx}
\alias{dx}
\title{Sample experimental data}
\format{A tibble with 330 rows and 16 variables:
\describe{
\item{id}{unique visit id, not really handy; <chr>}
\item{deviceid}{house cage; <chr>}
\item{start}{time of visit start; <dttm>}
\item{end}{time of visit end; <dttm>}
\item{corner}{corner in which the given visit took place; <int>}
\item{condition}{indicates the reward probability setup; <chr>}
\item{solution}{setup issue, not handy; <chr>}
\item{paradigm}{name of the paradigm set while recording; <chr>}
\item{tag}{participant's unique identifier; <dbl>}
\item{temperature}{temperature noted; <dbl>}
\item{illumination}{light detected level; <int>}
\item{nlick}{number of licks during given visit; <dbl>}
\item{durationlick}{total time spent licking; <dbl>}
\item{contacttimelick}{similar to total licking time detected; <dbl>}
\item{nnosepoke}{total number of nosepokes per given visit; <int>}
\item{dooropened}{binary indicator of access to reward being granted 1 - yes, 0 -no; <dbl>}
}}
\source{
experiment at Mai IF PAN
}
\usage{
dx
}
\description{
A dataset of randomly selected choices
made by sample agents performing
probabilistic reversal learning task.
Includes actions /visits/ performed by 3 random agents,
first 10 per each of 11 experimental stages.
}
\examples{
class(dx) #similar to data frame
dim(dx) #nrow, ncol
head(dx) #couple of first rows
}
\keyword{datasets}
|
7c7c838eebbcf65493a9ce0d7f36d681ae6a72fe | 4c961b6c657e9a8e91a483ab48e1318094b40130 | /man/bacteria.Rd | 6acfee2a7676b1623a42104d49a6db0058fefed7 | [] | no_license | cran/mdhglm | 5c220f06acb28428bd035dfb9097fb9b243ca504 | 689c0b38742cb75370cd0686d1b3717dd25c87c8 | refs/heads/master | 2020-04-06T21:26:22.271570 | 2018-10-25T07:00:19 | 2018-10-25T07:00:19 | 54,140,240 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,039 | rd | bacteria.Rd | \name{bacteria}
\alias{bacteria}
\docType{data}
\title{Bacteria Data}
\description{
Tests of the presence of the bacteria H. influenzae in children with otitis media in the Northern
Territory of Australia (Ripley, et al., 2016).
}
\usage{data("bacteria")}
\format{
A data frame with 220 observations on the following 6 variables.
\describe{
\item{\code{y}}{a factor with levels \code{n} (absence of bacteria) and \code{y} (preence of bacteria)}
\item{\code{ap}}{a factor with levels \code{a} (active) and \code{p} (placebo)}
\item{\code{hilo}}{a factor with levels \code{hi} (high compliance) and \code{lo} (low compliance)}
\item{\code{week}}{week of test}
\item{\code{ID}}{subjects identifier}
\item{\code{trt}}{a factor with levels \code{placebo}, \code{drug} and \code{drug+}}
}
}
\references{
Ripley, B., Venables, B., Bates, D.M., Hornik, K., Gebhardt, A. and Firth, D. (2016). MASS: Support Functions and Datasets for Venables and Ripley's MASS. R package version 7.3-45.
}
|
4929a03263611edafee386bc93f317693e92a26e | 839222f83d7868ca613fa2bf7bda85123ec867f5 | /cibersort_luad_experiment/.PowerFolder/archive/cibersort_merge_stanford_data_K_59.R | fbf9b9738b8cd57471456aa4b05221607ac5b0df | [] | no_license | QifengOu/legendary-disco | 72d8e90adc3e9cea9207d75784e801419aff46fb | 1f368181f95cc67de4a3a8e71569e3117d27bea4 | refs/heads/master | 2021-01-02T06:28:18.566439 | 2019-09-26T13:05:07 | 2019-09-26T13:05:07 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,173 | r | cibersort_merge_stanford_data_K_59.R | #### ----------------------------------------------------------------------------------- ####
########################### MCP-Counter Abundance Analysis ##################################
#### ----------------------------------------------------------------------------------- ####
mypackages <- c("GSVA", "GSEABase", "Biobase", "genefilter",
"limma", "RColorBrewer", "scales", "dplyr")
lapply(mypackages, library, character.only = T)
### Abundance Analysis ###
# Import CIBERSORT Export Data
setwd("C:/Users/rbuch/DataShare/cibersort_luad_experiment/Output/RSEM/Raw")
df <- read.csv("CIBERSORT.Output_1000_permutations_rsem.csv", header = T)
setwd("C:/Users/rbuch/DataShare/mutations_for_alignment/")
detailed_mt <- read.csv("KRAS_STK11_plus_dbl_tidy_vs_other.csv", header = T)
# Re-name First column and Merge
colnames(df)[colnames(df)=="Input.Sample"] <- "Patient.ID"
output_merged <- merge(df, detailed_mt)
output_arranged <- arrange(output_merged, as.character(Mutation_Status))
setwd("C:/Users/rbuch/DataShare/cibersort_luad_experiment/Output/RSEM/")
write.csv(output_arranged, "cibersort_1000_permutations_output_merged.csv", row.names = F)
|
5705ce8f8ff069c8e9362b63e6c367114cc546a5 | 2195aa79fbd3cf2f048ad5a9ee3a1ef948ff6601 | /docs/Files.rd | 0648a39b0447a07ed86c1c2045253eccb30bec02 | [
"MIT"
] | permissive | snakamura/q3 | d3601503df4ebb08f051332a9669cd71dc5256b2 | 6ab405b61deec8bb3fc0f35057dd880efd96b87f | refs/heads/master | 2016-09-02T00:33:43.224628 | 2014-07-22T23:38:22 | 2014-07-22T23:38:22 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,467 | rd | Files.rd | =begin
=Configuration Files
In QMAIL3, configuration is stored in XML files. Most settings can be changed from the UI via ((<the options dialog|URL:Options.html>)) and ((<the account properties|URL:AccountProperty.html>)), but some settings require editing the files directly. This page describes the format of these files.
It also covers some files that are created automatically, other than the configuration files.
==Profile
*((<addressbook.xml|URL:AddressBookXml.html>))
*((<autopilot.xml|URL:AutoPilotXml.html>))
*((<colors.xml|URL:ColorsXml.html>))
*((<filters.xml|URL:FiltersXml.html>))
*((<fonts.xml|URL:FontsXml.html>))
*((<goround.xml|URL:GoRoundXml.html>))
*((<header.xml|URL:HeaderXml.html>))
*((<headeredit.xml|URL:HeaderEditXml.html>))
*((<keymap.xml|URL:KeyMapXml.html>))
*((<menus.xml|URL:MenusXml.html>))
*passwords.xml
*((<qmail.xml|URL:QmailXml.html>))
*resources.xml
*((<rules.xml|URL:RulesXml.html>))
*((<signatures.xml|URL:SignaturesXml.html>))
*((<syncfilters.xml|URL:SyncFiltersXml.html>))
*tabs.xml
*((<texts.xml|URL:TextsXml.html>))
*((<toolbars.xml|URL:ToolbarsXml.html>))
*views.xml
==Account
*((<account.xml|URL:AccountXml.html>))
*folders.xml
*views.xml
*((<uidl.xml|URL:UidlXml.html>))
==Images
*account.bmp
*folder.bmp
*list.bmp
*listdata.bmp
*toolbar.bmp
=end
|
cc17c2af396109b6b616020d6f4e273ec0e9a69c | ebad5c92788ce2f1ae5307a11412d540f7fe9f7c | /exploratory.R | c9cb55654376a250afc5a1ef1e9e486d00ab349c | [] | no_license | techtronics/stocks-sentiment-analysis | c8c8b3b2d15d5e0bb630a91e51c81860f0d66ff3 | c976be9603646a27cbdbc34191cd41cb619a905a | refs/heads/master | 2017-12-14T12:16:46.666417 | 2015-12-12T19:41:04 | 2015-12-12T19:41:04 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,934 | r | exploratory.R | library(ggplot2)
library(vars)
library(forecast)
data_twitter = read.csv2("./csv/csv-google-2-copy.csv", header=TRUE, sep=",", dec=".")
data_stock = read.csv2("./stocks/google-stock-26-10.csv", header=TRUE, sep=",", dec=".")
data_twitter$NormPos <- data_twitter$Pos / data_twitter$Num
data_twitter$NormNeg <- data_twitter$Neg / data_twitter$Num
data_twitter$Date <- as.Date(data_twitter$Date)
data_stock$Date <- as.Date(data_stock$Date)
data_stock$Diff <- data_stock$Close - data_stock$Open
summary(data_twitter$NormPos)
summary(data_twitter$NormNeg)
summary(data_stock$Diff)
var(data_twitter$NormPos)
var(data_twitter$NormNeg)
var(data_stock$Diff)
plot(data_twitter$Date, data_twitter$NormPos, type='l', col="black", main="Positive Sentiment Evolution",
xlab="working days", ylab="+ sentiments scores")
abline(h=mean(data_twitter$NormPos))
plot(data_twitter$Date, data_twitter$NormNeg, type='l', col="black", main="Negative Sentiment Evolution",
xlab="working days", ylab="- sentiments scores")
abline(h=mean(data_twitter$NormNeg))
plot(data_stock$Date, data_stock$Diff, type='l', col="black", main="Daily GOOG Stock Evolution",
xlab="working days", ylab="difference ($)")
abline(h=0)
pairs(~data_stock$Diff + data_twitter$NormPos + data_twitter$NormNeg)
h <- 0  # lag (in days) between sentiment and stock movement
n <- nrow(data_twitter)
plot(data_twitter$NormPos[1:(n - h)], data_stock$Diff[(1 + h):n])
fit <- lm(data_stock$Diff ~ data_twitter$NormPos + data_twitter$NormNeg + data_twitter$Num)
summary(fit)
AIC(fit)
fit <- lm(data_stock$Diff ~ data_twitter$NormPos + data_twitter$NormNeg)
summary(fit)
AIC(fit)
df <- data.frame(Diff = data_stock$Diff,Pos = data_twitter$NormPos, Neg = data_twitter$NormNeg)
lin <- lm(Diff ~ . , df)
summary(lin)
lin2 <- step(lin, trace = 0)
summary(lin2)
df2 <- data.frame(Diff = data_stock$Diff,Pos = data_twitter$NormPos)
VARselect(df2, lag.max=2)
var.2c <- VAR(df2, p=2)
plot(var.2c)
summary(var.2c)
acf(residuals(var.2c)[,1])
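A natural follow-up, sketched here (not part of the original analysis), is forecasting from the fitted VAR:

```r
# 5-step-ahead forecast with 95% intervals from the bivariate VAR above.
fc <- predict(var.2c, n.ahead = 5, ci = 0.95)
fc$fcst$Diff # point forecasts and bounds for the daily stock difference
plot(fc)     # forecast plot from the vars package
```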
|
629c6d5a6f1b47159c4f1a5392e0e4bb99545423 | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/crunch/examples/makeMarginMap.Rd.R | 92decb06df529ecac786cc2e33a63d9c9fff7d50 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 428 | r | makeMarginMap.Rd.R | library(crunch)
### Name: makeMarginMap
### Title: Make a map of margins
### Aliases: makeMarginMap
### Keywords: internal
### ** Examples
## Not run:
##D makeMarginMap(getDimTypes(cat_by_cat_cube))
##D # 1 2
##D
##D makeMarginMap(getDimTypes(MR_by_cat_cube))
##D # 1 1 2
##D
##D makeMarginMap(getDimTypes(cat_by_MR_cube))
##D # 1 2 2
##D
##D makeMarginMap(getDimTypes(MR_by_MR_cube))
##D # 1 1 2 2
## End(Not run)
|
4e8b544873e3b9dcb0b93a2457b3d79d8dd66d50 | 9ece3c458fb1c4e7132bffa0d6dadbe549c741c7 | /plot2.R | 16321a80f2ca787c61e72761bdfe66b5a32d7e6e | [] | no_license | ZzezZ/ExData_Plotting1 | e0df73e1c533752f6c79ca42a8eceb5e758a0395 | f1c019bf4443dbb5f56191c127ddcf4ca06a6f72 | refs/heads/master | 2021-01-18T07:26:07.138676 | 2014-06-08T19:54:23 | 2014-06-08T19:54:23 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 807 | r | plot2.R | source("loaddata.R")
plot2 <- function(file="household_power_consumption.txt",destinyfile=getwd()) {
#Load data from the file into the variable
#Due to the data file is large,so when call function,it'll take about 1-3 minutes
data <- loaddata(file)
if(is.null(data)){
print(" Unable to create the image ")
}
else{
print("finish ! the image plot2 is created ")
#create a plot2.png file in the path specified through "destinyfile"
#if destinyfile doesn't have a path specified,it'll use the current directory
png(filename=paste(destinyfile,"plot2.png",sep="/"),width=480,height=480)
plot(data$Time, data$Global_active_power,
ylab="Global Active Power (kilowatts)",
xlab="",
type="l")
dev.off()
}
} |
23bd015fc83cc4227d046f4c32b0876c1e153101 | 020c2ff9ba9e7b0e53826e10275b9ed161433561 | /R/update_y_mixed_add.R | 61be6f088811b6d77fa34815cc4536690490bb3d | [] | no_license | aimeertaylor/FreqEstimationModel | 9d2e27eeba9adf8ecb719cfcccc72498a1ee864f | a0b48873752bd656d5ea838d45370e54a2782f76 | refs/heads/master | 2021-12-23T03:18:57.597770 | 2016-11-18T18:33:42 | 2016-11-18T18:33:42 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,102 | r | update_y_mixed_add.R | #'@title Update mixed data samaple clone addition
#'@name update_y_mixed_add
#'@description Function for addition of clone to a mixed data sample
#'@export
update_y_mixed_add <- function(to_update,
genotype_counts_mixed_add,
frequency_truncated)
{
genotype_counts_proposed_mixed_add <- genotype_counts_mixed_add # Allocate space
x2 <- unique(to_update) # Instead of looping over all one-by-one, group into classes
for (i in 1:length(x2)) {
x3 <- x2[i] # Extract unique data type
x4 <- to_update[to_update == x3] # Find all bloodsamples that have the same observed data as the first unique data type
x5 <- names(x4) # Extract the names of those bloodsamples
x6 <- t(rmultinom(n = length(x5), size = 1, prob = 1 * frequency_truncated[x3,])) # For each bloodsample pick one proposed genotype from those compatible
genotype_counts_proposed_mixed_add[x5,] <- genotype_counts_mixed_add[x5,] + x6 # Update genotype_counts_proposed_mixed_add
}
return(genotype_counts_proposed_mixed_add)
}
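A toy call illustrating the expected shapes of the arguments (all objects below are hypothetical stand-ins for the package's real data structures):

```r
# Hypothetical toy data: 2 blood samples, 2 possible genotypes.
genotype_counts <- matrix(0, nrow = 2, ncol = 2,
                          dimnames = list(c("s1", "s2"), c("g1", "g2")))
# Truncated frequency matrix, rows indexed by observed data type.
freq_trunc <- matrix(c(0.7, 0.3), nrow = 1,
                     dimnames = list("typeA", c("g1", "g2")))
to_update <- c(s1 = "typeA", s2 = "typeA") # values index rows of freq_trunc
set.seed(2)
update_y_mixed_add(to_update, genotype_counts, freq_trunc)
# Each sample's count vector gains one clone drawn from freq_trunc.
```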
|
3a7cc2e87f68b9c5be139a6a52c53721193bda4d | b314a1209310aad4b0d16060f2b052044bd413d4 | /CentralPointsAugmentation.R | 75a1431cb1c4b1ff65b4c6f8fd3ff4e6493712f0 | [] | no_license | pbosetti/DCPP-rstudio | 15f267661400743b7603737695cac0c8ab6f3ece | cfdb6dbec3a5aa40083647e642838f10f97bb3cd | refs/heads/master | 2020-12-24T09:52:49.465959 | 2017-01-25T08:08:16 | 2017-01-25T08:08:16 | 73,264,855 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 322 | r | CentralPointsAugmentation.R | # Augmentation with central points
# 2^2 FP, A=reaction time, B=Temperature, Y=reaction rate
lvl <- c(-1, +1)
df <- expand.grid(A=lvl, B=lvl)
df[5:9,]=0
df$Y <- c(
39.3, 40.9, 40, 41.5,
40.3, 40.5, 40.7, 40.2, 40.6
)
anova(lm(Y~A*B+poly(B,2), data=df))
anova(lm(Y~A*B+poly(A,2), data=df))
anova(lm(Y~A+B, data=df))
|
7091d5a1e1fd8541b329fc7b73ef243b6805ad23 | 7f72ac13d08fa64bfd8ac00f44784fef6060fec3 | /RGtk2/man/gtkColorSelectionNew.Rd | eb73cb7799d12d7eb20f42f70baffa043633e213 | [] | no_license | lawremi/RGtk2 | d2412ccedf2d2bc12888618b42486f7e9cceee43 | eb315232f75c3bed73bae9584510018293ba6b83 | refs/heads/master | 2023-03-05T01:13:14.484107 | 2023-02-25T15:19:06 | 2023-02-25T15:20:41 | 2,554,865 | 14 | 9 | null | 2023-02-06T21:28:56 | 2011-10-11T11:50:22 | R | UTF-8 | R | false | false | 318 | rd | gtkColorSelectionNew.Rd | \alias{gtkColorSelectionNew}
\name{gtkColorSelectionNew}
\title{gtkColorSelectionNew}
\description{Creates a new GtkColorSelection.}
\usage{gtkColorSelectionNew(show = TRUE)}
\value{[\code{\link{GtkWidget}}] a new \code{\link{GtkColorSelection}}}
\author{Derived by RGtkGen from GTK+ documentation}
\keyword{internal}
|
99b5437fcfaea8f6eaef80111e4847c25918515e | 1f9b92954e1397f4aec19fa3db099adb088634c5 | /uttrekk.R | 995be0be0aea775254dbbc0603213c4558c22715 | [] | no_license | umhvorvisstovan/usbotn_uttrekk_alioki | b6be024e2fba7b88bcb1e014959483c7d53c0ae0 | f7d6c05d6e725be2b120b92fa4ff953f0e97d4cf | refs/heads/master | 2022-12-19T20:58:52.879395 | 2020-10-08T14:34:50 | 2020-10-08T14:34:50 | 268,803,517 | 0 | 0 | null | 2020-08-26T15:53:43 | 2020-06-02T13:07:43 | R | UTF-8 | R | false | false | 2,392 | r | uttrekk.R |
# here you write which aquaculture area (aliøki) you want to look at
alioki <- "A11"
# here you write how many deployments (útsetur) you want to look at
utsetur <- 3
# here you write from which year you want the chemical surveys shown in the last plot
kemi <- 2014
# here you write from which year you want the benthic fauna surveys shown on the plots
djorlivyear <- 2006
# here you write (TRUE/FALSE) whether you want an interactive table of the computed indices
djorlivreactivetable <- TRUE
# here you write whether you want the output as "pdf" or "html"
type <- "html"
# then press ctrl + alt + r (all keys at once) and the run starts
# - you will be asked to enter your postgres username and password when the run starts
# keep an eye on the console window; any error messages will appear there!
#----------
# do not change anything below this point!!!!
if(type == "pdf") {
out <- "bookdown::pdf_document2"
} else if (type == "html") {
out <- "bookdown::html_document2"
} else {
stop("Have you written type correctly? pdf or html?")
}
if(dir.exists("rapportir") == FALSE) {dir.create("rapportir")}
if(dir.exists("data_output") == FALSE) {dir.create("data_output")}
#packages required
uttrekk_pack <- c("tidyverse", "lubridate", "shiny", "odbc", "DBI", "knitr", "kableExtra",
"ggforce", "ggpubr", "ggrepel", "sf", "reactable", "rmarkdown", "bookdown")
mangla <- uttrekk_pack[!uttrekk_pack %in% installed.packages()[,"Package"]]
# nrow() of a character vector is NULL, so the original !is.null(nrow(mangla))
# test could never trigger; check the length instead
if(length(mangla) > 0) {stop(paste("\n \n You need to install package(s):", paste(mangla, collapse = ", ")))}
rmarkdown::render("RapportUppsamlingAlioki.Rmd", params = list(
alioki = alioki,
server = Sys.getenv("SERVER"),
database = Sys.getenv("DATABASE"),
user = Sys.getenv("USER"),
loyniord = Sys.getenv("PASSWORD"),
kemi = kemi,
lev = utsetur,
djorlivyear = djorlivyear,
djorlivreactivetable = djorlivreactivetable
),
output_format = out,
output_file = paste("rapportir/",alioki, "-", format(Sys.Date(), "%y%m%d"), ".", type, sep = ""), envir = new.env()
)
filestodelete <- tibble(files = intersect(list.files(path = "rapportir", pattern = alioki), list.files(path = "rapportir", pattern = type))) %>%
arrange(desc(files)) %>%
slice(-1) %>%
mutate(files = paste("rapportir/", files, sep = ""))
filestodelete <- as.vector(filestodelete$files)
file.remove(filestodelete)
rm(list = ls()) |
801dd6c8014ff4e44fa516c1c3ca8be9c787b596 | 484e0ac5abfa5e566ca34b401c120fbbad6ffd6d | /multiLibrary.R | 49d0c7a5e0fea7c8f768d8c580a00b74c1dcf187 | [] | no_license | rpietro/Toolbox | 89e385e6056b936f7d39ff46c9a9db6d9320ed90 | 71e15ec89e013f3c34b4bbfeb986d5d9588a2f68 | refs/heads/master | 2016-09-07T19:07:57.693464 | 2012-06-15T14:38:48 | 2012-06-15T14:38:48 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,985 | r | multiLibrary.R | #######################################################################################
#multiLibrary is licensed under a Creative Commons Attribution - Non commercial 3.0 Unported License. see full license at the end of this file.
#######################################################################################
#helper to load several packages in one call; takes a character vector of
#package names (the original version instead required passing the `library`
#function itself as the argument, which obscured the intent)
multiLibrary <- function(pkgs = c("ggplot2", "psych", "RCurl", "irr")) {
  lapply(pkgs, library, character.only = TRUE)
}
#######################################################################################
#multiLibrary is licensed under a Creative Commons Attribution - Non commercial 3.0 Unported License. You are free: to Share — to copy, distribute and transmit the work to Remix — to adapt the work, under the following conditions: Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). Noncommercial — You may not use this work for commercial purposes. With the understanding that: Waiver — Any of the above conditions can be waived if you get permission from the copyright holder. Public Domain — Where the work or any of its elements is in the public domain under applicable law, that status is in no way affected by the license. Other Rights — In no way are any of the following rights affected by the license: Your fair dealing or fair use rights, or other applicable copyright exceptions and limitations; The author's moral rights; Rights other persons may have either in the work itself or in how the work is used, such as publicity or privacy rights. Notice — For any reuse or distribution, you must make clear to others the license terms of this work. The best way to do this is with a link to this web page. For more details see http://creativecommons.org/licenses/by-nc/3.0/
####################################################################################### |
8370a4b3bbb2ed1c13a334424de489b53e9d4765 | 465228f11e8ca089c6ff11876ee88f54475525f6 | /plots.R | 6e7cb35aee34c84e3f56c7f70c66133161156c9a | [] | no_license | zimolzak/realty | 6634f7c7c282db6d5cbcf70674852f5e554436f9 | a8f45d575b17a354c3e08087c3dcb0b887cc4bf0 | refs/heads/master | 2021-05-04T14:17:17.405424 | 2018-02-04T15:48:43 | 2018-02-04T15:48:43 | 120,197,315 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 100 | r | plots.R | library(ggplot2)
X = read.csv("~/Desktop/realty/realty.csv")
qplot(price, sqft, data=X, color=type)
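# Note: qplot() is deprecated in recent ggplot2 releases; the equivalent call
# with the current interface would be (same columns assumed, left commented
# because it needs the local CSV):
# ggplot(X, aes(price, sqft, colour = type)) + geom_point()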
|
29914a21d29ae6ca0c85d93f1066a392a88e2107 | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/brr/examples/plot.brr.Rd.R | 4006bc5b3e1b16c1700617dbdc58c96ff21e1669 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 512 | r | plot.brr.Rd.R | library(brr)
### Name: plot.brr
### Title: plot brr
### Aliases: plot.brr
### ** Examples
model <- Brr(a=2, b=3)
plot(model)
plot(model, dprior(mu))
plot(model, dprior(mu), xlim=c(0,4), lwd=3, col="blue")
plot(model, pprior(mu))
plot(model, qprior(mu))
model <- model(c=4, d=6, S=10, T=10)
plot(model)
plot(model, dprior(phi))
plot(model, dprior(x))
model <- model(y=4)
plot(model, dprior(x_given_y))
model <- model(x=5, y=5)
plot(model, dpost(phi))
model <- model(Snew=10, Tnew=10)
plot(model, dpost(x))
|
80b44062ed429a389acf9af85010133a8cf14874 | d78db58b9c334159d85104234609556cc91e13c1 | /R/estimatebeta_SA.R | 99415857a5967406b2b74e29182d3242eb6f4d3d | ["MIT"] | permissive | meleangelo/grdpg | 234339ca3e289450a096b2049876b7622c4db839 | a4fda351addcfc0950e58fccb4f5d702758c678b | refs/heads/master | 2021-06-10T01:44:12.582473 | 2021-05-01T14:34:06 | 2021-05-01T14:34:06 | 170,724,378 | 3 | 3 | null | null | null | null | UTF-8 | R | false | false | 8,008 | r | estimatebeta_SA.R | #' Estimate beta
#'
#' Estimate beta (the effect of covariates) using a simple procedure.
#'
#' @import mclust
#'
#' @param Xhat Estimated latent positions, should be an `n` by `d` matrix where `n` is the number of nodes and `d` is the embedded dimension.
#' @param muhats The centers of the latent positions for each block. Should be a `d` by `K` matrix where `d` is the embedded dimension and `K` is the number of blocks.
#' @param Ipq `Ipq` matrix for GRDPG. See \link{getIpq}.
#' @param cov A vector specifying the possible values that each covariate could take. For example, with two binary covariates, \code{cov <- c(2, 2)}.
#' @param covariates_block Estimated covariates for each block. Should be a `k` by `c` matrix or dataframe where `k` is the number of blocks and `c` is the number of covariates.
#' @param clusters_cov An `n`-vector indicating the block label of each node with the effect of covariates, where `n` is the number of nodes.
#' @param link Link function. Could be 'identity' (by default) or 'logit'.
#' @param check Method to check probability matrix. Could be 'BF' (by default, see \link{BFcheck}) or 'Remove' (see \link{Removecheck}).
#' @param sd Whether to compute standard errors of the estimate of beta. \code{TRUE} by default.
#' @param rho Sparsity coefficient. Could be `1` (by default) or `0`.
#'
#' @return A list containing the following:
#' \describe{
#' \item{`betahats`}{A list containing all estimated beta. The length of the list equals to the number of covariates. Using \code{sapply(betahats, mean)} to get the estimate of each beta.}
#' \item{`bias`}{If \code{sd==TRUE}, A list containing all bias terms for `betahats` in the CLT. The length of the list equals to the number of covariates. Using \code{sapply(bias, mean)} to get the bias of each beta and using \code{sapply(betahats, mean) - sapply(bias, mean)} to get the unbiased estimate of each beta.}
#' \item{`sd2s`}{If \code{sd==TRUE}, A list containing all variances of `betahats`. The length of the list equals to the number of covariates. Using \code{sapply(sd2s, mean)/n^2} to get the variance of each beta where `n` is the number of nodes.}
#' \item{`...`}{If \code{sd==TRUE}, Lists containing all `psi`, `sigma2` and `covariances`. The length of the list equals to the number of covariates.}
#' }
#'
#' @author Cong Mu \email{placid8889@gmail.com}
#'
#' @seealso \code{\link{grdpg}}
#'
#' @export
estimatebeta_SA <- function(Xhat, muhats, Ipq, cov, covariates_block, clusters_cov, link = 'identity', check = 'BF', sd = TRUE, rho = 1) {
result <- list()
covariates_block <- data.frame(covariates_block)
if (length(cov) != ncol(covariates_block)) {
stop("The length of `cov` should equal to the number of columns in `covariates_block` (both equal to the number of covariates).")
}
if (!(link %in% c('identity', 'logit'))) {
print("Unrecognized `link`, would use 'identity' by default.")
}
if (!(check %in% c('BF', 'Remove'))) {
print("Unrecognized `check`, would use 'BF' by default.")
}
K <- length(unique(clusters_cov))
BXhat <- t(muhats) %*% Ipq %*% muhats
if (link == 'logit') {
if (check == 'BF') {
BXhat <- logit(BFcheck(BXhat))
} else {
BXhat <- logit(Removecheck(BXhat))
}
}
betahats <- vector('list', ncol(covariates_block))
if (sd) {
theta <- t(muhats) %*% Ipq %*% muhats
theta <- BFcheck(theta)
eta <- getWeight(clusters_cov)$freq
delta <- 0
for (kk in 1:length(eta)) {
delta <- delta + eta[kk] * muhats[,kk,drop=FALSE] %*% t(muhats[,kk,drop=FALSE])
}
deltainv <- solve(delta)
zeta <- t(muhats) %*% deltainv %*% muhats
E <- vector('list', length(unique(clusters_cov)))
for (alpha in unique(clusters_cov)) {
for (n in 1:nrow(Xhat)) {
if (rho == 0) {
E[[alpha]] <- c(E[[alpha]], theta[alpha,clusters_cov[n]]*(Xhat[n,,drop=FALSE]%*%t(Xhat[n,,drop=FALSE])))
} else {
E[[alpha]] <- c(E[[alpha]], theta[alpha,clusters_cov[n]]*(1-theta[alpha,clusters_cov[n]])*(Xhat[n,,drop=FALSE]%*%t(Xhat[n,,drop=FALSE])))
}
}
}
bias <- vector('list', ncol(covariates_block))
sd2s <- vector('list', ncol(covariates_block))
psiil1s <- vector('list', ncol(covariates_block))
psiil2s <- vector('list', ncol(covariates_block))
sigma2il1s <- vector('list', ncol(covariates_block))
sigma2il2s <- vector('list', ncol(covariates_block))
sigma2l1l1s <- vector('list', ncol(covariates_block))
sigma2l2l2s <- vector('list', ncol(covariates_block))
covil1il2s <- vector('list', ncol(covariates_block))
}
model2 <- Mclust(diag(BXhat), ncol(BXhat)/prod(cov), verbose = FALSE)
c <- getClusters(data.frame(model2$z))
for (i in 1:nrow(BXhat)) {
for (k in 1:ncol(covariates_block)) {
ind1 <- which(covariates_block[,k]==covariates_block[i,k])
ind2 <- which(covariates_block[,k]!=covariates_block[i,k])
for (l1 in ind1) {
for (l2 in ind2) {
if (c[l1] == c[l2]) {
temp <- setdiff(1:ncol(covariates_block), k)
ind <- c()
if (length(temp) > 0) {
for (l in temp) {
ind <- c(ind, covariates_block[l1,l] == covariates_block[l2,l])
}
} else {
ind <- TRUE
}
if (all(ind)) {
betahats[[k]] <- c(betahats[[k]], BXhat[i,l1] - BXhat[i,l2])
if (sd) {
meanE <- sapply(E, mean)
covs_temps <- compute_cov(i, l1, l2, K, meanE, muhats, deltainv, eta, theta, zeta, Ipq)
covs <- covs_temps[[1]]
temps <- covs_temps[[2]]
psiil1 <- temps[1]
psiil2 <- temps[2]
sigma2il1 <- temps[3]
sigma2il2 <- temps[4]
sigma2l1l1 <- temps[5]
sigma2l2l2 <- temps[6]
covil1il2 <- covs[1] + covs[2] + covs[3] + covs[4] - covs[5] - covs[6] - covs[7] - covs[8] + covs[9]
if (link == 'logit') {
psiil1 <- psiil1 * logitderivative(BFcheck(theta[i,l1]))
psiil2 <- psiil2 * logitderivative(BFcheck(theta[i,l2]))
sigma2il1 <- sigma2il1 * logitderivative(BFcheck(theta[i,l1]))^2
sigma2il2 <- sigma2il2 * logitderivative(BFcheck(theta[i,l2]))^2
sigma2l1l1 <- sigma2l1l1 * logitderivative(BFcheck(theta[l1,l1]))^2
sigma2l2l2 <- sigma2l2l2 * logitderivative(BFcheck(theta[l2,l2]))^2
covil1il2 <- covil1il2 * logitderivative(BFcheck(theta[i,l1])) * logitderivative(BFcheck(theta[i,l2]))
}
psiil1s[[k]] <- c(psiil1s[[k]], psiil1)
psiil2s[[k]] <- c(psiil2s[[k]], psiil2)
sigma2il1s[[k]] <- c(sigma2il1s[[k]], sigma2il1)
sigma2il2s[[k]] <- c(sigma2il2s[[k]], sigma2il2)
sigma2l1l1s[[k]] <- c(sigma2l1l1s[[k]], sigma2l1l1)
sigma2l2l2s[[k]] <- c(sigma2l2l2s[[k]], sigma2l2l2)
covil1il2s[[k]] <- c(covil1il2s[[k]], covil1il2)
if (i == l1) {
tempsd <- sigma2l1l1 + sigma2il2 - 2 * covil1il2
} else if (i == l2) {
tempsd <- sigma2il1 + sigma2l2l2 - 2 * covil1il2
} else {
tempsd <- sigma2il1 + sigma2il2 - 2 * covil1il2
}
tempbias <- (psiil1 - psiil2) / length(clusters_cov)
bias[[k]] <- c(bias[[k]], tempbias)
sd2s[[k]] <- c(sd2s[[k]], tempsd)
}
}
}
}
}
}
}
result$betahats <- betahats
if (sd) {
result$bias <- bias
result$sd2s <- sd2s
result$psiil1s <- psiil1s
result$psiil2s <- psiil2s
result$sigma2il1s <- sigma2il1s
result$sigma2il2s <- sigma2il2s
result$sigma2l1l1s <- sigma2l1l1s
result$sigma2l2l2s <- sigma2l2l2s
result$covil1il2s <- covil1il2s
}
return(result)
}
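# Miniature of the pairing logic in the loops above: blocks (l1, l2) are
# matched when they differ in covariate k but agree on every other covariate.
# Hypothetical 0/1 block-covariate table (two covariates), not grdpg output;
# kept commented so sourcing the package file has no side effects:
# cb <- data.frame(cov1 = c(0, 1, 0, 1), cov2 = c(0, 0, 1, 1))
# k <- 1
# valid <- outer(cb[, k], cb[, k], "!=") & outer(cb[, -k], cb[, -k], "==")
# which(valid, arr.ind = TRUE)  # pairs blocks 1-2 and 3-4, in both orders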
|
8355a246d8e00f4b4902551e41787be437d93aa7 | be17754a5cdbab01b012bf27028aeb6e8d28d129 | /funs.R | 718610a1cfbdb1b60b83c6f048b36e82a49e525a | [] | no_license | a4a/JRC-tecrep-simTest | 55066a1d13cb811d15c9cf52056f5e5b269084da | 1727dca3fe4dfb0578ac3e38a05fce9ad3411a51 | refs/heads/master | 2016-09-05T14:38:29.849908 | 2013-11-15T14:40:18 | 2013-11-15T14:40:18 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 21,006 | r | funs.R | # models for fit
mod0 <- function() ~ 1
amod <- function() ~ age
famod <- function() ~ factor(age)
fymod <- function() ~ factor(year)
faymod <- function() ~ factor(age) + factor(year)
bamod <- function(na) as.formula(paste("~bs(age,",na, ")"))
bymod <- function(ny) as.formula(paste("~bs(year,",ny, ")"))
baymod <- function(na, ny) as.formula(paste("~",as.character(bamod(na))[2],"+", as.character(bymod(ny))[2]))
temod <- function(na, ny) as.formula(paste("~ te(age, year, bs=c(\"tp\", \"tp\"), k=c(", na,",",ny,"))"))
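# Quick check of the formula builders above (safe to evaluate; bs()/te() are
# only resolved later, at fit time). Expected formulas shown as comments:
# amod()         # ~age
# baymod(3, 10)  # ~bs(age, 3) + bs(year, 10)
# temod(4, 10)   # ~te(age, year, bs = c("tp", "tp"), k = c(4, 10))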
selDn <- function (params, data){
a1 = FLQuant(1, dimnames = dimnames(data)) %*% params["a1"]
s = FLQuant(1, dimnames = dimnames(data)) %*% params["sl"]
sr = FLQuant(1, dimnames = dimnames(data)) %*% params["sr"]
if (dims(data)$iter == 1 & dims(a1)$iter > 1)
data = propagate(data, dims(a1)$iter)
s[data >= a1] = sr[data >= a1]
2^(-((data - a1)/s * (data - a1)/s))
}
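# Plain-numeric sketch of the double-normal selectivity above (no FLQuant or
# FLPar machinery), handy for eyeballing a curve; same parameterisation:
selDnNum <- function(age, a1, sl, sr) {
  s <- ifelse(age < a1, sl, sr)   # left width below the peak, right width above
  2^(-((age - a1) / s)^2)         # equals 1 at the peak age a1
}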
meanssq <- function(x){
mean(x^2)
}
sim <- function(x){
# BRP
xx <- lh(gislasim(FLPar(linf=x$linf, k=x$k, s=x$s, v=x$v, a1=x$a1*x$a50, sr=x$sr, sl=x$sl, a50=x$a50, a=x$a, b=x$b)), sr=as.character(x$srMod), range=c(min=1, max=ceiling(x$amax*1.1), minfbar=1, maxfbar=ceiling(x$amax*1.1)-1, plusgroup=ceiling(x$amax*1.1)))
# exploitation history
Fc <- c(refpts(xx)["crash","harvest"]*0.8)
Fmsy <- c(refpts(xx)["msy","harvest"])
if(is.na(Fc)){
Fc <- c(refpts(xx)["f0.1","harvest"]*5)
Fmsy <- c(refpts(xx)["f0.1","harvest"])
}
Ftrg <- c(seq(0, Fc, len=19), rep(Fc, 20), seq(Fc, Fmsy, len=10))
trg <- fwdControl(data.frame(year=c(2:50), quantity=rep('f', 49), val=Ftrg))
stk <- as(xx, "FLStock")[,1:50]
stk <- fwd(stk, ctrl=trg, sr=list(model=SRModelName(xx@model), params=xx@params))
stk0 <- stk
# Observation error
ages <- range(stk)["min"]:min(10, range(stk)["max"])
yrs <- range(stk)["minyear"]:range(stk)["maxyear"]
range(stk)[c("maxfbar")] <- min(5, range(stk)[c("maxfbar")])
stk <- setPlusGroup(stk, max(ages))
N <- stock.n(stk)
sel <- selDn(FLPar(a1=x$a1*(x$a50+0.5), sr=x$sr, sl=x$sl), data=FLQuant(ages+0.5,dimnames=list(age=ages)))
logq <- FLQuant(log(x$oe.iq), dimnames=list(year=yrs))
logq[] <- cumsum(c(logq))
logq <- log(exp(logq[rep(1,length(ages))])*sel[,rep(1,length(yrs))]*0.01) # fixed absolute q value
index <- FLIndex(index = N * exp(logq + rnorm(prod(dim(N)), 0, x$oe.icv)), index.q=exp(logq))
dimnames(index.q(index)) <- dimnames(index(index))
index@range[c("startf", "endf")] <- c(0.01, 1)
catch.n(stk) <- catch.n(stk) * exp(rnorm(prod(dim(N)), 0, x$oe.ccv))
catch(stk) <- computeCatch(stk)
lst <- list(scn=x, brp=xx, stock=stk0,
oe1=list(stk = window(stk, 5, 20), idx = window(index, 5, 20)),
oe2=list(stk = window(stk, 15, 30), idx = window(index, 15, 30)),
oe3=list(stk = window(stk, 25, 40), idx = window(index, 25, 40)),
oe4=list(stk = window(stk, 35, 50), idx = window(index, 35, 50)),
oe5=list(stk = stk, idx = index)
)
lst
}
getMods <- function(scn, na, ny){
fmodel <- scn["fmodel"]
if(fmodel=="fm1"){
fmodel <- faymod()
} else if(fmodel=="fm2"){
fmodel <- baymod(na=na, ny=ny)
} else {
fmodel <- temod(na=na, ny=ny)
}
qmodel <- scn["qmodel"]
if(qmodel=="qm0"){
qmodel <- list(mod0())
} else if(qmodel=="qm1"){
qmodel <- list(amod())
} else if(qmodel=="qm2"){
qmodel <- list(famod())
} else if(qmodel=="qm3"){
qmodel <- list(bamod(na=na))
} else {
qmodel <- list(baymod(na=na, ny=ny))
}
rmodel <- scn["rmodel"]
if(rmodel=="rm1"){
rmodel <- fymod()
} else {
rmodel <- bymod(ny=ny)
}
list(fmodel=fmodel, qmodel=qmodel, rmodel=rmodel)
}
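# Example dispatch: the scenario codes map to concrete formulas, e.g.
# getMods(c(fmodel = "fm1", qmodel = "qm2", rmodel = "rm1"), na = 3, ny = 10)
# returns fmodel ~factor(age) + factor(year), qmodel list(~factor(age)) and
# rmodel ~factor(year).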
doFits0 <- function(x){
# build the models considering the age and year lengths
ny <- 10
na <- min(range(x$oe5$stk)["max"], 4)
# models
mods <- getMods(x$scn, na=na, ny=ny)
fmodel <- mods$fmodel
qmodel <- mods$qmodel
rmodel <- mods$rmodel
fit <- try(a4aFit(fmodel=fmodel, qmodel=qmodel, rmodel=rmodel, stock=x$oe1$stk, indices=FLIndices(idx=x$oe1$idx)))
if(inherits(fit, "try-error")) x$oe1$fit <- NA else x$oe1$fit <- fit
rm(fit)
fit <- try(a4aFit(fmodel=fmodel, qmodel=qmodel, rmodel=rmodel, stock=x$oe2$stk, indices=FLIndices(idx=x$oe2$idx)))
if(inherits(fit, "try-error")) x$oe2$fit <- NA else x$oe2$fit <- fit
rm(fit)
fit <- try(a4aFit(fmodel=fmodel, qmodel=qmodel, rmodel=rmodel, stock=x$oe3$stk, indices=FLIndices(idx=x$oe3$idx)))
if(inherits(fit, "try-error")) x$oe3$fit <- NA else x$oe3$fit <- fit
rm(fit)
fit <- try(a4aFit(fmodel=fmodel, qmodel=qmodel, rmodel=rmodel, stock=x$oe4$stk, indices=FLIndices(idx=x$oe4$idx)))
if(inherits(fit, "try-error")) x$oe4$fit <- NA else x$oe4$fit <- fit
rm(fit)
# for this last fit ny needs to change and models rebuild
ny <- 30
# models
mods <- getMods(x$scn, na=na, ny=ny)
fmodel <- mods$fmodel
qmodel <- mods$qmodel
rmodel <- mods$rmodel
fit <- try(a4aFit(fmodel=fmodel, qmodel=qmodel, rmodel=rmodel, stock=window(x$oe5$stk, 5), indices=FLIndices(idx=window(x$oe5$idx, 5))))
if(inherits(fit, "try-error")) x$oe5$fit <- NA else x$oe5$fit <- fit
rm(fit)
x
}
getStats <- function(fit, stk, ssb0, fbar0, rec0, cat0, cat0.wt, q0){
yrs <- dimnames(stock.n(fit))[[2]]
stk <- stk[,yrs]+fit
fitssb <- ssb(stk)
fitfbar <- fbar(stk)
# mse
q1 <- meanssq(fit@logq[[1]]-log(q0[,yrs]))
ssb1 <- meanssq(fitssb-ssb0[,yrs])
fbar1 <- meanssq(fitfbar-fbar0[,yrs])
rec1 <- meanssq(fit@stock.n[1]-rec0[,yrs])
cat1 <- meanssq(quantSums(exp(fit@catch.lhat) * cat0.wt[,yrs]) - cat0[,yrs])
# rbias
q2 <- mean(abs(fit@logq[[1]]/log(q0[,yrs])-1))
ssb2 <- mean(abs(fitssb/ssb0[,yrs]-1))
fbar2 <- mean(abs(fitfbar/fbar0[,yrs]-1))
rec2 <- mean(abs(fit@stock.n[1]/rec0[,yrs]-1))
cat2 <- mean(abs(quantSums(exp(fit@catch.lhat) * cat0.wt[,yrs]) / cat0[,yrs]-1))
c(fit@fit.sum, catlvar=mean(fit@catch.lvar), idxvar=mean(fit@index.lvar[[1]]), ssbrbias=ssb2, ssbmse=ssb1, fbarrbias=fbar2, fbarmse=fbar1, recrbias=rec2, recmse=rec1, catrbias=cat2, catmse=cat1, qrbias=q2, qmse=q1)
}
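# The two summary statistics used throughout getStats(), on a toy
# estimate/truth pair (commented; for reference only):
# est <- c(1, 2, 4); tru <- c(1, 2.5, 4)
# mean((est - tru)^2)       # mean squared error, cf. meanssq(est - tru)
# mean(abs(est / tru - 1))  # mean relative (absolute) bias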
getFitStats <- function(x){
cat(x$scn$id, ",")
stk <- x$stock
#stk <- x$oe5$stk
ssb0 <- ssb(stk)
# need this tweak because range in a4aFit is not correct
v <- x$oe5$stk@range[c("minfbar", "maxfbar")]
fbar0 <- quantMeans(harvest(stk)[as.character(v[1]:v[2])])
rec0 <- rec(stk)
cat0 <- catch(stk)
df0 <- data.frame(x$scn, fmsy=c(x$brp@refpts["msy","harvest"]), f0.1=c(x$brp@refpts["f0.1","harvest"]), m=c(mean(x$stock@m)), expl=NA, nopar=NA, nlogl=NA, maxgrad=NA, npar=NA, logDetHess=NA, catlvar=NA, idxlvar=NA, ssbrbias=NA, ssbmse=NA, fbarrbias=NA, fbarmse=NA, recrbias=NA, recmse=NA, catrbias=NA, catmse=NA, qrbias=NA, qmse=NA)
df0 <- df0[rep(1,5),]
df0$expl <- 1:5
vars <- c("nopar", "nlogl", "maxgrad", "npar", "logDetHess", "catlvar", "idxlvar", "ssbrbias", "ssbmse", "fbarrbias", "fbarmse", "recrbias", "recmse", "catrbias", "catmse", "qrbias","qmse")
if(!is.na(x$oe1$fit)) df0[1,vars] <- getStats(x$oe1$fit, x$oe1$stk, ssb0, fbar0, rec0, cat0, catch.wt(x$oe1$stk), index.q(x$oe1$idx))
if(!is.na(x$oe2$fit)) df0[2,vars] <- getStats(x$oe2$fit, x$oe2$stk, ssb0, fbar0, rec0, cat0, catch.wt(x$oe2$stk), index.q(x$oe2$idx))
if(!is.na(x$oe3$fit)) df0[3,vars] <- getStats(x$oe3$fit, x$oe3$stk, ssb0, fbar0, rec0, cat0, catch.wt(x$oe3$stk), index.q(x$oe3$idx))
if(!is.na(x$oe4$fit)) df0[4,vars] <- getStats(x$oe4$fit, x$oe4$stk, ssb0, fbar0, rec0, cat0, catch.wt(x$oe4$stk), index.q(x$oe4$idx))
if(!is.na(x$oe5$fit)) df0[5,vars] <- getStats(x$oe5$fit, x$oe5$stk, ssb0, fbar0, rec0, cat0, catch.wt(x$oe5$stk), index.q(x$oe5$idx))
df0
}
getSumm <- function(fit, stk, ssb0, fbar0, rec0, cat0, cat0.wt, I0, B0, C0, qFa, qFa0, qR0, expl){
yrs <- dimnames(stock.n(fit))[[2]]
stk <- stk[,yrs]+fit
stat <- c("S", "F", "R", "Y", "I", "B", "C", "qFa", "qR")
src <- c("obs", "hat")
df0 <- data.frame(expand.grid(expl=expl, y=yrs, stat=stat, src=src), val=NA)
ssb1 <- ssb(stk)
fbar1 <- fbar(stk)
rec1 <- fit@stock.n[1]
cat1 <- quantSums(exp(fit@catch.lhat) * cat0.wt[,yrs])
I1 <- quantSums(exp(fit@index.lhat[[1]]))
B1 <- quantSums(exp(fit@index.lhat[[1]]) * cat0.wt[,yrs])
C1 <- quantSums(fit@catch.n)
qFa1 <- fit@logq[[1]][qFa]
qR1 <- fit@logq[[1]][1]
df0[,"val"] <- c(ssb0[,yrs], fbar0[,yrs], rec0[,yrs], cat0[,yrs], I0[,yrs], B0[,yrs], C0[,yrs], qFa0[,yrs], qR0[,yrs], ssb1, fbar1, rec1, cat1, I1, B1, C1, qFa1, qR1)
df0
}
getFitSumm <- function(x){
stk <- x$stock
idxq <- x$oe5$idx@index.q
ssb0 <- ssb(stk)
# need this tweak because range in a4aFit is not correct
a <- dimnames(idxq)[[1]]
y <- dimnames(idxq)[[2]]
v <- x$oe5$stk@range[c("minfbar", "maxfbar")]
fbar0 <- quantMeans(harvest(stk)[as.character(v[1]:v[2])])
rec0 <- rec(stk)
cat0 <- catch(stk)
I0 <- quantSums(stock.n(stk)[a,y]*idxq)
B0 <- quantSums(stock.n(stk)[a,y]*idxq*stock.wt(stk)[a,y])
C0 <- quantSums(catch.n(stk))
qFa <- as.character(floor(x$scn["a1"] * (x$scn["a50"]+0.5)))
if(!(qFa %in% a)) qFa <- a[ceiling(length(a)/2)]
qFa0 <- log(idxq[qFa])
qR0 <- log(idxq[1])
df0 <- data.frame(expl=NA, y=NA, stat=NA, src=NA, val=NA)
if(!is.na(x$oe1$fit)) df0 <- rbind(df0, getSumm(x$oe1$fit, x$oe1$stk, ssb0, fbar0, rec0, cat0, catch.wt(x$oe1$stk), I0, B0, C0, qFa, qFa0, qR0, "1"))
if(!is.na(x$oe2$fit)) df0 <- rbind(df0, getSumm(x$oe2$fit, x$oe2$stk, ssb0, fbar0, rec0, cat0, catch.wt(x$oe2$stk), I0, B0, C0, qFa, qFa0, qR0, "2"))
if(!is.na(x$oe3$fit)) df0 <- rbind(df0, getSumm(x$oe3$fit, x$oe3$stk, ssb0, fbar0, rec0, cat0, catch.wt(x$oe3$stk), I0, B0, C0, qFa, qFa0, qR0, "3"))
if(!is.na(x$oe4$fit)) df0 <- rbind(df0, getSumm(x$oe4$fit, x$oe4$stk, ssb0, fbar0, rec0, cat0, catch.wt(x$oe4$stk), I0, B0, C0, qFa, qFa0, qR0, "4"))
if(!is.na(x$oe5$fit)) df0 <- rbind(df0, getSumm(x$oe5$fit, x$oe5$stk, ssb0, fbar0, rec0, cat0, catch.wt(x$oe5$stk), I0, B0, C0, qFa, qFa0, qR0, "5"))
attr(df0, "scn") <- x$scn
df0[-1,]
}
## HAD TO FIX A SMALL BUG ##
#### Life History Generator ####################################################
lh=function(par,
growth =vonB,
fnM =function(par,len) exp(par["M1"]+par["M2"]*log(len)),
# fnM =function(par,len,T=290,a=FLPar(c(a=-2.1104327,b=-1).7023068,c=1.5067827,d=0.9664798,e=763.5074169),iter=dims(par)$iter))
# exp(a[1]+a[2]*log(len) + a[3]*log(par["linf"]) + a[4]*log(par["k"]) + a[5]/T),
fnMat =logistic,
fnSel =dnormal,
sr ="bevholt",
range =c(min=1,max=40,minfbar=1,maxfbar=40,plusgroup=40),
spwn = 0,
fish = 0.5, # proportion of year when fishing happens
units=if("units" %in% names(attributes(par))) attributes(par)$units else NULL,
...){
# Check that m.spwn and harvest.spwn are 0 - 1
if (spwn > 1 | spwn < 0 | fish > 1 | fish < 0)
stop("spwn and fish must be in the range 0 to 1\n")
  args <- list(...) # gather optional arguments before they are first checked
                    # (previously `args` was only assigned further down, so the
                    # m.spwn/harvest.spwn checks here silently never matched)
  if (("m.spwn" %in% names(args)))
m.spwn =args[["m.spwn"]]
else
m.spwn=FLQuant(spwn, dimnames=list(age=range["min"]:range["max"]))
if (("harvest.spwn" %in% names(args)))
harvest.spwn =args[["harvest.spwn"]]
else
harvest.spwn=FLQuant(spwn, dimnames=list(age=range["min"]:range["max"]))
age=propagate(FLQuant(range["min"]:range["max"],dimnames=list(age=range["min"]:range["max"])),length(dimnames(par)$iter))
# Get the lengths through different times of the year
stocklen <- growth(par[c("linf","t0","k")],age+m.spwn) # stocklen is length at spawning time
catchlen <- growth(par[c("linf","t0","k")],age+fish) # catchlen is length when fishing happens
midyearlen <- growth(par[c("linf","t0","k")],age+0.5) # midyear length used for natural mortality
# Corresponding weights
swt=par["a"]*stocklen^par["b"]
cwt=par["a"]*catchlen^par["b"]
if ("bg" %in% dimnames(par)$param)
swt=par["a"]*stocklen^par["bg"]
args<-list(...)
m. =fnM( par=par,len=midyearlen) # natural mortality is always based on mid year length
mat. =fnMat(par,age + m.spwn) # maturity is biological therefore + m.spwn
sel. =fnSel(par,age + fish) # selectivty is fishery based therefore + fish
## create a FLBRP object to calculate expected equilibrium values and ref pts
dms=dimnames(m.)
res=FLBRP(stock.wt =swt,
landings.wt =cwt,
discards.wt =cwt,
bycatch.wt =cwt,
m =m.,
mat =FLQuant(mat., dimnames=dimnames(m.)),
landings.sel =FLQuant(sel., dimnames=dimnames(m.)),
discards.sel =FLQuant(0, dimnames=dimnames(m.)),
bycatch.harvest=FLQuant(0, dimnames=dimnames(m.)),
harvest.spwn =FLQuant(harvest.spwn, dimnames=dimnames(m.)),
m.spwn =FLQuant(m.spwn, dimnames=dimnames(m.)),
availability =FLQuant(1, dimnames=dimnames(m.)),
range =range)
## FApex
#if (!("range" %in% names(args))) range(res,c("minfbar","maxfbar"))[]<-as.numeric(dimnames(landings.sel(res)[landings.sel(res)==max(landings.sel(res))][1])$age)
## replace any slot passed in as an arg
for (slt in names(args)[names(args) %in% names(getSlots("FLBRP"))[names(getSlots("FLBRP"))!="fbar"]])
slot(res, slt)<-args[[slt]]
params(res)=propagate(params(res),dims(res)$iter)
## Stock recruitment relationship
model(res) =do.call(sr,list())$model
if (dims(par)$iter>1) {
warning("Scarab, iters dont work for SRR:sv/ab etc")
params(res)=FLPar(c(a=NA,b=NA),iter=dims(par)$iter)
for (i in seq(dims(par)$iter))
params(res)[,i][]=unlist(c(ab(par[c("s","v"),i],sr,spr0=iter(spr0(res),i))[c("a","b")]))
warning("iter(params(res),i)=ab(par[c(s,v),i],sr,spr0=iter(spr0(res),i))[c(a,b)] assignment doesnt work")
warning("iter(FLBRP,i) doesnt work")
}else
params(res)=ab(par[c("s","v")],sr,spr0=spr0(res))[c("a","b")]
dimnames(refpts(res))$refpt[5]="crash"
res=brp(res)
if ("fbar" %in% names(args))
fbar(res)<-args[["fbar"]] else
if (any((!is.nan(refpts(res)["crash","harvest"]))))
# the line below has a bug, the quant is not "age"
# which later creates problems. Fixed on the next line
#fbar(res)<-FLQuant(seq(0,1,length.out=101))*refpts(res)["crash","harvest"]
fbar(res)<-FLQuant(seq(0,1,length.out=101), quant="age")*refpts(res)["crash","harvest"]
res=brp(res)
if (!("units" %in% names(attributes(par)))) return(res)
res <- setUnits(res, par)
return(res)}
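# vonB() itself is supplied by the FLR packages loaded elsewhere; a
# plain-numeric equivalent of the standard von Bertalanffy growth curve
# (assumed parameterisation) is:
vonB_len <- function(linf, k, t0, age) linf * (1 - exp(-k * (age - t0)))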
setMethod('sv', signature(x='FLPar', model='character'),
function(x, model, spr0=NA){
a=x["a"]
b=x["b"]
s=FLPar(a=1,dimnames=dimnames(a))
v=FLPar(b=1,dimnames=dimnames(a))
spr0=FLPar(spr0,dimnames=dimnames(a))
if ("spr0" %in% dimnames(x)$params)
spr0=x["spr0"]
c=FLPar(c=1,dimnames=dimnames(a))
d=FLPar(d=1,dimnames=dimnames(a))
if (("c" %in% dimnames(x)$params)) c=x["c"]
if (("d" %in% dimnames(x)$params)) d=x["d"]
v <- v*spr2v(model, spr0, a, b, c, d)
s <- s*srr2s(model, ssb=v*.2, a=a, b=b, c=c, d=d) / srr2s(model, ssb=v, a=a, b=b, c=c, d=d)
res=rbind(s, v, spr0)
if ("c" %in% dimnames(x)$params)
res=rbind(res, c)
if ("d" %in% dimnames(x)$params)
res=rbind(res, d)
res=rbind(res, spr0)
return(res)})
abPars. <- function(x,spr0=NA,model){
s=x["s"]
v=x["v"]
if ("c" %in% names(x))
c=x["c"]
if ("d" %in% names(x))
d=x["d"]
if ("spr0" %in% names(x))
spr0=x["spr0"]
# converts a & b parameterisation into steepness & virgin biomass (s & v)
switch(model,
"bevholt" ={a=(v%+%(v%-%s%*%v)%/%(5%*%s%-%1))%/%spr0; b=(v%-%s%*%v)%/%(5%*%s%-%1)},
"bevholtSV" ={a=(v+(v-s*v)/(5*s-1))/spr0; b=(v-s*v)/(5*s-1)},
"ricker" ={b=log(5*s)/(v*0.8); a=exp(v*b)/spr0},
"rickerSV" ={b=log(5*s)/(v*0.8); a=exp(v*b)/spr0},
"cushing" ={b=log(s)/log(0.2); a=(v^(1-b))/(spr0)},
"cushingSV" ={b=log(s)/log(0.2); a=(v^(1-b))/(spr0)},
"shepherd" ={b=v*(((0.2-s)/(s*0.2^c-0.2))^-(1/c)); a=((v/b)^c+1)/spr0},
"shepherdSV"={b=v*(((0.2-s)/(s*0.2^c-0.2))^-(1/c)); a=((v/b)^c+1)/spr0},
"mean" ={a=v/spr0;b=NULL},
"meanSV" ={a=v/spr0;b=NULL},
"segreg" ={a=5*s/spr0; b=v/(a*spr0)},
"segregSV" ={a=5*s/spr0; b=v/(a*spr0)},
{stop("model name not recognized")})
res <- c(a=a, b=b)
return(res[!is.null(res)])}
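# Worked numbers for the Beverton-Holt branch above (illustrative only, spr0
# assumed known; commented for reference):
# s <- 0.75; v <- 1000; spr0 <- 0.1
# b <- (v - s * v) / (5 * s - 1)  # 250 / 2.75, about 90.9
# a <- (v + b) / spr0             # about 10909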
# setMethod('ab', signature(x='FLPar', model='character'),
# function(x, model, spr0=NA){
#
# s=x["a"]
# v=x["b"]
# a=FLPar(a=1,dimnames=dimnames(s))
# b=FLPar(b=1,dimnames=dimnames(v))
#
# if ("spr0" %in% dimnames(x)$params)
# spr0=x["spr0"] else
# spr0=FLPar(spr0,dimnames=dimnames(a))
#
# c=FLPar(c=1,dimnames=dimnames(a))
# d=FLPar(d=1,dimnames=dimnames(a))
# if (("c" %in% dimnames(x)$params)) c=x["c"]
# if (("d" %in% dimnames(x)$params)) d=x["d"]
#
# v <- v*spr2v(model, spr0, a, b, c, d)
# s <- s*srr2s(model, ssb=v*.2, a=a, b=b, c=c, d=d) / srr2s(model, ssb=v, a=a, b=b, c=c, d=d)
#
# res=rbind(s, v, spr0)
#
# if ("c" %in% dimnames(x)$params)
# res=rbind(res, c)
#
# if ("d" %in% dimnames(x)$params)
# res=rbind(res, d)
#
# res=rbind(res, spr0)
#
# return(res)})
gislasim=function(par,t0=-0.1,a=0.00001,b=3,ato95=1,sl=2,sr=5000,s=0.9,v=1000){
names(dimnames(par)) <- tolower(names(dimnames(par)))
if (!("t0" %in% dimnames(par)$params)) par=rbind(par,FLPar("t0" =t0, iter=dims(par)$iter))
if (!("a" %in% dimnames(par)$params)) par=rbind(par,FLPar("a" =a, iter=dims(par)$iter))
if (!("b" %in% dimnames(par)$params)) par=rbind(par,FLPar("b" =b, iter=dims(par)$iter))
if (!("asym" %in% dimnames(par)$params)) par=rbind(par,FLPar("asym" =1, iter=dims(par)$iter))
if (!("bg" %in% dimnames(par)$params)) par=rbind(par,FLPar("bg" =b, iter=dims(par)$iter))
if (!("sl" %in% dimnames(par)$params)) par=rbind(par,FLPar("sl" =sl, iter=dims(par)$iter))
if (!("sr" %in% dimnames(par)$params)) par=rbind(par,FLPar("sr" =sr, iter=dims(par)$iter))
if (!("s" %in% dimnames(par)$params)) par=rbind(par,FLPar("s" =s, iter=dims(par)$iter))
if (!("v" %in% dimnames(par)$params)) par=rbind(par,FLPar("v" =v, iter=dims(par)$iter))
## growth parameters
if (!("k" %in% dimnames(par)$params)) par=rbind(par,FLPar("k"=3.15*par["linf"]^(-0.64), iter=dims(par)$iter)) # From Gislason et al 2008, all species combined
# Natural mortality parameters from Model 2, Table 1 Gislason 2010
par=rbind(par,FLPar(M1=0.55+1.44*log(par["linf"])+log(par["k"]), iter=dims(par)$iter),
FLPar(M2=-1.61 , iter=dims(par)$iter))
if (!("ato95" %in% dimnames(par)$params)) par=rbind(par,FLPar("ato95" =ato95, iter=dims(par)$iter))
if (!("sl" %in% dimnames(par)$params)) par=rbind(par,FLPar("sl" =sl, iter=dims(par)$iter))
if (!("sr" %in% dimnames(par)$params)) par=rbind(par,FLPar("sr" =sr, iter=dims(par)$iter))
## maturity parameters from http://www.fishbase.org/manual/FishbaseThe_MATURITY_Table.htm
if (!("asym" %in% dimnames(par)$params)) par=rbind(par,FLPar("asym" =asym, iter=dims(par)$iter))
if (!("a50" %in% dimnames(par)$params)){
par=rbind(par,FLPar(a50=0.72*par["linf"]^0.93, iter=dims(par)$iter))
par["a50"]=invVonB(par,c(par["a50"]))
}
## selectivity guestimate
a1=par["a50"]
dimnames(a1)$params="a1"
par=rbind(par,a1)
attributes(par)$units=c("cm","kg","1000s")
return(par)}
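A quick sanity check of the life-history relationships encoded above — `linf = 100` is an arbitrary illustrative value, not from the original script:

```r
# Worked example of the Gislason-based derivations used in gislasim()
linf <- 100
k  <- 3.15 * linf^(-0.64)               # growth: Gislason et al. 2008, all species combined
M1 <- 0.55 + 1.44 * log(linf) + log(k)  # natural mortality: Model 2, Table 1, Gislason 2010
M2 <- -1.61
c(k = k, M1 = M1, M2 = M2)
```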
setUnits=function(res, par){
units=attributes(par)$units
#browser()
allUnits=list("params"= "",
"refpts"= "",
"fbar"= "",
"fbar.obs"= "",
"landings.obs"= paste(units[2],units[3]),
"discards.obs"= paste(units[2],units[3]),
"rec.obs"= units[3],
"ssb.obs"= paste(units[2],units[3]),
"stock.obs"= paste(units[2],units[3]),
"profit.obs"= "",
# "revenue.obs"= "",
"landings.sel"= "",
"discards.sel"= "",
"bycatch.harvest"="",
"stock.wt"= units[2],
"landings.wt"= units[2],
"discards.wt"= units[2],
"bycatch.wt"= units[2],
"m"= "",
"mat"= "proportion",
"harvest.spwn"= "proportion",
"m.spwn"= "proportion",
"availability"= "proportion",
"price"= "",
"vcost"= "",
"fcost"= "")
units(res)[names(allUnits)]=allUnits
return(res)}
|
e3ed7ea3c9a2bc2ed4a89f769fb78b3382727d04 | 3ea16585f2726641cc80d2b7df871444d1eff50f | /neural_network.R | 3a71f31ac6a6ef097e39d7f4aa891c3e156ccd6c | [] | no_license | billzichos/Deal-Probability-Predictor | 352409e3c2bddc2bc7f6f6b4bec249b42c5729ac | ed1f7a5dd6e859c013d893935bc94947fd655734 | refs/heads/master | 2021-07-15T12:41:52.605193 | 2016-10-30T14:05:42 | 2016-10-30T14:05:42 | 35,170,982 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 1,139 | r | neural_network.R | #install.packages("neuralnet")
library("neuralnet")
privateDir <- "C:\\Users\\Bill\\Documents\\Private-Github-Files"
setwd(privateDir)
Deals <- "Deal.csv"
col.Deals <- c("Deal.Number", "Snapshot.Date", "Agreement", "Deal.Type", "Primary.Leasing.Agent", "Building", "Deal.SF", "Lease.Term.Months", "No.Days.Since.Creation", "Current.Phase.No", "No.Days.in.Current.Phase", "Proposed.NER.Compared.To.Budget.NER", "State", "ZIP", "Submarket", "Building.Type", "Customer", "Government.Indicator", "AM.Probability", "NER", "Closed.As.Won")
df.deals <- read.csv(Deals, header = TRUE, as.is = FALSE, col.names = col.Deals)
df.deals$Snapshot.Date <- as.Date(df.deals$Snapshot.Date, "%Y-%m-%d")
df.deals$Deal.Number <- as.character(df.deals$Deal.Number)
set.seed(10)
randomVector <- sample(1:nrow(df.deals), 0.70*nrow(df.deals), replace = FALSE)
df.deals.train <- df.deals[randomVector,]
df.deals.test <- df.deals[-randomVector,]
df.deals.test <- df.deals.test[df.deals.test$Agreement!="Termination,Negotiated",]
nn <- neuralnet(Closed.As.Won ~ No.Days.Since.Creation, hidden = c(5,3), data = df.deals.train, linear.output = T)
plot(nn) |
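The script above carves out df.deals.test but never scores it. A minimal evaluation sketch — assuming `Closed.As.Won` is coded 0/1 and using the same single predictor the network was trained on (`compute()` is neuralnet's prediction interface):

```r
# Hypothetical evaluation of the trained network on the held-out deals.
# Assumes nn and df.deals.test exist as created above.
pred <- compute(nn, df.deals.test[, "No.Days.Since.Creation", drop = FALSE])
predicted.class <- ifelse(pred$net.result > 0.5, 1, 0)
# Simple accuracy against the observed outcome
mean(predicted.class == df.deals.test$Closed.As.Won, na.rm = TRUE)
```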
738c912163877e72024ce87ee6525969c410154e | a9bcba10609f9dd18e9a29a26b1e5485b6d4803f | /plot1.R | bb1316d53b46c9ac3a140fe57ac69a01c597d7d1 | [] | no_license | JordiOvereem/ExData_Plotting1 | c859a02a7d96c4438c412c17f20ec600803970f7 | f77b3ad0d3933aaf4fe86af952f523ad6f6c1386 | refs/heads/master | 2021-01-18T10:49:44.340125 | 2016-09-13T10:49:55 | 2016-09-13T10:49:55 | 68,080,047 | 0 | 0 | null | 2016-09-13T06:13:44 | 2016-09-13T06:13:44 | null | UTF-8 | R | false | false | 804 | r | plot1.R | ## Course Project 1 - Exploratory Data Analysis by Jordi Overeem
## This code produces plot1.png.
# Load data
data <- read.table("household_power_consumption.txt", header = TRUE, sep=";",stringsAsFactors = FALSE, dec=".")
# Convert and subset data
data$Date <- as.Date(data$Date, format = "%d/%m/%Y")
subset_data <- subset(data, subset = (Date >= "2007-02-01" & Date <= "2007-02-02"))
subset_data$datetime <- strptime(paste(subset_data$Date, subset_data$Time), "%Y-%m-%d %H:%M:%S")
GAP <- as.numeric(subset_data$Global_active_power)
# Create and save plot 1
par(mfrow = c(1,1))
attach(subset_data)
hist(GAP, main = "Global Active Power", xlab = "Global Active Power (kilowatts)", col = "Red")
dev.copy(png, file = "plot1.png", height = 480, width = 480)
dev.off()
detach(subset_data) |
b4340c7ec9acdc748ec2925e21bb438635712a06 | 968d2abde9432110a4e657d48ed9838aba2956e2 | /plotInfectionAnalysis.R | d5b813fb6017af4f835ade1a194335689e21909b | [] | no_license | levashinalab/Larva-Development | 3f1033e57d5e92534735db4cb49ba06d0d06c5aa | 8f3a4db71b8648232d69217d14b5db1f37f7ee84 | refs/heads/master | 2020-03-22T09:23:38.536941 | 2019-06-03T08:05:49 | 2019-06-03T08:05:49 | 139,834,106 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,949 | r | plotInfectionAnalysis.R | #remove old values to avoid trouble
rm(list=ls())
#load the necessary libraries and files. You might need to install the packages first
require(plyr) #necessary for data processing
require(gdata) #necessary to read xls files
library(ggplot2)
source('~/Documents/Projects/Malaria/modeling/ABM/src/RScripts/ggplotThemes.R')
TYPE<- "infection"
IN_DIR<-paste('/Volumes/abt.levashina/Project Development_AW_PCB_PS/rawData/', TYPE, '/', sep = "")
IN_FILE<-paste("/Volumes/abt.levashina/Project Development_AW_PCB_PS/analysis/",TYPE,"/", Sys.Date(),"_infection_oocysts.csv",sep = "")
IN_FILE_S<-paste("/Volumes/abt.levashina/Project Development_AW_PCB_PS/analysis/",TYPE,"/", Sys.Date(),"_infection_spz.csv",sep = "")
FILE<-paste(IN_DIR, "Infection.xlsx", sep = "")
df.ooc<-read.xls(FILE, sheet = 1)
#df.spz<-read.xls(FILE, sheet = 2)
OUT_FILE<-paste("/Volumes/abt.levashina/Project Development_AW_PCB_PS/figures/", TYPE, "/",Sys.Date(),"_oocysts.pdf",sep = "")
summary_oocysts<-read.csv(IN_FILE, sep = ",")
summary_spz<-read.csv(IN_FILE_S, sep = ",")
ind_plot<-ggplot(df.ooc, aes(x = as.factor(Density), y = Oocyst, group = as.factor(Density), colour = Infection)) +geom_boxplot() + facet_wrap(~Infection) + geom_point(size = 2) +basic_theme +scale_color_manual(values=c("red", "orange", "blue")) +ylab("Number of oocysts") +xlab("Density")
md_plot<-ggplot(summary_oocysts, aes(x = as.factor(Density), y = ooc_median, group = as.factor(Density), colour = Infection)) +geom_point(size = 5) + basic_theme+scale_color_manual(values=c("red", "orange", "blue")) +ylab("Number of oocysts") +xlab("Density") + stat_summary(fun.y = mean, fun.ymin = mean, fun.ymax = mean,geom = "crossbar", width = 0.5)+ylim(c(0,100))
md_short_plot<-ggplot(summary_oocysts, aes(x = as.factor(Density), y = ooc_median_short, group = as.factor(Density), colour = Infection)) +geom_point(size = 5) + basic_theme+scale_color_manual(values=c("red", "orange", "blue")) +ylab("Number of oocysts") +xlab("Density") + stat_summary(fun.y = mean, fun.ymin = mean, fun.ymax = mean,geom = "crossbar", width = 0.5)+ylim(c(0,100))
prev_plot<-ggplot(summary_oocysts, aes(x = as.factor(Density), y = prev*100, group = as.factor(Density), colour = Infection)) +geom_point(size = 5) + basic_theme+scale_color_manual(values=c("red", "orange", "blue")) +ylab("Prevalence %") +xlab("Density")+ stat_summary(fun.y = mean, fun.ymin = mean, fun.ymax = mean,geom = "crossbar", width = 0.5) +ylim(c(0,100))
plot_spz<-ggplot(summary_spz, aes(x = as.factor(Density), y = mean_sporo, group = Density, colour = Infection)) +scale_y_log10()+geom_point(size = 5) + basic_theme+scale_color_manual(values=c("red", "orange", "blue")) +ylab("Sporozoites") +xlab("Density")+ stat_summary(fun.y = mean, fun.ymin = mean, fun.ymax = mean,geom = "crossbar", width = 0.5)
pdf(OUT_FILE, width = 9, height = 9)
ind_plot
md_plot
gridExtra::grid.arrange(prev_plot,md_short_plot, nrow = 2)
plot_spz
dev.off()
|
3a757eb77d54e9ccaa3646a2881c459f76796b9a | 9598c94fe076830bfece6e366b87cff83c4c66b6 | /trade spillovers/Trade spillovers ES.R | e45d2a54166ca14134fbf23aff235520ec291376 | [] | no_license | mdg9709/spilloversNL | 1b8d836ad4f5d1145f6cc15345745358d5f15b24 | 04388fe2bfcf764ab4a256b456db3de00ba824f9 | refs/heads/master | 2022-11-18T22:37:24.704113 | 2020-07-17T17:07:07 | 2020-07-17T17:07:07 | 280,177,983 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 8,073 | r | Trade spillovers ES.R | # --- Spillovers via trade channel
source('~/Studie/MSc ECO/Period 5-6 MSc thesis/MSc thesis RStudio project/Scripts/Destination and origin spillovers ES.R')
source('~/Studie/MSc ECO/Period 5-6 MSc thesis/MSc thesis RStudio project/Scripts/Spillovers IT and ES v4 1.R')
source('~/Studie/MSc ECO/Period 5-6 MSc thesis/MSc thesis RStudio project/Scripts/Spillovers FR and ES v4 1.R')
source('~/Studie/MSc ECO/Period 5-6 MSc thesis/MSc thesis RStudio project/Scripts/Spillovers ES and DE v4 1.R')
source('~/Studie/MSc ECO/Period 5-6 MSc thesis/MSc thesis RStudio project/Scripts/Spillovers NL and ES v4 1.R')
# --- Effect of domestic shock on own imports
rmES.l <- data.frame(shift(data2$rmES, n = 1:12, type = "lead"))
names(rmES.l) = c("rmES.l1", "rmES.l2", "rmES.l3", "rmES.l4", "rmES.l5", "rmES.l6",
"rmES.l7", "rmES.l8", "rmES.l9", "rmES.l10", "rmES.l11", "rmES.l12")
rxES.l <- data.frame(shift(data2$rxES, n = 1:12, type = "lead"))
names(rxES.l) = c("rxES.l1", "rxES.l2", "rxES.l3", "rxES.l4", "rxES.l5", "rxES.l6",
"rxES.l7", "rxES.l8", "rxES.l9", "rxES.l10", "rxES.l11", "rxES.l12")
l.rmES <- data.frame(shift(data2$rmES, n = 1:4, type = "lag"))
names(l.rmES) = c("l1.rmES", "l2.rmES", "l3.rmES", "l4.rmES")
l.rxES <- data.frame(shift(data2$rxES, n = 1:4, type = "lag"))
names(l.rxES) = c("l1.rxES", "l2.rxES", "l3.rxES", "l4.rxES")
data3$shockES2 <- (data3$shockES / unlist(l.rmES[1])) / sd((data3$shockES / unlist(l.rmES[1])), na.rm = TRUE)
data3$shockES3 <- data3$shockES2 / 100
data4 <- cbind(data3, l.rmES, l.rxES, rmES.l, rxES.l)
data4 <- subset(data4, select = -c(30:32, 35:37, 152:203))
h <- 12
# -- Equation 5
lhsES50 <- (data4$rmES - data4$l1.rmES) / data4$l1.rmES
lhsES5 <- lapply(1:h, function(x) (data4[, 153+x] - data4$l1.rmES) / data4$l1.rmES)
lhsES5 <- data.frame(lhsES5)
names(lhsES5) = paste("lhsES5", 1:h, sep = "")
data4 <- cbind(data4, lhsES50, lhsES5)
ES5 <- lapply(1:13, function(x) lm(data4[, 177+x] ~ shockES2 + l1.debtES + l1.intES + l1.lrtrES + l1.lrgES + l1.lryESc + l2.debtES + l2.intES + l2.lrtrES + l2.lrgES + l2.lryESc + l3.debtES + l3.intES + l3.lrtrES + l3.lrgES + l3.lryESc + l4.debtES + l4.intES + l4.lrtrES + l4.lrgES + l4.lryESc, data = data4))
summariesES5 <- lapply(ES5, summary)
ES5conf95 <- lapply(ES5, coefci, vcov = NeweyWest, lag = 0, prewhite = FALSE, level = 0.95)
ES5conf68 <- lapply(ES5, coefci, vcov = NeweyWest, lag = 0, prewhite = FALSE, level = 0.68)
ES5up95 <- lapply(1:13, function(x) ES5conf95[[x]][2,2])
ES5low95 <- lapply(1:13, function(x) ES5conf95[[x]][2,1])
ES5up68 <- lapply(1:13, function(x) ES5conf68[[x]][2,2])
ES5low68 <- lapply(1:13, function(x) ES5conf68[[x]][2,1])
betaES <- lapply(summariesES5, function(x) x$coefficients[2,1])
names(betaES) <- paste("betaES", 0:h, sep = "")
# -- Equation 6
lhsES60 <- (data4$rgES - data4$l1.rgES) / data4$l1.rmES
lhsES6 <- lapply(1:h, function(x) (data4[, 84+x] - data4$l1.rgES) / data4$l1.rmES)
lhsES6 <- data.frame(lhsES6)
names(lhsES6) = paste("lhsES6", 1:h, sep = "")
data4 <- cbind(data4, lhsES60, lhsES6)
ES6 <- lapply(1:13, function(x) lm(data4[, 190+x] ~ shockES3 + l1.debtES + l1.intES + l1.lrtrES + l1.lrgES + l1.lryESc + l2.debtES + l2.intES + l2.lrtrES + l2.lrgES + l2.lryESc + l3.debtES + l3.intES + l3.lrtrES + l3.lrgES + l3.lryESc + l4.debtES + l4.intES + l4.lrtrES + l4.lrgES + l4.lryESc, data = data4))
summariesES6 <- lapply(ES6, summary)
ES6conf95 <- lapply(ES6, coefci, vcov = NeweyWest, lag = 0, prewhite = FALSE, level = 0.95)
ES6conf68 <- lapply(ES6, coefci, vcov = NeweyWest, lag = 0, prewhite = FALSE, level = 0.68)
ES6up95 <- lapply(1:13, function(x) ES6conf95[[x]][2,2])
ES6low95 <- lapply(1:13, function(x) ES6conf95[[x]][2,1])
ES6up68 <- lapply(1:13, function(x) ES6conf68[[x]][2,2])
ES6low68 <- lapply(1:13, function(x) ES6conf68[[x]][2,1])
gammaES <- lapply(summariesES6, function(x) x$coefficients[2,1])
names(gammaES) <- paste("gammaES", 0:h, sep = "")
# -- Domestic cumulative multipliers ES
mESc <- lapply(1:13, function(x) cumsum(betaES)[x] / cumsum(gammaES)[x])
mESc1 <- as.numeric(mESc[1]); print(mESc1)
mESc4 <- as.numeric(mESc[5]); print(mESc4)
mESc8 <- as.numeric(mESc[9]); print(mESc8)
mESc12 <- as.numeric(mESc[13]); print(mESc12)
# -- Generate IRF graph (as in Fig. 4 of Alloza et al.)
v1 <- data.frame(cbind(betaES = unlist(betaES), ES5up95 = unlist(ES5up95), ES5low95 = unlist(ES5low95),
ES5up68 = unlist(ES5up68), ES5low68 = unlist(ES5low68)))
quarter <- data.frame(0:12)
df.v1 <- cbind(quarter, v1)
colnames(df.v1) <- c("quarters", "percent", "up95", "low95", "up68", "low68")
irfESm <- ggplot(df.v1, aes(x = quarters, y = percent)) +
geom_hline(yintercept = 0, color = "black", size = 0.5, linetype = "dashed") + geom_line()
irfESm <- irfESm + geom_ribbon(aes(ymin = low95, ymax = up95), linetype=2, alpha=0.1) +
geom_ribbon(aes(ymin = low68, ymax = up68), linetype=2, alpha=0.1)
irfESm1 <- irfESm + coord_cartesian(xlim = c(0, 12)) +
scale_x_continuous(breaks = seq(0, 12, 1)) +
scale_y_continuous(breaks = seq(-0.01, 0.015, 0.005)) +
ggtitle("ES - IMP change (GOV shock in ES)")
irfESm2 <- irfESm1 + theme(plot.background = element_rect(fill = "white", color = "white"),
panel.background = element_rect(fill = "white"),
panel.border = element_rect(linetype = "solid", fill = NA))
irfESm2
# --- Spillovers by destination via exports
source('~/Studie/MSc ECO/Period 5-6 MSc thesis/MSc thesis RStudio project/Scripts/Trade spillovers ES and DE v3 1.R')
source('~/Studie/MSc ECO/Period 5-6 MSc thesis/MSc thesis RStudio project/Scripts/Trade spillovers NL and ES v3 1.R')
source('~/Studie/MSc ECO/Period 5-6 MSc thesis/MSc thesis RStudio project/Scripts/Trade spillovers IT and ES v3 1.R')
source('~/Studie/MSc ECO/Period 5-6 MSc thesis/MSc thesis RStudio project/Scripts/Trade spillovers FR and ES v3 1.R')
library(lmtest)
library(purrr)
library(ggplot2)
library(gridExtra)
library(dplyr)
library(data.table)
mESdt <- 0
gamma1ES <- 0
gamma2ES <- 0
ESspillDt <- 0
error95dt <- 0
up95dt <- 0
low95dt <- 0
error68dt <- 0
up68dt <- 0
low68dt <- 0
for (i in 1:13) {
mESdt[i] = sum(mDEEStc[i], mFREStc[i], mNLEStc[i], mITEStc[i])
gamma1ES[i] = sum(gammaDEES[[i]], gammaNLES[[i]], gammaFRES[[i]], gammaITES[[i]])
gamma2ES[i] = sum(gammaESDE[[i]], gammaESNL[[i]], gammaESFR[[i]], gammaESIT[[i]])
ESspillDt[i] = mESdt[i] * (gamma1ES[i] / gamma2ES[i])
}
mESdt
ESspillDt
for (i in 1:13) {
error95dt[i] = qnorm(0.975) * sd(ESspillDt) / sqrt(12)
up95dt[i] = mean(ESspillDt[i]) + error95dt[i]
low95dt[i] = mean(ESspillDt[i]) - error95dt[i]
error68dt[i] = qnorm(0.84) * sd(ESspillDt) / sqrt(12)
up68dt[i] = mean(ESspillDt[i]) + error68dt[i]
low68dt[i] = mean(ESspillDt[i]) - error68dt[i]
}
# -- Cumulative multipliers
mESdtc1 <- cumsum(mESdt)[1]; print(mESdtc1)
mESdtc4 <- cumsum(mESdt)[5]; print(mESdtc4)
mESdtc8 <- cumsum(mESdt)[9]; print(mESdtc8)
mESdtc12 <- cumsum(mESdt)[13]; print(mESdtc12)
# --- Spillovers by origin via exports
mESot <- 0
wES <- 0
ESspillOt <- 0
error95ot <- 0
up95ot <- 0
low95ot <- 0
error68ot <- 0
up68ot <- 0
low68ot <- 0
for (i in 1:13) {
mESot[i] = sum(mESDEtc[i], mESFRtc[i], mESNLtc[i], mESITtc[i])
wES[i] = sum(ES$Y) / sum(DE$Y, FR$Y, NL$Y, IT$Y, na.rm = TRUE)
ESspillOt[i] = mESot[i] * wES[i]
}
mESot
ESspillOt
for (i in 1:13) {
error95ot[i] = qnorm(0.975) * sd(ESspillOt) / sqrt(12)
up95ot[i] = mean(ESspillOt[i]) + error95ot[i]
low95ot[i] = mean(ESspillOt[i]) - error95ot[i]
error68ot[i] = qnorm(0.84) * sd(ESspillOt) / sqrt(12)
up68ot[i] = mean(ESspillOt[i]) + error68ot[i]
low68ot[i] = mean(ESspillOt[i]) - error68ot[i]
}
# -- Cumulative multipliers
mESotc1 <- cumsum(mESot)[1]; print(mESotc1)
mESotc4 <- cumsum(mESot)[5]; print(mESotc4)
mESotc8 <- cumsum(mESot)[9]; print(mESotc8)
mESotc12 <- cumsum(mESot)[13]; print(mESotc12)
|
e835cee30945facbcf2b53c63257d27c5ea9a58d | c2ca52ff213c784ebb7e3e6f45ed9d148fb089cd | /BA/Homework/HW02/3fgl.R | dcf122a99d83f3518042d110e19da016fd243b9c | [
"MIT"
] | permissive | jeklen/notes | 93d3091ea2a14cf244004fe1a3a539018ae671d2 | 700498ce6577f83707c8d497ddef4b673b190e2a | refs/heads/master | 2021-06-12T09:36:41.024345 | 2017-01-22T02:13:22 | 2017-01-22T02:13:22 | 71,848,578 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,452 | r | 3fgl.R | library(class) ## needed for KNN
setwd("D:/BA/Homework/HW02")
fgl <- read.csv("fgl.csv",header = TRUE,sep=",")
summary(fgl)
# Chemical composition predictors: Na, Mg, Al, Si, K, Ca, Ba, Fe
fgl[,"Na"] <- factor(fgl[,"Na"])
fgl[,"Mg"] <- factor(fgl[,"Mg"])
fgl[,"Al"] <- factor(fgl[,"Al"])
fgl[,"Si"] <- factor(fgl[,"Si"])
fgl[,"K"] <- factor(fgl[,"K"])
fgl[,"Ca"] <- factor(fgl[,"Ca"])
fgl[,"Ba"] <- factor(fgl[,"Ba"])
fgl[,"Fe"] <- factor(fgl[,"Fe"])
fgl[,"type"] <- factor(fgl[,"type"])
Predictors <- fgl[,c("Na","Mg","Al","Si","K","Ca","Ba","Fe")]
model <- train(
Predictors, fgl[,"type"],
method='knn',
  #Above, the predictors are listed first, then the outcome variable, then the KNN method is specified.
tuneGrid = data.frame(k=1:8),
  #The tuning parameter k of the KNN model ranges from 1 to 8.
metric='Accuracy',
  #The evaluation metric is Accuracy.
trControl=trainControl(
method='repeatedcv',
number=5,
repeats=20) )
#trControl is the function that controls the training process. Here method='repeatedcv' means
#repeated cross-validation. number=5 requests 5-fold cross-validation: the data set is split
#into 5 folds, then 5 rounds of training and validation are run, each round taking one fold
#(1/5 of the data) as the validation set and the rest as the training set. repeats=20 means
#this whole process is repeated 20 times, i.e. 100 training-validation rounds in total.
#Finally the average of the evaluation metric (here Accuracy) is computed.
model
plot(model)
confusionMatrix(model)
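Because the reported accuracy is an average over the 100 resamples described above, it can help to inspect the spread directly; caret stores the per-resample results for the selected k:

```r
# Per-resample accuracies behind the averaged metric (one row per fold x repeat)
head(model$resample)
sd(model$resample$Accuracy)
```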
|
149014f45395774c784dbeb9612cd03494e1b4f4 | 50272cf92b22f6436abd5e18a31a45f4fe9efc01 | /man/bootify.Rd | 22244bcace759e691ff8de1d9ccd2ff6bfca270f | [] | no_license | mattdneal/gaussianProcess | 906154584b3c84184f0d9afd2df8865c286d4718 | 9e740bf0948569a8ea24d0311a1cf0ea83544533 | refs/heads/master | 2021-01-10T07:28:42.254970 | 2019-01-20T00:06:34 | 2019-01-20T00:09:04 | 50,181,064 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 374 | rd | bootify.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.R
\name{bootify}
\alias{bootify}
\title{Bootify a Statistic}
\usage{
bootify(statistic)
}
\arguments{
\item{statistic}{statistic to bootify}
}
\value{
an R function
}
\description{
Takes a statistic and returns a wrapper of that statistic suitable for use
with \code{\link[boot]{boot}}.
}
|
c8d4de9a6fed8cc6cde4255cc95b518dfebc814f | 41e4aca43cbd980c72209843c53e0293eaff953f | /man/str_replace_all.Rd | cd8662fa27416515aa0abe5b3df65bb138028706 | [] | no_license | kohske/stringr | 4f6bcfa93d9d8169319b1286129ccc1415ebd391 | f64767cee5837ce07f746d952f17b1ddad0f846c | refs/heads/master | 2020-12-24T11:38:04.960071 | 2010-08-24T13:49:22 | 2010-08-24T13:49:22 | 1,215,422 | 3 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,302 | rd | str_replace_all.Rd | \name{str_replace_all}
\alias{str_replace_all}
\title{Replace all occurrences of a matched pattern in a string.}
\usage{str_replace_all(string, pattern, replacement)}
\description{
Replace all occurrences of a matched pattern in a string.
}
\details{
Vectorised over \code{string}, \code{pattern} and \code{replacement}.
Shorter arguments will be expanded to length of longest.
}
\value{character vector.}
\keyword{character}
\seealso{\code{\link{gsub}} which this function wraps,
  \code{\link{str_replace}} to replace just the first match}
\arguments{
\item{string}{input character vector}
\item{pattern}{pattern to look for, as defined by a POSIX regular
expression. See the ``Extended Regular Expressions'' section of
\code{\link{regex}} for details.}
  \item{replacement}{replacement string. References of the form \code{\\1},
    \code{\\2} will be replaced with the contents of the respective matched
group (created by \code{()}) within the pattern.}
}
\examples{fruits <- c("one apple", "two pears", "three bananas")
str_replace(fruits, "[aeiou]", "-")
str_replace_all(fruits, "[aeiou]", "-")
str_replace_all(fruits, "([aeiou])", "")
str_replace_all(fruits, "([aeiou])", "\\\\1\\\\1")
str_replace_all(fruits, "[aeiou]", c("1", "2", "3"))
str_replace_all(fruits, c("a", "e", "i"), "-")}
|
036f57b62fb7cc45a0c2f6ab7fb5463598db7e38 | 364b442fe6d6ee08c1150f1eabbc850a81d3a5e8 | /R/utils.R | bcc5aea739f3b963e0784e53f0cd3cc5672c5600 | [] | no_license | pherephobia/mlr | 5ad480184cb9e27c4f25be547d5a36413624c390 | e335540cd7e0e4533c902a0c721532cf10b5fff8 | refs/heads/master | 2020-07-14T05:50:15.364056 | 2016-10-25T02:26:18 | 2016-10-25T02:26:18 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,195 | r | utils.R | # get one el from each row of a matrix, given indices or col names (factors for colnames are converted to characters)
getRowEls = function(mat, inds) {
if (is.factor(inds))
inds = as.character(inds)
if (is.character(inds))
inds = match(inds, colnames(mat))
inds = cbind(seq_row(mat), inds)
mat[inds]
}
# get one el from each col of a matrix, given indices or row names
getColEls = function(mat, inds) {
getRowEls(t(mat), inds)
}
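A small illustration of the two helpers above (they are internal to mlr, so this assumes the package's development environment, where `seq_row()` from BBmisc is available):

```r
library(BBmisc)  # provides seq_row(), which getRowEls() relies on
mat <- matrix(1:6, nrow = 2, dimnames = list(NULL, c("a", "b", "c")))
getRowEls(mat, c("a", "c"))   # row 1 -> col "a", row 2 -> col "c": 1 6
getColEls(mat, c(2, 1, 2))    # col 1 -> row 2, col 2 -> row 1, col 3 -> row 2: 2 3 6
```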
# prints more meaningful 'head' output indicating that there is more output
printHead = function(x, n = 6L, ...) {
print(head(x, n = n, ...))
if (nrow(x) > n)
catf("... (%i rows, %i cols)\n", nrow(x), ncol(x))
}
# Do fuzzy string matching between input and a set of valid inputs
# and return the most similar valid inputs.
getNameProposals = function(input, possible.inputs, nproposals = 3L) {
assertString(input)
assertCharacter(possible.inputs)
assertInt(nproposals, lower = 1L)
# compute the approximate string distance (using the generalized Levenshtein / edit distance)
# and get the nproposals most similar valid inputs.
indices = order(adist(input, possible.inputs))[1:nproposals]
possibles = na.omit(possible.inputs[indices])
return(possibles)
}
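A brief usage sketch (the measure names are hypothetical examples; the checkmate package supplying the assertions above must be loaded):

```r
# Suggest the valid names closest (by generalized edit distance) to a misspelled input
getNameProposals("acuracy", c("accuracy", "auc", "logloss", "rmse"), nproposals = 2)
```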
# generates a grid for a vector of features and returns a list
# expand.grid can be applied to this to find all possible combinations of the features
generateFeatureGrid = function(features, data, resample, gridsize, fmin, fmax) {
sapply(features, function(feature) {
nunique = length(unique(data[[feature]]))
cutoff = ifelse(gridsize >= nunique, nunique, gridsize)
if (is.factor(data[[feature]])) {
factor(rep(levels(data[[feature]]), length.out = cutoff),
levels = levels(data[[feature]]), ordered = is.ordered(data[[feature]]))
} else {
if (resample != "none") {
sort(sample(data[[feature]], cutoff, resample == "bootstrap"))
} else {
if (is.integer(data[[feature]]))
sort(rep(fmin[[feature]]:fmax[[feature]], length.out = cutoff))
else
seq(fmin[[feature]], fmax[[feature]], length.out = cutoff)
}
}
}, simplify = FALSE)
}
|
98b88ab7e74718daacaa30790b4966331549cae7 | 883a4a0c1eae84485e1d38e1635fcae6ecca1772 | /nCompiler/R/NF_CompilerClass.R | ac2c01af285882dec78723cbd92afa1af3ec0d36 | [
"BSD-3-Clause"
] | permissive | nimble-dev/nCompiler | 6d3a64d55d1ee3df07775e156064bb9b3d2e7df2 | 392aabaf28806827c7aa7b0b47f535456878bd69 | refs/heads/master | 2022-10-28T13:58:45.873095 | 2022-10-05T20:14:58 | 2022-10-05T20:14:58 | 174,240,931 | 56 | 7 | BSD-3-Clause | 2022-05-07T00:25:21 | 2019-03-07T00:15:35 | R | UTF-8 | R | false | false | 14,468 | r | NF_CompilerClass.R | nFunctionIDMaker <- labelFunctionCreator('NFID')
NFvirtual_CompilerClass <- R6::R6Class(
classname = 'NFvirtual_CompilerClass',
portable = FALSE,
public = list(
name = NULL,
origName = NULL,
NFinternals = NULL,
stageCompleted = 'start',
nameSubList = NULL,
origRcode = NULL,
newRcode = NULL,
symbolTable = NULL,
returnSymbol = NULL,
code = NULL,
auxEnv = new.env(),
##... to here
const = NULL,
needed_nFunctions = list(), #Each list element will be a list with (name, env), so that nGet(name, env) returns the nFunction
needed_nClasses = list(), #Each list element will be an NCgenerator (returned by nClass). Populated only from "$new()" usags.
initialTypeInferenceDone = FALSE,
initialize = function(f = NULL,
## funName,
const = FALSE,
useUniqueNameInCpp = FALSE) {
const <<- const
if(!is.null(f)) {
isNFinternals <- inherits(f, 'NF_InternalsClass')
if(!(isNF(f) || isNFinternals)) {
stop('Attempt to compile something is neither an nFunction nor an object of class NF_InternalsClass')
}
if(isNFinternals) {
NFinternals <<- f
} else {
NFinternals <<- NFinternals(f)
}
origName <<- NFinternals$uniqueName
if (useUniqueNameInCpp) name <<- NFinternals$uniqueName
else name <<- NFinternals$cpp_code_name
origRcode <<- NFinternals$code
newRcode <<- NFinternals$code
}
},
showCpp = function() {
writeCode(
compile_generateCpp(code, symbolTable)
)
},
setupSymbolTable = function(parentSymbolTable = NULL) {
argNames <- NFinternals$argSymTab$getSymbolNames()
symbolTable <<- NFinternals$argSymTab$clone(deep = TRUE)
mangledArgumentNames <- mangleArgumentNames( argNames )
symbolTable$setSymbolNames(mangledArgumentNames)
nameSubList <<- lapply(mangledArgumentNames,
as.name)
names(nameSubList) <<- argNames
if(!is.null(parentSymbolTable)) {
symbolTable$setParentST(parentSymbolTable)
}
returnSymbol <<- NFinternals$returnSym$clone(deep = TRUE)
},
process = function(...) {
if(is.null(symbolTable)) {
setupSymbolTable()
}
}
)
)
NF_CompilerClass <- R6::R6Class(
'NF_CompilerClass',
inherit = NFvirtual_CompilerClass,
portable = FALSE,
public = list(
cppDef = NULL,
##Rwrapper = NULL,
createCpp = function(control = list(),
sourceObj = NULL) {
## Do all steps to create C++ (and R wrapper).
## When the function is a class method, the NC_CompilerClass
## object manages these steps by calling process() and createCppInternal().
controlFull <- updateDefaults(
get_nOption('compilerOptions'),
control
)
process(control = controlFull,
sourceObj = sourceObj)
createCppInternal()
},
createCppInternal = function() {
cppDef <<- cpp_nFunctionClass$new(
name = self$name
)
## It would be nice if self were not stored in cppDef
## but for now it is.
cppDef$buildFunction(self)
##cppDef$buildSEXPwrapper()
##Rwrapper <<- cppDef$buildRwrapper()
invisible(NULL)
},
process = function(control = list(),
sourceObj = NULL,
doKeywords = TRUE, ## deprecated?
.nCompilerProject = NULL, ## deprecated?
initialTypeInferenceOnly = FALSE) { ## deprecated?
## Do all steps of manipulating the abstract syntax tree
## to the point where it is ready to be used for C++ generation.
controlFull <- updateDefaults(
get_nOption('compilerOptions'),
control
)
processNFstages(self,
controlFull,
sourceObj,
doKeywords,
.nCompilerProject,
initialTypeInferenceOnly)
}
)
)
processNFstages <- function(NFcompiler,
control = list(),
sourceObj = NULL,
doKeywords = TRUE,
.nCompilerProject = NULL,
initialTypeInferenceOnly = FALSE) {
## Do all steps of manipulating the abstract syntax tree
## to the point where it is ready to be used for C++ generation.
controlFull <- updateDefaults(
get_nOption('compilerOptions'),
control
)
debug <- controlFull$debug
debugCpp <- FALSE
cppStacktrace <- controlFull$cppStacktrace
logging <- controlFull$logging
startStage <- controlFull$startStage
endStage <- controlFull$endStage
use_nCompiler_error_handling <- controlFull$use_nCompiler_error_handling
if(debug) browser()
nameMsg <- paste0("(for method or nFunction ", NFcompiler$origName, ")")
if (logging)
nDebugEnv$compilerLog <- c(
nDebugEnv$compilerLog,
paste("---- Begin compilation log", nameMsg, '----\n'),
"Original R code", "--------",
capture.output(NFcompiler$origRcode), "--------\n",
"Argument Symbol Table", "--------",
capture.output(NFcompiler$NFinternals$argSymTab), "--------\n",
"Return Type", "--------",
capture.output(NFcompiler$NFinternals$returnSym),
"--------\n"
)
### SET INPUT AND OUTPUT TYPES
stageName <- 'setInputOutputTypes'
if (logging) logBeforeStage(stageName)
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(!NFcompilerMaybeSkip(stageName, controlFull)) {
eval(NFcompilerMaybeDebug(stageName, controlFull))
NFtry({
if(!NFcompiler$initialTypeInferenceDone) {
if(doKeywords) {
## NEW: possibly implement these steps
## using the exprClass code, not the R code
## matchKeywords()
## processKeywords()
}
## If this function is a class, method, its symbolTable
## may have already been created by the NC_CompilerClass object
## calling setupSymbolTable()
if(is.null(NFcompiler$symbolTable)) {
NFcompiler$setupSymbolTable()
}
NFcompiler$initialTypeInferenceDone <- TRUE
}
if(initialTypeInferenceOnly)
return(NULL)
},
stageName,
use_nCompiler_error_handling)
NFcompiler$stageCompleted <- stageName
if (logging) logAfterStage(stageName)
}
### SET MANGLED ARGUMENT NAMES (e.g. x -> ARG1_X_)
stageName <- 'substituteMangledArgumentNames'
if (logging) logBeforeStage(stageName)
## simple substitution of the mangled argument names
## whereever they are used.
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(!NFcompilerMaybeSkip(stageName, controlFull)) {
eval(NFcompilerMaybeDebug(stageName, controlFull))
NFtry(
compilerStage_substituteMangledArgumentNames(
NFcompiler,
NFcompiler$nameSubList,
debug),
stageName,
use_nCompiler_error_handling)
NFcompiler$stageCompleted <- stageName
}
## INITIALIZE CODE
stageName <- 'initializeCode'
if (logging) logBeforeStage(stageName)
## set up abstract syntax tree (exprClass objects)
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(!NFcompilerMaybeSkip(stageName, controlFull)) {
eval(NFcompilerMaybeDebug(stageName, controlFull))
NFtry(
compilerStage_initializeCode(
NFcompiler,
debug),
stageName,
use_nCompiler_error_handling)
NFcompiler$stageCompleted <- stageName
if (logging) {
logAST(NFcompiler$code, 'AST initialized:',
showType = FALSE, showImpl = FALSE)
logAfterStage(stageName)
}
## Initialize initializerList if present (only for constructors)
if(!is.null(NFcompiler$NFinternals$aux)) {
if(!is.null(NFcompiler$NFinternals$aux$initializerList)) {
NFcompiler$NFinternals$aux$initializerList_exprClasses <-
lapply(NFcompiler$NFinternals$aux$initializerList, nParse)
}
}
}
stageName <- 'initializeAuxiliaryEnvironment'
if (logging) logBeforeStage(stageName)
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(!NFcompilerMaybeSkip(stageName, controlFull)) {
eval(NFcompilerMaybeDebug(stageName, controlFull))
NFtry({
compilerStage_initializeAuxEnv(NFcompiler,
sourceObj,
debug)
},
stageName,
use_nCompiler_error_handling)
resolveTBDsymbols(NFcompiler$symbolTable,
env = NFcompiler$auxEnv[['where']])
NFcompiler$returnSymbol <- resolveOneTBDsymbol(NFcompiler$returnSymbol,
env = NFcompiler$auxEnv[['where']])
if(inherits(NFcompiler$returnSymbol, "symbolNC")) {
NFcompiler$auxEnv$needed_nClasses <- c(NFcompiler$auxEnv$needed_nClasses, NFcompiler$returnSymbol$NCgenerator)
}
NFcompiler$stageCompleted <- stageName
if (logging) logAfterStage(stageName)
}
## SIMPLE TRANSFORMATIONS (e.g. name changes)
stageName <- 'simpleTransformations'
if (logging) logBeforeStage(stageName)
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(!NFcompilerMaybeSkip(stageName, controlFull)) {
eval(NFcompilerMaybeDebug(stageName, controlFull))
## Make modifications that do not need size processing
NFtry(
compilerStage_simpleTransformations(
NFcompiler,
debug),
stageName,
use_nCompiler_error_handling)
NFcompiler$stageCompleted <- stageName
if (logging) {
logAST(NFcompiler$code, showType = FALSE, showImpl = FALSE)
logAfterStage(stageName)
}
}
## build intermediate variables:
## Currently this only affects eigen, chol, and run.time
stageName <- 'simpleIntermediates'
if (logging) logBeforeStage(stageName)
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(!NFcompilerMaybeSkip(stageName, controlFull)) {
eval(NFcompilerMaybeDebug(stageName, controlFull))
NFtry({
compilerStage_simpleIntermediates(NFcompiler,
debug)
},
stageName,
use_nCompiler_error_handling)
NFcompiler$stageCompleted <- stageName
if (logging) {
logAST(NFcompiler$code, showType = FALSE, showImpl = FALSE)
logAfterStage(stageName)
}
}
## annotate sizes and types
stageName <- 'labelAbstractTypes'
if (logging) logBeforeStage(stageName)
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(!NFcompilerMaybeSkip(stageName, controlFull)) {
eval(NFcompilerMaybeDebug(stageName, controlFull))
NFtry({
compilerStage_labelAbstractTypes(NFcompiler,
debug)
# This will only collect nClasses from classGenerator$new()
# Other nClasses will end up in the symbolTable and be
# collected later.
NFcompiler$needed_nClasses <-
c(NFcompiler$needed_nClasses,
NFcompiler$auxEnv[['needed_nClasses']])
NFcompiler$needed_nFunctions <-
c(NFcompiler$needed_nFunctions,
NFcompiler$auxEnv[['needed_nFunctions']])
},
paste(stageName, nameMsg),
use_nCompiler_error_handling)
NFcompiler$stageCompleted <- stageName
if (logging) {
logAST(NFcompiler$code, showImpl = FALSE)
logAfterStage(stageName)
}
}
## insert new lines created by size processing
stageName <- 'addInsertions'
if (logging) logBeforeStage(stageName)
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(!NFcompilerMaybeSkip(stageName, controlFull)) {
eval(NFcompilerMaybeDebug(stageName, controlFull))
NFtry({
compileInfo_insertAssertions(NFcompiler,
debug)
},
stageName,
use_nCompiler_error_handling)
NFcompiler$stageCompleted <- stageName
if (logging) {
logAST(NFcompiler$code)
logAfterStage(stageName)
}
}
## create symbol table of Eigen implementation types from symbol table of abstract types
stageName <- 'setImplementation'
if (logging) logBeforeStage(stageName)
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(!NFcompilerMaybeSkip(stageName, controlFull)) {
eval(NFcompilerMaybeDebug(stageName, controlFull))
NFtry({
symbolTable_setImplementation(NFcompiler$symbolTable,
"Eigen")
},
stageName,
use_nCompiler_error_handling)
NFcompiler$stageCompleted <- stageName
if (logging) logAfterStage(stageName)
}
## modify code either for Eigen or Tensorflow back-end
stageName <- 'doImplementation'
if (logging) logBeforeStage(stageName)
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(!NFcompilerMaybeSkip(stageName, controlFull)) {
eval(NFcompilerMaybeDebug(stageName, controlFull))
NFtry(
compileInfo_eigenize(NFcompiler,
debug),
stageName,
use_nCompiler_error_handling)
NFcompiler$stageCompleted <- stageName
}
## place Eigen maps in code
## This step may be moot in the new design
exprClasses_liftMaps(NFcompiler$code,
NFcompiler$symbolTable,
NFcompiler$auxEnv)
NFcompiler$stageCompleted <- stageName
if (logging) {
logAST(NFcompiler$code)
logAfterStage(stageName)
}
## Expand into fully-fledged stage: finalTransformations
NFtry(
compilerStage_finalTransformations(NFcompiler,
debug),
"finalTransformations",
use_nCompiler_error_handling)
stageName <- 'addDebugging'
if (logging) logBeforeStage(stageName)
if(NFcompilerMaybeStop(stageName, controlFull)) return(invisible(NULL))
if(cppStacktrace) {
if(debug) writeLines('*** Inserting debugging')
NFtry(
compilerStage_addDebug(NFcompiler, debug),
stageName,
use_nCompiler_error_handling
)
}
if(debug && debugCpp) {
writeCode(
compile_generateCpp(NFcompiler$code,
NFcompiler$symbolTable)
)
}
NFcompiler$stageCompleted <- stageName
if (logging)
logAfterStage(stageName)
}

# ---- File: /PAC/R/makeList.R (repo: ingted/R-Examples) ----
#' Helper function to obtain a JSON-friendly format of a matrix formatted object
#'
#' @param x A matrix.
#' @return A list for generating JSON format output.
#' @export
makeList <- function(x) {
  if (ncol(x) > 2) {
    listSplit <- split(x[-1], x[1], drop = TRUE)
    lapply(names(listSplit),
           function(y) list(name = y, children = makeList(listSplit[[y]])))
  } else {
    lapply(seq_len(nrow(x)),
           function(y) list(name = x[, 1][y], size = x[, 2][y]))
  }
}
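
# Usage sketch (hypothetical data, not from the package docs): with more than
# two columns, makeList() recurses on the first column and nests the rest,
# producing the name/children/size structure that e.g. jsonlite::toJSON() or a
# d3.js hierarchy expects.
# df <- data.frame(group = c("A", "A", "B"),
#                  item  = c("x", "y", "z"),
#                  size  = c(1, 2, 3))
# str(makeList(df))
# # two top-level nodes ("A", "B"), each with $children holding name/size pairs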

# ---- File: /C5.0 and Random Forests/DecisionTrees.R (repo: zulyang/machine-learning-models) ----
install.packages("rpart")
install.packages("rpart.plot")
install.packages("caret")
install.packages("e1071")
install.packages("pROC")
install.packages("C50")
library(rpart)
library(rpart.plot)
library(caret)
library(e1071)
library(pROC)
library(C50)
# a) load genie dataset and replace column headers
geniefd.df <- read.csv("geniefd.csv")
geniefdheaders.df <- read.csv("geniefdcol.csv")
colnames(geniefd.df) <- geniefdheaders.df[,]
# b)
# i) Partitioning dataset into 60% (training) and 40% (test)
set.seed(1)
geniefd.df.index <- sample(1:nrow(geniefd.df), nrow(geniefd.df) * 0.6)
geniefd.df.train <- geniefd.df[geniefd.df.index,]
geniefd.df.test <- geniefd.df[-geniefd.df.index,]
# ii) In PDF
# iii)
rpart.tree <- rpart(FlashDeal ~., data = geniefd.df.train,
control = rpart.control(maxdepth = 5, minbucket =50), method = "class")
prp(rpart.tree, type = 3, extra = 1, under = T, split.font = 1, varlen = -10)
rpart.tree.pred.train <- predict(rpart.tree,geniefd.df.train,type="class")
confusionMatrix(table(rpart.tree.pred.train, geniefd.df.train$FlashDeal),
positive = "1")
# predictions on test set
rpart.tree.pred.test <- predict(rpart.tree,geniefd.df.test,type="class")
confusionMatrix(table(rpart.tree.pred.test, geniefd.df.test$FlashDeal),
positive = "1")
# iv) In PDF
# v)
#Using C50 Decision Trees
geniefd.df.c5<-C5.0(as.factor(FlashDeal)~., data = geniefd.df.train)
# Rules set
geniefd.df.c5.rules<-C5.0(as.factor(FlashDeal)~., data = geniefd.df.train, rules= TRUE)
summary(geniefd.df.c5.rules)
# on test set
geniefd.df.c5.pred <- predict(geniefd.df.c5,geniefd.df.test,type="class")
confusionMatrix(table(geniefd.df.c5.pred, geniefd.df.test$FlashDeal),
positive = "1")
# c)
# i) Evaluating the performance of rpart and C5.0
confusionMatrix(table(rpart.tree.pred.test, geniefd.df.test$FlashDeal),
positive = "1")
confusionMatrix(table(geniefd.df.c5.pred, geniefd.df.test$FlashDeal),
positive = "1")
#From the results, c5 is a better model.
# ii) In PDF
# d)
# i) Extracting and exporting the rules into a text file.
write(capture.output(summary(geniefd.df.c5.rules)), "c50model.txt")
#ii) In PDF
#iii) In PDF
|

# ---- File: /Week 1 hw script.R (repo: aqvining/Animal-Movement-Modelling-Course) ----
rm(list = ls())
coordinates <- data.frame(x = runif(4, 0, 10), y = runif(4, 0, 10))
v1 <- c(1, 6, 3, 22)
v2 <- c("A", "F", "B", "V")
v3 <- c(1, 1, 0, 1)
alphaCube <- array(LETTERS, c(3,3,3))
##Problem 4 test variables
P4test1 = "this is a string"
P4test2 = list('this', 'is', 'a', 'list', 'of', 'strings')
P4test3 = c(1,2,3,4,5,6,7,8)
P4test4 = c(1,2,3,'A',5,6,7,8)
P4test5 = list(1,2,3,'A',5,6,7,8)
P4test6 = list(words = P4test2, numbers = P4test3)
P4test7 = data.frame(numeric = P4test3, string = P4test4)
P4test8 = data.frame(numeric = P4test3, string = P4test4, stringsAsFactors = FALSE)
|

# ---- File: /loadingdata.R (repo: pacoraggio/MelbourneHousePriceShinyApp) ----
library(dplyr)
rm(list = ls())
# df.mel <- read.csv("./Data/melb_data.csv")
df.mel <- read.csv("./Data/Melbourne_housing_FULL.csv")
df.mel <- df.mel[which(!is.na(df.mel$Lattitude) &
!is.na(df.mel$Price) &
!is.na(df.mel$Bathroom)),]
# df1_FULL <- df1_FULL[which(!is.na(df1_FULL$Lattitude) &
# !is.na(df1_FULL$Price) &
# !is.na(df1_FULL$Bathroom)),]
# include.lowest = TRUE keeps the minimum price from being binned as NA,
# since cut() otherwise treats the lowest break as an open boundary
df.mel$PriceCategory <- cut(df.mel$Price,
                            breaks = quantile(df.mel$Price),
                            include.lowest = TRUE,
                            labels = c("low", "medium low", "medium high", "high"))
df.mel$MarkerColor <- cut(df.mel$Price,
                          breaks = quantile(df.mel$Price),
                          include.lowest = TRUE,
                          labels = c("darkgreen", "green", "red", "darkred"))
residency.type <- function(ttype)
{
if (ttype == "h")
{
return("House Cottage Villa")
}else if (ttype == "u")
{
return("Unit Duplex")
}else
{
return("Town house")
}
}
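
# A more compact equivalent using switch() (sketch; as.character() guards
# against factor input, for which switch() would dispatch on the level index
# instead of the type code):
# residency.type <- function(ttype) {
#   switch(as.character(ttype),
#          h = "House Cottage Villa",
#          u = "Unit Duplex",
#          "Town house")  # default for the remaining type code
# }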
df.mel$HoverText <- with(df.mel, paste('<b>Price:</b>', Price,
'<br>', "Council: ", CouncilArea,
'<br>', "Region: ", Regionname,
'<br>', "# Rooms: ", Rooms, " # Bathrooms: ", Bathroom,
'<br>', "Type", lapply(Type,residency.type)))
|

% ---- File: /man/EW_pop_out.Rd (repo: cran/IBMPopSim) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{EW_pop_out}
\alias{EW_pop_out}
\title{Example of "human population" after 100 years of simulation.}
\format{
Data frame containing a population structured by age and gender,
simulated with an initial population of 100 000 individuals sampled from \code{EW_pop_14$age_pyramid}
over 100 years, with birth and death events.
}
\usage{
EW_pop_out
}
\description{
Example of "human population" data frame after 100 years of simulation, based on a sample of England and Wales 2014 population and demographic rates.
}
\keyword{datasets}
|

# ---- File: /Data_Science/R_Programming/Hospital_Quality/best.R (repo: armeni/coursera) ----
best <- function(state, outcome) {
data <- data.table::fread('outcome-of-care-measures.csv')
outcome <- tolower(outcome)
chosen_state <- state
if (!chosen_state %in% unique(data[["State"]])) {
stop('invalid state')
}
if (!outcome %in% c("heart attack", "heart failure", "pneumonia")) {
stop('invalid outcome')
}
setnames(data,
tolower(sapply(colnames(data), gsub, pattern = "^Hospital 30-Day Death \\(Mortality\\) Rates from ", replacement = "" ))
)
data <- data[state == chosen_state]
col_indices <- grep(paste0("hospital name|state|^", outcome), colnames(data))
data <- data[, .SD, .SDcols = col_indices]
data[, outcome] <- data[, as.numeric(get(outcome))]
data <- data[complete.cases(data), ]
data <- data[order(get(outcome), `hospital name`)]
data[, "hospital name"][1]
}
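
# Example call (assumes 'outcome-of-care-measures.csv' is in the working
# directory): returns the name of the hospital with the lowest 30-day
# mortality for the given outcome in the given state.
# best("TX", "heart attack")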
|

% ---- File: /man/theme_map_dark.Rd (repo: OwnKng/JLLify) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/theme_jll.R
\name{theme_map_dark}
\alias{theme_map_dark}
\title{Style a map with a minimal look}
\usage{
theme_map_dark(...)
}
\description{
This function applies a clean dark-theme to a map. It removes all grids and axis markings.
}
\examples{
theme_map_dark()
}
|

# ---- File: /Titanic.R (repo: linpyl/Titanic) ----
rm(list = ls(all = TRUE))
# define functions
f.usePackage = function(p) {
if (!is.element(p, installed.packages()[,1])) {
install.packages(p, dep = TRUE);
}
require(p, character.only = TRUE);
}
f.getTitle = function(data) {
title.start = regexpr("\\,[A-Z ]{1,20}\\.", data, TRUE)
title.end = title.start + attr(title.start, "match.length") - 1
Title = substr(data, title.start+2, title.end-1)
return (Title)
}
`%ni%` = Negate(`%in%`)
f.cutoff.optimizer = function(pred, y) {
output.auc = vector()
grid = seq(0.1, 0.99, by=0.01)
for (cut.i in 1:length(grid)) {
yhat = ifelse(pred >= grid[cut.i], 1, 0)
result = prediction(yhat, y)
perf = performance(result,"tpr","fpr")
auc = performance(result,"auc")
auc = unlist(slot(auc, "y.values"))
output.auc = rbind(output.auc, auc)
}
output = cbind(grid, output.auc)
return(output)
}
f.logloss = function(actual, pred) {
eps = 1e-15
nr = nrow(pred)
pred = matrix(sapply(pred, function(x) max(eps,x)), nrow = nr)
pred = matrix(sapply(pred, function(x) min(1-eps,x)), nrow = nr)
ll = sum(actual*log(pred) + (1-actual)*log(1-pred))
ll = -ll/nr
return(ll);
}
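
# Sanity check with hypothetical numbers (not from the assignment data):
# near-perfect predictions give a log-loss near 0, and an uninformative
# 50/50 prediction gives log(2).
# f.logloss(actual = c(1, 0), pred = matrix(c(0.99, 0.01), nrow = 2))  # ~0.01
# f.logloss(actual = c(1, 0), pred = matrix(c(0.50, 0.50), nrow = 2))  # ~0.693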
f.cutoff.optimizer = function(pred, y) {
output.auc <- vector()
grid = seq(0.1, 0.99, by=0.01)
for (cut.i in 1:length(grid)) {
yhat = ifelse(pred >= grid[cut.i], 1, 0)
result = prediction(yhat, y)
perf = performance(result,"tpr","fpr")
auc = performance(result,"auc")
auc = unlist(slot(auc, "y.values"))
output.auc = rbind(output.auc, auc)
}
output = cbind(grid=grid, auc=output.auc)
return(output)
}
f.lasso.modeller = function(x.train, y.train, x.test, y.test){
# Use cv.glmnet() to determine the best lambda for the lasso:
lasso.cv = cv.glmnet(x=x.train, y=y.train, alpha=1, family='binomial', type.measure='auc', nfolds=10)
bestlam.lasso = lasso.cv$lambda.min
# Calculate the predicted outcome of the training and test sets:
fitted = predict(lasso.cv, s=bestlam.lasso, newx=x.train, type='response')
pred = predict(lasso.cv, s=bestlam.lasso, newx=x.test, type='response')
y.train = as.numeric(as.character(y.train))
y.test = as.numeric(as.character(y.test))
# Calculate the log-loss of the test
logloss = f.logloss(actual=y.test, pred=pred)
# Find the optimal probability cutoff with highest accuracy
output = f.cutoff.optimizer(fitted, y.train)
output = data.frame(output)
OptimalCutoff = output[which.max(output[,2]),c("grid")]
# Use the optimal probability cutoff to calculate the accuracy of test data
pred.binary = ifelse(pred>=OptimalCutoff, 1, 0)
conf.table = confusionMatrix(table(pred=pred.binary, actual=y.test))
pred.accuracy = as.numeric(conf.table$overall[1])
#sens = as.numeric(conf.table$byClass[2])
#spec = as.numeric(conf.table$byClass[1])
#kappa = as.numeric(conf.table$overall[2])
rm(pred, fitted, lasso.cv, bestlam.lasso, pred.binary, output)
# return: c(test log-loss, training-optimal probability cutoff, test accuracy at that cutoff)
return(c(logloss, OptimalCutoff, pred.accuracy))
}
# load library
f.usePackage("data.table")
f.usePackage("glmnet")
f.usePackage("dplyr")
f.usePackage("pROC")
f.usePackage("ROCR")
f.usePackage("caret")
setwd("C:/Users/chang_000/Dropbox/Pinsker/CV_Resume/InterviewQuestions/Avant/avant-analytics-interview/")
#dir()
#============================================================
# Read data sets
data1 = fread("TitanicData.csv")
data2 = fread("TitanicData2.csv")
data1 = data.frame(data1)
data2 = data.frame(data2)
str(data1)
str(data2)
unique(data1$PassengerId)
unique(data2$PassengerId)
#============================================================
# Data manupulation
# data1
data1$Survived = as.factor(data1$Survived)
data1$Sex = as.factor(ifelse(data1$Sex=="male", 0, 1))
data1$Pclass = as.factor(data1$Pclass)
# data2
data2$Survived = as.factor(data2$Survived)
data2$Sex = as.factor(data2$Sex)
data2$Pclass = as.factor(data2$Pclass)
data2$Fare = as.numeric(gsub(",","",data2$Fare))
# merge the data sets together
variables_list = names(data1)[order(names(data1))]
data = rbind(data1[,variables_list], data2[,variables_list])
dim(data)
summary(data)
# remove the duplicates if there are any in the merged data set
data = data[!duplicated(data), ]
dim(data)
##################################################################
# extract some additional information from the `Name` variable
##################################################################
# data$Title
data$Title = f.getTitle(data$Name)
table(data$Title)
# reduce the levels of the "Title" variable
data$Title[data$Title %in% c("Mme","Mlle","Lady","Ms","the Countess", "Jonkheer")] = "OtherFemale"
data$Title[data$Title %in% c("Capt", "Don", "Major", "Sir","Col","Dr","Rev")] = "OtherMale"
data$Title = as.factor(data$Title)
#######################################
# detect and impute the missing values
#######################################
apply(is.na(data), 2, sum)
#######
# Age
#######
# impute the missing values of "Age" variable
for (i in 1:nrow(data)) {
if (is.na(data$Age[i])) {
data$Age[i] = median(data$Age[data$Title==data$Title[i] & data$Pclass==data$Pclass[i]], na.rm=TRUE)
}
}
# Bin Age
obj = cut(data$Age, c(seq(0, 60, 10),Inf), labels=c(0:6))
data$Age_bin = as.factor(obj)
#######
# Fare
#######
data[data$Fare > 1000,]
# Adjust 2 outliers
for (i in 1:nrow(data)) {
if (data$Fare[i] > 1000) {
data$Fare[i] = median(data$Fare[data$Title==data$Title[i] & data$Pclass==data$Pclass[i] & data$Embarked==data$Embarked[i]], na.rm=TRUE)
}
}
# log-transformation of Fare: log(Fare)
data$logFare = ifelse(data$Fare > 0, log(data$Fare), log(0.001))
###########
# Embarked
###########
table(data$Embarked) # S is the majority
data$Embarked[data$Embarked == ""] = "S" # impute missing value of Embarked
data$Embarked = as.factor(data$Embarked)
# save the new data set
write.csv(data,"mergeddata.csv", row.names=FALSE)
#####################################################################################
# Produce a glmnet model predicting the chance that a Titanic passanger survived.
#####################################################################################
# set up of K-fold CV for validation
K = 10
block = sample(1:K, nrow(data), replace=TRUE)
# Lasso models:
yvariable = c("Survived")
mod1.xvariables = c("Age","Fare","Embarked","Pclass","Sex","Title")
mod2.xvariables = c("Age_bin","Fare","Embarked","Pclass","Sex","Title")
mod3.xvariables = c("Age","logFare","Embarked","Pclass","Sex","Title")
mod4.xvariables = c("Age_bin","logFare","Embarked","Pclass","Sex","Title")
# initiate tables for the outputs
cv.logloss <- cv.OptimalCutoff <- cv.accuracy <- matrix(0, K, 4)
for (i in 1:K) {
train.data = data[block!=i,]
test.data = data[block==i,]
#=========================================================
# Model 1: Age + Embarked + Fare + Pclass + Sex + Title
train = train.data[,c(yvariable,mod1.xvariables)]
test = test.data[,c(yvariable,mod1.xvariables)]
x.train = model.matrix(Survived~., train)[,-1]
y.train = train$Survived
x.test = model.matrix(Survived~., test)[,-1]
y.test = test$Survived
temp = f.lasso.modeller(x.train, y.train, x.test, y.test)
cv.logloss[i,1] = temp[1]
cv.OptimalCutoff[i,1] = temp[2]
cv.accuracy[i,1] = temp[3]
rm(train, test, x.train, y.train, x.test, y.test, temp)
#=========================================================
# Model 2:
train = train.data[,c(yvariable,mod2.xvariables)]
test = test.data[,c(yvariable,mod2.xvariables)]
x.train = model.matrix(Survived~., train)[,-1]
y.train = train$Survived
x.test = model.matrix(Survived~., test)[,-1]
y.test = test$Survived
temp = f.lasso.modeller(x.train, y.train, x.test, y.test)
cv.logloss[i,2] = temp[1]
cv.OptimalCutoff[i,2] = temp[2]
cv.accuracy[i,2] = temp[3]
rm(train, test, x.train, y.train, x.test, y.test, temp)
#=========================================================
# Model 3:
train = train.data[,c(yvariable,mod3.xvariables)]
test = test.data[,c(yvariable,mod3.xvariables)]
x.train = model.matrix(Survived~., train)[,-1]
y.train = train$Survived
x.test = model.matrix(Survived~., test)[,-1]
y.test = test$Survived
temp = f.lasso.modeller(x.train, y.train, x.test, y.test)
cv.logloss[i,3] = temp[1]
cv.OptimalCutoff[i,3] = temp[2]
cv.accuracy[i,3] = temp[3]
rm(train, test, x.train, y.train, x.test, y.test, temp)
#=========================================================
# Model 4:
train = train.data[,c(yvariable,mod4.xvariables)]
test = test.data[,c(yvariable,mod4.xvariables)]
x.train = model.matrix(Survived~., train)[,-1]
y.train = train$Survived
x.test = model.matrix(Survived~., test)[,-1]
y.test = test$Survived
temp = f.lasso.modeller(x.train, y.train, x.test, y.test)
cv.logloss[i,4] = temp[1]
cv.OptimalCutoff[i,4] = temp[2]
cv.accuracy[i,4] = temp[3]
rm(train, test, x.train, y.train, x.test, y.test, temp)
}
# average log-loss of the models
cv.logloss = data.frame(cv.logloss)
colnames(cv.logloss) = c("Model 1", "Model 2", "Model 3", "Model 4")
rownames(cv.logloss) = 1:K
cv.logloss
apply(cv.logloss, 2, mean)
# average test accuracy of the models
cv.accuracy = data.frame(cv.accuracy)
colnames(cv.accuracy) = c("Model 1", "Model 2", "Model 3", "Model 4")
rownames(cv.accuracy) = 1:K
cv.accuracy
apply(cv.accuracy, 2, mean)
#
cv.OptimalCutoff = data.frame(cv.OptimalCutoff)
colnames(cv.OptimalCutoff) = c("Model 1", "Model 2", "Model 3", "Model 4")
rownames(cv.OptimalCutoff) = 1:K
cv.OptimalCutoff
apply(cv.OptimalCutoff, 2, mean)
######################################################################
# Model 3 has the smallest log-loss and largest accuracy on test set
# Build Model 3 using Full data
######################################################################
dataset = data[,c(yvariable,mod3.xvariables)]
x = model.matrix(Survived~., dataset)[,-1]
y = dataset$Survived
# Use cv.glmnet() to determine the best lambda for the lasso:
mod.lasso = cv.glmnet(x=x, y=y, alpha=1, family='binomial', type.measure='auc', nfolds=10)
(bestlam.lasso = mod.lasso$lambda.min)
plot(mod.lasso, main="lasso")
# Accuracy, Sensitivity, Specificity, and Kappa of the new model using the Full dataset
pred = predict(mod.lasso, s=bestlam.lasso, newx=x, type='response')
optimal.cutoff = apply(cv.OptimalCutoff, 2, mean)[3]
pred.binary = ifelse(pred>=optimal.cutoff, 1, 0)
conf.table = confusionMatrix(table(pred=pred.binary, actual=y))
result = cbind(accuracy = as.numeric(conf.table$overall[1]),
sensitivity = as.numeric(conf.table$byClass[2]),
specificity = as.numeric(conf.table$byClass[1]),
kappa = as.numeric(conf.table$overall[2]))
result = data.frame(result)
rownames(result) = c("Model 1")
result
# Grep the variables which have non-zero coefficients in the lasso
c = coef(mod.lasso, s=bestlam.lasso, exact=TRUE)
inds = which(c!=0)
(variables = row.names(c)[inds])
result.coef = data.frame(cbind(var=variables, coef=c@x))
result.coef = cbind(result.coef, exp(c@x)-1)
colnames(result.coef) = c("var","coef","exp(coef) - 1")
result.coef
#######################################################################
# Adding Gender:Pclass terms to Model 3
#######################################################################
dataNew = data
dataNew$Male_Pclass1 = as.factor(ifelse((data$Sex==0 & data$Pclass==1),1,0))
dataNew$Male_Pclass2 = as.factor(ifelse((data$Sex==0 & data$Pclass==2),1,0))
dataNew$Male_Pclass3 = as.factor(ifelse((data$Sex==0 & data$Pclass==3),1,0))
dataNew$Female_Pclass1 = as.factor(ifelse((data$Sex==1 & data$Pclass==1),1,0))
dataNew$Female_Pclass2 = as.factor(ifelse((data$Sex==1 & data$Pclass==2),1,0))
dataNew$Female_Pclass3 = as.factor(ifelse((data$Sex==1 & data$Pclass==3),1,0))
mod3a.xvariables = c("Age","logFare","Embarked","Pclass","Sex","Title","Male_Pclass1",
"Male_Pclass2","Male_Pclass3","Female_Pclass1","Female_Pclass2",
"Female_Pclass3")
yvariable = c("Survived")
dataset = dataNew[,c(yvariable,mod3a.xvariables)]
# set up of K-fold CV for validation
K = 10
block = sample(1:K, nrow(dataset), replace=TRUE)
# initiate tables for the outputs
cv.result <- matrix(0, K, 3)
for (i in 1:K) {
train.data = dataset[block!=i,]
test.data = dataset[block==i,]
#=========================================================
# Model 3a: Age+logFare+Embarked+Pclass+Sex+Title+Male_Pclass1+Male_Pclass2+Male_Pclass3
# +Female_Pclass1+Female_Pclass2+Female_Pclass3
train = train.data[,c(yvariable,mod3a.xvariables)]
test = test.data[,c(yvariable,mod3a.xvariables)]
x.train = model.matrix(Survived~., train)[,-1]
y.train = train$Survived
x.test = model.matrix(Survived~., test)[,-1]
y.test = test$Survived
temp = f.lasso.modeller(x.train, y.train, x.test, y.test)
cv.result[i,1] = temp[1]
cv.result[i,2] = temp[2]
cv.result[i,3] = temp[3]
rm(train.data, test.data, x.train, y.train, x.test, y.test, temp)
}
# output
cv.result = data.frame(cv.result)
colnames(cv.result) = c("log-loss", "optimal.cutoff", "test.accuracy")
rownames(cv.result) = 1:K
cv.result
apply(cv.result, 2, mean)
# Build Model 3a using the Full data
x = model.matrix(Survived~., dataset)[,-1]
y = dataset$Survived
mod.lasso = cv.glmnet(x=x, y=y, alpha=1, family='binomial', type.measure='auc', nfolds=10)
(bestlam.lasso = mod.lasso$lambda.min)
# Predict using Model 3a
pred = predict(mod.lasso, s=bestlam.lasso, newx=x, type='response')
# Calculate the log-loss of Model 3a
f.logloss(as.numeric(as.character(y)), pred)
# Calculate the accuracy of Model 3a
optimal.cutoff = apply(cv.result, 2, mean)[2]
pred.binary = ifelse(pred>=optimal.cutoff, 1, 0)
conf.table = confusionMatrix(table(pred=pred.binary, actual=y))
result.3a = cbind(accuracy = as.numeric(conf.table$overall[1]),
sensitivity = as.numeric(conf.table$byClass[2]),
specificity = as.numeric(conf.table$byClass[1]),
kappa = as.numeric(conf.table$overall[2]))
result.3a = data.frame(result.3a)
rownames(result.3a) = c("Model_3a")
result.3a = rbind(result, result.3a)
result.3a
|

# ---- File: /figures/network_structure/network_structure.R (repo: whit1951/EEBCovid) ----
library(patchwork)
library(magrittr)
library(tidygraph)
library(ggraph)
library(tidyverse)
library(parallel)
source("../../code/generate_EEB_networks.R")
mycols <- c("#2e4052", "#d95d39", "#754042")
theme_set(theme_bw())
time_max <- 200
measure_network_structure <- function(network) {
as_tbl_graph(network) %>%
morph(to_components) %>%
crystallise() %>%
rowwise() %>%
mutate(`Size` = with_graph(graph, graph_order()),
`Diameter` = with_graph(graph, graph_diameter(directed=FALSE)),
`Mean Path Length` = with_graph(graph, graph_mean_dist(directed=FALSE))) %>%
ungroup() %>%
select(-graph) %>%
rename(component = name)
}
get_distances <- . %>% igraph::distances() %>% .[upper.tri(.)] %>% as.vector() %>% .[is.finite(.)]
EEB_nets <- generate_EEB_networks("../../code/")
g<-as_tbl_graph(EEB_nets$office)
h<-as_tbl_graph(EEB_nets$lab)
full_graph <- graph_join(g, h, by="name") %>% to_undirected() %>% simplify() %>% as_tbl_graph()
network_structure <- list(measure_network_structure(h) %>% mutate(network = str_c("Just Shared Lab Space (", n(), " components)")),
measure_network_structure(full_graph) %>% mutate(network = str_c("Combined Lab and Office (", n(), " components)"))) %>%
bind_rows() %>%
pivot_longer(c(-network, -component), names_to="metric", values_to="value") %>%
mutate(metric = fct_inorder(metric))
set.seed(0)
full_net <- ggraph(full_graph, layout=igraph::layout_nicely(full_graph)) +
geom_edge_link(edge_width=0.66, colour="#635E5B") +
geom_node_point(aes(colour=I(mycols[1])), size=2) +
ggtitle("A) Combined Lab and Office",
str_c(with_graph(full_graph, graph_component_count()), " distinct components")) +
theme_graph(base_family="") +
theme(legend.position="none",
plot.title = element_text(size = rel(1.2), hjust = 0, face="bold", family="Computer Modern",
vjust = 1, margin = margin(b = 5.5)), plot.title.position = "panel",
plot.subtitle = element_text(size=rel(1), hjust = 0, vjust = 1, margin = margin(b = 5.5)))
set.seed(0)
lab_net <- ggraph(h, layout=igraph::layout_nicely(h)) +
geom_edge_link(edge_width=0.66, colour="#635E5B") +
geom_node_point(aes(colour=I(mycols[2])), size=2) +
  ggtitle("B) Just Shared Lab Space",
str_c(with_graph(h, graph_component_count()), " distinct components")) +
theme_graph() +
theme(legend.position="none",
plot.title = element_text(size = rel(1.2), hjust = 0, face="bold", family="Computer Modern",
vjust = 1, margin = margin(b = 5.5)), plot.title.position = "panel",
plot.subtitle = element_text(size=rel(1), hjust = 0, vjust = 1, margin = margin(b = 5.5)))
path_lengths <- ggplot(bind_rows(tibble(distance=get_distances(h), network="lab"),
tibble(distance=get_distances(full_graph), network="full"))) +
aes(x=distance, fill=network) +
geom_histogram(position="dodge", binwidth=0.5) +
scale_x_continuous(breaks=1:8) +
scale_fill_manual(values=mycols) +
xlab("Shortest Path Length") + ylab("Number of Paths") +
ggtitle("C) Distribution of shortest paths") +
theme(legend.position="none",
plot.title = element_text(size = rel(1.2), hjust = 0, face="bold", family="Computer Modern",
vjust = 1, margin = margin(b = 5.5)), plot.title.position = "panel",
plot.subtitle = element_text(size=rel(1), hjust = 0, vjust = 1, margin = margin(b = 5.5)))
full_plot <- full_net + lab_net + path_lengths + plot_layout(widths=c(1,1,1.5))
ggsave(full_plot, filename="network_structure.tiff", width=13, height=5)
ggsave(full_plot, filename="network_structure.png", width=13, height=5)
component_structure <- ggplot(network_structure %>% na.omit()) +
aes(x=metric, y=value, colour=network) +
geom_point(position=position_jitterdodge(jitter.width=0.1, dodge.width=0.5)) +
coord_trans(y="log1p") +
scale_y_continuous(breaks=c(0, 1, 2, 3, 4, 5, 10, 20, 30, 40, 50, 100, 200),
minor_breaks=c(6, 7, 8, 9, 60, 70, 80, 90),
limits=c(NA, 160)) +
scale_colour_manual(values=mycols)+ scale_fill_manual(values=mycols) +
# ggtitle("Component-wise structural metrics") +
theme(axis.title=element_blank(),
legend.position="none",
axis.ticks.x=element_blank(),
plot.title = element_text(size = rel(1.2), hjust = 0, face="bold", family="Computer Modern",
vjust = 1, margin = margin(b = 5.5)), plot.title.position = "panel",
plot.subtitle = element_text(hjust = 0, vjust = 1, margin = margin(b = 5.5)))
ggsave(component_structure, filename="component_structure.tiff", width=5, height=4)
ggsave(component_structure, filename="component_structure.png", width=5, height=4)
|

# ---- File: /tests/testthat/test-test_praise_bien.R (repo: ComunidadBioInfo/praiseMX) ----
context("praise_bien.R")
test_that("Evalúa que praise_bien genera una frase de éxito que es un caracter", {
expect_is(praise_bien(), "character")
})
|

# ---- File: /data-processing-code-salvage/27_richness_ss_scale_2017-07-28.R (repo: rvanmazijk/Hons-thesis-code-salvage) ----
# ...
# Hons thesis
# Ruan van Mazijk
# created: 2017-07-28
# last edited: 2017-07-28
# Preamble ---------------------------------------------------------------------
# ...
# Set up -----------------------------------------------------------------------
rm(list = ls())
# The order in which these source() cmds are run is NB
source("Scripts/i_my_funs_4spaces.R")
source("Scripts/ii_my_objs.R")
source("Scripts/v_prelim_anal_fn.R")
source("Scripts/vi_rand_raster_null_fn.R")
source("Scripts/vii_compare_null_obs_fn.R")
source("Scripts/viii_prelim_rand_compare_fn.R")
# ------------------------------------------------------------------------------
do_richness_PCA <- function(x_richness, x_tester, x_border) {
# Aggregate GCFR floral richness raster -----------------------------------
# (to QDS, HDS, 3/4DS, & DS)
agg_richness_x <- custom_aggregate_loop(x_richness$raster, facts = 2:4)
agg_richness_x %<>% c(x_richness$raster, .)
names(agg_richness_x) <- c("QDS", "HDS", "TQDS", "DS")
op <- par(mfrow = c(1, 1), mar = c(5, 4, 4, 1))
par(mfrow = c(2, 2))
plot_from_list(agg_richness_x)
par(op)
# Extracted pts ------------------------------------------------------------
set.seed(57701)
rand_pts_x <- x_tester %>%
sampleRandom(size = 1000, xy = TRUE, sp = TRUE, na.rm = TRUE)
extracted_agg_richness_x <- agg_richness_x %>%
map(raster::extract, rand_pts_x, sp = TRUE) %>%
map(as.data.frame) %>%
map(dplyr::select, x, y, layer)
for (i in 1:(length(2:4) + 1)) {
names(extracted_agg_richness_x[[i]]) <-
c("lon", "lat", "floral_richness")
}
extracted_agg_richness_x %<>%
map(function(x) { x %<>% cbind(pt_ID = rownames(.), .) })
for (i in 1:4) {
extracted_agg_richness_x[[i]] %<>%
cbind(., fact = names(extracted_agg_richness_x)[i])
}
extracted_agg_richness_x %<>%
reshape2::melt(
id.vars = c("lon", "lat", "pt_ID", "fact"),
value.name = "floral_richness"
) %>%
dplyr::select(lon, lat, pt_ID, fact, floral_richness)
extracted_agg_richness_x_df_tbl <- extracted_agg_richness_x %>%
spread(key = fact, value = floral_richness)
any(c(
extracted_agg_richness_x_df_tbl %$% {QDS != HDS } %>% any(),
extracted_agg_richness_x_df_tbl %$% {QDS != TQDS} %>% any(),
extracted_agg_richness_x_df_tbl %$% {QDS != DS } %>% any(),
extracted_agg_richness_x_df_tbl %$% {HDS != TQDS} %>% any(),
extracted_agg_richness_x_df_tbl %$% {HDS != DS } %>% any(),
extracted_agg_richness_x_df_tbl %$% {TQDS != DS } %>% any()
))
# Oh dear... (?) (IS this bad?)
# FIXME
# PCA ----------------------------------------------------------------------
richness_pca <- prcomp(extracted_agg_richness_x_df_tbl[, -c(1:3)])
par(mfrow = c(1, 1))
extracted_agg_richness_x_df_tbl %<>% cbind(
pc1 = richness_pca$x[, 1],
pc2 = richness_pca$x[, 2]
)
# Plot biplot
biplot <- TRUE
if (biplot) {
plot(
richness_pca$x,
ylim = c(
-max(abs(richness_pca$x[, 2])), max(abs(richness_pca$x[, 2]))
),
xlim = c(
-max(abs(richness_pca$x[, 1])), max(abs(richness_pca$x[, 1]))
)
)
abline(h = 0, lty = 2)
abline(v = 0, lty = 2)
}
# Plot PC1/2 loadings
loadings <- TRUE
if (loadings) {
par(mfrow = c(1, 2))
plot_PC_loadings(x = richness_pca, facts = 2:4)
par(op)
}
# Plot in geospace
geospace <- TRUE
if (geospace) {
pt_size <- 2
p <-
ggplot(
aes(x = lon, y = lat),
dat = extracted_agg_richness_x_df_tbl
) +
geom_polygon(
aes(x = long, y = lat, group = group),
colour = "black",
fill = NA,
data = x_border
) +
theme_classic() +
theme(legend.position = "top")
p_pc1 <- p +
geom_point(
aes(col = pc1),
size = pt_size,
data = extracted_agg_richness_x_df_tbl
) +
scale_color_distiller(
palette = "Spectral",
name = "PC1"
)
p_pc2 <- p +
geom_point(
aes(col = pc2),
size = pt_size,
data = extracted_agg_richness_x_df_tbl
) +
scale_color_distiller(
palette = "Spectral",
name = "PC2"
)
plot(arrangeGrob(
p_pc1, p_pc2,
ncol = 2, widths = c(15, 15), heights = 10
))
}
return(list(
agg_richness = agg_richness_x,
extracted_agg_richness = extracted_agg_richness_x_df_tbl,
richness_pca = richness_pca,
geospace_grob = arrangeGrob(p_pc1, p_pc2,
ncol = 2,
widths = c(15, 15),
heights = 10)
))
}
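# Minimal, self-contained sketch (toy 4x4 raster; illustrative only, not part
# of this analysis) of what custom_aggregate_loop() in do_richness_PCA() does
# under the hood: raster::aggregate() coarsens the grid by a factor, as when
# moving from QDS up to DS resolution. Object names here are hypothetical.
demo_r <- raster::raster(matrix(1:16, nrow = 4))
demo_r2 <- raster::aggregate(demo_r, fact = 2, fun = mean)  # 4x4 cells -> 2x2 cells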
# Individually PCAs ------------------------------------------------------------
GCFR_richness_pca <- do_richness_PCA(
GCFR_richness,
GCFR_border,
x_tester = MAP_GCFR_0.05
)
summary(GCFR_richness_pca)
par(mfrow = c(1, 2))
plot_PC_loadings(x = GCFR_richness_pca$richness_pca, facts = 2:4)
par(op)
plot(GCFR_richness_pca$geospace_grob)
SWAFR_richness_pca <- do_richness_PCA(
SWAFR_richness,
SWAFR_border,
x_tester = MAP_SWAFR_0.05
)
summary(SWAFR_richness_pca)
par(mfrow = c(1, 2))
plot_PC_loadings(x = SWAFR_richness_pca$richness_pca, facts = 2:4)
par(op)
plot(SWAFR_richness_pca$geospace_grob)
# SWAFR & GCFR ARE REVERSED in terms of scale vs richness!
# viz. GCFR:
# PC2 high = fine scale,
# And ...
# Combo-time! ------------------------------------------------------------------
GCFR_richness_pca$extracted_agg_richness
SWAFR_richness_pca$extracted_agg_richness
both_regions_extracted_agg_richness <-
rbind(cbind(
GCFR_richness_pca$extracted_agg_richness,
region_name = rep("GCFR", times = 1000)
),
cbind(
SWAFR_richness_pca$extracted_agg_richness,
region_name = rep("SWAFR", times = 1000)
)
)
both_regions_richness_pca <-
prcomp(both_regions_extracted_agg_richness[, -c(1:3, 8:10)])
plot(
both_regions_richness_pca$x,
col = both_regions_extracted_agg_richness$region_name,
ylim = c(
-max(abs(both_regions_richness_pca$x[, 2])),
max(abs(both_regions_richness_pca$x[, 2]))
),
xlim = c(
-max(abs(both_regions_richness_pca$x[, 1])),
max(abs(both_regions_richness_pca$x[, 1]))
)
)
abline(h = 0, lty = 2)
abline(v = 0, lty = 2)
plot_PC_loadings(both_regions_richness_pca, 2:4)
plot_PC_geospace(
both_regions_extracted_agg_richness[, -c(8:9)],
region_a = "GCFR", region_b = "SWAFR",
border_a = GCFR_border, border_b = SWAFR_border,
pc1 = both_regions_richness_pca$x[, 1],
pc2 = both_regions_richness_pca$x[, 2]
)
# ---- File: public/class07/logr_model.r (repo: danbikle/ml4us) ----
# logr_model.r
# This script should create a logistic regression model.
# Ref:
# http://www.ml4.us/cclasses/class07#hr
# Demo:
# R -f logr_model.r
train_test_logr = function(feat_df,yr_i,size_i){
# This function should train and then test using Logistic Regression and data in feat_df.
# I should use yr_i to compute end, start:
yr_train_end_i = yr_i - 1
yr_train_start_i = yr_i - size_i
# I should constrain the training data.
yr_v = strtoi(format(as.Date(feat_df$cdate),"%Y"))
pred1_v = (yr_v >= yr_train_start_i)
pred2_v = (yr_v <= yr_train_end_i)
pred3_v = (pred1_v & pred2_v)
train_df = feat_df[ pred3_v , ]
# I should build a model from train_df.
# So, I should generate labels from pctlead:
train_df$labels = (train_df$pctlead > median(train_df$pctlead))
# Now I should learn:
mymodel = glm(labels ~ pctlag1 + moy + dow, data=train_df, family='binomial')
  # The above model assumes that each label relies somewhat on pctlag1, moy, and dow.
  # The model returns the probability that the label is TRUE.
  # If the probability is above 0.5 I consider that a bullish prediction.
  # If the probability is below 0.5 I consider that a bearish prediction.
# I should load test data
yr_test = yr_i
yr_test_v = strtoi(format(as.Date(feat_df$cdate),"%Y"))
pred_test_v = (yr_test_v == yr_test)
test_df = feat_df[pred_test_v , ]
test_df$prediction = predict(mymodel,test_df, type='response')
test_df$eff = sign(test_df$prediction-0.5) * test_df$pctlead
test_df$accurate = (test_df$eff >= 0)
# I should write predictions to CSV
csv_s = paste('predictions',yr_test,'.csv',sep='')
write.csv(test_df,csv_s, row.names=FALSE)
return(csv_s)
} # train_test_logr = function(feat_df,yr_i,size_i)
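# Minimal, self-contained sketch (toy data with hypothetical values, separate
# from the pipeline above) of how glm(family = 'binomial') plus
# predict(type = 'response') yields probabilities in [0, 1], which
# sign(probability - 0.5) then turns into a bullish/bearish call:
toy_df <- data.frame(x = 1:6, labels = c(FALSE, FALSE, TRUE, FALSE, TRUE, TRUE))
toy_model <- glm(labels ~ x, data = toy_df, family = 'binomial')
toy_prob <- predict(toy_model, data.frame(x = c(1.5, 5.5)), type = 'response')
toy_call <- sign(toy_prob - 0.5)  # +1 = bullish, -1 = bearish, 0 = undecided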
# I should load features from CSV:
feat_df = read.csv('feat.csv')
size_i = 25
for (yr_i in c(2000:2018)){
pf_s = train_test_logr(feat_df,yr_i,size_i)
print(pf_s)
}
# I should report effectiveness, accuracy:
sum_eff_long_f = 0
sum_eff_logr_f = 0
sum_long_accuracy_i = 0
sum_accuracy_i = 0
sum_all_i = 0
for (yr_i in c(2000:2018)){
csv_s = paste('predictions',yr_i,'.csv',sep='')
p_df = read.csv(csv_s)
sum_eff_long_f = sum_eff_long_f + sum(p_df$pctlead)
sum_eff_logr_f = sum_eff_logr_f + sum(p_df$eff)
sum_long_accuracy_i = sum_long_accuracy_i + sum((p_df$pctlead > 0))
sum_accuracy_i = sum_accuracy_i + sum(p_df$accurate)
sum_all_i = sum_all_i + length(p_df$accurate)
}
print('Long-Only Effectiveness:')
print(sum_eff_long_f)
print('Logistic-Regression Effectiveness:')
print(sum_eff_logr_f)
print('Long-Only Accuracy:')
acc_long_f = 100.0 * sum_long_accuracy_i / sum_all_i
print(acc_long_f)
print('Logistic-Regression Accuracy:')
acc_logr_f = 100.0 * sum_accuracy_i / sum_all_i
print(acc_logr_f)
'bye'
# ---- File: fuzzedpackages/oppr/R/branch_matrix.R (repo: akhikolla/testpackages) ----
#' @include internal.R
NULL
#' Branch matrix
#'
#' Phylogenetic trees depict the evolutionary relationships between different
#' species. Each branch in a phylogenetic tree represents a period of
#' evolutionary history. Species that are connected to the same branch
#' share the same period of evolutionary history represented by the branch.
#' This function creates
#' a matrix that shows which species are connected with which branches. In
#' other words, it creates a matrix that shows which periods of evolutionary
#' history each species has experienced.
#'
#' @param x \code{\link[ape]{phylo}} tree object.
#'
#' @param assert_validity \code{logical} value (i.e. \code{TRUE} or \code{FALSE})
#' indicating if the argument to \code{x} should be checked for validity.
#' Defaults to \code{TRUE}.
#'
#' @param ... not used.
#'
#'
#' @return \code{\link[Matrix]{dgCMatrix-class}} sparse matrix object. Each row
#' corresponds to a different species. Each column corresponds to a different
#' branch. Species that inherit from a given branch are indicated with a one.
#'
#' @name branch_matrix
#'
#' @rdname branch_matrix
#'
#' @examples
#' # load Matrix package to plot matrices
#' library(Matrix)
#'
#' # load data
#' data(sim_tree)
#'
#' # generate species by branch matrix
#' m <- branch_matrix(sim_tree)
#'
#' # plot data
#' par(mfrow = c(1,2))
#' plot(sim_tree, main = "phylogeny")
#' image(m, main = "branch matrix")
#'
#' @export
branch_matrix <- function(x, ...) UseMethod("branch_matrix")
#' @rdname branch_matrix
#' @method branch_matrix default
#' @export
branch_matrix.default <- function(x, ...)
rcpp_branch_matrix(methods::as(x, "phylo"))
#' @rdname branch_matrix
#' @method branch_matrix phylo
#' @export
branch_matrix.phylo <- function(x, assert_validity = TRUE, ...) {
# check that tree is valid and return error if not
assertthat::assert_that(assertthat::is.flag(assert_validity))
if (assert_validity)
assertthat::assert_that(is_valid_phylo(x))
# generate matrix
rcpp_branch_matrix(x)
}
# ---- File: run_analysis.R (repo: connorcl/processed-uci-har-dataset, MIT license) ----
## Download dataset if it is not present
if(!file.exists("Dataset.zip")) {
url <- "https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip"
download.file(url = url, destfile = "Dataset.zip", method = "curl")
}
## Extract the dataset if it is not already extracted
data_dir <- "UCI HAR Dataset"
if(!dir.exists(data_dir)) {
unzip("Dataset.zip")
}
## Load the training and test set files
X_train <- read.table(file.path(data_dir, "train", "X_train.txt"))
y_train <- read.table(file.path(data_dir, "train", "y_train.txt"))
subject_train <- read.table(file.path(data_dir, "train", "subject_train.txt"))
X_test <- read.table(file.path(data_dir, "test", "X_test.txt"))
y_test <- read.table(file.path(data_dir, "test", "y_test.txt"))
subject_test <- read.table(file.path(data_dir, "test", "subject_test.txt"))
## Merge into one dataset
training_set <- cbind(subject_train, y_train, X_train)
test_set <- cbind(subject_test, y_test, X_test)
dataset <- rbind(training_set, test_set)
## Extract columns containing mean or sd of measurements only
features <- read.table(file.path(data_dir, "features.txt"),
stringsAsFactors = FALSE)
# Calculate indexes of relevant features, offsetting by 2 to take into account
# the subject and activity columns in the dataset
idxs_mean_std <- grep("mean\\(\\)|std\\(\\)", features[[2]])
dataset <- dataset[c(1:2, idxs_mean_std + 2)]
## Replace integers in activity column with descriptive names
activities <- read.table(file.path(data_dir, "activity_labels.txt"),
stringsAsFactors = FALSE)
dataset[[2]] <- factor(sub("_", " ", tolower(activities[[2]][dataset[[2]]])),
levels = c("walking", "walking upstairs",
"walking downstairs", "sitting",
"standing", "laying"))
## Label dataset with descriptive variable names
# Extract relevant feature names
varnames <- features[[2]][idxs_mean_std]
# Fix double 'Body' in some names
varnames <- sub("(Body)Body", "\\1", varnames)
# Remove parentheses
varnames <- sub("\\(\\)", "", varnames)
# Separate parts of variable names with .
varnames <- gsub("-", ".", varnames)
varnames <- gsub("([a-z])([A-Z])", "\\1\\.\\2", varnames)
# Replace 't' or 'f' at start of variable name with 'time' or 'freq'
varnames <- sub("^t", "time", varnames)
varnames <- sub("^f", "freq", varnames)
# Convert to lowercase
varnames <- tolower(varnames)
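# Illustrative walk-through (one hypothetical feature name, not read from the
# dataset) of the renaming rules applied above, in the same order:
demo_name <- "tBodyAcc-mean()-X"
demo_name <- sub("\\(\\)", "", demo_name)                    # "tBodyAcc-mean-X"
demo_name <- gsub("-", ".", demo_name)                       # "tBodyAcc.mean.X"
demo_name <- gsub("([a-z])([A-Z])", "\\1\\.\\2", demo_name)  # "t.Body.Acc.mean.X"
demo_name <- sub("^t", "time", demo_name)                    # "time.Body.Acc.mean.X"
demo_name <- tolower(demo_name)
stopifnot(demo_name == "time.body.acc.mean.x")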
# Apply names to the dataset
names(dataset) <- c("subject", "activity", varnames)
## Summarize by mean of each variable for each subject/activity pair
## to create tidy dataset
library(dplyr)
tidy_dataset <- dataset %>% group_by(subject, activity) %>% summarise_all(mean)
## Write tidy dataset
write.table(tidy_dataset, "tidy_data.txt", row.names = FALSE)
% ---- File: man/replicates.Rd (repo: llrs/experDesign, MIT license) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/designer.R
\name{replicates}
\alias{replicates}
\title{Design a batch experiment with experimental controls}
\usage{
replicates(pheno, size_subset, controls, omit = NULL, iterations = 500)
}
\arguments{
\item{pheno}{Data.frame with the sample information.}
\item{size_subset}{Numeric value of the number of sample per batch.}
\item{controls}{The numeric value of the amount of technical controls per
batch.}
\item{omit}{Name of the columns of the \code{pheno} that will be omitted.}
\item{iterations}{Numeric value of iterations that will be performed.}
}
\value{
A index with some samples duplicated in the batches.
}
\description{
To ensure that the batches are comparable some samples are processed in each
batch. This function allows to take into account that effect.
It uses the most different samples as controls as defined with \code{\link[=extreme_cases]{extreme_cases()}}.
}
\details{
To control for variance replicates are important, see for example \url{https://www.nature.com/articles/nmeth.3091}.
}
\examples{
samples <- data.frame(L = letters[1:25], Age = rnorm(25),
type = sample(LETTERS[1:5], 25, TRUE))
index <- replicates(samples, 5, controls = 2, omit = "L", iterations = 10)
head(index)
}
\seealso{
\code{\link[=design]{design()}}, \code{\link[=extreme_cases]{extreme_cases()}}.
}
# ---- File: MVA_project.R (repo: kaustubhchalke/Data-set-for-Hepatitis-C-Virus-HCV-for-Egyptian-patients) ----
#Importing the Hepatitis C Dataset
HCV= read.csv("HCV-Egy-Data.csv")
HCV
#Summary
attach(HCV)
summary(HCV)
#Dimensions of the data set
NROW(HCV)
NCOL(HCV)
#Displaying the column names of the dataset
colnames(HCV)
#Another method for dimensions
dim(HCV)
#Preprocessing of the data was done, but we didn't find any discrepancies.
na= is.na(HCV)
na
any(is.na(HCV))
#Displaying the first six rows of the datasets
head(HCV)
tail(HCV)
#Differentiating data set based on gender
Gen_male = HCV[HCV$Gender== '1',]
Gen_female = HCV[HCV$Gender=='2',]
#Exploring symptoms
#**********************Male Data Exploration*****************
#Fever
Fev_male = Gen_male[Gen_male$Fever == '2',]
Fev_male
summary(Fev_male)
#Vomiting and Nausea
Nau_male = Gen_male[Gen_male$Nausea.Vomting =='2',]
Nau_male
summary(Nau_male)
#Fatigue
Fat_male = Gen_male[Gen_male$Fatigue...generalized.bone.ache =='2',]
Fat_male
summary(Fat_male)
#Jaundice
Jau_male = Gen_male[Gen_male$Jaundice =='2',]
Jau_male
summary(Jau_male)
#Stomach pain
sto_male = Gen_male[Gen_male$Epigastric.pain =='2',]
sto_male
summary(sto_male)
#*********************Female Data Exploration*****************
#Fever
Fev_female = Gen_female[Gen_female$Fever == '2',]
Fev_female
summary(Fev_female)
#Vomiting and Nausea
Nau_female = Gen_female[Gen_female$Nausea.Vomting =='2',]
Nau_female
summary(Nau_female)
#Fatigue
Fat_female = Gen_female[Gen_female$Fatigue...generalized.bone.ache =='2',]
Fat_female
summary(Fat_female)
#Jaundice
Jau_female = Gen_female[Gen_female$Jaundice =='2',]
Jau_female
summary(Jau_female)
#Stomach pain
sto_female = Gen_female[Gen_female$Epigastric.pain =='2',]
sto_female
summary(sto_female)
#CORRELATION, COVARIANCE AND DISTANCE
covariance<-cov(HCV[,c(11:16,23)]) #variance-covariance matrix created
correlation<-cor(HCV[,c(11:16,23)]) #standardized
#colmeans
cm<-colMeans(HCV[,c(11:16,23)])
distance<-dist(scale(HCV[,c(11:16,23)],center=FALSE))
#Calculating di (the generalized distance) for all observations of our data
#Before that, first extract all numeric variables into a data frame
x<-HCV[,c(11:16,23)]
d <- apply(x, MARGIN = 1, function(x) + t(x - cm) %*% solve(covariance) %*% (x - cm))
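# Note: base R's mahalanobis() computes the same quantity. A self-contained
# check on toy data (illustrative only, not the HCV data):
set.seed(1)
toy_x <- matrix(rnorm(30), ncol = 3)
toy_d1 <- mahalanobis(toy_x, colMeans(toy_x), cov(toy_x))
toy_d2 <- apply(toy_x, 1, function(r)
  t(r - colMeans(toy_x)) %*% solve(cov(toy_x)) %*% (r - colMeans(toy_x)))
stopifnot(all.equal(unname(toy_d1), unname(toy_d2)))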
#Exploration of the data for high chances of HCV infection
#If the RNA.Base value is more than 700,000 units, then the virus is detected in high quantity.
#If the ALT.1 value is greater than 57, then it is not normal.
#We filtered the data on these two components.
library(dplyr)
HCV_male = HCV %>% filter(Gender == 1 & RNA.Base>= 700000 & ALT.1 >= 57)
HCV_male
HCV_female = HCV %>% filter(Gender == 2 & RNA.Base>= 700000 & ALT.1 >= 57)
HCV_female
#Box Plot
boxplot(RNA.Base, main="RNA.BASE Box plot",yaxt="n", xlab="RNA", horizontal=TRUE)
boxplot(ALT.1, main="ALT.1 Box plot",yaxt="n", xlab="ALT", horizontal=TRUE)
boxplot(WBC, main="WBC Box plot",yaxt="n", xlab="WBC", horizontal=TRUE)
boxplot(RBC, main="WBC Box plot",yaxt="n", xlab="RBC", horizontal=TRUE)
boxplot(AST.1, main="AST.1 Box plot",yaxt="n", xlab="AST", horizontal=TRUE)
#Plotting: are the points in a straight line?
#Male: plotting of the dataset is done for five different attributes.
qqnorm(HCV_male[,"RNA.Base"], main = "RNA.Base"); qqline(HCV_male[,"RNA.Base"])
qqnorm(HCV_male[,"ALT.1"], main = "ALT.1"); qqline(HCV_male[,"ALT.1"])
qqnorm(HCV_male[,"WBC"], main = "WBC"); qqline(HCV_male[,"WBC"])
qqnorm(HCV_male[,"RBC"], main = "RBC"); qqline(HCV_male[,"RBC"])
qqnorm(HCV_male[,"AST.1"], main = "AST.1"); qqline(HCV_male[,"AST.1"])
#Female: are the points in a straight line?
#Female: plotting of the dataset is done for five different attributes.
qqnorm(HCV_female[,"RNA.Base"], main = "RNA.Base"); qqline(HCV_female[,"RNA.Base"])
qqnorm(HCV_female[,"ALT.1"], main = "ALT.1"); qqline(HCV_female[,"ALT.1"])
qqnorm(HCV_female[,"WBC"], main = "WBC"); qqline(HCV_female[,"WBC"])
qqnorm(HCV_female[,"RBC"], main = "RBC"); qqline(HCV_female[,"RBC"])
qqnorm(HCV_female[,"AST.1"], main = "AST.1"); qqline(HCV_female[,"AST.1"])
#Visualisation
#Chiplot
library(HSAUR2)
library(tools)
library(MVA)
#Chiplot
#For male data
with(HCV_male, chiplot(RNA.Base, ALT.1))
#For Female Data
with(HCV_female, chiplot(RNA.Base, ALT.1))
library(GGally)
ggpairs(HCV_male, columns=c("AST.1","RNA.EOT","WBC","ALT.1", "RBC"), color="Survivorship")
ggpairs(HCV_female, columns=c("AST.1","RNA.EOT","WBC","ALT.1", "RBC"), color="Survivorship")
summary(lm(data = HCV , RNA.EOT~Age))
summary(lm(data = HCV , RNA.EOT~Gender))
summary(lm(data = HCV , RNA.EOT~WBC))
summary(lm(data = HCV , RNA.EOT~ALT.1))
cor(HCV)
#PRINCIPAL COMPONENT ANALYSIS**************ASSIGNMENT 3***********************************************************
#Importing the Hepatitis C Dataset
HCV_NEW= read.csv("HCV-Egy-Data.csv")
HCV<- HCV_NEW
attach(HCV)
library(dplyr)
HCV$Survivorship <- if_else(HCV$RNA.EOT >= 400000, 'NC', 'C')
HCV$Survivorship
#Summary
summary(HCV)
#Dimensions of the data set
NROW(HCV)
NCOL(HCV)
#Another method for dimensions
dim(HCV)
#Preprocessing of the data was done, but we didn't find any discrepancies.
na= is.na(HCV)
na
any(is.na(HCV))
#Displaying the first six rows of the datasets
head(HCV)
tail(HCV)
correlation<-cor(HCV[1:29]) #standardized
correlation
# Using prcomp to compute the principal components (eigenvalues and eigenvectors). With scale=TRUE, variable means are set to zero, and variances set to one
hcv_pca <- prcomp(HCV[1:29],scale=TRUE)
hcv_pca
summary(hcv_pca)
# sample scores stored in hcv_pca$x
# singular values (square roots of eigenvalues) stored in hcv_pca$sdev
# loadings (eigenvectors) are stored in hcv_pca$rotation
# variable means stored in hcv_pca$center
# variable standard deviations stored in hcv_pca$scale
# A table containing eigenvalues and %'s accounted, follows
# Eigenvalues are sdev^2
(eigen_hcv <- hcv_pca$sdev^2)
names(eigen_hcv) <- paste("PC",1:29,sep="")
eigen_hcv
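# Quick self-contained sanity check (toy data, illustrative only) of the
# identity used above: with scale=TRUE, prcomp()$sdev^2 equals the
# eigenvalues of the correlation matrix.
set.seed(1)
toy_m <- matrix(rnorm(48), ncol = 4)
stopifnot(all.equal(prcomp(toy_m, scale = TRUE)$sdev^2,
                    eigen(cor(toy_m))$values))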
sumlambdas <- sum(eigen_hcv)
sumlambdas
propvar <- eigen_hcv/sumlambdas
propvar
cumvar_hcv <- cumsum(propvar)
cumvar_hcv
matlambdas <- rbind(eigen_hcv,propvar,cumvar_hcv)
rownames(matlambdas) <- c("Eigenvalues","Prop. variance","Cum. prop. variance")
round(matlambdas,29)
summary(hcv_pca)
hcv_pca$rotation
print(hcv_pca)
# Sample scores stored in hcv_pca$x
hcv_pca$x
############################################### Identifying the scores by their survival status
hcvtyp_pca <- cbind(data.frame(Survivorship),hcv_pca$x)
hcvtyp_pca
# Means of scores for all the PC's classified by Survival status
tabmeansPC <- aggregate(hcvtyp_pca[,2:30],by=list(Survivorship=HCV$Survivorship),mean)
tabmeansPC
tabmeansPC <- tabmeansPC[rev(order(tabmeansPC$Survivorship)),]
tabmeansPC
tabfmeans <- t(tabmeansPC[,-1])
tabfmeans
colnames(tabfmeans) <- t(as.vector(tabmeansPC[1]))
tabfmeans
# Standard deviations of scores for all the PC's classified by Survival status
tabsdsPC <- aggregate(hcvtyp_pca[,2:30],by=list(Survivorship=HCV$Survivorship),sd)
tabfsds <- t(tabsdsPC[,-1])
colnames(tabfsds) <- t(as.vector(tabsdsPC[1]))
tabfsds
#t-test
t.test(PC1~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC2~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC3~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC4~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC5~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC6~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC7~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC8~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC9~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC10~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC11~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC12~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC13~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC14~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC15~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC16~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC17~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC18~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC19~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC20~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC21~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC22~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC23~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC24~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC25~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC26~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC27~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC28~HCV$Survivorship,data=hcvtyp_pca)
t.test(PC29~HCV$Survivorship,data=hcvtyp_pca)
# F ratio tests
var.test(PC1~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC2~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC3~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC4~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC5~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC6~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC7~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC8~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC9~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC10~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC11~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC12~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC13~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC14~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC15~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC16~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC17~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC18~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC19~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC20~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC21~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC22~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC23~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC24~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC25~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC26~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC27~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC28~HCV$Survivorship,data=hcvtyp_pca)
var.test(PC29~HCV$Survivorship,data=hcvtyp_pca)
# Plotting the scores for the first and second components
plot(hcvtyp_pca$PC1, hcvtyp_pca$PC2,pch=ifelse(hcvtyp_pca$Survivorship == "C",1,16),xlab="PC1", ylab="PC2", main="HCV patients against values for PC1 & PC2")
abline(h=0)
abline(v=0)
legend("bottomleft", legend=c("Cured","Not_Cured"), pch=c(1,16))
plot(eigen_hcv, xlab = "Component number", ylab = "Component variance", type = "l", main = "Scree diagram")
plot(log(eigen_hcv), xlab = "Component number",ylab = "log(Component variance)", type="l",main = "Log(eigenvalue) diagram")
print(summary(hcv_pca))
View(hcv_pca)
diag(cov(hcv_pca$x))
xlim <- range(hcv_pca$x[,1])
hcv_pca$x[,1]
hcv_pca$x
plot(hcv_pca$x,xlim=xlim,ylim=xlim)
hcv_pca$rotation[,1]
hcv_pca$rotation
plot(HCV[,-1])
hcv_pca$x
plot(hcv_pca)
#Get the original values of the data back from the PCA scores
center <- hcv_pca$center
scale <- hcv_pca$scale
new_HCV1 <- as.matrix(HCV[,-1])
new_HCV1
drop(scale(new_HCV1,center=center, scale=scale)%*%hcv_pca$rotation[,1])
predict(hcv_pca)[,1]
#The above two give us the same thing; predict() is a good function to know.
out <- sapply(1:5, function(i){plot(HCV$Survivorship,hcv_pca$x[,i],xlab=paste("PC",i,sep=""),ylab="Survivorship")})
pairs(hcv_pca$x[,1:5], ylim = c(-6,4),xlim = c(-6,4),panel=function(x,y,...){text(x,y,HCV$Survivorship)})
# ---- File: pp/plot-mec-pops.R (repo: rbehravesh/mec-generator) ----
source("gen-utils-clean.R")
library(pracma)
library(SDMTools)
library(ggmap)
library(spatstat)
library(graphics)
library(rjson)
library(cluster)
library(latex2exp)
library(metR)
library(stringi)
########### MADRID PARAMS ###########
REGIONS <- "../data/regions.json"
REGION_NAME <- "Madrid-center"
MEC_LOCATIONS_CSV <- paste("../data/mec-pops/Madrid-center/mec-locations-i-iii",
"/Madrid-Center-12AAUs-factor16-1.csv", sep = "")
MEC_M1_DIR <- "../data/mec-pops/Madrid-center/mec-m1-mats-i-iii"
MEC_M2_DIR <- "../data/mec-pops/Madrid-center/mec-m2-mats-i-iii"
PEOPLE_LONS <- "../data/people/Madrid-centro/people-intensity-longitudes"
PEOPLE_LATS <- "../data/people/Madrid-centro/people-intensity-latitudes"
CLI <- FALSE # flag to tell if file is executed from CLI
OUT_PLOT_M1 <- NULL
OUT_PLOT_M2 <- NULL
#####################################
########### COBO CALLEJA PARAMS ###########
REGIONS <- "../data/regions.json"
REGION_NAME <- "Cobo-Calleja"
MEC_LOCATIONS_CSV <- "../data/mec-pops/Cobo-Calleja/mec-locations-i-iii/Cobo-Calleja-12AAUs-factor1.csv"
MEC_M1_DIR <- "../data/mec-pops/Cobo-Calleja/mec-m1-mats-i-iii"
MEC_M2_DIR <- "../data/mec-pops/Cobo-Calleja/mec-m2-mats-i-iii"
PEOPLE_LONS <- "../data/people/Cobo-Calleja/people-intensity-lons"
PEOPLE_LATS <- "../data/people/Cobo-Calleja/people-intensity-lats"
CLI <- FALSE # flag to tell if file is executed from CLI
OUT_PLOT_M1 <- NULL
OUT_PLOT_M2 <- NULL
###########################################
########### HOCES DEL CABRIEL PARAMS ###########
REGIONS <- "../data/regions.json"
REGION_NAME <- "Hoces-del-Cabriel"
MEC_LOCATIONS_CSV <- paste("../data/mec-pops/Hoces-del-Cabriel/",
"mec-locations-i-iii-road/",
"Hoces-del-Cabriel-road-1.csv", sep = "")
MEC_M1_DIR <- "../data/mec-pops/Hoces-del-Cabriel/mec-m1-mats-i-iii-road"
MEC_M2_DIR <- "../data/mec-pops/Hoces-del-Cabriel/mec-m2-mats-i-iii-road"
PEOPLE_LONS <- "../data/people/Hoces-del-Cabriel/people-intensity-lons"
PEOPLE_LATS <- "../data/people/Hoces-del-Cabriel/people-intensity-lats"
CLI <- FALSE # flag to tell if file is executed from CLI
OUT_PLOT_M1 <- NULL
OUT_PLOT_M2 <- NULL
################################################
# Get the radio technology
radioTech = ifelse(stri_detect(MEC_M1_DIR, regex="iii"),
yes = "FDD 30kHz 2 symbols SPS, TDD 120kHz 7 symbols SPS",
no = "FDD 120kHz 7 symbols SPS")
# Parse arguments if existing to change default global variables
args <- commandArgs(trailingOnly=TRUE)
if(length(args) > 0)
if(length(args) != 8) {
stop(paste("Arguments to receive are: ",
"regionID latSamples lonSamples MEC_LOCATIONS_CSV MEC_M1_DIR|NULL",
"MEC_M2_DIR|NULL OUT_PLOT_M1 OUT_PLOT_M2"
))
} else {
CLI <- TRUE
REGION_NAME <- args[1]
PEOPLE_LATS <- args[2]
PEOPLE_LONS <- args[3]
MEC_LOCATIONS_CSV <- args[4]
MEC_M1_DIR <- args[5]
if (MEC_M1_DIR == "NULL") {
MEC_M1_DIR = NULL
}
MEC_M2_DIR <- args[6]
if (MEC_M2_DIR == "NULL") {
MEC_M2_DIR = NULL
}
OUT_PLOT_M1 <- args[7]
OUT_PLOT_M2 <- args[8]
if (is.null(MEC_M1_DIR) & is.null(MEC_M2_DIR)) {
stop("M1 or M2 matrices must be provided, but none is given")
}
}
# Load files
regions <- fromJSON(file = REGIONS)
lonAxis <- scan(file = PEOPLE_LONS)
latAxis <- scan(file = PEOPLE_LATS)
mecLocs <- read.csv(file = MEC_LOCATIONS_CSV)
mecLocs$circleSize <- ceil(10 * mecLocs$coveredAs / max(mecLocs$coveredAs))
shapes <- c()
for (row in 1:nrow(mecLocs)) {
shapes <- c(shapes, ifelse(mecLocs[row,]$ring == "M2", 15, 18))
}
mecLocs$shapes <- as.factor(shapes)
# Find the selected region
region <- NULL
for (region_ in regions$regions) {
if (region_$id == REGION_NAME) {
region <- list(id = region_$id,
bl = getCoords(region_$bl),
br = getCoords(region_$br),
tl = getCoords(region_$tl),
tr = getCoords(region_$tr),
repulsionRadius = region_$repulsionRadius,
plotDetails = region_$plotDetails,
populations = region_$populations)
break
}
}
# Get the subregion to be plotted
if (!is.null(region$plotDetails$tr)) {
region$bl <- getCoords(region$plotDetails$bl)
region$br <- getCoords(region$plotDetails$br)
region$tl <- getCoords(region$plotDetails$tl)
region$tr <- getCoords(region$plotDetails$tr)
mecLocs <- subset(mecLocs, mecLocs$lon > region$bl$lon &
mecLocs$lon < region$br$lon &
mecLocs$lat > region$br$lat & mecLocs$lat < region$tr$lat)
}
# Obtain the average of all the M1 matrices
avgM1 <- NULL
m1MatLons <- lonAxis
m1MatLats <- latAxis
if (!is.null(MEC_M1_DIR)) {
m1s <- 0
for (M1_CSV in list.files(path = MEC_M1_DIR, full.names = TRUE)) {
chop <- NULL
readM1 <- as.matrix(read.csv(file = M1_CSV)[,-1])
if (!is.null(region$plotDetails$tr)) { # chopped region
chop <- chopIntMat(intMat = readM1, lonAxis = lonAxis, latAxis = latAxis,
lonL = region$bl$lon, lonR = region$tr$lon,
latB = region$bl$lat, latT = region$tr$lat)
if (is.null(avgM1)) {
avgM1 <- chop$mat
m1MatLats <- chop$latAxis
m1MatLons <- chop$lonAxis
} else {
avgM1 <- avgM1 + chop$mat
}
} else { # full region
if (is.null(avgM1)) {
avgM1 <- matrix(data = 0, nrow = length(lonAxis),
ncol = length(latAxis))
}
avgM1 <- avgM1 + readM1
}
m1s <- m1s + 1
}
avgM1 <- avgM1 / m1s
}
# Obtain the average of all the M2 matrices
avgM2 <- NULL
m2MatLons <- lonAxis
m2MatLats <- latAxis
if (!is.null(MEC_M2_DIR)) {
m2s <- 0
for (M2_CSV in list.files(path = MEC_M2_DIR, full.names = TRUE)) {
chop <- NULL
readM2 <- as.matrix(read.csv(file = M2_CSV)[,-1])
if (!is.null(region$plotDetails$tr)) { # chopped region
chop <- chopIntMat(intMat = readM2, lonAxis = lonAxis, latAxis = latAxis,
lonL = region$bl$lon, lonR = region$tr$lon,
latB = region$bl$lat, latT = region$tr$lat)
if (is.null(avgM2)) {
avgM2 <- chop$mat
m2MatLats <- chop$latAxis
m2MatLons <- chop$lonAxis
} else {
avgM2 <- avgM2 + chop$mat
}
} else { # full region
if (is.null(avgM2)) {
avgM2 <- matrix(data = 0, nrow = length(lonAxis),
ncol = length(latAxis))
}
avgM2 <- avgM2 + readM2
}
m2s <- m2s + 1
}
avgM2 <- avgM2 / m2s
}
# Get the map
mapRegion <- c(left = region$bl$lon, bottom = region$bl$lat,
right = region$tr$lon, top = region$tr$lat)
map <- get_map(mapRegion, zoom = region$plotDetails$zoom, source = "stamen",
maptype = region$plotDetails$mapType)
# Plot
if (!is.null(MEC_M1_DIR)) {
# Get the groups correctly for the contour
levels <- pretty(range(avgM1), n = 7)
p <- contourPolys::fcontour(x = m1MatLons, y = m1MatLats,
z = avgM1, levels)
m <- cbind(x = unlist(p[[1]]),
y = unlist(p[[2]]),
lower = rep(unlist(p[[3]]), lengths(p[[1]])),
upper = rep(unlist(p[[4]]), lengths(p[[1]])),
g = rep(seq_along(p[[1]]), lengths(p[[1]])))
gd <- as.data.frame(m)
mynamestheme <- theme(plot.title = element_text(family = "Helvetica", face = "bold", size = (15)))
# Plot
gg_m <- ggmap(map)
gg_m <- gg_m + geom_polygon(data=gd, aes(x, y, group = g, fill = upper, alpha = upper)) +
scale_fill_gradient(low = "gray", high = "black") +
scale_alpha(range = c(0.3, 0.6)) +
geom_point(data=mecLocs, aes(x=lon, y=lat, shape=shapes,
size=circleSize)) +
mynamestheme +
labs(size = TeX("$\\frac{10·C_A(mp)}{max_{mp \\in PoPs} C_A(mp)}$")) +
labs(alpha = TeX("C_{A,M1}(mp)"), x = "longitude", y = "latitude") +
guides(fill = FALSE) +
scale_shape_manual(name = "Network ring",
labels = c("M1", "M2"),
values = c(18, 15)) +
ggtitle(label = "MEC PoPs", subtitle = sprintf("antennas' radio: %s", radioTech)) +
theme(plot.title = element_text(hjust = 0.5),
plot.subtitle = element_text(hjust = 0.5))
gg_m
if (CLI) {
ggsave(filename = OUT_PLOT_M1, plot = gg_m)
}
}
if (!is.null(MEC_M2_DIR)) {
# Get the groups correctly for the contour
levels <- pretty(range(avgM2), n = 7)
p <- contourPolys::fcontour(x = m2MatLons, y = m2MatLats,
z = avgM2, levels)
m <- cbind(x = unlist(p[[1]]),
y = unlist(p[[2]]),
lower = rep(unlist(p[[3]]), lengths(p[[1]])),
upper = rep(unlist(p[[4]]), lengths(p[[1]])),
g = rep(seq_along(p[[1]]), lengths(p[[1]])))
gd <- as.data.frame(m)
mynamestheme <- theme(plot.title = element_text(family = "Helvetica", face = "bold", size = (15)))
# Plot
gg_m <- ggmap(map)
gg_m <- gg_m + geom_polygon(data=gd, aes(x, y, group = g, fill = upper, alpha = upper)) +
scale_fill_gradient(low = "grey45", high = "black") +
scale_alpha(range = c(0.3, 0.6)) +
geom_point(data=mecLocs, aes(x=lon, y=lat, shape=shapes,
size=circleSize)) +
mynamestheme +
labs(size = TeX("$\\frac{10·C_A(mp)}{max_{mp \\in PoPs} C_A(mp)}$")) +
labs(alpha = TeX("C_{A,M2}(mp)"), x = "longitude", y = "latitude") +
guides(fill = FALSE) +
scale_shape_manual(name = "Network ring",
labels = c("M2", "M1"),
values = c(15, 18)) +
    ggtitle(label = "MEC PoPs", subtitle = sprintf("antennas' radio: %s", radioTech)) +
theme(plot.title = element_text(hjust = 0.5),
plot.subtitle = element_text(hjust = 0.5))
gg_m
if (CLI) {
ggsave(filename = OUT_PLOT_M2, plot = gg_m)
}
}
|
2d3e3f50728059bf430df032dce0ef8f8bd6bd82 | 8e6ed677d5ab1fe60c982c35b55a80d19e79a562 | /man/get_artists.Rd | 67a96552c252d33a8ce6149e78691cd787f4929a | [] | no_license | cannin/spotifyr | 0f99879c3ccc2dc62b7f4e1b17d7cf3c8c2d0d0c | c2c227712e8ca66c2fdcffef34202cae91e27f3f | refs/heads/master | 2020-03-23T18:26:52.888912 | 2018-07-18T22:18:12 | 2018-07-18T22:18:12 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 706 | rd | get_artists.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_artists.R
\name{get_artists}
\alias{get_artists}
\title{Get Artists}
\usage{
get_artists(artist_name, return_closest_artist = FALSE,
access_token = get_spotify_access_token())
}
\arguments{
\item{artist_name}{String of artist name}
\item{return_closest_artist}{Boolean for selecting the artist result with the closest match on Spotify's Search endpoint. Defaults to \code{FALSE}.}
\item{access_token}{Spotify Web API token. Defaults to spotifyr::get_spotify_access_token()}
}
\description{
This function searches Spotify's library for artists by name
}
\examples{
\dontrun{
get_artists('radiohead')
}
}
\keyword{artists}
|
89c173898e3684c9448b70e1a6afaf7665168826 | 7ae9ab7071e3c311b260c1eab47768868b7081ce | /man/geocode.Rd | 03bc34ee6e505cac3573bf8aecb459688faf20f4 | [] | no_license | abresler/realtR | 8332519b75e142fbd4ca4733fc4a3dada8619d8e | cb8f52e780e427c7ff60b11e6dfeba8c6e026a93 | refs/heads/master | 2023-07-27T07:33:45.571242 | 2023-07-19T13:09:23 | 2023-07-19T13:09:23 | 124,319,306 | 68 | 11 | null | null | null | null | UTF-8 | R | false | true | 1,295 | rd | geocode.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/aws.R
\name{geocode}
\alias{geocode}
\title{Location geocoder}
\usage{
geocode(
locations = NULL,
search_types = c("neighborhood", "city", "county", "postal_code", "address",
"building", "street", "school"),
use_future = F,
snake_names = F,
limit = 100,
remove_list_columns = F,
return_message = TRUE,
...
)
}
\arguments{
\item{locations}{vector of locations}
\item{search_types}{vector of search parameters options include \itemize{
\item neighborhood - includes neighborhood information
\item city - includes city information
\item county - includes county information
\item postal_code - includes zipcode
\item building - include building info
\item street - include street info
\item school - include school info
}}
\item{use_future}{}
\item{snake_names}{}
\item{limit}{numeric vector of results cannot exceed 100}
\item{remove_list_columns}{}
\item{return_message}{if \code{TRUE} returns a message}
\item{...}{extra parameters}
}
\value{
a \code{tibble}
}
\description{
This function geocodes a users vector of locations
and returns a \code{tibble} with the corresponding results
}
\examples{
geocode(locations = c("Palm Springs", "Bethesda", 10016), limit = 100)
}
\concept{geocoder}
|
474e430fe1f1903dbad043e968e187272a1f2c48 | c53c9860b272c93556765be09577b3a8b1ad31b0 | /Mass-Spec.r | 4afc8b34b2dd4235549ba926bffcdbca8eb4f5de | [] | no_license | swebb1/pdash | b6c6fd7781194d2f18fc29fa10faf6eda8bfc985 | ec03919be14e6aac7be81d54b193362d984c60dd | refs/heads/master | 2020-12-30T14:44:09.114431 | 2017-05-15T14:44:46 | 2017-05-15T14:44:46 | 91,075,332 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,923 | r | Mass-Spec.r | require(gdata)
require(ggplot2)
require(plotly)
require(plyr)
require(d3heatmap)
require(reshape2)
require(RColorBrewer)
require(heatmaply)
library(scales)
cols<-brewer.pal(11,"Spectral")
t<-read.xls("cmce_volcano.xlsx",header=T)
# Placeholder notes for plot options that were never filled in (kept as
# comments so the script can be sourced without error):
# x, y, colour, scale, logx, logy, labels, nudge_x, nudge_y
g<-ggplot(t,aes(y=Other,x=AVG.Log.Ratio,text=Genes,colour=Qvalue))+geom_point()+theme_bw()+
geom_text(aes(label=ifelse(Other>200,as.character(Genes),'')),nudge_y =10)
l<-c("rpb1","rpb2","pfk1")
g<-ggplot(t,aes(y=Other,x=AVG.Log.Ratio,text=Genes,colour=Qvalue))+geom_point()+theme_bw()+
geom_text(aes(label=ifelse(Genes %in% l,as.character(Genes),'')),nudge_y =10)
ggplotly(g)
a<-"X"
b<-"sample.name"
c<-"Total.Area"
m<-read.csv("h3.csv",header=T)
mm<-dcast(m, list(b,a),value.var = c,fun.aggregate = sum)
mn<-cbind(sample=mm[,1],mm[,-1]/rowSums(mm[,-1]))
mmelt<-melt(mn)
g<-ggplot(mmelt,aes(x=sample,y=value,fill=variable))+geom_bar(stat="identity",position="dodge")+
theme_dark()+theme(text = element_text(size=12),axis.text.x = element_text(angle = 90, hjust = 1))+scale_fill_viridis(discrete=T,guide=F)+xlab("Sample name")+
ylab("Normalised proportion")+facet_wrap(~variable,scales="free")
ggsave(g,device = "png",filename = "sample.png")
g2<-ggplot(m,aes(x=sample.name,y=Total.Area,fill=X))+geom_bar(stat="identity",position="fill")+
theme_dark()+theme(text = element_text(size=12),axis.text.x = element_text(angle = 90, hjust = 1))+scale_fill_viridis(discrete=T)+xlab("Sample name")+
ylab("Normalised proportion")
ggsave(g2,device = "png",filename = "sample2.png")
g3<-ggplot(mmelt,aes(x=sample,y=variable,fill=value))+geom_tile(colour="black")+
theme_minimal()+theme(text = element_text(size=12),axis.text.x = element_text(angle = 90, hjust = 1))+
scale_fill_gradientn(name="Normalised proportion",colours = brewer.pal(9,"Purples"))+xlab("Sample name")+geom_text(aes(label=round(value,digits = 4)))+ylab("e")
g3
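# Save the heatmap panel as well, for consistency with the other plots
# (added line; the output filename is an assumption mirroring the pattern above):
ggsave(g3, device = "png", filename = "sample3.png")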
|
db1b67787d613e08e80f962fea65dadae575282a | fda2c6824523211ea7affa998e5cdd9027d55638 | /UFCBouts/model.R | 4c074afbd70c5e405008767789175da98b1cb6f1 | [] | no_license | Raj9677/UFC_Project | 68623b808e603618ed0e23b63eb660c292b9789d | 72280f9d51ada37bd388d39fc107937be34fc257 | refs/heads/main | 2022-12-29T13:16:03.663271 | 2020-10-17T01:42:04 | 2020-10-17T01:42:04 | 304,729,279 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,334 | r | model.R |
# disconnect from db
# dbDisconnect(con)
display_uc = uc[,c("R_wins","R_Weight_lbs","R_Reach_cms","R_Height_cms","R_age","R_odds","R_fighter", "date","weight_class" ,"location", "B_fighter","B_odds","B_age","B_Height_cms","B_Reach_cms","B_Weight_lbs","B_wins")]
getwd()
source("DataQualityReport.R")
columns_not_related = c("index","R_fighter","B_fighter","date","location","country","constant_1","finish","finish_details","finish_round","finish_round_time","total_fight_time_secs",
"gender","no_of_rounds","title_bout","empty_arena")
new_m = m[, !(colnames(m) %in% columns_not_related)]
# First remove columns that have less than 60% data available
incomplete <- DataQualityReport(new_m)[,c(1,4)]
new_m <- new_m[,which(incomplete$PercentComplete >60)]
(columns_missing_data_2000 = colnames(new_m)[ colSums(is.na(new_m)) > 2000])
for (i in 1:ncol(new_m))
{
if (is.numeric(new_m[,i])==TRUE) {new_m[,i]<-as.numeric(new_m[,i])}
else if (is.character(new_m[,i])==TRUE & colnames(new_m[i])=='date') {new_m[,i]<-as.Date(new_m[,i],"%m/%d/%Y")}
else if (is.character(new_m[,i])==TRUE) {new_m[,i]<-as.factor(new_m[,i])}
}
dummies1 = dummyVars( ~ ., data = new_m)
ex1 = data.frame(predict(dummies1,newdata = new_m))
ex1
descrCor <- cor(ex1$Winner.Blue, ex1,use = "pairwise.complete.obs")
descrCor
#mean(new_m$R_sig_str_pct_bout, na.rm = TRUE)
#mean(new_m$R_kd_bout, na.rm = TRUE)
summary(new_m$R_kd_bout)
descrCor1 <- cor(ex1,use = "pairwise.complete.obs")
descrCor1
#Blue positive first , then blue negative - cutoff of 0.1
selected_features = c("R_sig_str_pct_bout","B_odds","R_kd_bout","R_pass_bout","R_tot_str_landed_bout",
"R_td_pct_bout","R_td_landed_bout","R_sig_str_landed_bout","R_sub_attempts_bout","R_tot_str_attempted_bout",
"R_sig_str_attempted_bout","B_kd_bout","R_odds","B_pass_bout","B_sig_str_pct_bout","B_tot_str_landed_bout","B_sig_str_landed_bout",
"B_td_landed_bout","B_td_pct_bout","B_tot_str_attempted_bout","B_sub_attempts_bout",
"age_dif","B_sig_str_attempted_bout","B_td_attempted_bout","win_streak_dif")
logit <- glm(Winner ~ R_sig_str_pct_bout+B_odds+R_kd_bout+R_pass_bout+R_tot_str_landed_bout+
R_td_landed_bout+R_sig_str_landed_bout+R_sub_attempts_bout+
B_kd_bout+B_pass_bout+B_sig_str_pct_bout+B_sig_str_landed_bout+
B_td_pct_bout+B_tot_str_attempted_bout+B_sub_attempts_bout+
B_td_attempted_bout, data=new_m, family= "binomial",na.action = na.exclude)
summary(logit)
round(logit$coefficients,6)
round(exp(logit$coefficients),4)
pLogit <- predict(logit, type="response")
pLogit
yLogit <- ifelse(pLogit>=0.50,"Red","Blue") # predicted class labels
results = data.frame(m$Winner, yLogit, pLogit)
colnames(results) <- c("Winner","yLogit","pLogit")
results
(cm <- table(results$Winner, results$yLogit))
# overall model accuracy
sum(diag(cm))/sum(cm)
#0.8798701
# overall error rate
1-sum(diag(cm))/sum(cm)
table(results$Winner)
sum(cm)
# Load the pROC package
library(pROC)
results$Winner_num <- ifelse(results$Winner == "Blue", 0.5, 1)
# Create a ROC curve
ROC <- roc(results$Winner_num,results$pLogit)
# Plot the ROC curve
plot(ROC, col = "blue")
# Calculate the area under the curve (AUC)
auc(ROC) |
296397d74f07169c4d48395392c688f579213809 | 51a62cf6043094e32b4c75da5fe20ac31f53d711 | /R/questao97.R | 2ed8240902ee4b0362029bb5d5a3d3002e0caad4 | [
"MIT"
] | permissive | AnneLivia/CollegeProjects | 9e32c4da216caaa973ebd4e4fe472f57557a3436 | 96d33d0ed79b5efa8da4a1401acba60b0895e461 | refs/heads/master | 2022-12-23T10:13:03.503797 | 2022-12-12T16:35:29 | 2022-12-12T16:35:29 | 128,656,614 | 2 | 0 | MIT | 2022-12-12T16:36:09 | 2018-04-08T15:44:18 | PHP | ISO-8859-1 | R | false | false | 343 | r | questao97.R | n1<-as.integer(readline("Digite o inicio: "))
n2<-as.integer(readline("Digite o fim: "))
for(cont in n1:n2){
p<-0
for(cont2 in 1:cont){
primos<-cont%%cont2
if(isTRUE(all.equal(primos,0))){
p<-p+1
}
}
if(isTRUE(all.equal(p,2))){
cat("\nNúmero primo: ",cont)
}else{
cat("\nNúmero não primo: ",cont)
}
}
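
# For illustration only (vectorized sketch, not part of the original
# exercise): the same primality test can be written without the inner loop
# by counting divisors directly.
is_prime <- function(n) sum(n %% seq_len(n) == 0) == 2
# e.g. is_prime(7) is TRUE; is_prime(8) is FALSE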
|
eee5e9dbbef9dfeb8792e3d9af0209f88dcc9768 | ae2356e4364617ba6b5fe4c395467186c8bc678a | /projectDataMiningDenzel.R | d1dd112cb09c52bf815faf89be907fa57e6d9bd4 | [] | no_license | dmathew1/R | b28dde53a0044255b0fa3ea009b6ea8e38c83b7c | 4427a9890683505c39712b2fc13a29ed6ec7ad1a | refs/heads/master | 2021-08-27T15:38:31.911510 | 2015-11-23T08:19:12 | 2015-11-23T08:19:12 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,551 | r | projectDataMiningDenzel.R | library(RWeka)
library(caret)
library(oblique.tree)
library(e1071)
#Make sure the following packages are installed:
# 1. RWeka
# 2. caret
# 3. oblique.tree
# 4. e1071
# 5. BradleyTerry2
#LifeExpectancy variable
lifeExp <- read.csv("C:\\Users\\Denzel\\Desktop\\life_expectancy.csv")
#### Partitions the dataset into training and test data sets ######
divideDataSet <- function(orgData){
set.seed(1618)
testData <- orgData[sample(nrow(orgData),size=nrow(orgData)*0.2, replace = FALSE),]
trainData <- orgData[sample(nrow(orgData),size=nrow(orgData)*0.8,replace = FALSE),]
my_data <- list("train" = trainData, "test" = testData)
return(my_data)
}
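
# Hedged alternative (added for illustration; divideDataSetDisjoint is not
# part of the original script): the split above samples the test and training
# rows independently, so the same row can appear in both sets. Drawing a
# single index vector and splitting it keeps the two sets disjoint:
divideDataSetDisjoint <- function(orgData){
  set.seed(1618)
  test_idx <- sample(nrow(orgData), size = floor(nrow(orgData) * 0.2))
  list(train = orgData[-test_idx, , drop = FALSE],
       test  = orgData[test_idx, , drop = FALSE])
}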
#################### IRIS FUNCTIONS #########################
myIrisC45 <- function(){
dataset <- divideDataSet(iris)$train
fit <- J48(Species~.,data=dataset)
myC45predict(fit)
}
myC45predict <- function(fit){
testData <- divideDataSet(iris)$test
predictions <- predict(fit,testData)
data <- summary(fit,newdata=testData,class=TRUE)
stuff <- confusionMatrix(predictions, testData$Species)
return(data)
}
myIrisRIPPER <- function(){
dataset <- divideDataSet(iris)$train
JRipFit <- JRip(Species~.,dataset)
myC45predict(JRipFit)
}
myIrisOblique <- function(){
dataset <- divideDataSet(iris)$train
fit <- oblique.tree(Species~.,dataset,oblique.splits = "only")
myIrisObliquePrediction(fit)
}
myIrisObliquePrediction <- function(fit){
testData <- divideDataSet(iris)$test
predictions <- predict(fit,testData)
data <- summary(fit,newdata=testData,class=TRUE)
return(data)
}
myIrisNaiveBayes <- function(){
dataset <- divideDataSet(iris)$train
fit <- naiveBayes(Species~.,dataset)
myIrisNaiveBayesPrediction(fit)
}
myIrisNaiveBayesPrediction <- function(fit){
testData <- divideDataSet(iris)$test
predictions <- predict(fit,testData)
  stuff <- confusionMatrix(predictions, testData$Species)
  return(stuff)
}
myIrisKnn <- function(){
dataset <- divideDataSet(iris)$train
fit <- IBk(Species~.,dataset)
myC45predict(fit)
}
######## My Life Expectancy ########
myLE.C45 <- function(){
dataset <- divideDataSet(lifeExp)$train
fit <- J48(dataset$Continent~.,data=dataset)
myLE.C45Predict(fit)
}
myLE.C45Predict <- function(fit){
testData <- divideDataSet(lifeExp)$test
predictions <- predict(fit,testData)
stuff <- confusionMatrix(predictions, testData$Continent)
return(stuff)
}
myLE.RIPPER <- function(){
dataset <- divideDataSet(lifeExp)$train
fit <- JRip(dataset$Continent~.,data=dataset)
myLE.C45Predict(fit)
}
myLE.Oblique <- function(){
dataset <- divideDataSet(lifeExp)$train
testData <- divideDataSet(lifeExp)$test
fit <- oblique.tree(formula=Continent~., data=dataset[,3:8], oblique.splits="only")
predictions <- predict(fit, testData[,3:8], type="class")
stuff <- confusionMatrix(predictions, testData$Continent)
return(stuff)
}
myLE.naiveBayes <- function(){
dataset <- divideDataSet(lifeExp)$train
fit <- naiveBayes(Continent~.,dataset)
myLE.naiveBayesPrediction(fit)
}
myLE.naiveBayesPrediction <- function(fit){
testData <- divideDataSet(lifeExp)$test
predictions <- predict(fit,testData)
stuff <- confusionMatrix(predictions, testData$Continent)
return(stuff)
}
myLE.knn <- function(){
dataset <- divideDataSet(lifeExp)$train
fit <- IBk(Continent~.,dataset)
myLE.C45Predict(fit)
}
removal <- function(){
detach("package:RWeka",unload=TRUE)
detach("package:caret",unload=TRUE)
detach("package:oblique.tree",unload=TRUE)
detach("package:e1071",unload=TRUE)
} |
4ce38d321e040973bd9811f63232c0e01e23e864 | e1a3abc30055c43440773900ec95ebbbea4815a9 | /R/Task6.R | c6626df0d170ed99938ce152f3ffb1113a177119 | [] | no_license | lnonell/IntegromicsPractical | ca7dd2316aa208615c81b00a8f50c809db6d7972 | 81a7eff69541782493a438b41a1e8a6eaddbdeba | refs/heads/master | 2021-05-23T12:04:26.230916 | 2020-06-12T12:11:31 | 2020-06-12T12:11:31 | 253,277,526 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 8,327 | r | Task6.R | #Task 6: Blanca Rodriguez Fernandez
#Purpose: Apply MFA to the mRNA, CN and methylation data comparing stage iv vs stage i.
#input: files of task1, task2, task3, task4 + path for output files
#outputs: dataframe 100 most correlated variables with PC1 and PC2 + MFA plots in png format
task6 <- function(df_samples, df.rna, df.cn, df.met, pth = getwd(), mean.meth = FALSE,...){
###########################
## Load needed libraries ##
###########################
suppressPackageStartupMessages(library(FactoMineR))
suppressPackageStartupMessages(library(ggplot2))
suppressPackageStartupMessages(library(factoextra))
suppressPackageStartupMessages(library(stringr))
##################################
## Define filter by SD function ##
##################################
filterSD <- function(data, percentage = 0.1){
SD <- apply(data,1,sd)
top10sd <- head(sort(SD,decreasing = TRUE), round(nrow(data)*percentage))
data.f <- data[names(top10sd),]
return(data.f)
}
##################################
## Transform CN to class matrix ##
##################################
n.cn <- apply(df.cn,2, as.numeric) #class numeric is needed
rownames(n.cn) <- rownames(df.cn)
## Now, we can perfom MFA w/o methylation data. If methylation data
## is missing, MFA will be applied to CN and expression data.
if(missing(df.met)){
##########################
## MFA without methylation
##########################
warning(print("Methylation data is missing"))
if (pth == getwd()){
warning(print("Default output file is your current working directory"))
}
## Check arguments ##
#####################
stopifnot(is.data.frame(df_samples))
stopifnot(is.data.frame(df.rna))
stopifnot(is.data.frame(df.cn))
stopifnot(is.numeric(n.cn[,1]))
## Filter 10% genes by standard deviation ##
############################################
rna.f <- filterSD(df.rna)
cn.f <- filterSD(n.cn)
## Set colnames order equal to task1 ##
#######################################
rna.f <- rna.f[, order(match(colnames(rna.f), as.character(df_samples[,1])))]
cn.f <- cn.f[, order(match(colnames(cn.f), as.character(df_samples[,1])))]
if(identical(colnames(rna.f), colnames(cn.f)) != TRUE){
stop(print("Samples are not the same"))
}
## Data preparation for MFA ##
##############################
## Remove NAs
rna4MFA <- rna.f[!is.na(rna.f[,1]),]
cn4MFA <- cn.f[!is.na(cn.f[,1]),]
## Label genes: c = copy number; r = expression data
rownames(rna4MFA) <- paste(rownames(rna4MFA),"r",sep=".")
rownames(cn4MFA) <- paste(rownames(cn4MFA),"c",sep=".")
## Define conditions
cond <- as.factor(df_samples$tumor_stage)
## Number of genes in each dataset
rna.l<-nrow(rna4MFA)
cn.l<-nrow(cn4MFA)
## New data frame with individuals in rows and variables in columns
data4Facto<-data.frame(cond,t(rna4MFA),t(cn4MFA))
cat("\nGetting participant identifier\n")
rownames(data4Facto) <- paste(str_sub(df_samples$barcode,-4), cond, sep="_")
## Apply MFA ##
###############
res.cond <- MFA(data4Facto, group=c(1,rna.l,cn.l), type=c("n","c","c"),
ncp=2, name.group=c("cond","RNA","CN"),num.group.sup=c(1), graph = FALSE)
## Obtain informative plots ##
##############################
## Create folder to store the plots
dir.create(file.path(pth, "./Results_MFA"),showWarnings = FALSE)
# Group of variables
fviz_mfa_var(res.cond, "group")
ggsave(file="./Results_MFA/VariableGroupsMFA.png")
# Partial individuals
fviz_mfa_ind(res.cond,habillage = cond, palette = c("#00AFBB", "#E7B800", "#FC4E07"),
addEllipses = TRUE, ellipse.type = "confidence",
repel = TRUE
)
ggsave(file="./Results_MFA/IndividualsMFA.png")
# Partial axes
fviz_mfa_axes(res.cond, palette = c("#00AFBB", "#E7B800", "#FC4E07"))
ggsave(file="./Results_MFA/PartialAxesMFA.png")
cat("\nCheck MFA plots in your working directory or output path\n")
## Return 100 most correlated variables ##
##########################################
## PC1 (dimension 1 of global PCA)
PC1 <- names(head(sort(res.cond$global.pca$var$cor[,1],decreasing = TRUE),100))
## PC2 (dimension 2 of global PCA)
PC2 <- names(head(sort(res.cond$global.pca$var$cor[,2],decreasing = TRUE), 100))
cat("\nThese are the 100 most correlated genes to PC1 and PC2\n")
return(data.frame(PC1,PC2))
} else{
##########################
## MFA with methylation ##
##########################
## Check arguments ##
#####################
stopifnot(is.data.frame(df_samples))
stopifnot(is.data.frame(df.rna))
stopifnot(is.data.frame(df.cn))
stopifnot(is.numeric(n.cn[,1]))
stopifnot(is.numeric(df.met[,1]))
## Drop total mean column from methylation data ##
##################################################
if(mean.meth == TRUE) {
df.met <- df.met[,-1]
}
## Filter 10% genes by standard deviation ##
############################################
rna.f <- filterSD(df.rna)
cn.f <- filterSD(n.cn)
met.f <- filterSD(df.met)
## Set colnames order equal to task1 ##
#######################################
rna.f <- rna.f[, order(match(colnames(rna.f), as.character(df_samples[,1])))]
cn.f <- cn.f[, order(match(colnames(cn.f), as.character(df_samples[,1])))]
met.f <- met.f[, order(match(colnames(met.f), as.character(df_samples[,1])))]
if (all(sapply(list(colnames(rna.f),colnames(cn.f),colnames(met.f), df_samples[,1]), function(x) x == df_samples[,1])) == FALSE){
stop(print("Samples are not the same"))}
## Data preparation for MFA ##
##############################
## Remove NAs
rna4MFA <- rna.f[!is.na(rna.f[,1]),]
cn4MFA <- cn.f[!is.na(cn.f[,1]),]
met4MFA <- met.f[!is.na(met.f[,1]),]
## Label genes: c = copy number; r = expression data; m = methylation
rownames(rna4MFA) <- paste(rownames(rna4MFA),"r",sep=".")
rownames(cn4MFA) <- paste(rownames(cn4MFA),"c",sep=".")
rownames(met4MFA) <- paste(rownames(met4MFA),"m",sep=".")
## Define conditions
cond <- as.factor(df_samples$tumor_stage)
## Number of genes in each dataset
rna.l<-nrow(rna4MFA)
cn.l<-nrow(cn4MFA)
met.l<-nrow(met4MFA)
# New data frame with individuals in rows and variables in columns
data4Facto<-data.frame(as.factor(cond),t(rna4MFA),t(cn4MFA),t(met4MFA))
cat("\nGetting participant identifier\n")
rownames(data4Facto) <- paste(str_sub(df_samples$barcode,-4), cond, sep="_")
## Apply MFA ##
###############
res.cond <- MFA(data4Facto, group=c(1,rna.l,cn.l,met.l), type=c("n","c","c","c"),
ncp=2, name.group=c("cond","RNA","CN","MET"),num.group.sup=c(1), graph = FALSE)
## Obtain informative plots ##
##############################
## Create folder to store the plots
dir.create(file.path(pth, "./Results_MFA"), showWarnings = FALSE)
# Group of variables
fviz_mfa_var(res.cond, "group")
ggsave(file="./Results_MFA/VariableGroupsMFA.png")
# Partial individuals
fviz_mfa_ind(res.cond,habillage = cond, palette = c("#00AFBB", "#E7B800", "#FC4E07"),
addEllipses = TRUE, ellipse.type = "confidence",
repel = TRUE
)
ggsave(file="./Results_MFA/IndividualsMFA.png")
# Partial axes
fviz_mfa_axes(res.cond, palette = c("#00AFBB","#999999", "#E7B800", "#FC4E07"))
ggsave(file="./Results_MFA/PartialAxesMFA.png")
cat("\nCheck MFA plots in your working directory or output path\n")
## Return 100 most correlated variables with
## PC1 (dimension 1 of global PCA)
PC1 <- names(head(sort(res.cond$global.pca$var$cor[,1],decreasing = TRUE),100))
## PC2 (dimension 2 of global PCA)
PC2 <- names(head(sort(res.cond$global.pca$var$cor[,2],decreasing = TRUE), 100))
cat("\nThese are the 100 most correlated genes to PC1 and PC2\n")
return(data.frame(PC1,PC2))
}
}
|
a40c64f7a7d38c20c19c640fa1260241d2af8a9e | 2632c34e060fe3a625b8b64759c7f1088dca2e28 | /R/convert_bodymap.R | 9b07428832849f301e6fc250db2a1b463e7500ef | [
"MIT"
] | permissive | emcramer/CHOIRBM | 0319103968ac662cbec472585b9b2accfdea96b1 | 178c6833e1235c8d7f4d0fae9f550860eb7f36a3 | refs/heads/master | 2022-11-25T08:12:15.576043 | 2022-10-28T19:57:40 | 2022-10-28T19:57:40 | 329,775,974 | 5 | 2 | null | null | null | null | UTF-8 | R | false | false | 1,161 | r | convert_bodymap.R | #' convert_bodymap
#' Helper function to convert a single bodymap
#' @param segments a character vector containing segment numbers as individual
#' strings in the vector that need to be adjusted/standardized
#' @return a character vector containing standardized segment numbers as
#' individual strings in the vector
#'
#' @examples
#' exampledata <- data.frame(
#' GENDER = as.character(c("Male", "Female", "Female")),
#' BODYMAP_CSV = as.character(c("112,125","112,113","128,117"))
#' )
#' convert_bodymap(exampledata[2,2])
#' @export
convert_bodymap <- function(segments){
if(length(segments) == 0){
return("")
}
for(i in 1:length(segments)){
# for each segment, check to see if it needs to be set to the
# male numbering
if(segments[i] == "112"){
segments[i] <- "116"
} else if(segments[i] == "113"){
segments[i] <- "117"
} else if(segments[i] == "114"){
segments[i] <- "112"
} else if(segments[i] == "115"){
segments[i] <- "113"
} else if(segments[i] == "116"){
segments[i] <- "114"
} else if(segments[i] == "117"){
segments[i] <- "115"
}
}
return(segments)
}
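
# Usage sketch (added for illustration): the BODYMAP_CSV field in the example
# above holds comma-separated segment numbers, so it is split into a character
# vector before conversion.
segs <- strsplit("112,125", ",")[[1]]
convert_bodymap(segs)  # returns c("116", "125")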
|
47e83ff9199e9cd405592fe0888cf49c8f13cb17 | 14b866ed3002fc26ef02db5b75a2bc23192c0fed | /dev-r-shiny-simulamicrog/classMatrixImage.R | 0994818f239aed31fc00096ef5458c773491fc3e | [] | no_license | sergiocy/scripts-tests | 76333d26721f1bd6e61b473f6ce108f3a5bb2ccb | a55814799deae47954a7ca8320d07258d4b1a8ca | refs/heads/master | 2022-09-16T10:49:41.689459 | 2020-07-19T12:56:46 | 2020-07-19T12:56:46 | 178,724,341 | 0 | 0 | null | 2022-09-01T23:26:30 | 2019-03-31T18:09:41 | R | UTF-8 | R | false | false | 139 | r | classMatrixImage.R |
#########
# S4-CLASS TO PROCESS MATRIX ASSOCIATED TO AN IMAGE
# comenzado: 19/06/2016
#########
##### PLOT 3D ......esto en la clase S4 |
78ef268059e52d7f697804d0347de2cdc53c3a6f | e972a6dda07fcfe3dbb8f898462a6e634300337e | /metDownloadR/man/getData.Rd | 882f0ee738bbbb64ce900b3288fac0a81e58ae28 | [] | no_license | NEONScience/is-thresholds | d146067ad931475076cff6b529ee0ff4a101333b | a858e654f3a23cba720e09ce2d23f37653cfd316 | refs/heads/master | 2022-04-12T21:32:19.209947 | 2020-03-23T20:25:11 | 2020-03-23T20:25:11 | 237,132,782 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 1,455 | rd | getData.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_data.R
\name{getData}
\alias{getData}
\title{Pull data for a site found with metScanR}
\usage{
getData(site_meta, start_date, end_date, temp_agg)
}
\arguments{
\item{site_meta}{a metScanR list element.}
\item{start_date}{A YYYY-MM-DD hh:mm:ss formated start time}
\item{end_date}{A YYYY-MM-DD hh:mm:ss formated end time}
}
\value{
Data frame data for the input site found with metScanR, if available.
}
\description{
This function takes an input of the site metadata from a metScanR search,
as well as start and end dates to download data for, and downloads the
corresponding data, if available.
\examples{
\dontrun{
# Get Data for one SCAN site near CPER
cper_sites=metScanR::getNearby(siteID="NEON:CPER", radius = 30)
uscrn_out=metDownloadR::getData(site_meta = cper_sites$USW00094074,
start_date = "2018-10-01",
end_date = "2018-10-31",
temp_agg="monthly")
# Get all october 2018 data from sites within a 5 km radius of CPER
out=lapply(
metScanR::getNearby(siteID="NEON:CPER", radius = 30),
metDownloadR::getData,
start_date = "2018-10-01",
end_date = "2018-10-31",
temp_agg="daily")
}
}
\seealso{
Currently none
}
\author{
Robert Lee \email{rlee@battelleecology.org}\cr
Josh Roberti\cr
}
\keyword{USCRN,}
\keyword{commissioning}
\keyword{data}
\keyword{data,}
\keyword{gaps,}
\keyword{process}
\keyword{quality,}
|
f465745a62ce87d2df3a8581033be1c931774a6e | 62970f7f4c033ee9353ac8586ba6f69a4425a5e8 | /R/cvmboost.R | 60a2ec2975d3b4567f31f4154eb65e1089e99cf3 | [] | no_license | cran/bujar | 7d86a5019e433ef5507d834324a5ea1edffc6f20 | 9dbf65067d18bd65681c51d167615b297860383e | refs/heads/master | 2023-07-07T01:03:24.380034 | 2023-06-25T01:40:02 | 2023-06-25T01:40:02 | 19,339,814 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 630 | r | cvmboost.R | cvmboost <- function(obj,ndat,nfold=10,figname=NULL,trace=FALSE){
### inspect coefficient path and AIC-based stopping criterion
### 10-fold cross-validation
n <- ndat
k <- nfold
ntest <- floor(n / k)
cv10f <- matrix(c(rep(c(rep(0, ntest), rep(1, n)), k - 1),
rep(0, n * k - (k - 1) * (n + ntest))), nrow = n)
cvm <- cvrisk(obj, folds = cv10f)
if(trace)
print(cvm)
if(!is.null(figname)){
postscript(figname, height = 6.9, width = 6.6,horizontal = FALSE, onefile = FALSE, print.it = FALSE)
plot(cvm)
dev.off()
}
return(mstop(cvm))
}
|
ca3fd45e52c676d9b827bb492c36cdc8a7655204 | 584e40563e6250c23558555fd624200108021f2d | /R/deseq.R | 3889716ce50f7f8f769761e90c4bedb6c1ff3ebc | [] | no_license | opalasca/integrated_miRNA_analysis | 91803937709f79d10f22b012049dec16c5a68763 | 9722f094c297ea530ac42aac1eea2643e034317d | refs/heads/master | 2020-03-18T14:35:42.086846 | 2018-07-06T13:10:27 | 2018-07-06T13:10:27 | 134,856,248 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,052 | r | deseq.R | #install.packages("Vennerable", repos="http://R-Forge.R-project.org")
#source("https://bioconductor.org/biocLite.R")
#biocLite("affy")
#biocLite("DESeq2")
#biocLite("pheatmap")
setwd("~/Desktop/IBD/DESeq2")
require("DESeq2")
cts <- read.csv(file="miRNAs_expressed_all_samples_26_04_2018_t_13_43_32.csv", sep="\t")
#remove duplicate mature miRNAs
cts<-cts[order(cts$X.miRNA,-cts$read_count),]
cts<-cts[!duplicated(cts$X.miRNA),]
#cts <- as.matrix(cts)
cts<-cts[,c(1,5:41)]
# Assign the first column as row names
cts2 <- cts[,-1]
cts2 <- as.matrix(cts2)
rownames(cts2) <- cts[,1]
coldata <- read.csv(file="config2.txt", sep="\t", header=FALSE)
coldata <- coldata[,c(2,3,4)]
coldata2 <- coldata[,-1]
rownames(coldata2) <- coldata[,1]
colnames(coldata2) <- c("condition","type")
#colnames(coldata2) <- c("condition")
#arrange columns of cts in the same order as rows of coldata2
all(rownames(coldata2) %in% colnames(cts2))
cts2 <- cts2[, rownames(coldata2)]
all(rownames(coldata2) == colnames(cts2))
dds <- DESeqDataSetFromMatrix(countData = cts2,
colData = coldata2,
design = ~ condition)
dds <- DESeq(dds)
res_UC_DD <- results(dds, contrast=c("condition","UC","DD"))
res_CD_DD <- results(dds, contrast=c("condition","CD","DD"))
res_UC_CD <- results(dds, contrast=c("condition","UC","CD"))
resOrdered_UC_DD <- as.data.frame(res_UC_DD[order(res_UC_DD$pvalue),])
resOrdered_CD_DD <- as.data.frame(res_CD_DD[order(res_CD_DD$pvalue),])
resOrdered_UC_CD <- as.data.frame(res_UC_CD[order(res_UC_CD$pvalue),])
write.table(resOrdered_UC_DD, file="condition_UC_vs_DD.csv", sep="\t")
write.table(resOrdered_CD_DD, file="condition_CD_vs_DD.csv", sep="\t")
write.table(resOrdered_UC_CD, file="condition_UC_vs_CD.csv", sep="\t")
resOrdered_UC_DD$mirna <-rownames(resOrdered_UC_DD)
resOrdered_CD_DD$mirna <-rownames(resOrdered_CD_DD)
resOrdered_UC_CD$mirna <-rownames(resOrdered_UC_CD)
thr<-0.05
resOrdered_UC_DD <- subset(resOrdered_UC_DD, padj < thr)
resOrdered_CD_DD <- subset(resOrdered_CD_DD, padj < thr)
resOrdered_UC_CD <- subset(resOrdered_UC_CD, padj < thr)
resOrdered_UC_DD$common_id <-gsub("_.*","",resOrdered_UC_DD$mirna)
resOrdered_CD_DD$common_id <-gsub("_.*","",resOrdered_CD_DD$mirna)
resOrdered_UC_CD$common_id <-gsub("_.*","",resOrdered_UC_CD$mirna)
common_1_2 <- as.data.frame(merge(resOrdered_UC_DD[c(1,2,6,7)], resOrdered_CD_DD[c(1,2,6,7)], by=c('mirna','mirna'))[c(1,2,3,4,6,7)])
common_1_3 <- as.data.frame(merge(resOrdered_UC_DD[c(1,2,6,7)], resOrdered_UC_CD[c(1,2,6,7)], by=c('mirna','mirna'))[c(1,2,3,4,6,7)])
common_2_3 <- as.data.frame(merge(resOrdered_CD_DD[c(1,2,6,7)], resOrdered_UC_CD[c(1,2,6,7)], by=c('mirna','mirna'))[c(1,2,3,4,6,7)])
common_1_2_3 <- as.data.frame(merge(common_1_2, resOrdered_UC_CD[c(1,2,6,7)], by=c('mirna','mirna')))
# Venn diagram of the three DE comparisons (requires the Vennerable package,
# see the commented install line at the top of this script)
library(Vennerable)
vennD <- Venn(Sets = list(UC_DD = resOrdered_UC_DD$mirna,
                          CD_DD = resOrdered_CD_DD$mirna,
                          UC_CD = resOrdered_UC_CD$mirna))
plot(vennD)
summary(resOrdered_UC_DD)
sum(res_UC_DD$padj < 0.1, na.rm=TRUE)
plotMA(res_UC_DD)
resNorm <- lfcShrink(dds, coef=2, type="normal")
plotMA(resNorm)
plotCounts(dds, gene=which.min(res_UC_DD$padj), intgroup="condition")
library("pheatmap")
ntd <- normTransform(dds)
vsd <- vst(dds, blind=FALSE)
rld <- rlog(dds, blind=FALSE)
select <- order(rowMeans(counts(dds,normalized=TRUE)),
decreasing=TRUE)[1:20]
df <- as.data.frame(colData(dds)[,c("condition","type")])
pheatmap(assay(ntd)[select,], cluster_rows=FALSE, show_rownames=FALSE,
cluster_cols=FALSE, annotation_col=df)
vsd <- vst(dds, blind=FALSE)
library("RColorBrewer")
vsd<-ntd
sampleDists <- dist(t(assay(vsd)))
sampleDistMatrix <- as.matrix(sampleDists)
rownames(sampleDistMatrix) <- paste(vsd$condition, vsd$type, sep="-")
colnames(sampleDistMatrix) <- NULL
colors <- colorRampPalette( rev(brewer.pal(9, "Blues")) )(255)
pheatmap(sampleDistMatrix,
clustering_distance_rows=sampleDists,
clustering_distance_cols=sampleDists,
col=colors)
vsd<-ntd
library("affy")
plotPCA(vsd, intgroup=c("condition", "type"))
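# Sketch (DESeq2 API): extract the PCA coordinates for custom ggplot styling
# pcaData <- plotPCA(vsd, intgroup = c("condition", "type"), returnData = TRUE)
# percentVar <- round(100 * attr(pcaData, "percentVar"))
# ggplot(pcaData, aes(PC1, PC2, color = condition, shape = type)) + geom_point()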
|
f602ecd9faf91c82c21f946ab9c32bd16562a38b | 54b4976030ae6a42e10282c8f41609ef266721c9 | /R/ecd-numericMpfr-class.R | ba745e5cc9a30df746a520c303de8618e7aaf971 | [] | no_license | cran/ecd | b1be437b407e20c34d65bcf7dbee467a9556b4c1 | 18f3650d6dff442ee46ed7fed108f35c4a4199b9 | refs/heads/master | 2022-05-18T20:24:56.375378 | 2022-05-09T20:10:02 | 2022-05-09T20:10:02 | 48,670,406 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 929 | r | ecd-numericMpfr-class.R | #' The numericMpfr class
#'
#' The S4 class union of numeric and mpfr, primarily used to define slots in ecd class.
#' The use of MPFR does not necessarily increase precision. Its major strength in ecd
#' is ability to handle very large numbers when studying asymptotic behavior, and
#' very small numbers caused by small sigma when studying high frequency option data.
#' Since there are many convergence issues with integrating PDF using native integrateR library,
#' the ecd package adds many algorithms to improve its performance. These additions
#' may decrease precision (knowningly or unknowningly) for the sake of increasing performance.
#' More research is certainly needed in order to cover a vast range of parameter space!
#'
#' @name numericMpfr-class
#'
#' @importClassesFrom Rmpfr mpfr mpfrArray mpfrMatrix
#'
#' @exportClass numericMpfr
setClassUnion("numericMpfr", c("numeric", "mpfr", "mpfrArray"))
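# Illustration (not run; assumes Rmpfr is installed): a slot typed as
# "numericMpfr" accepts plain doubles and high-precision mpfr numbers alike.
# setClass("Demo", representation(x = "numericMpfr"))
# new("Demo", x = 1.5)                               # plain numeric
# new("Demo", x = Rmpfr::mpfr(1.5, precBits = 120))  # mpfr number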
# end
|
f91941ce50f5a1ea95a91d459d1ad59848f76429 | d857e340220e755013eae13898f2ac0fc41aa38d | /man/createThresholdPlot.Rd | effea02774c697085fede7329a5dac22ffb6fee3 | [] | no_license | PriceLab/FPML | 5ac19c88a6b5a08a74bfef7af59260c92380cd48 | daeaf27fec2bce3e7cfd88df2443552fec1ab77e | refs/heads/master | 2021-08-31T15:09:14.105450 | 2017-12-21T20:51:09 | 2017-12-21T20:51:09 | 110,296,991 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 851 | rd | createThresholdPlot.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/buildModels.R
\name{createThresholdPlot}
\alias{createThresholdPlot}
\title{Create a threshold plot}
\usage{
createThresholdPlot(stats.df, modelName, plotFile)
}
\arguments{
\item{stats.df}{A dataframe of statistics generated by one of the model runs}
\item{modelName}{A string containing a model name from the stats data frame. To see the options
available, use unique(stats.df$Model.Name)}
\item{plotFile}{A path to a file for saving the generated plot as a .png file}
}
\value{
A .png file containing a plot of 3 metrics (sensitivity, specificity, ppv) as a function
of threshold
}
\description{
Create a "threshold" plot, which plots the 3 relevant prediction metrics against the
threshold to enable identification of the ideal threshold for ChIP-seq hit probability.
|
1ac39730411d5cc08a43bf7d562d322119f90acb | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/IDetect/examples/ht_ID_cplm.Rd.R | 45a4181828c3d21393478c975ba9b04289063b3d | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 572 | r | ht_ID_cplm.Rd.R | library(IDetect)
### Name: ht_ID_cplm
### Title: Apply the Isolate-Detect methodology for multiple change-point
### detection in a continuous, piecewise-linear vector with non Gaussian
### noise
### Aliases: ht_ID_cplm
### ** Examples
single.cpt <- c(seq(0, 1999, 1), seq(1998, -1, -1))
single.cpt.student <- single.cpt + rt(4000, df = 5)
cpt.single <- ht_ID_cplm(single.cpt.student)
three.cpt <- c(seq(0, 3998, 2), seq(3996, -2, -2), seq(0,3998,2), seq(3996,-2,-2))
three.cpt.student <- three.cpt + rt(8000, df = 5)
cpt.three <- ht_ID_cplm(three.cpt.student)
|
1e080882c30dae022d4cb93321ce3458f0d6bd4d | b5f752f3d65cd0d94015b14814778c9a49ae85bc | /covid_brasil.R | 068397065665062cde6e5056ec6ad5073fbd065b | [] | no_license | GuillaumePressiat/covidbrasil | 178c589f9af9e550fb7a567edb737b7648eabfda | 150907f5d33c7c4443b293e38f54abfccf90ed6b | refs/heads/master | 2022-11-19T18:04:39.707140 | 2020-07-18T15:49:32 | 2020-07-18T15:49:32 | 280,686,124 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 7,833 | r | covid_brasil.R |
library(shiny)
library(RColorBrewer)
library(dplyr)
# library(rgdal)
library(leaflet)
library(sf)
library(rmapshaper)
# install.packages('rmapshaper')
dataset <- read.csv(file = "dataset_res.csv", stringsAsFactors = FALSE, encoding = 'latin1') %>%
select(Codigo_IBGE, Sigla_Estado, cidade, longitude, latitude, Data, Total_Exames, Total_positivos, Indice_Positividade)
dataset$Data <- as.Date(dataset$Data)
dataset$Codigo_IBGE <- as.character(dataset$Codigo_IBGE)
# for one day first
dataset1 <- dataset
# %>%
# filter(Data == as.Date('2020-03-10'))
data_uf <- st_read('br_unidades_da_federacao', layer = 'BR_UF_2019')
# plot(data_uf)
# Summarise by UF
dataset1 <- dataset1 %>%
group_by(Data, Sigla_Estado) %>%
summarise(Total_positivos = sum(Total_positivos, na.rm = TRUE))
data_uf <- data_uf %>% ms_simplify(keep = 0.05)
#plot(data_uf)
data.p <- data_uf
casos <- data_uf %>%
left_join(dataset1, by = c('SIGLA_UF' = 'Sigla_Estado')) %>%
mutate(popup = paste0(SIGLA_UF," -", NM_UF, " : ", prettyNum(Total_positivos, big.mark = ",")))
data <- casos
pal_fun <- colorNumeric(scico::scico(n = 300, palette = "tokyo", direction = - 1, end = 0.85), data$Total_positivos, na.color = 'grey90')
data <- sf::st_transform(data,sp::CRS('+proj=longlat +datum=WGS84'))
data.p <- sf::st_transform(data.p,sp::CRS('+proj=longlat +datum=WGS84'))
# Render a single static map for one day
tictoc::tic()
leaflet(data %>%
filter(Data == lubridate::as_date('2020-03-11'))) %>%
#addTiles() %>%
addProviderTiles("CartoDB", options = providerTileOptions(opacity = 1, minZoom = 3, maxZoom = 5), group = "Open Street Map") %>%
#setView(lng = -100, lat = 40, zoom = 3) %>%
addPolygons(color = 'white', weight = 1.4,
group = 'base',
fillColor = ~pal_fun(Total_positivos),
fillOpacity = 1, stroke = 2,
label = ~ popup) %>%
addLegend("bottomleft", pal = pal_fun, values = ~Total_positivos,
title = 'Confirmed', opacity = 1)
tictoc::toc()
ui <- bootstrapPage(
tags$head(
tags$link(href = "https://fonts.googleapis.com/css?family=Oswald", rel = "stylesheet"),
tags$style(type = "text/css", "html, body {width:100%;height:100%; font-family: Oswald, sans-serif;}"),
#includeHTML("meta.html"),
tags$script(src="https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/3.5.16/iframeResizer.contentWindow.min.js",
type="text/javascript")),
leafletOutput("covid", width = "100%", height = "100%"),
absolutePanel(
bottom = 20, left = 40, draggable = TRUE, width = "20%", style = "z-index:500; min-width: 300px;",
titlePanel("Brasil | Covid"),
# br(),
    em('data is shown on mouse hover'),
sliderInput("jour",h3(""),
min = min(dataset1$Data), max = max(dataset1$Data), step = 1,
value = max(dataset1$Data),
animate = animationOptions(interval = 1700, loop = FALSE)),
shinyWidgets::prettyRadioButtons('sel_data', 'data',
choices = c('Total positivos'),
selected = 'Total positivos',
shape = "round", animation = "jelly",plain = TRUE,bigger = FALSE,inline = FALSE),
#shinyWidgets::prettySwitch('pop', "Ratio / 100 000 inhabitants", FALSE),
    #em(tags$small("*note on this ratio: a patient may be hospitalized more than once")),
    #em(tags$small(br(), "For deaths, only those occurring in hospital are counted")),
h5(tags$a(href = 'http://github.com/GuillaumePressiat', 'Guillaume Pressiat'), ' & ',
tags$a(href = 'http://github.com/fsvm78', 'fsvm78')),
h5(em('Last update : ' , 'not available')),
#br(),
tags$small(
tags$li(tags$a(href = 'http://www.fabiocrameri.ch/resources/ScientificColourMaps_FabioCrameri.png', 'Scientific colour maps'), ' with ',
tags$a(href = 'https://cran.r-project.org/web/packages/scico/index.html', 'scico package'))))
)
server <- function(input, output) {
# Confirmed, People_Hospitalized, Deaths, People_Tested
get_data <- reactive({
temp <- data[which(data$Data == input$jour),]
if (input$sel_data == "Total positivos"){
temp$val <- temp$Total_positivos
} else if (input$sel_data == "People Hospitalized"){
temp$val <- temp$People_Hospitalized
} else if (input$sel_data == "People Tested"){
temp$val <- temp$People_Tested
} else if (input$sel_data == "Deaths"){
temp$val <- temp$Deaths
} else if (input$sel_data == "Recovered"){
temp$val <- temp$Recovered
}
temp$label <- prettyNum(temp$val, big.mark = ',')
# if (input$pop){
# temp$val <- NA
# #temp$val <- (temp$val * 100000) / temp$POPESTIMATE2019
# #temp$label <- paste0(temp$label, '<br><em>', round(temp$val,1), ' / 100 000 inhab.</em><br>', prettyNum(temp$POPESTIMATE2019, big.mark = ','), ' inhabitants')
# }
return(temp)
})
values_leg <- reactive({
temp <- data
if (input$sel_data == "Total positivos"){
temp$leg <- temp$Total_positivos
} else if (input$sel_data == "People Hospitalized"){
temp$leg <- temp$People_Hospitalized
} else if (input$sel_data == "People Tested"){
temp$leg <- temp$People_Tested
} else if (input$sel_data == "Deaths"){
temp$leg <- temp$Deaths
} else if (input$sel_data == "Recovered"){
temp$leg <- temp$Recovered
}
# if (input$pop){
# temp$leg <- NA ;# (temp$leg * 100000) / temp$POPESTIMATE2019
# }
temp <- temp$leg
# if (input$log){
# temp <- log(temp)
# temp[temp < 0] <- 0
# }
return(temp)
})
leg_title <- reactive({
# if (input$pop){
# htmltools::HTML('Nb for<br>100,000<br>inhab.')
# } else{
# 'Nb'
# }
'Nb'
})
output$covid <- renderLeaflet({
leaflet(data = data.p) %>%
addProviderTiles("CartoDB", options = providerTileOptions(opacity = 1, minZoom = 3, maxZoom = 6), group = "Open Street Map") %>%
addPolygons(group = 'base',
fillColor = NA,
color = 'white',
weight = 1.5) %>%
addLegend(pal = pal(), values = values_leg(), opacity = 1, title = leg_title(),
                position = "topright", na.label = 'No data')
})
pal <- reactive({
if (input$sel_data != "Recovered"){
return(colorNumeric(scico::scico(n = 300, palette = "tokyo", direction = - 1, end = 0.85), values_leg(), na.color = '#c1c1d7'))
} else {
return(colorNumeric(scico::scico(n = 300, palette = "oslo", direction = - 1, begin = 0.2, end = 0.85), domain = values_leg(), na.color = '#808080'))
}
})
observe({
if(input$jour == min(dataset1$Data)){
data <- get_data()
leafletProxy('covid', data = data) %>%
clearGroup('polygons') %>%
addPolygons(group = 'polygons',
fillColor = ~pal()(val),
fillOpacity = 1,
stroke = 2,
color = 'white',
weight = 1.5, label = ~ lapply(paste0("<b>", CD_UF, " - ", NM_UF, "</b><br>",Data, ' : ', label), htmltools::HTML))
} else {
data <- get_data()
leafletProxy('covid', data = data) %>%
#clearGroup('polygons') %>%
addPolygons(group = 'polygons',
fillColor = ~pal()(val),
fillOpacity = 1,
stroke = 2,
color = 'white',
weight = 1.5, label = ~ lapply(paste0("<b>", CD_UF, " - ", NM_UF, "</b><br>", Data, ' : ', label), htmltools::HTML))
}
})
}
# Run the application
shinyApp(ui = ui, server = server)
|
c62babf9e7f975704375a6bb725777890225d22c | 59d85155e2dbfeb3a2cb9d6e71632365e9a424be | /Scripts/PhoenixAir_v1.R | 85448b461e56cae372a4be3dfe7c9e32285e2f85 | [] | no_license | rlrandolphIII/WestPhoenixAir | fc27572d5112ca9f10b08b9389e015ac8a6f2f50 | e6b771a34b88782945aa7255c218552cfcc4c0d4 | refs/heads/master | 2022-11-10T15:03:24.966237 | 2020-07-04T17:49:22 | 2020-07-04T17:49:22 | 277,009,425 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 14,592 | r | PhoenixAir_v1.R | # Michael Randolph, 22-June-2020
#
# This R-Script examines pollution in the Phoenix metropolitan area,
# specifically ozone and particulate matter PM2.5.
# The data was collected from a single monitor (JLG SUPERSITE) in
# Phoenix, AZ.
# The goal is to determine if pollution levels decreased durin the
# Covid-19 lockdown in AZ ( 31-March-2020 through 15-May-2020).
# Data was examined over this timeframe for 2020 and 2019.
###############################################################################
###############################################################################
# Set working directory
setwd("./Data")
# Set it to the folder/directory that you are working from (where the data files are stored)
###############################################################################
###############################################################################
# load packages
library(ggplot2)
library(dplyr)
###############################################################################
###############################################################################
# set strings as factors to false
options(stringsAsFactors = FALSE)
###############################################################################
##### Ozone and PM2.5 Data ####################################################
###############################################################################
# Phoenix Monitor: JLG SUPERSITE #40139997
# 4530 N 17TH AVE, PHOENIX, AZ 85015-3809
# Pulling ozone data - Obtained from https://www.epa.gov/outdoor-air-quality-data/download-daily-data
df_ozone_2020 <- read.csv("ozone_040139997_2020.csv", header=TRUE, sep = ",")
df_ozone_2019 <- read.csv("ozone_040139997_2019.csv", header=TRUE, sep = ",")
df_ozone_2018 <- read.csv("ozone_040139997_2018.csv", header=TRUE, sep = ",")
df_ozone_2017 <- read.csv("ozone_040139997_2017.csv", header=TRUE, sep = ",")
df_ozone_2016 <- read.csv("ozone_040139997_2016.csv", header=TRUE, sep = ",")
df_ozone_2015 <- read.csv("ozone_040139997_2015.csv", header=TRUE, sep = ",")
df_pm25_2020 <- read.csv("pm25_040139997_2020_mod.csv", header=TRUE, sep = ",")
df_pm25_2019 <- read.csv("pm25_040139997_2019_mod.csv", header=TRUE, sep = ",")
df_pm25_2018 <- read.csv("pm25_040139997_2018_mod.csv", header=TRUE, sep = ",")
df_pm25_2017 <- read.csv("pm25_040139997_2017_mod.csv", header=TRUE, sep = ",")
df_pm25_2016 <- read.csv("pm25_040139997_2016_mod.csv", header=TRUE, sep = ",")
df_pm25_2015 <- read.csv("pm25_040139997_2015_mod.csv", header=TRUE, sep = ",")
# reduce dataframe to only two columns: Date and Ozone
df_ozone_2020 <- df_ozone_2020 %>%
select(Date, "ozone_ppm" = Daily.Max.8.hour.Ozone.Concentration) %>%
na.omit()
df_ozone_2019 <- df_ozone_2019 %>%
select(Date, "ozone_ppm" = Daily.Max.8.hour.Ozone.Concentration) %>%
na.omit()
df_ozone_2018 <- df_ozone_2018 %>%
select(Date, "ozone_ppm" = Daily.Max.8.hour.Ozone.Concentration) %>%
na.omit()
df_ozone_2017 <- df_ozone_2017 %>%
select(Date, "ozone_ppm" = Daily.Max.8.hour.Ozone.Concentration) %>%
na.omit()
df_ozone_2016 <- df_ozone_2016 %>%
select(Date, "ozone_ppm" = Daily.Max.8.hour.Ozone.Concentration) %>%
na.omit()
df_ozone_2015 <- df_ozone_2015 %>%
select(Date, "ozone_ppm" = Daily.Max.8.hour.Ozone.Concentration) %>%
na.omit()
#############################
df_pm25_2020 <- df_pm25_2020 %>%
select(Date, "pm25" = Daily.Mean.PM2.5.Concentration) %>%
na.omit()
df_pm25_2019 <- df_pm25_2019 %>%
select(Date, "pm25" = Daily.Mean.PM2.5.Concentration) %>%
na.omit()
df_pm25_2018 <- df_pm25_2018 %>%
select(Date, "pm25" = Daily.Mean.PM2.5.Concentration) %>%
na.omit()
df_pm25_2017 <- df_pm25_2017 %>%
select(Date, "pm25" = Daily.Mean.PM2.5.Concentration) %>%
na.omit()
df_pm25_2016 <- df_pm25_2016 %>%
select(Date, "pm25" = Daily.Mean.PM2.5.Concentration) %>%
na.omit()
df_pm25_2015 <- df_pm25_2015 %>%
select(Date, "pm25" = Daily.Mean.PM2.5.Concentration) %>%
na.omit()
# Convert data (<chr> datatype to <date> datatype)
df_ozone_2020$Date <- as.Date(df_ozone_2020$Date, "%m/%d/%Y")
df_ozone_2019$Date <- as.Date(df_ozone_2019$Date, "%m/%d/%Y")
df_ozone_2018$Date <- as.Date(df_ozone_2018$Date, "%m/%d/%Y")
df_ozone_2017$Date <- as.Date(df_ozone_2017$Date, "%m/%d/%Y")
df_ozone_2016$Date <- as.Date(df_ozone_2016$Date, "%m/%d/%Y")
df_ozone_2015$Date <- as.Date(df_ozone_2015$Date, "%m/%d/%Y")
df_pm25_2020$Date <- as.Date(df_pm25_2020$Date, "%m/%d/%Y")
df_pm25_2019$Date <- as.Date(df_pm25_2019$Date, "%m/%d/%Y")
df_pm25_2018$Date <- as.Date(df_pm25_2018$Date, "%m/%d/%Y")
df_pm25_2017$Date <- as.Date(df_pm25_2017$Date, "%m/%d/%Y")
df_pm25_2016$Date <- as.Date(df_pm25_2016$Date, "%m/%d/%Y")
df_pm25_2015$Date <- as.Date(df_pm25_2015$Date, "%m/%d/%Y")
# flag observations on/after the WHO pandemic declaration (2020-03-11);
# note: this comparison is always FALSE for the pre-2020 data frames
df_ozone_2020 <- df_ozone_2020 %>%
  mutate(rdate = Date >= as.Date('2020-03-11'))
df_ozone_2019 <- df_ozone_2019 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
df_ozone_2018 <- df_ozone_2018 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
df_ozone_2017 <- df_ozone_2017 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
df_ozone_2016 <- df_ozone_2016 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
df_ozone_2015 <- df_ozone_2015 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
df_pm25_2020 <- df_pm25_2020 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
df_pm25_2019 <- df_pm25_2019 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
df_pm25_2018 <- df_pm25_2018 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
df_pm25_2017 <- df_pm25_2017 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
df_pm25_2016 <- df_pm25_2016 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
df_pm25_2015 <- df_pm25_2015 %>%
mutate(rdate = Date >= as.Date('2020-03-11'))
# transposed dataframe to see column headers and datatype
glimpse(df_ozone_2020)
glimpse(df_ozone_2019)
glimpse(df_ozone_2018)
glimpse(df_ozone_2017)
glimpse(df_ozone_2016)
glimpse(df_ozone_2015)
glimpse(df_pm25_2020)
glimpse(df_pm25_2019)
glimpse(df_pm25_2018)
glimpse(df_pm25_2017)
glimpse(df_pm25_2016)
glimpse(df_pm25_2015)
###############################################################################
# 2020 is not a complete year. Need to match time-frame
# Determine time-frame for 2020 data and adjust 2019 data to match
head(df_ozone_2020)
tail(df_ozone_2020) # observations through 2020-06-18
head(df_ozone_2019)
tail(df_ozone_2019)
head(df_pm25_2020)
tail(df_pm25_2020) # observations through 2020-06-20
head(df_pm25_2019)
tail(df_pm25_2019)
#############################
# Subset 2019 data to match time of 2020 data
df_ozone_2019 <- df_ozone_2019 %>%
filter(Date <= as.Date('2019-06-18'))
df_pm25_2019 <- df_pm25_2019 %>%
filter(Date <= as.Date('2019-06-20'))
#############################
# Subset Lockdown 2020-03-31 to 2020-05-15
df_ozone_2020_ld <- df_ozone_2020 %>%
filter(Date >= as.Date('2020-03-31') & Date <= as.Date('2020-05-15'))
df_ozone_2019_ld <- df_ozone_2019 %>%
filter(Date >= as.Date('2019-03-31') & Date <= as.Date('2019-05-15'))
df_ozone_2018_ld <- df_ozone_2018 %>%
filter(Date >= as.Date('2018-03-31') & Date <= as.Date('2018-05-15'))
df_ozone_2017_ld <- df_ozone_2017 %>%
filter(Date >= as.Date('2017-03-31') & Date <= as.Date('2017-05-15'))
df_ozone_2016_ld <- df_ozone_2016 %>%
filter(Date >= as.Date('2016-03-31') & Date <= as.Date('2016-05-15'))
df_ozone_2015_ld <- df_ozone_2015 %>%
filter(Date >= as.Date('2015-03-31') & Date <= as.Date('2015-05-15'))
##############
df_pm25_2020_ld <- df_pm25_2020 %>%
filter(Date >= as.Date('2020-03-31') & Date <= as.Date('2020-05-15'))
df_pm25_2019_ld <- df_pm25_2019 %>%
filter(Date >= as.Date('2019-03-31') & Date <= as.Date('2019-05-15'))
df_pm25_2018_ld <- df_pm25_2018 %>%
filter(Date >= as.Date('2018-03-31') & Date <= as.Date('2018-05-15'))
df_pm25_2017_ld <- df_pm25_2017 %>%
filter(Date >= as.Date('2017-03-31') & Date <= as.Date('2017-05-15'))
df_pm25_2016_ld <- df_pm25_2016 %>%
filter(Date >= as.Date('2016-03-31') & Date <= as.Date('2016-05-15'))
df_pm25_2015_ld <- df_pm25_2015 %>%
filter(Date >= as.Date('2015-03-31') & Date <= as.Date('2015-05-15'))
###############################################################################
# Review mean of data for initial compaison
mean(df_ozone_2019$ozone_ppm)
mean(df_ozone_2020$ozone_ppm)
mean(df_pm25_2019$pm25)
mean(df_pm25_2020$pm25)
# Lockdown
mean(df_ozone_2019_ld$ozone_ppm)
mean(df_ozone_2020_ld$ozone_ppm)
mean(df_pm25_2019_ld$pm25)
mean(df_pm25_2020_ld$pm25)
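
# Sketch: a formal comparison of the two lockdown windows (Welch two-sample
# t-test; assumes roughly normal daily values) rather than eyeballing means
# t.test(df_ozone_2019_ld$ozone_ppm, df_ozone_2020_ld$ozone_ppm)
# t.test(df_pm25_2019_ld$pm25, df_pm25_2020_ld$pm25)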
###############################################################################
##### Plot Data ###############################################################
###############################################################################
#############################
# Scatter Plot: Ozone vs Date
ggplot(data=df_ozone_2020, aes(x = Date, y = ozone_ppm)) +
geom_point(color = "darkorchid4") +
  labs(title = "Daily Ozone - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 2020-01-01 through 2020-06-18",
x = "Date",
y = "Ozone (ppm)") +
theme_bw(base_size = 12) +
scale_y_continuous(limits = c(0, 0.08)) +
geom_point() +
stat_smooth(method = lm)
ggplot(data=df_ozone_2019, aes(x = Date, y = ozone_ppm)) +
geom_point(color = "darkorchid4") +
  labs(title = "Daily Ozone - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 2019-01-01 through 2019-06-18",
x = "Date",
y = "Ozone (ppm)") +
theme_bw(base_size = 12) +
scale_y_continuous(limits = c(0, 0.08)) +
geom_point() +
stat_smooth(method = lm)
#############################
# Scatter Plot: PM2.5 vs Date
ggplot(data=df_pm25_2020, aes(x = Date, y = pm25)) +
geom_point(color = "darkorchid4") +
  labs(title = "Daily PM2.5 - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 2020-01-02 through 2020-06-18",
x = "Date",
y = "PM2.5 (ug/m3)") +
theme_bw(base_size = 12) +
scale_y_continuous(limits = c(0, 25)) +
geom_point() +
stat_smooth(method = loess)
ggplot(data=df_pm25_2019, aes(x = Date, y = pm25)) +
geom_point(color = "darkorchid4") +
  labs(title = "Daily PM2.5 - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 2019-01-01 through 2019-06-18",
x = "Date",
y = "PM2.5 (ug/m3)") +
theme_bw(base_size = 12) +
scale_y_continuous(limits = c(0, 25)) +
geom_point() +
stat_smooth(method = loess)
###############################################################################
boxplot(df_ozone_2019$ozone_ppm, df_ozone_2020$ozone_ppm,
        main="Daily Ozone - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 01-Jan through 18-Jun",
xlab="Left: 2019, Right: 2020",
ylab="ozone (ppm)")
boxplot(df_pm25_2019$pm25, df_pm25_2020$pm25,
        main="Daily PM2.5 - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 01-Jan through 18-Jun",
xlab="Left: 2019, Right: 2020",
ylab="PM2.5 (ug/m3)")
boxplot(log10(df_pm25_2019$pm25), log10(df_pm25_2020$pm25),
        main="Daily PM2.5 - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 01-Jan through 18-Jun",
xlab="Left: 2019, Right: 2020",
ylab="log10(PM2.5) (ug/m3)")
# (leftover from a time-series plot; a calendar date is not meaningful on a boxplot x-axis)
# abline(v=as.Date('2020-03-11'), col="red", lwd = 2, lty=2)
## Date Range - AZ Lockdown: 31-Mar through 15-May
boxplot(df_ozone_2019_ld$ozone_ppm, df_ozone_2020_ld$ozone_ppm,
        main="Daily Ozone - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 31-Mar through 15-May",
xlab="Left: 2019, Right: 2020- lockdown",
ylab="ozone (ppm)")
boxplot(df_pm25_2019_ld$pm25, df_pm25_2020_ld$pm25,
        main="Daily PM2.5 - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 31-Mar through 15-May",
xlab="Left: 2019, Right: 2020 - lockdown",
ylab="PM2.5 (ug/m3)")
##############
## Date Range - AZ Lockdown: 31-Mar through 15-May
boxplot(df_ozone_2015_ld$ozone_ppm, df_ozone_2016_ld$ozone_ppm, df_ozone_2017_ld$ozone_ppm, df_ozone_2018_ld$ozone_ppm, df_ozone_2019_ld$ozone_ppm, df_ozone_2020_ld$ozone_ppm,
        main="Daily Ozone - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 31-Mar through 15-May (2015 to 2020)",
xlab="2015 -> 2020 (lockdown)",
ylab="ozone (ppm)")
boxplot(df_pm25_2015_ld$pm25, df_pm25_2016_ld$pm25, df_pm25_2017_ld$pm25, df_pm25_2018_ld$pm25, df_pm25_2019_ld$pm25, df_pm25_2020_ld$pm25,
        main="Daily PM2.5 - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 31-Mar through 15-May (2015 to 2020)",
xlab="2015 -> 2020 (lockdown)",
ylab="PM2.5 (ug/m3)")
###############################################################################
df_ozone_2019$Week <- as.Date(cut(df_ozone_2019$Date,breaks = "week"))
df_ozone_2020$Week <- as.Date(cut(df_ozone_2020$Date,breaks = "week"))
df_pm25_2019$Week <- as.Date(cut(df_pm25_2019$Date,breaks = "week"))
df_pm25_2020$Week <- as.Date(cut(df_pm25_2020$Date,breaks = "week"))
ggplot(data = df_ozone_2019,
aes(Week, ozone_ppm)) +
stat_summary(fun = sum, geom = "bar") +
theme_bw(base_size = 12) +
  labs(title = "Daily Ozone - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 2019-01-01 through 2019-06-18",
x = "Date (Grouped by Week)",
y = "Ozone (ppm)")
ggplot(data = df_ozone_2020,
aes(Week, ozone_ppm)) +
stat_summary(fun = sum, geom = "bar") +
theme_bw(base_size = 12) +
  labs(title = "Daily Ozone - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 2020-01-01 through 2020-06-18",
x = "Date (Grouped by Week)",
y = "Ozone (ppm)")
ggplot(data = df_pm25_2019,
aes(Week, pm25)) +
stat_summary(fun = sum, geom = "bar") +
theme_bw(base_size = 12) +
  labs(title = "Daily PM2.5 - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 2019-01-01 through 2019-06-18",
x = "Date (Grouped by Week)",
y = "PM2.5 (ug/m3)") +
theme_bw(base_size = 12) +
scale_y_continuous(limits = c(0, 170))
ggplot(data = df_pm25_2020,
aes(Week, pm25)) +
stat_summary(fun = sum, geom = "bar") +
theme_bw(base_size = 12) +
  labs(title = "Daily PM2.5 - Phoenix, AZ\n Measurement Site (ID / Name): JLG SUPERSITE / 40139997\n 2020-01-02 through 2020-06-18",
x = "Date (Grouped by Week)",
y = "PM2.5 (ug/m3)") +
theme_bw(base_size = 12) +
scale_y_continuous(limits = c(0, 170))
|
5370ddee1e419d8bdf6bf74b92ddf1785dcc196c | 3f298f0f77598808a68793546c56a1827332c05d | /plots/SRCD2017_figures.R | d3de9bf8f0bcbea36c7c1bccad00e1979820f999 | [] | no_license | dmoracze/TRW | b45a4c186f7be7317e4791200c88a5e26702483b | 564117d7347a931148ebbe4503d544abcd4324a7 | refs/heads/master | 2020-04-24T16:34:31.093752 | 2019-11-27T16:10:40 | 2019-11-27T16:10:40 | 172,112,931 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,097 | r | SRCD2017_figures.R | library(ggplot2)
library(lme4)
library(arm)
library(MASS)
library(lmerTest)
library(reshape2)
library(plyr)
library(grid)
dat <- read.table('~/Dropbox/DSCN/Experiments/TRW/data/comp_dat_strict_motion_Mar8.txt',header=T)
###############################
##### participant ageXsex #####
###############################
# ggplot(dat, aes(age, fill=sex)) +
# geom_histogram(breaks=seq(6,14, by=0.99999),col="black",alpha=0.7) +
# scale_x_continuous(breaks=6:14) +
# labs(x="Age",y="Number of participants") +
# theme_bw() +
# scale_fill_manual(values=c("forestgreen","steelblue")) +
# theme(panel.grid.minor.x=element_blank(),
# panel.grid.minor.y=element_blank(),
# plot.margin=unit(c(1.2,1.2,1.2,1.2), "cm"))
#####################################
##### mean+se for 2x2 factorial #####
#####################################
# set up average data
# create empty containers
condition <- matrix(NA,nrow=4)
type <- matrix(NA,nrow=4)
mean <- matrix(NA,nrow=4)
se <- matrix(NA,nrow=4)
count <- 1
# find the mean and se for each type/condition
for (c in levels(dat$con)) {
print(c)
for (t in levels(dat$type)) {
print(t)
temp <- subset(dat, dat$con == c & dat$type == t)
condition[count] <- c
type[count] <- t
mean[count] <- mean(temp$perc)
se[count] <- sd(temp$perc)/sqrt(length(temp$subj))
count <- count + 1
}
}
# concatenate average data
mdat <- data.frame(condition,type,mean,se)
mdat$mean <- mdat$mean*100
mdat$se <- mdat$se*100
mdat$condition <- revalue(mdat$condition, c(int='Intact', scr='Scrambled'))
mdat$type <- revalue(mdat$type, c(M='Mental', NM='Non-Mental'))
# plot the average data
dodge <- position_dodge(width=0.9)
ggplot(mdat, aes(condition,mean,fill=type)) +
geom_bar(stat='identity',position=dodge) +
geom_bar(stat='identity',position=dodge,color='black') +
geom_errorbar(aes(ymin=mean-se,ymax=mean+se), position=dodge, width = .25) +
labs(y='Percent',x="") +
ylim(0,100) +
scale_fill_manual(values=c("darkorchid4","darkslategray4")) +
theme_bw() +
theme(axis.title=element_blank(),
legend.position='none',
axis.text.y=element_text(size=20),
axis.text.x=element_blank(),
axis.ticks.x=element_blank(),
strip.background=element_blank(),
strip.text=element_text(size=24),
plot.margin=unit(c(1.2,1.2,1.2,1.2), "cm"))
#########################
##### ageXcomp plot #####
#########################
dat$con <- revalue(dat$con, c(int='Intact', scr='Scrambled'))
dat$type <- revalue(dat$type, c(M='Mental', NM='Non-Mental'))
ggplot(dat, aes(age,perc, color=type)) +
geom_point(size=2.5,alpha=0.6) +
geom_point(size=2.5,alpha=0.95,shape=1) +
geom_smooth(method='lm',se=FALSE,size=2) +
facet_wrap(~con) +
labs(x='Age',y='% correct') +
scale_color_manual(values=c("darkorchid4","darkslategray4")) +
scale_x_continuous(breaks=c(5,6,7,8,9,10,11,12,13,14)) +
theme_bw() +
theme(axis.title=element_blank(),
legend.position='none',
axis.text.y=element_text(size=20),
axis.text.x=element_text(size=20),
strip.background=element_blank(),
strip.text=element_blank(),
plot.margin=unit(c(1.2,1.2,1.2,1.2), "cm"))
####################################
##### lmer model distributions #####
####################################
# define model
mod <- lmer(perc ~ con*type + age + (1|subj), data=dat)
display(mod)
summary(mod)
s <- sim(mod,100000)
sdat <- data.frame(s@fixef)
head(sdat)
names(sdat) <- c('Intercept','Scrambled','Non Mental','Age','Scrambled * Non Mental')
msdat <- melt(sdat[,2:ncol(sdat)])
msdat$variable <- factor(msdat$variable, levels=c('Age','Scrambled','Non Mental','Scrambled * Non Mental'))
ggplot(msdat, aes(value, fill=variable)) +
geom_vline(xintercept = 0,size=.25) +
geom_density(alpha=0.85) +
scale_x_continuous(breaks=c(-0.3,-0.25,-0.2,-0.15,-0.1,-0.05,0,0.05,0.1,0.15,0.2,0.25,0.3)) +
labs(x='',y='') +
scale_fill_manual(values=c('peachpuff3','steelblue','forestgreen','firebrick')) +
theme_bw() +
theme(legend.position='none',
axis.text.x=element_text(size=16),
axis.text.y=element_blank(),
axis.ticks.y=element_blank())
beta <- 2
quantile(sdat[,beta], c(0.025,0.975))
|
b5d80f627a84d8abc54bf7a4b255b271246dd114 | c53267587bd8bed6ef219e2f776ad4b63aa711ef | /R/recent_SRC_scripts.R | ede0ac4a259f447e39b5d426b4c3eef023fe974a | [] | no_license | jcchai/SAP_for_SRC | 8d35d3ed3797839c317fcac2c9def71a709cb84b | d6489e0f25cfd1bd12e58fe9249aa9304235357e | refs/heads/master | 2021-01-17T12:54:17.520240 | 2016-05-31T19:50:13 | 2016-05-31T19:50:13 | 58,425,775 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,808 | r | recent_SRC_scripts.R | # recent SRC scripts
library("DESeq2")
setwd("/data/works/hs_BM-MSC/hs_BM-MSC_RNA-seq_BT_Macrogen_Mar2016/star-using-UCSC-hg19-genesgtf2015version-bam-mismatch3-may112016/DESeq2")
sampleFiles <- grep("tab", list.files("/data/works/hs_BM-MSC/hs_BM-MSC_RNA-seq_BT_Macrogen_Mar2016/star-using-UCSC-hg19-genesgtf2015version-bam-mismatch3-may112016/DESeq2"), value=TRUE)
sampleFiles
sampleCondition<-c("Cont","Cont","Cont","LPS-HI","LPS-HI","LPS-HI","LPS-LOW","LPS-LOW","LPS-LOW","Poly-HI","Poly-HI","Poly-HI") # order must match the (alphabetical) order of sampleFiles above
sampleTable<-data.frame(sampleName=sampleFiles,fileName=sampleFiles, condition=sampleCondition)
sampleTable
ddsHTSeq<-DESeqDataSetFromHTSeqCount(sampleTable=sampleTable,directory="/data/works/hs_BM-MSC/hs_BM-MSC_RNA-seq_BT_Macrogen_Mar2016/star-using-UCSC-hg19-genesgtf2015version-bam-mismatch3-may112016/DESeq2", design=~condition)
colData(ddsHTSeq)$condition <-factor(colData(ddsHTSeq)$condition)
dds<-DESeq(ddsHTSeq)
colData(dds)
resMF<-results(dds)
head(resMF)
resMFType_BMMSC_RNAseq_BT___Cont_vs_LPSLOW <- results(dds, contrast=c("condition","LPS-LOW","Cont"))
resMFType_BMMSC_RNAseq_BT___Cont_vs_LPSHI <- results(dds, contrast=c("condition","LPS-HI","Cont"))
resMFType_BMMSC_RNAseq_BT___Cont_vs_PolyHI <- results(dds, contrast=c("condition","Poly-HI","Cont"))
resMFType_BMMSC_RNAseq_BT___LPSHI_vs_PolyHI <- results(dds, contrast=c("condition","Poly-HI","LPS-HI"))
resMFType_BMMSC_RNAseq_BT___LPSLOW_vs_LPSHI <- results(dds, contrast=c("condition","LPS-HI","LPS-LOW"))
resMFType_BMMSC_RNAseq_BT___LPSLOW_vs_PolyHI <- results(dds, contrast=c("condition","Poly-HI","LPS-LOW"))
resMFType_BMMSC_RNAseq_BT___Cont_vs_LPSLOW_ordered <- resMFType_BMMSC_RNAseq_BT___Cont_vs_LPSLOW[order(resMFType_BMMSC_RNAseq_BT___Cont_vs_LPSLOW$padj),]
resMFType_BMMSC_RNAseq_BT___Cont_vs_LPSHI_ordered <- resMFType_BMMSC_RNAseq_BT___Cont_vs_LPSHI[order(resMFType_BMMSC_RNAseq_BT___Cont_vs_LPSHI$padj),]
resMFType_BMMSC_RNAseq_BT___Cont_vs_PolyHI_ordered <- resMFType_BMMSC_RNAseq_BT___Cont_vs_PolyHI[order(resMFType_BMMSC_RNAseq_BT___Cont_vs_PolyHI$padj),]
resMFType_BMMSC_RNAseq_BT___LPSHI_vs_PolyHI_ordered <- resMFType_BMMSC_RNAseq_BT___LPSHI_vs_PolyHI[order(resMFType_BMMSC_RNAseq_BT___LPSHI_vs_PolyHI$padj),]
resMFType_BMMSC_RNAseq_BT___LPSLOW_vs_LPSHI_ordered <- resMFType_BMMSC_RNAseq_BT___LPSLOW_vs_LPSHI[order(resMFType_BMMSC_RNAseq_BT___LPSLOW_vs_LPSHI$padj),]
resMFType_BMMSC_RNAseq_BT___LPSLOW_vs_PolyHI_ordered <- resMFType_BMMSC_RNAseq_BT___LPSLOW_vs_PolyHI[order(resMFType_BMMSC_RNAseq_BT___LPSLOW_vs_PolyHI$padj),]
write.csv(as.data.frame(resMFType_BMMSC_RNAseq_BT___Cont_vs_LPSLOW_ordered), file="hs_BM-MSC_RNA-seq_BT___Cont_vs_LPS-LOW__STAR__mismatch3-genesgtf_DESeq2_May112016.csv")
write.csv(as.data.frame(resMFType_BMMSC_RNAseq_BT___Cont_vs_LPSHI_ordered), file="hs_BM-MSC_RNA-seq_BT___Cont_vs_LPS-HI__STAR__mismatch3-genesgtf_DESeq2_May112016.csv")
write.csv(as.data.frame(resMFType_BMMSC_RNAseq_BT___Cont_vs_PolyHI_ordered), file="hs_BM-MSC_RNA-seq_BT___Cont_vs_Poly-HI__STAR__mismatch3-genesgtf_DESeq2_May112016.csv")
write.csv(as.data.frame(resMFType_BMMSC_RNAseq_BT___LPSHI_vs_PolyHI_ordered), file="hs_BM-MSC_RNA-seq_BT___LPS-HI_vs_Poly-HI__STAR__mismatch3-genesgtf_DESeq2_May112016.csv")
write.csv(as.data.frame(resMFType_BMMSC_RNAseq_BT___LPSLOW_vs_LPSHI_ordered), file="hs_BM-MSC_RNA-seq_BT___LPS-LOW_vs_LPS-HI__STAR__mismatch3-genesgtf_DESeq2_May112016.csv")
write.csv(as.data.frame(resMFType_BMMSC_RNAseq_BT___LPSLOW_vs_PolyHI_ordered), file="hs_BM-MSC_RNA-seq_BT___LPS-LOW_vs_Poly-HI__STAR__mismatch3-genesgtf_DESeq2_May112016.csv")
rld <- rlog(dds)
vsd <- varianceStabilizingTransformation(dds)
head(assay(rld), 3)
write.csv(assay(rld), file="hs_BM-MSC_RNA-seq_BiolTriple___RLD__STAR__mismatch3-genesgtf_DESeq2__Macrogen_May112016.csv")
p = plotPCA (vsd, intgroup=c("condition"))
p
quit()
|
2cdf7fc72e3f0cfc20d42d6c78ad85048373ad01 | 8c1333fb9fbaac299285dfdad34236ffdac6f839 | /financial-analytics/ch2/solution-03c.R | 07e6a80c959db6b2d5cee22f389141f7b377346f | [
"MIT"
] | permissive | cassiopagnoncelli/datacamp-courses | 86b4c2a6d19918fc7c6bbf12c51966ad6aa40b07 | d05b74a1e42b119efbbf74da3dfcf71569c8ec85 | refs/heads/master | 2021-07-15T03:24:50.629181 | 2020-06-07T04:44:58 | 2020-06-07T04:44:58 | 138,947,757 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 477 | r | solution-03c.R | # Define cashflows
cashflow_old <- rep(-500, 11)
cashflow_new <- c(-2200, rep(-300,10))
options <-
data.frame(time = rep(0:10, 2),
option = c(rep("Old",11),rep("New",11)),
cashflow = c(cashflow_old, cashflow_new))
# Calculate total expenditure with and without discounting
options %>%
group_by(option) %>%
summarize(sum_cashflow = sum(cashflow),
sum_disc_cashflow = sum(calc_pv(cashflow, 0.12, time)) )
|
282c6583520c17a11b480fab14b595c02d65b960 | 1ebc83dccdc6c6d2867c6ef303c91077cb3f3a84 | /src/analysis-traits.R | 37f8bb7061b1ead0ce264cee676477d026f3af32 | [] | no_license | willpearse/track-index | 2d04bbcc6b5595f06cb9c39e00e8385998094fea | 0b68a07e602f51aa4a18ab265d9d699436fae6ce | refs/heads/master | 2023-02-08T21:16:42.048139 | 2023-01-27T14:12:10 | 2023-01-27T14:12:10 | 217,920,212 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 10,645 | r | analysis-traits.R | # Headers
source("src/headers.R")
stem <- "clnbin-clnspc-100-TRUE"
# Wrapper functions
.univariate <- function(data, explanatory){
tests <- lapply(data, function(x) cor.test(x, explanatory))
cors <- sapply(tests, function(x) x$estimate)
p <- sapply(tests, function(x) x$p.value)
p <- ifelse(p < .2, round(p,3), NA)
output <- matrix(c(cors,p), nrow=2, byrow=TRUE)
colnames(output) <- names(data); rownames(output) <- c("r","p")
return(output)
}
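# Illustrative usage (not run): for a data frame of numeric columns,
# .univariate(data.frame(a = rnorm(20), b = rnorm(20)), explanatory = rnorm(20))
# returns a 2 x 2 matrix with rows "r" (Pearson correlation) and "p"
# (the p-value, reported only when below .2, otherwise NA).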
.combine.data <- function(stem, index, quant, null, clean=10, abs=FALSE){
c(data,metadata) %<-% .load.indices(stem)
data <- as.data.frame(.simplify(data, index, quant, null, clean, abs))
data$species <- rownames(data)
data$taxon <- metadata$taxon[match(data$species, metadata$species)]
data$bm.mamm <- neotoma$log.mass[match(data$species, neotoma$binomial)]
data$bm.mamm[data$taxon != "mammals"] <- NA
data$bm.bird <- elton.bm[data$species]
data$bm.bird[data$taxon != "birds"] <- NA
data$log.ll <- glopnet$log.LL[match(data$species, glopnet$Species)]
data$log.lma <- glopnet$log.LMA[match(data$species, glopnet$Species)]
data$log.Nmass <- glopnet$log.Nmass[match(data$species, glopnet$Species)]
data$log.Amass <- glopnet$log.Amass[match(data$species, glopnet$Species)]
return(data)
}
.corr.mat <- function(stem, index, quant, null, clean, abs=FALSE, p.val=FALSE){
data <- .combine.data(stem, index, quant, null, clean, abs)
c.var <- setNames(
c("clouds","frost","evapo-trans.","precipitation","min(temp)","mean(temp)","max(temp)","vapour","rainy day"),
c("cld","frs","pet","pre","tmn","tmp","tmx","vap","wet")
)
q.var <- setNames(
c("5th","25th","50th","75th","95th"),
c("0.05","0.25","0.5","0.75","0.95")
)[quant]
t.var <- setNames(
c("mammal mass","bird mass","leaf lifespan","leaf mass/area","leaf N","photosynth"),
c("bm.mamm","bm.bird","log.ll","log.lma","log.Nmass","log.Amass")
)
mat <- matrix(nrow=length(t.var), ncol=length(c.var), dimnames=list(t.var, c.var))
for(i in seq_along(c.var)){
for(j in seq_along(t.var)){
if(p.val){
mat[j,i] <- cor.test(data[,names(c.var)[i]], data[,names(t.var)[j]])$p.value
} else {
mat[j,i] <- cor.test(data[,names(c.var)[i]], data[,names(t.var)[j]])$estimate
}
}
}
return(mat)
}
.plot.corr.mat <- function(stem, index, comparison, quant, null, clean, abs=FALSE){
index.mat <- t(.corr.mat(stem, index, quant, null, clean, abs))
index.mat.p <- t(.corr.mat(stem, index, quant, null, clean, abs, p.val=TRUE))
signif <- index.mat.p < .05
index.mat.p <- matrix(0, nrow=nrow(index.mat), ncol=ncol(index.mat))
index.mat.p[signif] <- 1
comparison.mat <- t(.corr.mat(stem, comparison, quant, null, clean=NA))
cols <- colorRampPalette(c("red", "white", "blue"))
comparison.cuts <- as.numeric(cut(as.numeric(comparison.mat), breaks=seq(-1,1,length.out=201)))
comparison.cuts <- cols(200)[comparison.cuts]
index.cuts <- as.numeric(cut(as.numeric(index.mat), breaks=seq(-1,1,length.out=201)))
index.cuts <- cols(200)[index.cuts]
dummy.mat <- matrix(0, nrow=nrow(index.mat), ncol=ncol(index.mat), dimnames=dimnames(index.mat))
corrplot(comparison.mat, method="square", is.corr=FALSE, cl.lim=c(-1,1), tl.srt=45, col=cols(200), tl.col="black")
corrplot(comparison.mat, method="square", is.corr=FALSE, cl.lim=c(-1,1), tl.srt=45, bg=comparison.cuts, add=TRUE, addgrid.col=NA, col=alpha("white",0), cl.pos="n", tl.pos="n")
    corrplot(index.mat, method="square", is.corr=FALSE, cl.lim=c(-1,1), tl.srt=45, bg=alpha("white",0), col=index.cuts, add=TRUE, addgrid.col=NA, cl.pos="n", tl.pos="n", p.mat=index.mat.p, sig.level=.05)
text(-1.3, 9.5, "Legend", font=2)
rect(-1, 8.5, 0, 9.5, col=cols(200)[150], border=NA)
rect(-.8, 8.7, -.2, 9.3, col=cols(200)[120], border=NA)
text(-.5, 9.4, "present")
text(-.5, 9, "track\nindex")
}
# Load Neotoma data
neotoma <- read.csv("raw-data/Amniote_Database_Aug_2015.csv", as.is=TRUE)
for(i in seq_along(neotoma))
    neotoma[which(neotoma[,i]==-999),i] <- NA # which() avoids NA subscripts from NA comparisons
neotoma$binomial <- with(neotoma, tolower(paste(genus, species, sep="_")))
neotoma$log.mass <- log10(neotoma$adult_body_mass_g)
# Load Glopnet
glopnet <- read.xls("raw-data/glopnet.xls")
glopnet$Species <- tolower(gsub(" ", "_", glopnet$Species))
# Load Elton
elton.birds <- read.delim("raw-data/BirdFuncDat.txt", as.is=TRUE)
elton.birds$Scientific <- tolower(gsub(" ", "_", elton.birds$Scientific))
elton.mam <- read.delim("raw-data/MamFuncDat.txt", as.is=TRUE)
elton.mam$Scientific <- tolower(gsub(" ", "_", elton.mam$Scientific))
elton.bm <- setNames(c(elton.mam$BodyMass.Value, elton.birds$BodyMass.Value), c(elton.mam$Scientific,elton.birds$Scientific))
elton.bm <- log10(elton.bm)
# Combine
pdf("figures/traits-05.pdf"); .plot.corr.mat(stem, "b.track.index", "pres.dis.pres.env", "0.05", "observed", clean=5); dev.off()
pdf("figures/traits-25.pdf"); .plot.corr.mat(stem, "b.track.index", "pres.dis.pres.env", "0.25", "observed", 5); dev.off()
pdf("figures/traits-50.pdf"); .plot.corr.mat(stem, "b.track.index", "pres.dis.pres.env", "0.5", "observed", 5); dev.off()
pdf("figures/traits-75.pdf"); .plot.corr.mat(stem, "b.track.index", "pres.dis.pres.env", "0.75", "observed", 5); dev.off()
pdf("figures/traits-95.pdf"); .plot.corr.mat(stem, "b.track.index", "pres.dis.pres.env", "0.95", "observed", 5); dev.off()
# Correlations
.get.vals <- function(stem, index, null, clean, p)
return(
abind(
.corr.mat(stem, index, "0.05", null, clean, p),
.corr.mat(stem, index, "0.25", null, clean, p),
.corr.mat(stem, index, "0.5", null, clean, p),
.corr.mat(stem, index, "0.75", null, clean, p),
.corr.mat(stem, index, "0.95", null, clean, p),
along=3
)
)
index.r <- .get.vals(stem, "b.track.index", "observed", 100, FALSE)
index.p <- .get.vals(stem, "b.track.index", "observed", 100, TRUE)
past.r <- .get.vals(stem, "past.dis.past.env", "observed", 100, FALSE)
past.p <- .get.vals(stem, "past.dis.past.env", "observed", 100, TRUE)
pres.r <- .get.vals(stem, "pres.dis.pres.env", "observed", 100, FALSE)
pres.p <- .get.vals(stem, "pres.dis.pres.env", "observed", 100, TRUE)
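# The arrays above are trait x climate-variable x quantile, e.g.:
# dim(index.r) # c(6, 9, 5): 6 traits, 9 climate variables, 5 quantiles
# index.r[ , , 3] # trait-climate correlation matrix at the median ("0.5") quantile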
sum(index.p < .05); prod(dim(index.p))
sum(past.p < .05); prod(dim(index.p))
sum(pres.p < .05); prod(dim(index.p))
plot(as.numeric(index.r) ~ as.numeric(past.r), asp=1, xlab="", ylab="")
abline(0, 1, col="grey40", lty=2, lwd=3)
# Making the combined traits and phylogeny plot
# Get data
traits <- data.frame(index=as.numeric(index.r[,,3]), past=as.numeric(past.r[,,3]), pres=as.numeric(pres.r[,,3]), trait=rep(rownames(index.r),ncol(index.r)), climate=rep(colnames(index.r),each=nrow(index.r)), taxon=rep(c("mammals","birds","plants","plants","plants","plants"),ncol(index.r)), type="traits")
phylo <- data.frame(
index=c(
as.numeric(readRDS("../track-index-2019-r1+/clean-data/birds-signal.RDS")$bootstrap.index["0.5",]),
as.numeric(readRDS("../track-index-2019-r1+/clean-data/mammals-signal.RDS")$bootstrap.index["0.5",]),
as.numeric(readRDS("../track-index-2019-r1+/clean-data/plants-signal.RDS")$bootstrap.index["0.5",]),
as.numeric(readRDS("../track-index-2019-r1+/clean-data/reptiles-signal.RDS")$bootstrap.index["0.5",])
),
past=c(
as.numeric(readRDS("../track-index-2019-r1+/clean-data/birds-signal.RDS")$past["0.5",]),
as.numeric(readRDS("../track-index-2019-r1+/clean-data/mammals-signal.RDS")$past["0.5",]),
as.numeric(readRDS("../track-index-2019-r1+/clean-data/plants-signal.RDS")$past["0.5",]),
as.numeric(readRDS("../track-index-2019-r1+/clean-data/reptiles-signal.RDS")$past["0.5",])
),
pres=c(
as.numeric(readRDS("../track-index-2019-r1+/clean-data/birds-signal.RDS")$present["0.5",]),
as.numeric(readRDS("../track-index-2019-r1+/clean-data/mammals-signal.RDS")$present["0.5",]),
as.numeric(readRDS("../track-index-2019-r1+/clean-data/plants-signal.RDS")$present["0.5",]),
as.numeric(readRDS("../track-index-2019-r1+/clean-data/reptiles-signal.RDS")$present["0.5",])
),
trait="phylo",
climate=rep(colnames(readRDS("../track-index-2019-r1+/clean-data/reptiles-signal.RDS")$bootstrap.index), 4),
taxon=rep(c("mammals","birds","plants","reptiles"), each=9),
type="phylo"
)
data <- rbind(traits, phylo)
z.off <- .1
data$index <- abs(data$index)+z.off; data$pres <- abs(data$pres)+z.off; data$past <- abs(data$past)+z.off
data$index[data$type=="phylo"] <- -data$index[data$type=="phylo"]; data$pres[data$type=="phylo"] <- -data$pres[data$type=="phylo"]; data$past[data$type=="phylo"] <- -data$past[data$type=="phylo"]
data$t.taxon <- as.numeric(factor(data$taxon))
data$t.trait <- as.numeric(factor(data$trait))
# map the descriptive climate labels back to their short codes (inverse of c.var in .corr.mat)
lookup <- c("clouds"="cld", "frost"="frs", "evapo-trans."="pet", "precipitation"="pre", "min(temp)"="tmn", "mean(temp)"="tmp", "max(temp)"="tmx", "vapour"="vap", "rainy day"="wet", "cld"="cld", "frs"="frs", "pet"="pet", "pre"="pre", "tmn"="tmn", "tmp"="tmp", "tmx"="tmx", "vap"="vap", "wet"="wet")
data$climate <- lookup[data$climate]
data$t.climate <- (as.numeric(factor(data$climate))-4.5)/(4.5*4)
pdf("figures/trait-phylo.pdf")
with(data, plot(index ~ I(t.taxon+t.climate), pch=ifelse(abs(index)>abs(past) & abs(index)>abs(pres),20,21), xlab="", ylab="", axes=FALSE, ylim=c(-1-z.off,1+z.off), col="black"))
with(data, points(pres ~ I(t.taxon+t.climate), pch=ifelse(abs(pres)>abs(past) & abs(pres)>abs(index),20,21), col="blue"))
with(data, points(past ~ I(t.taxon+t.climate), pch=ifelse(abs(past)>abs(index) & abs(past)>abs(pres),20,21), col="red"))
axis(2, at=seq(z.off,1+z.off,by=.2), labels=c(NA, seq(.2,1,by=.2)))
axis(2, at=-seq(z.off,1+z.off,by=.2), labels=c(NA, seq(.2,1,by=.2)))
#abline(h=0, col="grey40", lwd=5)
mtext("trait correlations", side=2, adj=.5, at=.5, line=2, cex=1.2)
mtext("phylogenetic signal", side=2, adj=.5, at=-.5, line=2, cex=1.2)
legend(.8, 1.2, pch=20, col=c("black","blue","red"), legend=c("index","past","present"), cex=1.2, bty="n")
legend(1.6, 1.2, pch=c(20,21), col="black", legend=c("strongest","weaker"), cex=1.2, bty="n")
grid.raster(readPNG("phylopics/mammal.png"), .20, .52, width=.06)
grid.raster(readPNG("phylopics/bird.png"), .42, .52, width=.06)
grid.raster(readPNG("phylopics/plant.png"), .65, .52, width=.06)
grid.raster(readPNG("phylopics/reptile.png"), .88, .50, width=.06)
dev.off()
|
b5abb5d82d4f58ed36d15f67340fd825445558c7 | 324e231b4c744517df60fbd6d85bd17e0c9b6d2d | /man/prCaSelectAndOrderVars.Rd | 1551e650927ccadedfe8414fa35c28ba546a3f17 | [] | no_license | landroni/Greg | 927511b0fd9b34b06a64c539c7e9f16ca15cb775 | 2e9b2afe4e8526d08910b370ed158c261a1b41f5 | refs/heads/master | 2020-12-28T12:12:25.870020 | 2014-08-29T06:35:43 | 2014-08-29T06:35:43 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,123 | rd | prCaSelectAndOrderVars.Rd | % Generated by roxygen2 (4.0.1): do not edit by hand
\name{prCaSelectAndOrderVars}
\alias{prCaSelectAndOrderVars}
\title{Re-order variables}
\usage{
prCaSelectAndOrderVars(names, order, ok2skip = FALSE)
}
\arguments{
\item{names}{The names of the variables}
\item{order}{The order regular expression}
\item{ok2skip}{If you have the intercept then
it should be ok for the function to skip that
variable if it isn't found among the variable list}
}
\value{
\code{vector} A vector containing the greps
}
\description{
Re-order variables
}
\seealso{
Other printCrudeAndAdjusted functions: \code{\link{latex.printCrudeAndAdjusted}},
\code{\link{print.printCrudeAndAdjusted}},
\code{\link{printCrudeAndAdjustedModel}};
\code{\link{prCaAddRefAndStat}};
\code{\link{prCaAddReference}};
\code{\link{prCaAddUserReferences}};
\code{\link{prCaGetImputationCols}};
\code{\link{prCaGetRowname}};
\code{\link{prCaGetVnStats}};
\code{\link{prCaPrepareCrudeAndAdjusted}};
\code{\link{prCaReorderReferenceDescribe}};
\code{\link{prCaReorder}}; \code{\link{prCaSetRownames}}
}
\keyword{internal}
|
3c88ad53c05dcb17635f1a9eff209ceab4352d08 | ba026735b53b12f0b5f6c0ed20d76155d2840117 | /man/read_rfept.Rd | 1dd704f6be39fa659f50b0918f90af4775a1991b | [] | no_license | r-ifpe/sistec | 71895160a7cbf8ed63f98c0185674294685b24e9 | dc238cbad1056e029b2973a960cde690cdfd49e2 | refs/heads/main | 2021-06-18T09:47:01.909950 | 2021-05-18T18:45:48 | 2021-05-18T18:45:48 | 237,207,158 | 10 | 4 | null | 2021-05-18T18:17:50 | 2020-01-30T12:20:35 | R | UTF-8 | R | false | true | 1,411 | rd | read_rfept.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/read_rfept.R
\name{read_rfept}
\alias{read_rfept}
\title{Identify and read academic registration}
\usage{
read_rfept(path = "", start = NULL)
}
\arguments{
\item{path}{The path to the folder containing Qacademico, Sigaa, Conecta or Suap files.}
\item{start}{A character with the date to start the comparison. The default is the minimum
value found in the data. The date has to be in this format: "yyyy.semester".
Ex.: "2019.1" or "2019.2".}
}
\value{
A data frame.
}
\description{
\code{read_rfept()} is a wrapper around \code{read_qacademico()} and \code{read_sigaa()}. You
only need to specify the folder path: \code{read_rfept()} identifies whether it contains
Qacademico or Sigaa files and then reads them.
}
\details{
Currently, this function only supports Qacademico and Sigaa-SC.
}
\examples{
# these datasets are not a real ones. It is just for test purpose.
qacademico <- read_rfept(system.file("extdata/examples/qacademico", package = "sistec"))
sigaa <- read_rfept(system.file("extdata/examples/sigaa", package = "sistec"))
class(qacademico)
class(sigaa)
# example selecting the period
qacademico_2019_2 <- read_rfept(system.file("extdata/examples/qacademico", package = "sistec"),
start = "2019.2")
class(qacademico_2019_2)
}
|
60fff5ac6d3b464e599116975920a3cd2af46c17 | 759654dcf73527c94c4a27efac6b9f6dc00dd3d0 | /code/buildRLib.R | b657d8e43f468135f8832abd8f213c07353f1566 | [] | no_license | paradisepilot/buildRLib | 515620e1dc1bf4a2a28f941b9a0cd32c590d5b8d | 33f6fb0d58b46f06b4aca85effd27005a9bd99e5 | refs/heads/master | 2023-03-07T12:25:47.178723 | 2023-03-05T15:19:16 | 2023-03-05T15:19:16 | 231,972,880 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 13,741 | r | buildRLib.R |
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
command.arguments <- commandArgs(trailingOnly = TRUE);
code.directory <- normalizePath(command.arguments[1]);
output.directory <- normalizePath(command.arguments[2]);
pkgs.desired.FILE <- normalizePath(command.arguments[3]);
setwd(output.directory);
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
fh.output <- file("log.output", open = "wt");
fh.message <- file("log.message", open = "wt");
sink(file = fh.message, type = "message");
sink(file = fh.output, type = "output" );
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
cat("\n##### Sys.time()\n");
Sys.time();
start.proc.time <- proc.time();
###################################################
default.libPaths <- .libPaths();
default.libPaths <- gsub(x = default.libPaths, pattern = "^/(Users|home)/.+/Library/R/.+/library", replacement = "");
default.libPaths <- gsub(x = default.libPaths, pattern = "^/(Users|home)/.+/buildRLib/.+", replacement = "");
default.libPaths <- setdiff(default.libPaths,c(""));
cat("\n# default.libPaths\n");
print( default.libPaths );
.libPaths(default.libPaths);
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
# copy the file of desired packages to output directory
file.copy(
from = pkgs.desired.FILE,
to = "."
);
# read list of desired R packages
pkgs.desired <- read.table(
file = pkgs.desired.FILE,
header = FALSE,
stringsAsFactors = FALSE
)[,1];
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
# assemble full path for R library to be built
current.version <- paste0(R.Version()["major"],".",R.Version()["minor"]);
myLibrary <- file.path(".","library",current.version,"library");
if(!dir.exists(myLibrary)) { dir.create(path = myLibrary, recursive = TRUE); }
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
# exclude packages already installed
preinstalled.packages <- as.character(
installed.packages(lib.loc = c(.libPaths(),myLibrary))[,"Package"]
);
cat("\n# pre-installed packages:\n");
print( preinstalled.packages );
pkgs.desired <- sort(setdiff(pkgs.desired,preinstalled.packages));
pkgs.desired <- sort(unique(c("rstan",pkgs.desired)));
cat("\n# packages to be installed:\n");
print( pkgs.desired );
write.table(
file = "Rpackages-desired-minus-preinstalled.txt",
x = data.frame(package = sort(pkgs.desired)),
quote = FALSE,
row.names = FALSE,
col.names = FALSE
);
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
# get URL of the cloud CRAN mirror
DF.CRAN.mirrors <- getCRANmirrors();
myRepoURL <- DF.CRAN.mirrors[grepl(x = DF.CRAN.mirrors[,"Name"], pattern = "0-Cloud"),"URL"];
# myRepoURL <- "https://cloud.r-project.org";
print(paste("\n# myRepoURL",myRepoURL,sep=" = "));
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
# set timeout to 600 seconds;
# needed for downloading large package source when using download.file(),
# which in turn is used by install.packages.
options( timeout = 600 );
cat("\n# getOption('timeout')\n");
print( getOption('timeout') );
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
is.macOS <- grepl(x = sessionInfo()[['platform']], pattern = 'apple', ignore.case = TRUE);
install.type <- ifelse(test = is.macOS, yes = "binary", no = getOption("pkgType"));
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
cat("\n##### installation begins: 'codetools' ...\n");
install.packages(
pkgs = c("codetools"),
lib = myLibrary,
repos = myRepoURL,
dependencies = TRUE # c("Depends", "Imports", "LinkingTo", "Suggests")
);
cat("\n##### installation complete: 'codetools' ...\n");
library(
package = "codetools",
character.only = TRUE,
lib.loc = myLibrary
);
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
cat("\n##### installation begins: 'boot' ...\n");
install.packages(
pkgs = c("boot"),
lib = myLibrary,
repos = myRepoURL,
dependencies = TRUE # c("Depends", "Imports", "LinkingTo", "Suggests")
);
cat("\n##### installation complete: 'boot' ...\n");
library(
package = "boot",
character.only = TRUE,
lib.loc = myLibrary
);
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
cat("\n##### installation begins: 'BiocManager' ...\n");
install.packages(
pkgs = c("BiocManager"),
lib = myLibrary,
repos = myRepoURL,
dependencies = TRUE # c("Depends", "Imports", "LinkingTo", "Suggests")
);
cat("\n##### installation complete: 'BiocManager' ...\n");
library(
package = "BiocManager",
character.only = TRUE,
lib.loc = myLibrary
);
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
cat("\n##### installation begins: 'Bioconductor' packages ...\n");
BiocPkgs <- c("BiocVersion","BiocStyle","graph","Rgraphviz","ComplexHeatmap");
already.installed.packages <- as.character(
installed.packages(lib.loc = c(.libPaths(),myLibrary))[,"Package"]
);
BiocPkgs <- setdiff(BiocPkgs,already.installed.packages);
if ( length(BiocPkgs) > 0 ) {
BiocManager::install(
pkgs = BiocPkgs,
lib = myLibrary,
dependencies = TRUE
);
}
cat("\n##### installation complete: 'Bioconductor' packages ...\n");
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
# is.linux <- grepl(x = sessionInfo()[['platform']], pattern = 'linux', ignore.case = TRUE);
# if ( is.linux ) {
#
# ### See instructions for installing arrow on Linux here:
# ### https://cran.r-project.org/web/packages/arrow/vignettes/install.html
# options(
# HTTPUserAgent = sprintf(
# "R/%s R (%s)",
# getRversion(),
# paste(getRversion(), R.version["platform"], R.version["arch"], R.version["os"])
# )
# );
#
# myRepoURL <- "https://packagemanager.rstudio.com/all/__linux__/focal/latest";
# cat(paste("\n# myRepoURL (Linux)",myRepoURL,sep=" = "));
#
# }
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
cat("\n##### first-round installation begins: not-yet-installed packages in Rpackages-desired.txt ...\n");
# exclude packages already installed
already.installed.packages <- as.character(
installed.packages(lib.loc = c(.libPaths(),myLibrary))[,"Package"]
);
cat("\n# already-installed packages:\n");
print( already.installed.packages );
pkgs.still.to.install <- setdiff(pkgs.desired,already.installed.packages);
pkgs.still.to.install <- sort(unique(c("rstan",pkgs.still.to.install)));
cat("\n# packages to be installed:\n");
print( pkgs.still.to.install );
install.packages(
pkgs = c("KernSmooth","lattice"),
lib = myLibrary,
repos = myRepoURL,
dependencies = TRUE # c("Depends", "Imports", "LinkingTo", "Suggests")
);
library(
package = "KernSmooth",
character.only = TRUE,
lib.loc = myLibrary
);
library(
package = "lattice",
character.only = TRUE,
lib.loc = myLibrary
);
install.packages(
pkgs = pkgs.still.to.install,
lib = myLibrary,
repos = myRepoURL,
type = install.type,
dependencies = TRUE # c("Depends", "Imports", "LinkingTo", "Suggests")
);
cat("\n##### first-round installation complete: not-yet-installed packages in Rpackages-desired.txt ...\n");
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
already.installed.packages <- as.character(
installed.packages(lib.loc = c(.libPaths(),myLibrary))[,"Package"]
);
pkgs.still.to.install <- sort(setdiff(pkgs.desired,already.installed.packages));
if ( length(pkgs.still.to.install) > 0 ) {
cat("\n##### second-round installation begins: not-yet-installed packages in Rpackages-desired.txt ...\n");
cat("\n# already-installed packages:\n");
print( already.installed.packages );
cat("\n# packages to be installed:\n");
print( pkgs.still.to.install );
myRepoURL <- "https://cran.microsoft.com";
print(paste("\n# myRepoURL",myRepoURL,sep=" = "));
install.packages(
pkgs = pkgs.still.to.install,
lib = myLibrary,
repos = myRepoURL,
type = install.type,
dependencies = TRUE # c("Depends", "Imports", "LinkingTo", "Suggests")
);
cat("\n##### second-round installation complete: not-yet-installed packages in Rpackages-desired.txt ...\n");
}
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
already.installed.packages <- as.character(
installed.packages(lib.loc = c(.libPaths(),myLibrary))[,"Package"]
);
pkgs.still.to.install <- sort(setdiff(pkgs.desired,already.installed.packages));
if ( length(pkgs.still.to.install) > 0 ) {
cat("\n##### third-round installation begins: not-yet-installed packages in Rpackages-desired.txt ...\n");
cat("\n# already-installed packages:\n");
print( already.installed.packages );
cat("\n# packages to be installed:\n");
print( pkgs.still.to.install );
is.cloud.or.microsoft <- grep(x = DF.CRAN.mirrors[,'URL'], pattern = "(cloud|microsoft)")
DF.CRAN.mirrors <- DF.CRAN.mirrors[setdiff(1:nrow(DF.CRAN.mirrors),is.cloud.or.microsoft),];
DF.CRAN.mirrors <- DF.CRAN.mirrors[DF.CRAN.mirrors[,'OK'] == 1,];
if ( nrow(DF.CRAN.mirrors) == 0 ) {
cat("\n# Found no additional CRAN mirrors; do nothing.\n");
} else {
random.row.index <- sample(x = seq_len(nrow(DF.CRAN.mirrors)), size = 1);
myRepoURL <- DF.CRAN.mirrors[random.row.index,"URL"];
print(paste("\n# myRepoURL",myRepoURL,sep=" = "));
install.packages(
pkgs = pkgs.still.to.install,
lib = myLibrary,
repos = myRepoURL,
type = install.type,
dependencies = TRUE # c("Depends", "Imports", "LinkingTo", "Suggests")
);
}
cat("\n##### third-round installation complete: not-yet-installed packages in Rpackages-desired.txt ...\n");
}
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
is.linux <- grepl(x = sessionInfo()[['platform']], pattern = 'linux', ignore.case = TRUE);
if ( is.linux ) {
cat("\n##### special installation on Linux begins ...\n");
already.installed.packages <- as.character(
installed.packages(lib.loc = c(.libPaths(),myLibrary))[,"Package"]
);
pkgs.still.to.install <- sort(setdiff(pkgs.desired,already.installed.packages));
if ( length(pkgs.still.to.install) == 0 ) {
cat("\n### all special-install packages have already been installed ...\n");
} else {
cat("\n### installation (on Linux) begins ...\n");
### See instructions for installing arrow on Linux here:
### https://cran.r-project.org/web/packages/arrow/vignettes/install.html
options(
HTTPUserAgent = sprintf(
"R/%s R (%s)",
getRversion(),
paste(getRversion(), R.version["platform"], R.version["arch"], R.version["os"])
)
);
install.packages(
pkgs = pkgs.still.to.install,
lib = myLibrary,
repos = "https://packagemanager.rstudio.com/all/__linux__/focal/latest"
);
cat("\n### installation (on Linux) complete ...\n");
}
cat("\n##### special installation on Linux complete ...\n");
}
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
# On macOS, install also: spDataLarge, getSpatialData
is.macOS <- grepl(x = sessionInfo()[['platform']], pattern = 'apple', ignore.case = TRUE);
if ( is.macOS ) {
cat("\n##### installation begins: 'spDataLarge' ...\n");
install.packages(
pkgs = 'spDataLarge',
lib = myLibrary,
repos = 'https://nowosad.github.io/drat/',
type = 'source'
);
cat("\n##### installation complete: 'spDataLarge' ...\n");
.libPaths(c(myLibrary,.libPaths()));
github.repos <- c("r-spatial/RQGIS3","16EAGLE/getSpatialData");
for ( github.repo in github.repos ) {
cat(paste0("\n##### installation begins: '",github.repo,"' ...\n"));
devtools::install_github(repo = github.repo, upgrade = "always");
cat(paste0("\n##### installation complete: '",github.repo,"' ...\n"));
}
}
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
my.colnames <- c("Package","Version","License","License_restricts_use","NeedsCompilation","Built");
DF.installed.packages <- as.data.frame(installed.packages(lib.loc = myLibrary)[,my.colnames]);
write.table(
file = "Rpackages-newlyInstalled.txt",
x = DF.installed.packages,
sep = "\t",
quote = FALSE,
row.names = FALSE,
col.names = TRUE
);
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
pkgs.notInstalled <- setdiff(
pkgs.desired,
as.character(installed.packages(lib.loc = myLibrary)[,"Package"])
);
write.table(
file = "Rpackages-notInstalled.txt",
x = data.frame(package.notInstalled = sort(pkgs.notInstalled)),
quote = FALSE,
row.names = FALSE,
col.names = TRUE
);
###################################################
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
cat("\n##### warnings()\n");
print( warnings() );
cat("\n##### sessionInfo()\n");
print( sessionInfo() );
### ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ###
cat("\n##### Sys.time()\n");
print( Sys.time() );
stop.proc.time <- proc.time();
cat("\n##### stop.proc.time - start.proc.time\n");
print( stop.proc.time - start.proc.time );
sink(type = "output" );
sink(type = "message");
closeAllConnections();
### File: 604/HW/HW2/Hwk2_Luke Henslee.R (repo: lukehenslee/Coursework) ###
##########################################################
# FISH 604
# Homework 2 script file
# Chatham sablefish growth
# Luke Henslee- Sep 16, 2021
##########################################################
library(tidyverse)
############ Prob. 1: Data import
sable <- read.csv("sablefish.csv", as.is=F)
head(sable) # Look at first few rows
hist(sable$Age) # Age composition
table(sable$Age) # Same in table format
summary(sable) # Basic summary of each variable
is.factor(sable$Sex)
############ Prob. 2: Data exploration:
### 1. Plot Length-at-age, grouped by sex:
plot(Length ~ Age, data=sable, subset = Sex == "Female", col=2,
xlab = "Age", ylab = "Length (mm)")
points(Length ~ Age, data = sable, subset = Sex == "Male", col=4)
# ggplot version
p1 <- ggplot(data = sable, aes(x= Age, y= Length))
p1 + geom_point(aes(color = Sex))
## Jittering coordinates to show points that overlap
# base graphics:
plot(jitter(Length) ~ jitter(Age), data=sable, subset = Sex == "Female",
col=2, xlab = "Age", ylab = "Length (mm)")
points(jitter(Length) ~ jitter(Age+0.5), data = sable,
subset = Sex == "Male", col=4)
# ggplot version:
p1 + geom_jitter(aes(color = Sex))
### 2. Plot Length-at-age by year, grouped by sex:
sable$Year <- factor(sable$Year)
# From 'lattice' package:
library(lattice)
xyplot(jitter(Length) ~ jitter(Age) | factor(Year), data=sable, groups = Sex)
# Females only, base graphics, with scatterplot smoother (spline):
sub <- sable$Sex == "Female"
scatter.smooth(jitter(sable$Age[sub]), jitter(sable$Length[sub]), col=2,
xlab = "Age", ylab = "Length (mm)")
# ggplot, females only
ggplot(data = subset(sable, Sex == "Female"), aes(x = Age, y = Length)) +
geom_jitter() +
geom_smooth(method = "loess")
# lattice graphics, scatterplots and scatterplot smoothers, by year
xyplot(jitter(Length) ~ jitter(Age) | factor(Year), data=sable, groups=Sex, auto.key=T)
xyplot(jitter(Length) ~ jitter(Age) | factor(Year), data=sable, groups=Sex,
xlim = c(0,50), ylim = c(450, 1000), auto.key=T)
xyplot(Length ~ Age | factor(Year), data=sable, groups=Sex, panel = "panel.superpose",
panel.groups = "panel.loess", auto.key=T)
# ggplot version, with loess smoother and 95% confidence bands:
p1 + geom_smooth(method = "loess", aes (color = Sex)) +
facet_wrap(~Year)
# scatterplot + loess smoother:
p1 + geom_jitter(aes(color = Sex), pch=1) +
geom_smooth(aes(color=Sex), method="loess", level=0.95) +
facet_wrap(~Year)
########### Prob. 3: Fit Ludwig von Bertalanffy (LvB)
########### growth model to sablefish data
# LvB growth model function to compute predicted values:
LvB <- function(a, k, L.inf, a0) {
L.inf*(1-exp(-k*(a-a0)))
}
# Try out some starting values and superimpose on scatterplot:
ST <- c(k = 0.05, L.inf = 800 , a0 = -15)
# Make sure to pick sensible values for L.inf and a0, then add line:
plot(jitter(Length) ~ jitter(Age), data=sable, col=2)
lines(1:80, LvB(1:80, k = ST[1], L.inf = ST[2], a0 = ST[3]), lwd=3)
# Fit the model using 'nls()' with the regression equation
# 'nls' finds parameters that minimize the sum of squared
# differences between the left-hand side (observed lengths)
# and the right-hand side (predicted lengths) of the formula
fit <- nls(Length ~ LvB(Age, k, L.inf, a0), data = sable, start = ST)
summary(fit)
coef(fit)
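# Cross-check (a sketch, not part of the assignment): 'nls' should roughly
# agree with minimizing the residual sum of squares directly via 'optim'.
# 'sse' and 'fit.optim' are hypothetical names introduced here for illustration.
sse <- function(p, age, len) sum( (len - LvB(age, p["k"], p["L.inf"], p["a0"]))^2 )
fit.optim <- optim(par = ST, fn = sse, age = sable$Age, len = sable$Length)
fit.optim$par # should be close to coef(fit), up to optimizer tolerance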
# Visualize the overall model fit
plot(jitter(Length) ~ jitter(Age), data=sable, col=2)
cf <- coef(fit)
# Add the fitted model to the scatterplot...
lines(1:80, LvB(1:80, k = cf[1], L.inf = cf[2], a0 = cf[3]), lwd=3)
##### Prob. 4: Diagnostics
r <- resid(fit) # Extract residuals (make sure 'fit' is the combined model fit)
cf <- coef(fit)
r2 <- sable$Length - LvB(sable$Age,k=cf[1],L.inf=cf[2], a0=cf[3])
# Example plot:
boxplot(r ~ Year, data = sable); abline(h=0, col=2)
##### Prob. 5: Analyses by sex
### File: Black Friday Sales/Demographic.R (repo: sachinshubhams/Black-Friday-Sales) ###
# Libraries this app needs (presumably loaded elsewhere in the original repo):
library(shiny)
library(shinydashboard)
library(ggplot2)
# 'data' (the Black Friday transactions data frame) is assumed to be loaded beforehand.
frow_c <- fluidRow(
box(
title = "Purchase amount by Males and Females"
,status = "primary"
,solidHeader = TRUE,background = 'purple'
,collapsible = TRUE
,plotOutput("pie_1")
),
box(
title = "Purchase amount by Age intervals"
,status = "primary"
,solidHeader = TRUE,background = 'purple'
,collapsible = TRUE
,plotOutput("pie_2")),
box(
title = "Purchase amount by Marital Status"
,status = "primary"
,solidHeader = TRUE,background = 'purple'
,collapsible = TRUE
,plotOutput("pie_3")),
box(
title = "Purchase amount by City categories"
,status = "primary"
,solidHeader = TRUE,background = 'purple'
,collapsible = TRUE
,plotOutput("pie_4")),
box(
title = "Purchase by Stay in Current city"
,status = "primary"
,solidHeader = TRUE,background = 'purple'
,collapsible = TRUE
,plotOutput("bar_1")),
box(
title = "Purchase by Occupation"
,status = "primary"
,solidHeader = TRUE,background = 'purple'
,collapsible = TRUE
,plotOutput("bar_2")))
frow_d<-fluidRow(
box(
title = "Purchase by Product category 1"
,status = "primary"
,solidHeader = TRUE,background = 'purple'
,collapsible = TRUE
,plotOutput("bar_3")),
box(
title = "Purchase by Product category 2"
,status = "primary"
,solidHeader = TRUE,background = 'purple'
,collapsible = TRUE
,plotOutput("bar_4")),
box(
title = "Purchase by Product category 3"
,status = "primary"
,solidHeader = TRUE,background = 'purple'
,collapsible = TRUE
,plotOutput("bar_5")),
box(
title = "Top 10 Products"
,status = "primary"
,solidHeader = TRUE,background = 'purple'
,collapsible = TRUE
,plotOutput("bar_6")))
ui<-shinyUI(
dashboardPage(
dashboardHeader(title = "BLACK FRIDAY SALES",titleWidth = 300),
dashboardSidebar(
sidebarMenu(id = 'sidebarmenu',
menuItem("Demographics Insights",tabName = "insight", icon = icon("dashboard"))
)),
dashboardBody(
tabItems(
tabItem("insight",frow_c)
)
),skin = 'red'
)
)
df1<-aggregate(data$Purchase~data$Gender,data = data,FUN = sum)
df2<-aggregate(data$Purchase~data$Age,data = data,FUN = sum)
df3<-aggregate(data$Purchase~data$Marital_Status,data = data,FUN = sum)
df4<-aggregate(data$Purchase~data$City_Category,data = data,FUN = sum)
df5<-aggregate(data$Purchase~data$Stay_In_Current_City_Years,data = data,FUN = sum)
df6<-aggregate(data$Purchase~data$Occupation,data = data,FUN = sum)
server <- function(input, output,session){
output$pie_1<-renderPlot({
ggplot(df1, aes(x="", y=df1$`data$Purchase`, fill=df1$`data$Gender`))+
geom_bar(position ="fill" ,width = 1, stat = "identity")+coord_polar("y", start=0)+
geom_text(aes(label=scales::percent(df1$`data$Purchase`/sum(df1$`data$Purchase`))),
stat='identity',position=position_fill(vjust=0.5))+ggtitle("Purchase amount by Males and Females")+
ylab("")+labs(fill="Gender levels")
})
output$pie_2<-renderPlot({
ggplot(df2, aes(x="", y=df2$`data$Purchase`, fill=df2$`data$Age`))+
geom_bar(position ="fill" ,width = 1, stat = "identity")+coord_polar("y", start=0)+
geom_text(aes(label=scales::percent(df2$`data$Purchase`/sum(df2$`data$Purchase`))),
stat='identity',position=position_fill(vjust=0.5))+ggtitle("Purchase amount by Age intervals")+
ylab("")+labs(fill="Age levels")
})
output$pie_3<-renderPlot({
ggplot(df3, aes(x="", y=df3$`data$Purchase`, fill=df3$`data$Marital_Status`))+
geom_bar(position ="fill" ,width = 1, stat = "identity")+coord_polar("y", start=0)+
geom_text(aes(label=scales::percent(df3$`data$Purchase`/sum(df3$`data$Purchase`))),
stat='identity',position=position_fill(vjust=0.5))+ggtitle("Purchase amount by Marital Status")+
ylab("")+labs(fill="Marital Status Levels")
})
output$pie_4<-renderPlot({
ggplot(df4, aes(x="", y=df4$`data$Purchase`, fill=df4$`data$City_Category`))+
geom_bar(position ="fill" ,width = 1, stat = "identity")+coord_polar("y", start=0)+
geom_text(aes(label=scales::percent(df4$`data$Purchase`/sum(df4$`data$Purchase`))),
stat='identity',position=position_fill(vjust=0.5))+ggtitle("Purchase amount by City categories")+
ylab("")+labs(fill="City Categories")
})
output$bar_1<-renderPlot({
ggplot(df5, aes(x="", y=df5$`data$Purchase`, fill=df5$`data$Stay_In_Current_City_Years`)) +
geom_bar(position ="fill" ,width = 1,stat="identity")+
geom_text(aes(label=scales::percent(df5$`data$Purchase`/sum(df5$`data$Purchase`))),
stat='identity',position=position_fill(vjust= 0.5))+
ggtitle("Purchase by Stay in Current city")+xlab("Years of Stay in Current City")+
labs(fill="Years in City")
})
output$bar_2<-renderPlot({
ggplot(df6, aes(x=df6$`data$Occupation`, y=df6$`data$Purchase`, fill=df6$`data$Occupation`)) +
geom_bar(stat="identity")+
ggtitle("Purchase by Occupation")+xlab("Occupation Levels")+
ylab("Purchase Amount")+labs(fill="Occupation Levels")
})
}
shinyApp(ui, server)
### File: GraphPaintedHaplotypes/readPaintedHaploFile.r (repo: RemiMattheyDoret/SimBit) ###
readFirstPartOfHaplotype = function(str)
{
## Examples: "0-500 P434 I0 H0", "anc-end P434 I0 H0"
generations = strsplit(str[1], '-')[[1]]
paintedGeneration = generations[1]
observedGeneration = generations[2]
stopifnot(substring(str[2], 1, 1) == 'P')
stopifnot(substring(str[3], 1, 1) == "I")
stopifnot(substring(str[4], 1, 1) == "H")
patch = as.integer(substring(str[2], 2))
ind = as.integer(substring(str[3], 2))
haplo = as.integer(substring(str[4], 2))
return(c(paintedGeneration, observedGeneration, patch, paste(ind, haplo, sep="_")))
}
readSegment = function(str)
{
ss = strsplit(str, " ")[[1]]
stopifnot(length(ss) == 4)
stopifnot(substring(ss[3], 1, 1) == "P")
stopifnot(substring(ss[4], 1, 1) == "I")
left = as.integer(ss[1])
right = as.integer(ss[2])
patch = as.integer(substring(ss[3], 2))
id = as.integer(substring(ss[4], 2))
stopifnot(left < right)
return(c(left, right, patch, id))
}
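# Quick sanity check of the segment parser (the input string is made up,
# following the "left right P<patch> I<ID>" layout assumed above):
readSegment("12 40 P3 I7") # returns c(12, 40, 3, 7)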
readHaplotype = function(str)
{
twoStrings = strsplit(str, ": ")[[1]]
firstPart = strsplit(twoStrings[1], " ")[[1]]
segmentsPart = strsplit(substring(twoStrings[2], 2, nchar(twoStrings[2])-1), "][", fixed=TRUE)[[1]]
r = as.data.frame(matrix(NA, nrow=length(segmentsPart), ncol = 8))
first = readFirstPartOfHaplotype(firstPart)
for (seg_index in 1:length(segmentsPart))
{
r[seg_index,] = c(first, readSegment(segmentsPart[seg_index]))
}
return(r)
}
readPaintedHaploFile = function(path)
{
## read file
lines = readLines(path)
## Initialize list that will be merged
ldata = vector(mode = "list", length = length(lines))
## Read haplotypes and fill up list
## (index directly into the pre-allocated slots; appending past the end
## would leave the first length(lines) elements as NULL)
for (line_index in 1:length(lines))
{
ldata[[line_index]] = readHaplotype(lines[line_index])
}
## Merge list
data = do.call("rbind", ldata)
colnames(data) = c("paintedGeneration", "observedGeneration", "sampled_patch", "sampled_haploID", "left", "right", "patch", "ID")
data$paintedGeneration = as.integer(data$paintedGeneration)
data$observedGeneration = as.integer(data$observedGeneration)
data$sampled_patch = as.integer(data$sampled_patch)
data$left = as.integer(data$left)
data$right = as.integer(data$right)
data$patch = as.integer(data$patch)
data$ID = as.integer(data$ID)
return(data)
}
### File: Final.R (repo: tobiasdahlnielsen/P9) ###
################################################################################
######################### Initial Data setup ###################################
################################################################################
source("Lib.R")
spot_data <- spot_data %>% mutate(dd = hour(StartUTC), wd = wday(StartUTC), yd = month(StartUTC),Yd = year(StartUTC))
acf(spot_data$DEForecast,300,main="Acf plot for Spot_DE_Forecast")
for (i in 1:4) {
med <- median(spot_data[,i+1])
mea <- mean(spot_data[,i+1])
out.in <- which(spot_data[,i+1]>(med+10*mea))
if (!(length(out.in)==0)) {
spot_data[,(i+1)] <- medianswap(spot_data[,(i+1)],out.in)
}
}
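# 'medianswap' above comes from Lib.R (not shown in this file). A minimal
# stand-in with the assumed behaviour -- overwrite the flagged outlier indices
# with the median of the remaining observations -- under a hypothetical name:
medianswap_sketch <- function(x, idx) {
x[idx] <- median(x[-idx]) # replace outliers by a robust level
x
}
medianswap_sketch(c(1, 2, 100, 3), 3) # returns c(1, 2, 2, 3)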
plot(spot_data$DE,type="l")
plot(spot_data$FR,type="l")
plot(spot_data$DEForecast,type="l")
plot(spot_data$FRForecast,type="l")
spot_data$StartUTC
Data = list()
Data[[1]] <- spot_data %>% filter(year(spot_data$StartUTC) == 2016, month(StartUTC) < 7)
Data[[2]] <- spot_data %>% filter(year(spot_data$StartUTC) == 2016, month(StartUTC) > 6)
Data[[3]] <- spot_data %>% filter(year(spot_data$StartUTC) == 2017, month(StartUTC) < 7)
Data[[4]] <- spot_data %>% filter(year(spot_data$StartUTC) == 2017, month(StartUTC) > 6)
Data[[5]] <- spot_data %>% filter(year(spot_data$StartUTC) == 2018, month(StartUTC) < 7)
Data[[6]] <- spot_data %>% filter(year(spot_data$StartUTC) == 2018, month(StartUTC) > 6)
Data[[7]] <- spot_data %>% filter(year(spot_data$StartUTC) == 2019, month(StartUTC) < 7)
Data[[8]] <- spot_data %>% filter(year(spot_data$StartUTC) == 2019, month(StartUTC) > 6)
Data[[9]] <- spot_data %>% filter(year(spot_data$StartUTC) == 2020)
FC_RMSE <- c(rep(0,9))
FC_MAE <- c(rep(0,9))
for (i in 1:9) {
FC_RMSE[i] <- sqrt( mean( (Data[[i]]$DE-Data[[i]]$DEForecast)^2 ))
FC_MAE[i] <- mean( abs( Data[[i]]$DE - Data[[i]]$DEForecast ) )
}
par(mfrow=c(3,3))
HY = c("2016H1","2016H2","2017H1","2017H2","2018H1","2018H2","2019H1","2019H2","2020")
for (i in 1:9) {
plot(Data[[i]]$StartUTC, Data[[i]]$DE,type="l",ylab="Price",xlab=" ", main=HY[i])
}
Y16 <- spot_data %>% filter(year(spot_data$StartUTC) == 2016) %>% select(StartUTC)
Y17 <- spot_data %>% filter(year(spot_data$StartUTC) == 2017) %>% select(StartUTC)
Y18 <- spot_data %>% filter(year(spot_data$StartUTC) == 2018) %>% select(StartUTC)
Y19 <- spot_data %>% filter(year(spot_data$StartUTC) == 2019) %>% select(StartUTC)
Y20 <- spot_data %>% filter(year(spot_data$StartUTC) == 2020) %>% select(StartUTC)
Y_length = c(length(Y16[,1]),length(Y16[,1]), length(Y17[,1]),length(Y17[,1]), length(Y18[,1]),length(Y18[,1]),
length(Y19[,1]), length(Y19[,1]),length(Y20[,1]),length(Y20[,1]))
acf(spot_data$DE,lag.max=300,main="")
# NB: Data_Res (and its *_res columns) is only constructed further down this
# script; run the residual acf calls below after that section.
acf(Data_Res$DE_res,lag.max=300,main="")
acf(spot_data$FR,lag.max=300,main="")
acf(Data_Res$FR_res,lag.max=300,main="")
acf(spot_data$DEForecast,lag.max=300,main="")
acf(Data_Res$DE_FC_res,lag.max=300,main="")
acf(spot_data$FRForecast,lag.max=300,main="")
acf(Data_Res$FR_FC_res,lag.max=300,main="")
ggseasonplot(ts(spot_data$DE[1:(24*10)],frequency=24), year.labels = FALSE,year.labels.left = F)
################################################################################
#################### Fitting a model to DE _ Actual ############################
################################################################################
models_DE = list()
for (i in 1:9) {
models_DE[[i]] <- lm(Data[[i]]$DE ~
time(Data[[i]][,1])+
# I(time(Data[[i]][,1])^2)+
#I(time(Data[[i]][,1])^3)+
cos( ( 2*pi/Y_length[i] ) * I(time(Data[[i]][,1])) )+
sin((2*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#cos(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
cos(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
cos(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#factor(Data[[i]]$Yd)+
#factor(Data[[i]]$yd)+
factor(Data[[i]]$wd)+
factor(Data[[i]]$dd)
)
}
Q = list()
orders <- list()
for (i in 1:9) {
Q[[i]] <- auto.arima(models_DE[[i]]$residuals)
orders[[i]] <- Q[[i]]$coef
}
od = list(c(2,0,2),c(5,0,2),c(1,0,2),c(4,0,1),c(5,0,3),c(4,0,1),c(2,0,3),c(1,0,2),c(4,0,2))
par(mfrow=c(3,3))
sarma24_DE = list()
for (i in 1:9) {
sarma24_DE[[i]] <- arima(models_DE[[i]]$residuals, order = od[[i]], seasonal = list(order=c(1,0,0),period=24))
acf(sarma24_DE[[i]]$residuals,lag.max=400)
}
sarma24_DE_aic_1 = list()
#sarma24_DE_aic_2 = list()
for (i in 1:9) {
acf(sarma24_DE[[i]]$residuals,lag.max=40)
#pacf(sarma24_DE[[i]]$residuals,lag.max=400)
sarma24_DE_aic_1[[i]] <- sarma24_DE[[i]]$aic
#sarma24_DE_aic_2[[i]] <- sarma24_DE[[i]]$aic
#plot(sarma24_DE[[i]]$residuals,type="l")
}
sarma168_DE = list()
for (i in 1:9) {
sarma168_DE[[i]] <- arima(sarma24_DE[[i]]$residuals, seasonal = list(order = c(1,0,0), period=168))
acf(sarma168_DE[[i]]$residuals,lag.max=400)
}
aic_168_DE_1 = list()
for (i in 1:9) {
Acf(sarma168_DE[[i]]$residuals,lag.max=400)
#pacf(sarma168_DE[[i]]$residuals,lag.max=400)
aic_168_DE_1[[i]] = sarma168_DE[[i]]$aic
#aic_168_DE_2[[i]] = sarma168_DE[[i]]$aic
#plot(sarma168_DE[[i]]$residuals)
}
arma_DE = list()
for (i in 1:9) {
arma_DE[[i]] <- auto.arima(sarma168_DE[[i]]$residuals)
Acf(arma_DE[[i]]$residuals)
}
for (i in 1:9) {
Acf(arma_DE[[i]]$residuals,lag.max=300)
}
Residuals_DE = list()
for (i in 1:9) {
Residuals_DE[[i]] <- arma_DE[[i]]$residuals
plot(Residuals_DE[[i]],type="l")
}
# NB: Residuals_FR, Residuals_DE_FC and Residuals_FR_FC are produced in the
# FR / forecast sections further down; run this block after those sections.
data_DE <- unlist(Residuals_DE)
data_FR <- unlist(Residuals_FR)
data_DE_FC <- unlist(Residuals_DE_FC)
data_FR_FC <- unlist(Residuals_FR_FC)
Acf(data_DE_FC,lag.max=300,main="",ylim=c(-0.1,0.3))
plot(spot_data$StartUTC,data_DE_FC,type="l",xlab="Time",ylab="DE_FC_spot_residuals")
hist(data_DE_FC,breaks=160,xlim=c(-40,40),main="",xlab="DE_FC_spot_residuals")
################################################################################
#################### Fitting a model to FR _ Actual ############################
################################################################################
plot(spot_data$FR)
models_FR = list()
for (i in 1:9) {
models_FR[[i]] <- lm(Data[[i]]$FR ~
time(Data[[i]][,1])+
# I(time(Data[[i]][,1])^2)+
#I(time(Data[[i]][,1])^3)+
cos( ( 2*pi/Y_length[i] ) * I(time(Data[[i]][,1])) )+
sin((2*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#cos(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
cos(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
cos(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#factor(Data[[i]]$Yd)+
#factor(Data[[i]]$yd)+
factor(Data[[i]]$wd)+
factor(Data[[i]]$dd)
)
}
par(mfrow=c(3,3))
for (i in 1:9) {
acf(models_FR[[i]]$residuals,75)
}
Q = list()
orders <- list()
for (i in 1:9) {
Q[[i]] <- auto.arima(models_FR[[i]]$residuals)
orders[[i]] <- Q[[i]]$coef
}
od = list(c(4,0,0),c(2,0,1),c(4,0,5),c(4,0,1),c(5,0,4),c(1,0,1),c(3,0,5),c(2,0,3),c(5,0,3))
sarma24_FR = list()
for (i in 1:9) {
sarma24_FR[[i]] <- arima(models_FR[[i]]$residuals, order = od[[i]], seasonal = list(order=c(1,0,0),period=24))
acf(sarma24_FR[[i]]$residuals,lag.max=100)
}
sarma24_FR_aic_1 = list()
#sarma24_FR_aic_2 = list()
for (i in 1:9) {
Acf(sarma24_FR[[i]]$residuals,lag.max=400)
#pacf(sarma24_FR[[i]]$residuals,lag.max=400)
#sarma24_FR_aic_1[[i]] <- sarma24_FR[[i]]$aic
#sarma24_FR_aic_2[[i]] <- sarma24_FR[[i]]$aic
}
sarma168_FR = list()
for (i in 1:9) {
sarma168_FR[[i]] <- arima(sarma24_FR[[i]]$residuals, seasonal = list(order = c(1,0,0), period=168))
acf(sarma168_FR[[i]]$residuals,lag.max=400)
}
aic168_FR_1 = list()
#aic168_FR_2 = list()
for (i in 1:9) {
#Acf(sarma168_FR[[i]]$residuals,lag.max=40)
#pacf(sarma168_FR[[i]]$residuals,lag.max=400)
#aic168_FR_1[[i]] = sarma168_FR[[i]]$aic
#aic168_FR_2[[i]] = sarma168_FR[[i]]$aic
plot(sarma168_FR[[i]]$residuals)
}
arma_FR = list()
for (i in 1:9) {
arma_FR[[i]] <- auto.arima(sarma168_FR[[i]]$residuals)
Acf(arma_FR[[i]]$residuals)
}
for (i in 1:9) {
  Acf(arma_FR[[i]]$residuals,lag.max=300)
}
Residuals_FR = list()
for (i in 1:9) {
Residuals_FR[[i]] <- arma_FR[[i]]$residuals
plot(Residuals_FR[[i]],type="l")
}
################################################################################
#################### Fitting a model to FR _ Forecast ##########################
################################################################################
models_FR_FC = list()
for (i in 1:9) {
models_FR_FC[[i]] <- lm(Data[[i]]$FRForecast ~
time(Data[[i]][,1])+
# I(time(Data[[i]][,1])^2)+
#I(time(Data[[i]][,1])^3)+
cos( ( 2*pi/Y_length[i] ) * I(time(Data[[i]][,1])) )+
sin((2*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#cos(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
cos(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
cos(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#factor(Data[[i]]$Yd)+
#factor(Data[[i]]$yd)+
factor(Data[[i]]$wd)+
factor(Data[[i]]$dd)
)
}
for (i in 1:9) {
acf(models_FR_FC[[i]]$residuals,75)
}
Q = list()
orders <- list()
for (i in 1:9) {
Q[[i]] <- auto.arima(models_FR_FC[[i]]$residuals)
orders[[i]] <- Q[[i]]$coef
}
od = list(c(3,0,2),c(1,0,3),c(5,0,0),c(4,0,1),c(4,0,1),c(4,0,1),c(3,0,2),c(3,0,1),c(1,0,1))
sarma24_FR_FC = list()
for (i in 1:9) {
sarma24_FR_FC[[i]] <- arima(models_FR_FC[[i]]$residuals, order = od[[i]], seasonal = list(order=c(1,0,0),period=24))
acf(sarma24_FR_FC[[i]]$residuals,lag.max=100)
}
sarma24_FR_FC_aic_1 = list()
#sarma24_FR_aic_2 = list()
for (i in 1:9) {
Acf(sarma24_FR_FC[[i]]$residuals,lag.max=50)
#pacf(sarma24_FR_FC[[i]]$residuals,lag.max=400)
sarma24_FR_FC_aic_1[[i]] <- sarma24_FR_FC[[i]]$aic
#sarma24_FR_FC_aic_2[[i]] <- sarma24_FR_FC[[i]]$aic
}
sarma168_FR_FC = list()
for (i in 1:9) {
sarma168_FR_FC[[i]] <- arima(sarma24_FR_FC[[i]]$residuals, seasonal = list(order = c(1,0,0), period=168))
acf(sarma168_FR_FC[[i]]$residuals,lag.max=400)
}
aic168_FR_FC_1 = list()
#aic168_FR_FC_2 = list()
sarma168_FR_FC_res <- c()
for (i in 1:9) {
#Acf(sarma168_FR_FC[[i]]$residuals,lag.max=40)
#pacf(sarma168_FR_FC[[i]]$residuals,lag.max=400)
#aic168_FR_FC_1[[i]] = sarma168_FR_FC[[i]]$aic
#aic168_FR_FC_2[[i]] = sarma168_FR_FC[[i]]$aic
#plot(sarma168_FR_FC[[i]]$residuals)
sarma168_FR_FC_res <- c(sarma168_FR_FC_res,sarma168_FR_FC[[i]]$residuals)
}
hist(sarma168_FR_FC_res,breaks=400,prob=TRUE,xlim=c(-40,40),ylim=c(0,0.2))
lines(density(sarma168_FR_FC_res))
arma_FR_FC = list()
for (i in 1:9) {
arma_FR_FC[[i]] <- auto.arima(sarma168_FR_FC[[i]]$residuals)
#Acf(arma_FR[[i]]$residuals)
print(i)
}
for (i in 1:9) {
Acf(arma_FR_FC[[i]]$residuals,lag.max=300)
}
Residuals_FR_FC = list()
for (i in 1:9) {
Residuals_FR_FC[[i]] <- arma_FR_FC[[i]]$residuals
plot(Residuals_FR_FC[[i]],type="l")
}
data_FR_FC <- unlist(Residuals_FR_FC)
par(mfrow=c(3,3))
test_FR_FC_res <- c()
for (i in 1:9) {
#hist(arma_FR_FC[[i]]$residuals,prob=T,breaks=200)
#lines(density(arma_FR_FC[[i]]$residuals),col="red")
test_FR_FC_res <- c(test_FR_FC_res, arma_FR_FC[[i]]$residuals)
}
par(mfrow=c(1,1))
hist(test_FR_FC_res,prob=T,breaks=400,xlim=c(-40,40),ylim=c(0,0.3))
lines(density(test_FR_FC_res))
# Overlay the fitted skew-t density via stat_function (marg_DE_sstd is assumed
# to come from a skew-t fit, e.g. fGarch::sstdFit, made outside this chunk):
ggplot() +
  geom_histogram( aes(x = test_FR_FC_res, y = ..density..), bins = 200) +
  geom_density( aes( x = test_FR_FC_res, color = 'Residuals')) +
  stat_function( aes(color = "SSTD"), fun = dsstd,
                 args = list(mean = marg_DE_sstd$estimate[1], sd = marg_DE_sstd$estimate[2],
                             nu = marg_DE_sstd$estimate[3], xi = marg_DE_sstd$estimate[4])) +
  xlim(c(-40,40)) +
  ylim(c(0,0.3))
################################################################################
#################### Fitting a model to DE _ Forecast ##########################
################################################################################
models_DE_FC = list()
for (i in 1:9) {
models_DE_FC[[i]] <- lm(Data[[i]]$DEForecast ~
time(Data[[i]][,1])+
# I(time(Data[[i]][,1])^2)+
#I(time(Data[[i]][,1])^3)+
cos( ( 2*pi/Y_length[i] ) * I(time(Data[[i]][,1])) )+
sin((2*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#cos(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
cos(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
cos(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#factor(Data[[i]]$Yd)+
#factor(Data[[i]]$yd)+
factor(Data[[i]]$wd)+
factor(Data[[i]]$dd)
)
}
for (i in 1:9) {
acf(models_DE_FC[[i]]$residuals,75)
}
Q = list()
orders <- list()
for (i in 1:9) {
Q[[i]] <- auto.arima(models_DE_FC[[i]]$residuals)
orders[[i]] <- Q[[i]]$coef
}
od = list(c(2,0,5),c(4,0,1),c(1,0,5),c(1,0,2),c(1,0,2),c(1,0,2),c(1,0,2),c(2,0,4),c(1,0,3))
sarma24_DE_FC = list()
for (i in 1:9) {
sarma24_DE_FC[[i]] <- arima(models_DE_FC[[i]]$residuals, order = od[[i]], seasonal = list(order=c(1,0,0),period=24))
acf(sarma24_DE_FC[[i]]$residuals,lag.max=100)
}
sarma24_DE_FC_aic_1 = list()
#sarma24_DE_aic_2 = list()
for (i in 1:9) {
Acf(sarma24_DE_FC[[i]]$residuals,lag.max=50)
#pacf(sarma24_DE_FC[[i]]$residuals,lag.max=400)
sarma24_DE_FC_aic_1[[i]] <- sarma24_DE_FC[[i]]$aic
#sarma24_DE_FC_aic_2[[i]] <- sarma24_DE_FC[[i]]$aic
}
sarma168_DE_FC = list()
for (i in 1:9) {
sarma168_DE_FC[[i]] <- arima(sarma24_DE_FC[[i]]$residuals, seasonal = list(order = c(1,0,0), period=168))
acf(sarma168_DE_FC[[i]]$residuals,lag.max=400)
}
aic168_DE_FC_1 = list()
#aic168_DE_FC_2 = list()
for (i in 1:9) {
Acf(sarma168_DE_FC[[i]]$residuals,lag.max=40)
#pacf(sarma168_DE_FC[[i]]$residuals,lag.max=400)
aic168_DE_FC_1[[i]] = sarma168_DE_FC[[i]]$aic
#aic168_DE_FC_2[[i]] = sarma168_DE_FC[[i]]$aic
#plot(sarma168_DE_FC[[i]]$residuals)
}
arma_DE_FC = list()
for (i in 1:9) {
arma_DE_FC[[i]] <- auto.arima(sarma168_DE_FC[[i]]$residuals)
Acf(arma_DE_FC[[i]]$residuals)
}
for (i in 1:9) {
Acf(arma_DE_FC[[i]]$residuals,lag.max=300)
}
Residuals_DE_FC = list() # residuals of the final DE forecast models
for (i in 1:9) {
  Residuals_DE_FC[[i]] <- arma_DE_FC[[i]]$residuals
}
for (i in 1:9) {
plot(Residuals_DE_FC[[i]],type="l")
}
################################################################################
########################### Combine Data #######################################
################################################################################
DE_res <- unlist(Residuals_DE)
FR_res <- unlist(Residuals_FR)
DE_FC_res <- unlist(Residuals_DE_FC)
FR_FC_res <- unlist(Residuals_FR_FC)
Data_Res <- data.frame(StartUTC=spot_data$StartUTC,DE_res,FR_res,DE_FC_res,FR_FC_res)
View(Data_Res)
c(sqrt(mean(Data_Res$DE_res^2)),sqrt(mean(Data_Res$FR_res^2)),sqrt(mean(Data_Res$DE_FC_res^2)),sqrt(mean(Data_Res$FR_FC_res^2)))
par(mfrow=c(2,2))
plot(Data_Res$StartUTC,DE_res,type="l",xlab="Time",ylab="Price",main="DE_Spot")
plot(Data_Res$StartUTC,FR_res,type="l",xlab="Time",ylab="Price",main="FR_Spot")
plot(Data_Res$StartUTC,DE_FC_res,type="l",xlab="Time",ylab="Price",main="DE_Forecast")
plot(Data_Res$StartUTC,FR_FC_res,type="l",xlab="Time",ylab="Price",main="FR_Forecast")
Acf(DE_res,main="DE_Spot")
Acf(FR_res,main="FR_Spot")
Acf(DE_FC_res,main="DE_Forecast")
Acf(FR_FC_res,main="FR_Forecast")
hist(Data_Res$DE_res,breaks=80,xlim=c(-40,40),main="DE_spot",xlab="Price")
hist(Data_Res$FR_res,breaks=80,xlim=c(-40,40),main="FR_Spot",xlab="Price")
hist(Data_Res$DE_FC_res,breaks=80,xlim=c(-40,40),main="DE_Forecast",xlab="Price")
hist(Data_Res$FR_FC_res,breaks=80,xlim=c(-40,40),main="FR_Forecast",xlab="Price")
LB_test_DE = c()
LB_test_FR = c()
LB_test_DE_FC = c()
LB_test_FR_FC = c()
for (i in 1:5) {
test1 = Box.test(Data_Res$DE_res,type="Ljung-Box",lag=i)
test2 = Box.test(Data_Res$FR_res,type="Ljung-Box",lag=i)
test3 = Box.test(Data_Res$DE_FC_res,type="Ljung-Box",lag=i)
test4 = Box.test(Data_Res$FR_FC_res,type="Ljung-Box",lag=i)
LB_test_DE = c(LB_test_DE,test1$p.value)
LB_test_FR = c(LB_test_FR,test2$p.value)
LB_test_DE_FC = c(LB_test_DE_FC,test3$p.value)
LB_test_FR_FC = c(LB_test_FR_FC,test4$p.value)
}
LB_test_DE
LB_test_FR
LB_test_DE_FC
LB_test_FR_FC
adf_DE = adf.test(Data_Res$DE_res)
adf_FR = adf.test(Data_Res$FR_res)
adf_DE_FC = adf.test(Data_Res$DE_FC_res)
adf_FR_FC = adf.test(Data_Res$FR_FC_res)
list(adf_DE,adf_FR,adf_DE_FC,adf_FR_FC)
################################################################################
######################### Estimating Marginals ################################
################################################################################
DE_marg <- ecdf(Data_Res$DE_res)
FR_marg <- ecdf(Data_Res$FR_res)
DE_FC_marg <- ecdf(Data_Res$DE_FC_res)
FR_FC_marg <- ecdf(Data_Res$FR_FC_res)
plot(DE_marg,xlim=c(-20,20))
plot(FR_marg,xlim=c(-20,20))
plot(DE_FC_marg,xlim=c(-20,20))
plot(FR_FC_marg,xlim=c(-20,20))
par(mfrow=c(1,1))
plot(DE_marg,xlim=c(-20,20))
lines(FR_marg)
lines(DE_FC_marg)
lines(FR_FC_marg)
par(mfrow=c(2,2))
DE_pdf <- estimatePDF(Data_Res$DE_res)
plot(DE_pdf$x,DE_pdf$pdf)
################################################################################
############# Fitting Copulas and computing Dependence Measures ################
################################################################################
DE_Copula_select <- BiCopSelect(pobs(Data_Res$DE_FC_res),pobs(Data_Res$DE_res),familyset = NA)
FR_Copula_select <- BiCopSelect(pobs(Data_Res$FR_FC_res),pobs(Data_Res$FR_res),familyset = NA)
fam <- c(1,2,3,4)
DE_emp_cops <- list()
FR_emp_cops <- list()
AICs_DE <- c()
AICs_FR <- c()
for (i in 1:4) {
DE_emp_cops[[i]] <- BiCopSelect(pobs(Data_Res$DE_FC_res),pobs(Data_Res$DE_res),familyset = fam[i])
FR_emp_cops[[i]] <- BiCopSelect(pobs(Data_Res$FR_FC_res),pobs(Data_Res$FR_res),familyset = fam[i])
AICs_DE[i] <- DE_emp_cops[[i]]$AIC
AICs_FR[i] <- FR_emp_cops[[i]]$AIC
print(i)
}
summary(DE_Copula_select)
plot(DE_Copula_select,xlab="DE_FC",ylab="DE")
plot(FR_Copula_select,xlab="FR_FC",ylab="FR")
pseudoobsDE <- Data_Res %>% dplyr::select(DE_res,DE_FC_res) %>% pobs()
pseudoobsFR <- Data_Res %>% dplyr::select(FR_res,FR_FC_res) %>% pobs()
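# What pobs() does, in base R (self-contained sketch): each margin is mapped to
# (0,1) via u_i = rank(x_i) / (n + 1), which preserves the ordering of the data
# and is invariant to monotone transforms.
x_toy <- c(3.2, -1.5, 0.7, 9.9)
u_toy <- rank(x_toy) / (length(x_toy) + 1)
stopifnot(all(u_toy > 0 & u_toy < 1), identical(order(u_toy), order(x_toy)))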
# ### Fitting one with copula fit ###
#
#
# Copula_DE <- fitCopula(tCopula(), pseudoobsDE, method="mpl")
# Copula_FR <- fitCopula(tCopula(), pseudoobsFR, method="mpl")
# summary(Copula_DE)
# summary(Copula_FR)
### Calculating Dependence Measures ###
DE_Kendall <- Data_Res %>% dplyr::select(DE_res,DE_FC_res) %>% cor(method = "kendall")
DE_Spearman <- Data_Res %>% dplyr::select(DE_res,DE_FC_res) %>% cor(method = "spearman")
FR_Kendall <- Data_Res %>% dplyr::select(FR_res,FR_FC_res) %>% cor(method = "kendall")
FR_Spearman <- Data_Res %>% dplyr::select(FR_res,FR_FC_res) %>% cor(method = "spearman")
lambda_DE <- c(lower = fitLambda(pseudoobsDE)[2,1],
upper = fitLambda(pseudoobsDE,lower.tail = FALSE)[2,1])
lambda_FR <- c(lower = fitLambda(pseudoobsFR)[2,1],
upper = fitLambda(pseudoobsFR,lower.tail = FALSE)[2,1])
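# The idea behind a nonparametric lower-tail-dependence estimate, in miniature
# (fitLambda uses a more refined estimator): lambda_L(t) ~= P(U <= t, V <= t) / t
# for small t. The seed, sample size, and noise level below are made up.
set.seed(2)
n_toy <- 5e4
x_toy <- rnorm(n_toy)
y_toy <- x_toy + rnorm(n_toy, sd = 0.3)   # strongly dependent pair
u_toy <- rank(x_toy) / (n_toy + 1)
v_toy <- rank(y_toy) / (n_toy + 1)
t0 <- 0.02
lambda_hat <- mean(u_toy <= t0 & v_toy <= t0) / t0
stopifnot(lambda_hat > 0.5)   # far above the independence value, which is t0 itself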
#lambda_DE_test <- c(lower = fitLambda(cbind( pobs(spot_data$DE),pobs(spot_data$DEForecast) ))[2,1],
# upper = fitLambda(cbind( pobs(spot_data$DE),pobs(spot_data$DEForecast) ),lower.tail = FALSE)[2,1])
### Parametric tail dependence ###
DE_Copula_select$taildep
FR_Copula_select$taildep
DE_FR_Copula$taildep
# pairs.copuladata(Copula_DE@copula)  # needs the copula-package fit above, which is commented out
Sim_DE <- BiCopSim(20000, family = 2, par = 0.4015, par2 = 2.9490)
plot(Sim_DE)
contour(BiCop(family = 2, par = 0.4015, par2 = 2.9490))  # contour of the copula density, not of the raw simulated points
################################################################################
####################### Creating model reversion ###############################
################################################################################
Fitted_DE <- list()
Fitted_FR <- list()
Fitted_DE_FC <- list()
Fitted_FR_FC <- list()
Residuals_DE <- list()
Residuals_FR <- list()
Residuals_DE_FC <- list()
Residuals_FR_FC <- list()
for (i in 1:9) {
Fitted_DE[[i]] <- arma_DE[[i]]$fitted + fitted(sarma168_DE[[i]]) + fitted(sarma24_DE[[i]]) + models_DE[[i]]$fitted.values
Fitted_FR[[i]] <- arma_FR[[i]]$fitted + fitted(sarma168_FR[[i]]) + fitted(sarma24_FR[[i]]) + models_FR[[i]]$fitted.values
Fitted_DE_FC[[i]] <- arma_DE_FC[[i]]$fitted + fitted(sarma168_DE_FC[[i]]) + fitted(sarma24_DE_FC[[i]]) + models_DE_FC[[i]]$fitted.values
Fitted_FR_FC[[i]] <- arma_FR_FC[[i]]$fitted + fitted(sarma168_FR_FC[[i]]) + fitted(sarma24_FR_FC[[i]]) + models_FR_FC[[i]]$fitted.values
# only the final filtering stage leaves unexplained residuals; summing the
# residuals from every stage would double-count the later fits
Residuals_DE[[i]] <- arma_DE[[i]]$residuals
Residuals_FR[[i]] <- arma_FR[[i]]$residuals
Residuals_DE_FC[[i]] <- arma_DE_FC[[i]]$residuals
Residuals_FR_FC[[i]] <- arma_FR_FC[[i]]$residuals
}
fitted_DE <- unlist(Fitted_DE)
fitted_FR <- unlist(Fitted_FR)
fitted_DE_FC <- unlist(Fitted_DE_FC)
fitted_FR_FC <- unlist(Fitted_FR_FC)
residuals_DE <- unlist(Residuals_DE)
residuals_FR <- unlist(Residuals_FR)
residuals_DE_FC <- unlist(Residuals_DE_FC)
residuals_FR_FC <- unlist(Residuals_FR_FC)
DE_sim_prices <- DE_simulation_res + fitted_DE
FR_sim_prices <- FR_simulation_res + fitted_FR
DE_FC_sim_prices <- DE_FC_simulation_res + fitted_DE_FC
FR_FC_sim_prices <- FR_FC_simulation_res + fitted_FR_FC
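# One-stage sanity check of the reversion above (self-contained toy fit): for a
# fitted model, fitted values plus residuals reconstruct the original series
# exactly, which is what the price reversion relies on.
set.seed(4)
y_toy <- cumsum(rnorm(100))
fit_toy <- lm(y_toy ~ seq_along(y_toy))
stopifnot(max(abs(fitted(fit_toy) + resid(fit_toy) - y_toy)) < 1e-8)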
################################################################################
################# Computing Conditional Distributions DE #######################
################################################################################
n_simulations <- 100
DE <- list()
CDFs_DE <- list()
Percentile = c(0.00001,0.0001,0.0005, 0.025, 0.5, 0.975, 0.9995,0.9999,0.99999)
Forecasted_value_DE <- c()
for (j in 1:length(Percentile)) {
#Forecasted_value_DE[j] <- quantile(spot_data$DEForecast,probs=Percentile[j])
cond_sim_cop_DE <- matrix(nrow=100000,ncol=n_simulations)
for (i in 1:n_simulations) {
cond_sim <- BiCopCondSim(100000, cond.val = Percentile[j], cond.var = 1, DE_Copula_select)  # DE_FC is variable 1 of DE_Copula_select, so condition on it via cond.var = 1
cond_sim_cop_DE[,i] <- quantile(Data_Res$DE_res,probs=cond_sim)
#print(i)
}
Forecasted_value_DE[j] <- quantile(Data_Res$DE_FC_res,probs=Percentile[j])
cond_sim_cop <- cbind(cond_sim_cop_DE,Forecasted_value_DE[j])
### If we want to reverse back from residuals to prices time series ###
#cond_sim_cop[,1:n_simulations] <- cond_sim_cop[,1:n_simulations] + fitted_DE
#cond_sim_cop[,n_simulations+1] <- cond_sim_cop[,n_simulations+1] + fitted_DE_FC
DE[[j]] <- cbind(rowMeans(cond_sim_cop[,1:n_simulations]),cond_sim_cop[,n_simulations+1])
CDFs_DE[[j]] <- ecdf(DE[[j]][,1])
print(j)
}
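# A base-R sketch of what BiCopCondSim does for the Gaussian copula (family 1),
# with made-up rho and conditioning level v: given V = v, U | V = v is obtained
# from Z1 | Z2 = qnorm(v) ~ N(rho * qnorm(v), 1 - rho^2).
set.seed(5)
rho_toy <- 0.7
v_toy <- 0.975
z2_toy <- qnorm(v_toy)
u_cond <- pnorm(rho_toy * z2_toy + sqrt(1 - rho_toy^2) * rnorm(1e5))
stopifnot(mean(u_cond) > 0.5)   # conditioning on a high v shifts the draws upward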
names(Forecasted_value_DE) <- names(Forecasted_value)
plot(CDFs_DE[[1]],xlim=c(-60,80),xlab="x",ylab="P( DE_res < x | DE_FC_res = y )",main="",col=1)
for (j in 2:length(Percentile)) {
lines(CDFs_DE[[j]],col=j)
}
spaces <- c(" "," "," "," "," "," "," "," ","")
temp <- legend("bottomright",legend=c("","","","","","","","",""),lwd=2,col=cols
,text.width = strwidth("1,000,000,000,000,000"), xjust = 1, yjust = 1)
text(temp$rect$left + temp$rect$w, temp$text$y,
c(paste(round(Forecasted_value_DE,digits = 2),paste0(" = Q_",names(Forecasted_value_DE)),spaces)), pos = 2)
L <- c()
U <- c()
means <- c()
for (j in 1:length(Percentile)) {
sim1 <- BiCopCondSim(10000, cond.val = Percentile[j], cond.var = 1, DE_Copula_select)  # DE_FC is variable 1, so condition on it via cond.var = 1
cond_pdf_DE <- quantile(Data_Res$DE_res,probs=sim1)
L[j] <- mean(cond_pdf_DE) - qnorm(1 - 0.05 / 2) * sd(cond_pdf_DE) / sqrt(length(cond_pdf_DE))  # normal critical value, not a sample quantile
U[j] <- mean(cond_pdf_DE) + qnorm(1 - 0.05 / 2) * sd(cond_pdf_DE) / sqrt(length(cond_pdf_DE))
means[j] <- mean(cond_pdf_DE)
}
plot(means,type="l",ylim=c(-62,62),axes=FALSE, lwd = 2,
ylab="E[ DE_res | DE_FC_res ]",xlab="Quantiles for DE_Forecast_residuals")
lines(Forecasted_value_DE,col="purple",lwd=2,lty=2)
lines(L,col="red",lwd=2)
lines(U,col="blue",lwd=2)
axis(2,ylim=c(0,2))
axis(1, at=1:9, labels=paste0("Q_",names(Forecasted_value_DE)))
legend("bottomright",legend=c("E[ DE_res | DE_FC_res ]", "Upper Confidence",
"Lower Confidence", "DE_FC_res")
,lty=c(1,1,1,2), lwd=2,col=c("black","blue","red","purple"))
################################################################################
################# Computing Conditional Distributions FR #######################
################################################################################
n_simulations <- 100
FR <- list()
CDFs_FR <- list()
Percentile = c(0.00001,0.0001,0.0005, 0.025, 0.5, 0.975, 0.9995,0.9999,0.99999)
Forecasted_value_FR <- c()
for (j in 1:length(Percentile)) {
#Forecasted_value_FR[j] <- quantile(spot_data$FRForecast,probs=Percentile[j])
cond_sim_cop_FR <- matrix(nrow=100000,ncol=n_simulations)
for (i in 1:n_simulations) {
cond_sim <- BiCopCondSim(100000, cond.val = Percentile[j], cond.var = 1, FR_Copula_select)  # FR_FC is variable 1 of FR_Copula_select, so condition on it via cond.var = 1
cond_sim_cop_FR[,i] <- quantile(Data_Res$FR_res,probs=cond_sim)
#print(i)
}
Forecasted_value_FR[j] <- quantile(Data_Res$FR_FC_res,probs=Percentile[j])
cond_sim_cop <- cbind(cond_sim_cop_FR,Forecasted_value_FR[j])
### If we want to reverse back from residuals to prices time series ###
#cond_sim_cop[,1:n_simulations] <- cond_sim_cop[,1:n_simulations] + fitted_FR
#cond_sim_cop[,n_simulations+1] <- cond_sim_cop[,n_simulations+1] + fitted_FR_FC
FR[[j]] <- cbind(rowMeans(cond_sim_cop[,1:n_simulations]),cond_sim_cop[,n_simulations+1])
CDFs_FR[[j]] <- ecdf(FR[[j]][,1])
print(j)
}
names(Forecasted_value_FR) <- names(Forecasted_value)
plot(CDFs_FR[[1]],xlim=c(-60,180),xlab="x",ylab="P( FR_res < x | FR_FC_res = y )",main="",col=1)
for (j in 2:length(Percentile)) {
lines(CDFs_FR[[j]],col=j)
}
spaces <- c(" "," "," "," "," "," "," "," ","")
temp <- legend("bottomright",legend=c("","","","","","","","",""),lwd=2,col=cols
,text.width = strwidth("1,000,000,000,000,000"), xjust = 1, yjust = 1)
text(temp$rect$left + temp$rect$w, temp$text$y,
c(paste(round(Forecasted_value_FR,digits = 2),paste0(" = Q_",names(Forecasted_value)),spaces)), pos = 2)
L <- c()
U <- c()
means <- c()
for (j in 1:length(Percentile)) {
sim1 <- BiCopCondSim(100000, cond.val = Percentile[j], cond.var = 1, FR_Copula_select)  # FR_FC is variable 1, so condition on it via cond.var = 1
cond_pdf_FR <- quantile(Data_Res$FR_res,probs=sim1)
L[j] <- mean(cond_pdf_FR) - qnorm(1 - 0.05 / 2) * sd(cond_pdf_FR) / sqrt(length(cond_pdf_FR))  # normal critical value, not a sample quantile
U[j] <- mean(cond_pdf_FR) + qnorm(1 - 0.05 / 2) * sd(cond_pdf_FR) / sqrt(length(cond_pdf_FR))
means[j] <- mean(cond_pdf_FR)
print(j)
}
plot(means,type="l",ylim=c(-200,250),axes=FALSE, lwd = 2,
ylab="E[ FR_res | FR_FC_res ]",xlab="Quantiles for FR_Forecast_residuals")
lines(Forecasted_value_FR,col="purple",lwd=2,lty=2)
lines(L,col="red",lwd=2)
lines(U,col="blue",lwd=2)
axis(2,ylim=c(0,2))
axis(1, at=1:9, labels=paste0("Q_",names(Forecasted_value)))
legend("bottomright",legend=c("E[ FR_res | FR_FC_res ]", "Upper Confidence",
"Lower Confidence", "FR_FC_res")
,lty=c(1,1,1,2), lwd=2,col=c("black","blue","red","purple"))
################################################################################
################ Computing dependence measures over time #######################
################################################################################
### fitting copulas on half years ###
copulas_DE <- list()
copulas_FR <- list()
taildependence_DE <- list()
taildependence_FR <- list()
NP_TD_DE_L <- list()
NP_TD_FR_L <- list()
NP_TD_DE_U <- list()
NP_TD_FR_U <- list()
rho_DE <- list()
tau_DE <- list()
rho_FR <- list()
tau_FR <- list()
for (i in 1:9) {
copulas_DE[[i]] <- BiCopSelect(pobs(Residuals_DE[[i]]),pobs(Residuals_DE_FC[[i]]),familyset = NA)
copulas_FR[[i]] <- BiCopSelect(pobs(Residuals_FR[[i]]),pobs(Residuals_FR_FC[[i]]),familyset = NA)
taildependence_DE[[i]] <- copulas_DE[[i]]$taildep[1]
taildependence_FR[[i]] <- copulas_FR[[i]]$taildep[1]
NP_TD_DE_L[[i]] <- c(lower = fitLambda(cbind(pobs(Residuals_DE[[i]]),pobs(Residuals_DE_FC[[i]])))[2,1])
NP_TD_FR_L[[i]] <- c(lower = fitLambda(cbind(pobs(Residuals_FR[[i]]),pobs(Residuals_FR_FC[[i]])))[2,1])
NP_TD_DE_U[[i]] <- c(upper = fitLambda(cbind(pobs(Residuals_DE[[i]]),pobs(Residuals_DE_FC[[i]])),lower.tail = FALSE)[2,1])
NP_TD_FR_U[[i]] <- c(upper = fitLambda(cbind(pobs(Residuals_FR[[i]]),pobs(Residuals_FR_FC[[i]])),lower.tail = FALSE)[2,1])
rho_DE[[i]] <- cor(Residuals_DE[[i]],Residuals_DE_FC[[i]],method = "spearman")
tau_DE[[i]] <- cor(Residuals_DE[[i]],Residuals_DE_FC[[i]],method = "kendall")
rho_FR[[i]] <- cor(Residuals_FR[[i]],Residuals_FR_FC[[i]],method = "spearman")
tau_FR[[i]] <- cor(Residuals_FR[[i]],Residuals_FR_FC[[i]],method = "kendall")
print(i)
}
HY <- c("2016_1","2016_2","2017_1","2017_2","2018_1","2018_2","2019_1","2019_2","2020_1")
HHY <- data.frame(cbind(rho_DE=unlist(rho_DE),rho_FR=unlist(rho_FR),
tau_DE=unlist(tau_DE),tau_FR=unlist(tau_FR),
TD_DE = unlist(taildependence_DE),TD_FR = unlist(taildependence_FR)),
NP_TD_DE_L = unlist(NP_TD_DE_L), NP_TD_DE_U = unlist(NP_TD_DE_U),
NP_TD_FR_L = unlist(NP_TD_FR_L), NP_TD_FR_U = unlist(NP_TD_FR_U),row.names = HY)
par(mfrow=c(1,3))
plot(HHY$rho_DE,type="l",axes=FALSE,xlab="",ylab="Spearman's Rho",ylim=c(0.3,0.8))
lines(HHY$rho_FR,col="blue")
axis(2,ylim=c(0,1))
axis(1, at=1:9, labels=HY,xlab="")
plot(HHY$tau_DE,type="l",axes=FALSE,xlab="",ylab="Kendall's Tau",ylim=c(0.3,0.8))
lines(HHY$tau_FR,col="blue")
axis(2,ylim=c(0,1))
axis(1, at=1:9, labels=HY,xlab="")
#legend("topright",legend = c("Germany","France"), col=c("black","blue"),lty=1, cex=1)
plot(HHY$TD_DE,type="l",axes=FALSE,xlab="",ylab="Tail dependence",ylim=c(0.2,0.7))
lines(HHY$TD_FR,col="blue")
axis(2,ylim=c(0,1))
axis(1, at=1:9, labels=HY,xlab="")
legend("topright",legend = c("Germany","France"), col=c("black","blue"),lty=1, cex=1.2)
par(mfrow=c(1,2))
plot(HHY$NP_TD_DE_L,type="l",axes=FALSE,ylim=c(0.2,0.7),xlab="",ylab="Non-parametric Tail dependence",main="Germany")
lines(HHY$NP_TD_DE_U,col="red")
#lines(HHY$TD_DE,col="blue")
axis(2,ylim=c(0,1))
axis(1, at=1:9, labels=HY,xlab="")
plot(HHY$NP_TD_FR_L,type="l",axes=FALSE,ylim=c(0.20,0.7),xlab="",ylab="",main="France")
lines(HHY$NP_TD_FR_U,col="red")
#lines(HHY$TD_FR,col="blue")
axis(2,ylim=c(0,1))
axis(1, at=1:9, labels=HY,xlab="")
legend("topright",legend = c("Lower","Upper"), col=c("black","red"),lty=1, cex=0.9)  # the parametric (blue) lines are commented out above
par(mfrow=c(1,1))
plot(HHY$TD_DE,type="l",axes=FALSE,ylim=c(0.10,0.5),xlab="",ylab="Parametric Tail dependence",main="")
lines(HHY$TD_FR,col="blue")
axis(2,ylim=c(0,1))
axis(1, at=1:9, labels=HY,xlab="")
legend("topright",legend = c("Germany","France"), col=c("black","blue"),lty=1, cex=1.4)
dseq <- c(2,4,3,5)
par(mfrow=c(1,2))
lim <- list(c(0,0.25),c(0,0.25),c(0,0.25),c(0,0.25))
b <- c(100,100,450,450)
for (i in 1:4){
hist(Data_Res[,dseq[i]], prob=T, col="skyblue2",breaks=b[i],
main=paste("Density of",names(Data_Res)[dseq[i]]),xlab="",xlim=c(-30,30),ylim=lim[[i]])
lines(density(Data_Res[,dseq[i]], adjust=2),type="l", col="red", lwd=2)
curve(dnorm(x, 0, sd(Data_Res[,dseq[i]])), add=T, lty="dotted")
legend("topright", legend=c("Empirical","Normal"),
col=c("red","black"), lty=c(1,2), cex=0.8)
}
RMSE_9 <- c(rep(0,9))
RMSE_9_res <- c(rep(0,9))
MAE_9 <- c(rep(0,9))
MAE_9_res <- c(rep(0,9))
mean_9_res <- c(rep(0,9))
mean_9_res_FC <- c(rep(0,9))
for (i in 1:9) {
#RMSE_9[i] <- sqrt( mean( (Data[[i]]$DE - Data[[i]]$DEForecast )^2 ) )
#RMSE_9_res[i] <- sqrt( mean( ( Residuals_DE[[i]] - Residuals_DE_FC[[i]] )^2 ) )
#MAE_9[i] <- mean( abs( Data[[i]]$DE - Data[[i]]$DEForecast ) )
#MAE_9_res[i] <- mean( abs( Residuals_DE[[i]] - Residuals_DE_FC[[i]] ) )
mean_9_res[i] <- mean(Residuals_DE[[i]])
mean_9_res_FC[i] <- mean(Residuals_DE_FC[[i]])
}
par(mfrow=c(1,2))
# the RMSE/MAE computations above are commented out, so these series are all zeros
#plot(RMSE_9,type="l")
#lines(RMSE_9_res,col="red")
#lines(MAE_9,col="black")
#plot(MAE_9,type="l",ylim=c(0,4))
#lines(MAE_9_res,col="red")
plot(mean_9_res,type="l")
lines(mean_9_res_FC,col="red")
plot(mean_9_res-mean_9_res_FC,type="l")
################################################################################
######################## Comparing distributions ###############################
################################################################################
marg_DE_gaus <- MASS::fitdistr(Data_Res$DE_res,densfun = "normal" )
marg_FR_gaus <- MASS::fitdistr(Data_Res$FR_res,densfun = "normal" )
marg_DE_FC_gaus <- MASS::fitdistr(Data_Res$DE_FC_res,densfun = "normal" )
marg_FR_FC_gaus <- MASS::fitdistr(Data_Res$FR_FC_res,densfun = "normal" )
marg_DE_t <- MASS::fitdistr(Data_Res$DE_res,densfun = "t" )
marg_FR_t <- MASS::fitdistr(Data_Res$FR_res,densfun = "t" )
marg_DE_FC_t <- MASS::fitdistr(Data_Res$DE_FC_res,densfun = "t" )
marg_FR_FC_t <- MASS::fitdistr(Data_Res$FR_FC_res,densfun = "t" )
marg_DE_sstd <- fGarch::sstdFit(Data_Res$DE_res)
marg_FR_sstd <- fGarch::sstdFit(Data_Res$FR_res)
marg_DE_FC_sstd <- fGarch::sstdFit(Data_Res$DE_FC_res)
marg_FR_FC_sstd <- fGarch::sstdFit(Data_Res$FR_FC_res)
# AIC: glance() reports AIC for the t fits; sstdFit() returns the negative
# log-likelihood in $minimum, so its AIC is 2*NLL + 2*k with k = 4 parameters
broom::glance(marg_DE_t)
broom::glance(marg_FR_t)
broom::glance(marg_DE_FC_t)
broom::glance(marg_FR_FC_t)
2*marg_DE_sstd$minimum + 2*4
2*marg_FR_sstd$minimum + 2*4
2*marg_DE_FC_sstd$minimum + 2*4
2*marg_FR_FC_sstd$minimum + 2*4
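# Sanity check of the AIC-by-hand formula used above, AIC = 2*NLL + 2*k, on a
# self-contained 2-parameter normal fit where stats::AIC is available directly.
set.seed(42)
y_toy <- rnorm(200)
fit_toy <- MASS::fitdistr(y_toy, "normal")
stopifnot(abs((-2 * fit_toy$loglik + 2 * 2) - AIC(fit_toy)) < 1e-8)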
hist(Data_Res$DE_res, pch=20, breaks=120, prob=TRUE, main="",ylim=c(0,0.3),xlim=c(-40,40),xlab="DE_residuals")
curve(dnorm(x, mean = marg_DE_gaus$estimate[1], sd=marg_DE_gaus$estimate[2]),col="blue",lwd=2,add=T)
curve(dt((x - marg_DE_t$estimate[1]) / marg_DE_t$estimate[2], df = marg_DE_t$estimate[3]) / marg_DE_t$estimate[2], col="purple", lwd=2, add=T)  # location-scale t: fitdistr("t") returns (m, s, df)
curve(dsstd(x, mean = marg_DE_sstd$estimate[1], sd = marg_DE_sstd$estimate[2], nu =marg_DE_sstd$estimate[3], xi=marg_DE_sstd$estimate[4]), col="red", lwd=2, add=T)
epdfPlot(Data_Res$DE_res,add=T,epdf.lty = 2,epdf.col = "green",epdf.lwd = 2)
legend("topright",legend=c("Empirical","Gaussian","Student's t", "Skewed Student's t"), col=c("green","blue","purple","red"),lty=c(2,1,1,1), lwd=2, )
hist(Data_Res$FR_res, pch=20, breaks=420, prob=TRUE, main="",ylim=c(0,0.3),xlim=c(-40,40),xlab="FR_residuals")
curve(dnorm(x, mean = marg_FR_gaus$estimate[1], sd=marg_FR_gaus$estimate[2]),col="blue",lwd=2,add=T)
curve(dt((x - marg_FR_t$estimate[1]) / marg_FR_t$estimate[2], df = marg_FR_t$estimate[3]) / marg_FR_t$estimate[2], col="purple", lwd=2, add=T)
curve(dsstd(x, mean = marg_FR_sstd$estimate[1], sd = marg_FR_sstd$estimate[2], nu =marg_FR_sstd$estimate[3], xi=marg_FR_sstd$estimate[4]), col="red", lwd=2, add=T)
epdfPlot(Data_Res$FR_res, add=T, epdf.lty = 2, epdf.col = "green", epdf.lwd = 2)
legend("topright",legend=c("Empirical","Gaussian","Student's t", "Skewed Student's t"), col=c("green","blue","purple","red"),lty=c(2,1,1,1), lwd=2, )
hist(Data_Res$DE_FC_res, pch=20, breaks=120, prob=TRUE, main="",ylim=c(0,0.3),xlim=c(-40,40),xlab="DE_FC_residuals")
curve(dnorm(x, mean = marg_DE_FC_gaus$estimate[1], sd=marg_DE_FC_gaus$estimate[2]),col="blue",lwd=2,add=T)
curve(dt((x - marg_DE_FC_t$estimate[1]) / marg_DE_FC_t$estimate[2], df = marg_DE_FC_t$estimate[3]) / marg_DE_FC_t$estimate[2], col="purple", lwd=2, add=T)
curve(dsstd(x, mean = marg_DE_FC_sstd$estimate[1], sd = marg_DE_FC_sstd$estimate[2], nu =marg_DE_FC_sstd$estimate[3], xi=marg_DE_FC_sstd$estimate[4]), col="red", lwd=2, add=T)
epdfPlot(Data_Res$DE_FC_res,add=T,epdf.lty = 2,epdf.col = "green",epdf.lwd = 2)
legend("topright",legend=c("Empirical","Gaussian","Student's t", "Skewed Student's t"), col=c("green","blue","purple","red"),lty=c(2,1,1,1), lwd=2, )
hist(Data_Res$FR_FC_res, pch=20, breaks=620, prob=TRUE, main="",ylim=c(0,0.3),xlim=c(-40,40),xlab="FR_FC_residuals")
curve(dnorm(x, mean = marg_FR_FC_gaus$estimate[1], sd=marg_FR_FC_gaus$estimate[2]),col="blue",lwd=2,add=T)
curve(dt((x - marg_FR_FC_t$estimate[1]) / marg_FR_FC_t$estimate[2], df = marg_FR_FC_t$estimate[3]) / marg_FR_FC_t$estimate[2], col="purple", lwd=2, add=T)
curve(dsstd(x, mean = marg_FR_FC_sstd$estimate[1], sd = marg_FR_FC_sstd$estimate[2], nu =marg_FR_FC_sstd$estimate[3], xi=marg_FR_FC_sstd$estimate[4]), col="red", lwd=2, add=T)
epdfPlot(Data_Res$FR_FC_res,add=T,epdf.lty = 2,epdf.col = "green",epdf.lwd = 2)
legend("topright",legend=c("Empirical","Gaussian","Student's t", "Skewed Student's t"), col=c("green","blue","purple","red"),lty=c(2,1,1,1), lwd=2, )
hist(Data_Res$DE_FC_res,breaks=120,prob=TRUE)
hist(Data_Res$FR_FC_res,breaks=820,prob=TRUE,ylim=c(0,0.3),xlim=c(-40,40))
hist(Data_Res$FR_FC_res, prob=TRUE, breaks=800, ylim=c(0,0.1), xlim=c(-40,40))
lines(density(Data_Res$FR_FC_res))
curve(dsstd(x, mean = marg_FR_FC_sstd$estimate[1], sd = marg_FR_FC_sstd$estimate[2], nu = marg_FR_FC_sstd$estimate[3], xi = marg_FR_FC_sstd$estimate[4]), col="red", lwd=2, add=TRUE)  # overlay the fitted FR_FC skewed-t marginal
################################################################################
######################## Fitting copula using sstd #############################
################################################################################
pseudo_DE <- psstd(Data_Res$DE_res, mean = marg_DE_sstd$estimate[1], sd = marg_DE_sstd$estimate[2], nu =marg_DE_sstd$estimate[3], xi=marg_DE_sstd$estimate[4])
pseudo_FR <- psstd(Data_Res$FR_res, mean = marg_FR_sstd$estimate[1], sd = marg_FR_sstd$estimate[2], nu =marg_FR_sstd$estimate[3], xi=marg_FR_sstd$estimate[4])
pseudo_DE_FC <- psstd(Data_Res$DE_FC_res, mean = marg_DE_FC_sstd$estimate[1], sd = marg_DE_FC_sstd$estimate[2], nu =marg_DE_FC_sstd$estimate[3], xi=marg_DE_FC_sstd$estimate[4])
pseudo_FR_FC <- psstd(Data_Res$FR_FC_res, mean = marg_FR_FC_sstd$estimate[1], sd = marg_FR_FC_sstd$estimate[2], nu =marg_FR_FC_sstd$estimate[3], xi=marg_FR_FC_sstd$estimate[4])
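# The probability-integral transform behind these pseudo-observations, shown
# with the normal instead of the skewed t for a package-free sanity check:
# F(X) is Uniform(0,1), and the quantile function inverts it exactly.
set.seed(3)
x_toy <- rnorm(1000, mean = 2, sd = 3)
u_toy <- pnorm(x_toy, mean = 2, sd = 3)
stopifnot(max(abs(qnorm(u_toy, mean = 2, sd = 3) - x_toy)) < 1e-8,
          abs(mean(u_toy) - 0.5) < 0.05)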
fam <- c(0,1,2,3,4,5,6)
DE_sstd_cops <- list()
FR_sstd_cops <- list()
AICs <- c()
for (i in 1:7) {
#DE_sstd_cops[[i]] <- BiCopSelect(pseudo_DE,pseudo_DE_FC,familyset = fam[i])
FR_sstd_cops[[i]] <- BiCopSelect(pseudo_FR,pseudo_FR_FC,familyset = fam[i])
AICs[i] <- FR_sstd_cops[[i]]$AIC
print(i)
}
DE_sstd_cop <- BiCopSelect(pseudo_DE,pseudo_DE_FC,familyset = NA)
DE_sstd_cop
DE_sstd_cop$taildep
plot(DE_sstd_cop)
plot(DE_Copula_select)
FR_sstd_cop <- BiCopSelect(pseudo_FR,pseudo_FR_FC,familyset = NA)
FR_sstd_cop$AIC
plot(FR_sstd_cop)
FR_sstd_cop$taildep
lambda_DE <- c(lower = fitLambda(cbind(pseudo_DE,pseudo_DE_FC))[2,1],
upper = fitLambda(cbind(pseudo_DE,pseudo_DE_FC) ,lower.tail = FALSE)[2,1])
lambda_FR <- c(lower = fitLambda(cbind(pseudo_FR,pseudo_FR_FC))[2,1],
upper = fitLambda(cbind(pseudo_FR,pseudo_FR_FC) ,lower.tail = FALSE)[2,1])
n_simulations <- 100
DE <- list()
CDFs_DE_sstd <- list()
Percentile = c(0.00001,0.0001,0.0005, 0.025, 0.5, 0.975, 0.9995,0.9999,0.99999)
Forecasted_value_DE <- c()
for (j in 1:length(Percentile)) {
#Forecasted_value_DE[j] <- quantile(spot_data$DEForecast,probs=Percentile[j])
cond_sim_cop_DE <- matrix(nrow=100000,ncol=n_simulations)
for (i in 1:n_simulations) {
cond_sim <- BiCopCondSim(100000, cond.val = Percentile[j], cond.var = 2, DE_sstd_cop)
cond_sim_cop_DE[,i] <- qsstd(cond_sim, mean = marg_DE_sstd$estimate[1], sd = marg_DE_sstd$estimate[2], nu = marg_DE_sstd$estimate[3], xi = marg_DE_sstd$estimate[4])  # BiCopCondSim returns draws of variable 1 (DE), so invert the DE marginal
#print(i)
}
Forecasted_value_DE[j] <- qsstd(Percentile[j], mean = marg_DE_FC_sstd$estimate[1], sd = marg_DE_FC_sstd$estimate[2], nu =marg_DE_FC_sstd$estimate[3], xi=marg_DE_FC_sstd$estimate[4])
cond_sim_cop <- cbind(cond_sim_cop_DE,Forecasted_value_DE[j])
### If we want to reverse back from residuals to prices time series ###
#cond_sim_cop[,1:n_simulations] <- cond_sim_cop[,1:n_simulations] + fitted_DE
#cond_sim_cop[,n_simulations+1] <- cond_sim_cop[,n_simulations+1] + fitted_DE_FC
DE[[j]] <- cbind(rowMeans(cond_sim_cop[,1:n_simulations]),cond_sim_cop[,n_simulations+1])
CDFs_DE_sstd[[j]] <- ecdf(DE[[j]][,1])
print(j)
}
names(Forecasted_value_DE) <- names(Forecasted_value)
plot(CDFs_DE_sstd[[1]],xlim=c(-140,170),xlab="x",ylab="P( DE_res < x | DE_FC_res = y )",main="",col=1)
for (j in 2:length(Percentile)) {
lines(CDFs_DE_sstd[[j]],col=j)
}
spaces <- c(" "," "," "," "," "," "," "," ","")
temp <- legend("bottomright",legend=c("","","","","","","","",""),lwd=2,col=cols
,text.width = strwidth("1,000,000,000,000,000"), xjust = 1, yjust = 1)
text(temp$rect$left + temp$rect$w, temp$text$y,
c(paste(round(Forecasted_value_DE,digits = 2),paste0(" = Q_",names(Forecasted_value)),spaces)), pos = 2)
L <- c()
U <- c()
means_DE <- c()
for (j in 1:length(Percentile)) {
sim1 <- BiCopCondSim(1000000, cond.val = Percentile[j], cond.var = 2, DE_sstd_cop)  # use the sstd-based copula in this section
cond_pdf_DE <- qsstd(sim1, mean = marg_DE_sstd$estimate[1], sd = marg_DE_sstd$estimate[2], nu = marg_DE_sstd$estimate[3], xi = marg_DE_sstd$estimate[4])  # draws are of variable 1 (DE), so invert the DE marginal
L[j] <- quantile(cond_pdf_DE,probs=0.025)
U[j] <- quantile(cond_pdf_DE,probs=0.975)
means_DE[j] <- mean(cond_pdf_DE)
}
plot(means_DE,type="l",ylim=c(-320,280),axes=FALSE, lwd = 2,
ylab="E[ DE_res | DE_FC_res ]",xlab="Quantiles for DE_Forecast_residuals")
lines(Forecasted_value_DE,col="purple",lwd=2,lty=2)
lines(L,col="red",lwd=2)
lines(U,col="blue",lwd=2)
axis(2,ylim=c(0,2))
axis(1, at=1:9, labels=names(Forecasted_value))
legend("bottomright",legend=c("E[ DE_res | DE_FC_res ]", "Upper Confidence",
"Lower Confidence", "DE_FC_res")
,lty=c(1,1,1,2), lwd=2,col=c("black","blue","red","purple"))
n_simulations <- 100
FR <- list()
CDFs_FR_sstd <- list()
Percentile = c(0.00001,0.0001,0.0005, 0.025, 0.5, 0.975, 0.9995,0.9999,0.99999)
Forecasted_value_FR <- c()
for (j in 1:length(Percentile)) {
#Forecasted_value_FR[j] <- quantile(spot_data$FRForecast,probs=Percentile[j])
cond_sim_cop_FR <- matrix(nrow=100000,ncol=n_simulations)
for (i in 1:n_simulations) {
cond_sim <- BiCopCondSim(100000, cond.val = Percentile[j], cond.var = 2, FR_sstd_cop)
cond_sim_cop_FR[,i] <- qsstd(cond_sim, mean = marg_FR_sstd$estimate[1], sd = marg_FR_sstd$estimate[2], nu = marg_FR_sstd$estimate[3], xi = marg_FR_sstd$estimate[4])  # BiCopCondSim returns draws of variable 1 (FR), so invert the FR marginal
#print(i)
}
Forecasted_value_FR[j] <- qsstd(Percentile[j], mean = marg_FR_FC_sstd$estimate[1], sd = marg_FR_FC_sstd$estimate[2], nu =marg_FR_FC_sstd$estimate[3], xi=marg_FR_FC_sstd$estimate[4])
cond_sim_cop <- cbind(cond_sim_cop_FR,Forecasted_value_FR[j])
### If we want to reverse back from residuals to prices time series ###
#cond_sim_cop[,1:n_simulations] <- cond_sim_cop[,1:n_simulations] + fitted_DE
#cond_sim_cop[,n_simulations+1] <- cond_sim_cop[,n_simulations+1] + fitted_DE_FC
FR[[j]] <- cbind(rowMeans(cond_sim_cop[,1:n_simulations]),cond_sim_cop[,n_simulations+1])
CDFs_FR_sstd[[j]] <- ecdf(FR[[j]][,1])
print(j)
}
names(Forecasted_value_FR) <- names(Forecasted_value)
plot(CDFs_FR_sstd[[1]],xlim=c(-100,200),xlab="x",ylab="P( FR_res < x | FR_FC_res = y )",main="",col=1)
for (j in 2:length(Percentile)) {
lines(CDFs_FR_sstd[[j]],col=j)
}
spaces <- c(" "," "," "," "," "," "," "," ","")
temp <- legend("bottomright",legend=c("","","","","","","","",""),lwd=2,col=cols
,text.width = strwidth("1,000,000,000,000,000"), xjust = 1, yjust = 1)
text(temp$rect$left + temp$rect$w, temp$text$y,
c(paste(round(Forecasted_value_FR,digits = 2),paste0(" = Q_",names(Forecasted_value)),spaces)), pos = 2)
L <- c()
U <- c()
means_FR <- c()
for (j in 1:length(Percentile)) {
sim1 <- BiCopCondSim(1000000, cond.val = Percentile[j], cond.var = 2, FR_sstd_cop)  # use the sstd-based copula in this section
cond_pdf_FR <- qsstd(sim1, mean = marg_FR_sstd$estimate[1], sd = marg_FR_sstd$estimate[2], nu = marg_FR_sstd$estimate[3], xi = marg_FR_sstd$estimate[4])  # draws are of variable 1 (FR), so invert the FR marginal
L[j] <- quantile(cond_pdf_FR,probs=0.025)
U[j] <- quantile(cond_pdf_FR,probs=0.975)
means_FR[j] <- mean(cond_pdf_FR)
}
plot(means_FR,type="l",ylim=c(-320,280),axes=FALSE, lwd = 2,
ylab="E[ FR_res | FR_FC_res ]",xlab="Quantiles for FR_Forecast_residuals")
lines(Forecasted_value_FR,col="purple",lwd=2,lty=2)
lines(L,col="red",lwd=2)
lines(U,col="blue",lwd=2)
axis(2,ylim=c(0,2))
axis(1, at=1:9, labels=c("Q_0.001%"," ","Q_0.05%"," ","Q_50%"," ","Q_99.95%"," ","Q_99.999%"))
legend("bottomright",legend=c("E[ FR_res | FR_FC_res ]", "Upper Confidence",
"Lower Confidence", "FR_FC_res")
,lty=c(1,1,1,2), lwd=2,col=c("black","blue","red","purple"))
################################################################################
###### Computing Conditional Distributions using numerical differentiation #####
################################################################################
# C_hat evaluates the fitted copula CDF; it reads the global `fitted` object
# assigned inside the loop below
C_hat = function(w){
pCopula(w, fitted@copula)
}
`F_X|Y` = function(x,y,F_X,F_Y,C){
eps = 1e-5
v = F_Y(y)
# numerical derivative of C with respect to v at the point
dCdV = function(u){
(C(c(u,v+eps)) - C(c(u,v-eps)))/(2*eps)
}
sapply(X = F_X(x), dCdV)
}
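# Sanity check of the central-difference h-function above on the independence
# copula, where C(u, v) = u * v and dC/dv = u exactly.
C_indep <- function(w) w[1] * w[2]
eps_toy <- 1e-5
dCdV_toy <- function(u, v) (C_indep(c(u, v + eps_toy)) - C_indep(c(u, v - eps_toy))) / (2 * eps_toy)
stopifnot(abs(dCdV_toy(0.3, 0.8) - 0.3) < 1e-8)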
Y = list(c(0,10,-10,20,-20,30,-30,40,-40),c(0,10,-10,25,-25,50,-50,75,-75))
Legend = list(c("FC = -40","FC = -30","FC = -20","FC = -10","FC = 0","FC = 10",
"FC = 20","FC = 30","FC = 40")
,c("FC = -75","FC = -50","FC = -25","FC = -10","FC = 0","FC = 10",
"FC = 25","FC = 50","FC = 75"))
X_lim = list(c(-60,60),c(-100,100))
CCC = list()
navn <- c("DE","FR")
for (i in 1:2) {
Data = tibble(X = Data_Res[,i+1], Y = Data_Res[,i+3])
F_X = ecdf(Data$X)
F_Y = ecdf(Data$Y)
U = Data %>% mutate(U = F_X(X), V = F_Y(Y)) %>% dplyr::select(U,V)
fitted = fitCopula(tCopula(), U, method = "itau")
CCC[[i]] = fitted
x_seq = seq(from = X_lim[[i]][1], to = X_lim[[i]][2], length.out = 1000)
y = Y[[i]][1]
`P(X<=x|Y=y)` = `F_X|Y`(x_seq,y,F_X,F_Y,C_hat)
plot(x_seq, `P(X<=x|Y=y)`, type = "l", col = "yellow",xlab = "x", main=navn[i])
y = Y[[i]][2]
`P(X<=x|Y=y)` = `F_X|Y`(x_seq,y,F_X,F_Y,C_hat)
lines(x_seq, `P(X<=x|Y=y)`, col = "green")
y = Y[[i]][3]
`P(X<=x|Y=y)` = `F_X|Y`(x_seq,y,F_X,F_Y,C_hat)
lines(x_seq, `P(X<=x|Y=y)`, col = "orange")
y = Y[[i]][4]
`P(X<=x|Y=y)` = `F_X|Y`(x_seq,y,F_X,F_Y,C_hat)
lines(x_seq, `P(X<=x|Y=y)`, col = "lightblue")
y = Y[[i]][5]
`P(X<=x|Y=y)` = `F_X|Y`(x_seq,y,F_X,F_Y,C_hat)
lines(x_seq, `P(X<=x|Y=y)`, col = "red")
y = Y[[i]][6]
`P(X<=x|Y=y)` = `F_X|Y`(x_seq,y,F_X,F_Y,C_hat)
lines(x_seq, `P(X<=x|Y=y)`, col = "blue")
y = Y[[i]][7]
`P(X<=x|Y=y)` = `F_X|Y`(x_seq,y,F_X,F_Y,C_hat)
lines(x_seq, `P(X<=x|Y=y)`, col = "magenta")
y = Y[[i]][8]
`P(X<=x|Y=y)` = `F_X|Y`(x_seq,y,F_X,F_Y,C_hat)
lines(x_seq, `P(X<=x|Y=y)`, col = "darkblue")
y = Y[[i]][9]
`P(X<=x|Y=y)` = `F_X|Y`(x_seq,y,F_X,F_Y,C_hat)
lines(x_seq, `P(X<=x|Y=y)`, col = "purple")
legend("bottomright", legend=Legend[[i]],
col=c("purple","magenta","red", "orange","yellow","green","lightblue","blue","darkblue"), lty=1, cex=0.8)
}
################################################################################
############################# Spread Analysis ##################################
################################################################################
spread <- spot_data$DE - spot_data$FR
spread_FC <- spot_data$DEForecast - spot_data$FRForecast
plot(spot_data$StartUTC,spread,type="l",xlab="Time",main="Spot Spread",ylab="Price")
lines(spot_data$StartUTC, spread_FC,col="red")
#plot(spot_data$StartUTC,spread_FC,type="l",xlab="Time",main="Forecasted Spot Spread",ylab="Price")
hist(spread,breaks=80,main="Spot Spread",xlab="Price",xlim=c(-80,80))
hist(spread_FC,breaks=80,main="Forecasted Spot Spread",xlab="Price",xlim=c(-80,80))
plot(spot_data$StartUTC, spread - spread_FC, type="l", xlab="Time", ylab="Euros")  # use the local vectors; the Spread columns are only added to spot_data below
plot(spot_data$StartUTC,spot_data$FR-spot_data$FRForecast,type="l")
hist(spread - spread_FC, breaks=138, xlim=c(-50,50), xlab="Price", main="")
acf(spread,lag.max=300)
acf(spread_FC,lag.max=300)
spot_data <- spot_data %>% mutate(Spread=spread, Spread_FC = spread_FC)
Spread_model <- lm(spot_data$Spread~
time(spot_data[,1])+
# I(time(Data[[i]][,1])^2)+
#I(time(Data[[i]][,1])^3)+
cos((2*pi/length(spot_data[,1])) * I(time(spot_data[,1])))+
sin((2*pi/length(spot_data[,1])) * I(time(spot_data[,1])))+
#cos(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((2*365*2)*pi/length(spot_data[,1]))*I(time(spot_data[,1])))+
#sin(((2*365*2)*pi/length(spot_data[,1]))*I(time(spot_data[,1])))+
#cos(((365*2)*pi/length(spot_data[,1]))*I(time(spot_data[,1])))+
#sin(((365*2)*pi/length(spot_data[,1]))*I(time(spot_data[,1])))+
#factor(Data[[i]]$Yd)+
#factor(Data[[i]]$yd)+
factor(spot_data$wd)+
factor(spot_data$dd)
)
Spread_FC_model <- lm(spot_data$Spread_FC~
time(spot_data[,1])+
# I(time(Data[[i]][,1])^2)+
#I(time(Data[[i]][,1])^3)+
cos((2*pi/length(spot_data[,1])) * I(time(spot_data[,1])))+
sin((2*pi/length(spot_data[,1])) * I(time(spot_data[,1])))+
#cos(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
cos(((2*365*2)*pi/length(spot_data[,1]))*I(time(spot_data[,1])))+
sin(((2*365*2)*pi/length(spot_data[,1]))*I(time(spot_data[,1])))+
cos(((365*2)*pi/length(spot_data[,1]))*I(time(spot_data[,1])))+
sin(((365*2)*pi/length(spot_data[,1]))*I(time(spot_data[,1])))+
#factor(Data[[i]]$Yd)+
#factor(Data[[i]]$yd)+
factor(spot_data$wd)+
factor(spot_data$dd)
)
summary(Spread_model)
summary(Spread_FC_model)
acf(Spread_model$residuals,lag.max = 300)
acf(Spread_FC_model$residuals,lag.max = 300)
Spread_sarma_1 <- arima(Spread_model$residuals, order = c(0,0,0), seasonal = list(order=c(1,0,0),period=24))
Spread_FC_sarma_1 <- arima(Spread_FC_model$residuals, order = c(0,0,0), seasonal = list(order=c(1,0,0),period=24))
acf(Spread_sarma_1$residuals,lag.max=30)
acf(Spread_FC_sarma_1$residuals,lag.max=30)
Spread_arima <- auto.arima(Spread_sarma_1$residuals)
Spread_FC_arima <- auto.arima(Spread_FC_sarma_1$residuals)
summary(Spread_arima)
summary(Spread_FC_arima)
acf(Spread_arima$residuals,lag.max = 300)
acf(Spread_FC_arima$residuals,lag.max = 300)
spread_res <- Spread_arima$residuals
spread_FC_res <- Spread_FC_arima$residuals
hist(spread_res,breaks=30)
hist(spread_FC_res,breaks=30)
plot(spread_FC_res,type="l")
res_spread <- as.data.frame(cbind(spread_res,spread_FC_res))
spread_no_zero <- res_spread[which(spot_data$Spread_FC == 0),]
hist(spread_no_zero$spread_res,breaks=30)
plot(spread_no_zero$spread_res)
spread_cop_select <- BiCopSelect(pobs(res_spread$spread_res),pobs(res_spread$spread_FC_res),familyset = NA)
spread_cop_select
plot(spread_cop_select)
spread_cop_select$taildep
### DE prediction analysis when spread = 0 ###
index_spread_zero <- which(spot_data$Spread == 0)
Data_res_no_spread <- Data_Res[index_spread_zero,]
View(Data_res_no_spread)
DE_cop_no_spread <- BiCopSelect(pobs(Data_res_no_spread$DE_res),pobs(Data_res_no_spread$DE_FC_res),familyset = NA)
DE_cop_no_spread
plot(DE_cop_no_spread)
DE_cop_no_spread$taildep
sqrt(mean((Data_res_no_spread$DE_res - Data_res_no_spread$DE_FC_res)^2))
sqrt(mean((Data_Res$DE_res - Data_Res$DE_FC_res)^2))
sqrt(mean((spot_data[index_spread_zero,2]-spot_data[index_spread_zero,4])^2))
sqrt(mean((spot_data[,2]-spot_data[,4])^2))
Data_Spread <- list()
Data_Spread_FC <- list()
for (i in 1:9) {
Data_Spread[[i]] <- Data[[i]]$DE-Data[[i]]$FR
Data_Spread_FC[[i]] <- Data[[i]]$DEForecast-Data[[i]]$FRForecast
}
Y_length = c(length(Y16[,1]),length(Y16[,1]), length(Y17[,1]),length(Y17[,1]), length(Y18[,1]),length(Y18[,1]),
length(Y19[,1]), length(Y19[,1]),length(Y20[,1]),length(Y20[,1]))
models_Spread = list()
for (i in 1:9) {
models_Spread[[i]] <- lm(Data_Spread[[i]] ~
time(Data[[i]][,1])+
# I(time(Data[[i]][,1])^2)+
#I(time(Data[[i]][,1])^3)+
cos( ( 2*pi/Y_length[i] ) * I(time(Data[[i]][,1])) )+
sin((2*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#cos(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
cos(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
cos(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#factor(Data[[i]]$Yd)+
#factor(Data[[i]]$yd)+
factor(Data[[i]]$wd)+
factor(Data[[i]]$dd)
)
}
models_Spread_FC = list()
for (i in 1:9) {
models_Spread_FC[[i]] <- lm(Data_Spread_FC[[i]] ~
time(Data[[i]][,1])+
# I(time(Data[[i]][,1])^2)+
#I(time(Data[[i]][,1])^3)+
cos( ( 2*pi/Y_length[i] ) * I(time(Data[[i]][,1])) )+
sin((2*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#cos(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((2*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((4*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((12*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#cos(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
#sin(((52*2)*pi/Y_length[i])*I(time(spot_data[,1])))+
cos(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((2*365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
cos(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
sin(((365*2)*pi/Y_length[i])*I(time(Data[[i]][,1])))+
#factor(Data[[i]]$Yd)+
#factor(Data[[i]]$yd)+
factor(Data[[i]]$wd)+
factor(Data[[i]]$dd)
)
}
head(models_Spread[[1]]$residuals)
head(models_Spread_FC[[1]]$residuals)
acf(models_Spread[[1]]$residuals)
par(mfrow=c(3,3))
sarma24_spread = list()
sarma24_spread_FC = list()
for (i in 1:9) {
sarma24_spread[[i]] <- arima(models_Spread[[i]]$residuals, order = od[[i]], seasonal = list(order=c(1,0,0),period=24))
sarma24_spread_FC[[i]] <- arima(models_Spread_FC[[i]]$residuals, order = od[[i]], seasonal = list(order=c(1,0,0),period=24))
Acf(sarma24_spread_FC[[i]]$residuals,lag.max=400)
}
head(sarma24_spread[[1]]$residuals)
head(sarma24_spread_FC[[1]]$residuals)
sarma168_Spread = list()
sarma168_Spread_FC = list()
for (i in 1:9) {
sarma168_Spread[[i]] <- arima(sarma24_spread[[i]]$residuals, seasonal = list(order = c(1,0,0), period=168))
sarma168_Spread_FC[[i]] <- arima(sarma24_spread_FC[[i]]$residuals, seasonal = list(order = c(1,0,0), period=168))
acf(sarma168_Spread_FC[[i]]$residuals,lag.max=400)
}
head(sarma168_Spread[[1]]$residuals)
head(sarma168_Spread_FC[[1]]$residuals)
ARMA_Spread = list()
ARMA_Spread_FC = list()
for (i in 1:9) {
ARMA_Spread[[i]] <- auto.arima(sarma168_Spread[[i]]$residuals)
ARMA_Spread_FC[[i]] <- auto.arima(sarma168_Spread_FC[[i]]$residuals)
Acf(ARMA_Spread_FC[[i]]$residuals)
}
head(ARMA_Spread[[1]]$residuals)
head(ARMA_Spread_FC[[1]]$residuals)
Spread_arma_residuals = list()
Spread_FC_arma_residuals = list()
for (i in 1:9) {
Spread_arma_residuals[[i]] <- ARMA_Spread[[i]]$residuals
Spread_FC_arma_residuals[[i]] <- ARMA_Spread_FC[[i]]$residuals
}
}
Spread_residuals <- unlist(Spread_arma_residuals)
Spread_FC_residuals <- unlist(Spread_FC_arma_residuals)
Data_Spread_Res <- data.frame(StartUTC=spot_data$StartUTC,Spread_residuals,Spread_FC_residuals)
#View(Data_Spread_Res)
spread_9_cop_select <- BiCopSelect(pobs(Data_Spread_Res$Spread_residuals),pobs(Data_Spread_Res$Spread_FC_residuals),familyset = NA)
spread_9_cop_select
plot(spread_9_cop_select,xlab="Spread", ylab="Spread FC")
spread_9_cop_select$taildep
## Comparison for spread prediction error for whole period ##
sqrt(mean((spread_res-spread_FC_res)^2))
sqrt(mean((Data_Res$DE_res - Data_Res$DE_FC_res)^2))
plot(spot_data$StartUTC ,spot_data$FR-spot_data$FRForecast,type="l")
hist(Data_Res$DE_res-Data_Res$FR_res,breaks=3000,xlim=c(-30,30))
hist(Data_Res$DE_FC_res-Data_Res$FR_FC_res,breaks=3000,xlim=c(-30,30))
hist(Data_Res$DE_res-Data_Res$DE_FC_res,breaks=100,xlim=c(-30,30))
hist(Data_Res$DE_FC_res-Data_Res$FR_FC_res,breaks=300,xlim=c(-30,30))
ggseasonplot(ts(spot_data$DE[(168*30):(168*60)],frequency = 168))
|
fce81ff6b86bf4bdc2d46c7347935e95c60a4927 | 98785360bd2c6420283cc94f0e9738f740a25e58 | /Others/PrimeFactorCount.R | b115f0b74fe91e267237ba0d45a9e30257133619 | [] | no_license | songyuzhou324/LeetCode_RSolution | 26a00635b1ab8d8f4e2a6d7a0e315e2401d8f1f6 | 03e8387022899ff2e9ca4255a1845d84e138a264 | refs/heads/master | 2021-01-20T14:09:02.706208 | 2017-09-26T01:37:49 | 2017-09-26T01:37:49 | 90,567,918 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 894 | r | PrimeFactorCount.R | # function to calculate the count of all prime factors
# n >= 2
get_prime_factor_count <- function(n, dict){
  cnt <- 1
  for(i in 2:n){
    while(n != i){
      if(n %% i != 0){
        break
      }
      n <- n/i
      if(dict[n] != 0){ # look up the dictionary table
        print('speed up')
        cnt <- cnt + dict[n]
        # dict[n] already counts every remaining prime factor of n, so the
        # total is complete; return now -- a bare `break` would only exit
        # the while loop and let larger values of i double-count factors
        return(cnt)
      } else {
        cnt <- cnt + 1
      }
    }
  }
  return(cnt)
}
# main function: sum the prime-factor counts of every integer from 2 to N
get_prime_factor_count_main <- function(N){
  # vector memoising the counts already computed;
  # it is reused by later calls to speed them up
  dict = rep(0, N)
  for(i in 2:N){
    dict[i] <- get_prime_factor_count(i, dict)
  }
  return(sum(dict[2:N]))
}
get_prime_factor_count_main(40)
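# Quick sanity check (illustrative, not part of the original file): compare
# the memoised counts with a naive trial-division count with multiplicity.
# `naive_count` below is a hypothetical helper added only for this check.
naive_count <- function(n) {
  cnt <- 0
  d <- 2
  while (n > 1) {
    while (n %% d == 0) {
      n <- n / d
      cnt <- cnt + 1
    }
    d <- d + 1
  }
  cnt
}
# the two columns below should agree for every n
dict <- rep(0, 40)
for (i in 2:40) dict[i] <- get_prime_factor_count(i, dict)
cbind(n = 2:40, memo = dict[2:40], naive = sapply(2:40, naive_count))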
|
015f4116098ef8902a140366575fbbf01d3bfd77 | 391672e0b64620e6fee2b3f0b5f7e3a257dd3871 | /CLANFIELD/3.MONTHLY_UFO_ANALYSIS_REPORTING.R | 755ac856da7a16f4cb9a591327f9537a52c5a9bc | [] | no_license | SteveBosley/Meteor-Data | b714e27f09790639561abdddc2678174f4b6525d | f57442a9b8e884b0fb0e1448b88132d11feb428b | refs/heads/master | 2021-01-19T23:53:57.327380 | 2017-12-18T08:24:29 | 2017-12-18T08:24:29 | 89,051,725 | 0 | 1 | null | 2017-12-18T08:24:30 | 2017-04-22T07:14:37 | R | UTF-8 | R | false | false | 4,286 | r | 3.MONTHLY_UFO_ANALYSIS_REPORTING.R | #=============================================================================
#
#-- Author: Steve Bosley - HAG, UKMON, NEMETODE
#
#
#-- Description:
#
# This script runs a set of R scripts which generate tables and reports
# from the monthly Output files created at the end of a UFO Analysis cycle
#
# This script prompts the user for the output type (PDF, JPEG or CONSOLE),
# a reporting year and month. It then ...
#
# Each script can use the following data prepared by this master script:
#
# CategoryData: The selected remote monitoring data
# Dataset: A desciptive title printed on the plot footer
# SelectYear: The 4 digit number of the year for which reporting is required
# SelectMonth: The 3 character abbreviation of the reporting month required
# SelectCamera: The 2 character identifier of the camera for which reporting is required
# SelectCategory:The name of the event category for which reporting is required
#
# Environment variables are set by script Lib_Config.r which sets pointers
# to source data, directory holding the scripts, directory receiving reports
# etc. This file must first be configured to match the installation.
#
# Note, the distribution (ANALYSIS folder) must be held in My Documents
#
#-- Shared under the Creative Common Non Commercial Sharealike
# License V4.0 (www.creativecommons.org/licenses/by-nc-sa/4.0/)
#
#-- Version history
#
# Vers Date Notes
# ---- ---- -----
# 1.0 30/11/2016 First release
#
#=============================================================================
cat("Reporting started",format(Sys.time(), "%a %b %d %Y %H:%M:%S"))
# Initialise environment variables and common functions
source("D:/R Libraries/MeteorData/R MONTHLY REPORTING/CLANFIELD/CONFIG/Lib_Config.r")
source(paste(FuncDir,"/common_functions.r",sep=""))
# Close any open graphical output devices (other than NULL)
repeat{
if(dev.cur() == 1) {
break
}
dev.off()
}
# Select Output Type
if (is.na(OutType)) {
Olist = c("PDF","JPG")
i <- menu(Olist, graphics=TRUE, title="Choose output type")
OutType = Olist[i]
}
# Set the R working directory (this is where R saves its environment)
setwd(WorkingDir)
# Read the UFO Analysis Monthly .csv files from the original tree structure
# The logic is Clanfield specific but could be easily modified - it is the closest I have to the UKMON
# R Reporting Suite
# Select which year / month to process
OutYear <<- get_year()
OutMonth <<- get_month()
OutCamera <<- get_camera()
OutLimit <<- 5
SelectYr <<- OutYear
SelectMon <<- OutMonth
SelectCam <<- OutCamera
SelectMonN <<- convert_month()
## Read in all required Monthly UFO Analyser csv files
Runscript("Read and Enrich CSV Files.r",Oyear=OutYear,Omonth=SelectMon,Ocamera="ALL",Otype=OutType,orient=Landscape)
## Produce Monthly Analysis Tables
Runscript("Write_Top_5.r",Oyear=OutYear,Omonth=SelectMonN,Ocamera="ALL",Oshower="",Otype=OutType,orient=Landscape)
Runscript("Write_Fireballs.r",Oyear=OutYear,Omonth=SelectMon,Ocamera="ALL",Otype=OutType,orient=Landscape)
Runscript("Write_Shower_summary.r",Oyear=OutYear,Omonth=SelectMon,Ocamera="ALL",Otype=OutType,orient=Landscape)
Runscript("Write_Active_Showers.r",Oyear=OutYear,Omonth=SelectMon,Ocamera="ALL",Olimit=5,Otype=OutType,orient=Landscape)
## Read in all required YTD UFO Analyser csv files *** TEMP FIX FOR YTD ISSUE ***
Runscript("Read and Enrich CSV Files.r",Oyear=OutYear,Omonth="ALL",Ocamera="ALL",Otype=OutType,orient=Landscape)
Runscript("Write_Top_5.r",Oyear=OutYear,Omonth="YTD",Ocamera="ALL",Oshower="",Otype=OutType,orient=Landscape)
Runscript("Write_Fireballs.r",Oyear=OutYear,Omonth="YTD",Ocamera="ALL",Otype=OutType,orient=Landscape)
Runscript("Write_Shower_summary.r",Oyear=OutYear,Omonth="YTD",Ocamera="ALL",Otype=OutType,orient=Landscape)
cat("Reporting complete",format(Sys.time(), "%a %b %d %Y %H:%M:%S"))
|
6a584ac6e1aa6f6056ac28ddcc71833fb1927e6f | 5a20d78fd4388bb509e3f4627932e84d3e0e740e | /plot1.R | d305008febac3364b1ec50b3e249ada1eb0e6380 | [] | no_license | Lorderot/ExData_Plotting1 | 1bba9234acf4ac50e5529a950d6f918d3f954f2a | 390a3fbfcaef7ba9a9d49935415999a1a4c2bf3b | refs/heads/master | 2021-01-18T07:34:42.907388 | 2015-01-10T17:39:12 | 2015-01-10T17:39:12 | 29,064,529 | 0 | 0 | null | 2015-01-10T17:18:11 | 2015-01-10T17:18:11 | null | UTF-8 | R | false | false | 561 | r | plot1.R | png(file="plot1.png", bg="transparent")
data<-read.csv("household_power_consumption.txt",sep=";",skip=66636,nrows=2880, col.names=c("Date","Time","Global_active_power","Global_reactive_power","Voltage","Global_intensity","Sub_metering_1","Sub_metering_2","Sub_metering_3"))
y<-logical(length(data[,3]))
for(i in 1:length(data[,3]))
if (data[,3][i]!="?") y[i]<-TRUE
y<-data[["Global_active_power"]][y]
hist(y,col="red",main="Global Active Power",xlab="Global Active Power (kilowatts)",axes=FALSE)
axis(1,at=c(0,2,4,6))
axis(2,at=c(0,200,400,600,800,1000,1200))
# close the graphics device so the PNG file is written out
dev.off() |
062f5533895063f7fd798e8f406158c85137d3b3 | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/questionr/examples/cprop.Rd.R | ee2f9a7d8f42d3f15ecc756d5d8318b3aeb62d65 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 379 | r | cprop.Rd.R | library(questionr)
### Name: cprop
### Title: Column percentages of a two-way frequency table.
### Aliases: cprop cprop.table cprop.data.frame cprop.matrix cprop.tabyl
### ** Examples
## Sample table
data(Titanic)
tab <- apply(Titanic, c(4,1), sum)
## Column percentages
cprop(tab)
## Column percentages with custom display
cprop(tab, digits=2, percent=TRUE, total=FALSE)
|
d97820cd35e1e19e02b2a2c01b572b783c0aecf5 | e57feeb76a5cd7faba3f3a0762d6d581fdb665b5 | /northeast_murder.R | 8adb39a2494d7dabd6102d27c31f3024c841661a | [] | no_license | fall2018-wallace/hw_7 | 4bcea5133860e8ab967651b6367548dca6b9b224 | 92bf301016c9ae3bc04671f5be572d1a35ac8001 | refs/heads/master | 2020-04-01T18:26:07.104900 | 2018-10-18T01:48:48 | 2018-10-18T01:48:48 | 153,492,239 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 631 | r | northeast_murder.R |
#Importing the required libraries
library(ggplot2)
library(ggmap)
#Getting the US state data for creating map
US=map_data("state")
#Converting the scale to readable form
options(scipen=999)
#Plotting the map with murder rate
northeast_murder=ggplot(merged, aes(map_id=stateName)) + geom_map(map=US, aes(fill=Murder),color="black")
northeast_murder=northeast_murder + expand_limits(x=US$long, y=US$lat) + ggtitle("Murder rate in north east US states") + coord_map()
##Using new york city to plot the murder rate in north east states
northeast_murder=northeast_murder+ xlim(-83.93, -63.93) + ylim(30.73, 50.73)
northeast_murder
|
d31106923e721562640344c7f7f07186bfbd1987 | 467c89ddbfe26b6e6811cec9623e687156d3411d | /daily_coding_mul/day10.R | b0adf2d10abad7a7f5559d8c56ca8da4433b3494 | [] | no_license | ingu627/R | e454eb312e79a95e0161aba4cddb16f8fd5e3a23 | ddeb460a818dfc95f978fd82d9babd481b43f31c | refs/heads/master | 2023-08-24T11:02:42.296489 | 2021-09-24T14:30:59 | 2021-09-24T14:30:59 | 401,576,730 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,980 | r | day10.R | mydata=read.csv('f:/data/examscore.csv')
plot(mydata$midterm, mydata$final)
c("red","blue")[as.factor(mydata$gender)]
as.factor(mydata$gender)
m<-c(40,45,50,60,80)
# illustrative line: y = 1*x + 5 (slope 1, intercept 5)
f<-1*m+5
f
# simple regression model (midterm -> final)
lm(final~midterm, mydata)
# slope : 0.8967
# intercept : 13.8666
# final score = 0.8967*midterm score + 13.8666
m=mydata$midterm
f=mydata$final
# slope = cor(midterm, final) * sd(final)/sd(midterm)
slope=cor(m,f) * sd(f) / sd(m)
# intercept = mean(final) - mean(midterm) * slope
bias=mean(f) - mean(m) * slope
bias
# midterm score 60 -> predicted final score?
f_hat=slope*60+bias
f_hat
# regression lines by gender
mydata
data_male=mydata[mydata$gender == 'M',] # male
data_female=mydata[mydata$gender == 'F',] # female
# logical row indexing returns every column of the matching rows
library(dplyr)
model1= lm(final~midterm, data_male)
model2= lm(final~midterm, data_female)
model1$coefficients
model2$coefficients
with(mydata,
plot(midterm, final,
xlab="Midterm",
ylab="Final",
pch=c(16,17)[as.factor(mydata$gender)],
col=c("red","blue")[as.factor(mydata$gender)],
main="Scores"))
legend(10,80,
legend=c("Female","Male"),
col=c("red","blue"),
pch=c(16,17))
# used to draw the fitted regression lines
abline(model1$coefficients, col='blue')
abline(model2$coefficients, col='red')
# female student, midterm 55 -> predicted final score?
predict(model2, data.frame(midterm=55))
model2$coefficients[2]*55+model2$coefficients[1]
# midterm + gender -> predict the final exam score
str(mydata)
model3=lm(final~midterm+gender, mydata)
model3
# genderM is 1 for Male, 0 for Female
# final = 0.8808*midterm - 6.6563*genderM + 18.9774
c(0,1)[as.factor(mydata$gender)]
class(model3$coefficients)
par=model3$coefficients
par[1]+par[2]*60+par[3]*1 # male student, midterm 60 => final: 65.17
predict(model3, newdata=data.frame(midterm=60, gender='M'))
|
9898f42ccfc601332fff484edeb03491656a940b | 08f73cc09b37d0bb42f522c4ee876a9d7824391a | /ui.R | fa8832832cbd48c2f44e1d0ccfb2857f1df2ecdf | [] | no_license | ManjushreeHarisha/Pima-Diabetes-prediction | da7db7fe0b48ebf74f016fe138a81492415189d7 | ca5d620dbbd1d4b12325d703bbc949389862105c | refs/heads/master | 2020-04-24T18:53:02.688627 | 2019-02-23T09:14:45 | 2019-02-23T09:14:45 | 172,194,531 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,484 | r | ui.R | library(shiny)
library(shinyjs)
shinyUI(fluidPage(
headerPanel("Diabetes prediction"),
sidebarPanel(
conditionalPanel(condition="input.tabselected==1",h4("diabetes prediction"), tags$img(src="diabetes.jpg",width=380, height=400)),
conditionalPanel(condition="input.tabselected==2",
fileInput("dataset", "browse the dataset"),
radioButtons("choice","choose an option",choices=c("Dataset"=1, "Structure"=2, "Summary"=3)),
tags$img(src="search-man1.jpg",width=300)
),
conditionalPanel(condition="input.tabselected==3", h4("comparision of models"),
helpText("This bar graph shows the accuracy of different models in determing the onset of diabetes mellitus")),
conditionalPanel(condition="input.tabselected==4",
h4("prediction"),
radioButtons("split","choose the split ratio",choices=c("70%"=1, "75%"=2, "80%"=3)),
uiOutput("selattrib"),
helpText("ACCURACY = (TP+TN)/(TP+TN+FP+FN)"),
helpText("SENSITIVITY (TP Rate) = TP/(TP+FN)"),
helpText("SPECIFICITY (TN Rate) = TN/(TN+FP)"),
helpText("Sensitivity and Specificity are statistical measures that describe how well the classifier discriminates between cases with the positive and the negative class. Sensitivity is the disease detection rate, which should be maximized; the false alarm rate (1 - Specificity) should be minimized for accurate diagnosis.")),
conditionalPanel(condition="input.tabselected==5",h4("IMPORTANCE OF ATTRIBUTES"), helpText("This pie chart shows the importance of each attribute in determining the onset of diabetes mellitus")),
conditionalPanel(condition="input.tabselected==6",h4("SPLIT RATIO V/S ACCURACY"), helpText("This plot shows the variation of accuracy with respect to split raio"))
),
mainPanel(
tabsetPanel(
tabPanel("ABOUT", value=1, tags$div(HTML(paste(tags$span(style="font-size:150%;", "This application has different sections to browse the dataset and to compare
the accuracy of different algorithms. The pictorial representation which gives information about
accuracy helps the user to understand and to use specific algorithm that suits for the given
dataset. It also gives information about the importance of each attribute in determining the
accurate output."),sep="")))),
tabPanel("DATA", value=2, conditionalPanel(condition = "input.choice==1", dataTableOutput("dat")),
conditionalPanel(condition="input.choice==2", verbatimTextOutput("struct")),
conditionalPanel(condition="input.choice==3", verbatimTextOutput("summ"))
),
tabPanel("COMPARISON", value=3, plotOutput("compGraph")),
tabPanel("PERFORMANCE", value=4 , tableOutput("test"),verbatimTextOutput("accuracy"), verbatimTextOutput("sensitivity"), verbatimTextOutput("specificity")),
tabPanel("IMPORTANT ATTRIBUTES", value=5 , plotOutput("impGraph")),
tabPanel("SPLIT RATIO V/S ACCURACY", value=6, plotOutput("splGraph")),
id= "tabselected"
)
)
)) |
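# Illustrative check of the formulas shown in the sidebar (not part of the
# app itself). TP, FN, FP, TN below are hypothetical confusion-matrix counts.
TP <- 90; FN <- 10; FP <- 20; TN <- 80
accuracy    <- (TP + TN) / (TP + TN + FP + FN)  # 0.85
sensitivity <- TP / (TP + FN)                   # 0.90, the true positive rate
specificity <- TN / (TN + FP)                   # 0.80, the true negative rate
c(accuracy = accuracy, sensitivity = sensitivity, specificity = specificity)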
d620a9ba524114e3337058738cdfad4b28a611a3 | 186e5fce96f9eaa5bde953381815fd0b38a58234 | /second presentation/make_IS_demo.R | 496dd2cf9922daf10cc2d2b8cc2d450d7c9c0902 | [] | no_license | ekernf01/prelim_tex_files | 6ef89914995c0cbd3a25e4ae173e7ddd6e710032 | cb98a418a7eb99d093feb8c33752b797c7b8a354 | refs/heads/master | 2020-05-02T14:45:47.280258 | 2015-06-12T04:26:14 | 2015-06-12T04:26:14 | 34,136,304 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 347 | r | make_IS_demo.R | png('IS_plot.png')
mygrid <- seq(-6, 10, by=0.1)
plot(mygrid, dnorm(mygrid, mean=-2.5), type='l',
col=4, xlab='x', ylab='density (or function value)')
text(labels='Q', x=-5, y=0.3)
lines(mygrid, dnorm(mygrid, mean=2.5), col=2)
text(labels='P', x=0, y=0.3)
lines(mygrid, dnorm(mygrid, mean=7.5), col=3)
text(labels='g', x=5, y=0.3)
dev.off()
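# Illustrative follow-up (not part of the original demo): a self-normalised
# importance-sampling estimate of E_P[X], drawing from Q, with P = N(2.5, 1)
# and Q = N(-2.5, 1) as plotted above. In theory the estimate converges to
# 2.5, but because Q sits far from P a handful of draws dominate the weights,
# so the finite-sample estimate is unreliable -- the situation the figure is
# meant to illustrate.
set.seed(1)
x <- rnorm(1e5, mean = -2.5)                        # draws from the proposal Q
w <- dnorm(x, mean = 2.5) / dnorm(x, mean = -2.5)   # importance weights P/Q
sum(w * x) / sum(w)                                 # self-normalised IS estimate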
|
238847b8017a29c076a6c3ea80d5fdc8a0c22a60 | 7d20eb3ff42c6a947115670eaff548812b6cb436 | /man/convert_list_to_tibble.Rd | 4a58f0cf3dc9cb2a8a7c15f689a00daccad7a7ca | [
"MIT"
] | permissive | inrae/hubeau | f746b05933750fc66ab38c343138b7e5783d4f68 | fde87c07014b25b08af803f40ce7e92b1854ee59 | refs/heads/master | 2023-08-31T07:11:47.099934 | 2023-05-31T11:01:44 | 2023-05-31T11:01:44 | 393,369,696 | 9 | 7 | NOASSERTION | 2023-08-19T20:21:48 | 2021-08-06T12:25:53 | R | UTF-8 | R | false | true | 1,245 | rd | convert_list_to_tibble.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/doApiQuery.R
\name{convert_list_to_tibble}
\alias{convert_list_to_tibble}
\title{Convert list provided by the APIs into a tibble}
\usage{
convert_list_to_tibble(l)
}
\arguments{
\item{l}{a \link{list} provided by the API (See \link{doApiQuery})}
}
\value{
A \link[tibble:tibble]{tibble::tibble} with one row by record and one column by field.
}
\description{
Convert list provided by the APIs into a tibble
}
\details{
This function is used internally by all the data-retrieval functions for
converting data after the call to \link{doApiQuery}.
}
\examples{
# To get the available APIs in the package
list_apis()
# To get the available endpoints in an API
list_endpoints("prelevements")
# To get available parameters in endpoint "chroniques" of the API "prelevements"
list_params(api = "prelevements", endpoint = "chroniques")
# To query the endpoint "chroniques" of the API "prelevements"
# on all devices in the commune of Romilly-sur-Seine in 2018
\dontrun{
resp <- doApiQuery(api = "prelevements",
endpoint = "chroniques",
code_commune_insee = "10323",
annee = "2018")
convert_list_to_tibble(resp)
}
}
|
981335c18d69bd9f164c822add812d930bea6b2b | 21e534bdb847ce3b9f6091316619f80ec280e2ad | /R/dataprep.R | 371ac55900e8434a264c557e1933b749f3918576 | [] | no_license | cran/intccr | a51a8c9ca40c495e2219ae22050bebd7b018809f | 252644c0d347a663ea5d7bef88fa2ab04f114b1e | refs/heads/master | 2022-06-06T17:15:32.892874 | 2022-05-10T07:00:02 | 2022-05-10T07:00:02 | 101,859,824 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,918 | r | dataprep.R | #' Data manipulation
#' @description The function \code{dataprep} reshapes data from a long format to a ready-to-use format to be used directly in the function \code{ciregic}.
#' @author Jun Park, \email{jun.park@alumni.iu.edu}
#' @author Giorgos Bakoyannis, \email{gbakogia@iu.edu}
#' @param data a data frame that includes the variables named in the \code{ID}, \code{time}, \code{event}, and \code{z} arguments
#' @param ID a variable indicating individuals' ID
#' @param time a variable indicating observed time points
#' @param event a vector of event indicators. If an observation is right-censored, \code{event = 0}; otherwise, \code{event = 1} or \code{event = 2}, where \code{1} represents the first cause of failure, and \code{2} represents the second cause of failure. The current version of the package only allows two causes of failure.
#' @param Z a vector of variables indicating name of covariates
#' @keywords dataprep
#' @details The function \code{dataprep} provides a ready-to-use data format that can be directly used in the function \code{ciregic}. The returned data frame consists of \code{id}, \code{v}, \code{u}, \code{c}, and covariates as columns. The \code{v} and \code{u} indicate time window with the last observation time before the event and the first observation after the event. The \code{c} represents a type of event, for example, \code{c = 1} for the first cause of failure, \code{c = 2} for the second cause of failure, and \code{c = 0} for the right-censored. For individuals having one time record with the event, the lower bound \code{v} will be replaced by zero, for example \code{(0, v]}. For individuals having one time record without the event, the upper bound \code{u} will be replaced by \code{Inf}, for example \code{(v, Inf]}.
#' @return a data frame
#' @examples
#' library(intccr)
#' dataprep(data = longdata, ID = id, time = t, event = c, Z = c(z1, z2))
#'
#' @export
dataprep <- function(data, ID, time, event, Z) {
mcall <- match.call()
ID <- deparse(mcall$ID)
time <- deparse(mcall$time)
event <- deparse(mcall$event)
Z <- unlist(strsplit(as.character(mcall$Z), " "))
if(length(Z) > 1) Z <- Z[-1]
data <- data[order(data[, ID], data[, time]), ] # sort by ID, then time; `&` here would coerce both keys to logical
tmiss <- sum(is.na(data[, time]))
if(tmiss > 0) {
print.df <- function(x) {
paste(capture.output(x[which(is.na(x[, time])), ]), collapse = "\n")
}
warning("The following records have missing visit times and will be discarded:\n\n", print.df(data))
data <- data[!is.na(data[, time]), ]
}
uid <- sort(unique(data[, ID]))
n <- length(uid)
p <- length(Z)
mZ <- data[colnames(data) %in% Z]
id <- v <- u <- c <- rep(NA, n)
X <- matrix(data = NA, nrow = n, ncol = p, byrow = TRUE)
for (i in 1:n){
indID <- (data[, ID] == uid[i])
indt <- data[, time][indID]
indc <- data[, event][indID]
indZ <- as.matrix(mZ[indID,])
id[i] <- uid[i]
X[i,] <- indZ[1,]
if(length(indt) == 1){
if(indc != 0){
v[i] <- 0
u[i] <- indt
c[i] <- indc
} else {
v[i] <- indt
u[i] <- Inf
c[i] <- indc
}
} else {
for (j in 1:length(indt)){
if (indc[j] == 0){
v[i] <- indt[j]
u[i] <- Inf
c[i] <- 0
} else {
u[i] <- indt[j]
if (indc[j] == 1){
c[i] <- 1
} else {
c[i] <- 2
}
break
}
}
}
}
colnames(X) <- Z
temp <- data.frame(id, v, u, c, X)
if (sum(is.na(temp)) != 0){
naval <- which(is.na(v))
if(length(naval) == 1) {
warning("subject id ", naval, " is omitted because its interval is (0, Inf).")
} else {
warning("subject id ", toString(naval), " are omitted because those intervals are (0, Inf).")
}
}
na.omit(temp)
}
|
ed96ddfd101219654e154324ae4d5620b57c79ae | abda097fa7fe9a0d4047e2b9c206318e5b49fc0b | /workspace/nldas.R | a1b099967d341e8df1df7f5c08e1c497da0f08c1 | [
"MIT"
] | permissive | NOAA-OWP/geogrids | 702f2e8b124175f87f39e961612ed19f414e4303 | 23a4833ffa339dcd89c3d52c1a958022cdb85cea | refs/heads/master | 2023-07-17T13:19:42.190688 | 2021-08-31T02:33:02 | 2021-08-31T02:33:02 | 395,325,866 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 2,181 | r | nldas.R | ts = "M"
type = "FORA"
startDate = "2000-01-01"
endDate = "2020-12-31"
var = "APCP"
ext = "tif"
dir = '/Volumes/Transcend/ngen/climate/NLDAS/'
get_nldas = function(
type = "FORA",
ts = "M",
startDate = "2000-01-01",
endDate = "2020-12-31",
var = "APCP",
ext = "tif",
dir = '/Volumes/Transcend/ngen/climate/NLDAS/'){
date = seq.Date(as.Date(startDate), as.Date(endDate), by = "m")
bbox = "25%2C-125%2C53%2C-67"
type2 = paste0('NLDAS_', type, "0125_", ts)
date2 = gsub("-", "", date)
year = lubridate::year(date)
month = sprintf("%02d", lubridate::month(date)) # "%02d" zero-pads; "%02s" would pad with spaces and break the URLs
ext = ifelse(ext == "nc", ".nc4", ".tif")
url = paste0('https://hydro1.gesdisc.eosdis.nasa.gov/daac-bin/OTF/HTTP_services.cgi?FILENAME=',
'%2Fdata',
'%2FNLDAS',
'%2F',type2,'.002',
'%2F', year,
'%2F',type2,'.A',year,month,'.002','.grb',
'&FORMAT=dGlmLw',
'&BBOX=', bbox,
'&LABEL=',type2,'.A',year,month,'.002','.grb.SUB', ext,
'&SHORTNAME=',type2,
'&SERVICE=L34RS_LDAS',
'&VERSION=1.02',
'&DATASET_VERSION=002',
'&VARIABLES=', var)
tmp = file.path(dir, var, paste0(type2, ".", date, ext))
fs::dir_create(dirname(tmp))
lapply(1:length(url), function(x){
if(!file.exists(tmp[x])){
httr::GET(url[x],
httr::write_disk(tmp[x], overwrite = TRUE), # namespace-qualified: library(httr) is never attached
httr::progress(),
httr::config(netrc = TRUE, netrc_file = getNetrcPath()),
httr::set_cookies("LC" = "cookies"))
}
})
}
get_nldas(
type = "FORA",
ts = "M",
startDate = "2000-01-01",
endDate = "2020-12-31",
var = "APCP",
ext = "tif",
dir = '/Volumes/Transcend/ngen/climate/NLDAS/')
get_nldas(
type = "FORA",
ts = "M",
startDate = "2000-01-01",
endDate = "2020-12-31",
var = "PEVAP",
ext = "tif",
dir = '/Volumes/Transcend/ngen/climate/NLDAS/')
get_nldas(
type = "VIC",
ts = "M",
startDate = "2000-01-01",
endDate = "2020-12-31",
var = "SNOWC",
ext = "tif",
dir = '/Volumes/Transcend/ngen/climate/NLDAS/')
|
0a3d7569520396c3b201d7e19eb86bfa0b561987 | 72f9564aa51cd289446b9f6babcbbb80e1097fde | /clustering.R | 1a920a096019e03416ed6d5b1ab6370cb3f118a8 | [] | no_license | auGGuo/r_data_projects | 11cbeda0287573b80a2df72fd86126224eee7344 | 45ab8c28ab8f8e08a4e1e5fb33cb75955fac249c | refs/heads/main | 2023-01-09T14:59:38.270063 | 2020-11-15T16:03:29 | 2020-11-15T16:03:29 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,575 | r | clustering.R | # Cluster Analysis
setwd('~/Desktop/r_projects/prac_ds_r')
url = 'https://raw.githubusercontent.com/WinVector/zmPDSwR/master/Protein/protein.txt'
protein <- read.table(url, sep='\t', header=T)
summary(protein)
# rescaling the dataset
vars.to.use <- colnames(protein)[-1]
pmatrix <- scale(protein[, vars.to.use])
# Store mean and standard deviations so we can “unscale” the data later
pcenter <- attr(pmatrix, 'scaled:center')
pscale <- attr(pmatrix, 'scaled:scale')
# Hierarchical clustering
# Create the distance matrix
d <- dist(pmatrix, method='euclidean')
# Do the clustering.
pfit <- hclust(d, method='ward.D')
# Plot the dendrogram.
plot(pfit, labels=protein$Country)
rect.hclust(pfit, k=5)
# Extracting the clusters found by hclust()
groups <- cutree(pfit, k=5)
print_clusters <- function(labels, k) {
for (i in 1:k) {
print(paste('cluster', i))
print(protein[labels==i, c("Country","RedMeat","Fish","Fr.Veg")])
}
}
print_clusters(groups, 5)
# Project the clusters on the first two principal components
library(ggplot2)
# Calculate the principal components of the data.
princ <- prcomp(pmatrix)
nComp <- 2
# The predict() function will rotate the data into the space described by
# the principal components. We only want the projection on the first two axes.
project <- predict(princ, newdata=pmatrix)[, 1:nComp]
project.plus <- cbind(as.data.frame(project),
cluster=as.factor(groups),
country=protein$Country)
ggplot(data=project.plus, aes(x=PC1, y=PC2)) +
geom_point(aes(shape=cluster)) +
geom_text(aes(label=country), hjust=0, vjust=1)
# Running clusterboot() to check clustering stability
# As a rule of thumb, clusters with a stability value less than 0.6 should be
# considered unstable. Values between 0.6 and 0.75 indicate that the cluster
# is measuring a pattern in the data, but there isn’t high certainty
# about which points should be clustered together. Clusters with stability values
# above about 0.85 can be considered highly stable (they’re likely to be real clusters).
library(fpc)
# Set the desired number of clusters
kbest.p <- 5
cboot.hclust <-clusterboot(pmatrix, clustermethod=hclustCBI,
method='ward.D', k=kbest.p)
summary(cboot.hclust$result)
# cboot.hclust$result$partition returns a vector of cluster labels.
groups <- cboot.hclust$result$partition
groups
print_clusters(groups, kbest.p)
# The vector of cluster stabilities.
cboot.hclust$bootmean
# The count of how many times each cluster was dissolved. By default clusterboot()
# runs 100 bootstrap iterations.
cboot.hclust$bootbrd
# The clusterboot() results show that the cluster of countries with high fish
# consumption (cluster 4) is highly stable. Cluster 3 is less stable
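The rule of thumb quoted above can be applied directly to the vector of stability values; a minimal sketch (assuming the `cboot.hclust` object from the `clusterboot()` call above):

```r
# Label each cluster using the stability thresholds quoted above
stability <- cboot.hclust$bootmean
rating <- cut(stability,
              breaks = c(-Inf, 0.6, 0.75, 0.85, Inf),
              labels = c("unstable", "pattern", "stable", "highly stable"))
data.frame(cluster = seq_along(stability), bootmean = stability, rating = rating)
```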
# k-means clustering
pclusters <- kmeans(pmatrix, kbest.p, nstart=100, iter.max=100)
summary(pclusters)
pclusters$centers
pclusters$size
groups <- pclusters$cluster
print_clusters(groups, kbest.p)
# To run kmeans(), we must know k. The fpc package (the same package that has
# clusterboot()) has a function called kmeansruns() that calls kmeans() over
# a range of k and estimates the best k.
# kmeansruns() has two criteria: the Calinski-Harabasz Index ("ch"), and the
# average silhouette width ("asw")
clustering.ch <- kmeansruns(pmatrix, krange=1:10, criterion="ch")
clustering.ch$bestk
# The CH criterion picks two clusters.
clustering.asw <- kmeansruns(pmatrix, krange=1:10, criterion="asw")
clustering.asw$bestk
# The average silhouette width picks 3 clusters.
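`kmeansruns()` also stores the criterion value for every k it tried (in its `crit` component), so the two criteria can be inspected side by side; a sketch assuming the objects above:

```r
# Compare the two criteria across k; crit[k] is the criterion value at k
data.frame(k   = 1:10,
           ch  = clustering.ch$crit,
           asw = clustering.asw$crit)
```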
---- File: /ui.R (repo: Cbanzaime23/ddp-Course-Project) ----
library(shiny)
#UserGuide <- h3("How to use this app?", align = "center"),<br>,<p>(Hello)</p>
shinyUI(pageWithSidebar(
titlePanel(title = h1("Histogram Simulator", align="center")),
sidebarPanel(
selectInput("n", "Choose how many random numbers",
choices = c(10,100,1000,10000,100000,1000000),
selected = 1000),
selectInput("family", "Choose which type of distribution",
choices = c("Normal",
"Uniform",
"Exponential"),
                  selected = "Normal"),
sliderInput("bins", "Choose a number of bins",
min = 1, max = 50, value = 25)
),
mainPanel(
tabsetPanel(type = "tab",
tabPanel("Plot",plotOutput("histogram")),
tabPanel("User Guide",h3("How to use this histogram simulator?", align = "center"),
br(),
h4("The aim of this histogram simulator is to have a view of the general shape of the distribution of random numbers"),
br(),
strong("Step 1"),
p("Use the 'Choose how many random numbers' dropdown for setting up how many random number do you prefer"),
p("Note: the only preferences for the number of random numbers are 10, 100, 1000, 10000, 100000, and 1000000"),
br(),
strong("Step 2"),
p("Use the 'Choose which type of distribution' dropdown for setting up what type of distribution you prefer for the random number"),
p("Note: the only preferences for the type of distribution are Uniform, Normal, and Exponential"),
br(),
strong("Step 3"),
p("Use the 'Choose a number of bins' slider to adjust the number of bins depending on which you prefer")
)
)
)
))
---- File: /man/exp.Rd (repo: menggf/DEComplexDisease) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{exp}
\alias{exp}
\title{expression matrix of breast cancer (part)}
\format{a matrix}
\value{
A matrix
}
\description{
A matrix of breast cancer patients from TCGA (only part)
}
\keyword{datasets}
---- File: /Genalg_Ex3_Food.R (repo: dalilareis/R-genetic-alg) ----
library(ggplot2)
library(genalg)
item <- c("pocketknife", "beans", "potatoes", "unions", "sleeping bag", "rope", "compass")
survivalpoints <- c(10, 20, 15, 2, 30, 10, 30)
hungerpoints <- c(0, 20, 15, 0, 0, 0, 0)
weight <- c(1, 5, 10, 1, 7, 5, 1)
weightlimit <- 15
dataset <- data.frame(item=item, totalpoints=survivalpoints + hungerpoints,
weight = weight)
# chromosome
chromosome <- c(1,1,0,0,1,0,0)
dataset[chromosome == 1, ]
cat (chromosome %*% dataset$totalpoints)
#evaluation function
evalFunc <- function(x) {
current_solution_totalpoints <- x %*% dataset$totalpoints
current_solution_weight <- x %*% dataset$weight
if (current_solution_weight > weightlimit)
return(0) else return(-current_solution_totalpoints)
}
#run and view model
iter = 100
GAmodel <- rbga.bin(size = 7, popSize = 200, iters = iter, mutationChance = 0.01,
elitism = T, evalFunc = evalFunc)
cat(summary(GAmodel))
plot(GAmodel)
#best solution
bestSolution <- GAmodel$population[which.min(GAmodel$evaluations),]
dataset[bestSolution == 1,]
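A quick sanity check on the decoded best solution (assuming `bestSolution` and `dataset` from above): the selected items should respect the weight limit while maximizing total points.

```r
# Total weight of the chosen items must not exceed weightlimit (15)
cat("total weight:", bestSolution %*% dataset$weight, "\n")
cat("total points:", bestSolution %*% dataset$totalpoints, "\n")
```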
---- File: /man/add_data_field_children_for_start_var.Rd (repo: cran/recodeflow) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/nodes.R
\name{add_data_field_children_for_start_var}
\alias{add_data_field_children_for_start_var}
\title{Add DataField child nodes for start variable.}
\usage{
add_data_field_children_for_start_var(data_field, var_details_rows)
}
\arguments{
\item{data_field}{DataField node to attach child nodes.}
\item{var_details_rows}{Variable details rows associated with current variable.}
}
\value{
Updated DataField node.
}
\description{
Add DataField child nodes for start variable.
}
---- File: /age+gender/analysis/03_robustness/D-51_color-bar.R (repo: vishalmeeni/cwas-paper) ----
# This script will simply visualize and save the color bar
outdir <- "/home2/data/Projects/CWAS/age+gender/03_robustness/viz_cwas"
red_yellow_rgb <- as.matrix(read.table("z_red_yellow.txt")[,])
red_yellow <- apply(red_yellow_rgb/255, 1, function(x) rgb(x[1], x[2], x[3]))
n <- length(red_yellow)
x11(width=12, height=3)
png(file.path(outdir, "colorbar_red_yellow.png"), width=1200, height=300)
image(1:n, 1, as.matrix(1:n), col = red_yellow,
xlab = "", ylab = "", xaxt = "n", yaxt = "n",
bty = "n")
dev.off(); dev.off()
---- File: /Quiz3.R (repo: yash29/Getting-and-Cleaning-Data) ----
#Q1
library("dplyr")
download.file("https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Fss06hid.csv","Q3q1.csv")
data<- read.csv("Q3q1.csv",stringsAsFactors = FALSE)
agrilogical<-which(data$ACR==3&data$AGS==6)
data<-tbl_df(data)
head(data)
select(data,ACR==3,AGS==6)
str(data)
#Q2
library("jpeg")
download.file('https://d396qusza40orc.cloudfront.net/getdata%2Fjeff.jpg'
, 'jeff.jpg'
, mode='wb' )
pic <- readJPEG("jeff.jpg",native=TRUE)
quantile(pic,probs = (c(0.3,0.8)))
#Q3
download.file("https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2FGDP.csv","Q3q31.csv")
download.file("https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2FEDSTATS_Country.csv","Q3q32.csv")
d1<-read.csv("Q3q31.csv",stringsAsFactors = FALSE)
d2<-read.csv("Q3q32.csv",stringsAsFactors = FALSE)
str(d1)
head(d1)
str(d2)
library(data.table)
d1<- fread("Q3q31.csv",
skip = 4,
nrow = 190,
select = c(1,2,4,5),
col.names = c("CountryCode", "Rank", "Economy", "Total")
)
d2 <- fread("Q3q32.csv")
a<-merge(d1,d2, by="CountryCode")
nrow(a)
---- File: /data/genthat_extracted_code/cleanr/tests/runit.R (repo: surayaaramli/typeRrh) ----
#!/usr/bin/Rscript --vanilla
if (requireNamespace("RUnit", quietly = TRUE)) {
library("cleanr")
cleanr::load_internal_functions(package = "cleanr")
path <- getwd()
# Unit testing
name <- "cleanr_R_code"
package_suite <- RUnit::defineTestSuite(name,
dirs = file.path(path, "runit"),
testFileRegexp = "^.*\\.r",
testFuncRegexp = "^test_+")
test_result <- RUnit::runTestSuite(package_suite)
RUnit::printTextProtocol(test_result, showDetails = TRUE)
html_file <- file.path(path, paste0(package_suite[["name"]], ".html"))
RUnit::printHTMLProtocol(test_result, fileName = html_file)
message("\n========\nRUnit test result is:")
print(test_result)
# Coverage inspection
track <- RUnit::tracker()
track[["init"]]()
tryCatch(RUnit::inspect(check_file(system.file("source", "R", "checks.R",
package = "cleanr")),
track = track),
error = function(e) return(e)
)
tryCatch(RUnit::inspect(check_file(system.file("source", "R", "wrappers.R",
package = "cleanr")),
track = track),
error = function(e) return(e)
)
tryCatch(RUnit::inspect(check_directory(system.file("source", "R",
package = "cleanr")),
track = track),
error = function(e) return(e)
)
res_track <- track[["getTrackInfo"]]()
RUnit::printHTML.trackInfo(res_track, baseDir = path)
html_file <- file.path(path, "results", "index.html")
if (interactive()) browseURL(paste0("file:", html_file))
if (FALSE) {
check_function_coverage <- function(function_track_info){
lines_of_code_missed <- function_track_info[["run"]] == 0
opening_braces_only <- grepl("\\s*\\{\\s*",
function_track_info[["src"]])
closing_braces_only <- grepl("\\s*\\}\\s*",
function_track_info[["src"]])
braces_only <- opening_braces_only | closing_braces_only
statements_missed <- lines_of_code_missed & ! braces_only
if (any(statements_missed)) stop(paste("missed line ",
which(statements_missed),
sep = ""))
return(invisible(TRUE))
}
#'# TODO: for function_in_functions {if function not in names(res_track)
#'# throw()}
for (track_info in res_track) {
check_function_coverage(track_info)
}
}
} else {
warning("Package RUnit is not available!")
}
---- File: /climate/PRCPTOT_85_code.R (repo: tvpenha/sismoi) ----
require(ncdf4)
require(ncdf4.helpers)
require(ncdf4.tools)
require(ggplot2)
require(raster)
require(rgdal)
require(spatial.tools)
################################################################################
setwd("C:/Users/inpe-eba/SISMOI/prcptot/rcp85")
# Open the shapefiles
brasil = readOGR("C:/Users/inpe-eba/SISMOI/Shapefiles/Brasil.shp")
grid = readOGR("C:/Users/inpe-eba/SISMOI/Shapefiles/Grid.shp")
# Open one netCDF file and inspect it
prcptot_1 <- nc_open("prcptotETCCDI_yr_ACCESS1-0_rcp85_r1i1p1_2006-2100.nc")
print(prcptot_1)
# Time axis
prcptot_time <- nc.get.time.series(prcptot_1, v="prcptotETCCDI",
                                   time.dim.name = "time")
head(prcptot_time)
tail(prcptot_time)
# Get the time values and units
time <- ncvar_get(prcptot_1, "time")
time <- as.vector(time)
tunits <- ncatt_get(prcptot_1,"time","units")
nt <- dim(time)
# prcptot values
prcptot <- ncvar_get(prcptot_1, "prcptotETCCDI")
head(prcptot)
tail(prcptot)
# Process each model the same way: read the NetCDF as a RasterBrick, convert
# longitude from 0-360 to -180-180, optionally subset the layers to 2006-2100
# (95 annual layers), and resample onto the GRID. A spatial crop to the study
# area (crop(b, brasil)) was tried but is left disabled.
# Rasterize the GRID once; it is the common resampling target
r <- raster(ncol=18, nrow=16)
extent(r) <- extent(grid)
rp <- rasterize(grid, r)
process_model <- function(file, slice = NULL) {
  b <- rotate(brick(file))
  if (!is.null(slice)) b <- subset(b, slice)
  resample(b, rp, method = "bilinear")
}
# Model files and, where needed, the layer range covering 2006-2100
base <- "C:/Users/inpe-eba/SISMOI/prcptot/rcp85/prcptotETCCDI_yr_"
models <- list(
  list(f = "ACCESS1-0_rcp85_r1i1p1_2006-2100.nc",      s = NULL),
  list(f = "ACCESS1-3_rcp85_r1i1p1_2006-2100.nc",      s = NULL),
  list(f = "bcc-csm1-1_rcp85_r1i1p1_2006-2300.nc",     s = 1:95),
  list(f = "bcc-csm1-1-m_rcp85_r1i1p1_2006-2099.nc",   s = NULL),
  list(f = "BNU-ESM_rcp85_r1i1p1_2006-2100.nc",        s = NULL),
  list(f = "CanESM2_rcp85_r1i1p1_2006-2100.nc",        s = NULL),
  list(f = "CCSM4_rcp85_r1i1p1_2006-2300.nc",          s = 1:95),
  list(f = "CMCC-CESM_rcp85_r1i1p1_2000-2100.nc",      s = 7:101),
  list(f = "CMCC-CM_rcp85_r1i1p1_2006-2100.nc",        s = NULL),
  list(f = "CMCC-CMS_rcp85_r1i1p1_2006-2100.nc",       s = NULL),
  list(f = "CNRM-CM5_rcp85_r1i1p1_2006-2100.nc",       s = 1:95),
  list(f = "CSIRO-Mk3-6-0_rcp85_r1i1p1_2006-2300.nc",  s = 1:95),
  list(f = "FGOALS-s2_rcp85_r1i1p1_2006-2100.nc",      s = 1:95),
  list(f = "GFDL-CM3_rcp85_r1i1p1_2006-2100.nc",       s = NULL),
  list(f = "HadGEM2-CC_rcp85_r1i1p1_2005-2100.nc",     s = 2:96),
  list(f = "HadGEM2-ES_rcp85_r1i1p1_2005-2299.nc",     s = 2:96),
  list(f = "inmcm4_rcp85_r1i1p1_2006-2100.nc",         s = NULL),
  list(f = "IPSL-CM5A-LR_rcp85_r1i1p1_2006-2300.nc",   s = 1:95),
  list(f = "IPSL-CM5A-MR_rcp85_r1i1p1_2006-2100.nc",   s = NULL),
  list(f = "IPSL-CM5B-LR_rcp85_r1i1p1_2006-2100.nc",   s = NULL),
  list(f = "MIROC5_rcp85_r1i1p1_2006-2100.nc",         s = NULL),
  list(f = "MIROC-ESM_rcp85_r1i1p1_2006-2100.nc",      s = NULL),
  list(f = "MIROC-ESM-CHEM_rcp85_r1i1p1_2006-2100.nc", s = NULL),
  list(f = "MPI-ESM-LR_rcp85_r1i1p1_2006-2300.nc",     s = 1:95),
  list(f = "MPI-ESM-MR_rcp85_r1i1p1_2006-2100.nc",     s = NULL),
  list(f = "MRI-CGCM3_rcp85_r1i1p1_2006-2100.nc",      s = NULL),
  list(f = "NorESM1-M_rcp85_r1i1p1_2006-2100.nc",      s = NULL)
)
adjusted <- lapply(models, function(m) process_model(paste0(base, m$f), m$s))
# bcc-csm1-1-m (model 4) is processed but left out of the stack
prcptot_rcp85 = stack(adjusted[-4])
# compute the per-year mean prcptot across the 26 stacked models
rMean <- calc( prcptot_rcp85 , fun = function(x){ by(x , c( rep(seq(1:95), times = 26)) , mean, na.rm=TRUE ) } )
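The grouping trick inside `calc()` above — `by()` over a repeated year index — can be checked on a tiny vector; with two "models" of three "years" each, laid out model-major like the stack, the group means come out one per year:

```r
# Minimal illustration of the by() grouping used in calc() above
x <- c(1, 2, 3, 11, 12, 13)                   # model 1 years 1-3, then model 2 years 1-3
as.numeric(by(x, rep(1:3, times = 2), mean))  # -> 6 7 8
```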
# convert the result to a data frame
prcptot_rcp85_df = as.data.frame(rMean, xy=TRUE)
# name the data frame columns
dates <- seq(as.Date("2006/1/1"), by = "year", length.out = 95)
head(dates)
tail(dates)
names(prcptot_rcp85_df) <- c("lon","lat", paste(dates,as.character(), sep="_"))
# export the data frame as a CSV table
write.csv(prcptot_rcp85_df, file = "prcptot_rcp85_mean.csv")
---- File: /R/plotting/visualize_state_level_fake_post_likes/user_state_plot_with_user_state_scatter_plot.R (repo: Chunhsiang/NTU-Facebook-Project) ----
library(data.table)
library(dplyr)
library(plyr)
library(ggplot2)
library(fiftystater)
library(Cairo)
#library(extrafont)
library(maps)
library(ggthemr)
library(readr)  # for read_csv() used below
ggthemr(palette="pale", layout="clear", spacing=0.6) # load ggthemr to use
base_theme = theme(
title = element_text(size=13, face="plain"),
axis.title = element_text(size=12, face="plain"),
axis.text.y = element_text(size=10, face="plain"),
axis.text.x = element_text(size=10, face="plain"),
legend.text = element_text(size=10, face="plain"),
legend.title = element_text(size=10, face="plain"),
axis.title.y = element_text(margin=margin(r=2), face="plain"),
axis.title.x = element_text(margin=margin(t=2), face="plain"),
plot.caption = element_text(size=11, margin=margin(t=4), hjust = 0.5,
face="plain")
)
user_like_state <- fread("/home3/usfb/analysis/analysis-ideology-change/temp/user-state/us_user_like_state_max_unique.csv")
states <- user_like_state$like_state_max
state_total_users_df <- data.table(table(states)[state.name])
user_like_fakepage <- fread("/home3/usfb/analysis/analysis-ideology-change/temp/fake-news-user/all_user_like_fake_post_page_time.csv")
user_like_fakepage$state <- user_like_state$like_state_max[match(user_like_fakepage$user_id, user_like_state$user_id)]
overall = data.frame(tolower(state.name))
colnames(overall) <- "state"
row.names(overall) <- state.name
overall$state_times <- as.numeric(table(user_like_fakepage$state)[state.name])
overall$share_user_like_fake_posts <- overall$state_times/(state_total_users_df$N[match(row.names(overall), state_total_users_df$states)])
overall$share_user_like_fake_posts <- round(overall$share_user_like_fake_posts, digits = 4)
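# Self-contained toy check of the share computation above (the state names
# and counts here are made up for illustration; the pipeline itself uses the
# real data frames above):
toy_users <- data.frame(user_id = 1:4,
                        like_state_max = c("Ohio", "Ohio", "Texas", "Ohio"))
toy_fake <- data.frame(user_id = c(1, 3))
toy_fake$state <- toy_users$like_state_max[match(toy_fake$user_id,
                                                 toy_users$user_id)]
# Ohio: 1 of 3 users liked a fake post (~0.3333); Texas: 1 of 1 (1)
as.numeric(table(toy_fake$state)) / as.numeric(table(toy_users$like_state_max))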
# Plot share of users who liked fake-news posts
p_map <- ggplot(overall, aes(map_id = state)) +
# map points to the fifty_states shape data
geom_map(aes(fill = share_user_like_fake_posts), map = fifty_states) +
  ggtitle("Share of Users Who Liked Fake-News Posts") +
expand_limits(x = fifty_states$long, y = fifty_states$lat) +
coord_map() +
scale_x_continuous(breaks = NULL) +
scale_y_continuous(breaks = NULL) +
labs(x = "", y = "") +
scale_fill_gradient(low = "blue", high = "red", name = "") +
theme(plot.title = element_text(hjust=0.5), legend.position = "bottom",
panel.background = element_blank()) +
base_theme
setwd("/home3/usfb/analysis/analysis-ideology-change/output/chunhsiang")
cairo_pdf("share_of_user_like_fake_post.pdf",
width=6, height=4.5, family="Source Sans Pro")
print(p_map)
dev.off()
#Plot scatter plot
state_path = "~/usfbdata/state_level/"
election = read_csv(paste0(state_path, "presidential_general_election_2016.csv"))
election = election[election$name=="H. Clinton", ]
overall$state_m = rownames(overall)
#median = read_csv(paste0(state_path, "state_median_ideology_from_1000_page_20161001_to_20161107.csv"))
# median = read_csv(paste0(state_path, "state_share_from_1000_page_20161001_to_20161107_state_20161001_to_20161107_exclude_gov.csv"))
state = read_csv(paste0(state_path, "state-level-variables.csv"))
abbre = read_csv(paste0(state_path, "state_abbreviation.csv"))
colnames(abbre) = c("state", "state_abbre")
merge = merge(overall, election, by.x= "state_m", by.y="state")
merge = merge(merge, state, by.x="state_m", by.y="state")
merge = merge(merge, abbre, by.x="state_m", by.y="state")
merge$state_type = ifelse(merge$is_winner == "True", "Dem", "Rep")
swing_states = c("FL", "OH", "WI", "MI", "PA", "IA")
merge[merge$state_abbre %in% swing_states, ]$state_type = "Swing"
merge$state_type = as.factor(merge$state_type)
merge$state_type = factor(merge$state_type,
levels=c("Rep", "Swing", "Dem"))
merge$vote_pct = 1 - merge$vote_pct
library(ggrepel)
library(RColorBrewer)
red = brewer.pal(4, "Set1")[1]
blue = brewer.pal(4, "Set1")[2]
purple = brewer.pal(4, "Set1")[4]
## Scatter plot: share of users who liked fake posts vs. 2016 Trump vote share
fake_post_VS_trump_share_scatter_plot = function(merge , title){
p = ggplot(merge, aes(share_user_like_fake_posts, vote_pct , label=state_abbre))
p_scatter = p +
stat_smooth(
method="lm",
colour="gray50",
se=TRUE,
fill="grey") +
geom_point(aes(shape=state_type, color=state_type, fill=state_type)) +
scale_color_manual(
labels=c("Rep wins 2016 & 2012", "Swings from Obama to Trump", "Dem wins 2016 & 2012"),
values=c(red, purple, blue)) +
scale_fill_manual(
labels=c("Rep wins 2016 & 2012", "Swings from Obama to Trump", "Dem wins 2016 & 2012"),
values=c(red, purple, blue)) +
scale_shape_manual(
labels=c("Rep wins 2016 & 2012", "Swings from Obama to Trump", "Dem wins 2016 & 2012"),
values=c(23, 21, 22)) +
geom_text_repel(point.padding = unit(0.15, "lines"),
box.padding = unit(0.15, "lines"),
nudge_y = 0.1,
size = 3
) +
theme_classic(base_size = 16) +
    scale_x_continuous("Share of Facebook users who liked fake-news posts") +
scale_y_continuous("2016 Trump Vote Share") +
# labs(caption = "Using 2016-10-28 to 2016-11-07 likes to guess user's state") +
theme(legend.position=c(0.21, 0.95), legend.title=element_blank()) +
base_theme
setwd("/home3/usfb/analysis/analysis-ideology-change/output/chunhsiang")
  cairo_pdf(paste0(title, ".pdf"), # append the extension so outputs open as PDFs
            width=6, height=4.5, family="Source Sans Pro")
print(p_scatter)
dev.off()
return(p_scatter)
}
p_scatter = fake_post_VS_trump_share_scatter_plot(merge,
"fake_post_vs_trump_share_scatter_plot")
#p_scatter
outlier_states = c("GA", "ID", "TX", "AZ")
merge_adjusted = merge[!(merge$state_abbre %in% outlier_states),]
p_scatter_adjusted = fake_post_VS_trump_share_scatter_plot(merge_adjusted,
"fake_post_vs_trump_share_scatter_plot_adjusted")
#p_scatter_adjusted
|
0e36d81bdcec49023a281f286d2dcdaf1cbd295d | b9f284784a43cab06b0b06ce34a2e8123b3f04a0 | /2_scripts_sun_shade_sun/nlmeLightResponses.R | fc5fd0ec64b74548686fdfb1396f3d33a935008b | [
"CC0-1.0"
] | permissive | smuel-tylor/Fast-Deactivation-of-Rubisco | 78d02e927d9d5f15c4f8e8d456750ad2665bb0cd | fe73b746fa3950a77d990a2cc795b245685e5e01 | refs/heads/main | 2023-09-03T20:47:29.662493 | 2021-11-10T15:17:05 | 2021-11-10T15:17:05 | 321,707,002 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,585 | r | nlmeLightResponses.R | #Non-linear-mixed-effects models to produce coefficients for the diurnal model
# - light response curves
# - using the same data/factors etc. as summarize.R
#updated 0121 to use R 4.x and here()
#clear workspace
#rm(list = ls())
library(here)
library(lattice)
library(nlme)
load(here("output/summarize.Rdata"))
objects()
#check factors in AQ
AQ$geno
AQ$plant
#To get the plots etc. in a nice order
AQ <- AQ[order(AQ$plant), ]
################################################################################
#Analysis
#maximal, i.e. fully parameterised model
AQ.nlsList <- nlsList(
A ~ (
(
phi * Qin + Asat -
      sqrt((phi * Qin + Asat)^2 - (4 * theta * phi * Qin * Asat))
) /
(2 * theta)
) - Rd | plant,
start = c(phi = 0.05, Asat = 30, theta = 0.75, Rd = 1.9),
#Below is a quick fix for issues with some of the other columns
data = AQ[, c("Qin", "A", "plant")],
na.action = na.omit
)
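# For clarity, the non-rectangular hyperbola fitted above written as a
# standalone helper (the name nrh and the default parameter values are
# illustrative: they are the starting values, not fitted estimates):
nrh <- function(Qin, phi = 0.05, Asat = 30, theta = 0.75, Rd = 1.9) {
  ((phi * Qin + Asat -
      sqrt((phi * Qin + Asat)^2 - 4 * theta * phi * Qin * Asat)) /
     (2 * theta)) - Rd
}
nrh(0)    # at zero light, A = -Rd (dark respiration)
nrh(2000) # approaches Asat - Rd at saturating light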
summary(AQ.nlsList)
plot(AQ.nlsList, plant ~ resid(.), abline = 0)
plot(intervals(AQ.nlsList))
#nothing looks very genotypey... maybe Rd?
#What is the improvement due to considering individual leaves?
#Model that ignores this:
AQ.nls <- nls(
A ~ (
(
phi * Qin + Asat -
      sqrt((phi * Qin + Asat)^2 - (4 * theta * phi * Qin * Asat))
) / (2 * theta)
) - Rd,
start = c(phi = 0.05, Asat = 30, theta = 0.75, Rd = 1.9),
data = AQ,
na.action = na.exclude
)
summary(AQ.nls)
#residual is much larger than for the fully parameterised model
plot(AQ.nls, plant ~ resid(.), abline = 0)
#and there is a lot of mis-fitting
AQ.nlme <- nlme(AQ.nlsList)
AQ.nlme
#Unsurprisingly, based on the nlsList object,
# random effects are reasonably independent,
# but I need to include fixed effects of genotype
AQ.nlme2 <- update(
AQ.nlme,
fixed = phi + Asat + theta + Rd ~ geno,
start = c(fixef(AQ.nlme)[1], 0, 0, 0,
fixef(AQ.nlme)[2], 0, 0, 0,
fixef(AQ.nlme)[3], 0, 0, 0,
fixef(AQ.nlme)[4], 0, 0, 0
),
data = AQ #needed so that 'geno' is found
)
anova(AQ.nlme, AQ.nlme2)
#improves the model
anova(AQ.nlme2)
#all the fixef seem to be important, except maybe theta...
AQ.nlme2
#phi and theta could each be dropped as ranef
AQ.nlme3 <- update(
AQ.nlme,
fixed = list(phi + Asat + Rd ~ geno, theta ~ 1),
start = c(fixef(AQ.nlme)[1], 0, 0, 0,
fixef(AQ.nlme)[2], 0, 0, 0,
fixef(AQ.nlme)[4], 0, 0, 0,
fixef(AQ.nlme)[3]
),
data = AQ #needed so that 'geno' is found
)
anova(AQ.nlme3, AQ.nlme2)
#I'm still not convinced it can be dropped...
AQ.nlme4 <- update(AQ.nlme2,
random = Asat + theta + Rd ~ 1
)
AQ.nlme5 <- update(AQ.nlme2,
random = phi + Asat + Rd ~ 1
)
anova(AQ.nlme2,AQ.nlme4,AQ.nlme5)
#dropping these is not useful...
#So, the model requires both fixed and random effects for all parameters
plot(AQ.nlme2)
#not ideal wrt the initial slope;
#perhaps the response is not as linear as the non-rectangular hyperbola requires
intervals(AQ.nlme2)
#this plot failed
#plot(augPred(AQ.nlme2, primary = ~Qin, level = 0:1))
################################################################################
#produce a plant-by-plant plot matching Vcmax and ActivationState
#0121 a few changes made here in terms of specifying factors
# to make sure that objects were all consistently ordered
q.smth <- c(0:2000)
p.smth <- as.character(unique(AQ[ , "plant"]))
g.smth <- sapply(strsplit(p.smth, "_"), function(.){ .[1] })
nd.AQ <- data.frame(
Qin = rep(q.smth, length(p.smth)),
geno = factor(
rep(g.smth, each = length(q.smth)),
levels = levels(AQ$geno)
),
plant = factor(
rep(p.smth, each = length(q.smth)),
levels = levels(AQ$plant)
)
)
pred.AQ.nlme2 <- data.frame(
A.geno = predict(AQ.nlme2, newdata = nd.AQ, level = 0),
A.plant = predict(AQ.nlme2, newdata = nd.AQ, level = 1),
plant = factor(nd.AQ$plant, levels = levels(nd.AQ$plant)),
geno = factor(nd.AQ$geno, levels = levels(nd.AQ$geno)),
Qin = nd.AQ$Qin
)
fhds <- c("plant", "Qin", "A")
facs.AQ.nlme2 <- AQ[, fhds]
pred.AQ.nlme2 <- merge(pred.AQ.nlme2, facs.AQ.nlme2, all.x = TRUE)
get_rep_list <- function(geno, p.all){
g.all <- p.all[p.all$geno == geno, ]
levels(g.all$plant) <- replace(levels(g.all$plant),
!levels(g.all$plant) %in% unique(g.all$plant),
NA
)
by(g.all, g.all$plant, identity)
}
plot_rep <- function(p.rep){
ordp <- p.rep[order(p.rep$Qin), ]
plot(1, 1,
xlim = c(0, 2000),
ylim = c(-3, 43),
xlab = expression("incident PPFD " * (mu*mol~~m^-2~~s^-1)),
ylab = expression(italic(A)~~(mu*mol~~m^-2~~s^-1)),
main = ordp$plant[1],
type = "n",
axes = FALSE)
axis(side = 1, at = seq(0, 2000, 500), las = 1)
axis(side = 2, at = seq(0, 40, 10), las = 1)
points(ordp$A ~ ordp$Qin)
lines(ordp$A.geno ~ ordp$Qin, lty = 1)
lines(ordp$A.plant ~ ordp$Qin, lty = 2)
box()
}
allrep_plot <- function(geno, p.all){
p.reps <- get_rep_list(geno, p.all)
par(mfrow = c(3, 2))
lapply(p.reps, plot_rep)
}
pdf(here("output/nlmeLightResponses_nlme2.pdf"),
w = 6, h = 9
)
par(
mar = c(4.5, 5.5, 3, 1),
tcl = 0.4,
oma = c(0, 0, 0, 0),
cex = 1.2,
cex.lab = 1.2,
cex.axis = 1.2
)
glist <- levels(pred.AQ.nlme2$geno)
lapply(glist, allrep_plot, p.all = pred.AQ.nlme2)
dev.off()
################################################################################
#fixed-effects CI half-widths (estimate minus lower bound), one per coefficient
cis <- apply(intervals(AQ.nlme2)$fixed[,c(1,2)], 1, diff)
fixphi <- c(
fixef(AQ.nlme2)[1],
fixef(AQ.nlme2)[1] + fixef(AQ.nlme2)[c(2:4)]
)
cbind(fixphi, fixphi + cis[c(1:4)] %*% cbind(-1, 1))
fixAsat <- c(
fixef(AQ.nlme2)[5],
fixef(AQ.nlme2)[5] + fixef(AQ.nlme2)[c(6:8)]
)
cbind(fixAsat, fixAsat + cis[c(5:8)] %*% cbind(-1, 1))
fixtheta <- c(
fixef(AQ.nlme2)[9],
fixef(AQ.nlme2)[9] + fixef(AQ.nlme2)[c(10:12)]
)
cbind(fixtheta, fixtheta + cis[c(9:12)] %*% cbind(-1, 1))
fixRd <- c(
fixef(AQ.nlme2)[13],
fixef(AQ.nlme2)[13] + fixef(AQ.nlme2)[c(14:16)]
)
cbind(fixRd, fixRd + cis[c(13:16)] %*% cbind(-1, 1))
AQ.fixed <- rbind(
cbind(fixphi, fixphi + cis[c(1:4)] %*% cbind(-1, 1)),
cbind(fixAsat, fixAsat + cis[c(5:8)] %*% cbind(-1, 1)),
cbind(fixtheta, fixtheta + cis[c(9:12)] %*% cbind(-1, 1)),
cbind(fixRd, fixRd + cis[c(13:16)] %*% cbind(-1, 1))
)
AQ.fixed <- as.data.frame(AQ.fixed)
names(AQ.fixed) <- c("Est", "lower", "upper")
AQ.fixed
save(AQ.fixed,
AQ.nlme2,
AQ,
file = here("output/nlmeLightResponses.Rdata")
)
|
26421ca64d63abf4b566dfee9233ae16be3c9a6e | 71c0281b7dc10b47d32923d95d0d4e2b119999fd | /Scripts/modeling.R | 11641e401a91b157a283d9b427749b209e23bed1 | [] | no_license | vdhyani96/education-loan-repayment | 4b1aa69377cc307ed9447e696cc1fcba8ed13bfc | bd322ce5dcf7cd35f3a6a63709a2dd0f846790f9 | refs/heads/master | 2020-08-04T19:52:50.471895 | 2019-10-17T00:28:48 | 2019-10-17T00:28:48 | 212,260,362 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,512 | r | modeling.R | # install.packages("caret")
library(caret)
library(dplyr)
setwd("C:/Users/admin/Desktop/R/Coursera - Predicting Student Loan Repayment/Dataset")
schoolData <- read.csv("ImpT&TCombined.csv")
# backup -- will use in the end
backUp <- schoolData$INSTNM[is.na(schoolData$COMPL_RPY_3YR_RT)]
# Let's remove the INSTNM feature right here
schoolData <- schoolData[, c(2:15)]
str(schoolData)
summary(schoolData)
# splitting into train and test set
train <- schoolData %>% filter(!is.na(COMPL_RPY_3YR_RT))
test <- schoolData %>% filter(is.na(COMPL_RPY_3YR_RT))
# Now creating data partitions for training set and validation set
# We make a 75-25 split.
set.seed(162)
forTraining <- createDataPartition(train$COMPL_RPY_3YR_RT, p = 0.75, list = FALSE)
training <- train[forTraining, ]
validation <- train[-forTraining, ]
# Now, build a model using caret with trainControl having Cross Validation
# cross-validation
control <- trainControl(method = "cv", number = 10) # can be "method = oob" too, for randomforest, cforest
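# As noted above, "oob" resampling only applies to bagged models such as
# random forests; an illustrative (commented-out, untested here) alternative
# would be:
# control_oob <- trainControl(method = "oob")
# rfFit <- train(COMPL_RPY_3YR_RT ~ ., data = training,
#                method = "rf", metric = "RMSE", trControl = control_oob)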
# now train a model -- maybe first I should exclude INSTNM from list of predictors
# First use Linear Model (Linear Regression)
# predList <- names(training[, c(2:15)])
modelFit <- train(COMPL_RPY_3YR_RT ~ ., data = training, method = "lm", metric = "RMSE", trControl = control)
# check the performance on the training set, how well my model fits the training set using cross-validation
print(modelFit)
# RMSE = 0.1032599
# Rsquared = 0.7418828
# now checking variable importance. Can be useful for feature selection
varImp(modelFit)
importance <- varImp(modelFit, scale = FALSE)
print(importance)
plot(importance, top = 22) # remove top to view all
# As we can see, many features were very important, while some were much less
# so, e.g., GRAD_DEBT_MDN and ATDCOST.
# On the other hand, CDR3 (Cohort Default Rates) has a critical impact on the predictions.
# Also Median Earnings (MD_EARN_WNE_P8) of the individuals after 8 years of enrollment
# is very important. Age entry, median household income, etc are some other important features.
# Let's make predictions for the validation set, which is our hold out set from the training set
valPredictions <- predict(modelFit, validation)
head(valPredictions)
summary(valPredictions)
# Apparently, some predicted values go higher than 1. Let's bring all such values down to 1.
# BTW, I don't have to do this because this is what my model has predicted, and this should be reflected in the RMSE score
# as well!
valPredictions <- pmin(valPredictions, 1) # vectorized clamp at 1
summary(valPredictions)
# since repayment rate is a fraction, it should be atmost 1.
# Now compute RMSE on the hold-out from the training set using a custom
# function (clipped from the web). BTW, caret has a built-in RMSE() function
# that can be used instead. Just for the record, there is also an rmse()
# function in the "Metrics" package, created by Ben Hamner, co-founder of Kaggle.
RMSE(validation$COMPL_RPY_3YR_RT, valPredictions)
# RMSE = 0.09805046; way below the threshold!!!!
# after some refinement, RMSE = 0.09797155
# Let's verify the above using the custom function
RMSE_Cust <- function(m, o) {
sqrt(mean((m - o)^2)) # m = model fitted values; o = observed values
}
RMSE_Cust(valPredictions, validation$COMPL_RPY_3YR_RT)
# again 0.09805046
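# Quick toy check of RMSE_Cust (numbers are illustrative, not from the data):
# errors of +1 and -1 give sqrt(mean(c(1, 1))) = 1
RMSE_Cust(c(2, 0), c(1, 1))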
# finally, we make predictions on our test set. We don't have any way to verify that though.
Predictions <- predict(modelFit, test)
# finally, write predictions into a CSV file
predictionsDf <- data.frame(INSTNM = backUp, COMPL_RPY_3YR_RT = Predictions)
# I would correct the predictions for this too
summary(predictionsDf$COMPL_RPY_3YR_RT)
predictionsDf$COMPL_RPY_3YR_RT[predictionsDf$COMPL_RPY_3YR_RT > 1] <- 1
summary(predictionsDf$COMPL_RPY_3YR_RT)
# include the INSTNM backup into the test CSV too
test$INSTNM <- backUp
test <- test[, c(15, 1:14)]
write.csv(test, file = "test.csv", row.names = FALSE)
write.csv(predictionsDf, file = "TestPredictions.csv", row.names = FALSE)
# COMPLETED >>> 10/10/2017 3.52pm (at office) ||| Total 3 R scripts
# UPDATE... COMPLETED >>> Same day at 11.39pm (at room) (needed to correct the predictions going > 1)
# COURSERA >> Challenge: Predict Students' Ability to Repay Educational Loans
# Accuracy achieved very well
# This challenge was provided by Michael Boerrigter and presented by Claire Smith
# Started by me on 25/09/2017
|